Customizable Computing
Paperback: $36.40 (list price $40.00; save 9%)

Overview

Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been a primary concern of both the research community and industry. The large energy-efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, in which the architecture can be adapted to the workload. In this Synthesis Lecture, we present an overview of and introduction to recent developments in energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory customization, and interconnect optimization. In addition to discussing the general techniques and classifying the different approaches used in each area, we highlight some of the most successful design examples in each category and discuss their impact on performance and energy efficiency. We hope that this work captures the state of the art in research and development on customizable architectures and serves as a useful reference for further research, design, and implementation toward large-scale deployment in future computing systems.

Product Details

ISBN-13: 9781627057677
Publisher: Morgan & Claypool Publishers
Publication date: 06/01/2015
Pages: 120
Product dimensions: 7.50(w) x 9.25(h) x 0.25(d)

About the Author

Yu-Ting Chen is a Ph.D. candidate in the Computer Science Department at the University of California, Los Angeles. He received a B.S. degree in computer science and a B.A. degree in economics in 2005, and an M.S. degree in computer science in 2007, all from National Tsing Hua University, Hsinchu, Taiwan, R.O.C. He worked at TSMC as a summer intern in 2005 and at Intel Labs as a summer intern in 2013. His research interests include computer architecture, cluster computing, and bioinformatics in DNA sequencing technologies.

Jason Cong received his B.S. degree in computer science from Peking University in 1985, and his M.S. and Ph.D. degrees in computer science from the University of Illinois at Urbana–Champaign in 1987 and 1990, respectively. Currently, he is a Chancellor’s Professor in the Computer Science Department, with a joint appointment in the Electrical Engineering Department, at the University of California, Los Angeles. He is the director of the Center for Domain-Specific Computing (CDSC), co-director of the UCLA/Peking University Joint Research Institute in Science and Engineering, and director of the VLSI Architecture, Synthesis, and Technology (VAST) Laboratory. He also served as the chair of the UCLA Computer Science Department from 2005 to 2008. Dr. Cong’s research interests include synthesis of VLSI circuits and systems, programmable systems, novel computer architectures, nano-systems, and highly scalable algorithms. He has over 400 publications in these areas, including 10 best paper awards, two 10-Year Most Influential Paper Awards (from ICCAD’14 and ASPDAC’15), and the 2011 ACM/IEEE A. Richard Newton Technical Impact Award in Electronic Design Automation. He was elected an IEEE Fellow in 2000 and an ACM Fellow in 2008. He is the recipient of the 2010 IEEE Circuits and Systems (CAS) Society Technical Achievement Award “for seminal contributions to electronic design automation, especially in FPGA synthesis, VLSI interconnect optimization, and physical design automation.”

Michael Gill received a B.S. degree in computer science from California Polytechnic University, Pomona, and M.S. and Ph.D. degrees in computer science from the University of California, Los Angeles. His research focuses primarily on high-performance architectures and the interaction between these architectures and compilers, runtime systems, and operating systems.

Glenn Reinman received his B.S. in computer science and engineering from the Massachusetts Institute of Technology in 1996. He earned his M.S. and Ph.D. in computer science from the University of California, San Diego, in 1999 and 2001, respectively. He is currently a professor in the Computer Science Department at the University of California, Los Angeles.

Bingjun Xiao received a B.S. degree in microelectronics from Peking University, Beijing, China, in 2010. He received an M.S. degree and a Ph.D. degree in electrical engineering from UCLA in 2012 and 2015, respectively. His research interests include machine learning, cluster computing, and data flow optimization.

Table of Contents

Acknowledgments / Introduction / Road Map / Customization of Cores / Loosely Coupled Compute Engines / On-Chip Memory Customization / Interconnect Customization / Concluding Remarks / Bibliography / Authors' Biographies