CURRENTLY SOLD OUT

Synthesis Lectures on Computer Architecture Ser.: Deep Learning for Computer Architects by Paul Whatmough, Brandon Reagen, Robert Adolf, David Brooks and Gu-Yeon Wei (2017, Trade Paperback)

About this product

Product Identifiers

Publisher: Morgan & Claypool Publishers
ISBN-10: 1627057285
ISBN-13: 9781627057288
eBay Product ID (ePID): 240299249

Product Key Features

Number of Pages: 123
Language: English
Publication Name: Deep Learning for Computer Architects
Publication Year: 2017
Subject: Systems Architecture / General, Intelligence (AI) & Semantics, Neural Networks
Type: Textbook
Subject Area: Computers
Author: Paul Whatmough, Brandon Reagen, Robert Adolf, David Brooks, Gu-Yeon Wei
Series: Synthesis Lectures on Computer Architecture Ser.
Format: Trade Paperback

Dimensions

Item Height: 0.3 in
Item Weight: 8 oz
Item Length: 9.2 in
Item Width: 7.5 in

Additional Product Features

Intended Audience: Trade
Illustrated: Yes
Table of Contents: Preface; Introduction; Foundations of Deep Learning; Methods and Models; Neural Network Accelerator Optimization: A Case Study; A Literature Survey and Review; Conclusion; Bibliography; Authors' Biographies
Synopsis: This is a primer written for computer architects in the new and rapidly evolving field of deep learning. It reviews how machine learning has evolved since its inception in the 1960s and tracks the key developments leading up to the emergence of the powerful deep learning techniques of the last decade. Machine learning, and specifically deep learning, has been hugely disruptive in many fields of computer science. The success of deep learning techniques in solving notoriously difficult classification and regression problems has led to their rapid adoption for real-world applications. The emergence of deep learning is widely attributed to a virtuous cycle whereby fundamental advances in training deeper models were enabled by the availability of massive datasets and high-performance computer hardware. The book also reviews representative workloads, including the most commonly used datasets and seminal networks across a variety of domains. In addition to discussing the workloads themselves, it details the most popular deep learning tools and shows how aspiring practitioners can use those tools with the workloads to characterize and optimize DNNs. The remainder of the book is dedicated to the design and optimization of hardware and architectures for machine learning. Because high-performance hardware was so instrumental in making machine learning a practical solution, the book recounts a variety of recently proposed optimizations to further improve future designs. Finally, it presents a review of recent research published in the area, along with a taxonomy to help readers understand how the various contributions fit in context.