CURRENTLY SOLD OUT

SpringerBriefs in Computer Science Ser.: Deep Neural Networks in a Mathematical Framework by Dong Eui Chang and Anthony L. Caterini (2018, Trade Paperback)

About this product

Product Identifiers

Publisher: Springer International Publishing AG
ISBN-10: 3319753037
ISBN-13: 9783319753034
eBay Product ID (ePID): 242610503

Product Key Features

Number of Pages: xiii, 84 pages
Language: English
Publication Name: Deep Neural Networks in a Mathematical Framework
Subject: Intelligence (AI) & Semantics, Neural Networks, Computer Vision & Pattern Recognition
Publication Year: 2018
Type: Textbook
Subject Area: Computers
Author: Dong Eui Chang, Anthony L. Caterini
Series: SpringerBriefs in Computer Science Ser.
Format: Trade Paperback

Dimensions

Item Weight: 16 oz
Item Length: 9.3 in
Item Width: 6.1 in

Additional Product Features

Number of Volumes: 1 vol.
Illustrated: Yes
Synopsis: This SpringerBrief describes how to build a rigorous end-to-end mathematical framework for deep neural networks. The authors provide tools to represent and describe neural networks, casting previous results in the field in a more natural light. In particular, the authors derive gradient descent algorithms in a unified way for several neural network structures, including multilayer perceptrons, convolutional neural networks, deep autoencoders, and recurrent neural networks. Furthermore, the authors' framework is both more concise and mathematically intuitive than previous representations of neural networks. This SpringerBrief is one step towards unlocking the black box of deep learning. The authors believe that this framework will help catalyze further discoveries regarding the mathematical properties of neural networks. This SpringerBrief is accessible not only to researchers, professionals, and students working and studying in the field of deep learning, but also to those outside of the neural network community.
LC Classification Number: Q334-342
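
The synopsis above mentions deriving gradient descent in a unified way across network architectures. As a flavor of what that means in the simplest case, here is a minimal sketch (not the book's framework, and using made-up toy data) of gradient descent for a one-hidden-layer perceptron, where the backward pass is just the chain rule applied layer by layer:

```python
# Illustrative sketch only: gradient descent for a 1-16-1 tanh MLP on toy
# data (y = sin(x)); the architecture, learning rate, and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

# Parameters of the two affine layers.
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass: affine map, nonlinearity, affine map.
    h = np.tanh(X @ W1 + b1)      # hidden activations, shape (200, 16)
    pred = h @ W2 + b2            # network output, shape (200, 1)
    err = pred - y                # residual for the mean-squared loss

    # Backward pass: chain rule, one layer at a time.
    grad_pred = 2 * err / len(X)          # dLoss/dpred
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_pre = grad_h * (1 - h**2)        # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = X.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final mean-squared error:", float((err**2).mean()))
```

The book's contribution, per the synopsis, is to cast this kind of layer-by-layer derivative calculation in a single mathematical formalism that covers multilayer perceptrons, convolutional networks, deep autoencoders, and recurrent networks alike, rather than deriving each case ad hoc as above.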