Random Matrix Theory Models of Deep Learning (v0.1)

Neural networks:
- A neural network connects many individual neurons; the output of one neuron serves as the input of another.
- A multilayer network can be understood as a "nesting" (composition) of several nonlinear functions.
- Layers can be stacked without limit, giving essentially unbounded modeling capacity: in principle such networks can approximate any continuous function.

Common activation functions (illustrated in the sketch below):
- Sigmoid
- Tanh
- Rectified linear units (ReLU)
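The deck only names these activations, so here is a minimal NumPy sketch of both points above: the three activations as pointwise functions, and a forward pass that composes affine-plus-nonlinearity layers. The layer widths, random weights, and helper names (`forward`, `dims`) are illustrative assumptions, not taken from the deck.

```python
import numpy as np

# The three activations named on the slide.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    return np.tanh(z)

def relu(z):
    return np.maximum(0.0, z)

def forward(x, weights, biases, activation=relu):
    """'Nested' nonlinear functions: h -> g(W h + b), applied layer by layer."""
    h = x
    for W, b in zip(weights, biases):
        h = activation(W @ h + b)
    return h

# Illustrative shapes: input dim 4, two hidden layers of width 8, output dim 2.
rng = np.random.default_rng(0)
dims = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(dims[:-1], dims[1:])]
biases = [np.zeros(m) for m in dims[1:]]

print(forward(rng.standard_normal(4), weights, biases))
```

Swapping in `activation=sigmoid` or `activation=tanh` changes only the pointwise nonlinearity; the composition structure is identical.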
Network depth has increased year by year.

Why deep learning works now:
- Features are learned rather than hand-crafted.
- More layers capture more invariances.
- More data to train deeper networks.
- More computing (GPUs).
- Better regularization: Dropout.
- New nonlinearities: max pooling, rectified linear units (ReLU).
- Yet theoretical understanding of deep networks remains shallow.

What experimental neuroscience uncovered:
- The neural architecture of the Retina/LGN/V1/V2/V3/etc.
- Neurons with weights and activation functions (simple cells).
- Pooling neurons (complex cells).
- All of these features are somehow present in deep learning systems.

Olshausen and Field demonstrated that receptive fields can be learned from image patches: an optimization process alone can drive the learning of image representations.

Harmonic analysis:
- Olshausen-Field representations bear a strong resemblance to well-defined mathematical objects from harmonic analysis: wavelets, ridgelets, curvelets.
- Harmonic analysis has a long history of deriving optimal representations via optimization.
- Research in the 1990s: wavelets and related transforms are optimal sparsifying transforms for certain classes of images.

Curse of dimensionality:
- A class-prediction rule can be viewed as a function f(x) of a high-dimensional argument.
- The curse of dimensionality is the traditional theoretical obstacle to high-dimensional approximation: functions of a high-dimensional x can wiggle in too many dimensions to be learned from finite datasets.

Theory of neural networks:
- Approximation theory: perceptrons and multilayer feedforward networks are universal approximators (Cybenko '89, Hornik '89, Hornik '91, Barron '93); a toy illustration follows below.
- Optimization theory: no spurious local optima for linear networks (Baldi & Hornik '89); stuck in local minima (Brady '89); stuck in local minima, but convergence guarantees for linearly separable data (Gori & Tesi '92); manifold of spurious local optima (Frasconi '97).
- Invariance, stability, and learning theory: scattering networks (Bruna '11, Bruna '13, Mallat '13); De
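As a toy illustration of the universal-approximation results cited above (Cybenko '89, Hornik '89), the sketch below fits a one-hidden-layer tanh network to a smooth 1-D target. To keep it short, the hidden weights are random and only the output layer is fit by least squares; this random-features shortcut is an assumption of the sketch, not a training method from the deck.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 200

# Random hidden-layer parameters w_k, b_k (kept fixed).
w = rng.normal(scale=4.0, size=n_hidden)
b = rng.uniform(-4.0, 4.0, size=n_hidden)

def hidden(x):
    # One hidden layer of tanh units: g(w_k * x + b_k) for each sample x.
    return np.tanh(np.outer(x, w) + b)

# Smooth target to approximate on [-pi, pi].
x_train = np.linspace(-np.pi, np.pi, 400)
y_train = np.sin(2 * x_train)

# Fit only the output weights c by least squares:
# f(x) = sum_k c_k * tanh(w_k * x + b_k).
c, *_ = np.linalg.lstsq(hidden(x_train), y_train, rcond=None)

x_test = np.linspace(-np.pi, np.pi, 7)
print(np.max(np.abs(hidden(x_test) @ c - np.sin(2 * x_test))))  # small error
```

Increasing `n_hidden` tends to drive the approximation error down, which is the practical content of the universal-approximation theorems.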