Operator Theory Seminar

October 8, 2019 - 1:30pm to 2:30pm
309 VAN

Professor Palle Jorgensen; Department of Mathematics; The University of Iowa

“Analysis on Neural Networks”

Abstract: With a view to neural networks, we study reduction schemes for functions of "many" variables into systems of functions of one variable. Our setting includes infinite dimensions. Following Cybenko-Kolmogorov, the outline of our results is as follows: We present explicit reduction schemes for multivariable problems, covering both a finite and an infinite number of variables. Starting with functions of "many" variables, we offer constructive reductions into superpositions whose component terms use only functions of one variable and specified choices of coordinate directions. Explicit transforms are given, including Fourier and Radon, as well as multivariable Shannon interpolation. Motivation: "How do neural nets (NN) learn?" For example, consider a single (hidden) layer neural net. Given a specified starting point, how will it be updated through "backward propagation" ("backprop") as each training example passes through the system? The process aims for the output to approximate uniformly any continuous function, i.e., (in the language of learning machines) the system learns a hypothesis. A feature of this neural network (NN) is that backprop is fast enough to train very large networks.
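
For orientation only, the following are the standard background results behind the reduction theme, not the new constructions announced in the talk. The Kolmogorov-Arnold superposition theorem writes a continuous function of n variables as

  f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\Big( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \Big),

with all \Phi_q and \varphi_{q,p} continuous functions of one variable, while Cybenko's theorem states that finite superpositions

  g(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\big( w_j^{\mathsf T} x + b_j \big),

with \sigma a fixed continuous sigmoidal function, are dense in C([0,1]^n) with respect to the uniform norm.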
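A minimal numerical sketch of the motivating example, a single-hidden-layer network trained by backprop (gradient descent on squared error) to approximate a continuous function on [0,1]; this is an illustration in the spirit of the Cybenko setting, not the speaker's construction, and the target function, network width, and learning rate are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Any continuous function on [0, 1]; chosen only for illustration.
    return np.sin(2 * np.pi * x) + 0.5 * x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single hidden layer: y(x) = sum_j a_j * sigmoid(w_j * x + b_j).
N = 30
w = rng.normal(size=N)       # input-to-hidden weights
b = rng.normal(size=N)       # hidden biases
a = rng.normal(size=N) / N   # hidden-to-output weights

x_train = rng.uniform(0.0, 1.0, size=200)
y_train = target(x_train)
n = len(x_train)

lr = 0.1
for epoch in range(5000):
    # Forward pass, vectorized over the training set.
    h = sigmoid(np.outer(x_train, w) + b)   # hidden activations, shape (n, N)
    y_hat = h @ a                            # network outputs
    err = y_hat - y_train                    # residuals

    # Backward pass: gradients of mean squared error w.r.t. a, w, b.
    grad_a = h.T @ err / n
    grad_z = np.outer(err, a) * h * (1.0 - h)   # per-example gradient at the hidden pre-activations
    grad_w = x_train @ grad_z / n
    grad_b = grad_z.mean(axis=0)

    a -= lr * grad_a
    w -= lr * grad_w
    b -= lr * grad_b

# Check the uniform fit on a held-out grid.
x_test = np.linspace(0.0, 1.0, 50)
y_test = sigmoid(np.outer(x_test, w) + b) @ a
print("max abs error:", np.max(np.abs(y_test - target(x_test))))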
