©Trevor Hastie

Research Colloquium on Business Informatics

21 October

As part of the Research Colloquium on Information Systems and Data Science, Prof. Trevor Hastie (Department of Statistics and Department of Biomedical Data Science, Stanford University) will speak on 21 October 2021 at 4:15 pm on neural networks and deep learning.

Neural networks rose to fame in the late 1980s. There was a lot of excitement and a certain amount of hype associated with this approach. This was followed by a synthesis stage, in which the properties of neural networks were analyzed by machine learners, mathematicians, and statisticians; algorithms were improved, and the methodology stabilized. Then along came SVMs, boosting, and random forests, and neural networks fell somewhat from favor. Part of the reason was that neural networks required a lot of tinkering, while the new methods were more automatic. Also, on many problems the new methods outperformed poorly trained neural networks. This was the status quo for the first decade of the new millennium. All the while, though, a core group of neural-network enthusiasts were pushing their technology harder on ever-larger computing architectures and data sets. Neural networks resurfaced after 2010 under the new name deep learning, with new architectures, additional bells and whistles, and a string of success stories. In this talk, I discuss the basics of neural networks and deep learning, and then go into some of the specializations for specific problems, such as convolutional neural networks (CNNs) for image classification and recurrent neural networks (RNNs) for time series and other sequences.

Access details
Join Zoom Meeting:

https://leuphana.zoom.us/j/99663552105?pwd=dG9KZUtJU0hUMkE4RHNEVDhiUlNIZz09

Meeting ID: 99663552105
Passcode: FoKoWI