Keynote Speakers

Neuroinformatics - 2019



Monday, October 7                    11:20 – 13:00
Lecture hall: Room 4.18 (5.17)

Chair: Prof. DUNIN-BARKOWSKI WITALI

1. GORBAN A.N.
University of Leicester, Great Britain; Lobachevsky State University of Nizhny Novgorod, Russia
Artificial intelligence mistakes, their correction and simplicity revolution in neurosciences

AI systems make mistakes and will always make mistakes. The future development of AI and its applications depends critically on our ability to detect and correct these mistakes. Correction operations should be simple and fast despite the complexity of the AI systems. Despite the apparently obvious and widespread consensus on the brain's complexity, sprouts of the single-neuron revolution emerged in neuroscience in the 1970s. They brought many unexpected discoveries, including grandmother (concept) cells and sparse coding of information in the brain. In machine learning, the famous curse of dimensionality long seemed to be an unsolvable problem. Nevertheless, the idea of the blessing of dimensionality is gradually becoming more popular. Ensembles of non-interacting or weakly interacting simple units prove to be an effective tool for solving essentially multidimensional and apparently incomprehensible problems. This approach is especially useful for one-shot (non-iterative) correction of errors in large legacy artificial intelligence systems. These simplicity revolutions in the era of complexity have deep fundamental reasons grounded in the geometry of multidimensional data spaces. The Gibbs equivalence of ensembles, with its further generalizations, shows that data in high-dimensional spaces are concentrated near shells of smaller dimension. New stochastic separation theorems reveal the fine structure of these data clouds. We review and analyse biological, physical, and mathematical problems at the core of the fundamental question: how can a high-dimensional brain organise reliable and fast learning in a high-dimensional world of data using simple tools? The lecture is based on the paper: Gorban, A.N., Makarov, V.A., & Tyukin, I.Y. (2019). The unreasonable effectiveness of small neural ensembles in high-dimensional brain. Physics of Life Reviews. https://doi.org/10.1016/j.plrev.2018.09.005
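
The separation effect behind this one-shot correction is easy to see numerically. The sketch below is a toy illustration written for this program, not the authors' code: the dimension, sample size, and uniform-ball distribution are arbitrary choices. It separates a single "mistake" point from a large random cloud with one linear Fisher-type unit, the kind of simple corrector the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_points = 1000, 5000             # high dimension, many samples

# Data cloud: i.i.d. points uniform in the unit ball (Gaussian direction trick)
g = rng.standard_normal((n_points, n_dim))
radius = rng.uniform(0, 1, n_points) ** (1 / n_dim)
cloud = g / np.linalg.norm(g, axis=1, keepdims=True) * radius[:, None]

x_err = cloud[0]                         # the input on which the system erred
rest = cloud[1:]

# One-shot Fisher-type corrector: linear functional l(y) = <x_err, y> with a
# threshold halfway between its value on x_err and its maximum on the cloud
scores = rest @ x_err
theta = 0.5 * (x_err @ x_err + scores.max())
print("separated:", bool(scores.max() < theta < x_err @ x_err))
# In dimension ~10^3 this prints True with overwhelming probability: inner
# products of independent high-dimensional vectors concentrate near zero.
```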

2. LEMPITSKIY V.S.
Skolkovo Institute of Science and Technology
Neural network photo-realistic avatars

The lecturer will present recent results on modeling the appearance of humans with generative convolutional neural networks, obtained in the Samsung lab.

Monday, October 7                    14:00 – 16:00
Lecture hall: Room 4.18 (5.17)

Chair: Prof. SHUMSKIY SERGEY

3. VIZILTER YU. V.
State Research Institute of Aviation Systems, Moscow
Deep neural networks and deep optimization

There is no separate AI revolution: it is just a special case of the global “Deep” revolution. Deep Neural Nets (DNNs) can now solve optimization problems using graph models and Deep Reinforcement Learning. This is wider than AI alone: it concerns all technical tasks and devices. DNNs and Deep optimization tools evolve as follows: Deep Data Analysis – Deep Object/Process Modelling – Deep Object/Process Control – Deep Object/Process Design. We propose a new Russian software platform for DNN design, learning, and hardware implementation.

4. KAZANOVICH YA. B.
Institute of Mathematical Problems of Biology RAS
Modeling of cognitive functions of the brain using oscillatory neural networks

We describe an oscillatory neural network designed as a system of generalized phase oscillators with a central element. It is shown that a winner-take-all principle can be realized in this system through the competition of peripheral oscillators for synchronization with the central oscillator. Several examples illustrate how this network can be used to simulate various cognitive functions: consecutive selection of objects in an image, visual search, and multiple-object tracking.
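
A minimal numerical sketch of this winner-take-all mechanism is given below. It uses a toy parameterization of Kuramoto-type phase oscillators chosen for illustration, not the exact model of the talk: the peripheral oscillator with the strongest coupling phase-locks to the central one, while the others keep drifting at their own frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 5, 0.01, 60000
omega_c = 2.0                               # natural frequency of the centre
omega = omega_c + rng.uniform(0.8, 1.2, n)  # detuned peripheral frequencies
w = np.array([0.3, 0.3, 0.3, 0.3, 2.0])     # couplings; unit 4 is "salient"

theta_c = 0.0
theta = rng.uniform(0, 2 * np.pi, n)
lock = np.zeros(n, dtype=complex)
for t in range(steps):                      # simple Euler integration
    dth_c = omega_c + 0.1 * np.mean(np.sin(theta - theta_c))  # centre follows
    dth = omega + w * np.sin(theta_c - theta)                 # periphery pulled
    theta_c += dt * dth_c
    theta += dt * dth
    if t >= steps // 2:                     # locking index over the second half
        lock += np.exp(1j * (theta - theta_c))

print(np.round(np.abs(lock) / (steps - steps // 2), 2))
# Only the strongly coupled unit keeps a locking index near 1: it "wins" the
# competition for synchronization; weakly coupled units (w < detuning) drift.
```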

Tuesday, October 8                    10:30 – 11:30
Lecture hall: Room 4.18 (5.17)

Chair: Prof. DOLENKO SERGEY

5. DOROGOV A. YU.
Saint Petersburg Electrotechnical University "LETI"
Regular transformations with deep neural network architecture

A class of multilayer tunable transformations with the regular architecture of deep neural networks is considered. Transformations of this class admit an analytical representation of the topology of the implementing network, which makes it possible to develop learning algorithms that converge absolutely in a finite number of steps. The paper shows that a regular deep learning network can be constructed as a structural extension of the architecture of fast transformation algorithms by additional planes with controlled switching. It is proved that any linear transformation of factorizable dimension can be realized by a suitable multilayer regular network of this class. As applied to pattern recognition, a consequence of this result is a cardinal increase in the number of recognized classes, up to the dimension of the output vector of the implementing network. Moreover, multilayering makes it possible to build fast pipelined processors for arbitrary linear transformations, including the Karhunen-Loève transform. The paper presents examples of the use of regular deep learning networks for the recognition of one-dimensional signals, images, and multidimensional data. It is shown that, in a streaming version, linear regular deep networks can be used to implement quantum algorithms and measurement schemes.
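
The structural idea, that a transform of factorizable dimension N = N1*N2 splits into layers of small dense blocks with switching between them, can be checked on the classical Cooley-Tukey factorization of the DFT. The sketch below uses that standard factorization as a stand-in example; it is an illustration, not the author's construction.

```python
import numpy as np

N1, N2 = 4, 8
N = N1 * N2
x = np.random.default_rng(2).standard_normal(N)

def dft(n):                                  # dense DFT matrix of size n
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

X = x.reshape(N1, N2)                        # X[n1, n2] = x[N2*n1 + n2]
A = dft(N1) @ X                              # layer 1: N2 small DFTs of size N1
A = A * np.exp(-2j * np.pi *
               np.outer(np.arange(N1), np.arange(N2)) / N)  # twiddle "switching"
B = A @ dft(N2)                              # layer 2: N1 small DFTs of size N2
y = B.T.reshape(N)                           # output reordering: k = k1 + N1*k2

print(np.allclose(y, dft(N) @ x))            # True: layered net == full matrix
```

Each layer touches only size-N1 or size-N2 blocks, which is exactly the sparse, regular topology that admits the analytical representation mentioned in the abstract.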

Thursday, October 10                    10:30 – 11:20
Lecture hall: Room 4.18 (5.17)

Chair: Prof. EZHOV ALEXANDER

6. XU KUNITSYN
Huawei
Efficient neural network design for terminal computing

In this talk, the pipeline and framework for efficient neural network design will be discussed. We will introduce the challenges of neural network inference on devices and present current research at Huawei that addresses these challenges, including pruning and quantization, cell design, distillation, and the application of AutoML.
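
As a pointer to what two of the named techniques mean in practice, here is a minimal numpy sketch of magnitude pruning and uniform int8 quantization of a weight matrix. It is illustrative only: the sizes, sparsity level, and quantization scheme are arbitrary assumptions, and production on-device pipelines rely on framework tooling rather than raw numpy.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((256, 256)).astype(np.float32)

# Magnitude pruning: zero out the 80% of weights with the smallest |value|
thresh = np.quantile(np.abs(W), 0.80)
W_pruned = np.where(np.abs(W) >= thresh, W, 0.0).astype(np.float32)

# Symmetric uniform int8 quantization of the surviving weights
scale = np.abs(W_pruned).max() / 127.0
W_int8 = np.round(W_pruned / scale).astype(np.int8)
W_deq = W_int8.astype(np.float32) * scale    # dequantize to measure the error

x = rng.standard_normal(256).astype(np.float32)
rel_err = np.linalg.norm(W @ x - W_deq @ x) / np.linalg.norm(W @ x)
print(f"sparsity={np.mean(W_int8 == 0):.2f}  relative output error={rel_err:.2f}")
```

For a random matrix the output error of 80% pruning is large; trained networks, whose weight energy is concentrated in a small fraction of connections, tolerate far higher sparsity.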

Russian Neural Network Association, Russian Academy of Sciences, Ministry of Education and Science of the Russian Federation, MIPT, NRNU MEPhI, SRISA RAS, MAI, Institute for Advanced Brain Studies of MSU