Keynote Speakers
Neuroinformatics-2023
Monday, October 23, 10:15–13:15
Lecture hall: Assembly Hall (Актовый зал)
Chair: Prof. DUNIN-BARKOWSKI WITALI
University of Leicester, Great Britain
Topological grammars and dynamic clustering in big data analysis with applications in bioinformatics and medical informatics
2. KARANDASHEV YA.M.
Scientific Research Institute for System Analysis, Moscow
Neural network approach to solving problems of gas dynamics with chemical transformations
3. DEMIN V.A.
National Research Centre "Kurchatov Institute", Moscow
Neuromorphic computing architecture: from components to principles
Monday, October 23, 14:00–16:00
Lecture hall: Assembly Hall (Актовый зал)
Chair: Prof. MALSAGOV MAGOMED
London Institute for Mathematical Sciences, United Kingdom
Features and limitations of large language models
5. SAMSONOVICH A.V.
National Research Nuclear University (MEPhI), Moscow
Socio-emotional artificial intelligence based on cognitive architectures and large language models
Monday, October 23, 16:30–18:30
Lecture hall: Assembly Hall (Актовый зал)
Chair: Prof. MALSAGOV MAGOMED
State Research Institute of Aviation Systems, Moscow
AI for robotics and control: from weak AI to general universal intelligence
7. YUDIN D.A.
Moscow Institute of Physics and Technology (State University)
Neural network methods for constructing multimodal maps and their application
Tuesday, October 24, 10:00–12:00
Lecture hall: Conference Hall, NLK Building, 2nd floor (НЛК, 2 этаж, Конференц-зал)
Chair: Prof. YUDIN DMITRY
Institute for Systems Analysis of Russian Academy of Sciences, Moscow
Integral neural networks
We introduce a new family of deep neural networks in which the conventional representation of network layers as N-dimensional weight tensors is replaced by a continuous representation along the filter and channel dimensions. We call such networks Integral Neural Networks (INNs). The weights of an INN are continuous functions defined on N-dimensional hypercubes, and the discrete transformations of layer inputs are replaced by the corresponding continuous integration operations. At the inference stage, the continuous layers are converted to the traditional tensor representation by numerical quadrature. Such a representation allows a network to be discretized to an arbitrary size, with varying discretization intervals for the integral kernels; in particular, a model can be pruned directly on an edge device, suffering only a small performance loss at high rates of structural pruning and requiring no fine-tuning. To evaluate the practical benefits of the proposed approach, we conducted experiments with various neural network architectures on multiple tasks. The results show that INNs match the performance of their conventional discrete counterparts while preserving approximately the same performance (a 2% accuracy loss for ResNet18 on ImageNet) at a high rate of structural pruning (up to 30%) without fine-tuning, compared to a 65% accuracy loss for conventional pruning methods under the same conditions.
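The abstract describes two mechanisms: converting continuous layers to discrete tensors by numerical quadrature, and pruning by re-discretizing the same layer to a smaller size. The Python/NumPy sketch below illustrates both on a single toy layer; it is a minimal illustration under assumed details (a closed-form weight function, trapezoidal quadrature, and the illustrative names continuous_weight and discretize_layer), not the authors' implementation.

    import numpy as np

    # Minimal sketch of the INN idea from the abstract above, under our own
    # simplifying assumptions: the layer weight is a continuous function
    # w(s, t) on the unit square, and a discrete weight matrix is obtained
    # by sampling it on a grid and folding trapezoidal quadrature weights
    # into the matrix. The weight function and all names are illustrative,
    # not taken from the authors' implementation.

    def continuous_weight(x_out, x_in):
        """Toy smooth weight function w: [0,1]^2 -> R, standing in for the
        learned continuous layer representation."""
        return np.sin(2.0 * np.pi * np.outer(x_out, x_in)) + 0.1 * x_out[:, None]

    def discretize_layer(n_out, n_in):
        """Sample the continuous weight on an (n_out x n_in) grid so that
        y = W @ x approximates the integral transform
        y(s) = integral_0^1 w(s, t) x(t) dt (trapezoidal rule over t)."""
        x_out = np.linspace(0.0, 1.0, n_out)
        x_in = np.linspace(0.0, 1.0, n_in)
        W = continuous_weight(x_out, x_in)
        q = np.full(n_in, 1.0 / (n_in - 1))  # trapezoidal quadrature weights
        q[0] *= 0.5
        q[-1] *= 0.5
        return W * q[None, :]

    # "Structural pruning" is then just re-discretization to a coarser grid:
    # the same continuous layer is re-sampled with fewer input channels.
    signal = lambda t: np.cos(3.0 * np.pi * t)
    for n_in in (64, 32, 16):  # 0%, 50%, 75% channel reduction
        W = discretize_layer(n_out=8, n_in=n_in)
        x = signal(np.linspace(0.0, 1.0, n_in))
        print(n_in, np.round(W @ x, 4)[:3])  # outputs stay close across sizes

Because the coarser matrices are re-samplings of the same continuous layer, no fine-tuning is needed; in this toy, that is the sense in which structural pruning reduces to choosing a smaller discretization grid.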
9. OSELEDETS I.V.
Skolkovo Institute of Science and Technology (Skoltech)
Mathematics and machine learning
Thursday, October 26, 14:00–16:00
Lecture hall: Conference Hall, NLK Building, 2nd floor (НЛК, 2 этаж, Конференц-зал)
Chair: Prof. DUNIN-BARKOWSKI WITALI
Institute of Psychology of Russian Academy of Sciences, Moscow
Neural patterns of learning and memory reproduction in animals
11. ANOKHIN K.V.
Cognitome: an algorithmic theory of higher brain organization
Thursday, October 26, 18:00–20:00
Lecture hall: Conference Hall, NLK Building, 2nd floor (НЛК, 2 этаж, Конференц-зал)
Chair: Prof. VVEDENSKY VIKTOR
Neurocognitive dynamics
13. LEBEDEV M.
National Research University "Higher School of Economics", Moscow
Neurointerfaces for rehabilitation
Friday, October 27, 10:00–11:00
Lecture hall: Conference Hall, NLK Building, 2nd floor (НЛК, 2 этаж, Конференц-зал)
Chair: Prof. USHAKOV VADIM
Optopharmacologic modulation and optosensory analysis of nerve cell activity