Keynote Speakers

Neuroinformatics - 2015


Alexander Gorban

Alexander N. Gorban is a Professor at the University of Leicester and director of its Mathematical Modeling Centre. He has contributed to many areas of fundamental and applied science, including statistical physics, non-equilibrium thermodynamics, machine learning and mathematical biology. He is the author of about 20 books and 300 scientific publications and has founded several scientific schools in physical and chemical kinetics, dynamical systems theory and artificial neural networks.




Evgeny Mirkes

Evgeny M. Mirkes is a Professor at Siberian Federal University. He is an expert in mathematical modeling, applied mathematics and programming, and a developer of training methods for artificial neural networks and of standards for their programming and training. He is currently a Research Fellow at the University of Leicester, where he develops data mining methods for many types of data (veterinary, medical, psychological, geological and so on).





Andrei Zinovyev

Andrei Zinovyev is a physicist by training. He received his Ph.D. in the speciality "Mathematical and software tools for computers, computational complexes and computer networks". Since 2000 he has been working in bioinformatics and systems biology.

He is currently a researcher in the Bioinformatics, Biostatistics, Epidemiology and Computational Systems Biology of Cancer unit at Institut Curie, Paris, and coordinator of the Computational Systems Biology of Cancer team.




Ruedi Stoop

Ruedi (Rudolf) Stoop studied Mathematics and did a PhD in Computer-Aided Physics at Zürich University. At the University of Berne, he did a Habilitation in the field of Nonlinear Dynamics. As an Associate Professor, he lectures at the University of Berne, at the Swiss Federal Institute of Technology Zurich (ETHZ), at the University of Zürich (UNIZH) and at the University of Applied Sciences of Northwestern Switzerland. Currently, at the Institute of Neuroinformatics ETHZ/UNIZH, he leads a research group concentrating on the application of Nonlinear Dynamics and Statistical Mechanics to problems in Neuroscience.



Misha Tsodyks

Misha Tsodyks received his Ph.D. degree in Theoretical Physics from the Landau Institute of Theoretical Physics in Moscow. He then held various research positions in Moscow, Rome, Jerusalem and San Diego, before joining the Weizmann Institute of Science in Rehovot, Israel, in 1995, where he became a full professor in 2005. Misha Tsodyks works on a wide range of topics in computational neuroscience, such as attractor neural networks, place-related activity in hippocampus, mathematical models of short- and long-term synaptic plasticity in the neocortex, population activity and functional architecture in the primary visual cortex and perceptual learning in the human visual system.


Yulia Timofeeva

Yulia Timofeeva is an Associate Professor in the Department of Computer Science (Computational Biology and Bioimaging Group) and the Centre for Complexity Science at the University of Warwick. Her research interests lie in the general area of mathematical biology and theoretical/computational neuroscience. In particular, she focuses on applying principles from biophysics, nonlinear dynamics and numerical analysis to the modeling and study of cellular systems.




Konstantin Anokhin

Konstantin Anokhin is Head of the Neuroscience Department of the National Research Centre "Kurchatov Institute", Moscow, and Head of the Memory Neurobiology Laboratory at the P.K. Anokhin Institute of Normal Physiology (Russian Academy of Medical Sciences). His studies address what memory is, how our subjective experiences are stored in brain cells and molecules, how our memories change with time, how this is connected to consciousness, and how our brains produce consciousness.




Alexei Samsonovich

Alexei V. Samsonovich is a Research Assistant Professor at the Krasnow Institute for Advanced Study, George Mason University. He has chaired the annual conference on Biologically Inspired Cognitive Architectures (BICA) since 2008 and has been Editor-in-Chief of the Elsevier journal BICA since 2012. He holds an M.Sc. in Theoretical Physics from the Moscow Institute of Physics and Technology and a Ph.D. in Applied Mathematics from the University of Arizona, where, together with Bruce McNaughton and Lynn Nadel, he developed the well-known continuous-attractor theory of hippocampal spatial maps and the mental-state framework for cognitive modeling. His current research focuses on BICA, including spiking neural network cognitive modeling, semantic cognitive mapping and metacognitive architectures.


Gennady Osipov

Gennady S. Osipov (born October 13, 1948) is a Russian scientist holding a Ph.D. and a Dr. Sci. in theoretical computer science, information technologies and artificial intelligence. He is vice-president of the Institute for Systems Analysis of the Russian Academy of Sciences and a professor at the Moscow Institute of Physics and Technology (State University) and at Bauman Moscow State Technical University. Osipov has contributed to the theory of dynamic intelligent systems and to heterogeneous semantic networks used in applied intelligent systems.

He is a seven-time President of the Russian Association for Artificial Intelligence. In 1997–1999, 1999–2001 and 2001–2003 Gennady Osipov received Governmental Grants for Outstanding Scholars by decree of the President of the Russian Federation. Osipov is a member of the Russian Academy of Natural Sciences and of the Tsiolkovsky Academy of Cosmonautics, a Fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI Fellow), and deputy editor-in-chief of the "Artificial Intelligence and Decision Making" journal.


Oleg Kuznetsov

Oleg Kuznetsov is a research scientist and expert in the field of artificial intelligence and logic control. He is head of a laboratory at the Institute of Control Sciences, Professor, Ph.D., Chairman of the Scientific Council of the Russian Association for Artificial Intelligence, and a member of the editorial boards of the journals "Automation and Remote Control", "Problems of Control" and "Artificial Intelligence and Decision Making".






Monday, January 19                    11:20 – 13:00
Lecture hall: Assembly Hall

Chair: Prof. DUNIN-BARKOWSKI WITALI

1. A.N. GORBAN¹, E.M. MIRKES¹, A. ZINOVYEV²
¹University of Leicester, Great Britain
²Curie Institute, Paris, France
Robust topological grammars for unsupervised neural networks

We develop a family of robust algorithms for unsupervised learning. These algorithms aim to assimilate datasets of complex topologies and to approximate data by dendrites and cubic complexes. We develop a measure of approximator complexity, find the balance between accuracy and complexity, and define the optimal approximations.
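
The balance between accuracy and complexity can be illustrated with an elastic-graph style energy of the kind used in this family of methods: the data approximation error is penalized by edge-stretching and star-bending terms. The Python fragment below is only a minimal sketch under these assumptions; the toy data, the chain topology and the parameters lam and mu are illustrative choices, not the algorithm presented in the talk.

```python
import numpy as np

def elastic_energy(X, nodes, edges, stars, lam=0.01, mu=0.1):
    """Elastic-graph energy: data approximation error plus penalties on
    edge stretching and on star bending (illustrative form)."""
    # approximation term: every data point is attached to its nearest node
    d2 = ((X[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    u_approx = d2.min(axis=1).mean()
    # stretching term: squared length of every graph edge
    u_stretch = lam * sum(((nodes[a] - nodes[b]) ** 2).sum() for a, b in edges)
    # bending term: deviation of each star's centre from the mean of its leaves
    u_bend = mu * sum(
        ((nodes[leaves].sum(axis=0) - len(leaves) * nodes[c]) ** 2).sum()
        for c, leaves in stars
    )
    return u_approx + u_stretch + u_bend

# toy example: a noisy arc approximated by a five-node chain
rng = np.random.default_rng(0)
t = rng.uniform(0.0, np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(300, 2))
nodes = np.c_[np.linspace(-1.0, 1.0, 5), np.zeros(5)]  # initial chain positions
edges = [(i, i + 1) for i in range(4)]
stars = [(i, [i - 1, i + 1]) for i in range(1, 4)]      # interior nodes and their neighbours
print("elastic energy of the initial chain:", elastic_energy(X, nodes, edges, stars))
```

Increasing lam and mu favours simpler (shorter, straighter) approximators at the cost of a larger approximation error, which is the accuracy-complexity trade-off referred to above.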

2. R. STOOP
Institute of Neuroinformatics, UZH and ETH Zurich, Switzerland
The cochlea – a prototypical ancient neural network with a critical architecture

Using a biophysically close implementation of the mammalian hearing sensor, we have recently shown that the pitch we perceive for complex sounds is of purely physical, rather than cortical, origin. From the physical principles that guided the evolution of our hearing sensor, we infer the nature of pitch as an embracing description of the emergent complexity arising from the interaction of the nonlinear amplifiers present in the sensor, and we use it to purposefully tune the sensor towards the perception of sounds we want to listen to. We then show that the network of amplifiers residing in the hearing sensor is critical, and we observe how this criticality changes as we listen to a target sound.
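
The abstract does not spell out the amplifier model, but nonlinear amplifiers of this kind are commonly described by a Hopf oscillator poised at its critical point; the equations below are a standard textbook illustration of that idea and are an assumption here, not necessarily the formulation used in the talk.

```latex
% Forced Hopf normal form as a model of an active cochlear amplifier
\dot z = (\mu + i\omega_0)\, z - |z|^2 z + F e^{i\omega t}, \qquad z \in \mathbb{C}.
% At the critical point (\mu = 0) and on resonance (\omega = \omega_0),
% the steady-state amplitude obeys
|z|^3 = F \;\Longrightarrow\; |z| = F^{1/3},
% so the gain |z|/F = F^{-2/3} grows as the input gets weaker,
% which is the hallmark of an amplifier operating at criticality.
```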

Monday, January 19                    14:00 – 15:45
Lecture hall: Assembly Hall

Chair: Prof. DUNIN-BARKOWSKI WITALI

3. M. TSODYKS
Weizmann Institute of Science, Rehovot, Israel
Understanding the capacity of information retrieval from long-term memory

Human memory stores vast amounts of information. Yet retrieving this information is challenging when specific cues are lacking. Classical experiments on free recall of lists of randomly assembled words indicate non-trivial scaling laws for the number of recalled words for lists of increasing length. The fundamental factors that control retrieval capacity are not clear. Here we propose a simple associative model of retrieval where each recalled item triggers the recall of the next item based on the similarity between their long-term neuronal representations. The model predicts retrieval capacity laws that are compatible with the psychological literature.
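
As an illustration of the kind of similarity-driven retrieval walk described above, the sketch below stores each list item as a random sparse binary pattern and lets recall jump deterministically to the most similar item while avoiding immediate backtracking, counting how many distinct items are reached before the walk cycles. The encoding, the transition rule and all parameters are assumptions made for illustration, not the authors' exact model.

```python
import numpy as np

def recall_capacity(L, N, f=0.05, seed=0):
    """Walk through a list of L items encoded as random sparse binary patterns
    over N neurons: each recalled item triggers the item most similar to it."""
    rng = np.random.default_rng(seed)
    patterns = (rng.random((L, N)) < f).astype(float)
    overlap = patterns @ patterns.T             # pairwise similarity of representations
    np.fill_diagonal(overlap, -np.inf)          # an item cannot trigger itself
    prev, cur = -1, 0                           # start recall from the first item
    recalled = {cur}
    seen_transitions = set()
    while (prev, cur) not in seen_transitions:  # stop once the walk enters a cycle
        seen_transitions.add((prev, cur))
        order = np.argsort(overlap[cur])[::-1]  # candidates, most similar first
        nxt = order[0] if order[0] != prev else order[1]  # avoid immediate backtracking
        prev, cur = cur, int(nxt)
        recalled.add(cur)
    return len(recalled)

# number of distinct items retrieved for lists of increasing length
print([recall_capacity(L, N=5000) for L in (8, 16, 32, 64)])
```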

4. YU. TIMOFEEVA, D. MICHIELETTO, Y. LU, S. COOMBES
¹University of Warwick, Great Britain
²The University of Nottingham, Great Britain
Dendrites, neurons and resonances

Gap junctions, also referred to as electrical synapses, are expressed along the entire central nervous system and are important in mediating various brain rhythms in both normal and pathological states. These connections can form between the dendritic trees of individual cells. To obtain insight into the modulatory role of gap junctions in tuning networks of resonant dendritic trees, I will present two methods for calculating the response function of a network of gap-junction coupled neurons. These methods will then be used to construct compact closed form solutions for a two-cell network of spatially extended neurons which will allow the study of the role of location and strength of the gap junction on network dynamics.
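
As a minimal illustration of what a response function of a gap-junction coupled pair looks like, consider two point neurons with membrane impedances Z1(ω) and Z2(ω) coupled by a gap-junction conductance g_gj, with current I(ω) injected into cell 1. This ignores the spatially extended dendrites treated in the talk; the symbols and the quasi-active impedance form are assumptions used only to sketch the idea.

```latex
% Two point neurons coupled by a gap junction of conductance g_{\mathrm{gj}};
% current I(\omega) is injected into cell 1.
\frac{V_1}{Z_1(\omega)} + g_{\mathrm{gj}}\,(V_1 - V_2) = I(\omega), \qquad
\frac{V_2}{Z_2(\omega)} + g_{\mathrm{gj}}\,(V_2 - V_1) = 0 .
% Eliminating V_2 gives the two response functions
\frac{V_1(\omega)}{I(\omega)} =
  \left( \frac{1}{Z_1(\omega)} + \frac{g_{\mathrm{gj}}}{1 + g_{\mathrm{gj}} Z_2(\omega)} \right)^{-1},
\qquad
\frac{V_2(\omega)}{I(\omega)} =
  \frac{g_{\mathrm{gj}} Z_2(\omega)}{1 + g_{\mathrm{gj}} Z_2(\omega)}\,\frac{V_1(\omega)}{I(\omega)} .
% A resonant ("quasi-active") membrane can be modelled by an impedance such as
Z^{-1}(\omega) = i\omega C + g_L + \frac{\gamma}{1 + i\omega \tau_w},
% where \gamma and \tau_w describe a linearized voltage-gated current.
```

The location and strength dependence studied in the talk enters through where along the dendritic tree the gap junction attaches, which replaces the point impedances above with the full response functions of the extended trees.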

Tuesday, January 20                    10:30 – 12:15
Lecture hall: Assembly Hall

Chair: Prof. DUNIN-BARKOWSKI WITALI

5. K.V. ANOKHIN

The cognitome: a hypernetwork brain model

Despite impressive advances in neuroscience, the nature of the higher brain functions still eludes satisfactory understanding. This situation, known as the "explanatory gap", calls for new explanatory models and principles. This report will suggest a model of the brain as a cognitive hypernetwork, the K-network. The vertices of the K-network (the cognitome) are COGs (COgnitive Groups), subsets of vertices from the underlying N-network (the connectome) associated by a common cognitive experience. Edges between k-vertices in the cognitome are formed by the sum of edges between the subsets of corresponding n-vertices in the connectome. In terms of algebraic topology, a COG is a relational simplex, or hypersimplex: its base is the simplex on vertices of the underlying N-network, and its apex is the vertex possessing a new quality at the higher-level K-network. Hypernetworks generalize networks and hypergraphs, provide a formalism for describing emergent phenomena in multi-level systems, and allow modeling of much more complex structures than networks and hypergraphs. The report presents some non-trivial bipartite relations between N- and K-networks that form the background for the hypernetwork brain theory (HNBT).
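
The relation between the two levels described above (a K-edge weight obtained by summing connectome edges between the vertex subsets of two COGs) can be stated compactly in code. The sketch below is only an illustration of that bipartite relation; the toy connectome and the COG membership sets are assumptions, not data from the report.

```python
import itertools
import numpy as np

def cognitome_edges(adjacency, cogs):
    """K-network (cognitome) edge weights derived from an N-network (connectome):
    the weight between two COGs is the sum of connectome edges linking their
    underlying vertex subsets."""
    weights = {}
    for (a, ca), (b, cb) in itertools.combinations(enumerate(cogs), 2):
        w = adjacency[np.ix_(sorted(ca), sorted(cb))].sum()
        if w > 0:
            weights[(a, b)] = int(w)
    return weights

# toy connectome of six neurons and three overlapping COGs
rng = np.random.default_rng(1)
A = (rng.random((6, 6)) < 0.4).astype(int)   # directed connectome adjacency
np.fill_diagonal(A, 0)
cogs = [{0, 1, 2}, {2, 3, 4}, {4, 5}]        # COGs may share underlying vertices
print(cognitome_edges(A, cogs))
```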

6. A. V. SAMSONOVICH
National Research Nuclear University (MEPhI), Moscow
Functional capabilities of biologically inspired cognitive architectures

General functional aspects of human cognition can be described at a computational level and replicated in a machine based on principles that do not require detailed modeling of neurons and brain structures. These are, primarily, basic principles of perception, reasoning, decision making and action control, formulated in terms of symbolic models like cognitive architectures. The key principles are those that support social-emotional intelligence, narrative intelligence, metareasoning, autonomous goal reasoning, semantic mapping, human-level teachability and creativity. Creation of a machine analog of the human mind based on these principles and its acceptance as a human-equivalent character will lead to a technological breakthrough with an impact on all aspects of human life.

Tuesday, January 20                    13:15 – 15:00
Lecture hall: Assembly Hall

Chair: Prof. DUNIN-BARKOWSKI WITALI

7. G.S. OSIPOV
Institute for Systems Analysis of Russian Academy of Sciences, Moscow
Neurophysiological and psychological foundation of the symbolic picture of the world

A model of the world picture of an active subject is considered. The model is based on data from neurophysiological studies and on well-known psychological phenomena. The processes of the formation of patterns, of pattern significance and of personal meaning, as well as the operations of generalization, agglutination and introspection, are considered. Models of such functions of consciousness as goal setting and synthesis of behavior are designed. The existence of different types of world pictures generated by active subjects is explained.

8. O.P. KUZNETSOV, L.YU. ZHILYAKOVA
V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow
Complex networks and cognitive sciences

Complex networks are networks with a large number of nodes and connections between them; examples include the Internet, social networks, airline networks and the neural networks of the brain. The report presents the basic concepts of the theory of complex networks, describes important classes of complex networks (scale-free and small-world networks), and considers processes of activity propagation in networks. The important role of the theory of complex networks in research on brain networks is noted.
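
To make the two network classes and the notion of activity propagation concrete, the sketch below builds a scale-free (Barabasi-Albert) graph and a small-world (Watts-Strogatz) graph with networkx and spreads activity with a simple threshold rule. The threshold rule, the seed set and all parameters are illustrative assumptions, not the specific propagation models discussed in the report.

```python
import networkx as nx

def spread_activity(graph, seeds, threshold=2, steps=20):
    """Threshold model of activity propagation: a node becomes active once at
    least `threshold` of its neighbours are active (illustrative rule)."""
    active = set(seeds)
    for _ in range(steps):
        newly = {n for n in graph.nodes
                 if n not in active
                 and sum(nb in active for nb in graph.neighbors(n)) >= threshold}
        if not newly:                      # propagation has stopped
            break
        active |= newly
    return active

# two classic classes of complex networks mentioned in the report
scale_free = nx.barabasi_albert_graph(1000, m=3, seed=0)         # heavy-tailed degrees
small_world = nx.watts_strogatz_graph(1000, k=6, p=0.1, seed=0)  # short paths, high clustering
for name, g in (("scale-free", scale_free), ("small-world", small_world)):
    reached = spread_activity(g, seeds=range(10))
    print(f"{name}: activity reached {len(reached)} of {g.number_of_nodes()} nodes")
```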
