Video and Audio Content

Alexander Gorban

Alexander N. Gorban is Professor at the University of Leicester and Director of its Mathematical Modeling Centre. He has contributed to many areas of fundamental and applied science, including statistical physics, non-equilibrium thermodynamics, machine learning and mathematical biology. He is the author of about 20 books and 300 scientific publications, and has founded several scientific schools in the areas of physical and chemical kinetics, dynamical systems theory and artificial neural networks.

Robust topological grammars for unsupervised neural networks

We develop a family of robust algorithms for unsupervised learning. These algorithms assimilate datasets with complex topologies and approximate the data by dendritic and cubic complexes. We introduce a measure of approximator complexity and balance accuracy against complexity to define the optimal approximation.
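Gorban's approach grows principal graphs by topological grammars; as a minimal stand-in that illustrates only the accuracy-complexity balance named above, the sketch below scores a plain k-means approximator on a hypothetical noisy curve (the dataset, the penalty weight `lam` and the node range are all invented for illustration) by mean squared error plus a per-node complexity penalty, and picks the node count minimizing the total.

```python
import random

random.seed(0)
# Hypothetical data: noisy samples along a parabola, standing in for a
# dataset with non-trivial structure.
ts = [random.uniform(-1, 1) for _ in range(200)]
data = [(t, t * t + random.gauss(0, 0.05)) for t in ts]

def fit_error(data, nodes):
    """Accuracy term: mean squared distance from each point to its nearest node."""
    total = 0.0
    for x, y in data:
        total += min((x - nx) ** 2 + (y - ny) ** 2 for nx, ny in nodes)
    return total / len(data)

def kmeans(data, k, iters=30, seed=1):
    """Crude k-means: the k node positions form the approximator."""
    rng = random.Random(seed)
    nodes = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in data:
            j = min(range(k),
                    key=lambda i: (p[0] - nodes[i][0]) ** 2 + (p[1] - nodes[i][1]) ** 2)
            clusters[j].append(p)
        nodes = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                 if c else nodes[i] for i, c in enumerate(clusters)]
    return nodes

lam = 0.01  # complexity penalty per node (an arbitrary weight)
scores = {k: fit_error(data, kmeans(data, k)) + lam * k for k in range(1, 9)}
best = min(scores, key=scores.get)
print("optimal number of nodes:", best)
```

Adding nodes always reduces the error term, so without the penalty the "best" approximator would be the most complex one; the penalized score makes the optimum finite, which is the balance the abstract refers to.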

Presentation (.pdf)

Ruedi Stoop

Ruedi (Rudolf) Stoop studied Mathematics and did a PhD in Computer-Aided Physics at Zürich University. At the University of Berne, he did a Habilitation in the field of Nonlinear Dynamics. As an Associate Professor, he lectures at the University of Berne, at the Swiss Federal Institute of Technology Zurich (ETHZ), at the University of Zürich (UNIZH) and at the University of Applied Sciences of Northwestern Switzerland. Currently, at the Institute of Neuroinformatics ETHZ/UNIZH, he leads a research group concentrating on the application of Nonlinear Dynamics and Statistical Mechanics to problems in Neuroscience.

The cochlea – a prototypical ancient neural network with a critical architecture

Using a biophysically faithful implementation of the mammalian hearing sensor, we have recently shown that the pitch we perceive for complex sounds is of purely physical, rather than cortical, origin. From the physical principles that guided the evolution of our hearing sensor, we infer the nature of pitch as an overarching description of the emergent complexity arising from the interaction of the nonlinear amplifiers present in the sensor, and use it to purposefully tune the sensor towards the perception of sounds we want to listen to. We then show that the network of amplifiers residing in the hearing sensor is critical, and observe how this criticality changes as we listen to a target sound.
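The nonlinear amplifiers in such cochlea models are commonly described by the Hopf normal form, and "critical" means poised at the bifurcation point. The sketch below (my illustration, not the speaker's code; the forcing values are arbitrary) solves the steady-state amplitude of a critically tuned Hopf amplifier driven on resonance, showing the compressive cube-root gain that makes weak sounds the most amplified.

```python
def hopf_gain(F, mu=0.0):
    """Steady-state response of a Hopf amplifier driven on resonance.

    In the rotating frame the forced Hopf normal form reduces to
        da/dt = mu*a - a**3 + F      (a = response amplitude, F = forcing),
    so the steady state satisfies a**3 - mu*a = F.  For mu <= 0 the
    positive root is unique; we find it by bisection.
    """
    lo, hi = 0.0, max(1.0, F)
    while hi ** 3 - mu * hi < F:      # make sure the root is bracketed
        hi *= 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid ** 3 - mu * mid < F:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for F in (1e-6, 1e-4, 1e-2):
    a = hopf_gain(F)                  # critical amplifier: mu = 0
    print(f"F = {F:.0e}  response = {a:.4f}  gain = {a / F:.0f}")
```

At criticality (mu = 0) the response is a = F**(1/3), so the gain a/F falls as the input grows: a hallmark of the active, critically tuned hearing sensor.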

Presentation (.pdf)

Misha Tsodyks

Misha Tsodyks received his Ph.D. degree in Theoretical Physics from the Landau Institute of Theoretical Physics in Moscow. He then held various research positions in Moscow, Rome, Jerusalem and San Diego, before joining the Weizmann Institute of Science in Rehovot, Israel, in 1995, where he became a full professor in 2005. Misha Tsodyks works on a wide range of topics in computational neuroscience, such as attractor neural networks, place-related activity in hippocampus, mathematical models of short- and long-term synaptic plasticity in the neocortex, population activity and functional architecture in the primary visual cortex and perceptual learning in the human visual system.

Understanding the capacity of information retrieval from long-term memory

Human memory stores vast amounts of information. Yet retrieving this information is challenging when specific cues are lacking. Classical experiments on free recall of lists of randomly assembled words indicate non-trivial scaling laws for the number of recalled words for lists of increasing length. The fundamental factors that control retrieval capacity are not clear. Here we propose a simple associative model of retrieval where each recalled item triggers the recall of the next item based on the similarity between their long-term neuronal representations. The model predicts retrieval capacity laws that are compatible with the psychological literature.

Presentation (.pdf)

Yulia Timofeeva

Yulia Timofeeva is an Associate Professor in the Department of Computer Science (Computational Biology and Bioimaging Group) and the Centre for Complexity Science at Warwick. Her research interests lie in the general area of Mathematical Biology and Theoretical/Computational Neuroscience. In particular, she focuses on applying principles from biophysics, nonlinear dynamics and numerical analysis to the modeling and study of cellular systems.

Dendrites, neurons and resonances

Gap junctions, also referred to as electrical synapses, are expressed along the entire central nervous system and are important in mediating various brain rhythms in both normal and pathological states. These connections can form between the dendritic trees of individual cells. To obtain insight into the modulatory role of gap junctions in tuning networks of resonant dendritic trees, I will present two methods for calculating the response function of a network of gap-junction coupled neurons. These methods will then be used to construct compact closed form solutions for a two-cell network of spatially extended neurons which will allow the study of the role of location and strength of the gap junction on network dynamics.
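The flavour of such closed-form response functions can be seen in a much simpler setting. The sketch below (my illustration, not the talk's method: it uses point neurons with passive leak conductances and a steady current, not spatially extended resonant dendrites) solves the two-cell gap-junction circuit exactly and shows how coupling strength redistributes the response between the cells.

```python
# Two passive point neurons coupled by a gap junction of conductance gc.
# With leak conductances g1, g2 and a steady current I injected into cell 1,
# Kirchhoff's law gives the linear system
#   g1*V1 + gc*(V1 - V2) = I
#   g2*V2 + gc*(V2 - V1) = 0
# whose solution is the (zero-frequency) response function of the pair.

def two_cell_response(I, g1, g2, gc):
    det = (g1 + gc) * (g2 + gc) - gc * gc
    v1 = I * (g2 + gc) / det   # response of the injected cell
    v2 = I * gc / det          # response transmitted through the gap junction
    return v1, v2

for gc in (0.0, 0.1, 1.0, 10.0):
    v1, v2 = two_cell_response(1.0, g1=1.0, g2=1.0, gc=gc)
    print(f"gc = {gc:5.1f}  V1 = {v1:.3f}  V2 = {v2:.3f}")
```

As gc grows, V1 and V2 converge: the gap junction equalizes the two cells' voltages, and its strength (and, in the full dendritic model, its location) shapes the network response.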

Presentation (.pdf)

Konstantin Anokhin

Head of the Neuroscience Department of the Russian Scientific Centre "Kurchatov Institute", Moscow, and Head of the Memory Neurobiology Laboratory at the Institute of Normal Physiology (Russian Academy of Medical Sciences). What memory is, how our subjective experiences are stored in the cells and molecules of the brain, how our memories change with time, how this is connected to consciousness, and how the brain produces consciousness: these are some of the topics of Konstantin Anokhin's research.

Cognitome: a hypernetwork model of the brain

Despite the impressive progress of neuroscience, the nature of the higher brain functions still eludes satisfactory understanding. This situation, known as the explanatory gap, calls for new explanatory models and principles. This talk proposes a model of brain organization as a cognitive hypernetwork, the K-network. The vertices of the K-network (the cognitome) are COGs (cognitive groups): subsets of vertices of the underlying N-network (the connectome) bound together by a shared cognitive experience. Edges between k-vertices in the cognitome are formed as aggregates of the edges between their constituent subsets of n-vertices in the connectome. In the language of algebraic topology, a COG is a relational simplex, or hypersimplex, whose base is a simplex of vertices of the supporting N-network, while it simultaneously acts as a vertex with a new quality in the higher-level K-network. The hypernetwork formalism generalizes both networks and hypergraphs, providing the apparatus needed to capture emergence in multilevel systems and allowing far more complex structures to be modeled than networks and hypergraphs can. The talk will discuss some non-trivial consequences of the two-way relations between N- and K-networks that underlie the hypernetwork theory of the brain.
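The COG construction can be made concrete on a toy example. In the sketch below (my illustration; the connectome edges and the three cognitive groups are invented for demonstration), the N-network is a small graph, COGs are vertex subsets, and a K-edge between two COGs is realized as the set of N-edges running between their members, mirroring the abstract's definition.

```python
from itertools import combinations

# Toy N-network (a miniature "connectome"): neurons and their connections.
n_edges = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (2, 5)}

# COGs (cognitive groups): subsets of N-vertices bound by shared experience.
# These particular groupings are invented for illustration.
cogs = {"A": {1, 2, 3}, "B": {4, 5}, "C": {5, 6}}

def k_edges(cogs, n_edges):
    """A K-edge joins two COGs when the connectome links their members:
    it is realized as the set of N-edges running between the two subsets."""
    result = {}
    for (x, cx), (y, cy) in combinations(sorted(cogs.items()), 2):
        between = {e for e in n_edges
                   if (e[0] in cx and e[1] in cy) or (e[0] in cy and e[1] in cx)}
        if between:
            result[(x, y)] = between
    return result

print(k_edges(cogs, n_edges))
```

Each COG here plays a double role, exactly as in the hypersimplex picture: a set of vertices at the N-level and a single vertex at the K-level, with K-edges inheriting their substance from bundles of N-edges.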

Presentation (.pdf)

Alexei Samsonovich

Alexei V. Samsonovich is a Research Assistant Professor at the Krasnow Institute for Advanced Study at George Mason University. He has chaired the annual conference on Biologically Inspired Cognitive Architectures (BICA) since 2008 and has been Editor-in-Chief of the Elsevier journal BICA since 2012. He holds an M.Sc. in Theoretical Physics from the Moscow Institute of Physics and Technology and a Ph.D. in Applied Mathematics from the University of Arizona, where together with Bruce McNaughton and Lynn Nadel he developed the well-known continuous-attractor theory of hippocampal spatial maps and the mental-state framework for cognitive modeling. His current research focuses on BICA, including spiking neural network cognitive modeling, semantic cognitive mapping and metacognitive architectures.

Functional capabilities of biologically inspired cognitive architectures

General functional aspects of human cognition can be described at a computational level and replicated in a machine based on principles that do not require detailed modeling of neurons and brain structures. These are, primarily, basic principles of perception, reasoning, decision making and action control, formulated in terms of symbolic models like cognitive architectures. The key principles are those that support social-emotional intelligence, narrative intelligence, metareasoning, autonomous goal reasoning, semantic mapping, human-level teachability and creativity. Creation of a machine analog of the human mind based on these principles and its acceptance as a human-equivalent character will lead to a technological breakthrough with an impact on all aspects of human life.

Presentation (.pdf)

Gennady Osipov

Gennady S. Osipov (born October 13, 1948) is a Russian scientist holding a Ph.D. and a Dr.Sci. in theoretical computer science, information technologies and artificial intelligence. He is the vice-president of the Institute for Systems Analysis of the Russian Academy of Sciences and a professor at the Moscow Institute of Physics and Technology (State University) and at Bauman Moscow State Technical University. Osipov has contributed to the theory of dynamic intelligent systems and to the heterogeneous semantic networks used in applied intelligent systems.

He has served seven terms as President of the Russian Association for Artificial Intelligence. In 1997-1999, 1999-2001 and 2001-2003 Gennady Osipov received Governmental Grants for Outstanding Scholars by Decree of the President of the Russian Federation. Osipov is a member of the Russian Academy of Natural Sciences and of the Tsiolkovsky Academy of Astronautics, a Fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI Fellow), and deputy editor-in-chief of the journal "Artificial Intelligence and Decision Making".

Neurophysiological and psychological foundations of the semiotic picture of the world

We consider a model of the picture of the world of an acting subject that rests, on the one hand, on data from neurophysiological research and, on the other, on well-known psychological phenomena. We examine the processes by which images, meanings and personal senses are formed, and the operations of generalization, agglutination and introspection. Models of such functions of consciousness as goal-setting and behavior synthesis are constructed, and the existence of different types of world pictures in behaving subjects is explained.

Presentation (.pdf)

Oleg Kuznetsov

Oleg Kuznetsov is a research scientist and expert in artificial intelligence and logic control, head of a laboratory at the Institute of Control Sciences, Professor, Ph.D., Chairman of the Scientific Council of the Russian Association for Artificial Intelligence, and a member of the editorial boards of the journals "Automation and Remote Control", "Control Sciences" and "Artificial Intelligence and Decision Making".

Complex networks and cognitive science

Complex networks are networks with a large number of nodes and links between them. Examples include the Internet, social networks, airline networks and the neural networks of the brain. The talk presents the basic notions of complex network theory and describes two important classes of complex networks: scale-free networks and small-world networks. Processes of activity spreading in networks are considered, and the important role of complex network theory in the study of the brain's neural networks is noted.
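The scale-free class mentioned in the abstract is classically generated by the Barabasi-Albert preferential-attachment rule. The pure-Python sketch below (a simplified illustration: target nodes are drawn with replacement and duplicate draws are collapsed; the sizes n = 2000 and m = 2 are arbitrary) grows such a graph and exposes its hub-dominated degree distribution.

```python
import random
from collections import Counter

def barabasi_albert(n, m, seed=0):
    """Grow a scale-free graph: each new node attaches up to m edges to
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    targets = list(range(m))  # start by wiring node m to nodes 0..m-1
    repeated = []             # each node appears here once per unit of degree
    edges = []
    for new in range(m, n):
        for t in set(targets):       # collapse duplicate draws
            edges.append((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        # preferential attachment: sampling the degree-weighted list makes
        # high-degree nodes proportionally more likely to gain new links
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

edges = barabasi_albert(2000, 2)
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
avg_deg = sum(deg.values()) / len(deg)
max_deg = max(deg.values())
print(f"average degree ~ {avg_deg:.1f}, maximum degree {max_deg}")
```

The maximum degree is many times the average: a few hubs dominate, which is exactly the heavy-tailed signature that distinguishes scale-free networks from random graphs and that reappears in brain connectivity studies.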

Presentation (.pdf)

Dmitry A. Sakharov

Head of a laboratory at the Institute of Developmental Biology, Russian Academy of Sciences; born November 1, 1930; graduated from Moscow State University in 1953; Doctor of Biological Sciences; member of the Russian Academy of Natural Sciences. Key research areas: basic mechanisms of behavior, neurobiology of invertebrates, neurons, neurotransmitters. Board member of the International Society for Invertebrate Neurobiology; Foreign Member of the Society for Neuroscience (USA-Canada); Honorary Member of the Hungarian Physiological Society; member of the International Brain Research Organization; member of the editorial boards of "Cellular and Molecular Neurobiology" and "Neurobiology", and of the editorial board of the "Journal of Higher Nervous Activity"; winner of the Orbeli Prize.

The other nervous system

The biology of the brain is undergoing a paradigm shift. The electrical reflex doctrine, established in the 19th century and dominant to this day, hopelessly contradicts empirical knowledge of how natural nervous systems are built and how they function. An alternative to technogenic metaphors of the brain (the brain as a telephone exchange, a holographic device, a computer, etc.) is offered by T. Dobzhansky's universal idea: "Nothing in biology makes sense except in the light of evolution." Applied to the brain, this idea can be realized within the view that nervous functions rest on heterochemical mechanisms inherited from pre-nervous regulatory systems (Kh. S. Koshtoyants and his school). Current results of the neurosciences will be considered in this light.

Presentation (.ppt)

Russian Academy of Sciences, Ministry of Education and Science of the Russian Federation, State Corporation "Rosatom", NRNU MEPhI, SRISA RAS, MAI, TRINITI