
Projekt Neuronale Informationsverarbeitung

In this course, participants work on a time-limited project drawn from current research of the Neural Information Processing group. Topics cover not only artificial neural networks but also the modelling and analysis of biological neural networks, as well as applications of machine learning and artificial intelligence. The course is designed as a supervised introduction to independent scientific work. During the project, participants will read original literature, first under guidance and later independently; write a short project summary and present it to an expert audience; engage with complex, real-world research problems; defend their results in a scientific poster presentation; and document them in a short article according to scientific standards. The course therefore also includes a seminar component.

Formal details:

Additional information can be found in the ISIS course; details on the schedule and on registration will be announced there. Please direct further questions to Christoph Metzner.

Projects offered in SS 24:

  • Real-Time Control of Neuronal Population Activity with Reinforcement Learning
  • The Jansen-Rit model of neural dynamics: implementation and exploration in neurolib
  • Intrinsic Dimension Estimation for the Union of Data Manifolds
  • Dependence of Representational Similarity Analysis (RSA) on the Stimulus Set

Course details (SS 24)

Scope: 6 SWS (project)
Time: Wed 10:15 - 11:45
Lecturer: Prof. Dr. Klaus Obermayer & staff
Room: MAR 5.060
Language: German & English

Projects from previous semesters

Random cross-embedding on neural spike trains

Supervisor: Veronika Koren

We work with spiking activity from small neural networks recorded in the monkey brain. The network is spatially organized in three layers: deep, middle, and superficial. We are interested in how the three layers interact with each other. A straightforward approach is to compute the correlation between neural signals across pairs of layers. However, correlation is a bidirectional measure that does not tell us in which direction the influence flows. We therefore use a nonlinear cross-mapping method that allows us to determine the directionality of the interaction between two layers. The method uses a delay embedding in a high-dimensional space and the computation of nearest neighbors in that space. Say we have the signal from layer 1, which we call X(t), and the signal from layer 2, which we call Y(t). We compute an embedding for each of the signals, Mx and My. We then use the information from Mx to reconstruct My and, vice versa, the information from My to reconstruct Mx; a minimal sketch of this idea is given after the references below. The method is not trivial, but not too hard to implement either. I have already implemented it and can help if you get stuck. The method is generic and can be used on many types of data; for an application to face recognition, see:

Roweis, Sam T., and Lawrence K. Saul. "Nonlinear dimensionality reduction by locally linear embedding." Science 290.5500 (2000): 2323-2326.

You can also check:

Saul, Lawrence K., and Sam T. Roweis. "Think globally, fit locally: unsupervised learning of low dimensional manifolds." Journal of Machine Learning Research 4 (2003): 119-155.
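To make the cross-mapping step concrete, here is a minimal, self-contained Python sketch. It is an illustrative reimplementation under simple assumptions (a delay embedding with hypothetical parameters embed_dim and tau, exponentially weighted nearest neighbours, and a toy signal), not the supervisor's reference code.

```python
# Illustrative cross-mapping sketch; embed_dim, tau and the toy data are
# hypothetical choices, not values from the actual project.
import numpy as np
from scipy.spatial import cKDTree

def delay_embed(x, embed_dim=3, tau=2):
    """Delay-coordinate embedding: rows are (x[t], x[t+tau], ..., x[t+(d-1)tau])."""
    n = len(x) - (embed_dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(embed_dim)])

def cross_map_skill(x, y, embed_dim=3, tau=2):
    """Reconstruct y from nearest neighbours in the embedding Mx of x and
    return the correlation between y and its reconstruction."""
    Mx = delay_embed(x, embed_dim, tau)
    y_aligned = y[(embed_dim - 1) * tau:]        # align y with embedding rows
    dist, idx = cKDTree(Mx).query(Mx, k=embed_dim + 2)
    dist, idx = dist[:, 1:], idx[:, 1:]          # drop each point's self-match
    w = np.exp(-dist / (dist[:, [0]] + 1e-12))   # exponential distance weights
    w /= w.sum(axis=1, keepdims=True)
    y_hat = (w * y_aligned[idx]).sum(axis=1)     # weighted-neighbour estimate
    return np.corrcoef(y_aligned, y_hat)[0, 1]

# Toy usage: two coupled noisy signals standing in for two layers
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.standard_normal(2000)
y = np.roll(x, 5) + 0.1 * rng.standard_normal(2000)
print(cross_map_skill(x, y), cross_map_skill(y, x))
```

Comparing the reconstruction skill in both directions, Mx to Y versus My to X, is what indicates the dominant direction of interaction.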

Computational modelling of separable resonant circuits controlled by different interneuron types

Supervisor: Christoph Metzner

Cortical interneurons show a remarkable morphological, anatomical, and electrophysiological diversity; however, we are only beginning to understand how this diversity translates into functional differences. While it is well established that fast-spiking, parvalbumin-positive interneurons are crucially involved in the generation of fast cortical oscillations, the circuitry controlling slower cortical oscillations remains elusive. Through computational modeling, Vierling-Claassen et al. (Vierling-Claassen et al.: Computational modeling of distinct neocortical oscillations driven by cell-type selective optogenetic drive: separable resonant circuits controlled by low-threshold spiking and fast-spiking interneurons. Frontiers in Human Neuroscience 2010, 4, 198) hypothesized that low-threshold-spiking, somatostatin-positive interneurons control low-frequency oscillatory activity in cortical circuits. In this project, we will replicate the computational model of the above-mentioned study using a novel simulation tool (NetPyNE) that allows for automatic parallelization of simulations and therefore for a significant speed-up. Furthermore, we will replace one of the single-cell models with a newly developed model that allows for the integration of data on genetic variants found in schizophrenic patients. This will in turn enable the exploration of genetic mechanisms underlying oscillatory deficits in patients with schizophrenia.
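To give a feel for the tooling, here is a minimal NetPyNE sketch of a generic two-population circuit. All cell, synapse, and connectivity parameters are placeholder assumptions, not those of the Vierling-Claassen et al. model, and running it requires the NEURON simulator.

```python
# Minimal NetPyNE sketch of a small excitatory/inhibitory circuit.
# All parameters below are illustrative placeholders, not the model
# studied in this project; NetPyNE requires NEURON to run.
from netpyne import specs, sim

netParams = specs.NetParams()

# Two populations: pyramidal cells and low-threshold-spiking interneurons
netParams.popParams['PYR'] = {'cellType': 'PYR', 'numCells': 40}
netParams.popParams['LTS'] = {'cellType': 'LTS', 'numCells': 10}

# Single-compartment Hodgkin-Huxley cells as stand-ins for detailed models
for cellType in ['PYR', 'LTS']:
    netParams.cellParams[cellType + 'rule'] = {
        'conds': {'cellType': cellType},
        'secs': {'soma': {'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
                          'mechs': {'hh': {}}}}}

# Excitatory and inhibitory synapse models
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.1, 'tau2': 3.0, 'e': 0}
netParams.synMechParams['inh'] = {'mod': 'Exp2Syn', 'tau1': 0.5, 'tau2': 20.0, 'e': -80}

# Recurrent connectivity: PYR excites LTS, LTS inhibits PYR
netParams.connParams['PYR->LTS'] = {
    'preConds': {'pop': 'PYR'}, 'postConds': {'pop': 'LTS'},
    'probability': 0.3, 'weight': 0.005, 'delay': 2, 'synMech': 'exc'}
netParams.connParams['LTS->PYR'] = {
    'preConds': {'pop': 'LTS'}, 'postConds': {'pop': 'PYR'},
    'probability': 0.3, 'weight': 0.005, 'delay': 2, 'synMech': 'inh'}

simConfig = specs.SimConfig()
simConfig.duration = 1000          # ms
simConfig.dt = 0.025               # ms
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}
simConfig.analysis = {'plotRaster': True}

# NetPyNE handles network instantiation and parallel simulation internally
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```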

Solving games with RL (Montezuma’s Revenge, Hanabi)

Supervisor: Vaios Laschos

Reinforcement learning is an area of machine learning concerned with how artificial agents should act in an environment so as to maximize some notion of cumulative reward. Reinforcement learning lies at the intersection of many disciplines (game theory, control theory, operations research, information theory, simulation-based optimization, etc.) and has many real-life applications (economics, robotics, etc.). Video games and board games provide useful testbeds for creating and improving reinforcement learning algorithms: they are complex enough to help us understand basic principles, but not so complex that analysis becomes practically impossible, as it is for most real-life problems. The goal of this project is to build artificial agents that play games in which either the reward is sparse or the state of the environment is not fully accessible to the agent.
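As a toy illustration of the sparse-reward setting, here is a tabular Q-learning loop on gymnasium's FrozenLake environment, where reward is given only upon reaching the goal. The hyperparameters are arbitrary assumptions; the games targeted in this project call for far more powerful methods.

```python
# Illustrative tabular Q-learning on a sparse-reward toy task (FrozenLake:
# reward only at the goal). Hyperparameters are arbitrary assumptions.
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=False)
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Q-learning update toward the bootstrapped target
        target = reward + gamma * np.max(Q[next_state]) * (not terminated)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

print("Greedy value estimate at the start state:", Q[0].max())
```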

Beyond Relative Entropy: Training GANs with alternative error functions

Supervisor: Vaios Laschos

In the framework of Generative Adversarial Networks (GANs), pairs of interacting networks are studied: a "generative" network is trained to produce new samples from the distribution of the training data, while a "discriminative" network is trained to decide whether or not a sample has been drawn from the training-data distribution. At the end of the training phase, the discriminator can no longer distinguish between training data and newly generated samples. This procedure yields a network that generates new samples of very complex objects, such as natural images, from unstructured input, including images with added noise. In the traditional approach, the relative entropy is used to quantify the distance between the learned distribution and the data distribution during the learning phase. In a more recent approach, the standard error function is replaced by the Wasserstein-1 distance, with superior results. The main goal of this project is to replicate the classical results by training GANs both with the relative entropy as the benchmark error function and with alternative distances, and then to compare and analyse the training results.
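To make the comparison concrete, here is a sketch of the two discriminator objectives involved: the standard binary cross-entropy GAN loss and the Wasserstein-1 critic loss. The networks and data are placeholders, and the Lipschitz constraint required for Wasserstein training (weight clipping or a gradient penalty) is omitted for brevity.

```python
# Sketch of the two discriminator objectives compared in this project.
# Inputs are placeholder critic scores; real training also needs a
# generator, a data loader, and (for WGAN) a Lipschitz constraint.
import torch
import torch.nn.functional as F

def standard_gan_d_loss(d_real, d_fake):
    """Binary cross-entropy discriminator loss of the original GAN."""
    return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
            F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

def wasserstein_critic_loss(d_real, d_fake):
    """Critic loss whose negative estimates the Wasserstein-1 distance."""
    return d_fake.mean() - d_real.mean()

# Toy usage with random placeholder critic outputs
d_real = torch.randn(64, 1)   # critic scores on training data
d_fake = torch.randn(64, 1)   # critic scores on generated samples
print(standard_gan_d_loss(d_real, d_fake).item(),
      wasserstein_critic_loss(d_real, d_fake).item())
```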

Dimensionality reduction of brain microcircuits

Supervisor: Veronika Koren

In this project, we study the behavior of small networks recorded in the monkey visual areas V1 and V4. The dimensionality of the high-dimensional data is reduced with Principal Component Analysis (PCA), and the aim is to determine whether different experimental conditions are linearly separable in the PCA space. We search for separability of two types of stimuli, as well as of two types of behavioral choices. Required background: the MI course or another machine learning course.
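A minimal sketch of such an analysis pipeline, using random placeholder data in place of the actual recordings, might look as follows.

```python
# Hedged sketch of the pipeline: reduce trial-by-neuron data with PCA, then
# test linear separability of two conditions with a linear classifier.
# Data shapes and the number of components are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))   # 200 trials x 50 neurons (placeholder)
y = rng.integers(0, 2, 200)          # two stimulus types or two choices

X_pca = PCA(n_components=5).fit_transform(X)   # project onto top 5 PCs
acc = cross_val_score(LinearSVC(), X_pca, y, cv=5).mean()
print(f"Cross-validated linear decoding accuracy in PCA space: {acc:.2f}")
```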
