BI 205 Dmitri Chklovskii: Neurons Are Smarter Than You Think
Support the show to get full episodes, full archive, and join the Discord community.
The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.
Read more about our partnership.
Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released:
To explore more neuroscience news and perspectives, visit thetransmitter.org.
Since the 1940s and 50s, back at the origins of what we now think of as artificial intelligence, there have been lots of ways of conceiving what it is that brains do, or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller that operates under the principles of feedback control. This view has been carried down to us, in various forms, to the present day. Also since that same time period, when McCulloch and Pitts suggested that single neurons are logical devices, there have been lots of ways of conceiving what it is that single neurons do. Are they logical operators? Does each represent something special? Are they trying to maximize efficiency?
Dmitri Chklovskii, who goes by Mitya, runs the Neural Circuits and Algorithms lab at the Flatiron Institute. Mitya believes that single neurons are each individual controllers: smart agents, each trying to predict its inputs, as in predictive processing, but also functioning as an optimal feedback controller. We talk about historical conceptions of the function of single neurons and how Mitya's view differs from them, how to think of single neurons versus populations of neurons, some of the neuroscience findings that seem to support Mitya's account, the control algorithm that simplifies the neuron's otherwise impossible control task, and various other topics.
We also discuss Mitya's early interest, coming from a physics and engineering background, in how to wire up our brains efficiently, given the limited space in our craniums. Evolution, of course, produced its own solutions to this problem. This pursuit led Mitya to study the C. elegans worm, because its connectome was nearly complete; in fact, Mitya and his team helped complete the connectome so he'd have the whole wiring diagram to study. So we talk about that work, and what knowing the whole connectome of C. elegans has and has not taught us about how brains work.
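For listeners unfamiliar with the feedback-control framing that runs through this conversation, here is a minimal, generic sketch of a proportional feedback controller. This is a textbook illustration only, not Mitya's neuron-as-controller algorithm; the function name, gain, and values are all hypothetical.

```python
# Generic illustration of feedback control, included only to unpack the
# phrase "operates under the principles of feedback control".
# This is NOT Chklovskii's algorithm; all names and numbers are hypothetical.

def run_proportional_controller(target, x0, gain, steps):
    """Drive a simple 1-D state toward `target` using error feedback."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        error = target - x   # compare desired state with actual state
        x += gain * error    # act in proportion to the error
        trajectory.append(x)
    return trajectory

traj = run_proportional_controller(target=1.0, x0=0.0, gain=0.3, steps=20)
print(traj[-1])  # approaches 1.0
```

The point of the sketch is that the controller needs no model of why the state is off target; it simply acts on the error signal at every step, which is the core of the cybernetic view mentioned above.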
Chklovskii Lab.
Twitter: @chklovskii.
Related papers
The Neuron as a Direct Data-Driven Controller.
Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction.
Related episodes
BI 143 Rodolphe Sepulchre: Mixed Feedback Control
BI 119 Henry Yin: The Crisis in Neuroscience
0:00 - Intro
7:34 - Physicists' approach to neuroscience
12:39 - What's missing in AI and neuroscience?
16:36 - Connectomes
31:51 - Understanding complex systems
33:17 - Earliest models of neurons
39:08 - Smart neurons
42:56 - Neuron theories that influenced Mitya
46:50 - Neuron as a controller
55:03 - How to test the neuron as controller hypothesis
1:00:29 - Direct data-driven control
1:11:09 - Experimental evidence
1:22:25 - Single neuron doctrine and population doctrine
1:25:30 - Neurons as agents
1:28:52 - Implications for AI
1:30:02 - Limits to control perspective
--------
1:39:05
BI 204 David Robbe: Your Brain Doesn’t Measure Time
When you play hide and seek, as I'm sure you do on a regular basis, and you count to ten before shouting, "Ready or not, here I come," how do you keep track of time? Is it a clock in your brain, as many neuroscientists assume and therefore search for in their research? Or is it something else? Maybe the rhythm of your vocalization as you say, "one-one thousand, two-one thousand"? Even if you're counting silently, could it be that you're imagining the movements of speaking aloud and tracking those virtual actions? My guest today, neuroscientist David Robbe, believes we don't rely on clocks in our brains or measure time internally; indeed, he questions whether we measure time at all. Rather, our estimation of time emerges through our interactions with the world around us, and/or the world within us, as we behave.
David is group leader of the Cortical-Basal Ganglia Circuits and Behavior Lab at the Institute of Mediterranean Neurobiology. His perspective on how organisms measure time is the result of his own behavioral experiments with rodents, and of revisiting one of his favorite philosophers, Henri Bergson. So in this episode, we discuss how all of this came about: how neuroscientists have long searched for brain activity that measures or keeps track of time in areas like the basal ganglia, the brain region David focuses on; how the rodents he studies behave in surprising ways when he asks them to estimate time intervals; and how Bergson introduced the world to the notion of durée, our lived experience and feeling of time.
Cortical-Basal Ganglia Circuits and Behavior Lab.
Twitter: @dav_robbe
Related papers
Lost in time: Relocating the perception of duration outside the brain.
Running, Fast and Slow: The Dorsal Striatum Sets the Cost of Movement During Foraging.
0:00 - Intro
3:59 - Why behavior is so important in itself
10:27 - Henri Bergson
21:17 - Bergson's view of life
26:25 - A task to test how animals time things
34:08 - Back to Bergson and durée
39:44 - Externalizing time
44:11 - Internal representation of time
1:03:38 - Cognition as internal movement
1:09:14 - Free will
1:15:27 - Implications for AI
--------
1:37:37
BI 203 David Krakauer: How To Think Like a Complexity Scientist
David Krakauer is the president of the Santa Fe Institute, whose mission is officially "Searching for Order in the Complexity of Evolving Worlds." When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects people study at SFI: societies, economies, brains, machines, and evolution. David has been on before, and I invited him back to discuss some of the topics in his new book The Complex World: An Introduction to the Fundamentals of Complexity Science.
On the one hand, the book serves as an introduction and guide to a 4-volume collection of foundational papers in complexity science, which you'll hear David discuss in a moment. On the other hand, The Complex World became much more, discussing and connecting ideas across the history of complexity science. Where did complexity science come from? How does it fit among other scientific paradigms? How did the breakthroughs come about? Along the way, we discuss the four pillars of complexity science (entropy, evolution, dynamics, and computation) and how complexity scientists draw from these four areas to study what David calls "problem-solving matter." We discuss emergence, the role of time scales, and plenty more, all with my own self-serving goal: to learn and practice how to think like a complexity scientist to improve my own work on how brains do things. Hopefully our conversation, and David's book, help you do the same.
David's website.
David's SFI homepage.
The book: The Complex World: An Introduction to the Fundamentals of Complexity Science.
The 4-Volume Series: Foundational Papers in Complexity Science.
Mentioned:
Aeon article: Problem-solving matter.
The information theory of individuality.
Read the transcript.
0:00 - Intro
3:45 - Origins of The Complex World
20:10 - 4 pillars of complexity
36:27 - 40s to 70s in complexity
42:33 - How to proceed as a complexity scientist
54:32 - Broken symmetries
1:02:40 - Emergence
1:13:25 - Time scales and complexity
1:18:48 - Consensus and how ideas migrate
1:29:25 - Disciplinary matrix (Kuhn)
1:32:45 - Intelligence vs. life
--------
1:46:03
BI 202 Eli Sennesh: Divide-and-Conquer to Predict
Eli Sennesh is a postdoc at Vanderbilt University, one of my old stomping grounds, currently in the lab of Andre Bastos. Andre's lab focuses on understanding brain dynamics within cortical circuits, particularly how communication between brain areas is coordinated in perception, cognition, and behavior. So Eli is busy doing work along those lines, as you'll hear more about. But the original impetus for having him on was his recently published proposal for how predictive coding might be implemented in brains. In that sense, this episode builds on the last episode with Rajesh Rao, where we discussed Raj's "active predictive coding" account of predictive coding. As a super brief refresher, predictive coding is the proposal that the brain is constantly predicting what's about to happen; then stuff happens, and the brain uses the mismatch between its predictions and what actually happened to learn how to make better predictions moving forward. I refer you to the previous episode for more details. Eli's account (developed with his co-authors, of course), which he calls "divide-and-conquer" predictive coding, uses a probabilistic approach to explain how brains might implement predictive coding, and you'll learn more about that in our discussion. But we also talk quite a bit about the difference between practicing theoretical and experimental neuroscience, and Eli's experience moving from the theoretical side to the experimental side.
Eli's website.
Bastos lab.
Twitter: @EliSennesh
Related papers
Divide-and-Conquer Predictive Coding: a Structured Bayesian Inference Algorithm.
Related episode:
BI 201 Rajesh Rao: Active Predictive Coding.
Read the transcript.
0:00 - Intro
3:59 - Eli's worldview
17:56 - NeuroAI is hard
24:38 - Prediction errors vs surprise
55:16 - Divide and conquer
1:13:24 - Challenges
1:18:44 - How to build AI
1:25:56 - Affect
1:31:55 - Abolish the value function
--------
1:38:11
BI 201 Rajesh Rao: From Predictive Coding to Brain Co-Processors
Today I'm in conversation with Rajesh Rao, a distinguished professor of computer science and engineering at the University of Washington, where he also co-directs the Center for Neurotechnology. Back in 1999, Raj and Dana Ballard published what became quite a famous paper, which proposed how predictive coding might be implemented in brains. What is predictive coding, you may be wondering? It's roughly the idea that your brain is constantly predicting incoming sensory signals, and it generates that prediction as a top-down signal that meets the bottom-up sensory signals. Then the brain computes a difference between the prediction and the actual sensory input, and that difference is sent back up to the "top" where the brain then updates its internal model to make better future predictions.
So that was 25 years ago, and it was focused on how the brain handles sensory information. But Raj just recently published an update to the predictive coding framework, one that incorporates actions and perception, suggests how it might be implemented in the cortex - specifically which cortical layers do what - something he calls "Active predictive coding." So we discuss that new proposal, we also talk about his engineering work on brain-computer interface technologies, like BrainNet, which basically connects two brains together, and like neural co-processors, which use an artificial neural network as a prosthetic that can do things like enhance memories, optimize learning, and help restore brain function after strokes, for example. Finally, we discuss Raj's interest and work on deciphering an ancient Indian text, the mysterious Indus script.
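The predictive coding loop described above (top-down prediction, bottom-up error, model update) can be sketched in a few lines of code. This is a toy illustration only, not the Rao and Ballard 1999 model itself; the dimensions, learning rates, and variable names are all hypothetical.

```python
import numpy as np

# Toy sketch of a predictive coding loop: a top-down prediction meets a
# sensory input, and the mismatch (the prediction error) is used to update
# the internal state and, more slowly, the generative model itself.
# NOT the Rao & Ballard 1999 model; all names and values are hypothetical.

rng = np.random.default_rng(0)
n_sensory, n_latent = 8, 3
W = rng.normal(scale=0.5, size=(n_sensory, n_latent))  # generative (top-down) weights
r = np.zeros(n_latent)                                 # internal state
x = rng.normal(size=n_sensory)                         # incoming sensory signal

lr_state, lr_weights = 0.1, 0.01
for _ in range(200):
    prediction = W @ r                    # top-down prediction of the input
    error = x - prediction                # mismatch sent back "up"
    r += lr_state * (W.T @ error)         # update internal state to reduce error
    W += lr_weights * np.outer(error, r)  # slowly improve the generative model

residual = float(np.linalg.norm(x - W @ r))
print(residual)  # smaller than the initial error, np.linalg.norm(x)
```

Each pass shrinks the prediction error: the fast updates to `r` are the moment-to-moment inference, while the slow updates to `W` play the role of learning a better internal model for future predictions.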
Raj's website.
Twitter: @RajeshPNRao.
Related papers
A sensory–motor theory of the neocortex.
Brain co-processors: using AI to restore and augment brain function.
Towards neural co-processors for the brain: combining decoding and encoding in brain–computer interfaces.
BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains.
Read the transcript.
0:00 - Intro
7:40 - Predictive coding origins
16:14 - Early appreciation of recurrence
17:08 - Prediction as a general theory of the brain
18:38 - Rao and Ballard 1999
26:32 - Prediction as a general theory of the brain
33:24 - Perception vs action
33:28 - Active predictive coding
45:04 - Evolving to augment our brains
53:03 - BrainNet
57:12 - Neural co-processors
1:11:19 - Decoding the Indus Script
1:20:18 - Transformer models relation to active predictive coding
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.