Here's the link to learn more and sign up:
Complexity Group Email List.
--------
6:47
BI 206 Ciara Greene: Memories Are Useful, Not Accurate
Support the show to get full episodes, full archive, and join the Discord community.
Ciara Greene is an Associate Professor in the University College Dublin School of Psychology. In this episode we discuss Ciara's book Memory Lane: The Perfectly Imperfect Ways We Remember, co-authored with her colleague Gillian Murphy. The book is all about how human episodic memory works and why it works the way it does. Contrary to common assumption, a "good memory" isn't necessarily a highly accurate one - we don't store memories like files in a filing cabinet. Instead, our memories evolved to help us function in the world. That means our memories are flexible and constantly changing, and that forgetting, for example, can be beneficial.
Regarding how our memories work, we discuss how memories are reconstructed each time we access them, and the role of schemas in organizing our episodic memories within the context of our previous experiences. Because our memories evolved for function rather than accuracy, there's a wide range of flexibility in how we process and store them: we're all susceptible to misinformation, all our memories are affected by our emotional states, and so on. Ciara's research explores many of the ways our memories are shaped by these conditions, and how we can better understand our own and others' memories.
Attention and Memory Lab
Twitter: @ciaragreene01.
Book: Memory Lane: The Perfectly Imperfect Ways We Remember
Read the transcript.
0:00 - Intro
5:35 - The function of memory
6:41 - Reconstructive nature of memory
13:50 - Memory schemas, highly superior autobiographical memory
20:49 - Misremembering and flashbulb memories
27:52 - Forgetting and schemas
36:06 - What is a "good" memory?
39:35 - Memories and intention
43:47 - Memory and context
49:55 - Implanting false memories
1:04:10 - Memory suggestion during interrogations
1:06:30 - Memory, imagination, and creativity
1:13:45 - Artificial intelligence and memory
1:21:21 - Driven by questions
--------
1:29:10
BI 205 Dmitri Chklovskii: Neurons Are Smarter Than You Think
Support the show to get full episodes, full archive, and join the Discord community.
The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.
Read more about our partnership.
Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released:
To explore more neuroscience news and perspectives, visit thetransmitter.org.
Since the 1940s and 50s, back at the origins of what we now think of as artificial intelligence, there have been many ways of conceiving what it is that brains do, or what the function of the brain is. One of those conceptions, going back to cybernetics, is that the brain is a controller operating under the principles of feedback control. This view has been carried down to us in various forms to the present day. Also since that same period, when McCulloch and Pitts suggested that single neurons are logical devices, there have been many ways of conceiving what it is that single neurons do. Are they logical operators? Does each one represent something special? Are they trying to maximize efficiency?
Dmitri Chklovskii, who goes by Mitya, runs the Neural Circuits and Algorithms lab at the Flatiron Institute. Mitya believes that single neurons are themselves individual controllers: smart agents, each trying to predict its inputs, as in predictive processing, while also functioning as an optimal feedback controller. We talk about historical conceptions of what single neurons do and how Mitya's account differs, how to think about single neurons versus populations of neurons, some of the neuroscience findings that seem to support his account, the control algorithm that simplifies the neuron's otherwise impossible control task, and various other topics.
We also discuss Mitya's early interests, coming from a physics and engineering background, in how to wire up our brains efficiently given the limited amount of space in our craniums - a problem evolution, of course, produced its own solutions to. This pursuit led Mitya to study the C. elegans worm, because its connectome was nearly complete (in fact, Mitya and his team helped complete the connectome so he'd have the whole wiring diagram to study). So we talk about that work, and about what knowing the whole connectome of C. elegans has and has not taught us about how brains work.
Chklovskii Lab.
Twitter: @chklovskii.
Related papers
The Neuron as a Direct Data-Driven Controller.
Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction.
Related episodes
BI 143 Rodolphe Sepulchre: Mixed Feedback Control
BI 119 Henry Yin: The Crisis in Neuroscience
Read the transcript.
0:00 - Intro
7:34 - Physicists' approach to neuroscience
12:39 - What's missing in AI and neuroscience?
16:36 - Connectomes
31:51 - Understanding complex systems
33:17 - Earliest models of neurons
39:08 - Smart neurons
42:56 - Neuron theories that influenced Mitya
46:50 - Neuron as a controller
55:03 - How to test the neuron as controller hypothesis
1:00:29 - Direct data-driven control
1:11:09 - Experimental evidence
1:22:25 - Single neuron doctrine and population doctrine
1:25:30 - Neurons as agents
1:28:52 - Implications for AI
1:30:02 - Limits to control perspective
--------
1:39:05
BI 204 David Robbe: Your Brain Doesn’t Measure Time
Support the show to get full episodes, full archive, and join the Discord community.
The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.
Read more about our partnership.
Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released:
To explore more neuroscience news and perspectives, visit thetransmitter.org.
When you play hide and seek, as I'm sure you do on a regular basis, and you count to ten before shouting, "Ready or not, here I come," how do you keep track of time? Is it a clock in your brain, as many neuroscientists assume and therefore search for in their research? Or is it something else? Maybe the rhythm of your vocalization as you say, "one-one thousand, two-one thousand"? Even if you're counting silently, could it be that you're imagining the movements of speaking aloud and tracking those virtual actions? My guest today, neuroscientist David Robbe, believes we don't rely on clocks in our brains, don't measure time internally, and indeed don't really measure time at all. Rather, our estimation of time emerges through our interactions with the world around us, and/or the world within us, as we behave.
David is group leader of the Cortical-Basal Ganglia Circuits and Behavior Lab at the Institute of Mediterranean Neurobiology. His perspective on how organisms measure time is the result of his own behavioral experiments with rodents and of revisiting one of his favorite philosophers, Henri Bergson. So in this episode we discuss how all of this came about: how neuroscientists have long searched for brain activity that measures or keeps track of time in areas like the basal ganglia (the brain region David focuses on), how the rodents he studies behave in surprising ways when asked to estimate time intervals, and how Bergson introduced the world to the notion of durée, our lived experience and feeling of time.
Cortical-Basal Ganglia Circuits and Behavior Lab.
Twitter: @dav_robbe
Related papers
Lost in time: Relocating the perception of duration outside the brain.
Running, Fast and Slow: The Dorsal Striatum Sets the Cost of Movement During Foraging.
0:00 - Intro
3:59 - Why behavior is so important in itself
10:27 - Henri Bergson
21:17 - Bergson's view of life
26:25 - A task to test how animals time things
34:08 - Back to Bergson and durée
39:44 - Externalizing time
44:11 - Internal representation of time
1:03:38 - Cognition as internal movement
1:09:14 - Free will
1:15:27 - Implications for AI
--------
1:37:37
BI 203 David Krakauer: How To Think Like a Complexity Scientist
Support the show to get full episodes, full archive, and join the Discord community.
The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists.
Read more about our partnership.
Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.
David Krakauer is the president of the Santa Fe Institute, whose official mission is "Searching for Order in the Complexity of Evolving Worlds." When I think of the Santa Fe Institute, I think of complexity science, because that is the common thread across the many subjects people study at SFI, like societies, economies, brains, machines, and evolution. David has been on the podcast before, and I invited him back to discuss some of the topics in his new book The Complex World: An Introduction to the Fundamentals of Complexity Science.
On the one hand, the book serves as an introduction and guide to a four-volume collection of foundational papers in complexity science, which you'll hear David discuss in a moment. On the other hand, The Complex World became much more than that, discussing and connecting ideas across the history of complexity science. Where did complexity science come from? How does it fit among other scientific paradigms? How did the breakthroughs come about? Along the way, we discuss the four pillars of complexity science - entropy, evolution, dynamics, and computation - and how complexity scientists draw on these four areas to study what David calls "problem-solving matter." We discuss emergence, the role of time scales, and plenty more, all with the self-serving goal of learning and practicing how to think like a complexity scientist to improve my own work on how brains do things. Hopefully our conversation, and David's book, help you do the same.
David's website.
David's SFI homepage.
The book: The Complex World: An Introduction to the Fundamentals of Complexity Science.
The 4-Volume Series: Foundational Papers in Complexity Science.
Mentioned:
Aeon article: Problem-solving matter.
The information theory of individuality.
Read the transcript.
0:00 - Intro
3:45 - Origins of The Complex World
20:10 - 4 pillars of complexity
36:27 - 40s to 70s in complexity
42:33 - How to proceed as a complexity scientist
54:32 - Broken symmetries
1:02:40 - Emergence
1:13:25 - Time scales and complexity
1:18:48 - Consensus and how ideas migrate
1:29:25 - Disciplinary matrix (Kuhn)
1:32:45 - Intelligence vs. life
Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.