
Rhys Lindmark: Hello, listeners. Today, I'm excited to chat with Grace Lindsay. Grace is a computational neuroscientist who recently wrote the book, Models of the Mind: How Physics, Engineering, and Mathematics Have Shaped Our Understanding of the Brain. Grace, thanks for being on the show, and welcome.

Grace Lindsay: Thank you for having me.

Rhys: Yeah, excited to dive in. This book, it was a great book. I'm a noob neuroscience person trying to understand how brains work, but I'm also a computer scientist, and this book was a great way to see, oh OK, here's how some of these cool computer science and mathematical ideas have shaped our understanding of the brain. Before we go into the brain stuff, I just want to understand: why are you, Grace, personally, into this brain stuff, and what's the through-line that ties your interest in brains together? Brains and mathematics and that kind of thing?

Grace: Mhm. Yeah, so you said you're on the computer science side and this is like a gateway into neuroscience. I came the other way. By the time I left high school, I knew that I wanted to study neuroscience, and I was also pretty sure I wanted to get a Ph.D. in neuroscience. I can't really explain why I'm interested in humans and human minds, because I just feel like, isn't everybody? It's so obvious that we want to understand ourselves. But when I was learning about psychology in high school, I kind of felt that it wasn't answering the question of why or how people were the way they were. It was just documenting things that people do. And then I learned that neuroscience is the study of the brain and how the brain produces the behaviors that psychology studies, for the most part.

And so that's when I switched into thinking I want to do neuroscience and then when I was studying neuroscience as an undergraduate at the University of Pittsburgh, I learned about computational neuroscience and that was kind of the same process again because neuroscience, the experimental side, felt like it was documenting a lot of facts about, you know, what neurons do and how they fire and how they relate to each other and all of that.

But it didn't feel like it was answering my "how" questions: how does all of that lead to interesting computations and behavior? Building mathematical models where you piece together all the experimental findings felt like it was finally getting more at the how and why we are the way that we are, which initially interested me back in high school. So I think that's what it was. I also had a parallel interest in astronomy in high school, so I guess the more physicsy side of things is something I never really shy away from.

Rhys: That's cool. That's an interesting thing where, to some extent, it's what the physicists or mathematics folks would hang their hat on: there are all these more gooshy kinds of things, but then you can start to explain them more and more in terms of more primitive, or more foundational, concepts. Yeah, it's an interesting thing where you're searching for just how it works instead of what it's actually doing.

That makes me think, actually, as we dive into some of the book: there have been these different metaphors for how the brain works over time. Back in the day it was, oh, the brain is a Swiss army knife, or the brain is a machine like a factory, or the brain is these organs.

Now, we kind of are starting to think of the brain as a computer. Do you think that metaphor is just like the metaphor of our times or is there something deeply true about computational processes, both with computers and with our brain that is actually fundamental?

Grace: Yeah, so this is a real hot topic on Neuroscience Twitter, and in some articles that have been written for popular science outlets. This idea that the brain is, or is not, a computer has been argued in different ways. So when I say or think the brain is a computer, I mean it in a literal way, as in it is a thing that processes information. It computes, to some extent, and on top of that, we kind of think of its purpose as being to process information.

So that's why I would describe the brain as a computer in a literal way. Now, saying the brain is a laptop, or is a cell phone, or is a particular type of man-made computer that we use in our everyday lives, that's a metaphor, an analogy, and it has its limits, as all metaphors do. So you can still get some way in understanding the brain by comparing it to the computers that we use, and that can be interesting for explaining how the brain works to someone, or for writing it up in a more poetic way. But I think that the literal version is almost undeniable.

Some people do deny it, so again it just depends on how you're defining all of these words. But I think there is that trend of always comparing the brain to the latest technology, and when people are comparing the brain to computers that humans have made, they're doing that, and it has its place, but there's also just the literal truth to it.

Rhys: Yeah, I like that. I think there's some, yes, brains are computational and computers are also computational, they're both these information-processing things, versus, no, my brain is not my iPhone. That's a little bit different, with the apps and that kind of thing.

Grace: Even the word "computer" originally described humans, who were using their brains to compute things, so it doesn't have to mean the stuff that Intel or Apple makes.

Rhys: So, diving into the book, and just for our listeners: you've talked about your background, and for me, trying to understand how the brain works, the book provides this overarching narrative of how neuroscience has learned from other disciplines and applied that to understand the brain better. Maybe, at a very, very high level, and this is kind of a tough question, could you try to explain how the mind works, or your model of the mind, in these mechanistic, computational neuroscience terms?

Grace: Yeah, so as I said, I'm interested in building models that can synthesize a bunch of experimental data and then provide a mechanism. So a lot of the mathematical models that I talk about are in that category, where we understand that a neuron works the same way as an electrical circuit, and so you can take the equations from electrical engineering and use them to describe how a neuron takes an input and produces its output. That's a direct mechanistic explanation of what an individual neuron is doing.

Then you have other models that try to explain how populations of neurons interact. Those pull from, for example, physics, which modeled how particles in a gas or a fluid interact, and use those equations to model the interactions between neuron populations. Then you can get a little more high-level, or metaphorical, in how you're looking at the brain: you can look at things like the structure of the brain using graph theory and network science and try to understand the relationship between structure and function in the brain.

But that's at a higher level; you're not really thinking about how neurons work there. So there isn't a single model that I'm proposing or advocating for how we understand the brain, because the brain is made up of so many different parts and can be dissected and studied in so many different ways. Every neuroscientist has their own reason why they're studying neuroscience and their own little corner of it that they're interested in, and the influence of mathematical modeling from physics and computer science and other fields can be seen at almost all of those levels and areas.

So there are just a lot of different ways that you can pull from other subjects to try to study something about the brain. I really wanted to showcase that full range, because it's true, it's what the field is right now, and because I think it's interesting to see how interdisciplinary the study of the brain is.

Rhys: Yeah, I like that. It really does pull from all these different fields, and it was a cool thing, just reading the book, how once we understood how electrical circuits worked, we could start to think about our brain in terms of those circuits. I definitely get the electrical circuit side. I get the computer science graph theory side, and one of the cool ones that you shared in your book was the small-world idea: our brain roughly has a network graph that is a small-world graph, a.k.a. it has a bunch of hub-and-spoke structures where anything is only a couple of jumps away from anything else, and we see that in real life in a lot of different ways, and we see it in our brain.
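As an aside, the small-world effect Rhys mentions is easy to see in a toy graph: a ring lattice has long average path lengths, but adding just a few random long-range edges collapses them. This is only an illustrative sketch with arbitrary sizes and shortcut counts, not a model of any real brain network:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to k/2 neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS."""
    n = len(adj)
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += n - 1
    return total / pairs

random.seed(0)
n, k = 100, 4
lattice = ring_lattice(n, k)
L_lattice = avg_path_length(lattice)

# Add a handful of random long-range "shortcut" edges
small_world = ring_lattice(n, k)
for _ in range(10):
    a, b = random.sample(range(n), 2)
    small_world[a].add(b)
    small_world[b].add(a)
L_sw = avg_path_length(small_world)

print(L_lattice, L_sw)  # a few shortcuts sharply reduce the average path length
```

The clustering of the original lattice is largely preserved while paths shorten, which is the signature Watts and Strogatz described for small-world networks.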

So it's kind of cool to see how networks in the real world are similar to the networks in the brain. But one that I didn't understand, that you just touched on, was how the physics of molecules interacting, or liquid or gas dynamics, applies to groups of neurons in the brain. Could you double-click on that for a second?

Grace: Yeah, so that comes up in a few different ways. One way is when we think about memory in the brain. Basically, you can think of a memory as a certain activity state of your neurons. You have a bunch of neurons, and broadly speaking they can be on or off, and so a certain pattern of on and off across all of these neurons represents one memory. This work relating the interaction of particles to the interaction of neurons is known as the Hopfield network. It was done by John Hopfield, a physicist who just said, hey, I'm going to go study the brain and try to apply my tools to it.

He really just directly imported the math of what's known as the Ising model, which studies how particles interact. For example, in a block of iron, the magnetic dipoles of the different atoms pull on each other, so there are interactions between the different atoms. And neurons connect to each other, so there are interactions between neurons: the more active one neuron is, the more active it will make another neuron. So you can port that mathematical model from atoms in a block of iron to neurons in a population, and show that you can put in a little piece of a memory, a little bit of the activity state that the full population is supposed to have, and through the interactions, you can reinstate the whole memory.

That works if the network is set up so that the memory is an attractor state, and that's a mathematical concept used in a lot of different fields, the idea of an attractor. So that's, as I said, taking very directly the math and the concepts and the terminology from physics and mathematics and saying, I think neurons are similar enough that we can go with this. And it's actually been very influential: experimental neuroscientists now look for attractors in the brain. You'll see papers that are about that, and that's directly from this influence from physics.
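A minimal version of that pattern-completion idea can be sketched in a few lines. This is a toy Hopfield-style network with a single stored pattern and made-up sizes, just to show the mechanism: cue the network with a corrupted state and let the interactions pull it back to the stored memory:

```python
import random

def train(patterns, n):
    # Hebbian outer-product rule, as in Hopfield's original setup
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, sweeps=10):
    # Asynchronous updates: each neuron flips toward its weighted input
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        for i in random.sample(range(n), n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

random.seed(1)
n = 20
memory = [random.choice([-1, 1]) for _ in range(n)]  # one stored pattern
W = train([memory], n)

cue = list(memory)
for i in random.sample(range(n), 4):  # corrupt a few bits: a partial cue
    cue[i] = -cue[i]

recovered = recall(W, cue)
print(recovered == memory)  # the dynamics fall back into the attractor
```

With one stored pattern and a mostly-correct cue, the updates are guaranteed to restore the full pattern; with many stored patterns, capacity limits and spurious attractors appear, which is part of what made the model interesting to physicists.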

Rhys: Love it, yeah, cool. I love the attractor framing, where the memory is a set of on and off states, and if you just trigger a couple of them, then the whole pattern will replicate and fire as well. So anything that leads to that attractor basin, that's what a memory is. The thing I didn't understand was the iron piece and how that was a direct correlation, so that's cool.

The other thing that you pulled from, which feels like a new kind of connection here, is probability theory and Bayesian predictions and stuff like that. Could you say a little bit more about how probability theory and Bayesianism are showing up in neuroscience these days?

Grace: Yeah, so Bayes' rule is used a lot in neuroscience. On one side, it's used just in terms of analyzing data: you use probability and statistics, and Bayes' rule comes up. But it's also used a lot in modeling, more at the behavioral level, to try to explain people's behavior. The main features of a Bayesian approach that make it different from the more traditional approach to how we might think about cognition are that, one, it's probabilistic, so you're dealing with the probability of something being true, rather than just a flat true or false value, or the probability of a certain variable taking a certain value.

And then there's also this idea of priors, which says that to come to some conclusion, you're taking in the evidence that you get in the moment, but you're also combining it with past knowledge that you've gained through experience or development or genetics or whatever it is. The way that it comes up a lot in the study of behavior is specifically in perception science: understanding why people come to the conclusions they come to when they take in a visual image, for example. So you can imagine a visual image that is ambiguous.

You can't really tell exactly what's in it, and maybe some people will come to one conclusion and other people will come to a different conclusion, and the reason for that would be that they have different priors. So one person might expect to see a person that they know in a certain room, and even if the lighting is dim, they're going to conclude that the person they know is there, even though they're not actually getting a lot of strong evidence at that time.

So Bayes' rule can explain that well, because it says: when you're in a regime of weak evidence in the moment, the probability of everything is low and spread out, but your prior is still strong, and so you combine those and say, OK, I'm going to rely more on my prior at this moment because my evidence is weak. This has been used to explain a lot of different aspects of perception, and also other elements of cognition at this behavioral level. And then, once people have established that this equation captures behavior in a certain way, they go looking for the neural mechanisms that could be implementing that equation.
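That weak-evidence-meets-strong-prior story is just Bayes' rule applied directly. Here is a sketch of the dim-room example; the hypotheses and all the numbers are invented purely for illustration:

```python
# Hypotheses: who is in the dimly lit room?
prior = {"friend": 0.9, "stranger": 0.1}  # strong expectation from experience

# Dim lighting: the visual evidence only weakly favors "stranger"
likelihood = {"friend": 0.45, "stranger": 0.55}

# Bayes' rule: posterior ∝ prior × likelihood, then normalize
unnorm = {h: prior[h] * likelihood[h] for h in prior}
Z = sum(unnorm.values())
posterior = {h: unnorm[h] / Z for h in unnorm}

print(posterior)  # the posterior still favors "friend":
                  # weak evidence barely moves a strong prior
```

If the lighting were bright (a sharply peaked likelihood), the same computation would let the evidence dominate the prior instead, which is exactly the trade-off Grace describes.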

Rhys: Yeah, I love that. Just hearing you talk about this, I'm hearing this really cool skill that you have, the multilevelness of it: sometimes we're talking about individual neurons, sometimes about groups of neurons, and then sometimes about the behavioral level, and often the probability stuff happens at the behavioral level.

Yeah, and we just had Andy Clark on the podcast talking about the predictive processing stuff, and so it's like, OK, we have all these priors about the world, and then we get this evidence from the sensory data that loops into those priors. It's a good explainer for our biases and how we predict the world: just being constantly aware that we have these priors that are pushing things out there. The one...

Grace: Yeah, yeah, and I think you hear a lot of people who are well versed in this area speak in terms of priors when they explain their own behavior.

Rhys: Priors, priors. Totally, totally.

Grace: It's good, like a meta thing. As a scientist, it's like, well, I know that this paper is claiming this, but my prior says that it's not true.

Rhys: I'm part of the effective altruist and rationalist folks to some extent, so they're always talking about priors. Let me highlight one of the cool things here: hearing you talk about some of this stuff, the network stuff is very complex-systems-y, but the other thing that's connected to complex systems is the information theory side, which I thought was really cool to learn about, how information theory and rate-based coding have informed neuroscience. So could you explain a little bit about information theory and the rate-based coding stuff?

Grace: Yeah, so information theory comes from Shannon, and it's this idea of how you can create a code that will help you efficiently communicate information. Almost as soon as Shannon came up with it, people started applying it to biology, because they were like, yeah, we want to be able to quantify information in some sense. We want to know: what are these systems doing?

There are a lot of complex systems, not just in the brain but in the body, that people want to understand: what are they telling each other? Information seemed like it would be a good thing to study in those systems, and having a formal way to get a handle on that seemed very helpful to people. The way that it comes up in neuroscience, a big focus has been defining what the neural code is. In order to calculate the amount of information that a neuron is sending, you need to know the code that the neuron is using.

That's how you calculate the entropy and the amount of information that's in the code. So people have proposed different things. The rate-based coding that you mentioned says that a symbol in the neural code is the number of spikes, or action potentials, that a neuron fires. In a set amount of time, you count up how many spikes a neuron had, and you say that's the symbol, that's what it's representing in that moment. You can do that for each chunk of time and across different neurons, and that's how you define the code.

Then you can calculate how much information is in the neural code. But that's just a choice that a scientist is making, to some extent, to say that that's what the code should be, and other people have proposed other codes, for example, ones based on spike timing: not just the number of spikes in a certain amount of time, but the time between two spikes, or the time of the first spike relative to some other event.

So people come up with different things that could possibly be the neural code, to try to quantify information differently. It's an issue because the only way to verify who's right, to some extent, is to ask the other area of the brain that's receiving these spikes, the area these neurons connect to. What matters is what those neurons do with the spikes. So we can discuss things in terms of the code and entropy and try to define it in terms of number of spikes or whatever, but what matters is what the neurons later on do, and eventually what your muscles do when they receive input from neurons. So it's very tempting to want to study the brain in this way.

But it does have this problem of maybe making it seem like things are more settled, as to what the neural code is, than they actually are. And it'll vary by brain area; it'll vary by what the brain area is doing, and all of that. So while many people have written papers on what the neural code is and their definition of it, there isn't a single answer yet, though rate-based coding is probably the most commonly assumed.
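To make the rate-based coding idea concrete, here is a rough sketch: bin a spike train into fixed windows, treat each window's spike count as a symbol, and compute the Shannon entropy of the symbol distribution. The spike train and bin width are made up, and real analyses involve far more care (bias correction, conditioning on stimuli, and so on); this only shows the bookkeeping:

```python
import math
import random
from collections import Counter

random.seed(2)

# Toy spike train: 200 spike times drawn uniformly over a 10-second recording
duration = 10.0
spike_times = sorted(random.uniform(0, duration) for _ in range(200))

# Rate code: the "symbol" in each window is the spike count in that window
n_bins = 100
bin_width = duration / n_bins
counts = [0] * n_bins
for t in spike_times:
    counts[min(int(t / bin_width), n_bins - 1)] += 1

# Shannon entropy of the symbol (spike-count) distribution, in bits per window
freq = Counter(counts)
probs = [c / n_bins for c in freq.values()]
entropy = -sum(p * math.log2(p) for p in probs)
print(round(entropy, 2))
```

A spike-timing code would define the symbols differently (inter-spike intervals, or latency to a reference event), and as Grace notes, the entropy you compute depends entirely on which definition of the code you choose.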

Rhys: Got it. OK, cool. As I was reading your book, my takeaway was, oh, rate-based coding is the main one, but as you're saying, there are a lot of different languages, or codes, for how the individual neurons talk to each other. It's kind of an interesting game, like going to a different country and asking, how do you all speak to each other? Or listening to birds: how are the birds talking? Here we're trying to understand how the neurons are talking to each other, and it's interesting because there are weird constraints on the system. We have lots of different ways we can speak as humans, but a neuron can, roughly, either fire or not fire. It's kind of like speaking in Morse code or binary, with a time-based element to it. So gosh, that's interesting.

Pulling this all together, and I'm hearing what you were saying before: there are all these different frames on neuroscience from these different fields, and those frames can operate at different levels and help us understand what's going on in there. At the end of your book, you talk about some of these grand unified theories. Could you say what those theories are, and which one you might be most inclined toward, if at all, if you had to choose one, or if you had to come up with your own?

Grace: Yes, so I chose three to cover there, and it was a little bit difficult to choose exactly what to cover, because there's no formal definition of what a grand unified theory is. But the three that I talked about were: first, the free-energy principle, which is associated with Karl Friston and, briefly, puts a lot of emphasis on the notion that the brain is trying to predict what's happening; that it's trying to predict its own sensory inputs is a big thing people talk about with respect to that one. Then there is Jeff Hawkins' theory, which now goes by the name A Thousand Brains, which is a little more in the weeds with respect to the neuroscience it pulls from, but it focuses on this idea that you have a bunch of parallel processing units in your cortex that are really focused on representing things spatially; that's one way to describe it.

Then there was integrated information theory, which is specifically about consciousness, so it's not really a full theory of the brain. It says that a thing is conscious, and it's not even specific to brains, to the extent that it integrates information in a specific way that they derive from axioms that they set up. So those are the three that I covered. That was the chapter that was the most difficult to write from a social, diplomatic perspective, because there are a lot of big personalities involved in these theories. And I say this in the chapter: I don't think that it makes sense to have a grand unified theory of the brain, so there isn't one of these that I would pick to be the winner, and there isn't any theory at all that I would pick, just because the brain has evolved over eons, and I don't think that it's working according to simple principles.

It does too many different things with too many different mechanisms, and I don't know that trying to put simple principles on it is productive in understanding it. Obviously, there's a level at which you don't want to have to describe absolutely everything from scratch all the time. But at the same time, I don't think you're going to derive from first principles how the brain works and then be able to explain all the data.

Rhys: Yeah, it's interesting. It's funny for me, coming in as essentially an outsider, hearing you talk about the big personalities. Maybe some people were sad they didn't get into the chapter, and maybe some people who are in the chapter, too. So there's all that sociology-of-science stuff. For me, when I start to learn about the free-energy principle, and correct me if I'm wrong, it's very connected to predictive processing: you have these predictions of the world, then you fit this sensory data into them, and you're trying to minimize error, or surprisal, or whatever. Are they roughly the same?

Grace: Yeah, so predictive processing of sensory information is definitely a big part of it. As you said, you have to predict something, then compare it to what you get as an input, and then you want to have low error on that.
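The predict-compare-reduce-error loop can be caricatured in a few lines. This is a deliberately stripped-down sketch (a single scalar prediction, a constant input, and an arbitrary learning rate), not the free-energy principle itself:

```python
def predictive_update(prediction, inputs, lr=0.3):
    """Track a stream of inputs by repeatedly shrinking prediction error."""
    errors = []
    for x in inputs:
        error = x - prediction    # prediction error (the "surprise")
        prediction += lr * error  # adjust the internal model to reduce it
        errors.append(abs(error))
    return prediction, errors

# A constant signal the system has never seen: error starts large, then decays
pred, errors = predictive_update(0.0, [1.0] * 20)
print(errors[0], errors[-1])  # the error shrinks as the prediction adapts
```

In fuller predictive-coding models this loop is stacked hierarchically, with each level predicting the activity of the one below and passing only the residual errors upward.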

Rhys: Yeah, the funny thing for me is, as I was reading that, and then as I was reading the Thousand Brains stuff, I felt that they were talking in relatively similar terms, almost. I know the Thousand Brains one, as you noted, is more specific: it's cortical-column maximalist, or whatever. We have these cortical columns, about 150,000 of them, and they all talk to each other in different ways, and it almost doesn't matter where you put them, they could do the same thing anywhere. But the shared thing is that both theories involve this predictive modeling of the world to some extent.

They're making these claims about models of the world, and they're kind of multi-level, in that there are these different levels within the columns, and when you're trying to aggregate this information, they synthesize it by combining the models of the world together. Am I hearing that right, that they're connected in that way? I was thinking, oh, these are cut from the same cloth or something. Is that right, or am I wrong there?

Grace: I could see how that's related, because A Thousand Brains theory is about building up a model, and usually when you talk about predictions, you have to have some model that you use to make a prediction, a model of how the world works. I don't know that A Thousand Brains cares about calculating errors or minimizing errors or anything like that, which is the focus of the free-energy principle. But I could see how they're similar. I actually just wrote a piece for the Simons Foundation, which is a foundation that funds neuroscience research, about predictive coding, particularly for the study of the visual system.

So I interviewed some people who are studying it for that, and really, one of the big takeaways was that it doesn't have a super clean definition that everyone agrees upon, and there are a lot of things that you can cast in a predictive light.

So it can be very expansive in that way, which makes it a candidate for a grand unified theory, because you can take almost any data and say, well, this is a prediction if you look at it this way. Sometimes it's helpful to think of it that way, because sometimes people aren't considering the fact that the brain could be generating predictions at all. So it can be good to have that in the back of your mind as a candidate for what a brain area might be doing in a given moment. But I do think that that is an issue if we really want to be claiming something very strong. If we're just saying, generally, predictions happen, then, yeah, for sure. But trying to make it seem like, oh, this is really exactly what everything in the brain is doing, that feels a little out of place.

Rhys: Got it. Yeah, love it. I'm coming from the perspective of the generalist, so I'm just like, OK, predictions are happening, these things are connected. But actually getting down to brass tacks, it makes sense that you'd need to be more formal. So we have this model of the brain as an information-processing thing that is informed by these various interdisciplinary fields: we can model how the neurons themselves work, how groups of neurons work in terms of network theory and information theory, and that goes up another level to behavior, how our brain does behavior and predictions and stuff like that.

Given that model, and this can be a collaborative conversation here, I want to understand a little bit more about the stuff that I'm doing, which is about what information wants. I'm writing a book called What Information Wants, about how information flows both in text and on the Internet, but also how information flows in our brains: how our brains work both as an information storage thing and as an information processing thing, and what kinds of information can actually live there. Like, why do we have catchy songs stuck in our brains, and stuff like that?

So, going in that direction a bit, and then we'll also go down the AI direction in a second: how should we think about the kinds of information, or memes, that can be stored in the brain, or that are fit to be stored in the mind?

Grace: So there's a trend, I think it's a trend, I'm hoping it's a trend, in neuroscience lately of focusing on studying ethologically relevant tasks in animals. Because the standard is, you take a mouse or a monkey or some animal, you put it in front of a screen, show it very basic lines and shapes, and have it do some task with that, and it's just not what the animal is used to doing in its evolutionary niche.

It's not hitting the circuits as they would normally be used in that animal's proper life outside of the lab. And there's a concern that we're not going to get at any interesting principles if we're doing such artificial tasks; even if the animals can learn how to do them, they're probably not doing them in a way that we would understand, or a way that relates to what they're normally using their brain for. So I think any answer to that question would have to specify whose brain, or at least what species' brain, we're talking about.

I assume humans are most relevant here, but to that point, it should be noted that a lot of neuroscience is not done on humans.

So I think there's a sense that you have to think about the evolutionary history of the species, what it would be doing in its day-to-day life to survive, and what it actually does do. So, actually looking at the literature on animal behavior and understanding what animals do, before you can understand what kinds of stimuli and what kinds of tasks are best suited to study in that animal.

Now, obviously, there are a lot of people who are interested in the evolution of humans and who try to cast anything that humans do or think in light of how it served them when they were cavemen or whatever, and those arguments can be speculative and of varying degrees of quality. At the same time, humans have proven themselves incredibly adaptive, so we are obviously capable of understanding very complex things that don't seem to have any relationship to what we would need to survive in the wild.

So when it comes to humans, it's, you know, it's tricky. Also, I'm focusing just on recent trends in neuroscience, which is probably not the best way to get at the answer, it's just what people are thinking about. But there has been a lot of focus lately on thinking about spatial maps, and how people think spatially even for abstract things, and how, you know, you can organize concepts spatially. And there have been tests that try to show that indeed, if you have people learn some kind of complex graph, they will traverse it as though, you know, they're walking through a house or something like that, like they'll have to go in order.

So I think that that's interesting, 'cause that does suggest some kind of constraint on our thoughts that is really based in space. Particularly, 3-dimensional space is kind of how we have to think, and things in that form will, you know, be easiest for us to process. Whereas I think there are some video games where they try to teach people to navigate a 4-dimensional space, and it's just really trippy and it takes a lot of practice.

So there are, you know, some built-in cognitive mechanisms that are probably relevant to what we can and can't take in as humans.

Rhys: Yeah, I like that. And again, thank you for the, what did you call the thing where people are doing like ethologically relevant tasks, or what did you call it?

Grace: Ethologically. Yeah, like their ethological niche. Yeah.

Rhys: Ah, not ecological?

Grace: No, it's ethological. Well, so animal ethology is like studying the behavior of animals in their natural environment kind of stuff. At least that's how I've always understood it.

Rhys: OK, no, no, totally. I've never heard that word, so I'm excited to learn. So, ethologically relevant. It's not like ethics, but it's the, yeah, OK, great, great, great. It's not eco. OK, cool. Yeah, so that's cool. I think that even, you know, expands my mind a bit. So I'm trying to understand what kinds of information are “fit for the brain.” We have to think, yeah, what species are we talking about?

And different species have different things that are fit for them, and to understand the things that are fit for them, you understand the evolutionary history, like, oh, this brain was built on Earth, you know, through this time. OK, that makes sense. And a classic version of this might be something like, you know, how plants can only use a certain part of the spectrum, because that is the only part of the spectrum that makes it through the, like, greenhouse gas layer or whatever.

And our eyes have a similar thing, where our eyes developed in the water, and so only certain wavelengths come through the water, so our eyes are kind of fit for the wavelengths that we could see. And so you could make a similar analogy about what kinds of things could live in our brains, based on how we've gone through time. So I get that that's interesting. I think the 3D spacing is another good one, where it's like, OK, yeah, obviously it would be tough for four-dimensional stuff to “fit in our brains” in an easy way, 'cause we don't...

Grace: And this comes up a lot as a neuroscientist, because you're trying to understand, like, the activity of a hundred neurons at once, and so basically what people do is they use mathematical dimensionality reduction to plot it in three-dimensional space so that you can look at it. 'Cause it's just like, what am I supposed to do with all this data if I can't look at it? And I can't look at anything that's higher than three dimensions. So…
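[A minimal sketch of the dimensionality reduction Grace describes. She doesn't name a specific method; PCA via the SVD is a standard choice, and the data here is simulated purely for illustration.]

```python
import numpy as np

# Simulated recording: 500 time points x 100 neurons (illustrative data only).
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 3))            # hidden low-dimensional dynamics
mixing = rng.standard_normal((3, 100))            # how latents project onto neurons
activity = latent @ mixing + 0.1 * rng.standard_normal((500, 100))

# PCA via SVD: center the data, decompose, keep the top 3 components.
centered = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ Vt[:3].T                   # 500 x 3: now plottable in 3-D

explained = (S**2 / np.sum(S**2))[:3]
print(projected.shape)    # (500, 3)
print(explained.sum())    # near 1.0 here, since the simulated data is low-dimensional
```

[In practice a library like scikit-learn wraps this up, but the SVD version shows there's nothing more to it than a linear projection down to three axes you can plot.]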

Rhys: That's funny. Yeah, I mean, even stuff like that gets me thinking about, you know, things that are fit for our brains, or things that the natural world exhibits, you know, like three dimensions and those kinds of things. Let me ask another question on this, though, which is, in my mind, and here you talked about our brains, like if we try to remember some graph or whatever, we'll walk through it in our minds, pretending like it's a 3-dimensional space or whatever.

That reminds me of how our brain, within short-term memory, has a visuospatial sketchpad, A.K.A. our mind's eye. And then we also kind of have a mind's ear, which is the phonological loop or whatever, which is where we can remember things that were said recently or something like that. In my mind, something like a catchy song, catchy songs kind of emerged because they could live in the phonological loop, they could live in the brain, kind of like that, and maybe some of these spatial things too. And I know your specialty is around vision and stuff like that. Am I correct in saying, or directionally correct, that catchy songs live in our, you know, phonological loop? Then from a vision perspective, how do you think about the kinds of things that might be fit to live in our visual processing units or whatever?

Grace: Yeah, I mean, I think it's a similar story of, you know, it's based on what you're used to, in a sense, and what you've experienced a lot of. So I think there are studies that show that if people are shown images that have kind of outliers in them, like there's an object in a location it's very rarely in. I think I saw one example where there was a shoe where a toothbrush would be on a sink, or something like that, just something that isn't normally there. And people kind of process that differently and maybe don't remember the details of that image as well, because in your memory, you know, obviously we're not perfectly storing everything we just briefly see, so when you recall things, you kind of fill them in with what is statistically there.

So: what's most likely to be there, and what makes most sense to be there, even if it wasn't actually there in the image, or if something else was there instead. Now, sometimes oddball things really stick in your memory, 'cause you're like, what is that doing there? So sometimes it'll make you remember better, but in certain cases, you know, something that is incongruous will stand out, or it will be something that you can't recall because you're filling it in. So where you get those expectations is based both on genetics and evolutionary history, and your own personal development, and also even your kind of recent experience. You know, you can learn a new location and learn what is normally in that location, in that, you know, office building or whatever, and you might have trouble noticing when things are slightly off, because you're so used to seeing it a certain way. So yeah, I think it all depends on your sampling of the statistics of the environment, basically, and how you're storing those statistics.

Rhys: Yeah, even that frame is helpful for me: your sampling of the statistics of the environment, and your storing of those statistics. Let me ask one final question on this thing and then we can move to AI stuff. I guess what I'm wondering is, when I think about something like biological evolution, I think about, you know, genes kind of being this informational unit that then iterates and iterates with variation and heredity to eventually fit into an environmental niche and, like, access energy or whatever. And I'm mapping that onto brains right now, where it's like, OK, we have this information, which you could call culture, you could call it, you know, the words that we're transmitting between us, the words that exist on paper, things like that. And that stuff is iterating, iterating, and then trying to find homes in its environmental niche, which can be brains or paper or the Internet or whatever.

Is there a part of neuroscience, I think a lot of what we've been chatting about today is kind of what I would call maybe the environmental piece, which is understanding how brains work. You're saying, this is how brains work, this is how they do their thing, you know. But is there another part of neuroscience, or a subfield within neuroscience, that's trying to ask or answer some of these questions that I'm kind of proposing here? Not necessarily how brains work, but instead understanding the homes that brains create, and the information that could fit in those homes?

Does that question make sense? Is there a field for that, or no?

Grace: So if there is, my guess is it would be more in psychology, where people are kind of studying, you know, yeah, what can people comprehend and what can they remember, in more of just a black-box input-output way, not necessarily thinking about the neural mechanisms of it. I think there's probably a lot more categorization of just what does and doesn't work, and honestly probably even in the psychology of education. I would imagine people there care a lot about what information you can get into a brain, and how. So yeah, I'm sure that people are thinking about that. I don't know of a strong component of it in neuroscience specifically.

Rhys: OK, be(()), that's helpful. So talk about this AI stuff for a second. The other cool part of your book is that you kind of looked at the emergence of, you know, artificial intelligence network so, you know, and artificial works of artificial neurons or whatever and how that kind of co-evolved with some of our understanding with the brain. I guess the most general question here is how do you see this ongoing research relationship between AI and the brain?

Grace: Yeah, so I think, I mean in the early days of artificial intelligence it was kind of wrapped up with cognitive science and the study of the mind even more so than the study of the brain, so to speak. So people were just, yeah, interested in how people think and how thought could be automated. So in that way, it was really based on, you know, humans and what humans were thinking and, yeah, how those processes could be mapped to some machine.

To me, that's kind of a good place to ground things, because my evidence that artificial intelligence should be possible, that we should be able to create a human-like machine, a machine that can do everything that humans can do, is the fact that humans exist and we can do all the things that we do. And I believe that we use physical things, our brains and bodies, to do all those things, and so there's no reason why we shouldn't be able to make a machine that does it. Even if what that means is that we replicate the human body cell by cell, we should at least still be able to do it.

It wouldn't really be sensible, 'cause there are a lot easier ways to create more humans, so we don't need to be doing that in a scientific, engineering way. But yeah, I think the human brain is kind of the go-to existence proof that we should be able to make intelligent machines. And then in terms of the actual history of the fields, they kind of weave in and out, in terms of how much artificial intelligence is really looking to human minds and brains for inspiration, versus going off on its own engineering track to just get as much done with the tools that exist in the moment. Because, you know, artificial intelligence is a scientific endeavor, but it's also an engineering endeavor, in the sense that you're trying to create a product that works, and it's even a commercial endeavor, where you want to create a product that works right now, for cheap.

So there are a lot more constraints on the creation of artificial intelligence that don't apply to the pure study of the brain. But insofar as the study of the brain can be useful, artificial intelligence, you know, the field and the people in it, seems willing to take from it. Right now, we're in another moment of tight interaction, where artificial neural networks are, as the name suggests, inspired by how neurons work and how brains work, and they're doing really well at artificial intelligence now. And that is influencing the way that neuroscientists are thinking about and studying the brain, and making the artificial intelligence people even more interested in, you know, peeking over at neuroscience and psychology and taking even more inspiration from there, which works to varying degrees depending on the problem. But yeah, we're at a time where the two fields are talking a lot again.

Rhys: Yeah, talking is good. Yeah, it's funny. I mean, I'm thinking about the future of, I don't know, I'm not sure the right way to ask this. Maybe one way is, you know, one of my friends is working with this company, Anthropic, who does AI safety research, and one of their things is trying to understand how artificial neural networks work. And so they'll go in there and actually look at them and be like, what's going on with these neurons, what's going on with those neurons, or whatever, these artificial neurons.

It kind of reminds me of a lot of the stuff in your book. I guess maybe the way to ask this question is, how do you see us trying to understand AI, in a similar way as we've been trying to understand our brains? Do you have any instincts about what that process will look like, our continuing understanding of AI?

Grace: So that's actually very related to my actual research as a scientist, the stuff that I do day to day, because I currently use artificial neural networks as a model of the brain, a model of the visual system in particular. So, you know, you build an artificial neural network and you train it to be good at doing some visual task, where it takes in some images and produces some output. And then part of what I do is try to dissect that network to see if it's working the same way that we think the brain is, and if it is or isn't, can it help us generate hypotheses about how the brain could be working in cases where we don't really know.

So yeah, it's a similar set of tools as what could be used to understand artificial neural networks for the purposes of safety, or, you know, deploying them in the real world, where you want to know what their properties are and how they're coming to their conclusions. So it's very similar. And you would think, with all the years that neuroscience has existed, that once we got to this point where people wanted to understand artificial neural networks, which are perfectly observable (you can do anything you want: get the activity of all the neurons, see all of the connections, run any experiment you want, which is not how the real brain is), we'd be ready to say, oh, here's exactly what you should do to try to understand that artificial neural network.
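[Grace's point about artificial networks being "perfectly observable" can be sketched with a toy network. The architecture, sizes, and weights below are hypothetical and purely illustrative: the point is only that every unit's activity and every connection can be read out directly, unlike in a real brain.]

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy two-layer network (hypothetical sizes): 10 inputs -> 16 hidden -> 4 outputs.
W1, b1 = rng.standard_normal((10, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 4)), np.zeros(4)

def forward(x, record):
    """Run the network while logging the activity of every unit along the way."""
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    y = h @ W2 + b2
    record["input"], record["hidden"], record["output"] = x, h, y
    return y

record = {}
forward(rng.standard_normal(10), record)

# Unlike a real brain, nothing is hidden here: every activation and every
# connection ("synapse") is directly readable.
print(record["hidden"].shape)   # (16,)
print(W1.shape, W2.shape)       # (10, 16) (16, 4)
```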

Here are the tools you should use, and here are the conclusions they'll be able to draw for you. That's not exactly where we're at. I mean, we can, and I and others have, taken the tools of neuroscience and applied them to these networks, and you get some understanding. You do get some results where you can say, OK, now I have some better sense of how this network is working and how maybe I'd want to tweak it, either to make it work better or to make it a better model of the brain, and a lot of times those are completely aligned. So we have some options, but I'm actually quite interested in using neural networks as a way to test out and develop tools that we can then apply to the brain.

Because these are perfect experimental settings for us, where we can do whatever we want. I think that's a good avenue forward for computational neuroscience, to prepare us for a time when we will be able to record a lot more from the brain and do a lot more experiments. We want to have the analysis and mathematical modeling tools ready for all that data. So they're not fully ready for AI right now; they're in the works, and hopefully the AI people will develop their own tools that we can then also steal [unintelligible].

Rhys: Yeah, I think what this other company is doing with, like, linguistics or whatever, with language, you're doing with visuals, where you're like, OK, what the hell is this set of neurons doing? Oh, that's the one that looks at the color red or whatever. It's cool. I like what you just said there, which is, as we get better and better at modeling brain activity and understanding it. Yeah, you know, we've got whatever, a hundred billion neurons or 86 billion neurons or whatever.

It's like, OK, once we have that full mapping, the nice thing is that we'll have had a playground for the last couple of decades or whatever around artificial neural networks, such that when we actually do it on the brain, it won't be our first time at it. Yeah, I like that. The other bonus question here is around, kind of, you know, Neuralink and these neurosensory interfaces, these ways to, you know, understand brain functioning, or to use brain functioning to modify real-world stuff or whatever.

I guess my question here is something like, how do you think about this information-flow piece? It's like, we have information flowing within humans or whatever, and eventually we're going to have an increasingly robust API layer to our brain. How do you see that API layer evolving, both the input and the output to it?

Grace: Yeah, so I think my immediate reaction is that any true neural interface is very far off for the average person because to get good signal you usually have to open the skull and that's a big deal. You're not going to be casually opening your skull.

Rhys: Not on a random Tuesday or whatever.

Grace: Exactly, yeah. The alternative then is to use some sort of outside-the-skull monitor, which is going to have far less signal-to-noise, and at least most of the ones in existence now are more frustrating to use than they are fun. So I think for most people on a day-to-day basis, they're not going to be doing a lot of that. But if I'm being really, you know, optimistic, far in the future, someday we do get really good brain-computer interfaces, where the activity of your neurons is actually guiding something in the real world directly.

I think it's very interesting, because studies on people who have had these implants, and also on animals, have shown that you can actually learn to use new inputs and outputs in an interesting way. And so I think that that's kind of a fun experience that not a lot of humans have had, in terms of creating a new interface with the world and figuring out how to use it, and kind of creating the experience in your mind of what that is, the same way that you can with your normal senses and your limbs and all of that.

So I think it's cool and it would be fun and I'm sure that people will find, you know, ways to use that that make them really good at some obscure thing I can't anticipate now. But people, you know, when you give them technology, they figure out interesting things to do with it, so it's probably going to be a fun and interesting future. I just think it's not going to be for a very, very long time.

Rhys: Yeah, cool. I like what you said around, you know, it's going to be a long time because it's hard to open up the skull, and then, once we get it, that sense that our brains are just these, like, correlation devices or whatever, and that they can learn these new information-processing things. It's like, we know what it feels like to move our hands around or whatever, and people will start to know, once you do these new weird interfaces, like Google Glass and stuff like that, it's like, wow. But once stuff is actually put in, we'll just kind of become used to it over time; there will be a new kind of interface for our mind.

Grace: There have been, though. There have been studies, in terms of motor interfaces in non-human primates in the lab, that show that there are certain patterns of, you know, controlling something with your neurons that the animals can learn faster, and other patterns that they can't learn as well. So there are some, as we've talked about, kind of pre-set patterns that will make it easier to learn a certain type of control in the real world versus a different type. Understanding that would, you know, make a better product for the people who are using this, and it's also just interesting from a neuroscience perspective.

Rhys: Yeah, love it. So as the final two questions here, one is for a recommendation piece, where do you feel like the great neuroscience folks hang out, you know, is there a place you'd recommend folks go to or a conference or something like that?

Grace: Yeah, so for computational neuroscience, the main conference is called COSYNE, which is C-O-S-Y-N-E. So not the mathematical function, although now whenever I want to type the mathematical function, I spell it that way, 'cause I go to COSYNE all the time. That stands for the Computational and Systems Neuroscience conference. And then another one that is more recent, but is really aligned with this kind of neuroscience, AI, artificial neural networks thing, is the Cognitive Computational Neuroscience conference, which does some really interesting initiatives in terms of trying to get actual explicit debates amongst people who have differing views, and really get people thinking and talking, so I like that conference a lot. Then a lot of the people who participate in all that stuff are on Twitter, and they're arguing with each other very openly about things. So that's definitely a place to go.

Rhys: Join Neuroscience Twitter, people, you know, that's what you want. And then a final question here is some overrated/underrated. So I'll just say one or two of these, and you can tell me your hot take on whether you think it's overrated or underrated. The role of dopamine: is that overrated or underrated?

Grace: I'm going to say underrated, because it probably does a whole bunch of crazy computations in the brain that we don't even know about yet. Even though people in popular science are already excited and talk about it a lot, they're missing out on probably, you know, 80 to 90% of its function.

Rhys: Wow, cool, yeah, I'm surprised by that one. Because dopamine, yeah, like, normal people know about it, like I knew about it before, and so it's like, but actually…

Grace: You probably don't know a lot about it, yeah. And there's still a lot that neuroscientists don't know about it, so I think it's gonna keep delivering.

Rhys: OK, nice. What do you think about the neuron as a fundamental unit? Is that overrated or underrated?

Grace: I think it's true, but I think it's overrated. So yeah, we could be looking beyond as well or even sub-neuron. Yeah, I think it's mostly true but it’s overrated.

Rhys: Cool, and then Neuralink. It sounds like you might think overrated, but is that right?

Grace: Yeah, on the whole, I have to say overrated. They're doing interesting methodological work that, you know, neuroscientists will probably be excited to use, but in terms of the vision they're selling to the public, that's overrated.

Rhys: Great. Classic. Well, beautiful. Well, thank you again, Grace, for coming on the show. Hey listeners, if you want to understand the brain from a cool mathematical perspective, definitely check out Models of the Mind. I'll put it in the show notes. And Grace, also, is there a place people can find you on Twitter?

Grace: Yeah, I'm @neurograce.

Rhys: So nice, nice. You knew you were a neuroscientist when you joined Twitter.

Grace: Exactly.

Rhys: Well, thank you again for coming tonight, Grace, and thanks everybody for listening. Goodbye, everybody.

Grace: Thanks.