Rhys Lindmark: Hello, listeners. Today, I'm excited to chat with Andy Clark. Andy is a British philosopher and a professor of cognitive philosophy at the University of Sussex. He's written some great books on humans as cyborgs, extended cognition, and most recently Surfing Uncertainty, which is on this thing called predictive processing. Andy, thanks for being on the show, and welcome.

Andy Clark: Hi Rhys, I'm delighted to be here. Thanks for having me.

Rhys: Yeah, we're excited to dive in. As Andy and I were chatting about beforehand, the overview of the conversation is that I'm writing this book on information and how information flows, and Andy has been thinking a ton about how the brain works, how the mind works, and how humans interact with technology. So it's going to be a co-exploratory conversation where I try to pick Andy's brain about his body of work. To dive in with that specifically, Andy, how do you think about the through-line that ties all of your work together?

Andy: Yeah, I was pondering that a little bit earlier, and I think the thing that ties it all together is a conviction that embodiment really matters: that the human body is, in some way, the linchpin for the human mind. An interest in what body and action do for minds like ours has run through all my work, from rejecting good old-fashioned artificial intelligence way back in the early 80s or so, through an interest in artificial neural networks, robotics, and the extended mind, to, most recently, predictive processing, which I think is a lovely account of how perception and action come together for embodied beings.

Rhys: Yeah, that's interesting. I think it's an interesting take too, because when we think of our minds, we mostly think of them as the special thing that exists in the brain, and the body is just kind of the meat suit that holds the mind. But you're saying, no, no, no, the embodiment actually really matters. So could we dive into that for a second? Could you tell us a little more about embodied cognition: what is it, and why should we think of ourselves as having embodied or 4E cognition instead of just existing in the brain?

Andy: Yeah, think about just about anything that we do. Let's take something quite cognitive to start with, like thinking through a math problem. If you're thinking through a math problem, then one thing you'll do, if you're trying to explain what you're thinking or the reasoning you're going through to someone else, is what I'm actually doing here, which is waving my hands around a little bit, as I do a lot. It turns out, in a large and interesting body of work by Susan Goldin-Meadow, that the physical gestures we make while we're explaining ourselves actually seem to be helping our reasoning along.

If you actually get kids to sit on their hands while they try to explain how they solved a math problem, they're much, much worse than they would be otherwise. And it's not really just a matter of communication, 'cause even people who are congenitally blind gesture as they talk to other congenitally blind people, and you probably gesture when you talk on the phone. So there's that kind of thought: the loop through the body is actually doing some kind of cognitive work. You see that kind of sharing of the load between the brain and the body in other cases very dramatically, like in the case of walking. An example I like to use is the big, lumbering, state-of-the-art robots like Honda's ASIMO. If you Google those, you'll see they can do some decent walking around, they can even get up stairs, but they're very, very energy inefficient.

They use a huge amount of energy to do what they do, and that's because they have to control each and every one of their bodily joints in order to get things done. The body is just a problem for them to solve, and they use a lot of power and energy to do it. Whereas for us biological walkers, the brain and the body co-evolved, so our brain very neatly gives minimal commands to a body that actually does a lot of the work. There are things called passive dynamic walkers that you can see online as well. They started off as little toys in the Victorian era, things where you put them on a little incline and they walk down that slope with a fairly biological-looking motion.

It turns out that's because of the way the arms are weighted and the way the legs are held together; you might have rounded heels, if you like, and weights on the ends of the arms, and these kinds of gross bodily dynamics really simplify the task for the brain. So it's not as if we've got a body and our brain is kind of parachuted into that body to try and control it. The two systems co-evolved, and that's why I think they are so adept at spreading the load. We even do things like counting on our fingers; there are all sorts of examples like that.

Rhys: Yeah, it's interesting thinking about it as sharing the load. I really like that framing, and also the coevolutionary framing, where it's obvious brains were not just put on a random body ten years ago or whatever. No, this happened over the course of tens of millions of years of brains and bodies co-evolving with each other, before even monkeys and all these things. So I like that framing. A bonus piece of it: there's the embodiment framing, and there's also this 4E model of cognition, which has embodied and then embedded, enacted, and extended, where it's not just us in our bodies, but also the whole system around us of technology and other things. How are you thinking about the full system of cognition there?

Andy: Yeah, so 4E cognition is a kind of huge area in its own right. There's a thousand-page volume called the Oxford Handbook of 4E Cognition, so it's not going to be a very quick answer. But as for the differences between the E's in 4E cognition: I think the bedrock is the embodied one. To me, that's the one that holds it all together, and that's just what we've already been talking about: the importance of the body for what we would perhaps think of as disembodied cognition, the role of the body in reasoning, in thinking, in formulating ideas. Then there's the weakest of these, which is embedded.

Embedded cognition is just the idea that we don't really need to change our science of the mind; what you do is take the body and the world very seriously. It's a rather mild position that is sometimes defended in this area, because you get to keep your traditional perspective on the world, but you also get to take on board some of these cool things about passive dynamic walking and the stuff I've been talking about. But then there are more radical folk, like the enactive and extended folk. Those are the other two, traditionally: enactive and extended.

Enactivism is the idea that meaningfulness comes about through action, that we make sense of the world by acting in the world, and that we shouldn't think of cognition as something separate from action. So there are so-called sensorimotor enactivists, and their typical picture is: think of a blind person feeling their way around with a cane, constantly probing the world with action; that's what the perception itself consists in. And they think all perception is like that: when you saccade around a scene with your eyes, it's all active exploration, and that's what really makes perception what it is. There's another variant of enactivism. I told you this would be a long answer.

There's another variant of enactivism that is about meaningfulness, the origin of meaningfulness, and it's the idea that we do not encounter a pre-given world, but rather bring a meaningful world into being by our actions. The example of that that I've always liked: just imagine a new university campus. There are no paths already laid down, it's just covered in grass, and you let everyone move around, and they lay down paths according to where they want to go. Then other people follow those paths, so they've structured the world in a meaningful way by their actions, and they continue to explore the world they've structured like that. There's a lot more going on in enactivism than that, and it's metaphysically quite challenging at times.

Then there's one dangling E, which is the extended one, and that's the one I've been most associated with. That E comes from the extended mind story that I threw out there with David Chalmers, back in 1998, I think it was. This is the idea, the opposite of embedded in some sense, that the environment can be so important that it should actually sometimes be counted as part of the mechanism of mind. So when I was talking about the role of gesture there, you say, look, the body isn't just the environment in which the thinking happens; the active body is actually part of whatever is doing the thinking.

Extended mind theorists think that that doesn't stop at the body: when I'm doing stuff like scribbling while I think through a math problem, or any kind of problem, or sketching if I'm an artist or designer, those loops through the external media can actually be part of the thinking mechanism. So the extended mind is the more metaphysically challenging position that says your mind doesn't even live solely inside your brain; there's more to it than that, held together by these sensorimotor loops.

Rhys: Yeah, I love it. Well, thank you; we got the idea of a thousand-page book in like two minutes, boom, done. I think it's interesting because, as I noted before, it challenges our natural idea of how we do thinking. We're so egotistical and self-focused, and these ideas kind of push back on that. Do you see any overlap with Buddhist mindsets around having no self, or the self, you know, consciousness, just kind of appearing? Does that connect, for you, to 4E cognition?

Andy: Yeah, it really does. It seems to keep coming up in different bits of this landscape. It comes up in the cyborg-mind part of the landscape as the idea that there's no concrete self or selfhood, that it's very much negotiable. We could exist in all kinds of physical forms; I can control all kinds of complicated machinery with the right sort of interfaces, and I might feel myself present in those ways. If you take that far enough, you start to think the notion of the self is just some kind of construct. It's a useful construct that helps me negotiate the world, but it's not something pre-given. And I think that comes out of the predictive coding, or predictive processing, side too, where just about everything about ourselves turns out to be some kind of construct, because in that story, which I guess we'll get to in due course, it's all about minimizing prediction error as you find your way through the world.

So that's the Buddhist picture, or that kind of soft-self picture. It's one that I'm very, very much attracted to. I think we make a lot of mistakes if we think of ourselves as sort of Cartesian selves that have a very firm, independent existence in here, that are the things we know best, known sort of infallibly. I think if you start from that point, all kinds of bad things will happen to you as a thinker and reasoner, and they might even play out badly in the world.

Rhys: What kind of bad things are you talking about?

Andy: Things like starting to believe in hard problems of consciousness and the ineffability of qualia and subjective experience; the idea that there's a whole realm of things here that science just can't really get to grips with. I think a lot of that flows from the fact that the simplest self-model we've got that enables us to successfully negotiate the world is one that ascribes to ourselves all kinds of weird and wonderful properties. But we don't actually have to think that it's the job of science to show how those properties really exist. It's the job of science to show how we come to infer that those properties exist. That's the idea.

Rhys: Interesting. I think I got that, but let me double-check. So hearing about those different kinds of E's is really interesting; I'm especially interested to dive in a bit on the extended one. But thinking about the self, and this Buddhist version of the soft self, I really like that terminology. It's helpful, as you were saying, to think about all the parts of ourselves as constructs: our mind is constantly creating constructs, and the self in and of itself is a construct. So you're saying that if I believe that the self is some kind of special thing, then that could be bad, because it sends us down paths where we say, okay, if the self is special, and consciousness or whatever, then we have to find scientific explanations for this thing that we've decided is special, but it's actually not special. Am I hearing that right?


Andy: Yeah, you are hearing that right. I don't think we need a new kind of science to understand mind and consciousness. I think we've got roughly the right kind of science, and what we need to do is understand why it is that we think the roughly-right kind of science we've got isn't enough. And I think we can give a good account of that from predictive processing and embodied cognition principles, to do with the role of minimal models in helping us get through the world.

There's a kind of view like that out there as well, which you'll find in work by Graziano, where it's all about having an attention schema, but it's the same kind of picture: there's something here that looks more problematic than it is, because by its nature we present it to ourselves as something that it actually isn't. That's slightly confusing, even to me, actually. But this belongs in the ballpark of a move in the consciousness debate at the moment to shift the problem a little way from the hard problem, which is: how is this ineffable experience that I'm having now, this experience of blueness and windiness and so on, possible? To something like the question: what kind of mental or computational organization would give rise to a creature that would make reports like, how is it possible that this kind of ineffable blue experience is happening to me right now? So the meta-puzzle is a puzzle of how you build things that get puzzled, or that appear to get puzzled. Some people (()) then think that if you solve that puzzle, you've probably done enough, and other people, like Dave Chalmers, who's my collaborator, of course, on the extended mind story, think that that's not enough.

Rhys: I love it, yeah. I've recently learned about this meta hard problem of consciousness, and I love that reframe. We think this is so ineffable and so crazy, you know, the phenomenological experience of being a human, consciousness is so crazy; but wait a second, instead of trying to explain that, let's try to explain why we think that. That's weird in and of itself. So I like that. Let's get the full map here, the full territory, which is to talk about predictive processing. This is in your book, Surfing Uncertainty, and for our listeners, could you give a quick overview of what predictive processing is?

Andy: Yeah, at least I'll have a crack at it. It's the idea that brains are fundamentally prediction machines. The fundamental thought is that instead of seeing the brain as being in the business of, if you like, trying to register how the world is on the basis of incoming information, it's trying to predict how the world is, and every time it gets a prediction wrong, prediction error signals result, and it runs routines to try to update. Slowly, you bring a structured world into view by finding the best way to get rid of prediction error as you attempt to predict the stream of sensory information that's coming at you.

That's the bedrock picture. To make it a bit more intuitive, I like to use something like sine-wave speech; I don't know if you've come across that, but I brought a little demo, 'cause it's my favorite for doing this, if I can find it on my desktop. It's only a sound demo. So what is sine-wave speech? It's a version of speech where a lot of the ordinary signal has been stripped away, and what's left is something that sounds a little bit science-fictiony, like beeps and boops [unintelligible]. So what I'm going to play you is a fragment of sine-wave speech. You'll then hear the original sentence, and then the sine-wave speech will replay. What you should be listening for, what you'll hear very clearly, I think, is a major difference in your experience, which is the difference that being able to properly predict the flow of that sound signal makes once you've got on top of the original. So I'll just play it and see what you think.

Sine-wave speech: It was a sunny day and the children were going to the park.

Sine-wave speech: The camel was kept in a cage at the zoo.

Andy: One more.

Sine-wave speech: He was sitting at his desk in his office.

Andy: So you see the point of that. It's a little bit like those pictures of Dalmatian dogs hidden away in a noisy background. The thing is, when you've got a good prediction in place, a lot of structure comes into view, and in this case, meaning comes into view as well. Now, of course, we're all already good predictors of natural language, and so this demo is obviously piggybacking on that, just like the Dalmatian dog demo is piggybacking on what you already know how to see.

But the idea is that learning to perceive the world was a process a bit like that. It was a process where you're hit with all this stuff, it's kind of noisy, and what the brain has to learn to do is get a high-level grip on the sorts of patterns that might matter in it, use that to try to predict the shape of the signal, and as that happens, you separate out the signal from the noise, and the meaningful, salient bits of structure get to emerge. That's the predictive processing picture, at least for perception. The nice thing about the overall picture is that it's got a similar story to tell about action, but I imagine we'll get to that in due course.
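
For readers who want the mechanics concrete, here is a minimal sketch of prediction error minimization over a toy one-level model. Everything in it, the linear mapping g, the learning rate, the numbers, is invented for illustration; it shows the shape of the update loop Andy describes, not an actual model of the brain:

```python
import numpy as np

# Toy prediction-error minimization: an agent holds an estimate `mu` of a
# hidden cause, predicts the sensory signal via a generative mapping g(mu),
# and nudges `mu` whenever prediction and observation disagree.

def g(mu):
    """Hypothetical generative mapping from hidden cause to predicted signal."""
    return 2.0 * mu  # assume the signal is roughly twice the hidden cause

def perceive(observations, mu=0.0, lr=0.05):
    for o in observations:
        error = o - g(mu)        # prediction error signal
        mu += lr * 2.0 * error   # gradient step on squared error (dg/dmu = 2)
    return mu

# Noisy sensory stream generated by a true hidden cause of 3.0
rng = np.random.default_rng(0)
data = 2.0 * 3.0 + rng.normal(0.0, 0.5, size=500)
print(perceive(data))  # settles near 3.0: the structured world "comes into view"
```

Real predictive processing models stack many such levels, with each layer predicting the activity of the one below; this single layer is just the basic loop.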

Rhys: Thank you for that demo. So for listeners, the high-level view is that we have our top-down models of the world, these kinds of Bayesian predictions: I predict this, I predict that; what's going to happen today? Probably the world is going to be roughly the same as it was yesterday. Then the bottom-up sensory data gets compared against those models, which are actively predicting all the time, and when the bottom-up sensory data differs from what the model predicted, it's like, oh no, this is bad. This is what your book calls surprisal, or prediction error: we're constantly trying to build good models of the world to minimize the surprisal, to minimize the prediction error.
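
(As background on that term: "surprisal" has a standard information-theoretic definition in the literature around these models, though the speakers don't spell out the formalism here:

```latex
\text{surprisal}(o) = -\log p(o \mid m)
```

where \(o\) is a sensory observation and \(m\) is the model. The less probable the input is under the model, the higher the surprisal, so keeping average prediction error low keeps this quantity low.)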

I think both those examples do a great job of showing it. When you hear those first little bits of weird alien speech, you don't understand it at all. But then, once we have a model, oh, I know what's going to come next, it's going to be this thing, and we can hear it correctly. Similarly with visual things, whether it's the Dalmatian picture or the cow picture, these weird Rorschach-blot-like images; for those of you who haven't seen them, I'll put some in the show notes. You look at it initially and have no idea what it is, just a bunch of black and white stuff on the page, and then once you're told, oh, this is a Dalmatian, or oh, this is a cow, you see it immediately. So that makes sense. Well, I guess one question I have for you is: why does it matter? Okay, so our minds work through these Bayesian prediction models. Who cares? So what?

Andy: Yeah, I mean, of course, it's a question you can always ask of any sort of grand high-level theory about the mind: did we really need that? We've got neuroscience, you've got psychiatry, what more do you need? Actually, you need it to sort of get on top of your own mind, but...


Rhys: This is your whole life, maybe, as part of your work: people saying, who cares, Andy, and you're like, no, it's important.

Andy: Well, I also ask myself the same question quite often, so it's not just you. But actually, I think it does matter, because having the right picture of the nature of our relationship to what we unthinkingly think of as reality is quite important. Seeing the distance that separates us from whatever structure is really out there in the world is, I think, hugely important. It pushes back against the idea that there's simply a world out there, and the job of the brain is to get it right, and therefore we should probably all end up with the same picture of the world.

If you think that worlds come into view partly because of the expectations you have, and it's clear that everyone has a very different history and knows about very different things, then it becomes immediately clear that people are going to perceptually experience different worlds. And that seems quite important. It may be politically significant. There's work by Lisa Feldman Barrett and colleagues talking about how this could matter in rather difficult situations, like police officers encountering suspects in a dark alleyway, and perhaps mistaking a handheld phone for a drawn weapon, you know?

The predictive processing account of what could happen there is that your own bodily self-predictions are getting in on the act, because we're not just trying to predict the signals coming from the external world, we're trying to predict the ones coming from our body. And if your body is in a highly anxious, frightened, scared state, the Bayesian brain just takes that as more evidence for what might be out there, and you can see how that could very easily skew things and lead to terrible and tragic and horrible mistakes as a result of overweighting bodily evidence in a situation like that. So I think understanding how these pictures of the world come into view is going to be hugely important for (()) to flourish as a self-reflective species.


Rhys: Yeah, I love that. In some ways that solidifies this idea of frames on the world, or lenses on the world, and what we often call something like a bias. It might be more powerful, instead of thinking, oh, here's this one bias over here, confirmation bias, and here's this other bias over there, fundamental attribution error or whatever, to say: no, no, no, instead of thinking of ourselves as taking a sensory world out there as input, think about what frames we're constantly applying to the world around us and how that changes things.

This is like the classic right versus left, red versus blue, the-other-side thing. People have different frames on the world, and understanding those frames, and how information both from the world and from our bodies gets put into those frames, seems pretty crucial. So I do agree with that, and I want to dive into it a little more and transition a bit, because this is getting into the world of how information flows. Thinking about this book I'm writing, What Information Wants: over the course of history, across both biological and human evolution, we have information, whether it's genes that want to continue to replicate, or, in our current world, these memes in the Dawkins sense that, I and others would claim, want to replicate from mind to mind. Then we get these bigger memeplexes, like religions, that have powerful proselytizing properties.

One part of this story that I'm trying to understand better is the information itself, this stuff that is kind of living, or moving around. How do your views on embodied cognition and predictive processing change the way information flows, or the kinds of homes that these memes can live in?


Andy: Yeah, so a caution here: I'm no expert on memes. Basically, I think that there is a flow of information, as you say, and that some of that information becomes materialized in structures that we share: structures in books and in films and even sound waves in the air. Memetics, I guess, is in some sense the science of those structures and what they do. I certainly think those structures are hugely important: as we materialize our thoughts, we create new objects that have properties of their own, and that's a very, very powerful force, I think, in the evolution of human thought.

One of the things I think it can do, and this comes out of the predictive processing perspective on it, is actually help us break our own models of the world. You talk about all these kinds of biases that we have, and that's true: when I think about something, I'm sucked into the biases that I have. That's the nature of how the inner machinery works, I think. At the same time, if I create something as an external object, then other people can approach it, and I can approach it, as a kind of detached object, and that gives us a chance to break apart these models a bit, rather than always being sucked into their big attractor basins. You can keep them a little bit at arm's length and poke and prod them. Say I've got a mental vision of the perfect training shoe, and I keep thinking about training shoes in that way.

If instead I build a big model of it, I can walk around it and poke and prod it, and I can learn different things and imagine different possibilities that way, 'cause I'm not constantly getting sucked back into my own mental attractor basins. So in that sense, I think that process is hugely important. There's also something about the flow of information in this sphere we've created that I think is potentially dangerous, in the kind of religious cases you mentioned, maybe, which is that prediction-error-minimizing brains want the simplest model that is consistent with whatever they're taking seriously. The goodness of a predictive model is its accuracy minus its complexity.

So you're always being driven towards something simple if it will do the job, and I think a lot of the more dangerous memes that pass around are the ones piggybacking on that, because they've got simplicity nailed, and they're pretending, at least, to explain an awful lot of stuff that perhaps they're not actually getting to grips with. The attraction of simplicity is written pretty deep into the Bayesian brain, if you like. So I think there are at least those points of contact, and I think there might be something useful to say about what information wants using it (()). Yeah.
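
Andy's "accuracy minus complexity" score has a standard formal counterpart in the Bayesian-brain literature: the variational bound on model evidence. One common formulation, offered as background rather than as anything the speakers commit to here, scores a model's posterior beliefs \(q(s)\) about hidden states \(s\), given observations \(o\), as:

```latex
\underbrace{\mathbb{E}_{q(s)}\!\left[\log p(o \mid s)\right]}_{\text{accuracy}}
\;-\;
\underbrace{D_{\mathrm{KL}}\!\left(q(s) \,\|\, p(s)\right)}_{\text{complexity}}
```

The KL term penalizes beliefs that stray far from prior expectations, which is the formal version of the pull toward simplicity Andy describes: a model only earns its complexity if the accuracy gain pays for it.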

Rhys: Yeah, it's interesting. I really like what you said about this ability to externalize, which is talked about in relational dynamics: the ability to produce a third thing, to externalize your internal state. So if you and I, Andy, are having an argument, blah blah blah, you said this, no, I said this, what we can do is try to produce a shared third thing, where we co-create an account of what actually happened.

Then once we create this thing, it's removed from our internal selves and out there in the world, and we can talk about it as this third object. So thinking about the frames in our minds as these kinds of third objects, and then externalizing them for others, could be very powerful. So I'm hearing you there. And I think you're right to say there's an interesting difficulty here, where our brains are looking to minimize prediction error, to minimize surprisal.

So if you have a model of the world that says X is good and Y is bad, okay, that's easy, there's less error there; versus, well, everybody is complicated, you know, and the brain goes, well, then what is my actual model? What am I trying to predict? So I think that one's actually interesting: how do you think about creating models in a world of complexity?


Andy: So I think that's actually the point at which those two strands come right together. That's why I think we've got all the institutions of science, for example, and art for that matter. Think of peer review in science and all those sorts of practices: you turn your theory into something publicly inspectable, then you run it past a whole bunch of other people, and they poke and prod at it. What we've found over time is that this is a way of making sure that we do justice, or at least try to do some justice, to the actual complexity of the world we're trying to understand, as opposed to simply being sucked into the most minimal model that seems to accommodate the data points you happen to be taking most seriously. Because that's what your brain wants to do: take a few data points seriously, find a minimal model, and sit with it. I think what we've done is create all these structures that push back against that, and therefore ultimately let us do more things and think more things.


Rhys: I like that framing: we have our natural human bias towards small, simple models that take in just a little bit of sensory data, and what we need to do is create these more meta institutions that let us push back on that and actually judge the world in its full complexity, instead of whatever our minds want to decide its complexity is.

Let me ask another version of this question. There's one framing you gave on memes, which is true, but I want to give this other one. You talked about how we have these artifacts in the world, whether in books or in art or whatever, and these are kind of magnetic pieces, information crystallized, or something like that.

There's the McLuhan idea that the medium is the message, and we have this information flowing in the world. Take a gene as an example: genes are fit for their environment. If you have an environment with no oxygen, you won't get a Cambrian explosion of animals and plants. But once the environment has lots of oxygen, then boom, genes will iterate and learn to fit a world where they have access to a lot more energy.

Similarly, ideas are fit for their medium, whether it's the vocal medium with humans at the beginning, or then written and printed media, and now the Internet. And one of these mediums is our brains. How do you think about the kinds of memes that can live in brains? I'm thinking about stuff like n-grams or whatever, things that are just memory stores. But is there something about predictive processing or embodied cognition that says: here's the information, and here are the homes that memes can have in our bodies?


Andy: Yeah, that's an interesting question. I want to say it's something like anything that does multi-level reduction of prediction error that...


Rhys: I love that answer. That's very abstract, by the way, but tell me more about that.

Andy: My (()) feeling is: think about any kind of aha moment. What's going on in those aha moments, where things seem to fall together and an idea suddenly solidifies for you? I think those are cases where you're not just, as it were, getting rid of prediction error with respect to one of your models of the world; you're getting rid of error in a way that cascades downwards or upwards or sideways and gets rid of a lot more error than expected. One of the things going on in this area at the moment is looking at the slope of prediction error reduction, and what happens when we do better or worse at getting rid of prediction error than we expected to.

Just as we have expectations about the world, predictions about the world, we have predictions about our own likely success in dealing with the world in particular situations. When we do better than expected at minimizing prediction error, that typically feels good; when we do worse than expected, that typically feels bad and makes us anxious. So I think the home for the most powerful kinds of memes in the brain is whatever is there when more-than-expected amounts of prediction error are being minimized. The only trouble is that this can be hijacked; just like anything can be hijacked, these mechanisms can be hijacked too, and alcohol and certain drugs are very good at giving us the feeling, as it were, that we are minimizing more-than-expected amounts of prediction error when we're probably not. So sometimes when I've had a pint or two, I think, well, it all feels really good, it feels like I'm minimizing lots of error. But then I have that little voice on my shoulder saying, yeah, but you're not.
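
A toy sketch of that "slope of error reduction" idea, purely for illustration: compare the actual rate at which prediction error falls against an expected rate, and read the difference as positive or negative valence. The function name and the expected rate are invented for the example, not drawn from any published model:

```python
# Valence as better/worse-than-expected prediction error reduction.
def valence(errors, expected_rate=0.05):
    """errors: prediction error magnitudes over time.
    expected_rate: assumed predicted fractional drop in error per step."""
    signals = []
    for prev, curr in zip(errors, errors[1:]):
        actual_rate = (prev - curr) / prev if prev else 0.0
        signals.append(actual_rate - expected_rate)  # >0 feels good, <0 feels bad
    return signals

# Error collapsing faster than expected -> positive signals ("aha" territory)
print(valence([1.0, 0.8, 0.6, 0.5]))
# Error stagnating -> negative signals (anxiety territory)
print(valence([1.0, 1.0, 0.99, 0.99]))
```

On this reading, a hijacker like alcohol simply injects the positive signal without any real error being reduced.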

Rhys: I love that. That makes sense to me; let me just reflect it back for a second. You're essentially taking the integral. You're saying: look, our brain has these levels, from a predictive processing perspective, with the top-down and the bottom-up. We want to minimize prediction error generally, and especially, if we can have some kind of idea in our mind that minimizes prediction error not just for one little set but has reach, that minimizes prediction error across lots of things, that feels good. Maybe this happens when you have a well-fit idea. I'll share one that's been on the top of my mind recently: I reheard the term networked individualism, and I was like, oh, that explains lots of sociological constructs on the Internet these days, and now I'm seeing it again and again. Maybe that's a form of this minimizing prediction error, where I feel like I've found something that fits in my brain and makes more of the world make sense. Is that kind of what you're saying?


Andy: That's exactly right, yeah. Something like that fits in, literally fits into a space that is there in virtue of persistent errors that you're suddenly able to get rid of, and that feels good.


Rhys: That's the hilarious part about this: it just feels good, like making dopamine or whatever. And I think what you're saying too is that this can be hijacked, both with drugs and with other things, whether we want to talk about religion or whatever else. It's like what you were saying before about the balance between accuracy and complexity: the things that are fit are the things that can minimize multi-level prediction error. And I guess a crucial piece is that something like a religion also says: believe what is over here to be true, and discount that other stuff over there. You have to say, oh no, that person over there saying God isn't real, let's not take that into account in our worldview. So there's kind of a monopolistic environment there, or something like that.

Andy: Yeah, that sounds right. Yeah.

Rhys: So I have another, you know, another...

Andy: I was going to get in with a little something there, which is just that I wouldn't want to make it sound as if I'm anti-drugs here. Drugs can be part of that; classic psychedelics, for example, have a powerful role to play, I think, and that role is illuminated quite well by work in predictive processing. People like Robin Carhart-Harris in London have been looking at the way classic psychedelics might relax the grip of your high-level self-model, enabling you perhaps to have the realization, the actual experience, of the softness of the self, if you like.

If you have very chronic and untreatable depression, a single dose of psychedelics can sometimes make a big difference, and on the Carhart-Harris model, that difference might consist in temporarily giving you the feeling that your picture of reality and your self-model are not set in stone, and that the way you feel right now isn't the way you always have to feel. That can actually be a very, very powerful effect. I think the model Carhart-Harris has is called REBUS, relaxed beliefs under psychedelics, it's in there somewhere; it's a slightly forced acronym. But anyway, there's a line going back there to our earlier discussion about the self.

Rhys: Yeah, like in How to Change Your Mind or The Mind Illuminated and books like that: what psychedelics can do is take your, I forget the name for it, but you might actually know it, this uber-self, the kind of central executive, and say, hey, you should just go away for a bit, chill yourself out. The way I see it, it's taking these top-down models and saying, maybe do a little bit less of that right now, and just allow the bottom-up sensory experience to dictate and flow within itself. And then after that, as you say, you can say, oh wow, those top-down models were just me. I can be something else here. I can create new top-down models of the world and create a new version of myself.

Andy: Yeah, shaking the snow globe, as somebody says. I (()).

Rhys: Yeah, I like that. We're into the last stretch of our conversation, so I want to ask a couple of things here. First is on technology: there's a lot of overlap between predictive processing models and stuff like backpropagation in AI, and there's also a lot of overlap with your work on things like the extended mind hypothesis and, say, Google Glass or Neuralink. What do you see as the endgame, as we get more powerful artificial intelligence and more powerful technologies that can blur the line, or queer the line, between our minds and the world, and change how information flows between them? How are you thinking about the interaction between digital technology and AI and some of these brain models we've been talking about?

Andy: Yeah, well, I think that understanding brains as prediction engines should be a guide to how we build the kinds of technologies with which we most naturally merge, if you like, something like that. What you really need, I think, is technologies that don't require too much attention from the biological brain in order to start to do the right thing at the right time, because attention is a kind of limited commodity in these models. It appears as something called precision, and if you up the precision on one thing, you must turn down the precision on something else (()).
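
The "precision" Andy mentions has a concrete reading in these models: prediction errors are weighted by their estimated reliability, their inverse variance, and attention corresponds to turning those weights up or down. A minimal sketch, where the normalization step stands in for the "limited commodity" point and all the numbers are invented:

```python
import numpy as np

# Precision-weighted prediction error: upweighting one error channel
# necessarily downweights the others once the weights are normalized.

def weighted_update(errors, precisions, lr=0.1):
    w = np.asarray(precisions, dtype=float)
    w = w / w.sum()  # limited resource: precision weights compete
    return lr * float(np.sum(w * np.asarray(errors)))

vision_error, bodily_error = 0.2, 1.5

# Calm state: vision is trusted more than bodily signals
print(weighted_update([vision_error, bodily_error], [4.0, 1.0]))
# Frightened state: bodily error gets high precision and dominates the
# update -- the Feldman Barrett point from earlier in the conversation
print(weighted_update([vision_error, bodily_error], [1.0, 4.0]))
```

A technology that "does the right thing at the right time" without grabbing attention is, on this reading, one that rarely demands high precision.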

So what I think we'll be seeing is a move towards a lot more special-purpose wearable technologies, and away from the general-purpose iPhone-in-your-pocket kind of thing, which does still require a lot of attention from the biological brain if it's going to do the right thing at the right time. Towards a wide variety of different wearables that really do start to behave much more like part of the package that just is you, so that the biological brain gets to dovetail what it does with what they can do, in ways that are as intimate as the way it dovetails with what your body, your muscle and tendon systems, can do.

That's roughly the way I see it going, and it would certainly make quite a lot of sense. Because there's an interesting other layer in contemporary technology, which is that our technologies are now so busy predicting us, and we are so busy predicting them, that there's room for ecosystems to emerge that are not exactly in our interests. So I think there are ways here to think about some of the dangers that, again, will flow from that.


Rhys: Yeah, a couple of pieces here. One is on that final piece, popping it off the stack. I just like this frame of thinking of ourselves as having these models, these constant frames on the world, and then we can start to think about AI having these frames too. This gets into racially biased algorithms and stuff like that, where it's like, oh, you've looked at a bunch of data here and you've generated these frames, these predictive models that say, hey, if you're poor, you're more at risk of recidivism, so let's keep poor folks in jail for longer, things like that.

So being cognizant of that, and using AI as a mirror rather than a crystal ball, I think is pretty powerful. I wonder, though, about what you're saying on these biotechnical niches or something like that. We have the smartphone, which has an ecosystem of apps within it, but it requires our brain's attention. Are there any proto-examples you see of these things that will not require our attention, but will still "help us"?

Andy: That's a good question, actually. There surely are; they're just not springing to mind now that you ask. There are things that I've imagined in various places, like... well, there is one, okay, there's one example that I can think of, which is the North Sense. It started as a little belt, developed so that as users oriented themselves towards north, they got a little vibrotactile buzz. That was later turned by a company called Cyborg Nest into a little implantable, well, a rather superficial implant, like a body piercing in effect, that would again give you that information as you oriented towards magnetic north. Over time, people cease to consciously feel the buzz, but they feel as if they have a new way of inhabiting the world where, for example, they automatically seem to know the direction of their children's school, and they can get a kind of emotionally good feeling by orienting themselves in that direction, no matter where they are, as long as they're in the same hemisphere, at least.

So little things like that, very special-purpose things that deliver a stream of information that can sit maybe a little below the normal conscious threshold, could be very useful. And you can imagine that being done for anything: I could get a little vibrotactile buzz whenever I encounter somebody who is a fan of the same hockey team as me. That wouldn't be hard to do with a bit of face recognition and a bit of social media data; easy enough to get that information back and give me that buzz. So things like that could rather easily be imagined. Which ones would be most useful, rather than just slightly fun little bits of decoration, is kind of hard to say.

Rhys: I think that's a good example. It's kind of funny; it reminds me a bit of Muslims orienting towards Mecca to pray, that being a good shared thing you can do no matter where you are in the world: you have this good anchor you can come back to. So thank you for that. I'll be curious, we'll both be curious, to see what kinds of new subconscious layers of information streams start to feed into our bodies all the time, and hopefully they're good, and we'll make them good for us. Maybe two final questions here. One is, as a recommendation for listeners: where do the great neurophilosophers hang out? Do you have a Twitter thread that you like, or a newsletter, or any recommendations for people who want to do similar stuff to you?

Andy: You know, I don't really, probably because I don't seem to hang out wherever they do. I run into other neurophilosophers and other people working on these kinds of things on Twitter streams and at various online conferences and all kinds of places. But it's not like there's any one place where I would typically go and look for things like that.

Yeah, there are places like the Brains blog, which is nice; that's a place where quite a lot of good cognitive philosophy stuff appears. If someone's got a new book out in that area, they'll normally turn up on the Brains blog, doing a few quick précis of it and interacting with the crowd of people, so that's cool. There are other places like that, I'm sure, but nothing comes to mind as the absolute opium den of neurophilosophy.

Rhys: That's funny. A place where you go every day and just sit and do neurophilosophy, that's funny. Then maybe one kind of funny question here: overrated or underrated? I just ask whether you think something is overrated or underrated, and you tell me the answer. Do you think Google Glass is overrated or underrated?


Andy: Yeah, this one's sort of slightly loaded, isn't it? I think Google Glass is underrated, because it's a wonderful piece of technology, if a little bit clunky in its original incarnations. There's been huge take-up in industry, where it's used for things like, imagine you're working on a wiring loom in an aircraft cockpit, really, really complex wiring, and you don't want to be consulting a manual; you want that information available to you while you've got both hands free.

For the rest of us, imagine if your IKEA flat-packs came with a kind of Google Glass way of assembling the flat-packed item that has arrived. I think the only thing that held Google Glass back, really, was the fact that we're all a bit worried about being under surveillance when we don't want to be, and every time you saw someone wearing it, you thought, oh, I wonder if they're filming me, what are they doing, you know? So I think it's a technology that just needs a few more iterations. I don't know what the first wristwatches looked like, whether they were huge, clunky, nasty things that scared some people away, but if they were, it could be like that.

Rhys: As a final question here, Andy: listeners, definitely check out Andy's work on embodied cognition and his recent book, Surfing Uncertainty, on predictive processing. Are there any other plugs you want to give folks, either a place to find you on the Internet or any final things for our listeners?

Andy: Oh, something I would plug: the University of Sussex, where I'm currently working, has a new program just starting in biomimetic embodied artificial intelligence, and I think that's going to be a very exciting program. It's the first one of exactly that kind that I know of, based around a doctoral training program right now, but there'll be a lot of other things going on too, so maybe look out for that. Yeah, biomimetic embodied artificial intelligence.

Rhys: Beautiful, I'll put that in the show notes as well. Well, thank you again for chatting today, Andy, it was great. Thanks for listening, listeners, and goodbye, everybody.

Andy: Thanks everyone. That was great. Thanks for chatting.