Podcast audio here.
Rhys Lindmark: Today, I chat with Benjamin Bratton, who is an interdisciplinary academic at the forefront of how technology is changing society. And he wrote this brilliant book called The Stack and has continued to write good ideas around how COVID-19 is shaping society.
And today, the main thing that we chat about is how technology is ahead of our concepts and models of us. The technology has changed us so much that we don't have the correct models to understand what's happening. And we dive deeply into that, into his work with The Stack and with this institute in Moscow, and also, these days, into COVID-19 and surveillance.
But the crucial idea here is thinking about this as the switch to the information age, and the creation of new concepts that we can use to model and understand ourselves. That is the focus of today's episode. I hope you enjoy it.
Rhys Lindmark: Hi, hello, everybody. Today, I'm speaking with Benjamin Bratton. Benjamin is really this multidisciplinary type whose work spans philosophy, art, design, and computer science. And he does that as both a Professor of Visual Arts and the Director of the Center for Design and Geopolitics at the University of California, San Diego.
And he's also the director of this really interesting program, the program director of this thing called the Terraforming think-tank at the Strelka Institute in Moscow. And he's very famous for writing this book called The Stack. And it's this book that outlines a new geopolitical theory for this age that we're in of global computation. And I'm excited to dive in with Ben on all these topics. So Ben, thanks for being on the show and welcome.
Benjamin Bratton: Thanks for having me. It’s a pleasure to be here and thanks for your invitation.
Rhys: Yeah, I'm excited to explore. So today we're going to chat about how you understand the world, and how that started with The Stack. And then switching over to your work these days on the Terraforming at the Strelka Institute. And then kind of applying that to COVID-19—you had this great essay called “18 Lessons of Quarantine Urbanism.” But before we dive into that, I want to stay kind of high level for a second and help our listeners understand how you view the world. You're so interdisciplinary and multidisciplinary, and it's really interesting—but what's your goal as you go through the world? And how do you find collaborators and stuff? Tell us a bit about that.
Ben: I'm a social theorist. I'm really a writer at the end of the day. But my own background has been extremely interdisciplinary, as you mentioned, and also kind of interinstitutional, between working in industry and academia.
I think I was someone of a generation who saw really early on the ways in which what I call planetary-scale computation was transforming the world—our political systems, economic systems, cultural systems, and indeed philosophical systems—in ways that were not being properly accounted for.
And so, just autobiographically for me, that led to a lot of interest in architecture, but in a kind of strange way, in that architecture provided a sophisticated language of systems, and spaces, and structures that we could use to analyze, but also to think about composing and producing.
And my interest, I guess in this sense, is thinking about planetary-scale computation through this lens of architecture, which led to the development of the book, The Stack: On Software and Sovereignty and the thesis of The Stack as a way of thinking about planetary-scale computation as what I call an accidental megastructure.
Rhys: Yeah, I want to actually hone in for a second on that architecture piece, which I think is really interesting. I used to work at the MIT Media Lab, which was part of the architecture school at MIT. And there's also A Pattern Language, which is an exciting book and an exciting way of thinking.
Ben: Sure, Chris Alexander wrote that.
Rhys: Exactly, tell me more about why you think architecture is a home for some weird interdisciplinary thinker types?
Ben: That's a good question. First of all, because it's a design discipline, and it has been set up in such a way—particularly within architecture academia, and I think we want to make a distinction here—that it's able to take in ideas and discourses from everywhere: from philosophy, the humanities, the arts, the sciences, and kind of actively misuse them and misinterpret them in really interesting kinds of ways.
And to put them to work, to not just be tools of analysis or tools of taking the world apart and breaking it up into smaller and smaller little pieces. But it's a place in which a lot of those little pieces can be brought back together and reassembled into bigger ideas that you can leverage and use to put one brick on top of another and make something.
And I think in certain respects, my work is probably increasingly philosophical. But my position in that is that, as opposed to an era where philosophy was coming up with concepts and ideas, and then design and engineering was making things that looked like those ideas or resembled them—right now, I think in many ways, the technology is ahead of our concepts, ahead of the ideas that we have at hand to steer it and to compose it deliberately.
And to really understand what's at stake at a more fundamental level. And so for someone like me, getting in and getting my hands dirty with the nuts and bolts of it all, is a way in which to steer the theoretical and philosophical project as well.
Rhys: Yeah, that makes sense. I think what I'm hearing is that architecture is not just about analysis, which is so much of what scientific and other fields are—the whole Western tradition of breaking things down into component parts.
But architecture also has this idea of bringing things back together in a holistic sense and seeing, from a systemic lens, how it all combines together, and I think that makes sense to me.
The other thing that makes sense, that I'm hearing you say, is that traditionally, perhaps, philosophy was pre-technology. And now, though, technology is ahead of the concepts we have to understand it. And maybe this happened before, like with the Industrial Revolution or something, where it's like, “Oh, God, what is this new thing that's occurring?” And the philosophers had to understand it. So tell me more...
Ben: And there was a huge revolution in philosophical thinking at that point in time, everything from positivism to Marxism emerges in many ways in relationship to the transformations in the world.
They're happening through technical systems. And I would argue that this goes back even to the Copernican turn. The way I put it, the definition of a Copernican turn in this sense is: we have technologies that we use based on models of the world.
So we have a model of how the world works, we produce technologies that allow us to act upon the world in accordance with the logic of that model. But sometimes, those technologies reveal that the world doesn't work the way in which we thought it did. It doesn't work the way in which the model that brought that technology about would have us thinking. We have to then revise the model based on what the technology has disclosed.
And I think for my work, part of the way in which we think about technology is that it's not just something with which we make. It is also something with which we think. It's a way of remaking the world, and through remaking the world, it reveals things about the way the world works that require us to conceptualize and to formalize. And this, in many ways, is the basis of the theoretical project in which I find myself.
Rhys: Yeah, totally, that makes sense. And I think that is where you and I are very alike. We can think of this as the tech-society loop: technology changes society, society changes technology. And technology is not just a thing that we make—it actually changes our models of the world and how we think of things. When you talk about the Copernican turn, tell me a little bit more about that specifically... is that a term that people use?
Ben: Yeah, Kant spoke of the Copernican Revolution and framed the formalization of his system of thinking in relationship to it. So the idea that philosophy has some relationship to Copernicus in particular goes back to the beginnings of modern philosophy.
But more recently, Freud talked about the Copernican trauma. The way in which we use the term is a bit more specific than all of that. I would define a Copernican turn, or Copernican trauma, as a priceless accomplishment. And I would define it in this sense: both in terms of that process of technical revelation and disclosure which I just went through, but also more generally, a Copernican turn is when we have some model of the way the world works in which we—that is humans, Homo sapiens, or more often a particular subset of those humans—imagine ourselves to be the center of that system.
And the world has unfolded around us. But usually through some means of technical alienation of our own intuitive understanding of the world—such as a telescope or a microscope or statistics—what is disclosed is that we're not the center of that system; we're kind of off to the side. We may be a very interesting and important part of that system, the one part of the system that is capable of sapience. But we're not the center: our planet is not the center of the solar system, our sun is not the center of the universe.
And these Copernican turns, which, in fact, show us where we actually are within the big scheme, as I say, are priceless accomplishments. I would say that Darwinian biology is a kind of Copernican turn. Neuroscience, the neuroscientific revolution that shows how thinking is not some virtual process, but it is a function of biochemistry itself. AI, I think has been and will continue to be as it develops a kind of Copernican turn that shows that our model of our own intelligence, that is how we think that we think, is not only to a certain extent fictitious, but it's not normative. It is not the normative center of what intelligence is in the world.
And the more we understand the way we think, in relationship to this broader diversity of intelligence, essentially, the better we'll know ourselves in this regard. So once more, this Copernican turn is this kind of displacement process over time that is usually technologically mediated. And again, as far as I'm concerned, that's what philosophy is for. And that's what the pursuit of knowing the world is all about.
Rhys: Yeah, I love the Copernican turn—the move from the geocentric model to the heliocentric model is just the simplest form of it. And now you've seen it in a wide variety of other ways. And there's this repeated reminder that we don't matter the way we think we do—we're wired to think of ourselves as central, or whatever—but really, zooming out to that systemic perspective is powerful.
Ben: But I would qualify that a little bit, just for clarity for your listeners. I do think we matter. I just don't think we matter in the way we thought we did. Like I said, we are, for better or worse, the primary sapient species within the larger neighborhood here. The whole idea of the Anthropocene, of a geologic era that has been a terraforming-scale transformation of the planet in the image of the culture of one species—we're incredibly important.
We just, again, in a certain sense, don't recognize the ways in which we are important, partially because we operate under illusions of where we fit into the larger scheme of things. And so I think part of the paradox of this is that, through the Copernican turn, we understand that the world does not, in fact, operate around us in this cosmological sense. The implication of that is that we actually have a greater responsibility for the subsequent composition of the world, because we are the medium through which, in essence, the world figures itself out.
Rhys: Yeah, I like that push back on both the paradoxical nature here. And I was going to bring up the example of climate change, which shows that things are so much bigger than us. And also, we're obviously affecting the earth system. And so, yeah, keeping that paradox in mind is helpful for listeners.
And also this reminder to listeners that even if there's this nihilistic thought that we don't mean very much in the macro, universal sense—there's a great video from Kurzgesagt called “Optimistic Nihilism”, which is about how, even though things might be nihilistic, living a fun life is still good. So trying to balance those two.
So I do want to transition now into some of the specifics. We know that the Industrial Revolution had the rise of positivism and Marxism, all these things. And now we're seeing this digital revolution. For you, what are some of the specifics, thinking about The Stack and your work with the Terraforming at the Strelka Institute? How are you understanding the world?
Ben: Well, let me maybe explain a little bit of the thesis of the book as an entry point. It's a rather big book—500 pages—structured around a single diagram. You could say one diagram with a 500-page caption is the structure of it. The book, and its subtitle, On Software and Sovereignty, speaks to this phenomenon which I've called planetary-scale computation.
And so, first of all, instead of thinking of computation in a very abstract sense, the way the theory of computation is taught in computer science departments, or instead of thinking of computation as a kind of object in the world—that's a computer but that other thing is not a computer—what we're looking at is understanding computation as a planetary-scale infrastructural phenomenon, which includes the types of things we would normally think of as the internet, but also includes much more besides.
It's an identification of this construction—again, of what I call the accidental megastructure of planetary-scale computation—that includes terrestrial, oceanic, and urban technologies, sensing mechanisms, and all the rest of it.
And the argument that the book makes is essentially twofold. One is that planetary-scale computation both distorts and deforms traditional Westphalian models of political geography—and by Westphalian I mean the geographically, territorially bounded secular nation-state—and produces new territories in its own image. And so we are already in the process by which the maps are being redrawn in relationship to the affordances of this new infrastructure.
And second, it argues that instead of thinking about all of the different kinds of species or genres of planetary-scale computation—smart grids, cloud computing, smart cities, the internet of things, AI, augmented reality—as a bunch of different species all spinning out on their own, we can actually think about them fitting together in a way that is not unlike a network architecture stack, OSI or TCP/IP for example, in which there are layers of the composite that are defined by their particular functions.
It's possible, in other words, to think about planetary-scale computation as a kind of model, at least in the model sense, as a kind of totality. And when we think of it as a totality, it allows us to understand patterns and structures within the whole that might have been obscure, if we were focusing on particular moments or particular places.
And like any kind of model, when you have that totality, it also allows you to recursively act back upon the system. And so a lot of what the book is, is both a kind of explication of how we got here—what the history of the system is, and what its history is in relationship to the history of political geography itself, going back to the early modern era—and a description of how these six layers work. For the book, the model is that there's an earth layer, a cloud layer, a city layer, an address layer, an interface layer, and a user layer, where we sit. And so the book lays out a description of the reality of each of these layers.
And then the third part is a proposition for what may be to come. Because part of the way in which stack systems work—and I mean hardware-software stacks—is that they're designed to be replaced: layers are defined by the function of the layer, not by the qualities of whatever happens to occupy it. It's modular. It's designed so that layer three can be replaced by something else and the whole system still works; layer two or layer six can be replaced, and the whole system still works.
And so the stack that we have, in other words, is not the stack we will have in the future. The stack to come. The design of the stack to come, the composition of the stack to come, the conceptualization of the stack to come is the project. It's the design project. It's the political project, it's the economic project, it's the philosophical project that we have in front of us for the coming decades.
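To make the modularity point concrete, here is a minimal sketch in Python of the idea that layers are defined by their function rather than by whatever happens to occupy them. The six slot names follow the book's model, but the `Occupant` and `Stack` classes and their behavior are illustrative assumptions, not anything the book specifies.

```python
from dataclasses import dataclass

@dataclass
class Occupant:
    """Whatever happens to fill a layer: a firm, a protocol, a state platform..."""
    label: str

    def handle(self, message: str) -> str:
        # Each occupant simply annotates the message as it passes through.
        return f"{message} -> {self.label}"

class Stack:
    """Six functional slots, following the book's layer names; a slot is defined
    by its function, not by the qualities of whatever occupies it."""
    SLOTS = ("earth", "cloud", "city", "address", "interface", "user")

    def __init__(self, occupants: dict) -> None:
        self.occupants = dict(occupants)

    def replace(self, slot: str, occupant: Occupant) -> "Stack":
        # Swap one layer's occupant; every other layer is untouched and the
        # system as a whole keeps working.
        swapped = dict(self.occupants)
        swapped[slot] = occupant
        return Stack(swapped)

    def traverse(self, message: str) -> str:
        # Route a message from the earth layer up to the user layer through
        # whatever currently occupies each slot.
        for slot in self.SLOTS:
            message = self.occupants[slot].handle(message)
        return message

# A hypothetical stack, and the same stack with its cloud layer swapped out.
today = Stack({slot: Occupant(f"incumbent {slot} platform") for slot in Stack.SLOTS})
tomorrow = today.replace("cloud", Occupant("some successor cloud platform"))
print(today.traverse("packet"))
print(tomorrow.traverse("packet"))
```

The only point of the sketch is that the slots are functional: replacing what occupies the cloud slot leaves the rest of the traversal unchanged, which is the sense in which "the stack we have is not the stack we will have."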
Rhys: Yeah. So I just want to kind of repeat some of that back for our listeners and reflectively listen. I mean, I love that explanation. I think the first thing you noted is, yeah, this idea of planetary-scale computation, which is this accidental megastructure. And I love thinking of this as we completely changed how information is propagated in the world. And it used to take a lot more time to do that. And now that we have this crazy, big megastructure, it completely changes our nation-state systems.
And so, thinking about that, it's like: we had the Industrial Revolution, and we can think of our nation-states as we know them today—that Westphalian geography—as a part of, a result of, and co-evolved with the Industrial Revolution. And now, once we have TikTok, but also the internet and all these important things, the geographical structures that we have today, Westphalian politics, are going to change. And we're already seeing that breakdown in a wide variety of ways. So I totally agree with that.
Ben: Right. But let me sort of anticipate where we might go with this, because you mentioned TikTok. We're all following the headlines every morning about all the different kinds of ways in which it seems the opposite is happening—wait a minute, it's not that states are going away, it's that states are actually fortifying themselves through the exclusive nationalization of technologies: apps that aren't available in China are only available in the US, or vice versa.
GDPR Europe is becoming this Galapagos data jurisdiction. The important point I want to emphasize—and perhaps I should clarify this in terms of an interpretation of the book—is that the argument is that the cloud layer, and cloud platforms in general, are taking on more and more of the traditional functions of the modern nation-state: legal identity, currency, cartography, and so on.
But at the same time, states are taking on more and more of the technical capacities of cloud platforms. So cloud platforms, in essence, evolve towards something in which they function as kind of state-scale stateless actors, or at least have the capacity to do so. At the same time, states are evolving, as I say, into state platforms; the cloud allows states to see things, to process things, to structure knowledge about the world and about their populations in ways that were impossible before.
And so the phenomenon I identify with this is what I call hemispherical stacks. It's not like there's one stack that operates the entire world. Rather, you can imagine a kind of mitosis by which the stack genus is splitting into a Chinese-centric stack and an American-centric stack; Europe is a little bit different, and Russia has its own kind of structure here as well.
And this is the important point: the split towards a multipolar, hemispherical geopolitics and the split of the stacks towards these multipolar, sometimes technically exclusive and antagonistic relationships are the same phenomenon.
It's not that politics is causing the technology to split or the technology is causing the geopolitics. But they are in fact, the same thing, that our geopolitics and our geotechnologies are so bound together, that we can't really think of one without the other. And that's what I mean when I say that it is producing new geographies in its own image.
Rhys: Yeah, I like that. It makes me think of two things. One is this concept of new institutional economics and institutional coevolution—there have been some articles about this, around, okay, Wikipedia as a new kind of institution, a new open-source institution, which is really interesting.
And we are now seeing institutional coevolution in general: nation-states are "battling with and co-evolving with these tech platforms." And it's not just tech platforms versus the state; the state is sucking parts of the platforms into itself—maybe the surveillance authoritarianism kind of stuff.
And then also, something like Taiwan, with this beautiful civil society stuff. And you can see the vice versa, which is platforms sucking in things the state normally did—a.k.a. identity and stuff like that. And so there's this battle between the states and the platforms, which is going to be fascinating as it unfolds.
The other thing I wanted to note here was just thinking about the geopolitics on the tech side. We can see these patterns with past technologies too—thinking about nuclear security and the Cold War, nukes obviously co-evolved with the USSR-versus-United-States dynamic. And we're going to see more and more of that going forward. I want to ask a specific question here, though, which is that I really like what you said about these protocol layers.
Ben: On the arms race connotations of this—yes, for sure. Just to underscore your point: part of the geopolitics is that the history of technology is so tightly bound to military strategy. We have no reason to think that would ever be otherwise. And as the question of the planetary moves not just across the terrestrial but up into space, I don't see any reason why that dynamic wouldn't continue.
Rhys: Yep, the arms race was happening with respect to attention on the internet—a.k.a. Facebook and YouTube battling for the same eyeballs—and it is also happening with respect to the nation-state surveillance-authoritarianism vibe as well. The one thing that I love about the layers here in The Stack is that, as you noted, there's the micro layer at the bottom—the user layer—and then it goes to the interface layer, address, city, cloud, and then earth. And I think one that is really interesting is this user-interface layer. Tell me how you think about that interface, that API between us as humans and the stack above us?
Ben: Sure. So just a minor point, but in the way the model is drawn, I usually have the earth layer at the bottom and the user layer at the top. And part of the argument there is also just to make conceptually clear that this whole system is dependent upon the foundation of the energy that drives the whole thing. We were taught by 90s discourse that the digital was virtual and the analog was real.
But computation is a deeply physical phenomenon. The stack is a hungry, hungry infrastructure: its energy appetite, its mineral appetite, is enormous. And we can't think of it as somehow virtual or immaterial. It's a foundational structure. So that was part of the logic there. Now, to your question about the user and the interface—which, I think you're right, is a very fascinating moment of contact here.
So first of all, I should say that the user, as we define it, is a position. It's a position within a system, and it's a position within the stack that could be occupied by a person, as you say, but it could also be occupied by an AI, it could be occupied by a robot, it could be occupied by a human and an AI working together. It could be an automated human-AI pair and two robots working together.
It's a position that you or I step into or out of many times over the course of the day, and the point is that we've developed this infrastructure that works just as well either way. As far as the rest of the stack is concerned, a user is whoever or whatever is able to interact with the interface layer in order to send and receive commands and information up and down the rest of the stack.
So that could be animal, vegetable, or mineral. It's agnostic in this regard. And in terms of the implications of that for the future of AI and automation more generally, there's obviously much to say. Now, the interface layer, as we define it, is, in essence, the way in which the rest of the stack represents itself to a user.
So that the user can have some sense of what the possibilities for action are, and can choose one of those possibilities and act back upon the system. Just think about the interface on your computer screen. The processor in your computer is capable of doing so many things, but it's too complex; there are too many things it could possibly do. And so the interface produces this diagrammatic fiction—a skeuomorphic, diagrammatic fiction—of certain things that you might want to do with it, and presents us with pictures that we take to resemble the actions we would like to take.
So a trash can involves file deletion, and so forth. But first of all, in that, there's necessarily a kind of ideological and cognitive reduction of the space of positionalities within the interface itself. And this introduces all kinds of interesting complexities in terms of the way in which certain kinds of interfaces actually structure what it is that we understand is possible to do with the system itself. My Linux friends have made this point for a very long time; Neal Stephenson, of course, wrote a book about the superiority of the command line. But there's another dynamic herein that the interface allows for: there's a kind of translatability of what planetary-scale computation is capable of, where it translates those affordances into something that we, the simple Homo sapiens, are able to understand in terms of our symbolic reasoning.
And this allows more of us to participate in it. And so you have all these network effects, where the simpler or more intuitive the interface, the greater the network effects of engagement there are as well. But one of the things that I talked about in the book, in terms of an area of concern here, is in relationship to augmented reality and the interface. And I'm very interested in AR—I think it's wonderful. At the same time...
My suspicion is that there's a difference in the way we would cognitively, neurologically process interfacial information that is on a screen—we process that as if it is a kind of artifact, rather than a real event in front of us. But what if we're walking around the world and the things that we see in the world are literally subtitled or narrativized or augmented for us in the moment of direct perception?
I think it will be harder to make the distinction between what is an interface and what is real. And I think in the long term, the capacities for more manipulative forms of cognitive fundamentalism and platform lock-in will become even greater. But there's much to say on this.
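A minimal sketch of the interfacial reduction Ben describes, with entirely hypothetical operation and icon names: the underlying system can do far more than the interface pictures, and that pictured subset is what shapes what a user takes to be possible.

```python
# A tiny, hypothetical inventory of what the underlying system can actually do.
SYSTEM_OPERATIONS = {
    "unlink_inode", "truncate_file", "remap_pages", "fork_process",
    "open_socket", "schedule_thread", "flush_cache", "sign_blob",
}

# The skeuomorphic, diagrammatic fiction: a few pictures standing in for actions.
DESKTOP_ICONS = {
    "trash_can": "unlink_inode",   # dragging to the trash ~ file deletion
    "document": "truncate_file",
    "padlock": "sign_blob",
}

def visible_possibilities() -> set:
    # What this interface lets a user understand as "what the system does":
    # only the operations that have a picture.
    return set(DESKTOP_ICONS.values())

def hidden_possibilities() -> set:
    # Everything the system can do that the interface never represents.
    return SYSTEM_OPERATIONS - visible_possibilities()

print("exposed by the interface:", sorted(visible_possibilities()))
print("never pictured at all:   ", sorted(hidden_possibilities()))
```

The command-line point is that a shell exposes that operation space far more directly, at the cost of the intuitive simplicity that drives the network effects Ben mentions.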
Rhys: Yeah, a couple of pieces there. You started talking about the megastructure—thinking of the stack as kind of a being that is hungry. One way that I like to view it is that there's this frontier of bits out there that we as humans want to explore and develop and exploit. And it's a hungry thing, and it feels almost inevitable that we keep doing that.
And then thinking about the specific interaction between that thing and us, this API between the two. Yeah, I love how you talk about how the stack—this bunch of bits—needs to represent itself to us simpleton human types, through the floppy disk and those icons.
And you can think of just how when you're interacting with the world, this is just a constant reminder to listeners of, hey, when Twitter went from 140 characters to 280 characters that changed the interface, and it changed the kinds of conversations that we could have on the internet.
And so I think, thinking about this desire for the simpler interface, and the less friction, and then maybe pushing against that or seeing that there are other spaces that we can explore there. And then the third piece that you talked about here, which I think is powerful and dangerous is how AR is going to combine the symbolic and the real.
If we walk through the world today, we see, okay, there's a person over there, and he or she is African-American or Asian or white or whatever. But those are your own internal processes happening—symbols that you're applying. If you then imagine that the symbols being applied in your mind are put there by someone else, and that they're saying, that is a bad person or that is a good person—that is going to be ripe for some difficulties.
I do want to keep going, and move from the stack now to your work today with the Strelka Institute and the Terraforming program that you have. So maybe it would be good to talk about that program and its goals. Yeah, tell us about that.
Ben: Sure. So the Terraforming is the name of a think-tank that I direct at the Strelka Institute in Moscow. I've been the program director at this institute for four years; it's an independent nonprofit educational institute looking at the future of cities more generally.
The think-tank meets for a very, very intensive residency of five months. We just finished the first year of this and we are in the process of looking at applications for the second year. It's extremely interdisciplinary. Our faculty are drawn from political science, architecture, design, anthropology, philosophy, a lot of the really interesting research labs within the big platforms.
The notion of the terraforming that we're working with here, that we're proposing here, is not the terraforming of Mars or the terraforming of the moon. It's the idea that, with the Anthropocene, we have in essence terraformed the Earth, but have done so through this kind of headless, pilotless, and in many ways catastrophic recklessness.
The terraforming that we're talking about is not about making the moon viable for Earth-like life. It's the idea that, in order to ensure that Earth remains viable for Earth-like life, the response to anthropogenic climate change is going to have to be equally anthropogenic.
Whether we want to or not, we are going to have to fundamentally transform how it is that, as a population of 7 billion primates, we occupy the surface of the planet, draw energy from it, feed ourselves from it, and house ourselves, in one way or another. It would be a geoengineering-scale initiative, but not just about transforming clouds or solar radiation or something—rather, a more deliberate process by which we compose how it is that we occupy the planet, towards what we call a viable planetarity.
That being said, there are a few rather more specific ways in which we come at that problem that make our program unique. One is, as I've already indicated, that there's a kind of leaning into the artificiality of this, of understanding that the interrelationship between technical systems and ecological systems is permanent. And that the distinctions between them are themselves largely fictitious, and so that the artificiality of this transformation is something that we embrace.
Second, in terms of the geopolitics of this: we tend to think of these kinds of big-scale transformations as, first, we need to change our way of thinking, then we need to change our politics, and then the politics will make available certain kinds of technologies that will have a material and direct effect on the problem.
We're allowing for looking at this the other way around, by looking at the history of technology in such a way that, in many cases, the appearance of technologies has transformed geopolitics in their image. And so, for us, the relationship between the geotechnologies that we need and the geopolitics that we need is tied together very, very closely.
And maybe the third one—this is where the stack comes in—is that we're asking what the impact of planetary-scale computation is, and what positive impact planetary-scale computation could have on that initiative: on the response to the Anthropocene, the mitigation of the most catastrophic effects of climate change, and the transformation of geopolitics and geoeconomics towards a more stable and equitable arrangement.
What we first realize is that the very idea of climate change—not the scientific fact of it, but the concept of climate change—is an epistemological accomplishment of planetary-scale computation. It is because we have this massive sensing, modeling, calculation, and simulation infrastructure that we are able to produce the simple statistical pattern that makes the construct of climate change possible, which implies a few things to me.
One, in terms of the philosophical changes here: one of the philosophical and conceptual changes that planetary-scale computation brings with it is a conception of planetarity as such—an ability to conceive and model and represent a planet all at once, in ways that are significant and meaningful and important. And the other is that it demands mechanisms by which it's possible, again, to recursively act back upon that, and to structure our response to this, in ways that the situation not only allows for but in fact demands.
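As a toy illustration of that epistemological point—not the methodology of any real climate model—here is a sketch in which readings from a few hypothetical stations are aggregated into one planetary series and summarized with a least-squares trend; all station names and numbers are synthetic placeholders.

```python
from statistics import mean

# Synthetic placeholder readings: yearly temperature anomalies (in degrees C)
# from a handful of hypothetical stations. A real sensing layer involves orders
# of magnitude more instruments, plus satellites, buoys, and simulation.
STATION_ANOMALIES = {
    "station_a": [0.10, 0.14, 0.18, 0.25, 0.31],
    "station_b": [0.05, 0.11, 0.20, 0.22, 0.29],
    "station_c": [0.12, 0.13, 0.17, 0.27, 0.33],
}

def planetary_series(stations: dict) -> list:
    # Aggregate local readings into one planet-scale series: the mean anomaly
    # across stations for each year.
    return [mean(year) for year in zip(*stations.values())]

def linear_trend(series: list) -> float:
    # Ordinary least-squares slope: the "simple statistical pattern" that a
    # planet-scale concept gets read off of.
    xs = range(len(series))
    x_bar, y_bar = mean(xs), mean(series)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, series))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

series = planetary_series(STATION_ANOMALIES)
print("planetary mean anomaly per year:", [round(v, 3) for v in series])
print("trend (degrees C per year):", round(linear_trend(series), 4))
```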
Rhys: Yeah, interesting.
Ben: Go ahead.
Rhys: Oh yeah. So it's a program in Moscow—you've been the program director at the institute for four years—and there's this residency piece to it as well, the Terraforming, which brings together a bunch of interdisciplinary folks to work on these problems of terraforming our Earth in a geopolitical, geotechnical image.
And I really like what you're saying—I was going to emphasize this earlier—but the way I'd reflect it back is that I like to think of our future as the coevolution between Nature 1.0, a.k.a. the Earth, which, as you say, we can now grasp through this amazing epistemological accomplishment of planetary-scale computation.
So there's Nature 1.0. There's Nature 2.0, a.k.a. all the humans, the networked human organism. And then there's Nature 3.0, which is the networks, AI, and computation piece. And those three things coevolve with each other in a powerful and interesting way. One question I have about the program: can you give me a specific example of how you're thinking that geotechnology affects geopolitics?
Ben: Sure. A couple of things we were looking at in the first year—we did a lot of interesting research, speculative design, and films around two themes that speak to this. One is around space law. Space law is increasingly a really interesting area of legal thought, both because there's so much interest, attention, and indeed money going into space exploration once again.
And so the questions around this are coming to the fore. But the way we're looking at it is also by thinking about part of the problem we face with climate change models, for example: planetary-scale computation has produced these enormously confident models of the dynamics of very complex systems.
But we're not able to act upon the implications of those models; the model is merely a representation of a circumstance. Unlike the way, say, models of the market actually bend back and structure market activity in the first place, our models of ecosystems don't have that capacity.
Political scientists have pointed to this—they call it a problem of global governance: we have phenomena that are taking place at one scale, but the governing systems that would be able to act upon them are operating at a very different scale. And this mismatch causes all kinds of noise in the system.
Space law is interesting in that it's a body of precedent, a jurisdictional logic that takes on the adjudication of an entire planet as a whole. It's often used to talk about mineral rights on the moon or Mars, or who has the right to do what in low Earth orbit, and so forth.
But again, one of the things that planetary-scale computation produces is the capacity to conceptualize planetary systems as a whole—so we're thinking about the legal structures, the ways in which the implications of those models could enforce themselves.
Space law is one of the areas we're very, very interested in here. It's a clear example of the development of a geotechnological system that produces phenomena that need to be accounted for politically, and that then demands—necessitates—a transformation in political institutionality in order to account for that reality.
Another one that we looked at, in relation to the kind of example you asked for—how geotechnology can produce geopolitics—has to do with some of the work we're doing around negative emissions technologies. Negative emissions technologies, or NETs, is a general term for any kind of technology whose function is, in essence, to absorb atmospheric carbon and CO2 at a scale that is climate-significant.
Several recent IPCC special reports indicate that even the least ambitious of the warming targets are unlikely to be realized with cuts to new carbon emissions alone; at the same time, there is also going to have to be a climate-significant subtraction of carbon and CO2 already in the atmosphere. There are any number of technologies that can demonstrate this in the lab, but understanding how to do it at the scale that is needed is more a political and economic problem.
And so one of the projects that we did called To Bury the Sky was looking at the specific potential of Siberia, particularly the subterranean basalt in Siberia, as a likely area in which you could do this carbon sequestration at an enormous scale.
And that scale is really what is necessary. And so this research not only worked out ways in which you could power this and structure it, but also did quite a bit of work on the economics and the politics of the whole thing.
Rhys: Great, got it. Yeah, I think those are two good examples. Space law—I mean, obviously, 10,000 years ago we weren't thinking much about space law, but now that we can actually go into space, it's a frontier, and you say, “Ah, who should get which pieces of land, and where should you land?”
And where should you be able to go on an asteroid, or with satellite patterns? That new, macro, planetary-scale technology then leads to some of these interesting bits of space law and space institutions that we need to create now. And then similarly with the negative...
Ben: And just to clarify: I think part of the implication here is that we think of Earth as in space, right? It's not as though there's Earth on one side of a line and over there there's space. In space law, the Earth is in space, obviously. And so it is in fact part of this jurisdictional thing, part of a legal shift in which our understanding of Earth—of our planet as an astronomic entity, and of our own positionality—comes to take on more political and legal significance.
Rhys: Yeah. I liked that. We think of physical space, usually not as space space, but actually we can just extend the map a little bit further, so it includes out there.
And then the second one is negative emissions: that we need to do negative emissions work in order to help with the climate crisis, and that thinking about how to do it at scale, in Siberia, raises some of the economic and institutional changes that would need to happen if we wanted to really “bury the sky,” as you said.
So that is all fascinating. And I want to switch to one final topic before we wrap up the show, and this final one is about COVID-19. We are currently in a global pandemic, and you wrote this great article called “18 Lessons of Quarantine Urbanism” right at the beginning of it.
And I just want to do kind of a high-level take... I'll share the article with listeners. It's these 18 lessons, and at least nine of them are great. You know? I mean, seriously, that's a high hit rate—nine of them are really good. So just give me your high-level view on how you're viewing COVID-19 itself, or lay out some of these lessons of quarantine urbanism.
Ben: Yeah, thank you. I should also say, for your listeners, that I'm currently turning this article into a book called The Revenge of the Real, which will be coming out from Verso next year and which is a little bit deeper of a dive into this. Obviously there are many things I talk about in there, but three dovetail nicely with what we've talked about here. One is that, when we're looking towards a post-pandemic politics, one of the ways we will look back at the pandemic is as this incredibly strange but also amazing experiment in comparative governance.
In which the virus was the control variable, and different governing systems either dealt with it well or dealt with it very, very poorly. And there are clear patterns in terms of the kinds of political systems that we may want to encourage or discourage going forward, based on the results here.
It's clear that the governing systems that were highly trusted, that were technocratically competent, that already had a robust, inclusive, and equitable sensing-layer capacity within their medical systems, and that generally intervened early, did well.
Populist governments did really, really poorly. And one of the things we may anticipate from this pandemic is that those waves of populist politics of the last few years may not survive this moment, whether it's Trump or Boris Johnson or Bolsonaro.
And I would define populism in this way: as the idea that the mythological narratives a particular culture has about the way it works are somehow imagined to be ultimately more powerful than the underlying biochemical reality of the world itself.
That lasts a little while, but ultimately it comes crashing down, and we're seeing it happen all around us. Second, one of the things that emerged with the pandemic that we will want to keep going is what I call the epidemiological view of society.
That is, as we all found ourselves mid-pandemic, looking at our phones, watching lots of charts, and keeping track of statistics about how close the wolf was to the door and what the future might look like, we all became amateur epidemiologists, trying to understand the logics and dynamics of contagion.
And as masks became a more important part of what it meant to be part of society and to participate in society, this epidemiological conception of what our society is made something possible: when I wear a mask and go out into the world and see other people, I'm imagining myself as a kind of biological object or entity that is contagious and could do harm, whether I mean to do harm or not.
And we begin to understand ourselves as a population whose biological proximity has importance and effect beyond the symbolization we might have for it. This is an embrace of the real that will also do us good going forward.
The third one—and this is the one that got the most attention; it was probably the most controversial, to be honest—was one in which I made a bit of a pushback against the now predominant anti-surveillance discourse, Shoshana Zuboff's surveillance capitalism stuff, things like that.
And we could talk about that more specifically but what I was arguing here is that in fact, I'm not arguing that surveillance is good, what I'm arguing is that the term has been overinflated, and it's used to describe too many things, that it's used to describe any way in which a society would want to sense itself, model itself, and act upon itself deliberately, to compose itself deliberately through the models that it produces based on that sensing.
To call all of that surveillance, and therefore a form of pernicious social control that captures otherwise free individuals in some hard or soft totalitarian gesture—the pandemic demonstrates many ways in which that just doesn't work.
One of the things that climate science demonstrates is that there are other ways we can produce planetary-scale, data-driven models of how the world works—models that are secular, that are useful, that are confident, that should be driving our politics.
The ways in which we've chosen to use planetary-scale computation—for advertising, for manipulating people into clicking on stupid things—are not the ways we should be using it. We should be using planetary-scale computation more like the way we use it for climate science: as a way of allowing societies to sense, model, and act upon themselves deliberately. That, I think, is one of the foundational ideas for what post-pandemic politics should be about.
Rhys: Great. Yeah. So I just want to reflect those back for our listeners, because there's a lot of juicy, good stuff there. The first is the comparative governance piece: technocratic governance on one side versus populism on the other, and the technocratic governance did better.
If you can sense your population better—if you can do a little bit of, well, let's call it surveillance or whatever—and actually take good actions with respect to the science, then you obviously did a lot better. Versus, as you note, there's this—what's the term?—this symbolic thing, in the sense that the narratives of Trump and Bolsonaro and those folks are powerful and hit our emotions, but they're just looping on themselves, and eventually they have to come into contact with reality, with biochemical reality.
Oh wow, just saying the virus is going to go away doesn't actually make it go away. And so thinking about those two different things is interesting. The second piece...
Ben: The implications of that for climate change should be obvious.
Rhys: Mm, yes. We can't just say XYZ about climate change; we need to connect things back to our biochemical reality. Is that what you're saying?
Ben: Yup. Reality bites last.
Rhys: Nice, I liked that: reality bites last. And the second piece—I love this epidemiological view of society. I think it's part of a family of ways of seeing: seeing like a state, seeing like an internet platform, seeing like one of these AI interfaces we were talking about. Being able to take the perspective of both yourself as an individual and of these greater collectives is really powerful.
And I really liked what you said about it being a reminder of our own physicality and biophysical reality—that we are moving through the world and seeing ourselves as just this biological being connected to all these other biological beings. So it's, again, this connection back to the real. I also think that...
Ben: Yeah, and seeing yourself as part of a population that you are inevitably entangled with. It's not that you're an individual first and then secondarily you enter into this collective—no, you're part of this species population to start with; that is what you are.
Rhys: 100%, whether or not there's a pandemic. And we can talk more about that, but there's this creation of individualism within Western societies, WEIRD societies, and that has been good in a variety of ways, but we're also seeing its negative manifestations as we fail to connect rights and duties and to see ourselves as part of a collective.
And so I do think that this epidemiological view will help us see ourselves in a more networked and collectivist sense. And then the third piece that you mention here—I super agree that surveillance has become an emotive term. If you think of us collecting data about ourselves, that is, in the positive sense, just establishing what is true.
And then you can either normativize it and say, oh, it's surveillance, oh, evil; or you can say, oh, this is actually a kind of beautiful climate-change-style model that's helping us understand how transportation works in our city or whatever. So do you have a proposal for a different word we could use instead of surveillance?
Ben: I call it the sensing layer. It's basically just the way in which a society is able to sense what's going on within itself. And that would include not just super fancy chips from smart-city infomercials; it would also include having enough nurses that you can equitably and inclusively provide medical care to the entire population, so that those people can put a thermometer on someone and say, “Okay, this is a point where we need to care for someone.”
There's also this supposed dichotomy between high-touch care—the provision of social care—and high tech. I don't believe in this dichotomy; I think you need both. So the sensing layer would be my term.
Rhys: Yep, great, that makes sense. I like that. So to conclude, let's get into wrap-up mode. At the beginning we talked about The Stack and how you view society, which is this tech-society loop thing: hey, we have this new technology drastically changing how we view ourselves, and we can positively shape that if we understand it.
And so trying to understand it is a lot of your work, and The Stack is a great book that I recommend for readers. I've read it; it's quite good—especially if you're interested in understanding, from an abstract perspective, how these kinds of systems coevolve with each other and how computers co-evolve with humans. It's a good read.
And then second, we talked about your current work with the Strelka Institute and the Terraforming program, and November 10th is when folks have to apply for the next one. I'll add this to the show notes, but if you go to theterraforming.strelka.com you can check that out, or just Google Ben and you'll find it. So apply by November 10th if you're excited by thinking about how geotechnologies shape geopolitics and vice versa.
And then finally, we chatted about quarantine urbanism and COVID-19, and some of the learnings—and I guess there'll be a book coming out about the real.
Ben: Yeah, it's called The Revenge of the Real: Post-Pandemic Politics.
Rhys: Exactly, beautiful. Anything else you want to say? Is there any place people can find you on the internet?
Ben: Yeah, sure. I very much appreciate the chance to speak with your listeners and to cover so much ground. For those of you who might be interested in some of the more applied work that we're doing through the Terraforming program, that URL that Rhys mentioned, theterraforming.strelka.com, is also where you'll find a copy of the short book I wrote called The Terraforming.
It's about 30,000 words or so—a design brief slash manifesto for this initiative. It's free to download, so go ahead and have at it, and I look forward to being in touch with your listeners in the future.
Rhys: Yeah, great. Well, thank you again for coming on today, Ben, and I hope all the listeners enjoy this episode. Goodbye, everybody.
At the end of an episode, I like to give my own thoughts about it, and here they are. I have three main thoughts. The first is this idea of being a social theorist, and Ben's point that tech is ahead of the concepts that we have of it. I think that's a brilliant way to talk about our current situation in society: most of the time, our concepts and our models of reality fit with reality itself.
But during some intense periods of upheaval—the agricultural revolution, the industrial revolution, the information revolution—the model that produced our reality no longer holds. Either our sensing changes what we can understand about ourselves, or these new technologies change us and change the world so deeply that our models of reality have to change, and our understandings have to change.
And yeah, I think that's a great way to think about it. For Ben, connecting it back to The Stack and planetary-scale computation: those things, the internet, disrupt Westphalian geography and create new geographies in their own image.
And I think that hits the two crucial pieces here. The internet is changing our current, industrial-age models of the nation-state—you can see that in small ways with something like how Facebook was part of the Arab Spring, or how surveillance is part of the Hong Kong struggle for independence—and our nation-state systems are going to be continually rocked by these new digital technologies.
And then on the other side, it's creating new geographies in its own image. A lot of those geographies we see just on the internet, like subreddits or QAnon conspiracy theorists or things like that.
And there are also new geographies that show up in reality. This is when you have a Bitcoin meetup group that meets at the local bar, or when you have San Francisco as an entity show up in a massive way, just like how London or New York City showed up in the past.
So I think this idea that tech is ahead of our concepts is a crucial one, and there's so much juicy space to explore. Just as Ben was saying in the show, Marxism and positivism and all these ideas showed up in the industrial age as a response to the industrial moment and the changes it brought to society.
So, too, is there so much amazing untapped space for new kinds of social theory and models of our ability to understand ourselves in the information age.
Okay, that's point one. Point two is this differentiation Ben makes between surveillance and a sensing layer. And I really like this because we have these negative instincts towards surveillance.
Especially for me, coming more from the cryptocurrency world, surveillance is evil—whether it's surveillance authoritarianism from nation-states or surveillance capitalism from companies. That's bad. But the act of trying to sense our reality, to do sense-making, to do epistemology—
doing that and then using it to inform good decision-making—that's a good idea. And I love the example of climate change, where it's like, hey, we actually like it when we surveil our climate; we don't call that surveillance, we call it modeling, but it's the same thing.
It's just a question of whether you're modeling, getting data, and sensing the Earth and non-human biological systems, or whether it's human systems.
And I do want to say, I love this line from Ben: that the concept of climate change itself is an epistemological accomplishment of planetary-scale computation. That's so true. The fact that we can look at all these models, look at Google Earth, see smoke models and all these things—that is a sign that we can surveil the Earth at this global level really well.
I do think there's a huge opportunity, though, for a new kind of surveillance that is not authoritarian and not deeply capitalist or what have you, but instead adds to our societal sensing layers. I don't know what the word for that is—Ben has provided the term sensing layer, but I don't think that's quite enough. There's this word sousveillance, which means bottom-up surveillance: something like smartphones taking images and videos of the George Floyd killing.
That's an example of our ability as a society to point our technological tools at bad things—or good things—and show, hey, these are happening, let's deal with them, and to do it in a more bottom-up way. So I think there's some term, or some kind of movement, that allows us to surveil ourselves in a delightful, powerful, bottom-up way. There's a lot of juiciness there.
My third and final point is kind of a meta point about how folks like Ben and other social theorists should interact with the builders of the world. This is a more general question about academia and industry, and Ben exists at this intersection to some extent. But if we take this idea that, hey, we should have the sensing layer—that we shouldn't just break things down into surveillance and say, oh, that's evil—
then how do we actually build out something like that sensing layer? How does it go from an idea or a book to being implemented, or invested in by venture capitalists, or funded by grants and civic technology?
How do we take this architecture-as-design-discipline idea and really push the building and design side? I don't know the right answer to that. I think it's an information logistics problem, and it's obviously a juicy area of exploration as well. So yeah, I think this was a good episode.
It's cool to hear Ben coming at problems from angles very similar to the ones I come at them with, but for him it's a much more academic angle, steeped in architecture and philosophy and more reading. I think that's powerful. Okay, have a good week. Goodbye.
Thanks to Alfred Malaza for the transcript.