This is the seventh interview in the series initiated by the PENTACLE team. The interview was conducted, transcribed, translated into Turkish, and revised by Başak Ağın and Şafak Horzum. Click here for the Turkish version.
Kevin LaGrandeur is Professor of English at the New York Institute of Technology. He has published many articles and given many conference presentations on digital culture, Artificial Intelligence and ethics, and literature and science. LaGrandeur has been appointed Fellow of the Institute for Ethics and Emerging Technologies, an international technology think tank.
Başak Ağın: Hello, Professor Kevin LaGrandeur. We prepared the interview to be a very short and sweet one. We have five questions, and two of them are about your general take on the idea of posthumanism and how you became involved in the Global Posthuman Network. The other three are tailor-made for you and are based on your own works, so we will be asking your opinion about certain things. Shall we start?
Kevin LaGrandeur: Yeah, great.
BA: Kevin, thank you very much for accepting our invitation for the interview. You have a degree in English literature and critical theory, so you’re very much interested in philosophy as well as languages. Most of us who study in the posthumanities come from similar backgrounds, but we have different ideas and opinions on the concept of the posthuman. So what is your take on the posthuman? How would you define the concept?
KL: I always start talking about the posthuman by talking about the two definitions of posthuman. That is because thinking about posthumanism since the 1990s has split into two tracks. One is the original idea that N. Katherine Hayles brings up in her famous book How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, which is “the” posthuman versus post-humanism. So, there’s the posthuman, which is a speculative sort of study of what the end point of transhumanism might be: if we modify ourselves with technology, we’ll end up essentially as another species, and that’s the posthuman species. The other track is philosophical posthumanism, which extends from postmodernism and continues its questioning of humanism, the philosophical idea that puts humans at the center of the universe with everything else in reference to them. So that’s a completely different topic. It’s posthumanism: what comes after humanism. The way they’re similar is that they both involve technology. Posthumanism is basically postmodernism inflected through the lens of technology, how technology is changing those ideas, such that we humans become diminished as an entity, and therefore other things take greater precedence. For instance, other animals on the planet become more special. The idea is that if we can invent intelligent technology that is potentially smarter than us, then what does that say about our species? We’re not so special after all, which means that animals have a higher place in the hierarchy of life than we previously thought in the humanistic schema. So, when we use the term posthumanism, its definition changes depending on which one of those posthumanisms we’re talking about.

If we’re talking about “the” posthuman, I think it’s a very interesting topic, and I’ve been dealing with it since I did a summer seminar with N. Katherine Hayles in 1995; that’s how I got into this 25 years ago. Hayles was skeptical about a lot of things having to do with transhumanism, and I am too. First of all, I’m not sure that modifying ourselves with technology is a great idea. Second, as she says at the very beginning of her book, believing that transhumanism is even a good thing depends upon believing that the human body is essentially a fashion accessory to the brain, which she thinks is horrible. She likes to think of the whole organism when thinking about humans, and I agree with this, because recent neuroscience shows us that the mind or consciousness is not just up here [pointing at his head]. The whole body participates in it, you know. Recent studies of the vagus nerve, for instance, which runs between the brain and the organs, show that it’s a two-way street. Our brain doesn’t just control our body’s organs; the organs also control parts of our brain. It’s a symbiotic relationship, and the idea – the big transhumanist idea – that we could upload our consciousness into a robotic body assumes that the body doesn’t matter, that it’s just a meat puppet, as they like to say, which I think science shows us is wrong. So, I don’t know if I’m answering the question; I’m kind of wandering into my whole hobby horse here on the whole idea of transhumanism and the posthuman.
BA: It definitely makes sense, and I couldn’t agree with you more. What about the new materialist stance of Karen Barad, Jane Bennett, Stacy Alaimo, or other material feminists? There is a growing discussion of the new materialisms in alignment with the concept of the posthuman, because what Karen Barad, for example, refers to as “posthuman performativity” actually derives from the idea that the posthuman has nothing to do with transhumanism and that we can never separate the mind from the body, or discourse from matter. But they speak a whole different language than Katherine Hayles or yourself. So, what is your take on these new developments?
KL: Well, I haven’t really engaged with feminist attitudes toward the posthuman. The only person of those three that I’ve even engaged with is Karen Barad, and that was at conferences, talking to her mostly about her idea of quantum entanglement. She doesn’t, at least in my conversations with her, connect quantum entanglement tightly with feminist ideas; she connects it more with ideas of posthumanism, as far as I understand her. If quantum entanglement across the universe is possible, if two quanta can communicate with each other regardless of space and time, then that also implies we can do the same thing; there’s a lot more connectivity involved in our existence with everything than we like to think. I like that concept because it also connects with the Gaia principle, the ancient Greek idea of Gaia that the whole planet is a living being, which I increasingly think is probably true. I mean, the skin of the planet is simply a huge organic collection of various types of interactive matter, and that’s what our bodies are too. If we have a consciousness, it’s possible the planet does as well. So, I tend to wander into those areas more than I do into feminism, because that’s the direction I started in at the beginning.
BA: Yes, what I observe among all the people studying in the posthumanities or the environmental humanities, or in what may be a merger of the two, is that we come from different backgrounds, from very different starting points, but somewhere we meet, and I think that’s fascinating. This is what I also observe in the Global Posthuman Network. You’re a part of it, and you’re one of the founding members. How did the idea of the GPN come to life in the very first place, and how are you weaving in these recent developments, such as the posthuman summer camp? There will be other events this summer as well, I guess; it’s flourishing.
KL: I think the credit is due here to Francesca Ferrando, who has been a huge impetus behind these organizations. She’s the one who got us started with the New York Posthuman Research Group. She and I and Yunus Tuncel started that group, but really it was Francesca who was the driving force, saying ‘I really want to start this group in New York.’ At the same time that we started that group, she was talking to people in Europe about starting the Global Posthuman Network. What happened is that the two organizations sort of merged because they started at the same time, and it was all Francesca’s doing. Some other people in Europe, such as Stefan Lorenz Sorgner and Jaime del Val, were also behind the idea of starting a global posthuman network. But since then, I think the biggest influence has been Francesca. As for the summer camp, I was involved in that in a very minimal way, because she asked me to do it. I’ve sort of diverged since she and I both moved away from New York City, which meant that she and I and Yunus all started going in different directions. Yunus mainly focuses on Nietzsche; Francesca has moved toward focusing on philosophical posthumanism as a sort of New Age movement. She seems to focus on peace, love, understanding, and unity among people, because posthumanism brings back the realization that we’re not so different and not so individual as we like to think we are. My focus, meanwhile, has veered off toward AI and ethics as a jumping-off point from posthumanism. That’s the history of the organization: Francesca initiated most of it, and then the rest of us followed suit, some of us a bit reluctantly.
BA: OK. I think that’s all on my part now. I will pass the ball to Şafak; he has three more questions for you that are based more specifically on your work.
Şafak Horzum: Especially on your new direction toward AI and ethics. I would like to start by taking my point of departure from your opinion paper in AI and Ethics, “How Safe Is Our Reliance on AI, and Should We Regulate It?”. It touched upon many problematic issues regarding humans’ unreliable decision-making processes and self-induced ethical pitfalls. Based on both this paper and your book Androids and Intelligent Networks in Early Modern Literature and Culture: Artificial Slaves, which I benefited from a lot because it opened several new gateways during my dissertation-writing process, I have two questions. The first one is this: with the proliferation of AI programs and robotics, where do you think the human will place itself in a new chain of beings? As a master, will the human eventually turn into a servant of its own creation? Has it already? Because it has been more than two years since the paper was published, I would like to hear your current views on it.
KL: Yeah, let’s see. I just had a paper published again in AI and Ethics called “The Consequences of AI Hype”. It’s not the one you’re talking about, but it’s also a very short article, an editorial really. It’ll take you probably 10 minutes to read, but it illustrates one of the problems I have with AI, which is that there’s a lot of hysteria around AI, a lot of emotion from the general public, because the general public doesn’t understand AI, because AI is very difficult to understand, and also because Silicon Valley doesn’t do a good job of making AI transparent. So those are some problems I have with it. But to go back to your question about where I think the ultimate hierarchy will end up: well, I think I say at the end of my book that the biggest problem is that we’ve already allowed a dialectical reversal to happen between us and intelligent technology, and we started to allow that long ago. Since its inception, we have allowed intelligent technology to make decisions for us, and as a whole bunch of previous philosophers have said, that’s the danger point, the tipping point. The philosophers I’m thinking of are the people who invented intelligent systems in the mid-20th century, like Norbert Wiener. In his books in the 1950s, he warned that if we start letting intelligent technology make decisions for us, we’re going down a very slippery slope toward allowing it to dominate our lives. If you look around you today, how much does technology dominate us? People might say technology is my slave, but what about devices like our smartphones? I don’t know about you, but I was recently caught in the mountains with no reception on my smartphone for two or three days, and I felt like I had lost an arm. And I am not even as huge a technology user as many people. But I feel kind of lost without it because I’ve come to depend on it to remind me about things I need to do, to communicate with other people quickly and easily, to do calculations for me, all kinds of things. People might say that’s not a really big deal, that’s not important stuff. Yet even the little things add up.

Then, in my most recent article, “The Consequences of AI Hype,” I complain about some real life-and-death things that are the result of allowing AI to make decisions for us. The biggest one I talk about is Tesla’s Autopilot. First of all, they shouldn’t have named it “Autopilot,” because that implies that, as with an aircraft, you can just let it do everything by itself and you’ll be safe. But that’s not the case. People in their Tesla cars will turn on Autopilot and then fall asleep at the wheel and let the Autopilot drive. And then they die because they hit something, because Autopilot is not a complete system. There have been a number of examples in the news of people relying on Autopilot and then getting killed. Even GPS, which is a much older and much simpler type of AI, has killed a lot of people in the United States, because we have a lot of wilderness sites and dirt roads, and the GPS does not distinguish between dirt roads and regular paved roads. People will be depending on their GPS in the middle of a drive and take a wrong turn because the GPS can’t tell that the road is a dirt fire road. Then they get stuck, and then they die. It happens so often here in the United States that there’s even a name for it: “death by GPS”. These are the kinds of examples of what I think is already happening: us becoming over-dependent on our intelligent technology.
I mean, now we even have robotic bartenders, for instance. We have them on cruise ships here in the United States; in Japan, they have completely roboticized noodle shops where the chef is a robot. All the humans do is take your money and put the bowl in front of you. Those are not really important examples, but that’s the thing. I think right now humans are still in control of machines. On the other hand, I think it’s becoming less appropriate to talk about a hierarchy with regard to humans and their intelligent artifacts, because that leaves out animals and everything else. Even plants have a sort of intelligence. I also think animals should be further up the hierarchy. So, I would talk instead about the duality of AI and human. Right now humans are the masters and AI the slaves. But, as Hegel said regarding his idea of the dialectic, if you give the slaves too much power and make them do too much of your work, you lose the ability to do that work, and the slaves gain the ascendancy because you need them to do it. They eventually reverse the dialectic and become your masters. I think we’re already well on the road to that, and GPT-4 is a huge watershed in that process. We have ChatGPT, the newest version based on GPT-4, which is even more powerful and has a bigger database than the initial one that was made available to the public; and I think if we rely on it too much to do work for us, we’re going to be in trouble. On the other hand, I don’t want to sound like Cassandra here, because I think AI holds huge promise for the human race if we’re careful and judicious about how we use it. I think medicine is the biggest area where AI can be helpful. Think about how good it is at analyzing medical images. It’s much better at spotting tumors in a medical image than a human physician, for example. AI can also already analyze protein folding, which means it can look for new types of drugs that will be helpful to humanity. On the other hand, you didn’t ask about this, but I think part of the AI hype is that AI is going to make us superhuman, that we can implant AI in us to make us like cyborgs and then we’ll live for 300 years. I think that kind of stuff is silly.
ŞH: Yes, I totally agree. What you hinted at in your answer brings me directly to another question, which maybe you expect me to ask. Looking at our dependence and reliance, or false reliance, on AI as well as robotics, we see in recent events such as wars or occupations that human beings are slow in making ethical decisions about various onto-epistemological circumstances. In your papers, you also mentioned AI making important decisions instead of us. Allowing AI to take over those decision-making processes might ease the burden on our species, at least on the surface; this is one of the most common arguments in support of it. Can this transfer of power actually be regarded as a mere matter of taking the weight of our ethical concerns off the shoulders of the human? What do you think about that? I mean, we do not seem to want to take more responsibility for our decisions, but we seem to hope AI can do it for us.
KL: I think there are two things about that. First of all, yes, I think humans are lazy. I mean, if you read my first book, Androids and Intelligent Networks in Early Modern Literature and Culture: Artificial Slaves, my whole point in that book is that we have spent at least 2,000 years trying to find ways to offload various types of work that we don’t like, anything that’s dirty, dangerous, and distasteful. I call it the 3 Ds. That’s the reason we’ve had slaves throughout history: because we didn’t want to do nasty or hard work. Usually, though, we didn’t let our slaves think for us and devise our philosophies. I still don’t think AI is even capable of devising philosophies for us, but ChatGPT is a good example of where people are letting, or at least are tempted to let, AI make decisions for them. I mean, if you have ChatGPT write a paper for you, it’s coming up with the topic, it’s coming up with the outline of what you’re going to say. If you’re lazy and you simply let it write the paper, then you’ve let the AI do your thinking for you. I think in this kind of case the genie is out of the bottle; we have to come to terms with ChatGPT, and I think the right way to do it is to put our arms around it and learn how to coexist with it and how to use it properly. For instance, I think the best way to do that for writing is to go ahead and let ChatGPT come up with a possible outline for a topic, then stop there, go in and alter it and add to it yourself, then do the initial writing yourself, and then maybe pop it back into ChatGPT again to see what you might have missed. So it’s a good assistant in that respect. It can sort of take the place of a human editor. As far as letting AI make ethical decisions for us, I don’t think we’ve gotten there yet. For one thing, it really can’t do that. AI is still pretty dumb; most AIs are still designed to do only one thing really well, like playing chess. ChatGPT, for instance, is really good at finding the next word to go with the words before it; that’s how it’s made; that’s all it really does. That’s why it hallucinates (comes up with false information): if you ask it about something it doesn’t have in its database, it just produces a series of words that make sense topically but not factually, because what it says isn’t true. Did I answer your full question?
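[To illustrate what LaGrandeur means by “finding the next word to go with the words before it,” here is a minimal, purely illustrative sketch in Python. It is not OpenAI’s code; the toy BIGRAMS table and the generate function are invented for this example, standing in for the billions of learned parameters a real language model uses to score possible next words.]

```python
import random

# Toy "model": for each word, the plausible next words and their weights.
# A real LLM learns these probabilities over a huge vocabulary from data;
# this hand-written bigram table is only a stand-in for that idea.
BIGRAMS = {
    "the": [("robot", 0.5), ("human", 0.5)],
    "robot": [("writes", 0.6), ("thinks", 0.4)],
    "human": [("edits", 0.7), ("sleeps", 0.3)],
    "writes": [("essays", 1.0)],
    "thinks": [("slowly", 1.0)],
    "edits": [("essays", 1.0)],
    "sleeps": [("soundly", 1.0)],
}

def generate(prompt: str, max_tokens: int = 5) -> str:
    """Extend the prompt one word at a time by sampling a likely next word."""
    words = prompt.lower().split()
    for _ in range(max_tokens):
        options = BIGRAMS.get(words[-1])
        if options is None:
            # Nothing plausible follows; a real LLM never stops like this,
            # it keeps emitting fluent text whether or not it is factual,
            # which is roughly where hallucination comes from.
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the robot writes essays"
```

[The design point of the sketch is only this: each word is chosen because it plausibly follows the previous ones, not because the system has checked any fact, which is the mechanism behind LaGrandeur’s remark about hallucination.]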
ŞH: Yes, absolutely. It is because of the suspicions or possible conspiracy theories that we might have today; but Donna J. Haraway, in her “A Cyborg Manifesto,” was also talking about how robotics grew out of military technologies, just as today we have them as the “bastards” of military technologies. But today we see, as you also touched upon in your writing, that Elon Musk is supposedly making investments, according to his claims, in order to protect humanity from possible future dangers. Yet this is also a step to make sure that he has a share or a say in the future, to help prevent a software company like OpenAI, which makes GPT, from ruling some of the fundamental components of our sociopolitical life. And humans always prove the saying “actions speak louder than words,” especially when these shareholders’ personal or mercantile gains are at stake. History likes to repeat itself in that arena. So, how can ethical constraints be guaranteed in the face of such mega-corporations, which can easily infiltrate the supposedly safe digital platforms of communities? What is your current perspective on the regulation issue?
KL: I’ve talked about regulation in some of my publications, and I usually talk about it at the end of the article, because prescribing regulation is difficult. We’re trying to do it right now as a group of humans worldwide, and it is difficult because AI is so new and is developing faster than our regulating institutions can keep up. But regulation is absolutely necessary, because although capitalist corporations do some good for humanity, the fact is they’re simply profit-making machines, and so are without morals. The people in those corporations are simply parts of that profit-making machine, and so they become machines themselves as well, cogs in the bigger machine. Therefore, there’s a huge conflict of interest for them in making regulations for themselves, because regulations may conflict with the machine’s main function, which is making profit. Right now, for instance, I just finished writing an article that’ll be posted, I guess, in the next couple of weeks on the Global AI Ethics Institute website. In it, I talk about the fact that capitalism is a machine; it’s not inherently interested in morals, it’s interested in profit. The conflict of interest between profit-making and morals means that the US government’s move in July of this year to have the seven biggest AI corporations voluntarily follow eight rules for making safe AI is not a good standard, because that whole agreement is just voluntary, and at some point these companies are not going to be able to follow those voluntary rules, because doing so will conflict with their main focus on making money. What has to happen is that there has to be an international governing body. National governments need to make regulations, but there also needs to be an international governing body like the International Atomic Energy Agency, which controls nuclear weapons. As with that agency, the one for AI needs to get everybody worldwide to agree on a certain set of rules about making safe AI. That’s only just beginning to happen. That goal is the main goal of the institute where I’m the director of research: to get the whole world involved in these regulatory processes and also to involve more philosophical and cultural viewpoints in that process. Right now, it’s all Western philosophy and Western religions that underlie the ethical standards for how we should deal with AI. I’d like to know what people in India have to say about those standards; what do Hinduism or Jainism have to say about these things, or Shintoism in Japan, or Confucianism in China? I don’t think we’re getting enough input from those types of ethical systems. For Türkiye, Sufism… I mean, I don’t see or hear anything about Sufi values in handling AI.
BA: I don’t think any of those ethical value systems, religions, or belief systems has produced any kind of stance on the use of AI. I mean, they don’t generally put forward very contemporary, very modern ideas; they are based more on the spiritual aspect of things. Maybe Francesca would have an idea about what’s going on with this, because she’s more focused on spiritual posthumanism.
KL: Yes, possibly. But I’ve been dealing a lot with that now, because I’ve been giving talks in India and other places like Türkiye, and the cultural values aren’t exactly the same in India, for example, as in the United States or even Türkiye. Moreover, it is in fact possible to have Buddhist-influenced AI ethics, for instance. A good example of that is evident in the writing of my writing partner James J. Hughes, a sociologist at UMass Boston. He and I edited a more recent book called Surviving the Machine Age: Intelligent Technology and the Transformation of Human Work, which is about economics and jobs. He’s a Buddhist monk as well as an academic, and he’s written articles on Buddhist values relative to AI. One example is the idea that making a sentient artificial general intelligence, a human-level AI, isn’t a good thing in a Buddhist sense, because in Buddhist ethics, in order to empathize with a human being and not injure them, you have to be able to suffer; only through suffering do you develop empathy. So, it would be immoral in the Buddhist context to develop a human-level AI, because you’d have to make a suffering being. The morality of making an artificial general intelligence equivalent to a human is therefore doubtful for somebody from a Buddhist background. But that kind of thing doesn’t get brought up enough in discussions of AI ethics.
ŞH: My latest translation, Heavy, by the Turkish author Sadık Yemni, is actually a post-apocalyptic novel about a world that is saved by an AI made by a Buddhist-like software developer. In the end, the AI takes over the world peacefully, implying that humans are not doing well on the planet, and it is on humanity’s side. I will send you a copy when it is published.
KL: Sure. That’s the basic concept behind Isaac Asimov’s Foundation series, which involves highly intelligent robots that preserve human civilization. The basis of that series is that, secretly, the robots have used the Three Laws of Robotics to derive a further law, the “Zeroth Law,” which says they cannot allow the human race to become extinct or to hurt itself. So they develop this extremely altruistic set of ethics for themselves, to save the human race over a period of thousands of years. They interfere with things just enough to push humans into a better place. There’s certainly that possibility, but my own opinion is that a human-level artificial general intelligence (AGI) would be an alien intelligence, not necessarily similar or sympathetic to humans. If we manage to make an artificial general intelligence, first of all, it would take off geometrically, because it would derive from an AI reprogramming itself over and over again very rapidly; as it got smarter, it would reprogram itself better, so that within a few years we’d have AI that was far more intelligent than we are. This process means the result wouldn’t necessarily be a benign intelligence. People like Eliezer Yudkowsky and Nick Bostrom have said it could end up as the equivalent of an alien intelligence, and I agree with this: dealing with it would basically be like dealing with a being from Mars, because it would have a completely different set of ethics than ours. We could end up with AI whose thinking we can’t even understand, and that might not be a positive thing for humanity. It’s also what Elon Musk tends to think, and that’s why he has a plan B. His plan A is to inject all of us with more technology so that we can keep up with AI in evolutionary terms. But his second plan – if that fails – is to build a city on Mars where we can all move to get away from a super AI. That’s laughable, because it seems ridiculous to think that Mars is livable in any normal way and that going from here to Mars is going to work. The human body is not at all made for Mars, and if the AI is superintelligent, we’re not going to be safe on Mars either. The AI will eventually get there too.
ŞH: Absolutely. Thank you.
BA: So, doomed to fail, huh?
KL: Well, I don’t think we can technologize ourselves out of our problems with AI. I think we have to start now – and it’s almost too late – to purposely build regulations into AI to keep it beneficial for humans. That can be done; it just takes political will, which I think is going to be the problem.
BA: That was a great interview, Kevin. Thank you very much.
