Michael Withey: First of all, I’d like to ask you about the transhumanist movement as it stands today, specifically the debates which are occurring within the movement and debates with other philosophies.
Nick Bostrom: It depends on how broadly or narrowly you define the transhumanist movement. There is an organisation called the World Transhumanist Association (WTA), made up of self-identified transhumanists. Then, I think, there is a much larger set of people who hold transhumanist views but don’t call themselves transhumanists; they might not even have heard of the term. So there’s a much broader stream in our culture and society who hold transhumanist views but don’t have that identification. With regard to the more narrowly-defined, self-described transhumanists: right now, within the WTA, we are discussing how to go forward – whether to try to broaden the appeal by filing off the sharp edges and removing some ballast of assumptions, to make it mainstream, as it were. Other people think that it is very important to hang on to the more radical assumptions. But that’s a debate you have in any organisation: how to position and present your views.
M.W.: So what would you describe as the more radical and more mainstream tenets of transhumanism?
N.B.: The more mainstream part of it has to do with technologies which are here today, or which we can foresee within the next 10-15 years or so: mild drugs to improve memory slightly, some interventions which might extend a healthy human lifespan – things that really only scratch the surface of human nature. This debate has by now reached the mainstream already: a couple of weeks ago there was an editorial in Nature on cognitive enhancement which called for basically accepting attitudes towards cognitive enhancers – that we should have more money to research these, but also that we should think about the implications for equality and prevent situations where people feel pressured to use them; but that, yes, it’s a good idea. And there are many signs that, at least here in the UK, many of these near-term prospects are quite widely accepted.
I first got interested in transhumanism some twelve years ago. At the time, the discussion would revolve around whether this was all science fiction, or if it could actually happen. Now, the discussion is different: it’s often quickly agreed that a lot of these transhumanist prospects could happen – we don’t know exactly when or which ones, but a lot of this will eventually happen – and the question is whether they should. So, it’s moved from the factual to the normative question. I see that as some progress. Of course, I would like to see the discussion move one step further: not just yes or no, for or against enhancement, but to the next stage which, in my view, should be ‘yes, there are a lot of ways of enhancing human beings which are highly desirable, but we need to figure out how we can do that, in what context, and what social policy is needed to ensure that benefits are distributed evenly’ and so forth. I don’t think we’re quite there yet, but that’s the direction in which I’m hoping transhumanism is moving. But in transhumanist discourse there has always been an interest in more far-future possibilities – not just the tinkering which can be done in the next decade or two, but also what technological development could mean for humanity in the long run: questions about posthumanity, possibilities of artificial intelligence, extreme longevity.
M.W.: There’s a terminological distinction here, isn’t there, between transhuman and posthuman—as I understand it, a transhuman would be someone who enhances his natural capabilities to the sort of levels achieved only by a few today, and a posthuman would be someone who far outstrips these limits.
N.B.: You could draw that distinction, but the reality is that these words are vague, and ‘posthuman’, in addition to being vague, is also loaded; it’s probably not a very helpful term at this point, just because it causes more confusion than understanding. Now, some people outside of the transhumanist circle argue that we are already posthuman because we have, say, computers, and this changes the way we think about ourselves and the human mind. So, the perception of what humanity is has changed. There may be something to this, but it’s all very different from the transhumanist concern with how science and technology might not just change our perception of the world, but also change the world – in particular, the human organism in a biological way, which, in my view, is a lot more profound than changing the metaphors about how the human mind operates. So, there is always this tension in transhumanism between, on the one hand, taking seriously these long-term, radical possibilities; and, on the other, being interested in, and wanting to have some impact on, the here and now. And there are tensions in that, because it’s a lot easier to have a direct impact on ongoing policy debates and get accepted onto forums if you remain exclusively focused on current concerns, whereas these more futuristic concerns sound wild and frighten people off. Nevertheless, in my view it’s important to keep both of these in the discussion – not that at any given point, in any given meeting, you have to discuss all of this, but to try to forge, ultimately, an understanding that doesn’t compartmentalise our thinking into one long-term science-fiction realm and another of politics today, but sees that these are ultimately connected. I think that this is the path to greater wisdom.
M.W.: Does the transhumanist movement have a conception of ethics, or meta-ethics?
N.B.: Transhumanism is not a full-fledged philosophical system with a developed meta-ethics, a developed metaphysics, a developed epistemology: it’s not a package deal. Different transhumanists might have different value theories; some might not have thought about it at all, it’s hard to say. But the common denominator is that there are possible modes of being, ways of living, which are currently inaccessible to us because of our biological restrictions, but could be extremely valuable, good for us; so not just that some people could exist who have greater levels of wellbeing, but that there are possible paths of transformation for us, from where we are now, that would be good for us. Just as you might think of a baby growing up: it’s a very profound transformation, one that changes the character of that individual in a very profound way, and yet we do not think it’s bad for a baby to grow. And, somewhat analogously, I think there are possible developmental trajectories foreclosed to us because we die too early, after just 70-80 years, which is nowhere near enough, I think, to reach any sort of reasonable level of maturity. More subtly, we are limited by the kinds of brains we are working with, which might not be capable of a huge number of thoughts and feelings and sensations. There’s no reason to suppose that the brains we humans have are free of all limitations when all the other creatures we see have very severe limitations.
M.W.: Would you say that transhumanism is committed to one particular idea of a particular human good, or of human flourishing?
N.B.: Not just one idea of that: different transhumanists emphasise different aspects of it. You could plug in most philosophical accounts of human well-being or human flourishing, and then see how they play out within the transhumanist framework. Suppose you have a very simple theory, like hedonism, according to which suffering is bad and pleasure is good, and you want to achieve the greatest balance of pleasure, and that’s what well-being consists in. Now, I’m not advocating that particular view, but suppose you do hold that view, you reflect a little bit, and you see that the root of most suffering is not external, but is in the human psyche itself: we are not built for sustainable pleasures. Even if the circumstances are very good, we quickly get used to them, and you have this hedonic treadmill. There’s a set point of well-being that you always return to. If you win the lottery, you might be happy for a few days or a few weeks, and then you return to your set point, which we know has a significant genetic component. So, the conclusion of all of this is that, if you’re serious about improving subjective well-being, eventually you will have to do something about the neural machinery which underlies subjective well-being. Then you get into this idea of what David Pearce calls paradise engineering: re-engineering our minds to become more capable of sustained well-being. Now, suppose you have another account of well-being, one which emphasises engagement with beauty and achievement and knowledge; some more classical humanist conception, for example. Well, there too, you can easily see that there are all sorts of ways in which our well-being in that sense is restricted by our biological limitations: our lifespan limits the time we have to develop, our memories decay, our intelligence is limited, our emotional life is less broad or rich than we would sometimes want it to be.
Those limitations can be overcome to some extent by behaviour: meditation, healthy eating to gain a few more years of life, studying hard to overcome some of your cognitive limitations. You work on your character, you ennoble your emotions, but there’s only so far those things can take you, so ultimately we need help from something that can change our biology more fundamentally. If you go through different conceptions of value and well-being, you find that you could bring about a greater realisation of value if you don’t just change the world around you, but also change the internal world.
M.W.: One philosophical theme which seems to run throughout transhumanist literature is how it conceives of the distinction between culture and nature. It seems to me that it seeks to break down this distinction, looking at nature as an essentially technological phenomenon, something which continually adapts, generates tools and finds uses for them, and suchlike; and culture as something which is a natural phenomenon.
N.B.: This is a common thought. I guess at the bottom of it there is the observation that it’s hard to make a meaningful distinction between nature and non-nature when you start to think about it. I look around today and a lot of the things that people would classify as natural were once considered artificial. All the things that make up modern civilization, save the last 50-100 years of inventions, a lot of moderns think of as natural – simple agriculture, a man drawing a plough, or something like that. There was a time when all of these were artificial. So, once you begin to think about the distinction between natural and non-natural, it’s really difficult to make out one that’s not arbitrary. Now, for transhumanists that’s not really a problem, as transhumanists don’t think that there is anything of fundamental normative importance riding on this distinction: ultimately it doesn’t matter whether something’s natural or non-natural. So, transhumanists are happy to acknowledge that there is no distinction, or that the distinction is somewhat arbitrary. On the other hand, if you’re a bioconservative – one who opposes the idea of enhancement, who is suspicious of human attempts to tinker with nature – then obviously a great deal depends for you on what is enhancement and what is not enhancement: what is nature, and what is not.
M.W.: And this distinction seems to be fundamental to other transhumanist concerns. For example, the distinction between enhancement and therapeutic technology; between correcting a deficiency in nature and enhancing nature.
N.B.: From a normative point of view, it doesn’t matter whether it’s enhancement or therapy. I do think that in certain contexts the distinction is still useful, not so much from a moral point of view as from a technical one. I’ve written a paper with a colleague of mine, Anders Sandberg, on what we call the evolution heuristic, which proposes a heuristic for identifying promising intervention opportunities – the challenge being where you might look. The human organism is a very complex system which is not very well understood, and if we go in and tinker randomly, our tinkering might backfire and produce long-term side effects. This is true in medicine, where we try to fix things which break, and we would expect it to be even more true when we’re trying to enhance something that was already functioning as it was designed to by evolution. Within this heuristic we have developed, it’s sometimes useful to distinguish between therapeutic and enhancing interventions, since they have slightly different implications for where we should look for effective ways of intervening. But I’m very happy to acknowledge that there is nothing of fundamental moral importance to this distinction. In many cases it’s arbitrary: for example, whether we’d call a vaccination therapeutic, because it prevents disease, or an enhancement, because it enhances your immune system.
M.W.: I’d like to explore further the transhumanist attitude to the good. How would you respond to the claim that the limitations of the human body might be necessary to life having any meaning? Two examples seem pertinent here. Sport, for instance: if I’m playing basketball, it’s not necessarily the case that I want to be Michael Jordan – rather, I want to challenge my own physical limits. I want to engage with the contingency of my body, and play to the best of my ability. Wouldn’t this be undermined if I could just, say, pop a pill to make me play like Michael Jordan? The second category which could be threatened in this way is aesthetics. For example, a large part of Beethoven’s appeal is the fact that you know he’s engaging with his own mortality, and you can understand that because you’re mortal yourself. If you overcome the limitations of the human body, such experiences would be rendered senseless – his music would become a series of pretty notes which lack meaning.
N.B.: With sport, it’s sometimes true that we want to create limits to make an activity more interesting. All sport, to some extent, is based on arbitrary limits: it’s an activity which is almost defined by the fact that you choose to adopt some arbitrary rules to enhance your experience. It’s a great thing that we are able to do that – presumably, without forced external limitations, we can more creatively choose to adopt limitations where they would enhance our lives and our experiences. Now, I do have a problem with the idea that great art somehow excuses the suffering that some great artists have undergone: I find it very callous to suppose that we should be grateful to the Gulag because then we can read Solzhenitsyn, or to say that Mozart’s dying enabled his Requiem to be written and that it’s therefore somehow good that he had to go through this – I don’t think that’s the morally correct way of looking at these things. I also think we already have a lot of art on the ‘dark side’. We would retain all of that, and there’s a lot of art which can be made which isn’t about death or suffering; we could also have art which celebrates how good life is, or which is just about the natural world. So I think it would be absurd to preserve human suffering just so we can listen to the screams of anguish which issue forth from the tortured human soul.
M.W.: One of the problems I see with transhumanism is not so much that it undermines nature, but that it has the potential to undermine culture. If we take the natural span of human life, we see that the biological progress of the human body, from birth through its ageing and towards death, allows a meaning to be given to life; as we get older and our bodily state changes, we take on different roles: a son, a husband, a father, a grandfather. Institutions such as marriage allow us to gain a conceptual understanding of the natural human process of ageing and death. Could a transhumanist preserve this meaning to human life, or would the result be a long, meaningless life devoted to sensual pleasures?
N.B.: Well, I think that there would be a great task for culture, and for inventing new culture, if some of the basic biological parameters of human nature changed. There would be a great need to develop a new culture around that, one which takes advantage of the opportunities for human flourishing that these changes would open up. I think sometimes technology runs ahead of culture. A trivial example might be when cellphones became commonly used. There weren’t any cultural norms limiting when you should use them, so people would carry them into the cinema or the theatre and they would start ringing all the time. When society had a bit more experience with this technology, it developed norms: you’re supposed to switch off your cellphone when you listen to a concert. That’s a limited case, but in the general case there’s an inescapable role for the creation of culture to make the most of new circumstances. This is the case with extended lifespan in particular, but also with the other ways in which human capacities might be enhanced.
M.W.: How do you think man would find meaning in a transhumanist world? Would, say, new social roles have to be found for man?
N.B.: There are two parts to my answer. If we are thinking of extreme life extension, it is my view that, long before there was a big population of people several hundred years old, there would be other profound changes in the human condition. That makes the scenario a lot harder to evaluate – I think that within hundreds of years we might have uploaded into computers, or changed our minds to become super-intelligent and gained complete control of our emotional states. There would be so many changes that it would be naïve, perhaps, to think of a scenario where nothing else changes except for the fact that there are now a lot of old geezers who are healthy and several hundred years old. But suppose we do want to consider that scenario. If we model it on ageing today, it would be a very sad sight, as what you find is that when you get older today you lose some of your vital energy and interest in life. Inside the brain, dopamine neurons die off at an alarming rate, which affects your personality, your ability to find meaning in life, and the activities you find worth engaging in. So, for the scenario to be at all desirable, we have to postulate that you remain in excellent biological condition – I would agree that keeping older people alive for longer on respirators would not really be all that worthwhile. Truly preserving youth and vitality, rather than just keeping the heart pumping longer, should be the aim. There are a number of things to say here. We might note that, today, depending on what state of mind you’re in, a lot of people may be said to lead meaningless lives. If you’re in a cynical state of mind, you will see someone getting up, going to work, watching television, then going to bed, and repeating this for forty years. From one point of view, this is all a meaningless drill.
However, if you zoom in with a more sympathetic state of mind, you will discover all sorts of happy occurrences: a phone call from the children, a cup of tea with the paper in the morning, a stroll in the park with the dog – the little things in life that create meaning. So, human life today can appear empty and meaningless, or it can seem wonderful. Similarly, if you imagine life extended much longer, even if it didn’t change in quality, I think the value you place on that would depend on your state of mind when you’re contemplating it. For the same reason that it’s good to prevent people dying from cancer or strokes, it’s good to do something about the underlying process which leads to most illness – the process of senescence, the damage that builds up in our bodies as we age: molecules that our cells can’t digest, which eventually clog up the machinery, mutations in our mitochondrial DNA, and so forth. Eventually, this results in pathology. It needs pointing out that preventing these processes is to preserve human life and human health. Then we can worry about how to make the most of the life we’ve got. If somebody is dying, you first save their life, and then you worry about how they can achieve maximal fulfilment.
M.W.: What about the risks associated with transhumanism? You make the explicit point, in your paper ‘Existential Risks’, that with technologies such as nuclear weapons, and with possible new technologies such as nanotechnology, we have, for the first time in human history, the possibility of bringing about a global catastrophe. If there’s a non-trivial risk of a catastrophe occurring, is it ethical to pursue these technologies?
N.B.: A significant proportion of my research is on the risks of future technologies. The risks are major, but I do believe that these risks will have to be confronted regardless of whether the transhumanist movement is very successful or not. The question is how we can reduce and mitigate these risks, and this is often a non-trivial question to answer. There are many different risks. You could have a situation where some technology would eventually be developed anyway, and all you would achieve by a moral prohibition against developing it is that the people most moved by moral imperatives would refrain from developing it, while some other group of people, or nation, who don’t suffer these moral scruples would develop it instead. The outcome could be an increase rather than a decrease in danger – it’s a more dangerous world if the most dangerous people are the ones with the most powerful technologies. That’s one example of a complication that makes it difficult to infer, from the fact that there is a risk, the conclusion that we should therefore not go in that direction. Now, with regard to human enhancement technologies in particular, I do think that there are differences between different kinds of enhancement technologies in terms of the levels of risk they pose. I believe that enhancements of emotion, personality and motivation have a potentially great risk associated with them. It’s one thing to use them to help people with severe depression, but if we roll them out on a broad scale and use these technologies unwisely, we could gradually transform ourselves to such an extent that we lose something essential to what we value in the world; we slide down a slippery slope and lose some deep value by taking small steps, each of which seems convenient, but which cumulatively add up to a tragic loss that we may never even notice, because we also lose our ability to recognise this value.
For that reason, although I think that, ultimately, this emotional, mood, personality redesign holds great promise for ameliorating the human condition, I think we should go easy on our paradise engineering until we have the wisdom to do it right. I think that fools will build a fool’s paradise, and we should perhaps start by trying to make ourselves wiser and smarter. That would put us in a better position to determine from there what further changes we would want to make.
M.W.: The obvious question which arises here is that emotion and wisdom seem closely intertwined. I think few emotions can be characterised as intrinsically bad or good: to be angry all the time is undesirable, but anger is certainly desirable in certain situations, and the question is having the wisdom to distinguish between the two. Similarly, depression may be a terrible thing, but eliminating sadness would mean eliminating a certain depth to life.
N.B.: Yes, and I think this is why we should be careful and go easy on that kind of manipulation. I do think that in some cases there is a trade-off; there’s a humanitarian imperative to relieve intense and great human suffering, to cure depression, but for people whose lives are already quite good, and who are at the peak of their powers, I think it would be dangerous if those people ‘en masse’ started to modify their mood and personality with drugs which are not very well understood, and it would be better to postpone that kind of experimentation until we have developed more wisdom and understanding of the consequences of that kind of manipulation.
M.W.: You take a very liberal view on reproductive technology – people should be able to select the best genes for their children and make them more intelligent. There is an obvious point here, though, isn’t there: if 90% of people pick genes to make their children more intelligent, those parents who don’t select the best genes would be imposing a real cost on their children – their children would be unable to compete with other children, and their life opportunities would be severely restricted.
N.B.: I think, in practical terms, the best policy would be one which left a huge amount of personal choice to parents, because, all things considered, that’s the least likely to infringe on autonomy and the least likely to result in a dangerous concentration of power in the state to dictate reproductive choices. It is true that you could end up in scenarios where you would have to draw a line between what is permissible and what is not. I could imagine a few sadistic parents who might want to make their children suffer, and eventually you would have to lay down limits and tell people not to do that – parents have a lot of freedom in how to raise their children, but in the case of clear child abuse, the state should intervene. At some point, you would confront the situation where, if you have an enhancement that is known to be completely safe and has a clear benefit to the child-to-be, it would have to be considered whether you would not be negligent as a parent if you failed to provide your child-to-be with that enhancement. I don’t have a nice formula for when negligence would be great enough for it to be right for the state to intervene. Think of today: we have mandatory schooling, and even if parents would not like their children to go to school, they still have to go, since we think it’s important for a child to have a rich set of options in life. So, theoretically, in the long run, if these enhancements were very safe, if social acceptability was very broad, and if the benefits were very clear, you might have a situation where the state could mandate an enhancement which would give a richer range of options in life and would allow children to participate in a democratic society.
M.W.: People often talk about the rise of a ‘cognitive elite’ in transhumanism. Isn’t there a risk of cognitively enhanced people being able to pass on more and more benefits to their children, who would be able to do the same to their children? Over time, there would be a huge disparity between a member of the cognitive elite and the masses.
N.B.: Well, that’s not too different from the situation we have today – except that today’s elites don’t have more children, they have fewer. There are elements of this which we already see, and generally we don’t think the solution is to close the best schools because they’re good for their pupils, for example. The answer is to improve the worst schools, so they can be closer to the best schools. And, similarly, with cognitive enhancements, rather than restricting access, our aim should be to make them more broadly available to the children of poor parents.