Abortion, Fetal Personhood, and Artificial Intelligence

Part of the point of this blog is to make people with different perspectives more comprehensible to one another. That’s hard when it comes to abortion, so I tried to think up a case that would put each side in the other’s shoes. Here goes.

Imagine, as many of you believe, that one day we will have sentient artificial intelligence. Also imagine, as will certainly be true, that many people will deny that the AIs are sentient when they arrive, and perhaps long after that. This will be understandable to some extent. It’s hard to determine whether AIs are sentient: we don’t yet know which brain states correlate with consciousness, so it’s hard to know whether artificial neural networks will be conscious either. And even if we did learn what consciousness corresponds to in the brain, that may not bear on whether artificial neural networks can be conscious. The arguments for and against AI consciousness will end up being metaphysical in character and very hard to resolve. Good people on both sides of the debate will be honestly convinced that they are correct. And they will have serious philosophical arguments about the nature of persons and consciousness to back them up.

That’s kind of how the abortion debate is now, except the groups who believe in personhood are flipped. Today, progressives usually deny fetal personhood and conservatives usually affirm it. But in the future, progressives will affirm artificial personhood and conservatives will want to deny it.

So now let’s imagine a debate between a “pro-life” progressive who wants to protect artificial life, and a “pro-choice” conservative who wants to allow people to erase artificial life if it significantly hampers their freedom.

Imagine that the artificial intelligences can take a physical form, and appear human. Also imagine that, for whatever reason, they often follow humans around, innocently, but in ways that prove to be burdensome. Every once in a while, the robots get in someone’s way, such as when they’re driving. And in a few cases, the threats they pose are deadly. So, some humans periodically decide to run the robots over, or break them, just to end the burden and harassment. Sure, the robots sort of seem human, these humans reason, but they’re probably not, so there’s probably nothing wrong with running a few of them over and crushing them. They’re not persons.

Now imagine that, in time, pro-artificial-life progressives start to protest at the places where the robots frequently get run over. In some cases, they place blockades in the way; in other cases, they hire police to do likewise. And then imagine that this issue becomes political. Progressives start to care intensely about protecting artificial life, and conservatives become intensely committed to allowing people the freedom to run the robots down when they deem it necessary, even when the choice is a hard one.

Let’s also imagine some of the same kinds of debates arise. Let John be a pro-AI-life progressive, and Reba a pro-AI-choice conservative.

John: “I think these robots are alive. I think they have souls. And I think we shouldn’t kill them by the thousands, even when they pose real risks to people and even in cases where we really need to run them over. They’re people after all, just like us.”

Reba: “I’m not sure you appreciate how badly these robots harass and endanger people. Sure, they don’t mean to harass; they’re pretty innocent-seeming. But they’re also burdensome, and you’re telling us that we have to tolerate their constant presence based solely on your weird religious theory that robots are sentient, when that’s just crazy. I mean, c’mon, they’re robots. Haven’t you read that important article by John Searle on the Chinese Room? Didn’t that refute things once and for all?” [Readers: if you think that’s silly, compare the use of Judith Thomson’s article defending abortion.]

John: “I didn’t share the base intuitions in the Searle article. And, come on, surely it is significant that the artificial neural networks are functionally equivalent to ours.”

Reba: “But mental states aren’t the same as functional states! They’re just really good consciousness simulators! And you’re trying to tell people what to do with their lives, to pay severe social and physical costs, just to preserve these weird creatures that are kind of like humans, but really just resemble them?”

John: “Look, I know I’m not going to convince you that mental states supervene on functional states; and you’re never going to convince me that dualism is true. The arguments are hard, we’ve had them for centuries, and we didn’t get anywhere when we used to debate whether fetuses had souls before perfect contraception and artificial wombs made abortion disappear. But I’m still convinced, very convinced, that these robots are alive, that they’re just like us on the inside. And I have to do what I can, even vote for bad politicians, in order to save the robots from being destroyed.”

Reba: “The thing that frustrates me is that it seems like you just don’t care about humans very much, you just don’t respect our choices or our rights. Sometimes we need to get to the hospital, and sometimes we need to get home to attend to a family emergency. You’re talking about putting up roadblocks, big restrictions on our liberties, liberties without which we might suffer, and in some cases, die.”

John: “I know it’s a risk, but we will save robotic life!”

Reba: “I’m concerned your primary aim is to control people. You just don’t seem to care about our welfare.”

John: “I don’t want to control humans! I want to save AIs!”

Reba: “You just don’t get it, do you John? You need to trust humans, respect humans, believe humans.”

John: “You’re just not hearing me anymore. I know I’m putting restrictions on carbon-based people, and I hate that, but it is the only way to stop you from killing silicon-based people by the thousands. I mean, since 2100, pro-choice laws allowing carbon-based people to kill artificial life have led to the annihilation of 40 million robotic lives. That’s one of the great moral horrors of the twenty-second century!”

Reba: “Ugh, you’re so dramatic. I can’t believe you’d compare us running over assemblages of robotic parts to slavery or the Holocaust.”

John: “I’m never going to convince you. Maybe I’ll just shock people into listening by showing them big pictures of robots being torn apart, or movies of them screaming as they die.”

Reba: “That’s really offensive, putting people through all that. You might traumatize a victim, you know.”

John: “I have to convince someone, somehow. Maybe I’ll even have to vote for Barron Cyber-Trump III just to stop you.”

Many pro-AI progressives will be strongly tempted to get in the way of humans who feel the need to disassemble, disable, or destroy the robots that constantly harass and sometimes endanger them. And that will involve interfering with the liberty of humans, since, on their view, keeping the robots alive is of the greatest moral importance. Many anti-AI conservatives, meanwhile, will prioritize the interests and worth of humans, and deny the personhood and value of AIs. At some point, we’re going to have another big disagreement about who counts as a person, and when it comes to AI, progressive and conservative opinion is likely to reverse. But once you put yourself in this future mindset, you may find yourself a bit more sympathetic to how the other side sees things.

Maybe that helps. Maybe not. But it was worth a shot.
