Should Synths Be Given Human Rights? – Rethinking Fallout 4


Are synths people? Last episode we talked about the Institute and its treatment of synthetic intelligences, and some of you took issue with my implication that synths are legitimately thinking, feeling, sentient creatures, and that what the Institute does to them constitutes slavery–which…you may not be wrong to call me out on. “Data is not sentient, no.”

The basic premise is this: robots, or artificial intelligences, were created by humans for a purpose. Whatever that purpose is doesn’t matter; they’re a tool, like a hammer or a washing machine. They’re more complex, sure, but they’re designed to complete more complex tasks: diagnosing medical conditions, innovating new designs, or beating people at Jeopardy. Regardless, anything that may incidentally resemble what we perceive as free will or self-awareness is an illusion. It’s just a bunch of programming, a directive put there by humans, either as an accidental consequence or on purpose to make it easier for us to interface with them. You know, ‘cause it’s easier to interact with something that acts human than with something that doesn’t.

Robots aren’t sentient because they aren’t self-aware like we humans are. While they may outwardly display the patterns and mannerisms of humanity and intelligence, they don’t experience the experience of being alive.
You know what I’m talking about, right? You’re sentient. You know what it feels
like to be conscious. You’re conscious right now, watching this video. Look at it. Just
feel the feeling of it. We all know the experience of being human.
Issues of race, politics, and culture aside, that’s one unifying truth in our species, and we’re confident that no man-made object, no matter how complex its circuitry, could ever truly capture the richness of the human experience–especially not something that was designed to wash floors. I mean, college students aren’t ditching class to protest against the treatment of Roombas.

Let’s step out of the realm of fantasy for
a second and take a look at something a bit more complex than a Roomba: Watson. Watson is a computer system developed by IBM that was designed to answer questions asked in natural language. Watson was pitted against the best contestants the game show Jeopardy had ever seen, and…cleaned house. Totally wrecked the competition. Watson was capable of recognizing the fucking weird-ass structure of Jeopardy questions, searching its archives, and composing an answer in the similarly ridiculous answer format.

So what about Watson? I mean,
clearly it doesn’t have any pretense of thought or emotion yet, and it doesn’t have a face of any kind…but it’s no Roomba, either. Can we at least agree that it is a powerful thinking machine?

Well, not necessarily. Philosopher John Searle and others like him claim that Watson doesn’t think because it doesn’t truly “understand.” It doesn’t have a sense of self, or an understanding of actual meaning. It’s just a complicated program that recognizes patterns and symbols and spits back the correct sequence of 1s and 0s based upon its predetermined programming parameters. It’s not conscious.

But what if it’s not that simple? Earlier I said that we all know what it means
to be conscious, because we’re conscious all the time. Okay. Cool. So tell me: what
is consciousness? Is it being awake? Because that’s what I think of it as. I bet you
probably answered with something else. Having memories? Experiences? Senses? Is it something
uniquely human?

Uniquely human. You know something that’s
uniquely human? Our neocortex. The neocortex is the outside part of your brain. The part
with all the wrinkles. Lots of other mammals have one, but NOTHING has a neocortex like
ours. It’s only 4 millimeters thick, but because of all the wrinkles increasing its
surface area it takes up almost 80% of our brain, which is good, because that’s where
ALL of our higher-level reasoning exists. Reading, writing, speech, the angry comment
you’re writing underneath this video right now telling me how stupid I am: all of it
comes from your neocortex. The human neocortex is responsible for every single human innovation, from stone tools to writing to the steam engine to the computer. The human brain
is immensely complicated, but it does seem to have a hierarchical structure that follows
certain rules.

In his book “How to Create a Mind,” Ray
Kurzweil, a major pioneer in the field of machine learning, details how our neocortex recognizes patterns in our environment on a micro level and pieces them together to create whole pictures and ideas–a model he calls “The Pattern Recognition Theory of Mind.”

It’s not without its critics, but it does get to the root of the problem: what is our brain if not a wet, biological computer? A machine made out of cells instead of silicon chips? Hell, it even runs off of electricity.

So what separates us from machines? As with everything else in life, the answer
lies in Zombies. Okay, no, not those Zombies. And not Ghouls, either. No, instead I’m talking about something else entirely: the zombies of philosopher David Chalmers.

A zombie in Chalmers’s world is something
that is human in every conceivable way. It acts exactly like a person does in any given
circumstance. It laughs at jokes, it acts offended when insulted, it expresses fear
and pain when injured–the only difference between a zombie and you and me is that a zombie lacks “subjective experience,” or “consciousness.” It’s just…not there.

So let’s say you’re at work or school with a bunch of normal folk and zombies–you know, just like real life (that was a joke)–how would you tell the difference between the normal people and the zombies? And remember, they act and look human in every way. They’re not unemotional robots; there’s no Voight-Kampff test to determine their psychological status. If they’re not convincingly human in every way, they don’t fit the criteria of the thought experiment. So, if something acts and looks human but lacks the ineffable experience of “consciousness,” how do you tell?

The answer is: you can’t.

So what is consciousness? “Prove to the court that I am sentient.” Kurzweil has his own idea: “Consciousness is an emergent property of
a complex physical system. In this view a dog is also conscious, but somewhat less than
a human. An ant has some level of consciousness, too, but much less than that of a dog.”

He goes on to say that a computer that’s
capable of successfully emulating a human brain would have the same level of consciousness
as an actual human.

But that’s ridiculous, right? I mean, the Institute aside, that’s not actually something we’re going to have to worry about, is it?

Well, yes, actually. Henry Markram is the head of the Blue Brain
Project, whose goal is to completely simulate the human brain. In 2005 they successfully
simulated a human neuron, and in 2008 they simulated an entire neocortical column of
a rat. Their current target for full-brain emulation is 2023–eight years from now. Scientists at IBM, in a completely different project, have created a cell-by-cell simulation of parts of the human visual neocortex.

I’m reaching about 10 minutes now, and I’ve
barely scratched the surface–the fact of the matter is that there are no easy answers
out there. In an age of simulated neurons and neocortices, science and philosophy fail to help us.

And these problems aren’t unimportant. Consciousness
and free will are at the heart of our strictest laws and most controversial political issues.
You can go to prison for the rest of your life for extinguishing someone else’s consciousness
against their will, or sometimes we’ll even take yours from you as punishment. And we
don’t even know what consciousness is! As things like Watson, Siri, and Google get more complex, capable, and self-reliant, and settings like the Institute in Fallout 4, Mass Effect, and Halo resemble fiction less and less, these issues are only going to become more and more important.

We laugh at the idea now, but there was a
time when putting a man on the moon was considered ridiculous, tablets were considered an unattainable
technology only seen in Star Trek, and Pip-Boys, computers on your wrist, were just some wacky invention by game developers.

Some day soon these issues will be more important than who will fight with you against the Reapers in Mass Effect, or who’s
going to be your ally in the wasteland.

Personally, I think that if you believe that consciousness and free will actually exist, you have to accept that synthetic intelligences possess them too. I mean, can you really say that something capable of telling a joke, asking questions, learning, and taking offense when told it isn’t actually real lacks free will or awareness just because its emotions are digital and not analogue?

Is it a leap of faith? Sure. But no more of
a leap of faith than believing that you, someone who’s not me, are conscious while watching
this video. To exist every day is to take certain things on faith, and in this case,
I’m going to err on the side of thinking that more things are conscious than actually
are, as opposed to unfairly mistreating a sentient creature just because I don’t understand
it. Or him. Or her.

What do you think? Are machines capable of
sentience and consciousness? Are sentience and consciousness even real? There’s way
more to it than I could get to in this video, so I definitely encourage you to explore it
on your own. And if you do, pick up Kurzweil’s book “How to Create a Mind.” Or, if you don’t want to read, you can watch this Star Trek episode, which covers much of the same ground. Even if you don’t agree, it’s a good place to start. Anyway, I’ll see you guys in a couple of weeks.
