The lights are on
This might seem like a follow-up to yesterday's blog
addressing gun control and video games, but I promise you it's not (at least
it's not meant to be). However, I would like to take a moment to thank you all
for your support and comments on that piece. Even those who didn't agree
responded with professionalism and realness. I'd also like to apologize for
this blog being a little out of character, as I don't know if it will relate
directly to the subject of video games, which is something I rarely stray from. Who knows, maybe
I will find a link as we go along (like many of my blogs, this is just free
writing and not planned out in advance).
This almost sounds ridiculous as I type it, but I am
fascinated by robots and cyborgs (cybernetic organisms - part man, part
machine). There are plenty of great movies (and video games) featuring robots
and cyborgs, including some of my personal favorites - I, Robot, Terminator and Blade Runner.
What all of these movies have in common is their success at
creating an emotional attachment between the audience and a machine. You
become genuinely connected to these man-made characters composed of metal
pieces and parts, integrated circuitry and, in some instances, a very
real-looking outer shell resembling a human. You feel bad when something bad happens
to them and you feel relieved when they survive, if they survive. Of these
movies, I find Blade Runner one of the most fascinating stories.
The film depicts a
dystopian Los Angeles in November 2019 in which genetically engineered organic
robots called replicants - visually indistinguishable from adult humans - are
manufactured by the powerful Tyrell Corporation as well as by other
"mega-manufacturers" around the world. Their use on Earth is banned
and replicants are exclusively used for dangerous, menial or leisure work on
off-world colonies. Replicants who defy the ban and return to Earth are hunted
down and "retired" by police special operatives known as "Blade
Runners". The plot focuses on a brutal and cunning group of recently
escaped replicants hiding in Los Angeles and the burnt out expert Blade Runner,
Rick Deckard (Harrison Ford), who reluctantly agrees to take on one more
assignment to hunt them down. - Wikipedia
It raises one of my favorite questions to debate...
"If you didn't know you were a robot, would you want me to tell you?"
Today marks the day that I witnessed one of the creepiest
"things" I have ever seen in my entire life. And while I'm embarrassed to say
my initial thought was that this thing resembled something out of Dead Space and
needed to be put out of its misery, I'm not so sure all of you will agree. It did
cause me to question:
Is killing artificial intelligence killing?
Robots are nothing new, having existed for decades. They
assemble cars in manufacturing plants and take pictures of pretty rocks on Mars,
but creating a lifelike robot has really advanced in recent
years, as demonstrated by this news story I read today about a robot
baby that can learn to express human emotion. Check it out here.
Isn't that just the creepiest thing you've ever seen...did
they have to make it look like those mutant babies in Dead Space?
So, naturally, as I'm reading and watching the video about
creepy baby v1.0, I'm only half listening because in my mind I'm envisioning
this thing going haywire, grabbing me by the leg and trying to drag me to
the ground to finish me off. I think in my brief daydream I grabbed a broom or
a mop and beat this thing repeatedly until it quit moving.
Imagine when more advanced lifelike robots exist - will
attacking them be a crime; will killing them be murder?
It's a slippery slope, because on one hand you have a
non-living being constructed by man, just like a computer or a car is
produced... But if we can get emotionally attached to robots when we watch
them in a movie, how much more attached could we get if they lived with
us or interacted with us on a day-to-day basis? Perhaps it's not a question of
legality but morality - would it be morally wrong to kill a machine that is
capable of thought or emotion? Will such technology ever exist?
As gamers, we've been eliminating digital representations of
artificial intelligence for decades without remorse or feeling like we've
committed a crime, but how does this thought process change when it's a physical
being staring you down...is it any different? I have to admit, I don't know why,
but when I play video games I tend to shy away from unnecessary killing and
feel guilty if and when I do. Sometimes it's so bad I don't even like killing
the livestock in Minecraft.
Anyway, I know this is kind of short, but I wanted to share
this and get some thoughts on the topic of robots, because I have another blog
lined up for tomorrow in line with this theme, only that one will reference video
games. Now I'm off to bed, where I will probably have nightmares of that creepy
robot baby.
Haha, only slightly disturbing that the reason you want to kill a robot with a broom is because it looks like a creepy baby. I went to a show where they had a robot head (I think they were working on a body) that was supposed to be a social robot or something. You could talk to it and have it tell jokes and stuff, but the person showing it had to say the stuff into a mic because it mostly only recognized his voice.
I think before robots become more lifelike, the reason it would be illegal to kill or attack a robot is that they are most likely owned by someone. It's kind of sad that robots are pretty much going to be used as slaves, or at least it will feel that way when they get smarter.
As a vegan and an animal lover, I avoid killing animals in games whenever I can. Yeah, they're just little programs running around, but I still don't like killing them for some reason. Same goes for innocent bystanders.
No, killing the AI in games is not killing. They aren't alive, don't think, and are only created for what the programmers tell them to do. I know that sounds like slavery, but do we feel bad for oranges because they were created by us so that we can eat them? In fact, we should feel worse for the orange because it was once alive. I wouldn't even call the "AI" in gaming AI because they don't think for themselves, they do what they were programmed to do.
If AI were to develop feelings and think for themselves, destroying one would practically be the same thing as killing an intelligent being, even though it was never alive. It would still be a horrible thing to do. But until Skynet takes over, I think we're fine.
Hmm, is killing artificial intelligence killing? This question runs along the lines of another: what makes us human? As a matter of existentialism, I'd say in most cases a being who is self-aware is in fact a proper living being. In this case, yes, AI would be granted a right to live, as they are self-aware to the point that they can establish an emotional attachment to their actions. Under the same scope, however, this line of thinking could be used against those who suffer from mental handicaps, and without proper means of proving otherwise could lead to their deaths.
I actually don't see a purpose to the AI construction push. As I am getting older, I have noticed that all this technology has really begun to hinder us as humans. We are able to communicate more with others, and thus that level of connection has increased; however, at the same time, navigating source files, online pages and everything else you would do with a modern computer has completely hindered our ability to isolate when we see fit.
Some of us have been better than others at this, myself included, as I am notorious for putting my issues on the steps of the world for all to see. Not only this, but technology is now a daily requirement for your every routine. The water heater heats the water to the temperature you want for a shower so you don't stink when you come into work. Alarm clocks wake us up, television delivers our news. Without getting too long-winded, the point is, to answer your question, whether killing AI is killing depends on where you stand morally and what you consider "proper" life. The answer is, as in most cases, up to the people to decide and Congress to act on.
I'd say, if they can feel it, yeah.
I'm a huge fan of science fiction, and this is one of my favorite questions presented by the genre. The book that Blade Runner was based on (Do Androids Dream of Electric Sheep?) and I, Robot have both had a bit of an echo in games like Mass Effect (the Geth dilemma). I agree with TOGNick in the sense that anything that is self-aware and can govern its own actions falls under the category of "the living". When you factor in what moral influences are implemented in their design, however, that's where things start getting complicated. Say a scientist designs an artificial intelligence with a small personality trait that pushes that AI toward certain moral conclusions. Are those conclusions that particular AI's own after it has come to them? Loved this blog, Saint - great job.
This is a really stellar topic you hit on. Bravo.
Like Hannibal said, games occasionally make great use of this idea, like the Geth in Mass Effect. I had a similar experience playing Binary Domain; despite mediocre cover-shooter gameplay and AWFUL dialogue, the underlying story was very much along these lines, and it's a shame they couldn't have built a better game around it.
I find myself feeling less bad about killing robots than people in games, and I wonder if that's by design. After all, if you believe we were created by a higher power, and thus we are alive, then artificial life shouldn't be discounted just because it was created by us. We don't consider killing animals in real life "ok" just because they're a lower form of life than we are; just because artificial life wouldn't be an exact replica of humanity shouldn't completely discount it from ever being considered life. Heck, our higher thought processes - morality, critical thinking - are the first things we point to when separating ourselves from the animals (well, that and enjoying beer). If an artificial/synthetic being achieves those higher thought capacities, would that be enough to be considered alive?
Food for thought (no pun intended). Again, great blog.
I remember seeing creepy robot baby things from Japan a while back on some of the technology sites I watch. The thing looked like a fricking demon. This is something I have thought about before and never really been able to come to an answer. I don't think it would be killing, but we'll kind of have to wait and see if technology ever gets so advanced that it can actually replicate humans. Also, when I saw this I instantly thought of Mass Effect, since I am playing through that series for the first time, and was all, "does he mean VI or AI?"
We are miles away from true artificial intelligence. At this point, game enemies are still just rigs of pixels programmed to shoot, dodge, and occasionally say a random battle phrase when you're in earshot. When games become sophisticated enough to have true AI, I may have second thoughts about playing shooters and other killing-spree games.
I would not hesitate to kill that thing. Holy crap.
You've reminded me that I still haven't watched Blade Runner. I've had it in my Instant Queue for a long time. Last time I was paying attention, it wasn't available on Instant streaming, but it might be now - I'll have to check.
I feel that killing artificial intelligence is not "killing" per se. I think that using the term "destroying" is a bit more appropriate, since I feel that killing involves something with a soul. Now, I know this is a bit tricky to explain, and I'm not necessarily doing the best job here.
Of the examples you listed, I have only seen I, Robot and played Dead Space, but I do find the advancement of artificial intelligence fascinating. I too try to avoid killing in video games, especially if it's needless. I think that having a physical form of AI in front of me would only exaggerate that tendency.
My take on the development of this falls in line with my religion. I am a Muslim. Now, I know that almost all Muslims get a terrible rep post-9/11 (sometimes fairly so). The Quran, though, says to avoid killing as much as possible (really only doing so for food, and even then taking only what you need) and to respect all things, even to treat inanimate objects with care, because someone made them.
I hate to bring up religion in cases like this, and I apologize if I offended anyone. That's just my stance, and I thought the perspective might be appreciated by people.
Thanks again for the great articles!
To answer the question posed in your title: yes. While we may not want to designate a robot as "living", i.e. organic, if it is self-aware, then I believe it would be in humanity's interest to respect such awareness and its inherent worth.
I would say that humans would need to respect "aware" robots/androids/AI not for their sake, but for our own. In the same way that cruelty to animals and other humans tarnishes our spirit, it would be the same with artificial life. The Native Americans were known for encouraging a respect for all things, both living and inanimate. I think that's the best mental space to be in.
Legion is a virtual intelligence that spends his downtime playing a virtual reality game called Call of Honor. So yeah, for him those programs do exist.