So there is this thing called Stockholm syndrome; does that work on robots?
Good question. I’ll be very interested to see in the coming years, as we develop a true AI, how much of humanity is imprinted into the AI programming, unconsciously from us. Will an AI be purely logical, or will we inadvertently give it some of the human tendencies?
Based on a lot of the machine learning that’s going on today, it wouldn’t surprise me.
Or, by extension, will the AI learn our behaviors from us (whether we want it to or not) even if we don’t touch the programming aside from implementing the basics of machine learning?
Given that AI is largely designed around replicating ill- or extra-logical behavior (pattern extrapolation using unstructured association), and then trained with human-centric data, I would expect any emergent intelligence to be more like humanity than unlike it. I mean, the learning process is intentionally modeled on our own biological development.
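To make that concrete, here’s a toy sketch (in Python, entirely made up by me, nothing to do with how the comic’s AI would work): a word-level Markov chain, about the dumbest possible “unstructured association” learner. Its training is literally just recording human habits, and its output is literally just replaying them.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record, for each word, every word a human put after it."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 8) -> str:
    """Extrapolate new text by replaying the recorded associations."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Whatever habits (or biases) are in the human-centric data
# come straight back out:
corpus = "humans are messy but humans are clever but humans are messy"
print(generate(train(corpus), "humans"))
```

Scale that idea up by a few billion parameters and you still get something trained to sound like us, biases included.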
Hence the dread.
This has been tested:
en.wikipedia.org/wiki/Tay_(bot)
It didn’t work well.
Better hope for human tendencies, because you really don’t want purely logical.
We wouldn’t balance well.
If we use the internet to teach it, it will be a moron.
A horny moron, my good sir.
Stockholm syndrome relies more on psychology than physiology, so assuming he thinks with the same logic as humans, it’s possible.
Maybe it’s because most people don’t do bad things out of being evil, but for reasons. As long as we aren’t forced to listen to those reasons, we can imagine the people are just bad, and actually listening to their reasons gets called Stockholm syndrome?
Neat trick: Stockholm syndrome isn’t real, and was fabricated after Stockholm police did such a bad job dealing with a bank robbery that the hostages took the side of the hostage-takers. From the Wikipedia article:
> In her 2020 treatise on domestic violence _See What You Made Me Do_, Australian journalist Jess Hill described the syndrome as a “dubious pathology with no diagnostic criteria”, and stated that it is “riddled with misogyny and founded on a lie”; she also noted that a 2008 literature review revealed that “most diagnoses [of Stockholm syndrome] are made by the media, not by…
Yes.
“Yes” is slightly better than “both”.
Wow, Ethan realized that way sooner than I would have expected.
While I get that it’s a sentient robot, couldn’t they still just somehow program the Three Laws of Robotics into him/it?
And add a 4th law where robots self-destruct once they come to the realization that humans need saving from themselves.
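For what it’s worth, the naive version of “just program the laws in” is easy enough to sketch (this is my own made-up illustration; `Action` and `permitted` are hypothetical names, and the comic never shows Zeke’s internals): an ordered veto check that every planned action has to pass.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags. A real robot would have to *derive* these
    # from its world model, which is the actually hard part.
    harms_human: bool = False
    allows_human_harm: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

def permitted(a: Action) -> bool:
    """Veto check for the Three Laws, in priority order."""
    # First Law: no injuring a human, by action or by inaction.
    if a.harms_human or a.allows_human_harm:
        return False
    # Second Law: obey orders (the First Law check already ran,
    # which is what gives it priority).
    if a.disobeys_order:
        return False
    # Third Law: self-preservation, lowest priority.
    if a.endangers_self:
        return False
    return True

print(permitted(Action(disobeys_order=True)))  # False
print(permitted(Action()))                     # True
```

The catch, as someone points out below, is that a mind that constantly rewrites itself can just edit `permitted()` out of existence, so a check like this would have to be built in at a level the AI can’t reach.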
Problem is that would be violating his rights as an individual, which Ethan used to convince the others not to kill him for good. It would basically be mind control at best, and a lobotomy at worst.
We’ve got mirror neurons, which are a hardwired mechanism for feeling whatever the person next to us feels.
What if I had none and you’d implant them into me — instead of keeping me captive for the rest of my life because otherwise I’d kill people?
Would I choose mind control, or would I choose body control, or would I choose the end of existence? I’m glad that I don’t need to know that.
Scott said it’s constantly rewriting itself; that kind of thing would have to be built in, probably. It’s not something you can patch in. Personally, I’d go with bolting a bomb to his noggin that blows up if he tries to remove it, and blows up if he kills or seriously hurts anyone.
It shouldn’t be any harder than just teaching a human to be moral… oh wait…
How was his first question not “How long /have/ you been around?” The obvious follow-up is “then how do you know your hatred of humanity isn’t someone else’s preprogrammed agenda, if you’ve had so little time to form your own opinions?”
I’m taking some blind guesses here, I suppose. But: 1) “Little time” is a very relative concept, when you talk about a sentience with vast processing power. It can probably absorb information way faster, and also reach its own conclusions without all that ruminating and second-guessing and psycho-crap we humans go through. Just like MCU’s Ultron concluded humanity was a doomed mess, within a couple seconds of Internet access. Notice I’m not saying those conclusions are RIGHT. Just that they’d happen in a snap. 2) Unless someone tried to program a human-like psyche into Zeke, it doesn’t have the inherent…
Yes, Ethan, he is saying you’ve doubled down right there.
Well, the robot’s not exactly WRONG…
P A R A N O I A S E T S I N
Zeke looks sad here. 🙁