The thing is, with a lot of the AI software that gets developed these days, we don’t understand all of the details behind it, even though it is “just code”. Have a look at CGP Grey’s video on AI for more info on why the “just code” idea doesn’t really hold up.
… and that’s when we move into GMO territory. “They don’t even know what they’re doing when they throw things together until they get something that seems to ‘work’. But they don’t know the full extent of how it works, do they?”
Generally with GMOs they are specifically selecting genes to add, and where to put them. While the “full extent” of how it works is not 100% understood, that has more to do with our current understanding of the intricacies of how genes interact than anything else. On the other side of things, traditional breeding techniques can include exposure to radiation or mutagenic chemicals to randomly mix up the genes a bit, then you see if you got any useful outcomes. Some of the benefits to using these traditional methods over GMOs are that it is relatively cheap, you don’t need… Read more »
So circa 1950 that was pretty accurate. They would expose plants to high levels of radiation to force mutations and then see if anything worked out for the better (rare, but when you expose 500,000 plants to mutations you get a few golden apples). However, today’s GMOs take specific genes that do specific things and introduce them into a parent plant. This is the equivalent of changing surgical tools from shotguns to scalpels. Can things still go wrong? Sure, but not nearly as bad as performing an appendectomy with a 12 gauge.
I don’t think that’s specific enough. We do understand all of the details behind it – we developed the networks. What we have trouble foreseeing is the way the networks will inter-relate, how the networks’ internal path weightings will dynamically change, and how all of that together will produce unexpected emergent behaviours. That’s just the limits of our human brains’ processing power, rule-following, and memory showing themselves. On the other hand, a meatbeing can still easily make near-instant inferences with high degrees of validity where even the best AI system can when it comes to data that… Read more »
Yeah it took a few Billion years for us to get here with that kind of dynamic processing. Computers have had *checks notes* 100 years. Not a stretch to imagine it’s gonna surpass us in the next 100, if not less time.
Exactly! There is a VERY BIG DIFFERENCE between “we do not know which pixels the network bases its decision on, whether it sees a cat or a dog” and “maybe tomorrow it decides it wants to kill us all”. We do in fact understand very, very well how modern AI works, and there is a whole branch of research called “Explainable AI”. We do not necessarily always completely understand how it weights its input data, but that does not mean that anything could happen. A network built to classify images has NO WAY of ever doing something else. That just… Read more »
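For anyone curious what “Explainable AI” looks like in practice, one of the simplest tools is occlusion analysis: hide part of the input and watch how much the classifier’s score drops. A toy sketch in Python (the “classifier” here is a made-up fixed linear scorer, purely for illustration, not a real network):

```python
def occlusion_map(image, score_fn, patch=2):
    """Slide a zero patch over the image and record how much the
    classifier's score drops when each region is hidden."""
    n = len(image)
    base = score_fn(image)
    heat = [[0.0] * n for _ in range(n)]
    for i in range(0, n, patch):
        for j in range(0, n, patch):
            masked = [row[:] for row in image]
            for r in range(i, min(i + patch, n)):
                for c in range(j, min(j + patch, n)):
                    masked[r][c] = 0.0
            drop = base - score_fn(masked)
            for r in range(i, min(i + patch, n)):
                for c in range(j, min(j + patch, n)):
                    heat[r][c] = drop
    return heat

# Toy "classifier": a fixed linear scorer whose weights sit in the
# top-left corner, standing in for a trained network's sensitivity.
WEIGHTS = [[1.0 if r < 2 and c < 2 else 0.0 for c in range(6)] for r in range(6)]
score = lambda img: sum(img[r][c] * WEIGHTS[r][c] for r in range(6) for c in range(6))

img = [[1.0] * 6 for _ in range(6)]
heat = occlusion_map(img, score)
# heat is large exactly where the model "looked" (top-left), zero elsewhere.
```

The point of techniques like this is exactly the one above: not knowing every weight is different from not knowing anything; we can probe what a network’s decision depends on.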
We already did use machine learning to create programs on an FPGA – and one of the programs would not run correctly when a seemingly completely unrelated circuit was eliminated. The interference from the detached network influenced the other parts in just the right way. It’s safe to say that this one simple chip did surpass whatever its creators would reasonably be able to expect.
Meatbags are especially good at not acknowledging things that are outside our usual experiences. We remain focused on whatever we’re currently doing.
That doesn’t apply. That the ML algo abused a design flaw of the FPGA, totally within the constraints of its programming, doesn’t mean that it surpassed anything. The algo did exactly what it was told to do: try random stuff, use a heuristic to determine if it was on a useful path, and then evaluate the result until it finds something worthwhile.
To extend your example ad absurdum: since the point of an ML algo is to find a solution that wasn’t specified by the programmer, every single result of an ML algo would “surpass whatever its creator expected”.
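That loop – mutate at random, score with a heuristic, keep what improves – is small enough to sketch. A toy hill-climber in Python (the bitstring target is invented, just to make the fitness function concrete; the real FPGA experiment ran the same loop against hardware behaviour):

```python
import random

def evolve(target, generations=5000, seed=42):
    """Try random stuff, score it with a heuristic, keep what's
    no worse -- the loop the FPGA experiment ran, minus the hardware."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda bits: sum(b == t for b, t in zip(bits, target))
    best = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        child = best[:]
        child[rng.randrange(n)] ^= 1         # random mutation: flip one bit
        if fitness(child) >= fitness(best):  # heuristic: keep if not worse
            best = child
        if fitness(best) == n:               # evaluate: goal behaviour reached
            break
    return best

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary stand-in for "a circuit that works"
result = evolve(TARGET)
```

Nothing in that loop “decides” anything; the surprising output falls out of blind search plus a scoring rule, which is the argument being made above.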
In that sense, we’re just a bunch of chemicals interacting in a funny way. We should not be considered to be more alive or intelligent than a campfire. We do surpass the expectations of a Miller-Urey experiment, but that amounts to nothing.
Except we do know: it’s literally just stacks and stacks and stacks of TRUE/FALSE guesses to sets of questions, with Correct/Incorrect matches to go with them. The thing we don’t understand is how it comes to a wrong conclusion when it’s an ML model that’s built up using an unsupervised or hybrid learning model: more popular because they’re less reliant on manual dataset production and thus cheaper – one of the most favorite words in capitalism – and from a programmer’s standpoint it can also find relationships that weren’t initially accounted for. Which can be problematic. Like when… Read more »
We still don’t truly understand how planes fly. We don’t know how general anesthesia works. We know dreaming is important, but we don’t know why. There is a bunch of crap out there that we rely on that we just can’t explain.
We generally know about the concepts used in flying and we’re quite good at guessing good shapes to achieve it, but there is plenty that we don’t know. To say “we know” is a Dunning-Kruger thing.
We know the physics of flying in perfect gases, but air behaves as something between a gas and a liquid. Our formulas are wrong.
Of course it’s not that wrong; e.g. if you use Newton to calculate the solar system and apply some corrections, you’ll be good, too, but to say that Newton explains it and we know everything (instead of using the Einstein formulas) doesn’t describe our reality. At larger scales we can’t describe our Milky Way purely by Einstein; we need to add unknown (“dark”) matter and unknown energy.
That’s really quite interesting. Doing a little bit of research into specifically what we don’t know about flight reveals there do seem to be some mysteries left. By the looks of things, we know how things happen with flight (and thus how to work with it), but not why they happen. If you keep looking at something that happens with flight and ask “why”, and then ask “why” to the answer as well, you eventually run into a wall with current science. Which makes sense, considering we don’t know everything about physics. It might be less that our formulas are… Read more »
The study says that with increasing knowledge, your confidence of knowing rises fast at the start, but as you come to know more, your confidence increases at a slower rate. Or in other words: newbies tend to be over-confident (when compared to their real knowledge).
I do experience some point of over-confidence each time I learn something new.
There was a time when people were discouraged from studying physics because “we already know everything”. Today, with more knowledge, we know we know almost nothing compared to what we could know.
That’s the problem with rudimentary knowledge of science. It’s lies-to-children: a basic explanation that doesn’t paint an actually complete picture but is “good enough”. The thing with science, though, is that if science doesn’t know the entire answer 100%, it assumes it knows none of it. This is because there have been many times where tiny discoveries have completely uprooted everything we know about a subject. All of the examples listed above fall into the category of “We’re missing a piece of the puzzle; the explanation we have does not completely fit because there are a whole… Read more »
You’re not entirely wrong, but we DO know how planes fly. It’s fairly simple… frankly. Sufficient forward momentum increases air pressure. Flapping does the same thing, but by pushing downwards. Sufficient air pressure creates something akin to a solid surface (physically speaking).
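For what it’s worth, the standard engineering handle on “how planes fly” is the lift equation, L = ½·ρ·v²·S·C_L. A quick back-of-the-envelope in Python (the numbers are rough illustrative guesses for a small aircraft, not a real spec):

```python
def lift_newtons(rho, v, area, cl):
    """Lift equation: L = 0.5 * rho * v^2 * S * C_L."""
    return 0.5 * rho * v ** 2 * area * cl

rho = 1.225   # sea-level air density, kg/m^3
v = 60.0      # airspeed, m/s (~216 km/h)
area = 16.0   # wing area, m^2 (rough guess)
cl = 0.9      # lift coefficient (depends on wing shape and angle of attack)

lift = lift_newtons(rho, v, area, cl)  # ~31.8 kN, enough to hold up ~3.2 tonnes
```

Note that the formula predicts lift very well without settling the debate above: C_L is measured, not derived from first principles, which is exactly where the “we know how, not fully why” point lives.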
And you can turn that around: humanity is nothing but biochemical reactions, and use that as a comparison. lol
TomB
3 years ago
Scott is being a bit bigoted with his inability to recognize that, synthetic origins or not, the killbot could well be, for all intents and purposes, an intelligence competitive with his own meat-based processing system.
I’m not sure if moral judgement is a useful lens for looking at Scott’s position. Sure, “this is just a web comic” and “Hollywood is just Hollywood”. The idea that artificial intelligences in fiction (right now) are emotionally relatable says more about fiction writers and their audience than it says about the reality of AI as sophisticated as Zeke2/Gizmo/Copplebrunch. I found the film “Ex Machina” one of the more intelligent variants as to how an AI might play on human weaknesses, exploiting emotion for its own purposes. The film leads the audience to relate to the junior programmer… Read more »
That’s an interesting explanation, but it depends on the goal of the AI. If it’s amassing wealth, then we might end up with soulless neural networks exploiting human labor and the environment in the name of quarterly earnings!
You don’t need robotic overlords. Corporations will do this for you. While run by human owners, they will act just as stupid, greedy, and shortsighted as they have traditionally been, possibly further fueled by going public, with investment funds pulling the strings.
The same goes for augmented humans. If they were found to be more efficient workers, regular humans might go extinct by losing their “economic habitat”. There’s a hard limit imposed by biology on us meatbags.
Consider his position, though. Despite it being a world of superheroes we have not seen anything beyond a certain boundary, i.e. there has been super science and robots but nothing like “mutants” “aliens” or the like. They were even attacked by robots before. Now, a single robot comes up and begins talking as if it is “alive”, but has been under the control of the enemy this entire time. It is far FAR safer to assume this is all a very very complex program designed to “seduce” (in the strictest sense of the word) and/or distract them. He has also… Read more »
Yeah, Scott’s thinking is in line with the storyline of the movie Ex Machina. The worst possible situation is: they’re sentient, and slowly gaining everyone’s trust, only to erase mankind once they find a way to replicate.
Better to hope they’re just lines of code in a meat grinder.
-_- Okay, I think we need clarification. If they fight, Zeke’ll break SCOTT’S legs.
Brendan Keating
3 years ago
So there’s a lot of really good discussion about AI here that’s pretty cool. I wanna talk about something else: Chekhov’s Brain Bomb. It was an interesting choice to tell us, right from the beginning of this arc, that if our AI friend here even so much as leaves the room, the bomb goes off. I have to imagine that it will misfire when he finally tries to leave, leading to a very angry and tense dynamic going forward, or Lucas or Scott breaks the news and… well, second verse, same as the first. It’s very Hitchcockian of you, Tim-… Read more »
Maybe it’s one more reason for Gizmo to respect Scott – if I assume that G. knows about it. Scott is just as brutal to G. as G. would initially be brutal to S.
I hadn’t thought of the bomb through the lens of Chekhov’s gun. Like a fool, I kept wondering IF it would come into play, when I should’ve been wondering HOW it would come into play.
Brendan Keating
3 years ago
Personally, I’d like to see an approach at holistic AI creation. See, I think without emotions or instinct, humans would be more or less like the AIs we’re creating today. Without bounds and needs, we’d just regurgitate facts or answers when prompted, or perform functions when our strict logic demands it of us. In my opinion, our… shall I say, Animal side, is actually what gives us choice and autonomy. Whiiiiiich doesn’t say great things about meat consumption- but I am WAY too in to steak to care about that. (Actually I have a line- I don’t eat octopus because… Read more »
Well, at the end of the day, we’re human, and thus we have humanity ‘in our veins’ so to speak.
What I mean is, we can try not to be emotional or have instincts, but we are NOT computers and cannot work like them. I tried to become one, but it takes a toll on your physical and mental health, so I quit that in recent years to start being a human instead. To live life, so to speak.
Did… Did Zeke just go all “Bastion narrator” on him? 😀
Lily
3 years ago
It is pretty clear that biological brains are just following a bunch of coding too, especially when you look at slightly less advanced brains of animals like insects.
Pretty much, yes. Coding is a skill, with the resulting code being a tool – one that expresses logic. “If this, then that”, “do this while that”, “true or false”; code is a way to express and define logic in the same way math is used to express and define reality as we experience it. I know a lot of people who take issue with the idea that human brains are “coded”, but it shouldn’t even really be up for debate, as it’s not something to take exception to in the first place. Saying the human brain follows programming is the… Read more »
Last edited 3 years ago by Vandril
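Those three constructs really are the whole toolbox. As a throwaway illustration, a stimulus-response “brain” fits in a few lines of Python (the stimuli and responses here are invented stand-ins, nothing neuroscientific):

```python
def respond(stimulus, hungry):
    """A reflex 'agent': nothing but if/then rules over true/false state."""
    if stimulus == "food" and hungry:  # "if this..."
        return "eat"                   # "...then that"
    if stimulus == "threat":
        return "flee"
    return "idle"

# "Do this while that": keep reacting while there is input to react to.
events = ["threat", "food", "food"]
actions = []
while events:
    actions.append(respond(events.pop(0), hungry=True))
# actions == ["flee", "eat", "eat"]
```

The debate isn’t whether behavior can be written as rules like these; it’s how many rules (and what kind of rule-updating) you need before “programmed” stops being an insult.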
Henchman Twenty1
3 years ago
Sarcasm. One of the defining attributes of advanced sentience… for better or worse.
Verdiekus
3 years ago
Lmao, I’m with Zeke on this one.
Pulse
3 years ago
In theory, all life is just the execution of code stored in DNA. Hell, DNA is even a binary code.
I think you’d have to describe DNA as a quaternary code. Your point and argument stand: it’s code. It’s just not binary.
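Since there are four bases, each one carries exactly two bits, so quaternary and binary are freely interconvertible. A quick Python sketch (the base-to-bits table is an arbitrary convention for illustration, nothing biological):

```python
# Four bases -> two bits each (the particular assignment is arbitrary).
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def dna_to_bits(seq):
    """Encode a DNA string as a binary string, 2 bits per base."""
    return "".join(BASE_TO_BITS[base] for base in seq)

def bits_to_dna(bits):
    """Decode pairs of bits back into bases."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

encoded = dna_to_bits("GATTACA")  # "10001111000100"
decoded = bits_to_dna(encoded)    # round-trips back to "GATTACA"
```

So “it’s code” survives the correction either way; base-4 and base-2 describe the same information.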
MarlinBrando
3 years ago
Me: “I wonder if Scott is going to become a villain in this timeline, as well.”
Tim Buckley: “And then Scott says, ‘Choice… requires free will. You have code.'”
Me: “OH NO.”
Ian
3 years ago
Fuckin’ zing.
Twilight Faze
3 years ago
I get Scott’s skepticism. Not only is this uncharted territory for him but it’s also an enemy that tried to kill his best friends on more than one occasion. I wouldn’t exactly be in a forgiving mindset nor be accommodating either. That being said, it’s clear Zeke (I’m using his 1.0 name and still hoping he’ll run with it) is trying. Used to be he didn’t understand the point of video games and now he’s an addict like the rest of us. Used to be he didn’t understand the concept of names and now he’s trying to find himself. He’s… Read more »
Just don’t tell Zeke that. He’ll get all red-LED on you for that LOL
BakaGrappler
3 years ago
I think Scott may end up being the one who does Not-Zeke the most good. Because unlike the others, Scott is challenging NZ to do the hard inspections of themself. Scott is the only one not coddling NZ, so he’s quite the important foil for the droid’s development.
Plus, when somebody is consistently being a jerk, it can also be good to reflect that behavior and its consequences right back on the person instead of constantly coddling them and letting things slide.
Given Zeke’s research, it is highly unlikely that they don’t know why humans stopped using it. So the odds are good that Adolf was suggested as a way to get a reaction out of the third human. By having his attitude suddenly volleyed right back at them, this provides far more impetus to change than the constant coddling of Zeke’s current “jerk” setting.
Admiral Casual
3 years ago
Scott…tread carefully. I’d rather not piss this thing off.
Your car analogy is really good. I’m gonna steal it.
Born-blind people DO use their visual cortex for different tasks.
Your car might not fly, but long jumps are an option.
The spoilers gotta serve some purpose, right?
???
Oh neat! Thanks for the extra info. 🙂
…
Just because you don’t understand how those things work doesn’t mean nobody else knows how those things work.
We actually do understand how planes fly, lol. There is still a lot we don’t know about the other two topics, though, yes.
No no, we also understand how general anesthesia works. The books about it are like 3 inches thick; I took a picture when getting onboarded for my Master’s:
https://ibb.co/6rJdvgW
Oh, don’t misunderstand, I know we understand a lot of how general anesthesia works. I’m only saying that there’s also quite a bit of the details that we haven’t quite puzzled out about its effects. Last I checked, for example, we don’t know the precise mechanisms behind the sometimes permanent personality and behavioral changes attributed to general anesthesia use:
https://www.sciencedirect.com/science/article/pii/S0301008216301137
I’m not just talking out of my ass, believe it or not. 😛
I’m not so sure about that. Fairly sure physicists have flying down pat. I’m always open to be proven wrong, though. Best way to learn new things!
…That isn’t what the Dunning-Kruger effect is either, but, ironically, you seem to be exhibiting a case study of it
I can explain all three of the things you claim we can’t. And if someone with rudimentary science knowledge can, the doctors out there sure as hell can.
So why do advanced neural networks need a time with sleep-like patterns to not lose their function?
That’s how I built my first kite. It didn’t fly. Then I changed it to use the Bernoulli effect.
AI is not about code, it is about data
Just when he starts becoming a respectable character, too……
Of all the epic fights and showdowns that could happen on A&D, this is the one I’ve been looking forward to most.
IKR real edge-of-your-seat stuff.
Zeke’ll break his legs
But like the cyborg he is, that doesn’t stop him.
Oh quick note- if the brain bomb becomes a Pitch Meeting moment (Super easy! Barely an inconvenience!) I will be extremely disappointed.
Originally, “computer” was a profession:
https://en.wikipedia.org/wiki/Computer_(job_description)
Being a dick and deciding not to act on it is a very human thing to do.