
Identity, p17

May 7, 2021 by Tim


68 Comments
Pocket Astronomer
3 years ago

The thing is, with a lot of the AI software that gets developed these days, we don’t understand all of the details behind it, even though it is “just code”. Have a look at CGP Grey’s video on AI for more info on why the “just code” idea doesn’t really hold up.

wkz
3 years ago

… and that’s when we move into GMO territory. “They don’t even know what they’re doing when they throw things together until they get something that seems to ‘work’. But they don’t know the full extent of how it works, do they?”

krabcat
3 years ago
Reply to  wkz

Generally with GMOs they are specifically selecting genes to add, and where to put them. While the “full extent” of how it works is not 100% understood, that has more to do with our current understanding of the intricacies of how genes interact than anything else. On the other side of things, traditional breeding techniques can include exposure to radiation or mutagenic chemicals to randomly mix up the genes a bit; then you see if you got any useful outcomes. Some of the benefits of using these traditional methods over GMOs are that they are relatively cheap, you don’t need… Read more »

Brian Glenn
3 years ago
Reply to  wkz

So circa 1950 that was pretty accurate: they would expose plants to high levels of radiation to force mutations and then see if anything worked out for the better (rare, but when you expose 500,000 plants to mutations you get a few golden apples). Today’s GMOs, however, take specific genes that do specific things and introduce them to a parent plant. This is the equivalent of upgrading surgical tools from shotguns to scalpels. Can things still go wrong? Sure, but not nearly as badly as performing an appendectomy with a 12 gauge.

TomB
3 years ago

I don’t think that’s specific enough. We do understand all of the details behind it – we developed the networks. What we have trouble foreseeing is the way the networks will inter-relate, how the networks’ internal path weightings will dynamically change, and how that will all together produce unexpected emergent behaviours. That’s just the limits of our human brain’s processing, rules handling, and memory showing themselves. On the other hand, a meatbeing can still easily make near instant inferences with high degrees of validity where even the best AI system can when it comes to data that… Read more »

KindlyPizza
3 years ago
Reply to  TomB

Yeah, it took a few billion years for us to get here with that kind of dynamic processing. Computers have had *checks notes* 100 years. Not a stretch to imagine they’re gonna surpass us in the next 100, if not less.

jere
3 years ago
Reply to  TomB

Exactly! There is a VERY BIG DIFFERENCE between “we do not know which pixels the network bases its decision on, whether it sees a cat or a dog” and “maybe tomorrow it decides it wants to kill us all”. We do in fact understand very, very well how modern AI works, and there is a whole branch of research called “Explainable AI”. We do not necessarily always completely understand how it weights its input data, but that does not mean that anything could happen. A network built to classify images has NO WAY of ever doing something else. That just… Read more »
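(Editor’s aside: jere’s point about a fixed output space can be sketched in a few lines. This is a toy, pure-Python stand-in — the “cat”/“dog” labels, random weights, and single linear layer are all made up for illustration, not any real model. Whatever input you feed it, it can only ever answer with one of the labels it was built with.)

```python
# A minimal sketch of jere's point: a classifier's output space is fixed by
# construction. Whatever its (here: random, hypothetical) weights, this
# network can only ever answer "cat" or "dog" -- never anything else.
import math
import random

LABELS = ["cat", "dog"]  # the only outputs the model is built with

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # shift by max for stability
    s = sum(exps)
    return [e / s for e in exps]

def classify(pixels, weights):
    """One linear layer over flattened pixels, then softmax over LABELS."""
    logits = [sum(w * p for w, p in zip(row, pixels)) for row in weights]
    probs = softmax(logits)
    return LABELS[probs.index(max(probs))], probs

rng = random.Random(0)
weights = [[rng.uniform(-1, 1) for _ in range(16)] for _ in LABELS]

# Even random garbage input still maps into {cat, dog}:
label, probs = classify([rng.uniform(-1, 1) for _ in range(16)], weights)
assert label in LABELS and abs(sum(probs) - 1.0) < 1e-9
```

No input exists that makes this function return anything outside `LABELS` — misclassification is possible, new behaviour is not.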

Eva
3 years ago
Reply to  jere

Your car analogy is really good. I’m gonna steal it.

7eggert
3 years ago
Reply to  jere

People born blind DO use their visual cortex for different tasks.

Your car might not fly, but long jumps are an option.

Pulse
3 years ago
Reply to  7eggert

the spoilers gotta serve some purpose right

KillerDragon989
3 years ago
Reply to  Pulse

???

Pocket Astronomer
3 years ago
Reply to  TomB

Oh neat! Thanks for the extra info. 🙂

7eggert
3 years ago
Reply to  TomB

We already did use machine learning to create programs on an FPGA – and one of the programs would not run correctly when a seemingly completely unrelated circuit was eliminated. The interference from the detached circuit influenced the other parts in just the right way. It’s safe to say that this one simple chip surpassed whatever its creators could reasonably have expected.

Meatbags are especially good at not acknowledging the things that are outside our usual experiences. We remain focused on whatever we’re currently doing.

Jack0r
3 years ago
Reply to  7eggert

That doesn’t apply. That the ML algo abused a design flaw of the FPGA, totally within the constraints of its programming, doesn’t mean that it surpassed anything. The algo did exactly what it was told to do: try random stuff, use a heuristic to determine if it was on a useful path, and then evaluate the result until it finds something worthwhile.

To extend your example ad absurdum: since the point of an ML algo is to find a solution that wasn’t specified by the programmer, every single result of an ML algo would “surpass whatever its creator expected”.
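(Editor’s aside: the loop Jack0r describes — try random stuff, keep what the heuristic likes, evaluate — fits in a few lines. The fitness function, step size, and iteration count below are all arbitrary illustrative choices, not anything from the FPGA experiment.)

```python
# A minimal sketch of heuristic-guided random search on a toy problem:
# maximise f(x) = -(x - 3)^2, whose best value is 0 at x = 3.
import random

def fitness(x):
    return -(x - 3.0) ** 2

def random_search(steps=10_000, seed=42):
    rng = random.Random(seed)
    best_x, best_f = 0.0, fitness(0.0)
    for _ in range(steps):
        candidate = best_x + rng.uniform(-1, 1)  # "try random stuff"
        f = fitness(candidate)                   # "evaluate the result"
        if f > best_f:                           # heuristic: keep improvements
            best_x, best_f = candidate, f
    return best_x, best_f

x, f = random_search()
assert abs(x - 3.0) < 0.01  # lands near the optimum with zero "understanding"
```

The loop finds a good answer without ever representing *why* it is good — which is exactly the sense in which nothing here “surpasses” its programming.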

7eggert
3 years ago
Reply to  Jack0r

In that sense, we’re just a bunch of chemicals interacting in a funny way. We should not be considered any more alive or intelligent than a campfire. We do surpass the expectations of a Stanley Miller experiment, but that amounts to nothing.

Kaitensatsuma
3 years ago

Except we do know: it’s literally just stacks and stacks and stacks of TRUE/FALSE guesses to sets of questions, with Correct/Incorrect matches to go with them. The thing we don’t understand is how it comes to a wrong conclusion when it’s an ML model built up using an unsupervised or hybrid learning approach: more popular because they’re less reliant on manual dataset production and thus cheaper – one of capitalism’s favorite words – and from a programmer’s standpoint they can also find relationships that weren’t initially accounted for. Which can be problematic. Like when… Read more »

michael pollack
3 years ago

We still don’t truly understand how planes fly. We don’t know how general anesthesia works. We know dreaming is important, but we don’t know why. There is a bunch of crap out there we rely on that we just can’t explain.

Kaitensatsuma
3 years ago


Just because you don’t understand how those things work doesn’t mean nobody else knows how those things work.

Vandril
3 years ago

We actually do understand how planes fly, lol. There is still a lot we don’t know about the other two topics, though, yes.

Kaitensatsuma
3 years ago
Reply to  Vandril

No no, we also understand how general anesthesia works. The books about it are like 3 inches thick; I took a picture when getting onboarded for my Masters.

https://ibb.co/6rJdvgW

Vandril
3 years ago
Reply to  Kaitensatsuma

Oh, don’t misunderstand, I know we understand a lot of how general anesthesia works. I’m only saying that there’s also quite a bit of detail that we haven’t quite puzzled out about its effects. Last I checked, for example, we don’t know the precise mechanisms behind the sometimes permanent personality and behavioral changes attributed to general anesthesia use:

https://www.sciencedirect.com/science/article/pii/S0301008216301137

I’m not just talking out of my ass, believe it or not. 😛

7eggert
3 years ago
Reply to  Vandril

We generally know about the concepts used in flying, and we’re quite good at guessing good shapes to achieve it, but there is plenty that we don’t know. To say “we know” is a Dunning-Kruger thing.

Vandril
3 years ago
Reply to  7eggert

I’m not so sure about that. Fairly sure physicists have flying down pat. I’m always open to being proven wrong, though. Best way to learn new things!

7eggert
3 years ago
Reply to  Vandril

We know the physics of flying in perfect gases, but air behaves as something between a gas and a liquid. Our formulas are wrong.

Of course, they’re not that wrong; e.g. if you use Newton to calculate the solar system and apply some corrections, you’ll be fine too, but to say that Newton explains it and we know everything (instead of using the Einstein formulas) doesn’t describe our reality. At larger scales we can’t describe our Milky Way purely by Einstein; we need to add unknown (“dark”) matter and unknown energy.

Vandril
3 years ago
Reply to  7eggert

That’s really quite interesting. Doing a little bit of research into specifically what we don’t know about flight reveals there do seem to be some mysteries left. By the looks of things, we know how things happen with flight (and thus how to work with it), but not why they happen. If you keep looking at something that happens with flight and ask “why”, and then ask “why” to the answer as well, you eventually run into a wall with current science. Which makes sense, considering we don’t know everything about physics. It might be less that our formulas are… Read more »

Kaitensatsuma
3 years ago
Reply to  7eggert

That isn’t what the Dunning-Kruger effect is either, but, ironically, you seem to be exhibiting a case study of it

7eggert
3 years ago
Reply to  Kaitensatsuma

The study says that your confidence-of-knowing rises fast at the start, but as you know more, your confidence increases at a slower rate. Or in other words: newbies tend to be over-confident (compared to their real knowledge).

I do experience some over-confidence each time I learn something new.

There was a time when people were discouraged from studying physics because “we already know everything”. Today, with more knowledge, we know nothing compared to what we know that we could know.

Pulse
3 years ago

I can explain all three of the things you claim we can’t. And if someone with rudimentary science knowledge can, the doctors out there sure as hell can.

DField
3 years ago
Reply to  Pulse

That’s the problem with a rudimentary knowledge of science. It’s lies-to-children: a basic explanation that doesn’t paint an actually complete picture but is “good enough”. The thing with science, though, is that if science doesn’t know the entire answer 100%, it assumes it knows none of it. This is because there have been many times where tiny discoveries have completely uprooted everything we know about a subject. All of the examples listed above fall into the category of “we’re missing a piece of the puzzle; the explanation we have does not completely fit because there are a whole… Read more »

7eggert
3 years ago
Reply to  Pulse

So why do advanced neural networks need periods with sleep-like patterns in order not to lose their function?

Swiftbow
3 years ago

You’re not entirely wrong, but we DO know how planes fly. It’s fairly simple… frankly. Sufficient forward momentum increases air pressure. Flapping does the same thing, but by pushing downwards. Sufficient air pressure creates something akin to a solid surface (physically speaking).

7eggert
3 years ago
Reply to  Swiftbow

That’s how I built my first kite. It didn’t fly. Then I changed it to use the Bernoulli effect.

Muppet
3 years ago

AI is not about code; it is about data.

Urazz
3 years ago

And you can turn that around: humanity is nothing but biochemical reactions and such, as a comparison. lol

TomB
3 years ago

Scott is being a bit bigoted with his inability to recognize that, synthetic origins or not, the killbot could well be, for all intents and purposes, an intelligence competitive with his own meat-based processing system.

Ex Machina
3 years ago
Reply to  TomB

I’m not sure moral judgement is a useful lens for Scott’s position. Sure, “this is just a webcomic” and “Hollywood is just Hollywood”. The idea that artificial intelligences in fiction (right now) are emotionally relatable says more about fiction writers and their audience than about the reality of AI as sophisticated as Zeke2/Gizmo/Copplebrunch. I found the film “Ex Machina” one of the more intelligent variants as to how an AI might play on human weaknesses, exploiting emotion for its own purposes. The film leads the audience to relate to the junior programmer… Read more »

Rolan7
3 years ago
Reply to  Ex Machina

That’s an interesting explanation, but it depends on the goal of the AI. If it’s amassing wealth, then we might end up with soulless neural networks exploiting human labor and the environment in the name of quarterly earnings!

Ex Machina
3 years ago
Reply to  Rolan7

You don’t need robotic overlords. Corporations will do this for you. While run by human owners, they will act just as stupid, greedy, and shortsighted as they have traditionally been, possibly further fueled by going public, with investment funds pulling the strings.
The same goes for augmented humans. If they were found to be more efficient workers, regular humans might go extinct by losing their “economic habitat”. There’s a hard limit imposed by biology on us meatbags.

Anon A Mouse
3 years ago
Reply to  TomB

Consider his position, though. Despite it being a world of superheroes, we have not seen anything beyond a certain boundary; i.e. there has been super science and robots, but nothing like “mutants”, “aliens”, or the like. They were even attacked by robots before. Now a single robot comes up and begins talking as if it is “alive”, but it has been under the control of the enemy this entire time. It is far, FAR safer to assume this is all a very, very complex program designed to “seduce” (in the strictest sense of the word) and/or distract them. He has also… Read more »

R77
3 years ago
Reply to  Anon A Mouse

Yeah, Scott’s thinking is in line with the storyline of the movie Ex Machina. The worst possible situation is: they are sentient, and slowly gaining everyone’s trust, only to erase mankind once they find a way to replicate.
Better to hope they’re just lines of code in a meat grinder.

Leon
3 years ago
Reply to  TomB

Just when he starts becoming a respectable character, too…

Eldest Gruff
3 years ago

Of all the epic fights and showdowns that could happen on A&D, this is the one I’ve been looking forward to most.

Cragfast
3 years ago
Reply to  Eldest Gruff

IKR, real edge-of-your-seat stuff.

Leon
3 years ago
Reply to  Eldest Gruff

Zeke’ll break his legs

Eldest Gruff
3 years ago
Reply to  Leon

But like the cyborg he is, that doesn’t stop him.

Leon
3 years ago
Reply to  Eldest Gruff

-_- Okay, I think we need clarification. If they fight, Zeke’ll break SCOTT’S legs.

Brendan Keating
3 years ago

So there’s a lot of really good discussion about AI here, which is pretty cool. I wanna talk about something else: Chekhov’s Brain Bomb. It was an interesting choice to tell us, right from the beginning of this arc, that if our AI friend here so much as leaves the room, the bomb goes off. I have to imagine either it will misfire when he finally tries to leave, leading to a very angry and tense dynamic going forward, or Lucas or Scott breaks the news and… well, second verse same as the first. It’s very Hitchcockian of you, Tim-… Read more »

Brendan Keating
3 years ago

Oh quick note- if the brain bomb becomes a Pitch Meeting moment (Super easy! Barely an inconvenience!) I will be extremely disappointed.

7eggert
3 years ago

Maybe it’s one more reason for Gizmo to respect Scott – if I assume that G. knows about it. Scott is just as brutal to G. as G. would initially be brutal to S.

MarlinBrando
3 years ago

I hadn’t thought of the bomb through the lens of Chekhov’s gun. Like a fool, I kept wondering IF it would come into play, when I should’ve been wondering HOW it would come into play.

Brendan Keating
3 years ago

Personally, I’d like to see an approach at holistic AI creation. See, I think without emotions or instinct, humans would be more or less like the AIs we’re creating today. Without bounds and needs, we’d just regurgitate facts or answers when prompted, or perform functions when our strict logic demands it of us. In my opinion, our… shall I say, animal side is actually what gives us choice and autonomy. Whiiiiiich doesn’t say great things about meat consumption- but I am WAY too into steak to care about that. (Actually I have a line- I don’t eat octopus because… Read more »

Dom
3 years ago

Well, at the end of the day, we’re human, and thus we have humanity ‘in our veins’, so to speak.
What I mean is, we can try not to be emotional or have instincts, but we are NOT computers and cannot work like them. I tried to become one, but it takes a toll on your physical and mental health, so I quit that in recent years to start being a human instead. To live life, so to speak.

7eggert
3 years ago
Reply to  Dom

Originally, “computer” was a profession.

https://en.wikipedia.org/wiki/Computer_(job_description)

Mor
3 years ago

Did… Did Zeke just go all “Bastion narrator” on him? 😀

Lily
3 years ago

It is pretty clear that biological brains are just following a bunch of coding too, especially when you look at slightly less advanced brains of animals like insects.

Vandril
3 years ago
Reply to  Lily

Pretty much, yes. Coding is a skill, with the resulting code being a tool – one that expresses logic. “If this, then that”, “do this while that”, “true or false”; code is a way to express and define logic in the same way math is used to express and define reality as we experience it. I know a lot of people who take issue with the idea that human brains are “coded”, but it shouldn’t even really be up for debate as it’s not something to take exception to to begin with. Saying the human brain follows programming is the… Read more »
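(Editor’s aside: the three constructs Vandril quotes, written out literally — a toy sketch of “code as expressed logic”, with made-up conditions chosen purely for illustration.)

```python
# "If this, then that", "do this while that", "true or false" -- the three
# logic constructs from the comment above, as runnable Python.
hungry = True
snacks = 3

if hungry:                 # "if this, then that"
    action = "eat"

while snacks > 0:          # "do this while that"
    snacks -= 1

satisfied = (snacks == 0)  # "true or false"
assert action == "eat" and satisfied
```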

Henchman Twenty1
3 years ago

Sarcasm. One of the defining attributes of advanced sentience… for better or worse.

Verdiekus
3 years ago

Lmao, I’m with Zeke on this one.

Pulse
3 years ago

In theory, all life is just the execution of code stored in DNA. Hell, DNA is even a binary code.

MarlinBrando
3 years ago
Reply to  Pulse

I think you’d have to describe DNA as a quaternary code. Your point and argument stand: it’s code. It’s just not binary.
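(Editor’s aside: MarlinBrando’s correction is easy to make concrete. Four bases means each symbol is one base-4 digit, i.e. exactly two bits. The A/C/G/T-to-digit mapping below is an arbitrary illustrative choice, not biology’s.)

```python
# DNA as a quaternary code: each base is one base-4 digit (two bits),
# so a strand of n bases encodes 2n bits of information.
BASE_TO_DIGIT = {"A": 0, "C": 1, "G": 2, "T": 3}  # arbitrary mapping

def strand_to_base4(strand):
    """Interpret a DNA strand as a base-4 number."""
    value = 0
    for base in strand:
        value = value * 4 + BASE_TO_DIGIT[base]
    return value

# "GATTACA" reads as the base-4 digits 2,0,3,3,0,1,0:
assert strand_to_base4("GATTACA") == int("2033010", 4)
# Eight Ts (digit 3) fill sixteen bits:
assert strand_to_base4("T" * 8) == 4**8 - 1
```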

MarlinBrando
3 years ago

Me: “I wonder if Scott is going to become a villain in this timeline, as well.”

Tim Buckley: “And then Scott says, ‘Choice… requires free will. You have code.'”

Me: “OH NO.”

Ian
3 years ago

Fuckin’ zing.

Twilight Faze
3 years ago

I get Scott’s skepticism. Not only is this uncharted territory for him but it’s also an enemy that tried to kill his best friends on more than one occasion. I wouldn’t exactly be in a forgiving mindset nor be accommodating either. That being said, it’s clear Zeke (I’m using his 1.0 name and still hoping he’ll run with it) is trying. Used to be he didn’t understand the point of video games and now he’s an addict like the rest of us. Used to be he didn’t understand the concept of names and now he’s trying to find himself. He’s… Read more »

7eggert
3 years ago
Reply to  Twilight Faze

Being a dick and deciding not to act on it is a very human thing to do.

Twilight Faze
3 years ago
Reply to  7eggert

Just don’t tell Zeke that. He’ll get all red-LED on you for that LOL

BakaGrappler
3 years ago

I think Scott may end up being the one who does Not-Zeke the most good. Because unlike the others, Scott is challenging NZ to do the hard inspections of themself. Scott is the only one not coddling NZ, so he’s quite the important foil for the droid’s development.

Pyre
3 years ago
Reply to  BakaGrappler

Plus, when somebody is consistently being a jerk, it can also be good to reflect that behavior and its consequences right back on the person instead of constantly coddling and letting things slide.

Given Zeke’s research, it is highly unlikely that they don’t know why humans stopped using it. So the odds are good that Adolf was suggested as a way to get a reaction out of the third human. By having his attitude suddenly volleyed right back at them, this provides far more impetus to change than the constant coddling of Zeke’s current “jerk” setting.

Admiral Casual
3 years ago

Scott…tread carefully. I’d rather not piss this thing off.