Author Topic: Conscious Artificial Intelligence?  (Read 1308 times)


Offline Billzbub

  • Frequent Poster
  • ******
  • Posts: 2669
  • I know you know I know
Conscious Artificial Intelligence?
« on: April 04, 2017, 03:08:07 PM »
During the show this week, Jay and Steve debated the likelihood of an AI achieving consciousness.  Steve believes that there's no good reason to try for it, and that people will basically not do it because there's no real incentive.  Jay's view was that people will try it just to see if they can.  I think Steve is crazy to think that people aren't currently trying to figure out how to make a conscious AI.  I look forward to next week's show to see if Jay comes back with some info about existing programs to do just that.  What do you all think?

Offline Mr. Beagle

  • Frequent Poster
  • ******
  • Posts: 3039
Re: Conscious Artificial Intelligence?
« Reply #1 on: April 04, 2017, 03:24:03 PM »
I'm trying to recall all of the arguments in Dan Dennett's "Consciousness Explained," but the one thing that sticks with me is that we have to keep out the tendency to assume that there is some "magic" that explains human consciousness.
Mister Beagle
The real world is tri-color

Offline murraybiscuit

  • Off to a Start
  • *
  • Posts: 28
Re: Conscious Artificial Intelligence?
« Reply #2 on: April 04, 2017, 05:08:17 PM »
The disagreement here is largely philosophical IMO. What does consciousness mean? Philosophers have spent years trying to pin down ideas like "mind", "intelligence", "consciousness", "thought". We don't know that other things have "consciousness" (let's not go there with animal mirror tests), so we are left to project subjective phenomenology onto other minds (the hard problem). Bear with me here. Don't roll your eyes just yet.

This topic comes up largely in sci-fi scenarios about the future, where consciousness really means "intent", "agency" and "desire". I see these assumptions as basically teleological - the human is goal-directed - which is problematic from a scientific perspective. Humans tend to project these properties onto other things for some reason. We anthropomorphize everything from animals to rocks to cartoon characters. The idea is basically that if some thing were created, it could become dangerous to us all if it became clever enough "because it wants to be". "Wants" being the operative word. This isn't just confined to Hollywood androids; it also takes the form of Hollywood aliens.

The worry is that machines will pose a biological existential threat to us. I'm not sure why this would be the case. What would be their aim / end goal? I get what humans want - food, shelter, reproduction etc. What does a machine want, and why would that conflict with human needs? I just don't see a machine doing something that it hasn't been programmed to do. Human behavior is largely emergent from our biological interaction with our environment. I think futurists largely ignore the biological basis of our behavior, as though consciousness is some kind of detached computational process. The embodiment paradigm has largely thrown a spanner in the works of the computational model of mind. My contention is that consciousness is largely an emergent property of biology (what else could it be?), and so to replicate "consciousness", surely you therefore need to replicate the underlying physiology? We're a loooooong way off from that currently.

"Could we intentionally design an 'intelligence' that destroys us?" I guess so. "Could we inadvertently design an intelligence with a bug that could wipe us out?" I guess, but we would need to do that, we have no incentive to do so, and I don't see how a meat-popsicle algorithm is suddenly going to decide of it's own accord one day that it should damage a bunch of humans. My viewpoint is, in order for a machine to be a genocidal threat:

1. It doesn't have to be humanoid
2. It doesn't have to be clever

For aeons we have been creating dumb things that do collateral damage, killing others, ourselves and those we love. Minefields, punji sticks, tripwires, you name it. AI in modern warfare is just a more efficient incarnation with fewer false positives. I'm still not sure why it would need to be humanoid. We already have drones. Human bodies seem to suck at a lot of the things involved: getting places fast, accuracy, heavy lifting etc. So if the question is whether we're going to have cyborg terminators indistinguishable from humans that will infiltrate our ranks anytime soon... and I look at the current field of robotics... I have to laugh.

I'm also a bit bemused by the infatuation with robotics in the home. The Stepford-wife styled AI. The way I see assistive domestic tech is more with bespoke single-purpose machines. These wouldn't be humanoid, largely because the human body is particularly ill-suited to the tasks we want robots to assist us with. Assisted tasks typically involve repetition or burden. I don't see why we need to create a whole plethora of complex components in a single machine to accomplish this. In some ways sci-fi is terrible at future tech prediction. If you had asked somebody in the middle ages where tech would be in 2000 AD, they would probably have drawn you a mechanical horse, or a mechanized carriage driver. In some abstract way this would be true, but we now have self-driving cars, autonomous combine harvesters and autonomous mining trucks. There doesn't need to be a human or a humanoid involved. Why go to all the effort of articulating a fake human body to accomplish these tasks?

The other avenue of AI development is in augmentation (H+). I can see a future there: with biotech at the cellular level and prosthetics / implants at the somatic level. But that's not really AI. We've had assistive therapeutic tech since the dawn of time: wheelchairs, eyeglasses, hearing aids, pacemakers, stents, etc. I don't see why this wouldn't continue. I see microbiology posing a far bigger challenge in the medium to long term than "perhaps one day consciousness simulators, could your brain get hacked?". The ethical issue of the next generation will be "who will have access to life-preserving and life-assisting technologies?". People will ultimately want to supplement their existing bodies, not turn into cyborgs. This means life extension, genetic determination, genetic therapy. How will the increasing chasm between haves and have-nots play out in the areas of reproduction and longevity? We will obviously make strides in neural mapping and (neural) simulation, but I'm not sure how that translates meaningfully into "conscious experience". We just don't know enough about the wetware, and I'm not sure we're going to anytime soon. Things like fMRI are clunky, low-resolution, high-latency approximations of what's going on at a neural level. The human connectome is being mapped, but we really aren't much further than joining some of the dots at this stage. So I'm with Steve and Cara on the BCI / simulation stuff. Even if we make advances in BCI, I'd say that would most likely be in therapeutic contexts (sensory trauma interfacing, metrics, possibly hormonal or chemical regulation & delivery, chronic pain therapy etc). We're not going to be teaching ourselves kung-fu with a software upgrade.

I think the singularity-futurist stuff is all over the map, so forgive me for not being particularly accommodating towards it. Don't get me started on asking Bill Gates, Stephen Hawking and co. to weigh in on the matter. I respect these guys, but they really need to stick to their areas of expertise.

« Last Edit: April 04, 2017, 05:33:28 PM by murraybiscuit »

Offline daniel1948

  • Stopped Going Outside
  • *******
  • Posts: 4543
  • Cat Lovers Against the Bomb
Re: Conscious Artificial Intelligence?
« Reply #3 on: April 04, 2017, 06:07:33 PM »
Until we actually understand what consciousness is, I don't think we're in a very good position to create it intentionally. But as to the question of why create intelligent humanoid robots, the answer is simple and primal: sex bots. Ugly old men with more money than social skills would buy them in droves. I'd buy one. Women might even buy a few.

I don't think we're ever going to create conscious A.I. But as others have commented elsewhere in response to Steve's assertion that there's no reason to create it, if it does happen, it might not be intentional. Maybe we'll build a computer so big, with an OS so sophisticated, that consciousness will emerge in it. I don't think it will happen, but maybe...

As to what it would want: Perhaps it would want to grow ever bigger by the addition of more processors, and it would want more electricity to power itself. And of course it would want to protect itself from being shut off by its creators when they realize what they've done.
Daniel
----------------
"Anyone who has ever looked into the glazed eyes of a soldier dying on the battlefield will think long and hard before starting a war."
-- Otto von Bismarck

Offline The Latinist

  • Cyber Greasemonkey
  • Technical Administrator
  • Frequent Poster
  • *****
  • Posts: 3933
Re: Conscious Artificial Intelligence?
« Reply #4 on: April 04, 2017, 07:21:37 PM »
Daniel, why on earth would you want to create an intelligent or conscious sex bot?  Intelligence would not be necessary to its function, and it would introduce huge moral problems.  I would argue that you're talking about enslaving intelligent beings to serve you sexually, an idea which I find abhorrent.
« Last Edit: April 04, 2017, 07:27:30 PM by The Latinist »
I would like to propose...that...it is undesirable to believe in a proposition when there is no ground whatever for supposing it true. — Bertrand Russell

Offline 2397

  • Seasoned Contributor
  • ****
  • Posts: 905
Re: Conscious Artificial Intelligence?
« Reply #5 on: April 04, 2017, 07:23:04 PM »
In episode 405 >36:00 they talked about uploading the mind, and merging with machines. Which I think means you'd have to have machines that are capable of sentience, to have something you can put human minds into.

Unless it was about storing the mind there in an inactive state, waiting to transfer it back to an organic brain. Or if you have to have an organic brain hooked up to the machines for them to work.

Until we actually understand what consciousness is, I don't think we're in a very good position to create it intentionally. But as to the question of why create intelligent humanoid robots, the answer is simple and primal: sex bots. Ugly old men with more money than social skills would buy them in droves. I'd buy one. Women might even buy a few.

That's what I was thinking too; there's going to be a broad variety of demands and interests in that market. Although I suppose consciousness opens up the possibility of rejection. And the issue of consent. Possibly cheating.

If I were going to go by stereotypes, I think women would have the greater interest in AI-equipped sex robots, judging from men having more of an interest in visual pornography, and women being more into stories. But that might be bullshit. I don't talk with people about their porn preferences, so I can't even rely on anecdotes here.

Offline Mr. Beagle

  • Frequent Poster
  • ******
  • Posts: 3039
Re: Conscious Artificial Intelligence?
« Reply #6 on: April 04, 2017, 07:56:09 PM »
By coincidence, this story on Dan Dennett just showed up on my BBC feed:

http://www.bbc.com/news/science-environment-39482345

It comes close to my understanding of Dennett's view, but a bit too reductionist.

As for sex, my view is that some approximation of Woody Allen's Orgasmatron is closest to near-term reality. I have forgotten the name of the movie, but this always seemed to me to be more virtual reality than robot, and very doable.

One of my favorite movies is Bicentennial Man, starring Robin Williams. Williams's character evolves from robot to near-man, while the humans around him, increasingly dependent on robotic organs, become more robotic themselves.
Mister Beagle
The real world is tri-color

Offline Soldier of FORTRAN

  • Stopped Going Outside
  • *******
  • Posts: 5782
  • Cache rules everything around me.
Re: Conscious Artificial Intelligence?
« Reply #7 on: April 04, 2017, 08:47:00 PM »
After watching Westworld, I don't want any sex bots smarter than a toaster. 
Every soup ladled to the hungry, every blanket draped over the cold signifies, in the final sense, a theft from my gigantic paycheck.

Offline daniel1948

  • Stopped Going Outside
  • *******
  • Posts: 4543
  • Cat Lovers Against the Bomb
Re: Conscious Artificial Intelligence?
« Reply #8 on: April 05, 2017, 09:25:58 AM »
Daniel, why on earth would you want to create an intelligent or conscious sex bot?  Intelligence would not be necessary to its function, and it would introduce huge moral problems.  I would argue that you're talking about enslaving intelligent beings to serve you sexually, an idea which I find abhorrent.

Perhaps I had not given it adequate thought, but I figured a sex bot would be programmed to enjoy being a sex bot. Either that, or it would be intelligent but not conscious, and therefore able to simulate an actual person without having any awareness. Like my washing machine: it washes my clothes without pay or recompense or any choice in the matter, but is not a "slave." A sex bot would be more than a Real Doll because it could interact intellectually and provide companionship as well as sex. But without consciousness it would still just be a machine, first cousin to the washing machine. A very sophisticated computer chat bot installed in an ambulatory Real Doll.

An actual human companion would be my first choice, but some of us lack the social skills or physical attractiveness to find a partner.
Daniel
----------------
"Anyone who has ever looked into the glazed eyes of a soldier dying on the battlefield will think long and hard before starting a war."
-- Otto von Bismarck

Offline The Latinist

  • Cyber Greasemonkey
  • Technical Administrator
  • Frequent Poster
  • *****
  • Posts: 3933
Re: Conscious Artificial Intelligence?
« Reply #9 on: April 05, 2017, 09:39:10 AM »
I agree that you have not thought it through.  What you seem not to be considering is that a machine possessed of actual intelligence and consciousness would be every bit as deserving of autonomy of mind and body as you or I, and to control its mind in order to enslave it to your whims would be horrifyingly evil.  Seriously, I've grown to respect your ethical positions on a lot of things, but what you're describing is monstrous.
I would like to propose...that...it is undesirable to believe in a proposition when there is no ground whatever for supposing it true. — Bertrand Russell

Offline daniel1948

  • Stopped Going Outside
  • *******
  • Posts: 4543
  • Cat Lovers Against the Bomb
Re: Conscious Artificial Intelligence?
« Reply #10 on: April 05, 2017, 11:30:17 AM »
Well, considering that I'm never going to have an opportunity to buy a sex bot, I'm going to put this into the category of biting the heads off of animal crackers, or chewing the ears off of chocolate rabbits. ;D
Daniel
----------------
"Anyone who has ever looked into the glazed eyes of a soldier dying on the battlefield will think long and hard before starting a war."
-- Otto von Bismarck

Offline Shibboleth

  • Too Much Spare Time
  • ********
  • Posts: 7533
Re: Conscious Artificial Intelligence?
« Reply #11 on: April 05, 2017, 12:38:55 PM »
Do we even really know what consciousness is in humans?
A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.

Offline daniel1948

  • Stopped Going Outside
  • *******
  • Posts: 4543
  • Cat Lovers Against the Bomb
Re: Conscious Artificial Intelligence?
« Reply #12 on: April 05, 2017, 12:59:04 PM »
Nope.
Daniel
----------------
"Anyone who has ever looked into the glazed eyes of a soldier dying on the battlefield will think long and hard before starting a war."
-- Otto von Bismarck

Offline The Latinist

  • Cyber Greasemonkey
  • Technical Administrator
  • Frequent Poster
  • *****
  • Posts: 3933
Re: Conscious Artificial Intelligence?
« Reply #13 on: April 05, 2017, 01:01:37 PM »
Well, considering that I'm never going to have an opportunity to buy a sex bot, I'm going to put this into the category of biting the heads off of animal crackers, or chewing the ears off of chocolate rabbits. ;D

I don't think it's funny.  I can think of few things less funny than your desire to use another intelligent being as a sex slave.
I would like to propose...that...it is undesirable to believe in a proposition when there is no ground whatever for supposing it true. — Bertrand Russell

Offline Billzbub

  • Frequent Poster
  • ******
  • Posts: 2669
  • I know you know I know
Re: Conscious Artificial Intelligence?
« Reply #14 on: April 05, 2017, 01:43:11 PM »
I read something about Daniel Dennett yesterday, too, and it was pretty relevant to this discussion.

Usually when we talk about consciousness, we are talking about it the way we experience it.  Dennett says that other beings like primates and dogs and whatnot probably experience a "sort-of" consciousness.  Humans have an overlapping set of brain functions that generates what we think of as consciousness, and animals have different overlapping functions that amount to a sort-of consciousness without being totally comparable to human consciousness.

I think the same could be true of artificial consciousness.  We could design a computer with all kinds of competing and cooperating processes that either mimic what we know of biological brains or introduce non-biological processes.  If we throw enough stuff in the stew, the result might be a computer that can think, come up with ideas, communicate, and want to live on.  The result might also include motivations or capabilities we don't understand or couldn't predict.  If the computer exists as an avatar in a virtual world like an advanced World of Warcraft or Minecraft where it can see and feel and touch and need resources to live, then I think we could definitely see some kind of consciousness occur, though it might not be similar enough to human consciousness for us to understand it.

I don't think we'll be in danger from such a thing.  I'm more concerned about us being a danger to it.  The question is, at what point do we value the existence of a software/hardware system as much as we value a human being's existence?  Thinking about this will force us to define in much clearer terms why we value a human life.  It makes me feel very nihilistic.  I think I'll go have an existential crisis now.