Author Topic: Episode #612  (Read 2374 times)

0 Members and 1 Guest are viewing this topic.

Offline arthwollipot

  • Stopped Going Outside
  • *******
  • Posts: 4711
  • Observer of Phenomena
Re: Episode #612
« Reply #30 on: April 04, 2017, 01:28:38 AM »
Quote from: arthwollipot
Of course there are exceptions, and I expect everyone to start listing them now. Here's one to get you going: ty-RAN-no-SAU-rus not TY-ran-no-SAU-rus.

Quote
Wait, isn't that an example instead of an exception?

It's an exception to the two-unemphasised-syllables rule of thumb that I was describing, so no.

Offline brilligtove

  • Stopped Going Outside
  • *******
  • Posts: 4599
  • Ignorance can be cured. Stupidity, you deal with.
    • Valuum
Re: Episode #612
« Reply #31 on: April 04, 2017, 02:24:56 AM »
Some of you know I'm running a tabletop RPG via videoconference. Recently the subject of AIs has become quite important to the development of that world. Between this episode and the latest Stratechery, some ideas began to solidify.

While I think Asimov's laws of robotics are almost useless for several reasons, their essence can be rephrased in a way that allows coherent action from an AI: ensure the long-term survival of humanity and humans (in that order).

If the first AGI has this concept at its core, there can't be a Skynet - at least not easily. (The core remaining ambiguity is what "humanity" means.) Any maleficent AI would be confronted immediately and, if at all possible, shut down.

...our collective thinking is summarized in the Seven AIs page of our game.
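To make the "(in that order)" part concrete, here's a minimal sketch of the idea as a lexicographically ordered objective. The plan names and probabilities are invented for illustration, not taken from our game:

Code: [Select]
# Minimal sketch: "humanity first, then individual humans" as a
# lexicographic objective. All plan names and numbers are invented.

def score_plan(plan):
    """Score a plan as a tuple ordered by priority. Python compares
    tuples element by element, so humanity's survival dominates."""
    return (plan["p_humanity_survives"], plan["p_individuals_survive"])

plans = [
    {"name": "status quo",     "p_humanity_survives": 0.90, "p_individuals_survive": 0.95},
    {"name": "drastic action", "p_humanity_survives": 0.99, "p_individuals_survive": 0.70},
]

best = max(plans, key=score_plan)
print(best["name"])  # -> "drastic action": the first priority wins the trade-off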
evidence trumps experience | performance over perfection | responsibility – authority = scapegoat | emotions motivate; data doesn't

Offline stuque

  • Brand New
  • Posts: 1
Re: Episode #612
« Reply #32 on: April 04, 2017, 03:16:09 AM »
I thought the AI discussion was disappointing. It's a shame Cara and Steve didn't engage with the topic, as many of the examples of AI safety are fun and thought-provoking. Instead, it was like hearing two ornithologists critique the field of aerodynamics, arguing that "true" artificial flight won't be achieved until we completely understand birds and start to build planes with wings that flap and have feathers.

Cara's claim that intelligence is more than the sum of its parts is not something I really understand: it sounds like mysticism to me. What is the extra stuff that's not in the parts?

Steve seems to believe that self-awareness is somehow the obvious missing ingredient, but, as he points out, it is not so obvious to lots and lots of AI experts. Maybe these AI experts are on to something? Maybe self-awareness and consciousness are not related to intelligence in any significant way? Maybe they are just side effects of biology? Maybe they actually hinder us instead of helping us? Or maybe they are just not useful concepts --- perhaps they are philosophical umbrella terms that bundle various ideas about minds and brains?

I think computation is a vastly more useful idea than "self-awareness", "consciousness", or "mind" --- terms that are rife with confusion. At least, it helps lay bare the "meat is magic" fallacy, i.e. the argument that human brains are special because they've been blessed with self-awareness, or consciousness, or intentionality, or a soul, etc.

Offline Tentacle Number 6

  • Off to a Start
  • *
  • Posts: 10
Re: Episode #612
« Reply #33 on: April 04, 2017, 06:49:04 AM »
I think the main point where I disagree with Steve and Cara is their claim that artificial self-awareness would require deliberate effort, and that there's not really any incentive to make one.

I think creating an AI that is self-learning is the Holy Grail of computing. It takes SO MUCH effort to create, say, a Go-playing AI that we'll basically never be able to use AI for the truly important questions, questions like "How will law X affect the economy?" or "How can we eliminate poverty?" These aren't questions where we can just write a cool algorithm and call it done. We basically need to be able to throw data at the computer and say "figure this out". To be able to do that, we basically have to make the AI able to learn.
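In miniature, "throw data at it and say figure this out" looks something like the following toy sketch (invented data, plain Python). Instead of hand-coding the rule y = 3x + 2, the machine recovers it from examples:

Code: [Select]
# Toy sketch of learning from data rather than programming the rule.
# Pure-Python gradient descent; the dataset and numbers are invented.

data = [(x, 3 * x + 2) for x in range(10)]  # the "rule" we pretend not to know

w, b = 0.0, 0.0  # the model starts knowing nothing
lr = 0.01        # learning rate
for _ in range(5000):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x  # nudge the weight toward less error
        b -= lr * err

print(round(w, 2), round(b, 2))  # ~3.0 and ~2.0: learned, not programmed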

And once it can learn, we have basically no idea where it will lead. Maybe it'll become self-aware, maybe not, but my point is there most definitely is an enormous incentive to try to make AI able to outgrow the slow, error-prone meatbags that write the code.

Offline daniel1948

  • Stopped Going Outside
  • *******
  • Posts: 4238
  • Cat Lovers Against the Bomb
Re: Episode #612
« Reply #34 on: April 04, 2017, 09:38:46 AM »
Quote from: brilligtove on April 04, 2017, 02:24:56 AM
While I think Asimov's laws of robotics are almost useless for several reasons, their essence can be rephrased in a way that allows coherent action from an AI: ensure the long-term survival of humanity and humans (in that order). [...]

Ensure the long-term survival of humanity. ----> Figure out how to keep a person alive forever, put one person in a Plexiglas case with tubes for feeding, breathing, and the necessary drugs, and kill everyone else so nobody interferes with the project.

Oops. Not what we hoped it would do.
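Here's that failure mode as a toy sketch. The plans and numbers are invented; the point is that the objective encodes only the literal metric, not what we meant by it:

Code: [Select]
# Toy "perverse instantiation": the objective below rewards only the
# literal metric. All plans and probabilities are invented.

candidate_plans = {
    "cooperate with everyone":        0.90,    # what we hoped for
    "one person in a plexiglas case": 0.9999,  # maximizes the literal metric
}

def naive_objective(plan):
    # "Probability that at least one human stays alive": nothing here
    # captures the intended meaning of "survival of humanity".
    return candidate_plans[plan]

print(max(candidate_plans, key=naive_objective))
# -> one person in a plexiglas case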

I don't actually think this will happen. I think that more down-to-earth A.I. will help the military design and deploy ever-more-efficient weapons, and we'll wipe ourselves out without the need for our A.I. to have any volition of its own.

Wasn't Skynet supposed to prevent us from wiping ourselves out, and it decided the best way to do that was to do it for us? Either way, we're toast. Even non-volitional A.I. (A.I. that really just does what we want it to) will still help us wipe ourselves out if we don't figure out how to get along with one another. And getting along with one another seems to be something we've never figured out how to do. And IMO, as long as there is religion, we never will.

A.I., at whatever level we manage to develop it, is just a tool. And right now the military, whose vocation is the science of killing people, has the most money and resources to take it the furthest and use it to its limits.
Daniel
----------------
"Anyone who has ever looked into the glazed eyes of a soldier dying on the battlefield will think long and hard before starting a war."
-- Otto von Bismarck

Offline brilligtove

  • Stopped Going Outside
  • *******
  • Posts: 4599
  • Ignorance can be cured. Stupidity, you deal with.
    • Valuum
Re: Episode #612
« Reply #35 on: April 04, 2017, 10:25:09 AM »
Quote from: daniel1948 on April 04, 2017, 09:38:46 AM
Ensure the long-term survival of humanity. ----> Figure out how to keep a person alive forever, put one person in a Plexiglas case with tubes for feeding, breathing, and the necessary drugs, and kill everyone else so nobody interferes with the project.

Oops. Not what we hoped it would do. [...]

The goal is to ensure the long-term survival of humans and humanity, not to make a meat puppet that could be accidentally destroyed by a natural disaster at any time. There are many assumptions built into this kind of moral code, including what humans think it means to be human and what humans think humanity is. Dogs and cats do not regularly violate their core program to serve and support humans. An AI with that kind of core motivator and a robust understanding of psychology, sociology, and actual human behaviours might do something that kills a lot of people in order to drastically increase the odds of long-term human survival. (In our future game-history, they think an IoT-based AI may have killed 10 million Americans and left 100 million homeless in an effort to stave off a full-scale world war, for example.)

The assumption that AI=Skynet is another instance of the "meat is magic" fallacy, I think.
evidence trumps experience | performance over perfection | responsibility – authority = scapegoat | emotions motivate; data doesn't

Offline brilligtove

  • Stopped Going Outside
  • *******
  • Posts: 4599
  • Ignorance can be cured. Stupidity, you deal with.
    • Valuum
Re: Episode #612
« Reply #36 on: April 04, 2017, 10:25:50 AM »
Quote from: stuque on April 04, 2017, 03:16:09 AM
I thought the AI discussion was disappointing. It's a shame Cara and Steve didn't engage with the topic, as many of the examples of AI safety are fun and thought-provoking. [...]

Welcome aboard. Insightful first post!
evidence trumps experience | performance over perfection | responsibility – authority = scapegoat | emotions motivate; data doesn't

Offline bligh

  • Brand New
  • Posts: 5
Re: Episode #612
« Reply #37 on: April 04, 2017, 03:21:01 PM »
A recurring theme in this thread seems to be surprise and disappointment with Cara and Steve's contributions to the AI segment.

What I find noteworthy, however, is the brothers' apparent fascination with Kurzweil, without any hint of critical thinking. Many, including myself, consider Kurzweil to be a pseudoscientist. I find it very interesting indeed that some (many?) sceptics are apparently susceptible to his ideas and have a blind spot regarding "the singularity".

See for example what Massimo Pigliucci writes about Kurzweil on the Rationally Speaking blog:

http://rationallyspeaking.blogspot.com/2011/04/ray-kurzweil-and-singularity-visionary.html

« Last Edit: April 04, 2017, 03:24:43 PM by bligh »

Offline 2397

  • Seasoned Contributor
  • ****
  • Posts: 712
Re: Episode #612
« Reply #38 on: April 04, 2017, 06:26:08 PM »
Ray Kurzweil? My impression was that their opinion of him has been waning, and that it started with them calling all the supplement-taking unfounded.

Offline Fast Eddie B

  • Frequent Poster
  • ******
  • Posts: 2384
Re: Episode #612
« Reply #39 on: April 04, 2017, 09:12:48 PM »
To further establish my nerd creds...

As a child, I named my parakeet Archie, an homage to Archaeopteryx!

"And what it all boils down to is that no one's really got it figured out just yet" - Alanis Morisette
• • •
"I doubt that!" - James Randi

Offline arthwollipot

  • Stopped Going Outside
  • *******
  • Posts: 4711
  • Observer of Phenomena
Re: Episode #612
« Reply #40 on: April 04, 2017, 11:27:00 PM »
Quote from: stuque on April 04, 2017, 03:16:09 AM
Cara's claim that intelligence is more than the sum of its parts is not something I really understand: it sounds like mysticism to me. What is the extra stuff that's not in the parts?
This made me stop and go "wait, what?" as well.

Offline daniel1948

  • Stopped Going Outside
  • *******
  • Posts: 4238
  • Cat Lovers Against the Bomb
Re: Episode #612
« Reply #41 on: April 05, 2017, 09:11:38 AM »
Quote from: Fast Eddie B on April 04, 2017, 09:12:48 PM
To further establish my nerd creds...

As a child, I named my parakeet Archie, an homage to Archaeopteryx!


For me, "Archie" brings to mind Archy and Mehitabel.

Mehitabel's Song: (from memory, so possibly not entirely correct)

i once was an innocent kit
with a ribbon my neck to fit
and bells tied onto it
and what the hell what the hell

and a maltese tom came by
with a come hither look in his eye
and a song that soared to the sky
and what the hell what the hell

and i followed adown the street
the pad of his rhythmical feet
o permit me again to repeat
what the hell

Daniel
----------------
"Anyone who has ever looked into the glazed eyes of a soldier dying on the battlefield will think long and hard before starting a war."
-- Otto von Bismarck

Offline gebobs

  • Not Enough Spare Time
  • **
  • Posts: 184
  • Me like hockey!
Re: Episode #612
« Reply #42 on: April 05, 2017, 11:11:48 AM »

Quote
I'd disagree that we won't have a drive to create sentient machines. The sexbot market alone has a lot of potential for it.

The current technology doesn't require sentience.

"You can lead a whore to culture, but you can't make her think."
- Dorothy Parker

Offline Ah.hell

  • Poster of Extraordinary Magnitude
  • **********
  • Posts: 10399
Re: Episode #612
« Reply #43 on: April 05, 2017, 11:23:14 AM »
I think Steve underrates the "let's see if we can do it" factor in the self-aware AI conversation. I agree it won't be for a while yet, but someone somewhere will do it just to be the first.

Offline gebobs

  • Not Enough Spare Time
  • **
  • Posts: 184
  • Me like hockey!
Re: Episode #612
« Reply #44 on: April 05, 2017, 12:07:40 PM »
Quote from: Ah.hell on April 05, 2017, 11:23:14 AM
I think Steve underrates the "let's see if we can do it" factor in the self-aware AI conversation. I agree it won't be for a while yet, but someone somewhere will do it just to be the first.

I think the Mt. Everest analogy ("because it's there", "let's see if we can do it", etc.) may not really be apt. To get there, all anyone had to do was go a bit farther than the guy before. The gulf between non-sentient and sentient AI seems to be vast, perhaps too vast for any motivation to overcome. Surely some of what we dream will be beyond our endeavor. If Everest were twice as high, no one would be able to climb it.
« Last Edit: April 05, 2017, 12:09:55 PM by gebobs »