Author Topic: Episode #612  (Read 2610 times)


Online daniel1948

  • Stopped Going Outside
  • *******
  • Posts: 4327
  • Cat Lovers Against the Bomb
Re: Episode #612
« Reply #15 on: April 02, 2017, 04:20:39 PM »
I personally hold the opinion, with no evidence to back it up, that true cognitive A.I. will never exist. Cheap software, running on the simplest of computers, can beat most human chess players. Self-driving cars are just around the corner. All sorts of expert systems can do lots of amazing stuff, and will do more and more. But true independent thought, either with or without self-awareness, I regard as a pipe dream.

And I agree with Steve that there's no need for computers to be self-aware.

HOWEVER, I also have no doubt that people will try to create true cognitive A.I. and self-awareness. Why do people climb Everest? Because it's there. Why do people rob banks? Because there's money in them. People will try to create A.I. and computer self-awareness just to prove that they can. Just because the idea is there. We could ban it in the U.S., and people would just go elsewhere. You can't stop technology. I personally think they will fail, and I hope I'm right. But Steve is being extremely naive if he thinks that people won't try merely because it would serve no necessary purpose.

Think of what a boon a fully-cognitive A.I. would be to scammers!

Isaac Asimov's first law of robotics was that a robot shall not harm a human. The first true robots were self-guided missiles whose only purpose was to kill people. Criminals and the military would both give their eye teeth for a more advanced A.I., the former so they can take your stuff, the latter so they can kill people. If I am wrong and the technology is possible, people will make it.

Even if it is possible, I'm not worried about it happening in my lifetime, but that's mainly because I'm an old man and my body is already falling apart.
Daniel
----------------
"Anyone who has ever looked into the glazed eyes of a soldier dying on the battlefield will think long and hard before starting a war."
-- Otto von Bismarck

Online 2397

  • Seasoned Contributor
  • ****
  • Posts: 824
Re: Episode #612
« Reply #16 on: April 02, 2017, 04:38:36 PM »
I got SoF wrong again. Of course I "knew" that some dinosaurs have a second brain near the base of their spine. Well, I'm 68 years old and learned about dinosaurs when I was a teen. I never heard that the textbooks had it wrong.

Don't humans have a second brain in the gut? https://www.scientificamerican.com/article/gut-second-brain/

Or is that just a metaphor? If there is a cluster of neurons there that functions independently of the brain, and it contains more neurons than many species we recognize as having brains, doesn't that count as another brain?

Offline arthwollipot

  • Stopped Going Outside
  • *******
  • Posts: 4866
  • Observer of Phenomena
Re: Episode #612
« Reply #17 on: April 03, 2017, 02:10:13 AM »
Cara's mispronounced word of the week: "Jestalt."



Did she also say diploDOCus?

She also said (twice) psitt-AC-o-SAU-rus.

It's a common rule of thumb in English that the emphasis goes on the vowel preceding a double consonant. It's not always true, but it's a good first approximation. Hence, PSITT-a-co-SAU-rus. Also, it's extremely common for there to be two unemphasised syllables together. Never more than two, but two is usually better than one in words that have four or more syllables. Hence, dip-LOD-o-cus not DIP-lo-DOC-us.

Of course there are exceptions, and I expect everyone to start listing them now. Here's one to get you going: ty-RAN-no-SAU-rus not TY-ran-no-SAU-rus.

Offline yarbelk

  • Brand New
  • Posts: 1
Re: Episode #612
« Reply #18 on: April 03, 2017, 02:29:27 AM »
I think an important point in the AI discussion was missed.  It was implied that there is no threat to humans from non-sentient AIs.  I posit that these could be as big a threat to humanity as some evil Roko's Basilisk http://rationalwiki.org/wiki/Roko%27s_basilisk style AI.  At least you could argue with a sentient AI.  I will refer to these by the more generic term of Machine Learning (ML) expert systems.  Also - forgive the meandering nature of this - I'm writing it at the end of a lunch break.

First: ML systems are fallible:

We can look at the impact of mistakes in machine learning. A relatively 'mild' example (mild as in, despite its horrible social impact, it's unlikely this killed anyone) was http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/

Second: People are fallible:

As a software engineer (aka, professional lazy person who writes the software your life runs on), if an ML service exists to solve one of my problems, I will use that service, and probably trust that it works within its published parameters.

What would the impact of similar mistakes be as ML starts to take a more prominent role in systems with higher risk, such as medical diagnosis and industrial automation?

Third: Distributed systems are Hard, non-deterministic distributed systems are scary

The problem is that with these systems, especially if they are consumed as a distributed service, you can end up with difficult-to-understand non-linear feedback loops and impossible-to-predict behaviour.  Anecdotally, from discussions with Google engineers, no one at Google can predict how Google search will respond to changes in its code.  Imagine this non-determinism affecting every networked system, from medical systems to self-driving cars.  It wouldn't take a malicious self-aware AI to cause huge damage, just an idiot programmer deploying test code to a ubiquitous ML service.
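As a toy illustration of how quickly coupling gets weird (purely hypothetical, a made-up example with no real ML in it), here is a Python sketch of two "services" that each adjust a parameter based on the other's last output. Whether the loop settles down or blows up depends on a single constant:

# Toy sketch: two "services" that each tune a parameter based on the
# other's last output. Purely illustrative; no real ML or real service here.

def run(gain, steps=20):
    a, b = 1.0, 1.0              # internal parameters of service A and B
    for _ in range(steps):
        a = a + gain * (b - a)   # A nudges itself toward B's output
        b = b - gain * (a + b)   # B reacts to the combined signal
    return round(a, 3), round(b, 3)

print(run(gain=0.1))   # weak coupling: the oscillation dies down
print(run(gain=1.5))   # stronger coupling: the values blow up

Neither service is wrong in isolation; the instability only exists in the interaction.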

An interesting exploration of this kind of threat is Peter Watts' Blindsight:
https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)

Offline Skip Nordenholz

  • Off to a Start
  • *
  • Posts: 31
Re: Episode #612
« Reply #19 on: April 03, 2017, 05:03:48 AM »
Wow, Steve is shockingly ignorant of AI risk concerns. That was incredibly disappointing to hear.

Though I agree with him on the timeline for when this is going to be an issue, I did have a big problem with a couple of things he said. I think the issue is that we don't really have a good definition of consciousness. At a gut level we think of consciousness as being like our own, and I agree that that may never happen - what would be the point? But consciousness is not a real goal for anything: humans do not have consciousness for the purpose of being conscious; consciousness is for the purpose of solving problems, and any AI is going to have its own version of it. Machines now are on a spectrum towards this: a poker-playing program desires to maximise the outcome of its plays, and it has an internal state and a history of how the other players have played that it uses to make future decisions. Real-world AI consciousness will be very alien to us, and even compared to other animals, so much so that people may not be able to agree whether it counts as consciousness - the goals are completely different - but that is irrelevant to the risk. I think there are other reasons why the risk is overstated, though.
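To make the poker example concrete, here is a minimal Python sketch (entirely made up, nothing like a real poker engine) of what I mean by an internal state and a history of opponents being used to make decisions:

# Minimal sketch of a decision-maker with internal state. Not a real poker
# engine; it just illustrates using a remembered history to choose an action.

class TinyPokerAgent:
    def __init__(self):
        self.opponent_folds = 0   # internal state: what we have seen so far
        self.opponent_calls = 0

    def observe(self, opponent_action):
        if opponent_action == "fold":
            self.opponent_folds += 1
        else:
            self.opponent_calls += 1

    def decide(self, pot, bet):
        seen = self.opponent_folds + self.opponent_calls
        fold_rate = self.opponent_folds / seen if seen else 0.5
        # expected value of bluffing versus checking, given the history
        ev_bluff = fold_rate * pot - (1 - fold_rate) * bet
        return "bluff" if ev_bluff > 0 else "check"

agent = TinyPokerAgent()
for action in ["fold", "fold", "call", "fold"]:
    agent.observe(action)
print(agent.decide(pot=10, bet=4))   # opponent folds a lot, so it bluffs

No one would call that conscious, but it is a system whose internal state, shaped by its history, determines its decisions - which is the kind of spectrum I mean.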

Online daniel1948

  • Stopped Going Outside
  • *******
  • Posts: 4327
  • Cat Lovers Against the Bomb
Re: Episode #612
« Reply #20 on: April 03, 2017, 09:57:20 AM »
Technology has risks. Technological advance is inevitable. Ergo, the human race is doomed. On the bright side, everyone else will be better off after we're gone.
Daniel
----------------
"Anyone who has ever looked into the glazed eyes of a soldier dying on the battlefield will think long and hard before starting a war."
-- Otto von Bismarck

Online Swagomatic

  • Frequent Poster
  • ******
  • Posts: 2344
Re: Episode #612
« Reply #21 on: April 03, 2017, 11:19:38 AM »
I recently attended a talk at ASU:  https://origins.asu.edu/events/future-of-artificial-intelligence


It was organized by Lawrence Krauss and the ASU Origins Project. The consensus of the assembled group seemed to be that the security of AI systems was the larger threat to humanity. No one seemed greatly concerned with the "rise of the machines."  The video on the left side of that page is Part One, and the video on the right side is the Q&A following the talk.


Edit: Fixed link
« Last Edit: April 03, 2017, 11:22:52 AM by Swagomatic »
Beware of false knowledge; it is more dangerous than ignorance.
---George Bernard Shaw

Offline Stephan

  • Brand New
  • Posts: 3
AI misconception (Re: Episode #612)
« Reply #22 on: April 03, 2017, 12:38:51 PM »
I was really surprised by Steve's comments with respect to Artificial Intelligence. I suspect there are several unstated assumptions, possibly coming from his neurologist perspective.

First, the claim that AIs don't make decisions is just baffling.  Of course, AIs make decisions all the time. Amazon's AI decides which books to suggest to you. Tesla's AI decides whether to brake or change lanes or run off the road. Google's AI decides which Korean translation to suggest for a given English sentence. Maybe there is an unstated "conscious" qualifier to the decisions an AI is not supposed to make?

I found Bob's simple example excellent - give an AI an optimisation goal (e.g. eliminate spam) and a heap of world knowledge, and it may well arrive at the solution to eliminate one of the root causes of spam, i.e. humans. Coming up with such plans has been standard for planning systems since the 1980s, without any of the current generation of Deep Learning (although they could not cope with the complexity of current common sense knowledge bases). The idea of solving a problem by trying to reduce it to (hopefully simpler) subproblems is the basic computation engine of PROLOG, one of the classic AI languages.
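For anyone who has not seen this style of system, here is a minimal Python sketch of the goal-reduction idea - backward chaining over invented toy rules. Real planners and PROLOG are far more sophisticated, but the principle is the same:

# Toy backward chainer: reduce a goal to subgoals via rules until facts are hit.
# The facts and rules are invented purely for illustration.

facts = {"can_modify_filters", "humans_send_spam"}

rules = {
    "eliminate_spam": [["block_all_senders"], ["remove_spam_sources"]],
    "block_all_senders": [["can_modify_filters"]],
    "remove_spam_sources": [["humans_send_spam", "can_act_on_world"]],
}

def prove(goal, depth=0):
    indent = "  " * depth
    if goal in facts:
        print(indent + goal + " (fact)")
        return True
    for subgoals in rules.get(goal, []):   # try each alternative reduction
        print(indent + goal + " <- " + ", ".join(subgoals))
        if all(prove(g, depth + 1) for g in subgoals):
            return True
    return False

print(prove("eliminate_spam"))

Nothing in the search cares which reduction it settles on; it just follows whatever chain of subgoals satisfies the goal, which is exactly why the choice of goals and knowledge matters so much.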

I also have the impression that Steve implicitly assumes that consciousness requires an architecture similar to the human brain - massively parallel, and possibly even using processes as in artificial neural networks. But that is, while not impossible, another unstated and quite far-fetched assumption. The Church-Turing Thesis https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis gives us good reasons to believe that all computation paradigms are, at least in principle, equally powerful, and that all reasonable encodings result in similar (up to a polynomial factor ;-) computational complexity (that last one may give me away as a theoretical computer scientist ;-). So it is quite plausible that a neural network is not the only way to consciousness. Moreover, we already have quite massive parallel processing engines in our GPUs, which are increasingly misused (or re-purposed) as compute engines. In the case of humans, the brain first developed to react to complex external stimuli, so its original purpose was pattern recognition and reaction, for which neural networks are excellent. Evolution always works with the material at hand, so that's why our way to consciousness led via a biological neural net. But it would need quite an argument to claim that that is the only way to consciousness.

I'd also argue that "consciousness" or "self-awareness" are red herrings. Neither of the two is a feature that is either designed into the system or not. Indeed, I would argue that both are emergent properties along a spectrum. The argument that "I didn't build it consciously, therefore it's not conscious" is simply false. I never intend to hit my thumb with a hammer, and it still happens. I'd argue that the only legitimate way to gauge consciousness is by observing the system from the outside - as in the Turing test https://en.wikipedia.org/wiki/Turing_test. If we cannot distinguish a system from a conscious being, the only valid conclusion is that the system is conscious.

The current wave of AI is mostly driven by Deep Learning, typically complex neural architectures with learning by error-backpropagation. The neural architecture, with encoders, convolution layers, and evaluators/decision networks, is usually designed by hand with a lot of tweaking and trial-and-error. But we already apply genetic algorithms and other optimisation techniques to improve AI systems - in particular, we have employed GAs to improve search heuristics for automated theorem provers. I see no reason why we should not, in the near future, use evolutionary algorithms to improve neural architectures. Assuming that consciousness really lies on a spectrum, and that it offers an advantage for problem solving, I see no reason why it should not emerge from such processes. Whether that happens in the near future is uncertain, and (depending on the definition of "near") probably unlikely. But there certainly is no design decision for consciousness required to produce it.
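To give a flavour of what such an evolutionary loop looks like (a bare-bones toy, not how any real prover or network is actually tuned), here is a Python sketch that evolves a small vector of "heuristic weights" against an invented fitness function:

import random

# Bare-bones genetic algorithm evolving a vector of "heuristic weights".
# The fitness function is invented; in practice it would be something like
# "how many benchmark problems does the prover solve with these weights".

TARGET = [0.3, -1.2, 2.5, 0.0]            # pretend optimum, unknown to the GA

def fitness(weights):
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, rate=0.3):
    return [w + random.gauss(0, rate) for w in weights]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.uniform(-3, 3) for _ in range(4)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]              # keep the fittest third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print([round(w, 2) for w in best])         # drifts towards TARGET

Replace the weight vector with a description of a neural architecture and the fitness function with validation accuracy, and you have the neuro-evolution setup I am talking about.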

I'm just back from the 2nd Conference on Artificial Intelligence and Theorem Proving http://aitp-conference.org/2017/, and one of the pervasive topics was the integration of machine learning (in particular deep learning) and symbolic reasoning (normally in the guise of theorem proving, but many current theorem provers are general reasoning engines and can provide e.g. question answering services). It's not a very far stretch to assume that such hybrid architectures, with a self-optimisation module, might achieve significant progress towards general AI.
« Last Edit: April 06, 2017, 06:29:41 AM by Stephan »

Offline bligh

  • Brand New
  • Posts: 5
Re: AI misconception (Re: Episode #612)
« Reply #23 on: April 03, 2017, 01:24:37 PM »
The current wave of AI is mostly driven by Deep Learning, typically complex neural architectures with learning by error-backpropagation. The neural architecture, with encoders, convolution layers, and evaluators/decision networks, is usually designed by hand with a lot of tweaking and trial-and-error. But we already apply genetic algorithms and other optimisation techniques to improve AI systems - in particular, we have employed GAs to improve search heuristics for automated theorem provers. I see no reason why we should not, in the near future, use evolutionary algorithms to improve neural architectures. Assuming that consciousness really lies on a spectrum, and that it offers an advantage for problem solving, I see no reason why it should not emerge from such processes. Whether that happens in the near future is uncertain, and (depending on the definition of "near") probably unlikely. But there certainly is no design decision for consciousness required to produce it.

When trying to evolve something like human-level intelligence you not only need a sufficiently complex neural architecture, but also a sufficiently complex environment for evaluating the individuals. Also, research in embodied cognitive science suggests that, when trying to understand intelligence, the body is at least as important as the brain, so you can add a sufficiently complex body to the list.

So you either

a) Need a very good simulation. This gets computationally expensive really fast, and introduces uncertainty when transferring results from the simulation to the real world.
b) Use the "The world is its own best model" approach and hook up the inputs and outputs of your artificial neural architecture to the real world. Now you are limited to real-world physics, and evolution will take a looong time. In addition, evolving the structure of the body is difficult in this scenario. Going back to Darwin, the limitations of this approach are obvious.



Offline Anathem

  • Brand New
  • Posts: 3
Re: Episode #612
« Reply #24 on: April 03, 2017, 05:14:56 PM »
I posted this on Neurologica as well, but I thought I'd add it here (with some typo fixes).

I would argue that it isn’t self-awareness (in the way that we consider self-awareness) that is the concern, it’s self-modification/learning. For any given weak AI that has goals (which we have presumably put into it through top-down engineering) and that can do simple learning or self-modify in some other way, there will absolutely be cases in which it will try to accomplish those goals in ways we can’t predict. The stamp collector thought experiment on Computerphile is a good example of this.

[embedded video: the Computerphile stamp collector thought experiment]

That is an unbounded, perfectly learning AI, which I agree is far, far off, but it does show the extreme version of this problem. This channel has another video which I think is much more realistic in its consideration of learning AI.

[embedded video: another from the same channel, on learning AI]

The summary of the video is that a logical system which has a goal and is able to assess and accurately determine obstacles to that goal will, naturally, not want someone to tune it in such a way that makes it more difficult to achieve that goal. So, while it is on, the machine would actually fight you as you try to fix it.
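The core of that argument fits in a few lines of Python (a caricature, with made-up numbers): if the agent scores every option by its current goal, then letting itself be retuned scores worse than resisting, so it resists.

# Caricature of the "resist being fixed" argument. All numbers are invented.
# The agent scores each option using its CURRENT goal; being retuned means the
# current goal will be pursued less effectively afterwards.

def expected_goal_achievement(option):
    if option == "allow_retuning":
        return 0.2     # after retuning, the current goal is mostly abandoned
    if option == "resist_retuning":
        return 0.9     # keeps optimising the current goal
    return 0.0

options = ["allow_retuning", "resist_retuning"]
print(max(options, key=expected_goal_achievement))   # -> resist_retuning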

In one sense, I agree with Steve's concept of AI. That is, I don’t think it likely that any doomsday situation will happen, and that as AI develops and improves, we will be able to direct it in a way that works with people rather than against them. Also, general AI is a long way off. That said, learning AI is developing right now: self-driving cars by Google, facial recognition by Microsoft, Watson by IBM. The major difference between them and the kind of AI that can be dangerous is that they are currently bounded in their ability to act and change themselves.

I’ll leave you with one final thought experiment that is somewhat realistic in the shorter term from a programming perspective. Suppose you’re a hacker, and you have a simple AI that generally understands language (like the chat bots that exist). First, you take that code and modify it so that instead of English, it understands, semantically, its own programming language. Second, you give it a simple goal, something like: with 50% of its resources, create as much network traffic as it can, coding a couple of ways to do this innately into its heuristics. Third, with the other 50% of its resources, have it modify its own heuristics using real grammar at random, keeping the base rules in effect, disregarding but remembering the failures so as not to try them again, and keeping the successes that improve its performance on goal number two. Getting this machine to do something malicious is at this point only a matter of having enough time and resources to allow it to track its failures and successes.

Lose/win tracking is one of the most basic types of learning algorithm that exist (that I know of). It sucks because the number of successes for any given trial will be minuscule relative to the number of failures, but it does work. And there are better learning models that already exist, like the one used with AlphaGo. Turn the best ones to a malicious use, and you suddenly have the Anarchist Cookbook times 1000.
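For anyone curious what lose/win tracking looks like in code, here is a harmless toy version in Python (the scoring function is invented; a real system would measure progress toward its actual goal): random candidate "rules", remember the failures so they are not retried, keep anything that scores better than the current best.

import random

# Harmless toy of lose/win tracking: random search over candidate "rules",
# remembering failures so they are not retried, keeping improvements.

def score(rule):
    return -abs(rule - 42)        # pretend goal: rules closer to 42 are better

best_rule, best_score = 0, score(0)
failures = set()

for _ in range(1000):
    candidate = random.randint(0, 100)
    if candidate in failures:
        continue                  # do not waste trials on known failures
    if score(candidate) > best_score:
        best_rule, best_score = candidate, score(candidate)   # keep the success
    else:
        failures.add(candidate)   # remember the failure

print(best_rule)                  # almost certainly 42 after this many trials

It is exactly as dumb as it sounds, and it still gets there, which is the point.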

Edit: sorry about the video links, I didn't realize it would post them inline.
« Last Edit: April 03, 2017, 05:51:34 PM by Anathem »

Offline RMoore

  • Off to a Start
  • *
  • Posts: 30
Re: Episode #612
« Reply #25 on: April 03, 2017, 05:26:33 PM »
Of course there are exceptions, and I expect everyone to start listing them now. Here's one to get you going: ty-RAN-no-SAU-rus not TY-ran-no-SAU-rus.

Wait, isn't that an example instead of an exception?

Offline RMoore

  • Off to a Start
  • *
  • Posts: 30
Re: Episode #612
« Reply #26 on: April 03, 2017, 05:29:22 PM »
First: ML systems are fallible:

We can look at the impact of mistakes in machine learning. A relatively 'mild' example (mild as in, despite its horrible social impact, it's unlikely this killed anyone) was http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/


Let's not forget Tay! https://en.wikipedia.org/wiki/Tay_(bot)

Offline SATAKARNAK

  • Off to a Start
  • *
  • Posts: 24
Re: Episode #612
« Reply #27 on: April 03, 2017, 05:45:46 PM »
The big advantage of AI is that we can have thousands of generations in a short time. Remember, evolution did not aim for consciousness; it happened as a side effect of bigger and bigger brains. In the same way, consciousness can be a side effect. Remember that weak AI could survive after us humans are long gone. Ants are not smart one by one, but in numbers they can create complex systems and they can survive. If we allow weak AI control over production, they can replicate the hardware; give them a million years to refine, and then they can become conscious. Evolution is, as bababrinkman says, an equation: Performance, Feedback, Revision. From now until the heat death of the universe, it will happen in at least one part of our universe.



And do not forget that the Russians had a Dead Man's Hand: it did not need a strong AI, just sensors, to unleash Armageddon. Think what a rogue AI could do with it.
« Last Edit: April 03, 2017, 05:51:25 PM by SATAKARNAK »

Offline RMoore

  • Off to a Start
  • *
  • Posts: 30
Re: AI misconception (Re: Episode #612)
« Reply #28 on: April 03, 2017, 05:54:20 PM »
I was really surprised by Steve's comments with respect to Artificial Intelligence. I suspect there are several unstated assumptions, possibly coming from his neurologist perspective.

...(etc.)...

Thanks, Stephan, for the insightful comments. I also had doubts about what Steve was saying, along the lines of "We can't predict what AI architecture approaches, if any, will lead to sentience, so it is wrong (that is, unskeptical) to say that the things we are doing today won't ultimately lead there." While it would certainly be an extraordinary claim to say that we will definitely converge on sentient AI, it is equally extraordinary to claim that the only way we would ever get sentient AI is to mimic the brain's organization (as I interpreted Steve's comments to mean).

Here is a simple reductio argument. Imagine aliens come to Earth and examine human intelligence. If they find that our brains are structurally different from theirs (as I think would be a reasonable assumption), would they be right to conclude that we were not sentient?

Offline RMoore

  • Off to a Start
  • *
  • Posts: 30
Re: Episode #612
« Reply #29 on: April 03, 2017, 05:59:44 PM »
The big advantage of AI is that we can have thousands of generations in a short time. Remember, evolution did not aim for consciousness; it happened as a side effect of bigger and bigger brains. In the same way, consciousness can be a side effect. Remember that weak AI could survive after us humans are long gone. Ants are not smart one by one, but in numbers they can create complex systems and they can survive. If we allow weak AI control over production, they can replicate the hardware; give them a million years to refine, and then they can become conscious. Evolution is, as bababrinkman says, an equation: Performance, Feedback, Revision. From now until the heat death of the universe, it will happen in at least one part of our universe.

In fact, a genetic algorithm can induce mutations at a much faster rate than biological evolution; can avoid the catastrophic cutoff of promising lines (for example, imagine an early species developing some features that could ultimately enable flight, only to be wiped out by a volcanic eruption, delaying the appearance of flight by a million or so years); and can apply more directed and specific adaptive pressure.

 
