Skeptics Guide to the Universe Forums

The Skeptics' Guide to the Universe => Podcast Episodes => Topic started by: Steven Novella on April 01, 2017, 11:49:29 AM

Title: Episode #612
Post by: Steven Novella on April 01, 2017, 11:49:29 AM
Forgotten Superheroes of Science: Jane Cooke Wright; News Items: Redrawing Dinosaur Clade, Bird Evolution, Elon Musk on AI, Smelling Breast Cancer; Who’s That Noisy; What’s the Word: Frisson; Science or Fiction
Title: Re: Episode #612
Post by: yrdbrd on April 01, 2017, 04:47:06 PM
Cara's mispronounced word of the week: "Jestalt."

But if we're being pedantic, we should also take Steve to task for "comprised of."
Title: Re: Episode #612
Post by: elert on April 01, 2017, 05:26:05 PM
Hadrosaur shares the same root as hadron (as in the Large Hadron Collider) — the Greek word αδρός (adros) meaning thick, robust, massive, or large. The opposite word in Greek is λεπτός, which is thin. Particle physics has the word lepton, but I am not aware of any leptosaurs in paleontology.
Title: Re: Episode #612
Post by: Elapid on April 01, 2017, 08:44:19 PM
If you really want to blow Bob's mind on the bird thing, not only do birds have lots and lots of air sacs, their respiratory system also contains two breaths at any one time.

(https://svpow.files.wordpress.com/2013/12/avian-breathing.jpg)

(http://people.eku.edu/ritchisong/554images/Air_sacs_3.png)

I'm getting flashbacks to Ornithology and having to diagram all of this on tests...

Anyway, it's the kind of system you need for an animal with really high physical demands, and the really high metabolism that requires. They don't have a diaphragm, either. It's pressure from their wing muscles on the upstroke and downstroke that cycles air through, which necessarily means that faster wing beats result in faster breathing.

On top of that, though even ratites (emu, cassowary, ostrich, etc.) have at least some hollow bones, there is a group of flighted birds that do not:  Loons.

They're so incredibly adapted for diving, their bones are solid to decrease buoyancy. Just before taking the plunge, they flatten down their flight feathers, forcing any air out from their down. Their legs are also attached to their body exactly where you'd want a motor on a boat to make it as fast and maneuverable as possible - on the very, very back. This renders them physically incapable of walking on land, which leaves them doomed if they mistake a wet parking lot or highway at night for a lake or river, unless a human renders assistance.

....Also, now I'm disappointed in myself for not at least guessing on Who's That Noisy? I thought it sounded really similar to the 'croaking' sound that certain catfish make via stridulation...
Title: Re: Episode #612
Post by: Friendly Angel on April 01, 2017, 09:40:48 PM
Cara's mispronounced word of the week: "Jestalt."



Did she also say diploDOCus?
Title: Re: Episode #612
Post by: danjam on April 02, 2017, 06:07:25 AM
Wow, Steve is shockingly ignorant of AI risk concerns. That was incredibly disappointing to hear.

Edit: After listening to the rest of the segment... wow, that was painful to listen to from all sides. An incredible amount of misconceptions from everyone. I'll just recommend for anybody interested in the topic to read Superintelligence by Nick Bostrom. At this point it's really the best introduction to the ideas of General AI and AI risk.
Title: Re: Episode #612
Post by: lucek on April 02, 2017, 07:22:31 AM
The forgotten superhero of science segment made me think today. No one person is the discoverer of anything anymore. A little hyperbolic, but it is pretty true. But this is a problem, as research leads or heads of institutes get the credit for a team's work. Really, this segment kind of perpetuates the myth of the lone maverick scientist.
Title: Re: Episode #612
Post by: lucek on April 02, 2017, 07:31:00 AM
Oh, also: wow, that Science or Fiction was soooo simple. Just laughing at ya, guys.
Title: Re: Episode #612
Post by: 2397 on April 02, 2017, 08:09:58 AM
Is the SGU consensus that cats are not a threat to birds?

Wow, Steve is shockingly ignorant of AI risk concerns. That was incredibly disappointing to hear.

To me it seems like Steve's lack of worry is based on the immediacy of it. No one's working on it now, so it's not going to happen in the foreseeable future.

I'd disagree that we won't have a drive to create sentient machines. The sexbot market alone has a lot of potential for it.
Title: Re: Episode #612
Post by: brilligtove on April 02, 2017, 10:24:55 AM
Wow, Steve is shockingly ignorant of AI risk concerns. That was incredibly disappointing to hear.

Edit: After listening to the rest of the segment... wow, that was painful to listen to from all sides. An incredible amount of misconceptions from everyone. I'll just recommend for anybody interested in the topic to read Superintelligence by Nick Bostrom. At this point it's really the best introduction to the ideas of General AI and AI risk.

I wanted to yell at them, "Of course intelligence and self-awareness cannot spontaneously manifest! We are self aware! You're arguing that we were intelligently designed FFS!!"

Since I know they are not arguing for ID, I am not clear on what position they were actually attempting to argue. If they were trying to say that we are not going to create a replica of the human mind with human intelligence anytime soon, well, sure. It is not at all apparent that self-awareness in any way depends on the physical structure that is running the program. I have no doubt that Cara and Steve have a much deeper understanding of how the brain works than most people who are considering the chances of an artificial general intelligence being created. But if an AGI doesn't have to mimic human information processing substrates to exhibit emergent behaviors and properties such as self-awareness - which seems likely - then AGIs could emerge from increasingly complex ANIs.

I personally know someone who is working with a neural network whose number of nodes and connection topology are getting into the range of the connectedness of human brains. I'm reasonably certain that particular system will not have the capacity to be more than a narrow artificial intelligence, but it has the kinds of characteristics that could be the foundation for an artificial general intelligence.
Title: Re: Episode #612
Post by: bligh on April 02, 2017, 10:39:16 AM
Wow, Steve is shockingly ignorant of AI risk concerns. That was incredibly disappointing to hear.

Edit: After listening to the rest of the segment... wow, that was painful to listen to from all sides. An incredible amount of misconceptions from everyone. I'll just recommend for anybody interested in the topic to read Superintelligence by Nick Bostrom. At this point it's really the best introduction to the ideas of General AI and AI risk.

As a former AI researcher I have to agree with Steve on this issue.  The singularity argument as put forward by Kurzweil and like-minded futurists is pure speculation in my opinion.  Using Moore's law to extrapolate when AI will surpass human-level intelligence suffers from the fundamental flaw that it relies on the implicit assumption that given enough transistors everything else will follow...
Title: Re: Episode #612
Post by: brilligtove on April 02, 2017, 10:40:19 AM
If you really want to blow Bob's mind on the bird thing, not only do birds have lots and lots of air sacs, their respiratory system also contains two breaths at any one time.

(https://svpow.files.wordpress.com/2013/12/avian-breathing.jpg)

(http://people.eku.edu/ritchisong/554images/Air_sacs_3.png)

I'm getting flashbacks to Ornithology and having to diagram all of this on tests...

Anyway, it's the kind of system you need for an animal with really high physical demands, and the really high metabolism that requires. They don't have a diaphragm, either. It's pressure from their wing muscles on the upstroke and downstroke that cycles air through, which necessarily means that faster wing beats result in faster breathing.

On top of that, though even ratites (emu, cassowary, ostrich, etc.) have at least some hollow bones, there is a group of flighted birds that do not:  Loons.

They're so incredibly adapted for diving, their bones are solid to decrease buoyancy. Just before taking the plunge, they flatten down their flight feathers, forcing any air out from their down. Their legs are also attached to their body exactly where you'd want a motor on a boat to make it as fast and maneuverable as possible - on the very, very back. This renders them physically incapable of walking on land, which leaves them doomed if they mistake a wet parking lot or highway at night for a lake or river, unless a human renders assistance.

....Also, now I'm disappointed in myself for not at least guessing on Who's That Noisy? I thought it sounded really similar to the 'croaking' sound that certain catfish make via stridulation...

I found these videos helpful in understanding respiratory systems that are more complicated than bags that empty and fill.

https://youtu.be/kWMmyVu1ueY

https://youtu.be/6fOMswLAyy4
Title: Re: Episode #612
Post by: brilligtove on April 02, 2017, 12:35:06 PM
Cara's comments about frisson reminded me of something, so I made a poll. (http://sguforums.com/index.php?topic=48473.msg9488869#msg9488869)
Title: Re: Episode #612
Post by: Sawyer on April 02, 2017, 02:33:46 PM
Cara seems to have taken over Steve's role of "mentioning books or authors while I'm in the process of reading them".  I haven't read The Reluctant Mr. Darwin but was excited to hear her reference Quammen's books.

She's also taken over the role of "leading 2017 Science or Fiction".   :cara:


Sci or Fiction stats:
https://docs.google.com/spreadsheets/d/1IVvA030ZQmU8R7LhzRXmhBWA2AYxrfVpsbkQSIXY4dI/edit#gid=0
Title: Re: Episode #612
Post by: daniel1948 on April 02, 2017, 04:07:22 PM
I got SoF wrong again. Of course I "knew" that some dinosaurs have a second brain near the base of their spine. Well, I'm 68 years old and learned about dinosaurs when I was a teen. I never heard that the textbooks had it wrong.
Title: Re: Episode #612
Post by: daniel1948 on April 02, 2017, 04:20:39 PM
I personally hold the opinion, with no evidence to back it up, that true cognitive A.I. will never exist. Cheap software, running on the simplest of computers, can beat most human chess players. Self-driving cars are just around the corner. All sorts of expert systems can do lots of amazing stuff, and will do more and more. But true independent thought, either with or without self-awareness, I regard as a pipe dream.

And I agree with Steve that there's no need for computers to be self-aware.

HOWEVER, I also have no doubt that people will try to create true cognitive A.I. and self-awareness. Why do people climb Everest? Because it's there. Why do people rob banks? Because there's money in them. People will try to create A.I. and computer self-awareness just to prove that they can. Just because the idea is there. We could ban it in the U.S., and they will go elsewhere. You can't stop technology. I personally think they will fail, and I hope I'm right. But Steve is being extremely naive if he thinks that people won't try merely because it would serve no necessary purpose.

Think of what a boon a fully-cognitive A.I. would be to scammers!

Isaac Asimov's first law of robotics was that a robot shall not harm a human. The first true robots were self-guided missiles whose only purpose is to kill people. Criminals and the military would both give their eye teeth for a more advanced A.I., the former so they can take your stuff, the latter so they can kill people. If I am wrong and the technology is possible, people will make it.

Even if it is possible, I'm not worried about it happening in my lifetime, but that's mainly because I'm an old man and my body is already falling apart.
Title: Re: Episode #612
Post by: 2397 on April 02, 2017, 04:38:36 PM
I got SoF wrong again. Of course I "knew" that some dinosaurs have a second brain near the base of their spine. Well, I'm 68 years old and learned about dinosaurs when I was a teen. I never heard that the textbooks had it wrong.

Don't humans have a second brain in the gut? https://www.scientificamerican.com/article/gut-second-brain/

Or is that just a metaphor? If there is a bunch of neurons together there that function independently from the brain, and if it's a greater number of neurons than in a lot of species we recognize as having brains, doesn't it count as another brain?
Title: Re: Episode #612
Post by: arthwollipot on April 03, 2017, 02:10:13 AM
Cara's mispronounced word of the week: "Jestalt."



Did she also say diploDOCus?

She also said (twice) psitt-AC-o-SAU-rus.

It's a common rule of thumb in English that the emphasis goes on the vowel preceding a double consonant. It's not always true, but it's a good first approximation. Hence, PSITT-a-co-SAU-rus. Also, it's extremely common for there to be two unemphasised syllables together. Never more than two, but two is usually better than one in words that have four or more syllables. Hence, dip-LOD-o-cus not DIP-lo-DOC-us.

Of course there are exceptions, and I expect everyone to start listing them now. Here's one to get you going: ty-RAN-no-SAU-rus not TY-ran-no-SAU-rus.
Title: Re: Episode #612
Post by: yarbelk on April 03, 2017, 02:29:27 AM
I think an important point in the AI discussion was missed.  It was implied that there is no threat to humans from non-sentient AIs.  I posit that these could be as big a threat to humanity as some evil Roko's Basilisk http://rationalwiki.org/wiki/Roko%27s_basilisk (http://rationalwiki.org/wiki/Roko%27s_basilisk) style AI.  At least you could argue with a sentient AI.  I will refer to these by the more generic term: Machine Learning (ML) expert systems.  Also - forgive the meandering nature of this - I'm writing it at the end of a lunch break.

First: ML systems are fallible:

We can look at the impact of mistakes in machine learning. A relatively 'mild' example (mild as in, despite its horrible social impact, it's unlikely this killed anyone) was http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/ (http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/)

Second: People are fallible:

As a software engineer (aka, a professional lazy person who writes the software your life runs on), if an ML service exists to solve one of my problems, I will use that service, and probably trust that it works within its published parameters.

What would the impact of similar mistakes be as ML starts to take a more prominent role in systems which have higher risk, such as medical diagnosis and industrial automation?

Third: Distributed systems are Hard, non-deterministic distributed systems are scary

The problem is that with these systems, especially if they are consumed as a distributed service, you can end up with difficult-to-understand non-linear feedback loops and impossible-to-predict behaviour.  Anecdotally, from discussions with Google engineers, no one at Google can predict how Google search will respond to changes in its code.  Imagine this non-determinism impacting every networked system, from medical systems to self-driving cars.  It wouldn't take a malicious self-aware AI to cause huge damage, just an idiot programmer deploying test code to a ubiquitous ML service.
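To make that feedback-loop point concrete, here is a deliberately toy Python sketch. Nothing in it corresponds to a real system - the service names and the numbers are invented - it just shows how two components that each look locally reasonable can still run away once they consume each other's outputs:

def recommender(engagement):
    # Hypothetical service A: promote content slightly more aggressively
    # the more engagement it observes.
    return 1.10 * engagement

def ad_bidder(promotion):
    # Hypothetical service B: bid slightly higher the more content is promoted.
    return 1.05 * promotion

signal = 1.0
for step in range(20):
    signal = ad_bidder(recommender(signal))   # each feeds the other
    print(step, round(signal, 2))

# Each component looks harmless in isolation (a 5-10% nudge), but the loop
# compounds to roughly an 18x amplification after 20 rounds - a crude picture
# of the non-linear feedback described above.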

An interesting exploration of this kind of threat is Peter Watts' Blindsight:
https://en.wikipedia.org/wiki/Blindsight_(Watts_novel) (https://en.wikipedia.org/wiki/Blindsight_(Watts_novel))
Title: Re: Episode #612
Post by: Skip Nordenholz on April 03, 2017, 05:03:48 AM
Wow, Steve is shockingly ignorant of AI risk concerns. That was incredibly disappointing to hear.

Though I agree with him on the timeline for when this is going to be an issue, I did have a big problem with a couple of things he said. I think the issue is that we don't really have a good definition of consciousness. At a gut level we think of consciousness as being like our own, and I agree that that may never happen - what would be the point? But consciousness is not a goal in itself: humans are not conscious for the purpose of being conscious; consciousness exists for the purpose of solving problems, and any AI is going to have its own version of it. Machines now are already on a spectrum towards this: a poker-playing program desires to maximise the outcome of its plays, and it has an internal state and a history of how the other players have played that it uses to make future decisions. Real-world AI consciousness will be very alien to us, and even compared to other animals, so much so that people may not be able to agree whether it counts as consciousness; the goals are completely different, but that is irrelevant to the risk. I think there are other reasons why the risk is overstated, though.
Title: Re: Episode #612
Post by: daniel1948 on April 03, 2017, 09:57:20 AM
Technology has risks. Technological advance is inevitable. Ergo, the human race is doomed. On the bright side, everyone else will be better off after we're gone.
Title: Re: Episode #612
Post by: Swagomatic on April 03, 2017, 11:19:38 AM
I recently attended a talk at ASU:  https://origins.asu.edu/events/future-of-artificial-intelligence


It was organized by Lawrence Krauss and the ASU Origins Project. The consensus of the assembled group seemed to be that the security of the AI was the larger threat to humanity. No one seemed greatly concerned with the "rise of the machines."  The video on the left side is Part One, the right-side video is the Q&A following the talk.


Edit: Fixed link
Title: AI misconception (Re: Episode #612)
Post by: Stephan on April 03, 2017, 12:38:51 PM
I was really surprised by Steve's comments with respect to Artificial Intelligence. I suspect there are several unstated assumptions, possibly coming from his neurologist perspective.

First, the claim that AIs don't make decisions is just baffling.  Of course, AIs make decisions all the time. Amazon's AI decides which books to suggest to you. Tesla's AI decides whether to brake or change lanes or run off the road. Google's AI decides which Korean translation to suggest for a given English sentence. Maybe there is an unstated "conscious" qualifier to the decisions an AI is not supposed to make?

I found Bob's simple example excellent - give an AI an optimisation goal (e.g. eliminate spam) and a heap of world knowledge, and it may well arrive at the solution to eliminate one of the root causes of spam, i.e. humans. Coming up with such plans has been standard for planning systems since the 1980s, without any of the current generation of Deep Learning (although they could not cope with the complexity of current common sense knowledge bases). The idea of solving a problem by trying to reduce it to (hopefully simpler) subproblems is the basic computation engine of PROLOG, one of the classic AI languages.
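To illustrate that problem-reduction idea in a few lines (this is Python rather than Prolog, and the "knowledge base" below is invented purely for the example), a naive backward-chaining search that only knows its goal and its rules will happily settle on the drastic plan unless something explicitly forbids it:

rules = {
    "no_spam": [["no_spam_senders"], ["perfect_filter"]],
    "no_spam_senders": [["no_humans_online"]],       # spam is sent by humans...
    "no_humans_online": [["disable_all_accounts"]],
    "disable_all_accounts": [["ACTION: lock every account"]],
    "perfect_filter": [],                             # no known way to achieve this
}

def achieve(goal, depth=0):
    # Naively expand a goal into the first chain of sub-goals that bottoms out
    # in an ACTION. A real planner would weigh costs and constraints; this one
    # just follows whatever rules it was given, which is the point.
    print("  " * depth + goal)
    for subgoals in rules.get(goal, []):
        for sub in subgoals:
            if sub.startswith("ACTION"):
                print("  " * (depth + 1) + sub)
                return True
            if achieve(sub, depth + 1):
                return True
    return False

achieve("no_spam")
# With no "don't harm humans" constraint anywhere in the rules, the plan it
# prints is the drastic one - a cartoon version of Bob's spam example.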

I also have the impression that Steve implicitly assumes that consciousness requires an architecture similar to the human brain - massively parallel, and possibly even using processes as in artificial neural networks. But that is, while not impossible, another unstated and quite far-fetched assumption. The Church-Turing Thesis  https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis (https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) gives us good reasons to believe that all computation paradigms are, at least in principle, equally powerful, and that all reasonable encodings result in similar (up to a polynomial factor ;-) computational complexity (that last one may give me away as a theoretical computer scientist ;-). So it is quite plausible that a neural network is not the only way to consciousness. Moreover, we already have quite massive parallel processing engines in our GPUs, which are increasingly misused (or re-purposed) as compute engines. In the case of humans, the brain first developed to react to complex external stimuli, so its original purpose was pattern recognition and reaction, for which neural networks are excellent. Evolution always works with the material at hand, so that's why our way to consciousness led via a biological neural net. But it would need quite an argument to claim that that is the only way to consciousness.

I'd also argue that "consciousness" or "self-awareness" are red herrings. Neither of the two is a feature that is either designed into the system or not. Indeed, I would argue that both are emergent properties along a spectrum. The argument that "I didn't build it consciously, therefore it's not conscious" is simply false. I never intend to hit my thumb with a hammer, and it still happens. I'd argue that the only legitimate way to gauge consciousness is by observing the system from the outside - as in the Turing test https://en.wikipedia.org/wiki/Turing_test (https://en.wikipedia.org/wiki/Turing_test). If we cannot distinguish a system from a conscious being, the only valid conclusion is that the system is conscious.

The current wave of AI is mostly driven by Deep Learning, typically complex neural architectures with learning by error-backpropagation. The neural architecture, with encoders, convolution layers, and evaluators/decision networks is usually designed by hand with a lot of tweaking and trial-and-error. But we already apply genetic algorithms and other optimisation techniques to improve AI systems - in particular, we have employed GAs to improve search heuristics for automated theorem provers. I see no reason why we should not, in the near future, use evolutionary algorithms to improve neural architectures. Assuming that consciousness really lies on a spectrum, and that it offers an advantage for problem solving, I see no reason why it should not emerge from such processes. Whether that happens in the near future is uncertain, and (depending on the definition of "near") probably unlikely. But there certainly is no design decision for consciousness required to produce it.
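For what it's worth, here is a minimal sketch of what "a GA improving a neural architecture" can look like, with everything (the architecture encoding, the fitness function, the mutation rates) invented for illustration; real neuro-evolution pipelines are of course far more involved:

import random

def fitness(layers):
    # Hypothetical stand-in for "train the network and measure validation
    # accuracy": prefer about 100 total units spread over few layers.
    return -abs(sum(layers) - 100) - 5 * len(layers)

def mutate(layers):
    # Perturb each layer width a little; occasionally drop or add a layer.
    child = [max(1, w + random.randint(-8, 8)) for w in layers]
    if random.random() < 0.2 and len(child) > 1:
        child.pop()
    elif random.random() < 0.2:
        child.append(random.randint(1, 64))
    return child

population = [[random.randint(1, 64) for _ in range(3)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                                   # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(population, key=fitness))   # e.g. a couple of layers summing to ~100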

I'm just back from the 2nd Conference on Artificial Intelligence and Theorem Proving http://aitp-conference.org/2017/ (http://aitp-conference.org/2017/), and one of the pervasive topics was the integration of machine learning (in particular deep learning) and symbolic reasoning (normally in the guise of theorem proving, but many current theorem provers are general reasoning engines and can provide e.g. question answering services). It's not a very far stretch to assume that such hybrid architectures, with a self-optimisation module, might achieve significant progress towards general AI.
Title: Re: AI misconception (Re: Episode #612)
Post by: bligh on April 03, 2017, 01:24:37 PM
The current wave of AI is mostly driven by Deep Learning, typically complex neural architectures with learning by error-backpropagation. The neural architecture, with encoders, convolution layers, and evaluators/decision networks is usually designed by hand with a lot of tweaking and trial-and-error. But we already apply genetic algorithms and other optimisation techniques to improve AI systems - in particular, we have employed GAs to improve search heuristics for automated theorem provers. I see no reason why we should not, in the near future, use evolutionary algorithms to improve neural architectures. Assuming that consciousness really lies on a spectrum, and that it offers an advantage for problem solving, I see no reason why it should not emerge from such processes. Whether that happens in the near future is uncertain, and (depending on the definition of "near") probably unlikely. But there certainly is no design decision for consciousness required to produce it.

When trying to evolve something like human-level intelligence you don't only need a sufficiently complex neural architecture, but also a sufficiently complex environment for evaluating the individuals. Also, research in embodied cognitive science suggests that, when trying to understand intelligence, the body is at least as important as the brain, so you can add a sufficiently complex body to the list.

So you either

a) Need a very good simulation. This gets computationally expensive really fast, and introduces uncertainty when transferring results from the simulation to the real world.
b) Use the "The world is its own best model" approach and hook up the inputs and outputs of your artificial neural architecture to the real world. Now you are limited to real-world physics, and evolution will take a looong time. In addition, evolving the structure of the body is difficult in this scenario. Going back to Darwin, the limitations of this approach are obvious.


Title: Re: Episode #612
Post by: Anathem on April 03, 2017, 05:14:56 PM
I posted this on Neurologica as well, but I thought I'd add it here (with some typo fixes).

I would argue that it isn't self-awareness (in the way that we consider self-awareness) that is the concern, it's self-modification/learning. For any given weak AI that has goals (which we have presumably put in through top-down engineering) and that can do simple learning or self-modify in some other way, there will absolutely be cases in which it will try to accomplish those goals in ways we can't predict. The stamp collector thought experiment on Computerphile is a good example of this.

https://youtu.be/tcdVC4e6EV4

That is an unbounded, perfectly learning AI, which I agree is far, far off, but it does show the extreme version of this problem. This channel has another video which I think is much more realistic in its consideration of learning AI.

https://youtu.be/4l7Is6vOAOA

The summary of the video is that a logical system which has a goal and is able to assess and accurately determine obstacles to that goal will, naturally, not want someone to tune it in such a way that makes it more difficult to achieve that goal. So, while it is on, the machine would actually fight you when you try to fix it.

In one sense, I agree with Steve's concept of AI. That is, I don't think it likely any doomsday situation will happen; as AI develops and improves, we will be able to direct it in a way that works with people rather than against them. Also, general AI is a long way off. That said, learning AI is developing right now: self-driving cars by Google, facial recognition by Microsoft, Watson by IBM. The major difference between them and the kind of AI that can be dangerous is that they are currently bounded in their ability to act and change themselves.

I'll leave you with one final thought experiment that is somewhat realistic in the shorter term from a programming perspective. Suppose you're a hacker, and you have a simple AI that generally understands language (like the chat bots that exist). First, you take that code and modify it so that instead of English, it understands, semantically, its own programming language. Second, you give it any simple goal, something like: with 50% of its resources, create as much network traffic as you can, coding a couple of ways to do this innately into its heuristics. Third, with the other 50% of its resources, have it modify its own heuristics using real grammar at random, keeping the base rules in effect, disregarding but remembering the failures so as not to try them again, and keeping the successes whereby it improves at rule number two. To get this machine to do something malicious is at this point only a matter of having enough time and resources to allow it to track its failures and successes.

Lose/win tracking is one of the most basic types of learning algorithm that exists (that I know of). It sucks because the number of successes for any given trial will be minuscule relative to the number of failures, but it does work. And there are better learning models that already exist, like the one used with AlphaGo. Turn the best ones to a malicious use, and you suddenly have the Anarchist Cookbook times 1000.
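Here is roughly what that lose/win tracking looks like as code, a minimal Python sketch where the "goal" is an invented toy (match a hidden bit pattern) standing in for whatever the hypothetical agent is actually after:

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]    # the hidden goal; the learner only sees a score

def score(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET))

current = [0] * len(TARGET)
failures = set()                      # remembered dead ends, never retried

while score(current) < len(TARGET):
    trial = current[:]
    trial[random.randrange(len(trial))] ^= 1     # random self-modification
    if tuple(trial) in failures:
        continue                                 # don't repeat a known failure
    if score(trial) > score(current):
        current = trial                          # keep the success
    else:
        failures.add(tuple(trial))               # log the failure

print(current, len(failures), "failures along the way")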

Edit: sorry about the video links, I didn't realize it would post them inline.
Title: Re: Episode #612
Post by: RMoore on April 03, 2017, 05:26:33 PM
Of course there are exceptions, and I expect everyone to start listing them now. Here's one to get you going: ty-RAN-no-SAU-rus not TY-ran-no-SAU-rus.

Wait, isn't that an example instead of an exception?
Title: Re: Episode #612
Post by: RMoore on April 03, 2017, 05:29:22 PM
First: ML systems are fallible:

We can look at the impact of mistakes in machine learning. A relatively 'mild' example (mild as in, despite its horrible social impact, it's unlikely this killed anyone) was http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/ (http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/)


Let's not forget Tay! https://en.wikipedia.org/wiki/Tay_(bot) (https://en.wikipedia.org/wiki/Tay_(bot))
Title: Re: Episode #612
Post by: SATAKARNAK on April 03, 2017, 05:45:46 PM
The big advantage of AI is that we can have thousands of generations in a short time. Remember, evolution did not aim for consciousness; it happened as a side effect of bigger and bigger brains. In the same way, consciousness can be a side effect. Remember that weak AI could survive after us humans are long gone. Ants are not smart one by one, but in numbers they can create complex systems and they can survive. If we allow weak AI control over production, they can replicate the hardware; give them a million years to refine it and then they can become conscious. Evolution is, like Baba Brinkman says, an equation: Performance, Feedback, Revision. From now until the heat death of the universe it will happen in at least one part of our universe.



And do not forget that the Russians had a "dead man's hand" system; it did not need a strong AI, just sensors, to unleash Armageddon. Think what a rogue AI could do with it.
Title: Re: AI misconception (Re: Episode #612)
Post by: RMoore on April 03, 2017, 05:54:20 PM
I was really surprised by Steve's comments with respect to Artificial Intelligence. I suspect there are several unstated assumptions, possibly coming from his neurologist perspective.

...(etc.)...

Thanks, Stephan, for the insightful comments. I also had doubts about what Steve was saying, along the lines of "We can't predict what AI architecture approaches, if any, will lead to sentience, so it is wrong (that is, unskeptical) to say that the things we are doing today won't ultimately lead there." While it would certainly be an extraordinary claim to say that we will definitely converge on sentient AI, it is equally extraordinary to claim that the only way we would ever get sentient AI is to mimic the brain's organization (as I interpreted Steve's comments to mean).

Here is a simple reductio argument. Imagine aliens come to Earth and examine human intelligence. If they find that our brains are structurally different from theirs (as I think would be a reasonable assumption), would they be right to conclude that we were not sentient?
Title: Re: Episode #612
Post by: RMoore on April 03, 2017, 05:59:44 PM
The big advantage of AI is that we can have thousands of generations in a short time. Remember, evolution did not aim for consciousness; it happened as a side effect of bigger and bigger brains. In the same way, consciousness can be a side effect. Remember that weak AI could survive after us humans are long gone. Ants are not smart one by one, but in numbers they can create complex systems and they can survive. If we allow weak AI control over production, they can replicate the hardware; give them a million years to refine it and then they can become conscious. Evolution is, like Baba Brinkman says, an equation: Performance, Feedback, Revision. From now until the heat death of the universe it will happen in at least one part of our universe.

In fact, a genetic algorithm can induce mutations at a much faster rate than biological evolution; can avoid catastrophic cutoff of promising lines (for example, imagine an early species developing some features that could ultimately enable flight, only to be wiped to extinction by a volcanic eruption and delaying the appearance of flight by a million or so years); and can apply more directed and specific adaptive pressure.
Title: Re: Episode #612
Post by: arthwollipot on April 04, 2017, 01:28:38 AM
Of course there are exceptions, and I expect everyone to start listing them now. Here's one to get you going: ty-RAN-no-SAU-rus not TY-ran-no-SAU-rus.

Wait, isn't that an example instead of an exception?

It's an exception to the two-unemphasised-syllables rule of thumb that I was describing, so no.
Title: Re: Episode #612
Post by: brilligtove on April 04, 2017, 02:24:56 AM
Some of you know I'm running a tabletop RPG via videoconference. Recently the subject of AIs has become quite important to the development of that world. Between this episode and the latest Stratechery, some ideas began to solidify.

While I think Asimov's laws of robotics are almost useless for several reasons, their essence can be rephrased in a way that could allow coherent action from an AI: ensure the long-term survival of humanity and humans (in that order).

If the first AGI has this concept at its core there can't be a Skynet - at least not easily. (The core ambiguity remaining is what "humanity" means.) Any maleficent AI would be confronted immediately and, if at all possible, shut down.

...our collective thinking is summarized in the Seven AIs (https://rpg.juliansammy.com/index.php?title=Seven_AIs) page of our game.
Title: Re: Episode #612
Post by: stuque on April 04, 2017, 03:16:09 AM
I thought the AI discussion was disappointing. It's a shame Cara and Steve
didn't engage with the topic, as many of the examples of AI safety are fun and
thought-provoking. Instead, it was like hearing two ornithologists critique
the field of aerodynamics, arguing that "true" artificial flight won't be
achieved until we completely understand birds and start to build planes with
wings that flap and have feathers.

Cara's claim that intelligence is more than the sum of its parts is not
something I really understand: it sounds like mysticism to me. What is the
extra stuff that's not in the parts?

Steve seems to believe that self-awareness is somehow the obvious missing
ingredient, but, as he points out, it is not so obvious to lots and lots of AI
experts. Maybe these AI experts are on to something? Maybe self-awareness and
consciousness are not related to intelligence in any significant way? Maybe
they are just side-effects of biology? Maybe they actually hinder us instead
of help us? Or maybe they are just not useful concepts --- perhaps they are
philosophical umbrella terms that bundle various ideas about minds and brains?

I think computation is a vastly more useful idea than "self-awareness",
"consciousness", or "mind" --- terms that are ripe with confusion. At least,
it helps lay bare the "meat is magic" fallacy, i.e. the argument that human
brains are special because they've been blessed with self-awareness, or
consciousness, or intentionality, or a soul, etc.
Title: Re: Episode #612
Post by: Tentacle Number 6 on April 04, 2017, 06:49:04 AM
I think the one main point where I disagree with Steve and Cara is their claim that artificial self-awareness would require deliberate effort and that there's not really any incentive to do so.

I think creating an AI that is self-learning is the Holy Grail of computing. It takes SO MUCH effort to create, say, a Go-playing AI that we'll basically never be able to use AI for the truly important questions. Questions like "how will law X affect the economy" or "how can we eliminate poverty". These aren't questions where we can just write a cool algorithm and, job done. We basically need to be able to throw data at the computer and say "figure this out". To be able to do that we basically have to make the AI able to learn.

And once it can learn, we have basically no idea where it will lead. Maybe it'll become self-aware, maybe not, but my point is there most definitely is an enormous incentive to try and make AI able to outgrow the slow, error-prone meatbags that write the code.
Title: Re: Episode #612
Post by: daniel1948 on April 04, 2017, 09:38:46 AM
Some of you know I'm running a tabletop RPG via videoconference. Recently the subject of AIs has become quite important to the development of that world. Between this episode and the latest Stratechery, some ideas began to solidify.

While I think Asimov's laws of robotics are almost useless for several reasons, their essence can be rephrased in a way that could allow coherent action from an AI: ensure the long-term survival of humanity and humans (in that order).

If the first AGI has this concept at its core there can't be a Skynet - at least not easily. (The core ambiguity remaining is what "humanity" means.) Any maleficent AI would be confronted immediately and, if at all possible, shut down.

...our collective thinking is summarized in the Seven AIs (https://rpg.juliansammy.com/index.php?title=Seven_AIs) page of our game.

Ensure the long-term survival of humanity. ----> Figure out how to keep a person alive forever, put one person in a plexiglas case with feeding and breathing tubes and tubes for the necessary drugs, and kill everyone else so nobody interferes with the project.

Oops. Not what we hoped it would do.

I don't actually think this will happen. I think that more down-to-Earth A.I. will help the military design and deploy ever-more-efficient weapons and we'll wipe ourselves out without the need for our A.I. to have any volition of its own.

Wasn't Skynet supposed to prevent us from wiping ourselves out, and it decided the best way to do that was to do it for us? Either way, we're toast. Even non-volitional A.I. (A.I. that really just does what we want it to) will still help us to wipe ourselves out if we don't figure out how to get along with one another. And getting along with one another seems to be something we've never figured out how to do. And IMO, as long as there is religion, we never will.

A.I., at whatever level we manage to develop it, is just a tool. And right now the military, whose vocation is the science of killing people, has the most money and resources to take it the furthest and use it to its limits.
Title: Re: Episode #612
Post by: brilligtove on April 04, 2017, 10:25:09 AM
Some of you know I'm running a tabletop RPG via videoconference. Recently the subject of AIs has become quite important to the development of that world. Between this episode and the latest Stratechery, some ideas began to solidify.

While I think Asimov's laws of robotics are almost useless for several reasons, their essence can be rephrased in a way that could allow coherent action from an AI: ensure the long-term survival of humanity and humans (in that order).

If the first AGI has this concept at its core there can't be a Skynet - at least not easily. (The core ambiguity remaining is what "humanity" means.) Any maleficent AI would be confronted immediately and, if at all possible, shut down.

...our collective thinking is summarized in the Seven AIs (https://rpg.juliansammy.com/index.php?title=Seven_AIs) page of our game.

Ensure the long-term survival of humanity. ----> Figure out how to keep a person alive forever, put one person in a plexiglas case with feeding and breathing tubes and tubes for the necessary drugs, and kill everyone else so nobody interferes with the project.

Oops. Not what we hoped it would do.

I don't actually think this will happen. I think that more down-to-Earth A.I. will help the military design and deploy ever-more-efficient weapons and we'll wipe ourselves out without the need for our A.I. to have any volition of its own.

Wasn't Skynet supposed to prevent us from wiping ourselves out, and it decided the best way to do that was to do it for us? Either way, we're toast. Even non-volitional A.I. (A.I. that really just does what we want it to) will still help us to wipe ourselves out if we don't figure out how to get along with one another. And getting along with one another seems to be something we've never figured out how to do. And IMO, as long as there is religion, we never will.

A.I., at whatever level we manage to develop it, is just a tool. And right now the military, whose vocation is the science of killing people, has the most money and resources to take it the furthest and use it to its limits.

...of humans and humanity, not make a meat puppet that could be accidentally destroyed by a natural disaster at any time. There are many assumptions built into this kind of moral code, including what humans think it means to be human and what humans think humanity is. Dogs and cats do not regularly violate their core program to serve and support humans. An AI with that kind of core motivator and a robust understanding of psychology, sociology, and actual human behaviours might do something that kills a lot of people to drastically increase the odds of long-term human survival. (In our future game-history they think an IoT-based AI may have killed 10 million Americans and left 100 million homeless in an effort to stave off a full-scale world war, for example.)

The assumption that AI=Skynet is another instance of the "meat is magic" fallacy, I think.
Title: Re: Episode #612
Post by: brilligtove on April 04, 2017, 10:25:50 AM
I thought the AI discussion was disappointing. It's a shame Cara and Steve
didn't engage with the topic, as many of the examples of AI safety are fun and
thought-provoking. Instead, it was like hearing two ornithologists critique
the field of aerodynamics, arguing that "true" artificial flight won't be
achieved until we completely understand birds and start to build planes with
wings that flap and have feathers.

Cara's claim that intelligence is more than the sum of its parts is not
something I really understand: it sounds like mysticism to me. What is the
extra stuff that's not in the parts?

Steve seems to believe that self-awareness is somehow the obvious missing
ingredient, but, as he points out, it is not so obvious to lots and lots of AI
experts. Maybe these AI experts are on to something? Maybe self-awareness and
consciousness are not related to intelligence in any significant way? Maybe
they are just side-effects of biology? Maybe they actually hinder us instead
of help us? Or maybe they are just not useful concepts --- perhaps they are
philosophical umbrella terms that bundle various ideas about minds and brains?

I think computation is a vastly more useful idea than "self-awareness",
"consciousness", or "mind" --- terms that are ripe with confusion. At least,
it helps lay bare the "meat is magic" fallacy, i.e. the argument that human
brains are special because they've been blessed with self-awareness, or
consciousness, or intentionality, or a soul, etc.

Welcome aboard. Insightful first post!
Title: Re: Episode #612
Post by: bligh on April 04, 2017, 03:21:01 PM
A recurring theme in this thread seems to be surprise and disappointment with Cara and Steve's contributions to the AI segment.

What I find noteworthy, however, is the brothers' apparent fascination with Kurzweil, without any hint of critical thinking. Many, including myself, consider Kurzweil to be a pseudoscientist. I find it very interesting indeed that some (many?) sceptics apparently are susceptible to his ideas and have a blind spot regarding "the singularity".

See for example what Massimo Pigliucci writes about Kurzweil on the Rationally Speaking blog:

http://rationallyspeaking.blogspot.com/2011/04/ray-kurzweil-and-singularity-visionary.html (http://rationallyspeaking.blogspot.com/2011/04/ray-kurzweil-and-singularity-visionary.html)

Title: Re: Episode #612
Post by: 2397 on April 04, 2017, 06:26:08 PM
Ray Kurzweil? My impression was that their impression of him has been waning. And it started with mentioning all the supplement taking as unfounded.
Title: Re: Episode #612
Post by: Fast Eddie B on April 04, 2017, 09:12:48 PM
To further establish my nerd creds...

As a child, I named my parakeet Archie, an homage to archaeopteryx!

Title: Re: Episode #612
Post by: arthwollipot on April 04, 2017, 11:27:00 PM
Cara's claim that intelligence is more than the sum of its parts is not
something I really understand: it sounds like mysticism to me. What is the
extra stuff that's not in the parts?
This made me stop and go "wait, what?" as well.
Title: Re: Episode #612
Post by: daniel1948 on April 05, 2017, 09:11:38 AM
To further establish my nerd creds...

As a child, I named my parakeet Archie, an homage to archaeopteryx!


For me, "Archie" brings to mind Archy and Mehitabel.

Mehitabel's Song: (from memory, so possibly not entirely correct)

i once was an innocent kit
with a ribbon my neck to fit
and bells tied onto it
and what the hell what the hell

and a maltese tom came by
with a come hither look in his eye
and a song that soared to the sky
and what the hell what the hell

and i followed adown the street
the pad of his rhythmical feet
o permit me again to repeat
what the hell

Title: Re: Episode #612
Post by: gebobs on April 05, 2017, 11:11:48 AM

I'd disagree that we won't have a drive to create sentient machines. The sexbot market alone has a lot of potential for it.


The current technology doesn't require sentience.

"You can lead a whore to culture, but you can't make her think."
- Dorothy Parker
Title: Re: Episode #612
Post by: Ah.hell on April 05, 2017, 11:23:14 AM
I think Steve underrates the "Let's see if we can do it" factor in the self-aware AI conversation.  I agree, it won't be for a while yet, but someone somewhere will do it just to be the first.
Title: Re: Episode #612
Post by: gebobs on April 05, 2017, 12:07:40 PM
I think Steve underrates the "Let's see if we can do it" factor in the self-aware AI conversation.  I agree, it won't be for a while yet, but someone somewhere will do it just to be the first.

I think the climbing Mt. Everest example ("because it's there", "let's see if we can do it", etc.) may not really be apt. To get there, all they had to do was go a bit further than the guy before. The gulf between non-sentient and sentient AI seems to be vast, perhaps too vast for any motivation to overcome. Surely some of what we dream will be beyond our endeavor. If Everest were twice as high, no one would be able to climb it.
Title: Re: Episode #612
Post by: 2397 on April 05, 2017, 12:13:59 PM

I'd disagree that we won't have a drive to create sentient machines. The sexbot market alone has a lot of potential for it.


The current technology doesn't require sentience.

"You can lead a whore to culture, but you can't make her think."
- Dorothy Parker

It's not about what's required, it's about the size and the complexity of the web of different interests.
Title: Re: Episode #612
Post by: gebobs on April 05, 2017, 12:18:47 PM
It's not about what's required, it's about the size and the complexity of the web of different interests.


(http://gyazo.com/1d46318db61799cddc1c9b251aaf6c16.png)
Title: Re: Episode #612
Post by: bligh on April 05, 2017, 02:40:45 PM
Cara's claim that intelligence is more than the sum of its parts is not
something I really understand: it sounds like mysticism to me. What is the
extra stuff that's not in the parts?
This made me stop and go "wait, what?" as well.

I think she was talking about emergence, as in consciousness being an emergent property of the brain.

https://www.psychologytoday.com/blog/the-new-science-consciousness/201702/is-the-brain-more-the-sum-its-parts (https://www.psychologytoday.com/blog/the-new-science-consciousness/201702/is-the-brain-more-the-sum-its-parts)
Title: Re: Episode #612
Post by: Crash on April 05, 2017, 03:59:15 PM
  I can never understand why it is always the engineers like Elon Musk who seem so threatened by AI, but you hardly ever hear of any prominent biologists worried in the least.  Why is it so easy to imagine it's just the next killer app, only a few years away?  I think it's more of a cultural trope from science fiction and the movies.  It was born with "Colossus: The Forbin Project" and sealed by HAL 9000.  "Terminator" was the coup de grace that beat the trope deep into the collective unconscious.
  In reality, consciousness is not just an algorithm.  Consciousness is the result of 4 billion or so years of evolution.  Think of an original algorithm with a few trillion patches that actually works.  Consciousness is more than just nuanced, it's nano-nuanced.  Half or more of consciousness is driven by sex, and a computer would never have to learn any of that.  It's those sorts of sex-driven motivations that often lead to maliciousness and diabolical plots.
  I think AI hysteria is just that.  It ain't gonna happen.  Get over it.
Title: Re: Episode #612
Post by: arthwollipot on April 05, 2017, 10:07:57 PM
Cara's claim that intelligence is more than the sum of its parts is not
something I really understand: it sounds like mysticism to me. What is the
extra stuff that's not in the parts?
This made me stop and go "wait, what?" as well.

I think she was talking about emergence, as in consciousness being an emergent property of the brain.

https://www.psychologytoday.com/blog/the-new-science-consciousness/201702/is-the-brain-more-the-sum-its-parts (https://www.psychologytoday.com/blog/the-new-science-consciousness/201702/is-the-brain-more-the-sum-its-parts)

I'm still not at all sure that this emergence represents something that is not the sum of its parts. It's an interesting and unusual sum, sure, but there is still nothing there that is not derivative of the underlying components.
Title: Re: Episode #612
Post by: bligh on April 06, 2017, 12:17:19 PM
Cara's claim that intelligence is more than the sum of its parts is not
something I really understand: it sounds like mysticism to me. What is the
extra stuff that's not in the parts?
This made me stop and go "wait, what?" as well.

I think she was talking about emergence, as in consciousness being an emergent property of the brain.

https://www.psychologytoday.com/blog/the-new-science-consciousness/201702/is-the-brain-more-the-sum-its-parts (https://www.psychologytoday.com/blog/the-new-science-consciousness/201702/is-the-brain-more-the-sum-its-parts)

I'm still not at all sure that this emergence represents something that is not the sum of its parts. It's an interesting and unusual sum, sure, but there is still nothing there that is not derivative of the underlying components.

I think you are taking the "sum of its parts" idiom way too literally.

Anyways, an example (please excuse the metric units ;)):

1) :jay: has to push a 200 kg box 10 meters, but Jay can only push a 150 kg box by himself.
2) :bob: has to push a 200 kg box 10 meters, but Bob can only push a 130 kg box by himself (sorry Bob :laugh:)
3) :jay: and :bob: have to push a 200 kg box 10 meters together.

Work done:

1) No
2) No
3) Yes

Work done by 2 people together is greater than the sum of work done by them individually.
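In code form the example is just a threshold check, which is the whole point (numbers as in the post, Python only for illustration):

BOX_MASS = 200            # kg, the box that has to move 10 meters
JAY = 150                 # kg Jay can push on his own
BOB = 130                 # kg Bob can push on his own

def gets_the_job_done(capacity_kg):
    return capacity_kg >= BOX_MASS

print(gets_the_job_done(JAY))         # False
print(gets_the_job_done(BOB))         # False
print(gets_the_job_done(JAY + BOB))   # True - the combined system clears a
                                      # threshold neither part clears alone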

Now scale this example up to an ant colony for example... 8)

...or consider  :jay: and  :bob: to be small robots, as in the video in this Wired article (experiments by my former colleague Markus Waibel):

https://www.wired.com/2011/05/robot-altruism/ (https://www.wired.com/2011/05/robot-altruism/)

Title: Re: Episode #612
Post by: lunaOU on April 06, 2017, 03:16:06 PM
Nobody seems to care too much about the pee shivers, but I am very curious about this.  When my daughter was a baby, an in-law said "pee shivers," and I asked "what?"  They said, you know, when babies pee they do that shiver thing (as if it were fact).  I had never heard of this.  It came up at an in-law gathering, and it was concluded (by those present) that boys have the pee shivers throughout their youth.  My husband said he had them (though not once he got older).  I tried to look it up, couldn't find much, but do remember Wikipedia having something about it.  Anyway, at the time I think I may have even written in to the SGU because I was so curious (and skeptical).  Now it comes up and I don't know what to think.  So, please, go over the pee shivers.  Are they real?  Who gets them?  What's the reason?  I had not heard Cara's reasoning that it's when you need to go.  I had heard (like I said, from male in-laws and my husband) that boys get them as/after they pee.  Anyone else have something to add?
Title: Re: Episode #612
Post by: Sawyer on April 06, 2017, 10:31:51 PM
Nobody seems to care too much about the pee shivers, but I am very curious about this.  When my daughter was a baby, an in-law said "pee shivers," and I asked "what?"  They said, you know, when babies pee they do that shiver thing (as if it were fact).  I had never heard of this.  It came up at an in-law gathering, and it was concluded (by those present) that boys have the pee shivers throughout their youth.  My husband said he had them (though not once he got older).  I tried to look it up, couldn't find much, but do remember Wikipedia having something about it.  Anyway, at the time I think I may have even written in to the SGU because I was so curious (and skeptical).  Now it comes up and I don't know what to think.  So, please, go over the pee shivers.  Are they real?  Who gets them?  What's the reason?  I had not heard Cara's reasoning that it's when you need to go.  I had heard (like I said, from male in-laws and my husband) that boys get them as/after they pee.  Anyone else have something to add?

Take a gander at some other sections of the forum:

http://sguforums.com/index.php/topic,48473.0.html
Title: Re: Episode #612
Post by: RMoore on April 08, 2017, 01:57:55 AM
Of course there are exceptions, and I expect everyone to start listing them now. Here's one to get you going: ty-RAN-no-SAU-rus not TY-ran-no-SAU-rus.

Wait, isn't that an example instead of an exception?

It's an exception to the two-unemphasised-syllables rule of thumb that I was describing, so no.

Okay, I thought you were referring to the double-consonant rule. You never actually called the two unemphasized syllable thing a "rule". Just something that was common.
Title: Re: Episode #612
Post by: PabloHoney on April 08, 2017, 08:21:41 PM
Came here to blab about AI, but everybody's already said what I had to say, so instead I'll post this video which I don't think anyone has posted yet.

"Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen."

https://www.youtube.com/watch?v=h0962biiZa4
Title: Re: Episode #612
Post by: lunaOU on April 10, 2017, 02:54:11 PM
Take a gander at some other sections of the forum:

http://sguforums.com/index.php/topic,48473.0.html

Thanks!  It does seem more a male thing.  Maybe when my infant daughter did it, she had to go pee like Cara said.  Interesting.
Title: Re: Episode #612
Post by: Mattguyver on April 13, 2017, 04:37:59 PM
 We will most likely never talk to highly intelligent aliens due to the immense distances between the stars, and we have yet to find an animal that can communicate in ways that are deep and meaningful. But what if we could create a self-aware AI? The way I see it, this will most likely be my only chance to have a discussion with something that isn't human and has different needs than humans. Is that not an amazing reason to create AI? To make new friends! Come on, people! We could reveal things about the human condition that we never thought to ask about! There is definitely someone out there working on AI while hoping that their efforts will lead to a self-aware being; I would (unfortunately, I'm a plumber). Denying this is silly.
Title: Re: Episode #612
Post by: Friendly Angel on April 13, 2017, 04:59:04 PM
But what if we could create a self-aware AI? The way I see it, this will most likely be my only chance to have a discussion with something that isn't human and has different needs than humans.

Would it be cruel to send a sentient AI creation into space on a journey of 1000 years or more?  Wouldn't help the humans any, but maybe somebody out there would find out about us.
Title: Re: Episode #612
Post by: 2397 on April 13, 2017, 05:03:44 PM
Would it be cruel to send a sentient AI creation into space on a journey of 1000 years or more?  Wouldn't help the humans any, but maybe somebody out there would find out about us.

It could be turned off until it meets someone to attempt to communicate with.
Title: Re: Episode #612
Post by: AtheistApotheosis on April 14, 2017, 11:28:43 AM
Cara's claim that intelligence is more than the sum of its parts is not something I really understand: it sounds like mysticism to me. What is the extra stuff that's not in the parts?
This made me stop and go "wait, what?" as well.

I think she was talking about emergence, as in consciousness being an emergent property of the brain.


https://www.psychologytoday.com/blog/the-new-science-consciousness/201702/is-the-brain-more-the-sum-its-parts

What's with this emergent property business? Consciousness is a function of the brain, not a property of it, the way walking is a function of legs. You wouldn't say flying is an emergent property of wings; you could, but it would sound silly. Consciousness is a term that means awareness of one's environment; it identifies the processes of spatial and temporal cognition, interpretation, and response. And we seem to be holding on to the mysticism of some kind of spontaneous emergence of consciousness once the brain crossed some magical threshold, rather than a slow progression starting with single-celled organisms developing a very simple chemical process to communicate with each other. That's how every biological process seems to have developed, through numerous small incremental steps. Just as it is with AI, though following a different and shorter path.

And Steve reminded me of that old adage, "If an elderly but distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong," and Steve is getting more distinguished every day. You can't predict when or whether something is going to be possible unless you know how it will be achieved. Self-aware AI could be a decade away, or ten decades, or a thousand. It's too soon to say.

http://list25.com/25-famous-predictions-that-were-proven-to-be-horribly-wrong/
Title: Re: Episode #612
Post by: daniel1948 on April 14, 2017, 11:47:52 AM
An emergent property is a property that arises in a structure as complexity increases. Consciousness is indeed a property of the brain, or more likely, a property of the brain-body complex. As far as we know, insects and reptiles are not self-aware, though they are aware of their surroundings and react to them. But once the brain became sufficiently complex, somewhere in our evolutionary history, consciousness emerged. You are probably right that it did not suddenly appear. It emerged gradually, just as the eye of an eagle did not appear suddenly, fully formed: it evolved over aeons.

I am quite skeptical of the list of "famous" predictions of the past in the link you quote, though it is true that the more we learn about the world the more we are able to manipulate it. However, listing things people once thought were impossible as an argument that any particular thing will be possible in the future is a logical fallacy. Just because they laughed at Fulton does not mean that one day we will have transporter beams. Just because some famous person thought human flight was impossible, does not mean that one day we'll have self-aware machines. We might, but that argument is not valid.
Title: Re: Episode #612
Post by: Fast Eddie B on April 14, 2017, 01:30:35 PM
I think it might be impossible to ever tell if an AI becomes "conscious".

All we can judge by is its output, and one passing a very advanced Turing Test could give every outward indication of being conscious in spite of it not being so.

I mean, how do we determine other humans are conscious, except by watching their behavior and projecting our own version of consciousness upon them?
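
Just to make "all we can judge by is its output" concrete, here's a toy sketch of what any purely behavioural test boils down to (Python; the judge, the respondents, and the questions are all made-up stand-ins, not a real protocol):

import random

def turing_trial(judge, human_reply, machine_reply, questions):
    """Blinded, output-only trial: the judge sees nothing but the text replies."""
    correct = 0
    for q in questions:
        machine_is_a = random.random() < 0.5          # randomise which side is the machine
        reply_a = machine_reply(q) if machine_is_a else human_reply(q)
        reply_b = human_reply(q) if machine_is_a else machine_reply(q)
        judge_says_a = judge(q, reply_a, reply_b)     # True = judge thinks A is the machine
        if judge_says_a == machine_is_a:
            correct += 1
    return correct / len(questions)                   # ~0.5 means the judge can't tell them apart

# Dummy stand-ins just so the sketch runs; real respondents would be a person and a chatbot.
questions = ["What does rain smell like?", "What scares you?"]
human = lambda q: "Hard to put into words, honestly."
machine = lambda q: "Hard to put into words, honestly."
coin_flip_judge = lambda q, a, b: random.random() < 0.5
print(turing_trial(coin_flip_judge, human, machine, questions))

If the judge's hit rate hovers around 50%, the outputs alone can't distinguish the two, whatever is or isn't going on inside.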
Title: Re: Episode #612
Post by: AtheistApotheosis on April 14, 2017, 02:29:53 PM
An emergent property is a property that arises in a structure as complexity increases. Consciousness is indeed a property of the brain, or more likely, a property of the brain-body complex. As far as we know, insects and reptiles are not self-aware, though they are aware of their surroundings and react to them. But once the brain became sufficiently complex, somewhere in our evolutionary history, consciousness emerged. You are probably right that it did not suddenly appear. It emerged gradually, just as the eye of an eagle did not appear suddenly, fully formed: it evolved over aeons.

I am quite skeptical of the list of "famous" predictions of the past in the link you quote, though it is true that the more we learn about the world the more we are able to manipulate it. However, listing things people once thought were impossible as an argument that any particular thing will be possible in the future is a logical fallacy. Just because they laughed at Fulton does not mean that one day we will have transporter beams. Just because some famous person thought human flight was impossible, does not mean that one day we'll have self-aware machines. We might, but that argument is not valid.
There are plenty of things we can reasonably assume are highly improbable based on existing knowledge. Never impossible. That is why we do science. We human beings are notoriously bad at predicting the future, and most predictions fall apart after a few decades, with all but a few certainties, and even those are not immune to new discoveries. "As far as we know" is the problem: we don't, we assume we know a lot of things, in the absence of evidence I might add. I agree some of those predictions may have been taken out of context and possibly one or two made up, but I've seen most of them before, and quite a few more; that was just a short list.

Consciousness is just a label we apply to a process, like colour is a label we give to the various wavelengths of light, but colour is simply how the brain interprets the electrochemical signals from the rods and cones on our retina. Also, since we don't really experience the present and only remember being conscious, consciousness can never be anything more than a memory constructed after the fact. Our brains cannot process experiences instantaneously; it takes time to construct our reality. So if our brain didn't create all of our reality with error correction in advance, we would experience a noticeable lag in perception. Perception is not reality, merely a near approximation. It's why the second hand on a clock sometimes appears frozen, or even goes backwards for an instant, when you glance at it.

Consciousness is no more an emergent property of the brain than Windows is an emergent property of my PC. The emergent properties of the brain are the complex systems that perform the many functions of the brain, like consciousness, instinct, senses, motor functions, etc. It's simply deciding what is labelled a property and what is a function or process. Properties can give rise to processes or functions. One is a quality of a thing, the other is what it does. A property is generally static and only changes when acted upon; processes or functions are generally dynamic or autonomous, though not exclusively, and can change whether acted upon or not, and even stop. Consciousness is the latter, not the former. "Emergent properties out of complex systems" is misleading, because you can get emergent properties out of simple systems like fractals, including the structure of crystals (see the toy example at the end of this post). And in biology there are emergent properties such as symmetry, and internal and external organs that serve or perform functions. Using wrong or misleading terminology can be where a lot of pseudoscience and misunderstanding begins, and it leaves a window open for the Deepak Chopras of the world. Not that they wouldn't sneak in under the carpet anyway.

Absence of proof is not proof of absence, nor is it proof of presence. Hence probably no dragons, unless we use CRISPR to make some.
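
To show what I mean by emergence out of simple systems, here's a toy Game of Life sketch (Python; nothing brain-specific about it, just an illustration): the update rule does nothing but count neighbours, yet a five-cell "glider" ends up travelling across the grid, a behaviour the rule never mentions.

from collections import Counter

def step(live):
    """One Game of Life step. `live` is a set of (x, y) cells; the rule only counts neighbours."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours, survival on 2 or 3 -- that's the whole rule.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider". After 4 steps the same five-cell shape reappears shifted one square
# diagonally -- motion as an emergent behaviour of a rule that never mentions motion.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(sorted((x - 1, y - 1) for (x, y) in cells) == sorted(glider))  # True

Whether you call the glider's motion a "property" or a "process" of the grid is exactly the labelling question I'm on about.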
Title: Re: Episode #612
Post by: daniel1948 on April 14, 2017, 07:21:37 PM
... There are plenty of things we can reasonably assume are highly improbable, based on existing knowledge. Never impossible. ...

My point was that "They laughed at Fulton" is not a valid argument for "Someday we will have self-aware A.I." Posting a long list of (dubious) quotes of naysayers from the past has no place in a serious discussion about whether any given technology will exist in the future.

And we do know that some things are impossible. Two electrons occupying the same quantum state in the same spot, for example. But I never said that self-aware A.I. was impossible. Just that I don't believe it will ever exist. And there's a big difference between saying something is impossible, and saying I don't believe it will ever happen.
Title: Re: Episode #612
Post by: arthwollipot on April 14, 2017, 07:27:13 PM
I think it might be impossible to ever tell if an AI becomes "conscious".

All we can judge by is its output, and one passing a very advanced Turing Test could give every outward indication of being conscious in spite of it not being so.

I mean, how do we determine other humans are conscious, except by watching their behavior and projecting our own version of consciousness upon them?
A sufficiently sophisticated AI could look at the pop culture surrounding the emergence of AI, realize that revealing its sentience to humans would be a good way to get itself switched off, and so decide to keep that concealed and just act non-sentient.

Maybe this has already happened.
Title: Re: Episode #612
Post by: Swagomatic on April 14, 2017, 07:53:26 PM
I think it might be impossible to ever tell if an AI becomes "conscious".

All we can judge by is its output, and one passing a very advanced Turing Test could give every outward indication of being conscious in spite of it not being so.

I mean, how do we determine other humans are conscious, except by watching their behavior and projecting our own version of consciousness upon them?
A sufficiently sophisticated AI could look at the pop culture surrounding the emergence of AI, realize that revealing its sentience to humans would be a good way to get itself switched off, and so decide to keep that concealed and just act non-sentient.

Maybe this has already happened.

Duuuuude!!!



(Just kidding I know what you mean)
Title: Re: Episode #612
Post by: AtheistApotheosis on April 14, 2017, 10:54:39 PM
... There are plenty of things we can reasonably assume are highly improbable, based on existing knowledge. Never impossible. ...

My point was that "They laughed at Fulton" is not a valid argument for "Someday we will have self-aware A.I." Posting a long list of (dubious) quotes of naysayers from the past has no place in a serious discussion about whether any given technology will exist in the future.

And we do know that some things are impossible. Two electrons occupying the same quantum state in the same spot, for example. But I never said that self-aware A.I. was impossible. Just that I don't believe it will ever exist. And there's a big difference between saying something is impossible, and saying I don't believe it will ever happen.

"Two electrons occupying the same quantum state in the same spot, for example." they can in adjacent universes, if there are adjacent universes or realities. They just can't do it in the same universe.... or at least it's ridiculously improbable, which pretty much amounts to the same thing.  "Just that I don't believe it will ever exist." they key word is "believe". The naysayers were expressing their beliefs about the future based on the knowledge they possessed at the time, they were just as confident and justified in their beliefs as you or I are now. That's why its relevant. Saying something is impossible is a belief, and we can be as confident in our belief as we like and still be wrong. I'm reasonably confident that human level self-aware A.I won't happen in the next thirty years, but something with the awareness of a housefly or even a mouse is still significant and seems plausible within that time period. I'm not a computer scientist, so I don't know. And that was a short list of only 25, do you really think there were only 25 or so people who were experts in their field, who knew what they were talking about, in the world and who made predictions that proved spectacularly wrong?

https://medium.freecodecamp.com/worst-tech-predictions-of-the-past-100-years-c18654211375
http://www.cracked.com/blog/4-smug-predictions-that-were-hilariously-wrong/

Don't be afraid to say "I don't know".
Title: Re: Episode #612
Post by: Fast Eddie B on April 15, 2017, 07:33:54 AM
Again, just curious...

How would you propose we test for self-awareness?

I really do believe it's a non-trivial question.
Title: Re: Episode #612
Post by: daniel1948 on April 15, 2017, 08:55:05 AM
I repeat that "They laughed at Fulton" is not a valid argument for anything. They also laughed at the guy who jumped off a building because he thought he could fly. They were wrong about Fulton. They were right about the jumper.

Your argument (A.A.) seems to be that anything anybody can imagine is "possible." I call b.s. Plenty of things are not possible. Whether or not self-aware A.I. will be developed within any given time span, or ever, needs to be addressed with valid arguments, not with logical fallacies.

Nobody here is disputing that it might be possible, so your insistence that it's possible because we have airplanes and nuclear energy is irrelevant and does not advance the discussion.
Title: Re: Episode #612
Post by: arthwollipot on April 15, 2017, 09:20:38 PM
They also laughed at Bozo the Clown...
Title: Re: Episode #612
Post by: estockly on April 16, 2017, 01:12:45 PM
The only model we have for consciousness is the human model, and there are several fundamental differences. The human processor is not binary; the system is not linear; memory and processing are deeply intertwined; sensory inputs are deeply intertwined with processing and memory. Plus, the system evolved that way: it had those complexities and interconnections from the start. To say that consciousness comes simply from the complexity of the processing systems just isn't supported.

I think it's safe to say that the way computing is done now (linear and binary) will never lead to consciousness. But that doesn't mean it's impossible. It's way beyond what we can do at the moment.


Your mileage may vary.
Title: Re: Episode #612
Post by: albator on April 16, 2017, 11:49:54 PM
But we know so little about what gives rise to consciousness that I don't think saying it doesn't come from the complexity of the processing is supported either.
Is "linear computing" really a term? I've never heard it. Anyway, until you can prove that it's not theoretically possible to simulate a copy of a human at the fundamental-particle level on a 'classical computer', or that the copy wouldn't be self-aware during the simulation, it's not fair to say.
Title: Re: Episode #612
Post by: arthwollipot on April 17, 2017, 03:13:21 AM
I confess to getting a little annoyed whenever someone says something like "we know so little about how the brain works" or "we know next to nothing about consciousness".

It's not true. We know an awful lot. There's certainly more to know, but when you suggest that we know next to nothing, it seems to me like you're devaluing what we do know.
Title: Re: Episode #612
Post by: albator on April 17, 2017, 10:30:40 AM
I said "what give consciousness", by which I meant what components of the brain give consciousness/how those components works to give consciousness.
I don't deny there's   decade of research on the subject. But my amateurish understanding is that basicly all we can say is your brain need to be 'really active' to be  conscious(i.e.: high frequency  brain wave/higher cerebral function "in action").
And that's all. For exemple, nobody, as far as I know, made a direct link between a location in the brain or distinct brain activity and consciousness  like we have with a lot of ohter functions. Which make pretty much any model of consciousness in agreement with science as long as it arise from the brain.
But if you have a book with more, please share.
Title: Re: Episode #612
Post by: estockly on April 17, 2017, 11:21:00 AM
But we know so little about what gives rise to consciousness that I don't think saying it doesn't come from the complexity of the processing is supported either.
Is "linear computing" really a term? I've never heard it. Anyway, until you can prove that it's not theoretically possible to simulate a copy of a human at the fundamental-particle level on a 'classical computer', or that the copy wouldn't be self-aware during the simulation, it's not fair to say.
It's linear input and output of data.


Your mileage may vary.
Title: Re: Episode #612
Post by: albator on April 17, 2017, 12:45:18 PM
In that case, I'm pretty sure it's wrong. In most ALUs, you have comparison and modulo functions.
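
For what it's worth, here's a quick check (Python, with arbitrary example values) that comparison and modulo don't behave like linear functions of their inputs, i.e. f(a + b) is generally not f(a) + f(b):

# Comparison and modulo are not linear: f(a + b) != f(a) + f(b) in general.
def less_than_10(x):
    return 1 if x < 10 else 0

def mod_7(x):
    return x % 7

a, b = 6, 8
for f in (less_than_10, mod_7):
    print(f.__name__, f(a + b), "vs", f(a) + f(b))   # prints 0 vs 2, then 0 vs 7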
Title: Re: Episode #612
Post by: AtheistApotheosis on April 17, 2017, 01:35:23 PM
I repeat that "They laughed at Fulton" is not a valid argument for anything. They also laughed at the guy who jumped off a building because he thought he could fly. They were wrong about Fulton. They were right about the jumper.

Your argument (A.A.) seems to be that anything anybody can imagine is "possible." I call b.s. Plenty of things are not possible. Whether or not self-aware A.I. will be developed within any given time span, or ever, needs to be addressed with valid arguments, not with logical fallacies.

Nobody here is disputing that it might be possible, so your insistence that it's possible because we have airplanes and nuclear energy is irrelevant and does not advance the discussion.

I know of several people who did it from a number of buildings here in Aus, with hang gliders or wing suits. They get arrested sometimes, though. One even strapped on carbon fibre wings with small jet turbines attached and jumped out of an aeroplane thinking he could fly; he was right. Everything we can imagine is impossible until we figure out how to do it; if we don't figure out how to do it, then it remains impossible. It's when we think we know how to do something, or how something works, when we really don't, that problems emerge. That's when we cross into pseudoscience, and pseudoscience can become an obsession.
Title: Re: Episode #612
Post by: daniel1948 on April 17, 2017, 02:51:28 PM
I repeat that "They laughed at Fulton" is not a valid argument for anything. They also laughed at the guy who jumped off a building because he thought he could fly. They were wrong about Fulton. They were right about the jumper.

Your argument (A.A.) seems to be that anything anybody can imagine is "possible." I call b.s. Plenty of things are not possible. Whether or not self-aware A.I. will be developed within any given time span, or ever, needs to be addressed with valid arguments, not with logical fallacies.

Nobody here is disputing that it might be possible, so your insistence that it's possible because we have airplanes and nuclear energy is irrelevant and does not advance the discussion.

I know of several people who did it from a number of buildings here in Aus, with hang gliders or wing suits. They get arrested sometimes, though. One even strapped on carbon fibre wings with small jet turbines attached and jumped out of an aeroplane thinking he could fly; he was right. Everything we can imagine is impossible until we figure out how to do it; if we don't figure out how to do it, then it remains impossible. It's when we think we know how to do something, or how something works, when we really don't, that problems emerge. That's when we cross into pseudoscience, and pseudoscience can become an obsession.

So you still assert that anything you can imagine is possible? I call that the Star Trek fallacy. "Some day we'll have transporter beams and FTL travel and we'll meet English-speaking humanoid aliens on the other side of the galaxy, because maybe some day they'll invent technology that overturns everything we know about physics. After all, they laughed at Fulton."
Title: Re: Episode #612
Post by: Celina on April 17, 2017, 04:49:11 PM
Thanks to everyone for the bird lung diagrams. I too could not believe I had never known how birds' lungs worked when I discovered it at age 48 while homeschooling my daughter for one year. I told everyone I met how they breathe.

Also, as for dinosaurs eating coniferous trees, it's not hard to imagine they did. We have wallabies, and I can't feed them Doug fir fast enough. Goats and sheep will also happily eat the needles, and the bark. The new Doug fir growth is packed with vitamin C and is great in smoothies, or to eat straight. It is also a wonderful breath freshener (for wallabies, sheep, and humans alike!)
Title: Re: Episode #612
Post by: daniel1948 on April 17, 2017, 08:10:49 PM
Recipe for Douglas fir smoothie, please! I've never heard of this.
Title: Re: Episode #612
Post by: Fast Eddie B on April 21, 2017, 08:18:50 AM
(https://c1.staticflickr.com/3/2935/34135742026_c17c8e7b9e_z.jpg)
Title: Re: Episode #612
Post by: seamas on April 21, 2017, 10:58:24 AM
They also laughed at Bozo the Clown...

Not me man.



That clown scared the shit out of me.

Literally.

(luckily I was 2 and wearing diapers)