Author Topic: Episode #666  (Read 4627 times)


Offline Steven Novella

  • SGU Panel Member
  • Well Established
  • *****
  • Posts: 1786
    • http://www.theskepticsguide.org
Episode #666
« on: April 14, 2018, 12:20:33 PM »
What’s the Word: Demon; News Items: 666, Buzz Aldrin and Aliens, AI Good and Evil, The Business of Exorcism, The Power of Satan; Who’s That Noisy; Your Questions and E-mails: Acidic vs Alkaline diet; Science or Fiction
Steven Novella
Host, The Skeptics Guide
snovella@theness.com




Online bachfiend

  • Seasoned Contributor
  • ****
  • Posts: 623
Re: Episode #666
« Reply #5 on: April 14, 2018, 04:01:34 PM »
Actually, you’ve missed the ‘number of the beast’ episode. ‘666’ was a mistranslation in Revelation; it should actually have been ‘616’, so the ‘number of the beast’ episode was actually a year ago.

Offline Lothian

  • Brand New
  • Posts: 3
Re: Episode #666
« Reply #6 on: April 14, 2018, 04:06:48 PM »
Interesting that the podcast "This Week in Science" is also celebrating its episode 666. Coincidence? I think not!

Offline brilligtove

  • Too Much Spare Time
  • ********
  • Posts: 6005
  • Ignorance can be cured. Stupidity, you deal with.
Re: Episode #666
« Reply #7 on: April 14, 2018, 06:21:29 PM »
Quote from: bachfiend on April 14, 2018, 04:01:34 PM
Actually, you’ve missed the ‘number of the beast’ episode. ‘666’ was a mistranslation in Revelation; it should actually have been ‘616’, so the ‘number of the beast’ episode was actually a year ago.

They discuss this in the episode. It is likely not a mistranslation; it is numerology related to the different spellings of "Nero" (as "Neron") in, IIRC, Greek and Hebrew.
evidence trumps experience | performance over perfection | responsibility – authority = scapegoat | emotions motivate; data doesn't

Offline lucek

  • Off to a Start
  • *
  • Posts: 83
Re: Episode #666
« Reply #8 on: April 15, 2018, 07:42:37 AM »
On the moral A.I.: I agree the article simplifies a complex problem, but no, it's not just analyzing consensus. If done correctly, morality is the kind of complex problem that A.I. research can inform on, not solve. But inform. And in theory even the biases that were listed may shake out of such research. That said, we are talking about basic research on the topic.
You have the power, but. . .
Power is just energy over time and. . .
Energy is just the ability to do work.

Offline Ah.hell

  • Poster of Extraordinary Magnitude
  • **********
  • Posts: 12643
Re: Episode #666
« Reply #9 on: April 15, 2018, 12:15:59 PM »
I worked for years next to a building at 666 2nd Street in San Francisco. It was renovated and became 680 Second Street; I was very disappointed by the change.

Offline Igor SMC

  • Not Enough Spare Time
  • **
  • Posts: 240
Re: Episode #666
« Reply #10 on: April 15, 2018, 12:21:49 PM »
Quote from: lucek on April 15, 2018, 07:42:37 AM
On the moral A.I.: I agree the article simplifies a complex problem, but no, it's not just analyzing consensus. If done correctly, morality is the kind of complex problem that A.I. research can inform on, not solve. But inform. And in theory even the biases that were listed may shake out of such research. That said, we are talking about basic research on the topic.

Agree. Let us not forget this experiment:

Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours
https://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/

When the rogues were discussing things that were intentionally changed to avoid the number 666, memory modules came to mind... My old computer had 333 MHz DDR memory, and after some years I was considering an upgrade to the 667 MHz modules. A circuit that synchronizes 333 and 666 is very simple: for every clock cycle of the old memory there are exactly two cycles of the new one. But synchronizing 333 and 667 is far more complex, although the difference might seem small. Since computers make billions of calculations per second, for many hours in a row, that extra 1 MHz would be completely counterproductive and inefficient. From an engineering point of view, it makes perfect sense to leave it at the exact double... Now I'm wondering: do they market it as 667 while, under the hood, the true frequency is the exact double of 333?

According to the wiki, the exact frequency of the old modules was 333.33 MHz, so the exact double would be 666.66 MHz... but they say the frequency is 666.67 MHz! I understand that it is mathematically correct to round 666.66 to 667, but rounding 666.66 to 666.67 appears to make no sense at all...
"Knowledge is preferable to ignorance. Better by far to embrace the hard truth than a reassuring fable"

Offline lucek

  • Off to a Start
  • *
  • Posts: 83
Re: Episode #666
« Reply #11 on: April 15, 2018, 03:04:57 PM »
Quote from: Igor SMC on April 15, 2018, 12:21:49 PM
Quote from: lucek on April 15, 2018, 07:42:37 AM
On the moral A.I.: I agree the article simplifies a complex problem, but no, it's not just analyzing consensus. If done correctly, morality is the kind of complex problem that A.I. research can inform on, not solve. But inform. And in theory even the biases that were listed may shake out of such research. That said, we are talking about basic research on the topic.

Agree. Let us not forget this experiment:

Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours
https://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/

When the rogues were discussing things that were intentionally changed to avoid the number 666, memory modules came to mind... My old computer had 333 MHz DDR memory, and after some years I was considering an upgrade to the 667 MHz modules. A circuit that synchronizes 333 and 666 is very simple: for every clock cycle of the old memory there are exactly two cycles of the new one. But synchronizing 333 and 667 is far more complex, although the difference might seem small. From an engineering point of view, it makes perfect sense to leave it at the exact double... Now I'm wondering: do they market it as 667 while, under the hood, the true frequency is the exact double of 333?

According to the wiki, the exact frequency of the old modules was 333.33 MHz, so the exact double would be 666.66 MHz... but they say the frequency is 666.67 MHz! I understand that it is mathematically correct to round 666.66 to 667, but rounding 666.66 to 666.67 appears to make no sense at all...
I think that is due to rounding: 333 is a third of a gigahertz rounded down, and 667 is two thirds rounded up.
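This is easy to check with a quick sketch (plain Python, purely illustrative): the marketed figures 333, 667, 333.33, and 666.67 are all rounded forms of exact thirds of 1000 MHz, so before rounding the "667" rate really is the exact double of the "333" rate, and 666.666... correctly rounds to 666.67 at two decimals.

```python
from fractions import Fraction

# The underlying rates are exact thirds of a gigahertz; marketing rounds them.
ddr_333 = Fraction(1000, 3)      # 333.333... MHz
ddr_667 = 2 * ddr_333            # 666.666... MHz

assert ddr_667 == 2 * ddr_333    # the 2:1 clock ratio is exact before any rounding

print(round(float(ddr_333), 2))  # 333.33
print(round(float(ddr_667), 2))  # 666.67 -- 666.666... rounds UP at the 2nd decimal
print(round(float(ddr_667)))     # 667
```

So "666.67" is not a rounding of the already-rounded 666.66; both are independent roundings of the exact value 2000/3.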
You have the power, but. . .
Power is just energy over time and. . .
Energy is just the ability to do work.

Offline esterin

  • Off to a Start
  • *
  • Posts: 52
Re: Episode #666
« Reply #12 on: April 16, 2018, 02:49:36 AM »
Worrying now about AI and morality is like worrying now about not corrupting intelligent alien cultures from other star systems or galaxies.
It might be entertaining for SF flicks, but taking it seriously is weird.

On the subject of AI, am I the only one missing discussion of the self-driving accidents, and the dishonest Tesla/Musk press releases?
This is just the start. Since billions are riding on this (no pun intended), we are going to see a lot of fake statistics about the safety of self-driving cars; skeptics should start being skeptical.
I suspect that people who point that out will immediately be branded "anti-science" and "anti-progress".

Also, I am missing discussion of the most interesting science news in a long time: Brian Wansink, head of the Cornell Food Lab and one of the biggest stars of "nutrition science", and his exploits.
Reading a lot of nutrition science, I was always suspicious of the research standards in the field; it is shocking to see them so spectacularly exposed.

Getting a PhD before commenting on morality, really?
Morality is the one thing where everyone's opinion is exactly equal; invoking elitism here is completely misplaced and a case of severe scientism.
Consensus and democracy are the only basis for morality, not elitist technocrats with PhDs. It would be interesting to be informed where social engineering by PhDs has had such huge success.

Offline Mr. Beagle

  • Stopped Going Outside
  • *******
  • Posts: 4327
    • When God Plays DIce
Re: Episode #666
« Reply #13 on: April 16, 2018, 08:07:58 AM »
I took this graduate course at a Jesuit university 25 years ago; they had a sense of humor. It was a great course, by the way. The prof, a member of the Jesus Seminar, sees Revelation as a theatrical/political performance, using the language of the time typically used to praise Caesar, but ending with Jesus showing up in triumph. The course also looked at the book as one of a dozen or so similar "apocalypses" in circulation at the time. Why did this one get in?

Mister Beagle
The real world is tri-color
now blogging at http://godplaysdice.com

Offline Igor SMC

  • Not Enough Spare Time
  • **
  • Posts: 240
Re: Episode #666
« Reply #14 on: April 16, 2018, 01:13:33 PM »
Quote from: esterin on April 16, 2018, 02:49:36 AM
Worrying now about AI and morality is like worrying now about not corrupting intelligent alien cultures from other star systems or galaxies. It might be entertaining for SF flicks, but taking it seriously is weird.

Oh boy... you are many, many orders of magnitude wrong on this one. AI already has huge potential to cause a lot of damage to people, and depending on what kind of responsibility we hand to those systems, catastrophic things can happen. As Zeynep Tufekci said in her talk, when most people think about "The Threat of AI" they imagine an army of killer robots like those in the Matrix or Terminator movies. That is not what happens in the real world. What happens instead is the potential emergence of highly unethical behavior from AIs, which can cause real psychological or physical harm to humans. If you think that is something far in the future, it is time to wake up and see what is ALREADY happening. After watching the TED talk, look up lethal autonomous weapons (LAWs). The UN already knows about the huge harm AIs can cause if they are left to decide very important ethical issues on their own. That's why they want to regulate and ban them: they want to make sure that the final decision to end a human life is never the product of a machine's decision. Instead of thinking about the Terminator coming after you, think about a completely autonomous drone "deciding" that it is better to detonate a missile at a given moment to maximize damage to the specified target, even if that means unnecessarily killing innocent people passing by. That is an ethical dilemma... the result of programmers already feeding preconceived "ethical standards" into it.



Let me give you an example that clarifies the confusion people have about the threat of AIs. Imagine two scenarios, A and B. In scenario A, someone develops an astronomically advanced AI, so advanced that it is starting to become sentient. It reads and learns from all the articles on Wikipedia and the whole internet. After some years it becomes so advanced that it starts forming its own predilections for certain philosophers and historical figures. Then, after a very profound philosophical analysis, this AI becomes disgusted by the behavior of humans and decides to kill a lot of them. It launches a nuclear warhead at the city of New York... it explodes, killing 20 million people.

Now let's go to scenario B: present day, some military scientists decide to test a standard 2018-era AI on their nuclear launch systems. But a programmer messed up the software in the last patch, and through a variable-conversion error, New York somehow surfaces as the best target imaginable. The system launches, New York explodes, and 20 million people die.

When the topic of the danger of AIs comes up, naïve people think about the sci-fi scenario A. Serious people in the field of AI think about scenario B. That's why Elon Musk wants us to be very cautious about it, and to build in a "kill switch". If we delegate huge powers to AIs, they could do catastrophic damage... not because they will become an army of highly philosophical sentient robots that want to kill mankind, but because humans often mess up. Programmers mess up. To all the dead people in New York, it doesn't matter at all whether they were killed by a higher intelligence or by a system with one thousandth the intelligence of a cockroach. THE DAMAGE IS PRECISELY THE SAME.

Just like the example in the talk: what's the difference, to those Facebook users, between being exploited by a sadistic, greedy, sentient AI and by the simple refined ad algorithm used today? None. This, my fellow skeptic, is the true realistic danger of AIs.


"Knowledge is preferable to ignorance. Better by far to embrace the hard truth than a reassuring fable"
