I thought the AI discussion was disappointing. It's a shame Cara and Steve
didn't engage with the topic, since many of the examples in AI safety are fun
and thought-provoking. Instead, it was like hearing two ornithologists critique
the field of aerodynamics, arguing that "true" artificial flight won't be
achieved until we completely understand birds and start to build planes with
wings that flap and have feathers.

Cara's claim that intelligence is more than the sum of its parts is not
something I really understand: it sounds like mysticism to me. What is the
extra stuff that's not in the parts?

Steve seems to believe that self-awareness is somehow the obvious missing
ingredient, but, as he points out, it is not so obvious to lots and lots of AI
experts. Maybe these AI experts are on to something? Maybe self-awareness and
consciousness are not related to intelligence in any significant way? Maybe
they are just side effects of biology? Maybe they actually hinder us instead
of helping us? Or maybe they are just not useful concepts --- perhaps they are
philosophical umbrella terms that bundle various ideas about minds and brains?

I think computation is a vastly more useful idea than "self-awareness",
"consciousness", or "mind" --- terms that are rife with confusion. At least,
it helps lay bare the "meat is magic" fallacy, i.e. the argument that human
brains are special because they've been blessed with self-awareness, or
consciousness, or intentionality, or a soul, etc.