I was really surprised by Steve's comments on Artificial Intelligence. I suspect there are several unstated assumptions behind them, possibly stemming from his perspective as a neurologist.
First, the claim that AIs don't make decisions is just baffling. Of course AIs make decisions, all the time. Amazon's AI decides which books to suggest to you. Tesla's AI decides whether to brake, change lanes, or run off the road. Google's AI decides which Korean translation to suggest for a given English sentence. Perhaps there is an unstated "conscious" qualifier on the kind of decisions an AI is supposedly unable to make?
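To make "decision" concrete: at the bottom it is usually a learned scoring function whose output gets mapped to an action. Here is a minimal Python sketch of that; the features, weights, and threshold are invented for illustration, not taken from any real system.

```python
# Minimal sketch of an "AI decision": a learned scorer mapped to an action.
# Weights and threshold are illustrative placeholders, not a trained model.

WEIGHTS = {"contains_link": 1.2, "all_caps_subject": 0.8, "known_sender": -2.0}
THRESHOLD = 0.5

def spam_score(features):
    """Weighted sum of binary features, standing in for a trained classifier."""
    return sum(WEIGHTS[name] for name, present in features.items() if present)

def decide(features):
    """Map the score to an action - this mapping *is* the decision."""
    return "move_to_spam" if spam_score(features) > THRESHOLD else "deliver_to_inbox"

print(decide({"contains_link": True, "all_caps_subject": True, "known_sender": False}))
# -> move_to_spam
```

Whether the system "consciously" weighs the options is a separate question; that it selects one action over another is not.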
I found Bob's simple example excellent: give an AI an optimisation goal (e.g. eliminate spam) and a heap of world knowledge, and it may well arrive at the solution of eliminating one of the root causes of spam, i.e. humans. Coming up with such plans has been standard fare for planning systems since the 1980s, long before the current generation of Deep Learning (although those early systems could not cope with the scale of today's common-sense knowledge bases). The idea of solving a problem by reducing it to (hopefully simpler) subproblems is the basic computation mechanism of PROLOG, one of the classic AI languages. A toy version of that reduction is sketched below.
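Here is that toy version: a propositional backward chainer in Python (PROLOG's strategy, minus unification and variable bindings). The rules encode Bob's spam example, and everything in them is invented for illustration.

```python
# Toy backward chaining over propositional rules: to prove a goal, reduce it
# to the subgoals of some rule whose head matches - PROLOG's strategy, here
# without variables/unification to keep the sketch short.

RULES = {  # head: list of alternative bodies (each body is a list of subgoals)
    "no_spam":     [["no_spammers"], ["perfect_filter"]],
    "no_spammers": [["no_humans"]],          # the worrying reduction
    "no_humans":   [["eliminate_humans"]],
}
FACTS = {"eliminate_humans"}  # primitive actions available to the planner

def prove(goal, depth=0):
    print("  " * depth + "goal:", goal)
    if goal in FACTS:
        return True
    for body in RULES.get(goal, []):
        if all(prove(sub, depth + 1) for sub in body):
            return True
    return False

print(prove("no_spam"))  # True, via no_spammers -> no_humans -> eliminate_humans
```

Nothing in this loop is exotic or conscious; the unsettling plan falls out of plain goal reduction plus an unfortunate knowledge base.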
I also have the impression that Steve implicitly assumes that consciousness requires an architecture similar to the human brain - massively parallel, and possibly even using processes like those in artificial neural networks. But that is, while not impossible, another unstated and quite far-fetched assumption. The Church-Turing Thesis (https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) gives us good reasons to believe that all computation paradigms are, at least in principle, equally powerful, and (in its extended form) that all reasonable encodings result in similar (up to a polynomial factor ;-) computational complexity (that last qualification may give me away as a theoretical computer scientist ;-). So it is quite plausible that a neural network is not the only way to consciousness. Moreover, we already have quite massive parallel processing engines in our GPUs, which are increasingly misused (or re-purposed) as general compute engines. In the case of humans, the brain first developed to react to complex external stimuli, so its original purpose was pattern recognition and reaction, for which neural networks are excellent. Evolution always works with the material at hand, which is why our way to consciousness led via a biological neural net. But it would take quite an argument to claim that this is the only way to consciousness.
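The standard way to make that equivalence concrete is simulation: any paradigm worth the name can emulate the others with modest overhead. As a sketch, here is a complete Turing machine interpreter in a few lines of sequential Python, run on a toy machine (of my own invention) that flips the bits of its input:

```python
# A sequential Python program simulating an arbitrary Turing machine - the
# textbook illustration that computation paradigms can emulate each other
# (here with only constant overhead per simulated step).

def run_tm(delta, state, tape, halt="H"):
    """delta maps (state, symbol) -> (new_state, new_symbol, move in {-1,+1})."""
    tape, pos = dict(enumerate(tape)), 0
    while state != halt:
        state, tape[pos], move = delta[(state, tape.get(pos, "_"))]
        pos += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Toy machine: walk right, flipping 0 <-> 1, halt on blank ("_").
FLIP = {
    ("S", "0"): ("S", "1", +1),
    ("S", "1"): ("S", "0", +1),
    ("S", "_"): ("H", "_", +1),
}
print(run_tm(FLIP, "S", "10011"))  # -> 01100
```

If a dozen lines of ordinary sequential code can host any Turing machine, insisting that consciousness can only run on brain-like hardware needs a substantive argument, not just intuition.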
I'd also argue that "consciousness" and "self-awareness" are red herrings. Neither of the two is a feature that is either designed into the system or not. Indeed, I would argue that both are emergent properties along a spectrum. The argument "I didn't build it to be conscious, therefore it's not conscious" is simply invalid. I never intend to hit my thumb with a hammer, and it still happens. I'd argue that the only legitimate way to gauge consciousness is by observing the system from the outside, as in the Turing test (https://en.wikipedia.org/wiki/Turing_test). If we cannot distinguish a system from a conscious being, the only valid conclusion is that the system is conscious.
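To make the "from the outside" point concrete, here is a sketch of the imitation-game setup in Python. All interfaces (judge.ask, player.reply, judge.pick_machine) are hypothetical, invented for this sketch; the point is only that the judge's verdict can depend on nothing but the transcripts.

```python
# Sketch of the imitation game: the judge talks to both players over
# identical text channels and must decide from the transcripts alone.
# The judge/player interfaces are hypothetical, made up for this sketch.
import random

def imitation_game(judge, machine, human, n_questions=5):
    players = [machine, human]
    random.shuffle(players)                    # blind the judge to identities
    transcripts = [[], []]
    for q in range(n_questions):
        for i, player in enumerate(players):
            question = judge.ask(q, transcripts[i])
            transcripts[i].append((question, player.reply(question)))
    guess = judge.pick_machine(transcripts)    # index 0 or 1, from text alone
    return players[guess] is machine           # True iff the machine was caught
```

There is no hook in this protocol through which the judge could inspect the players' internals, and that is exactly the situation we are in with other minds.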
The current wave of AI is mostly driven by Deep Learning, typically complex neural architectures trained by error back-propagation. The neural architecture, with encoders, convolution layers, and evaluator/decision networks, is usually designed by hand, with a lot of tweaking and trial-and-error. But we already apply genetic algorithms and other optimisation techniques to improve AI systems - in particular, we have employed GAs to improve search heuristics for automated theorem provers. I see no reason why we should not, in the near future, use evolutionary algorithms to improve neural architectures as well (a sketch of the idea follows below). Assuming that consciousness really lies on a spectrum, and that it offers an advantage for problem solving, I see no reason why it should not emerge from such processes. Whether that happens in the near future is uncertain, and (depending on the definition of "near") probably unlikely. But there is certainly no deliberate design decision for consciousness required to produce it.
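For concreteness, here is a minimal genetic algorithm over architectures in Python. A genome is just a list of layer widths, and evaluate is a placeholder fitness function; in a real system it would train the candidate network briefly and return its validation score.

```python
# Sketch of evolving neural architectures with a genetic algorithm.
# A genome is a list of layer widths; `evaluate` is a hypothetical stand-in
# for "train this architecture briefly, return validation accuracy".
import random

def evaluate(genome):                 # placeholder fitness, NOT a real trainer
    return -abs(sum(genome) - 300)    # pretend ~300 total units is optimal

def mutate(genome):
    g = genome[:]
    i = random.randrange(len(g))
    g[i] = max(1, g[i] + random.randint(-32, 32))   # perturb one layer width
    return g

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

population = [[random.randint(16, 256) for _ in range(3)] for _ in range(20)]
for generation in range(50):
    population.sort(key=evaluate, reverse=True)     # fittest first
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children                 # elitism + offspring
print(max(population, key=evaluate))  # best architecture found (layer widths)
```

Note that nothing in the loop knows what a "good" architecture looks like; whatever properties help the fitness score get selected for, whether we named them in advance or not.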
I'm just back from the 2nd Conference on Artificial Intelligence and Theorem Proving (http://aitp-conference.org/2017/), and one of the pervasive topics was the integration of machine learning (in particular deep learning) and symbolic reasoning (usually in the guise of theorem proving, though many current theorem provers are general reasoning engines and can provide e.g. question answering services). It is not much of a stretch to assume that such hybrid architectures, with a self-optimisation module, might achieve significant progress towards general AI. A rough sketch of such a hybrid loop is below.
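Roughly, such a hybrid could look like the following given-clause loop, where a learned model ranks clauses for a saturation-style prover. The score/model.predict interface, infer, and the clause representation are hypothetical stand-ins for this sketch, not any particular prover's API.

```python
# Sketch of a learning-guided saturation loop: a symbolic prover performs
# the inferences while a learned model decides which clause to process next.
# `model.predict` and `infer` are hypothetical stand-ins, not a real API.

def score(clause, model):
    return model.predict(clause)      # learned estimate of usefulness

def prove(axioms, goal, model, infer, limit=10000):
    unprocessed, processed = list(axioms), []
    for _ in range(limit):
        if not unprocessed:
            return None                               # saturated, no proof
        unprocessed.sort(key=lambda c: score(c, model))
        given = unprocessed.pop()                     # most promising clause
        if given == goal:                             # (real provers check for
            return processed + [given]                #  the empty clause)
        processed.append(given)
        unprocessed.extend(infer(given, processed))   # symbolic inference step
    return None
```

Add a module that retrains the model on the prover's own successful and failed proof attempts, and you have exactly the kind of self-optimising hybrid that was being discussed.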