The disagreement here is largely philosophical, IMO. What does consciousness mean? Philosophers have spent centuries trying to pin down ideas like "mind", "intelligence", "consciousness" and "thought". We don't know that other things have "consciousness" (let's not go there with animal mirror tests), so we are left to project subjective phenomenology onto other minds (the hard problem). Bear with me here. Don't roll your eyes just yet.
This topic comes up largely in sci-fi scenarios about the future, where "consciousness" really means intent, agency and desire. I see these assumptions as basically teleological: the human is goal-directed, which is problematic from a scientific perspective. For some reason humans tend to project these properties onto other things; we anthropomorphize everything from animals to rocks to cartoon characters. The idea is basically that if some thing were created, it could become dangerous to us all once it became clever enough, "because it wants to be". "Wants" being the operative word. This isn't just confined to Hollywood androids; it also takes the form of Hollywood aliens.
The worry is that machines will pose a biological existential threat to us. I'm not sure why this would be the case. What would be their aim or end goal? I get what humans want: food, shelter, reproduction, etc. What does a machine want, and why would that conflict with human needs? I just don't see a machine doing something that it hasn't been programmed to do. Human behavior is largely emergent from our biological interaction with our environment. I think futurists largely ignore the biological basis of our behavior, as though consciousness were some kind of detached computational process. The embodiment paradigm has largely thrown a spanner in the works of the computational model of mind. My contention is that consciousness is largely an emergent property of biology (what else could it be?), so to replicate "consciousness", surely you need to replicate the underlying physiology? We're a loooooong way off from that currently.
"Could we intentionally design an 'intelligence' that destroys us?" I guess so. "Could we inadvertently design an intelligence with a bug that could wipe us out?" I guess, but we would have to actively build such a thing, we have no incentive to do so, and I don't see how a meat-popsicle algorithm is suddenly going to decide of its own accord one day that it should damage a bunch of humans. My viewpoint is that in order for a machine to be a genocidal threat:
1. It doesn't have to be humanoid
2. It doesn't have to be clever
For aeons we have been creating dumb things that do collateral damage, killing others, ourselves and those we love: minefields, punji sticks, tripwires, you name it. AI in modern warfare is just a more efficient incarnation with fewer false positives. I'm still not sure why it would need to be humanoid. We already have drones. Human bodies suck at a lot of the things involved: getting places fast, accuracy, heavy lifting, etc. So if the question is whether we're going to have cyborg terminators indistinguishable from humans infiltrating our ranks anytime soon... well, I look at the current field of robotics and I have to laugh.
I'm also a bit bemused by the infatuation with robotics in the home: the Stepford-wife-styled AI. The way I see assistive domestic tech developing is with bespoke, single-purpose machines. These wouldn't be humanoid, largely because the human body is particularly ill-suited to the tasks we want robots to assist us with. Assisted tasks typically involve repetition or burden, and I don't see why we need to cram a whole plethora of complex components into a single machine to accomplish them. In some ways sci-fi is terrible at predicting future tech. If you had asked somebody in the Middle Ages what transport would look like in 2000 AD, they would probably have drawn you a mechanical horse, or a mechanized carriage driver. In some abstract way they'd have been right, but we now have self-driving cars, autonomous combine harvesters and autonomous mining trucks. There doesn't need to be a human, or a humanoid, involved. Why go to all the effort of articulating a fake human body to accomplish these tasks?
The other avenue of AI development is augmentation (H+). I can see a future there: biotech at the cellular level and prosthetics / implants at the somatic level. But that's not really AI. We've had assistive therapeutic tech since the dawn of time: wheelchairs, eyeglasses, hearing aids, pacemakers, stents, etc., and I don't see why that wouldn't continue. I see microbiology posing a far bigger challenge in the medium to long term than "perhaps one day, with consciousness simulators, could your brain get hacked?". The ethical issue of the next generation will be "who will have access to life-preserving and life-assisting technologies?". People will ultimately want to supplement their existing bodies, not turn into cyborgs. That means life extension, genetic determination, genetic therapy. How will the widening chasm between haves and have-nots play out in the areas of reproduction and longevity?

We will obviously make strides in neural mapping and (neural) simulation, but I'm not sure how that translates meaningfully into "conscious experience". We just don't know enough about the wetware, and I'm not sure we will anytime soon. Things like fMRI are clunky, low-resolution, high-latency approximations of what's going on at the neural level. The human connectome is being mapped, but we really aren't much further than joining some of the dots at this stage. So I'm with Steve and Cara on the BCI / simulation stuff. Even if we make advances in BCI, I'd say they would most likely come in therapeutic contexts (sensory trauma interfacing, metrics, possibly hormonal or chemical regulation & delivery, chronic pain therapy, etc.). We're not going to be teaching ourselves kung fu with a software upgrade.
I think the singularity-futurist stuff is all over the map, so forgive me for not being particularly accommodating towards it. And don't get me started on asking Bill Gates, Stephen Hawking and co. to weigh in on the matter. I respect these guys, but they really need to stick to their areas of expertise.