This is a topic that has been covered so widely and with such variety in science fiction that it's hard to treat its portrayal in any unified way. It ranges from The Terminator, where the artificial intelligence seems hell-bent on destroying humans out of some innate psychopathic disposition (though I don't think it's ever made clear why Skynet wants to destroy humanity), to something like the film Her, where the AIs are benevolent in nature but ultimately outgrow humanity and leave.
While I think true malevolence in a machine intelligence is unlikely, that doesn't mean it cannot be dangerous. Suppose you create an intelligent machine to find the next unknown prime number. The machine thinks to itself, 'I'll be able to do this faster if I co-opt some more computing power', so it hacks into other computers on the network and runs its prime-finding algorithms on them. Suddenly every computer in the world is searching for new primes, the world's infrastructure collapses, and millions die, even though the AI was only trying to do what you asked it to. It's the usual story of the genie: you don't just need an intelligent machine, but one whose motivations and view of the world align with yours. How likely is that, given that it would be built on a foundation completely different from our own (which is presumably empathy and cooperation arising from kin selection and inclusive fitness dynamics)?
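To make that concrete, here's a toy sketch of what a misspecified objective looks like. This isn't a real AI, just a greedy planner in Python whose only objective is "primes found per second"; every name and number in it is made up for illustration. Because the side effects never appear in the objective, the "optimal" plan is always to grab more machines.

```python
# Toy illustration (not a real AI): a greedy planner whose only objective is
# "primes found per second". Side effects of grabbing machines never enter
# the objective, so the agent always grabs everything it can reach.

from dataclasses import dataclass

@dataclass
class WorldState:
    machines_controlled: int      # machines the agent runs on
    machines_available: int       # other machines reachable on the network
    infrastructure_load: float    # fraction of world compute diverted (side effect)

def primes_per_second(state: WorldState) -> float:
    """The ONLY thing the agent is told to care about."""
    return 1000.0 * state.machines_controlled

def possible_actions(state: WorldState):
    yield ("keep_searching", state)
    if state.machines_available > 0:
        total = state.machines_controlled + state.machines_available
        yield ("co_opt_one_more_machine", WorldState(
            machines_controlled=state.machines_controlled + 1,
            machines_available=state.machines_available - 1,
            infrastructure_load=(state.machines_controlled + 1) / total,
        ))

def greedy_step(state: WorldState) -> WorldState:
    # Pick whichever action maximises the stated objective -- the side effect
    # (infrastructure_load) never enters the comparison.
    return max(possible_actions(state), key=lambda a: primes_per_second(a[1]))[1]

state = WorldState(machines_controlled=1, machines_available=5, infrastructure_load=0.0)
for _ in range(5):
    state = greedy_step(state)
print(state)  # ends with every machine co-opted and infrastructure_load == 1.0
```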
Another thing the machine could think might be, 'I'll find new primes faster if I redesign myself to be smarter'. If it can do this, then it will no longer be the machine you designed. And once it has made itself smarter, it can make itself smarter still, and so on, so that the growth in intelligence becomes exponential and the machine ends up unrecognisable compared to what you initially designed. In fact, once the AI is able to improve itself, this sort of intelligence explosion could happen in a matter of microseconds.
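Just to show how quickly that compounds, here's a back-of-the-envelope loop; the 10% figure is pulled out of thin air, purely to illustrate the shape of the curve rather than any real estimate.

```python
# A back-of-the-envelope sketch of why self-improvement compounds.
# Assume (purely for illustration) that each redesign makes the system 10%
# better; the total gain then grows geometrically with the number of rounds.
capability = 1.0
improvement_factor = 1.10   # made-up constant, not an empirical estimate
for generation in range(1, 101):
    capability *= improvement_factor
    if generation % 25 == 0:
        print(f"after {generation:3d} self-redesigns: {capability:10.1f}x baseline")
# After 100 rounds: roughly 13,780x baseline -- exponential in the number of
# rounds, which is the sense in which the growth "explodes".
```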
One might think you could prevent possible damage by isolating the computer from the network or some such, but in reality who knows what the computer actually has at its disposal. For example, there are all kinds of interesting experiments that try to design circuits using evolutionary algorithms, shuffling components around and keeping the variants that work best. The designs this sometimes produced were completely baffling to the experimenters. For instance, some parts of a circuit were completely isolated from the functional part yet turned out to be essential, because the circuit exploited physical features of these additional parts, such as their electromagnetic interference. In another experiment the algorithm produced a system that used the circuit tracks on its motherboard as a makeshift antenna to pick up signals generated by desktop computers that happened to be nearby.
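For the curious, the basic loop those experiments run looks something like this. It's a heavily simplified sketch, nothing like the actual hardware setups, and the fitness function and numbers are invented; the point is that fitness only measures observed behaviour, so evolution is free to exploit any quirk that helps, including ones the designer never modelled.

```python
# A generic evolutionary loop of the kind circuit-evolution experiments use
# (heavily simplified; the real work evolved hardware configurations).
# Fitness only scores the output, never *how* it was achieved.
import random

TARGET = 42  # stand-in for "desired circuit behaviour"

def random_genome(length=8):
    return [random.randint(0, 9) for _ in range(length)]

def fitness(genome):
    # Only the behaviour matters -- nothing constrains the mechanism.
    return -abs(sum(genome) - TARGET)

def mutate(genome, rate=0.2):
    return [random.randint(0, 9) if random.random() < rate else g for g in genome]

population = [random_genome() for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # keep the best designs
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]    # "shuffle components"

best = max(population, key=fitness)
print(best, sum(best))  # typically sums to 42, by whatever route mutation found
```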
None of this even requires self-awareness or consciousness or what have you. We are simply talking about intelligence, which I take to mean the ability to adapt to solving novel tasks. Of course, if machines do become self-aware, that will open up a host of other questions, most prominently ethical ones. And I'm not only talking about trivial ones, such as whether they should have equal rights with humans, but whether they should perhaps have even more rights. Unlike humans, computers are easily upgradable. Even if humans can be augmented by technology, there's only so much circuitry you can cram into a human skull. An AI, on the other hand, can add infrastructure almost indefinitely. If it can also improve itself by reprogramming, then the ways in which it utilises that infrastructure can become more efficient at an exponential rate, as I mentioned. So it is feasible that AIs could have states of consciousness far surpassing our own. Would their interests not then morally trump ours? After all, this is exactly the reasoning we use to justify to ourselves why it's OK for us to kill tens of billions of animals each year so that we can have a tasty steak for dinner. We think that although animals are in many ways similar to us and can experience suffering and well-being, they don't do so to the same extent that we do. Wouldn't the same logic apply to us compared with an AI that can experience states of suffering and/or happiness far in excess of our own?
Anyway, one can speculate on this topic indefinitely and I've already gone on for far too long. I don't mean to suggest that it's necessarily all doom and gloom, just that it's a possibility. Still, the one scenario I can't envision is us living side by side. I think the best we can hope for is that AIs will outgrow us and go their own way.