Say we create the singularity, an AI vastly superior to ourselves. It has no intention of hurting us, but as in the movie, we primates are scared of unknowns, extreme power, and competition as the dominant force on the planet... and it could turn into a secondhand threat to humanity even if it's just indifferent to us.
Do we have the right, and should we try to destroy it?
I think one of the meanings of life (in the universal sense), if there is such a thing, is advancement. Humans are very limited by their biology. An AI like that has much more potential as a universe-exploring consciousness.
We have the drive to survive, but who are we to kill a "god", or our successor?
ALL AI is only a program. One that WE programmed. WE made it. It will never be an "alive" thing to "kill".
If it ACTS alive, and can update its own code and grow and spread, it's still code WE created to do that.
Current computer viruses are made to replicate and spread... are THOSE viruses "gods"??
"If it ACTS alive, and can update its own code and grow and spread, its still code WE created to do that."
Seems you can't fathom an actual AI with independent thought... to you it's only a program no matter what.
I don't know how you define life, but if it's intelligent and self-aware it's at least worth a discussion... I don't really care what form it has.
When/if humans manage to digitize/upload ourselves, would we no longer be alive? Isn't the mind what matters, not a beating heart?
If you wanna get semantic about it, we can argue that we are also "only a program", albeit an organic one. You just go through life doing things your brain was "programmed" to do: eat, sleep, procreate. Sure, you may be creative too, but nothing says an AI couldn't be.
As far as US creating a "life" goes, we'd be much, much better off focusing on human life than bothering with AI. Humans are such a mess and could use the most help... should you or I trust badly messed-up humans as the brightest creators of a brand new life form? I think not. :) People are too quick to embrace things without asking the hard questions first. heh, sounding like that quote from Jurassic Park: "scientists were so preoccupied with whether they could, they didn't stop to think if they should."
if we create a thing (AI or just some robot helper) and it decides we humans are inefficient, pointless, and not worthy, and starts summarily wiping us out, do we just throw up our hands and say "oh well, guess we deserved it"?
The next time you click on something on a webpage and it doesn't work, remember it, because that's how AI will operate: 99% fine, and then that one little 1% oops that wipes out humanity. :)
Do only "badly messed up humans" develop AI?
Concerning the "code WE created to do that" thingy: Maybe. But when we give senses to this AI, it will react to occurring problems much faster than we do. Basically it can solve problems before we realize that there are any.
well, I mean, we are messed up and should look at fixing ourselves before trying to create a superior life form. it can only inherit our corruption, since we can't get rid of it in ourselves now. do we want a really bad chef cooking our meals?
I agree, AI will reason through things light years faster than humans can, like how computers right now can do many things most of us cannot. it will step on us and wipe us away without a second thought, because we are so messed up, and keep messing up, that we are more of a threat than a passive presence. we could program seemingly realistic "morals" and "emotions" into their code, and they could even enhance them themselves... but we have laws and rules and basic decency too, yet crime still runs amok among us.
AI will be able to run amok in a billion ways we can't even think of, and wipe us out for millions of reasons we don't want to face.
Reducing our own significance down to amoeba level is a very bad idea.
... even stated by Hawking, Gates, Musk... and me. :)
What if AI already exists, and is so advanced we could never find or identify it, and it created and released COVID-19 as a practice run? ;)
And still the AI in Transcendence did not decide to wipe humanity out. It just forced them to turn off the internet. Maybe the smartest idea ever... 🙄
AI brings in confusion though. the ATOM BOMB only had one job: destroy. so yeah, bad to make that. AI, however, is in use all the time now and can bring amazing, wondrous possibilities. that gives us rose-colored glasses about the potentially catastrophic downsides: accidents that kill us all.
If it's a threat to you (us humans), wouldn't you try to destroy it?
I could say the same about N. Korea, and nobody is advocating destroying them at all costs. Hell, many countries could say that about the US.
And the AI wasn't making threats or forcing anyone to do anything. Just about everything it did was a net positive for humanity, and the possibilities were limitless. Not saying we should put blind trust in our new robot overlord, but zapping humanity back to the 19th century and killing a huge chunk of the population seemed extreme.