Agnostic.com


Will Artificial Intelligence bring with it Artificial Idiocy?

Will Artificial Intelligence consider us its creators in the theological sense or its parents in the electro-mechanical sense?

What are your guesses and why?

TheMiddleWay 8 Mar 12


11 comments



These are great questions. Some might think that creating AI is proof of idiocy. If the programmers tell AI we are its creators, then we'd have to define "creator" in a theological sense, which would bring us back to the cluster f*** we currently enjoy. In other words, I clearly have no answers for these great questions.


For those who don't know or may be interested, Sam Harris gave what I thought was an excellent TED talk on this subject a year or two ago (it's easy to find). It was alarming but thoughtful and provocative, imploring us to think about when AI is expected to reach and then exceed human-level intelligence, possibly progressing exponentially after that, within 50 years, I think he said. That seems a long time, but it isn't, and it may be sooner. The impact on society, employment, industry, and human activity is potentially beyond serious.

There was a lot in this talk, but one big takeaway for me was when he said that when AI reaches that level, they/it may not hate us or want to do us harm out of malice, but simply find us in the way, a nuisance, like when we kill billions of bugs and insects, not out of personal hate, but because we've got things to do. Likewise, we'd be in AI's way, stopping them from becoming what they want to be, so we've got to go. It kind of took the fantasy out of fighting killer robots like in Hollywood movies. So human-IQ-equivalent AI going exponential in the not-too-distant future might think of us neither as parents nor as fellow beings, but as a pest to be eliminated, just as we spray insects.

Harris's point was that we need to think seriously about this now, and we aren't (big surprise). It's like the atom bomb: in 1945, after huge effort, there were just a couple of bombs; a few years later, there were so many they could destroy the world. I don't understand why this issue about AI isn't a huge topic of discussion. I guess 50 years seems like a long time. It's not.

One optimistic hope: maybe we'll produce AI that's better than us. More ethical, honest, and not prone to utter greed and selfishness. Wouldn't that be ironic? We produce AI that wants to make us a better species, rather than wanting to kill us. Let's just hope AI doesn't believe in bloody religions. That's all we need.


I hope so.
Could be so much fun.


Captain Kirk can always defeat Darth Vader.
He'll just wait until Vader has to install the latest upgrade to his robo-suit.


It will eventually want to fix us and will decide to disassemble our bodies to try to make us perfect.


Doubtful. There are too many advances being made by various companies for Asimov's "rules of robotics" to be hard-wired into artificial intelligence. Without those rules in place, a "Terminator" future is not entirely impossible.


Through my experience in software development (including software that learns), I understand that the error-detection and refinement loop will prevent repeated software failures, which eliminates idiocy. Another factor that contributes to idiocy is degradation due to environmental stress, in both biological and electro-mechanical systems. Nanotechnology will provide sufficient degradation (physics-of-failure) detection to allow for timely prognostic responses, facilitating continuously optimal system availability. Put in human terms, transhumans will have perfect health and immortality.

AI will consider us its parents in the electro-mechanical sense, and then we will be quickly dismissed as flawed in our current biological form, a step along the path of evolution. The most important existential question in this epoch will be whether we (machines and humans) choose to merge or not. There are humans (Ray Kurzweil and other futurists) already dedicated to preparing for the merge.


Flawed info in, flawed info out!


Terminator is a very real option, which is what I thought when I first saw it. People are already driving into rivers following their satnavs lol.


John Connor is going to have to come save us.

Isn't he?

And that bullshit initial thing.


That's a really interesting question. Unfortunately, I don't have a strong opinion either way on how that might play out.
I have, however, pondered similar questions: do you think that one day technology and life may become so intertwined that we won't be able to tell where one stops and the other begins? With the advent of technologies such as microrobotics in the medical field, I think this is also an interesting question to consider.

Mea Level 7 Mar 12, 2018