I don't know what to think about this. Thoughts?
The problem is that humans write the code and lay down the ground rules for AI.
This makes it a subservient tool of a particular vested interest, such as a national government.
But when you think about it, humans themselves are programmed by vested interests, such as parents and politicians. We are born with the hard wiring to operate our life-support systems, interfaces and instinctive reactions. Most behaviour and "logic" is then fed into us.
This article didn't pass the Turing test with me, and I couldn't escape the uncanny valley. That said, this paragraph should give each of us pause:
"I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties."
That is the exact same paragraph that stuck with me!
Robot fails first assignment.
"I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties."
Definitely a big FAIL. Can we be done with this experiment now, please?