(I have to agree with them. AI doesn't have to be all that smart to cause havoc. Just look at what computer viruses do and have done. Technology can be easily misappropriated. I do not think AI by itself will likely do harm, but people who design AI programs to do harm will likely become a problem.)
A survey of scientists and researchers working in artificial intelligence (AI) has found that around a third of them believe it could cause a catastrophe on par with all-out nuclear war.
The survey was given to researchers who had co-authored at least two computational linguistics publications between 2019 and 2022. It aimed to discover industry views on controversial topics surrounding AI and artificial general intelligence (AGI) – the ability of an AI to think like a human – as well as the impact that researchers in the field believe AI will have on society at large. The results are published in a preprint paper that has not yet undergone peer review.
AGI, as the paper notes, is a controversial topic in the field. There are big differences of opinion on whether we are advancing towards it, whether it is something we should be aiming for at all, and what would happen if humanity got there. ...
AI must be programmed with empathy, e.g. a recognition that other life has a right to exist and should be considered in decisions. If not, AI will focus on one species only, ignoring all others.
Considering only one species is the same as being selfish and self-absorbed. We see these traits in humans and call such people psychopaths.
We do not need psychopathic AI; therefore programming empathy is a must.