Why Asimov's Laws of Robotics Don't Work - Computerphile
I watched it, and its two main arguments were that the Laws are a device from science fiction and that they are too poorly defined to implement.
I'm not impressed by the argument that they come from SF, but the argument that they are poorly defined is much more important. It shows how much we take for granted about our knowledge of ourselves and our surroundings -- all of which would have to be encoded in an AI system in some usable form. Isaac Asimov himself half-conceded this: many of his story ideas came from ambiguities and conflicts in the Three Laws.
Even if they are not very useful as specifications, they are good overall guidelines, and they can be generalized into Three Laws of Tool Design:

1. A tool must not be unsafe to use.
2. A tool must perform its function, unless doing so would harm its user.
3. A tool must remain intact during use, unless its destruction is required by its function or by its user's safety.
I think that Asimov's laws are often mentioned because they are simple and comprehensive -- and because nobody seems to have thought of good alternatives.
Asimov devised them because he was tired of stories in which robots destroy their creators, with the implication that we were never meant to build such machines. He reasoned that robots, like the rest of our technology, ought to have safety mechanisms to keep that from happening.
I note an example of the problems one runs into when working out these laws in detail: weapons. A weapon violates the First and Second Laws with respect to its targets, though it must follow those laws with respect to its users. Then there is what might be called the "bomb paradox". A bomb's function is to destroy itself, seemingly violating the Third Law. Yet it must avoid self-destruction until it reaches its target (the Third Law), and it must destroy itself once there (the Second Law). The paradox dissolves because the Third Law explicitly yields to the first two.
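To make that precedence explicit, here is a minimal sketch in Python (with invented names; the Laws are prose, not code) that treats the laws as an ordered list of vetoes and resolves the bomb paradox exactly as described above:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_user: bool     # First Law concern: would this harm the user/operator?
    ordered: bool        # Second Law concern: was this action ordered?
    destroys_self: bool  # Third Law concern: does this destroy the device?

def permitted(act: Action) -> bool:
    """Check the laws in priority order; lower-numbered laws win conflicts."""
    # First Law: never harm the user, regardless of orders.
    if act.harms_user:
        return False
    # Third Law: avoid self-destruction -- but only where that does not
    # conflict with the Second Law, so an explicit order overrides the veto.
    if act.destroys_self and not act.ordered:
        return False
    return True

# A bomb in transit: premature detonation is vetoed by the Third Law.
print(permitted(Action(harms_user=False, ordered=False, destroys_self=True)))  # False
# A bomb at its target: the detonation order (Second Law) overrides the Third.
print(permitted(Action(harms_user=False, ordered=True, destroys_self=True)))   # True
# A detonation that would harm the user: the First Law overrides everything.
print(permitted(Action(harms_user=True, ordered=True, destroys_self=True)))    # False
```

The design choice here is simply priority ordering: each law may veto an action, and a higher-priority law's demands disable a lower-priority law's veto.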
I conclude with Isaac Asimov's reaction to a scene in "2001: A Space Odyssey" in which HAL 9000 kills several crew members. Asimov's reaction: "They're breaking the First Law! They're breaking the First Law!" Someone calmed him down with "Isaac, why don't you strike them with lightning?"