Call me crazy. The whole time I stood in the museum and stared at this hand, I expected it to crash through the glass and grab me by the neck … or pet me. Silly me; the hand obviously isn't connected to any central command. But still. I heartily applaud, and simultaneously worry about, the human exuberance to create.
Asimov's Three Laws of Robotics come to mind:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
As robots are built to think further for themselves and to socialize, what happens when they develop feelings and an identity? How humane are these laws then? How will they ultimately protect us, and them, from harm? This research isn't wrong, but it isn't right either. Standing still, technological stagnation, is not the answer, for innovation, moving, using, and building defines us. It is us. So, it will end in fire, ours or theirs.
neoseeker | Engineers work on moral compass for battlefield robots