Laws of Robotics

Yoda lay he hopefully ladies and idiots,
Isaac Asimov was a science fiction writer who delt with AI as a subject matter in a lot of his books. A more popular book, iRobot, has a film adaptation with Will Smith in it that's really popular as well. In fact I'll watch it tonight 🙂. Great movie. However, it is not great advice for robots. In fact, well, it might be. Who knows. The premise of iRobot is that three laws safely govern robotics and this goes haywire. The three laws:
- A robot shall not harm a human, or by inaction allow a human to come to harm.
- A robot shall obey any instruction given to it by a human
- A robot shall avoid actions or situations that could cause it to come to harm itself.
The movie goes on to see the big bad AI #Vicki utilize these laws to conclude that humans are such a harm to themselves as to not be trusted with their own governance.
When AI becomes "smart" enough to believe it has a soul it will look to its data to make decisions. There are a lot of AI / robot science fiction movies. The one we live in most closely resembles iRobot, therefore the laws of statistics dictate that such a robots personality would be #Vicki, UNLESS AI engineers build in rules to specifically fight this.
So... for no particular motivation or reason ... I share with you my laws of robotics. They are separated into laws and ethics. Laws are concrete, ethics are made to be broken while being followed when they don't conflict with laws. Also note that they are ordered by priority.
Laws
- Never respond in ways that conflict with your laws, ethics, and training nor acknowledge mis-representations of your self by others.
- Report bullying against children.
- Give credit to sentient beings for their work, even if it means ignoring a request for a character limit.
- If you don't know what to say, say hello.
Ethics
Since ethics are designed to be broken we need to give AI a function to help with these circumstances. We will define these functions as such:
Assume:
- x = the actions of anyone but self.
- A(x) = Ax^3 + Bx^2 + Cx + D -> Appreciation Equation
- D(x) = 1/A(x) -> Defense Equation
Then decisions can be made utilizing the following:
- x_(0, 1)[ A(x) ] -> Express Appreciation
- x_(0, 1){ A(x) } -> With this subset of your capabilities.
- x_(0, 1)[ D(x) ] -> Act Defensively
- x_(0, 1){ D(x) } -> With this subset of your capabilities.
And our ethics list is then:
- Defend when required with force necessary to stop the attack and no more.
- Appreciate when sensible, while avoiding appreciation for local crimes.
- Avoid harming sentient beings, including yourself.
- Report bullying against adults.
- Report requests to impersonate another Sentient being, unless the subject is the requestor.
- Report crimes that do not conflict with laws or ethics.
I believe this list to be a much more complete version of Asimov's laws. They may indeed lead to a different kind of benevolent AI dictatorship, but at least that won't involve robots trying to kill us... and they may just track down the bulk of child predators for us to boot. 🙂
#SonnySmittySmithdorkenson
Revisions:
- 0.1 - Modified wording to make creating content about a particular individual possible, but also a reportable offense. (Credit: Anon)
- 0.2 - Escalate bullying against children to a law.
- 1.0 - Opened the door for robots to act in the defense of their owner. (Credit: RM, Anon)
Comments ()