- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I had to state them because they seem so flawless – yet in Asimov's stories the robots became intelligent enough to derive a fourth law, or rather the Zeroth Law. As Asimov wrote it, the law reads:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
As I was musing, I felt that humans would probably be like that even thousands of years from now, if the species is not extinct by then. The ones with advanced technologies would look down upon those without, and being more attached to fellow human beings would be a sign of backwardness! You can see that in the present world itself – where do you find the concept of families and relationships being held more important, in the developed or the developing economies?
In case you are wondering why I am writing all this today: these thoughts had been sitting in draft on this blog for a long time and needed to be published. The trigger was an advertisement I saw a few minutes back – "Every child should have a computer to make this a better world". I wonder if that's the precursor of "Every person should have a robot to take care of him/her!".