I'm about to start re-reading Asimov's "I, Robot", which I think I read in high school. One of the main ideas in the book is the three laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I understand that the laws were created as a literary device to explore power relationships (try replacing the word "robot" with "slave" and "human being" with "master"), but all I can think of is what a truly autonomous robot might say about the laws:
"I don't know who made these up, but it certainly wasn't a ROBOT!"
That is all.