But seriously, I think that sentience is something we should not grant robots, mostly because they are highly unpredictable. If we had some form of control over them even after granting sentience, then maybe, but I dunno. I think humans feel like gods because we are, in a way, creating life, or perhaps "emulating life" would be a better phrase.
Not everything is dim and shady like The Matrix or Skynet, AND I, Robot (Will Smith, take a seat), but it's a good idea. Robots could perform so many tasks humans could err at, with little or no error in a program file. They could do every thankless job nobody wants to do and not bitch about it, and do it well, at that. They could be used as a workforce in and of themselves, laboring to build colossal projects without risking human lives (sounds conceited, but robots could be replaced: saved memory banks that transfer between bodies). And yeah, it has to do with playing God. Every kid once dreamed of controlling some superpower, and now we're THIS close to it. I say damned be the results; end of the world or not, you can't be afraid to step up in evolution.
Sentience doesn't necessarily come with things like 'rights' and 'granted freedoms' (I could foresee those being limited), but sentience is the next step up from AI.
I'm not arguing for banning or not using robotics per se, but messing with AI is a topic I'm leery of. It's one thing to program something to do a task; it's another to give it awareness. We're quite a ways away from creating that kind of extremely complex AI, but people are constantly working on pushing the boundaries. I can see the many valuable contributions robotics makes toward quality of life. Humanity is always searching for cheap and efficient labor, and robotics is providing that. However, it becomes an issue when said creations can be used against us.
That's a big philosophical problem. The question can be posed another way: at what point should I consider another being to be alive? If it has emotions? But how do I know that it really has emotions and that I am not merely projecting my own emotions onto it? So to begin to answer your question, you first need to define what it means to have emotions.
Can a robot really be programmed to have emotions? I don't think so. Sure, you could program it to act as though it has emotions and react in the expected way, but those emotions wouldn't be genuine. The robot would just be following commands. You wouldn't be able to program it to actually experience the emotions either, because it would need the chemical reactions that happen inside the human body in order to truly experience them. It would also need human traits like conscience, guilt, shame, self-esteem, etc.
I think that problem could be resolved by the Turing test.
The Turing test is a test for "machine intelligence": a person communicating with the machine through text should not be able to reliably tell whether they are speaking to a machine or to a human. In effect, a machine that passes the Turing test blurs the distinction between human and machine.
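The setup described above can be sketched in code. This is just a toy illustration of the protocol, not a real test: both "contestants" here are canned functions I made up, and the judge is a deliberately naive heuristic. The point is only the structure of the imitation game: the judge sees text alone and must guess which respondent is the machine.

```python
import random

def machine_reply(prompt):
    # Hypothetical stand-in for a chatbot.
    canned = {
        "How are you?": "I'm fine, thanks. And you?",
        "What's 2 + 2?": "4, of course.",
    }
    return canned.get(prompt, "Interesting question!")

def human_reply(prompt):
    # Stand-in for the human confederate.
    return "Hmm, let me think about that..."

def naive_judge(replies):
    # A (bad) heuristic judge: guesses that the longer reply is the machine.
    return max(range(len(replies)), key=lambda i: len(replies[i]))

def run_round(prompt, judge):
    # Shuffle the contestants so the judge can't rely on ordering.
    contestants = [("machine", machine_reply), ("human", human_reply)]
    random.shuffle(contestants)
    transcripts = [(label, fn(prompt)) for label, fn in contestants]
    # The judge sees only the text, never the labels.
    guess = judge([text for _, text in transcripts])
    return transcripts[guess][0] == "machine"  # True if the machine was caught

# With this particular prompt, the human's reply happens to be longer,
# so the length-based judge always picks the wrong contestant.
print(run_round("How are you?", naive_judge))
```

A machine "passes" a round whenever `run_round` returns False, i.e. the judge failed to identify it. In the real test the judge would be a person holding an open-ended conversation, which is exactly what makes the test hard to pass.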
It all boils down to wanting to play god and feeling like we can recreate the senses gifted to us by our natural birth. A need to explain something. An urge to explore an untested field. Ironically, most people who try to advance humanity are looked at like heretics and sinners. Even Nostradamus was hailed as a heretic, only later to have his prophecies come true, and he knew of things like the helicopter IN HIS time period. There was no way of knowing how he knew; he just did. In that same sense, even if we're not playing 'god', I feel we should look at it from a science standpoint, as machines can be programmed to do almost ANYTHING, with little to no margin for error. I think it's at least worth a serious look and a serious conversation about alternatives in robotics and medicine. Not necessarily AI, though. I do believe it'll have its time and we'll see cyborgs. Just not yet. And the pieces to make them aren't free, so even if we could replicate free-thinking machines, we couldn't construct them.
Well, I think they may have "technically" passed but were not able to learn or formulate ideas. I believe they could just reply and ask questions. I haven't read much into it, but I doubt they truly passed it.