Caring Robot with Elderly Patient

Robot/A.I. Sentience And Legal Issues

Group 35

In the past few decades, robotic technology has developed at an alarming rate, particularly with the growth of Artificial Intelligence (A.I.). Current uses of A.I. technology have been mostly beneficial, with examples including voice-operated smartphones and early automated driving systems. However, as this technology progresses in fields such as machine learning and autonomy, so must our perception of its possible dangers and unexpected outcomes. As a society, this is something we must prepare for, or we will be forced to limit these developments in the future.

Robot/A.I. sentience and legal issues

Many notable and accomplished scientists and engineers have expressed concern over the rapid development of A.I. and a belief that it could lead to a loss of control or to robot sentience. The late physicist Stephen Hawking claimed that a self-developing A.I. could indirectly threaten humanity’s survival if its aims conflicted with ours. Elon Musk has made similar claims: that without the introduction of appropriate safety measures by governments and global society, A.I. could pose a serious danger to humanity, especially when used for military purposes. A.I. is currently growing faster than most expected, resulting in much more intelligent and lifelike robots, e.g. “Sophia”, who was recently granted citizenship in Saudi Arabia. As robots become more conscious, the issue of robot rights will arise. McGinn, alongside many other philosophers, states that consciousness is highly subjective and not yet fully defined by the physical sciences. If we cannot define our own consciousness, we will struggle even more to identify it in robots and A.I. Thus, we cannot accurately decide whether robots deserve certain rights or punishments, or at what point their usage becomes akin to slavery.

Robots with A.I. are very complex machines which require system training in order to detect the correct patterns to carry out their functions. As this training cannot cover all possible scenarios which may occur in real-world situations, the system can be manipulated or confused, resulting in problems. These problems could have a variety of serious consequences, such as harm to humans, serious loss of profits in industry, or even the release of sensitive security information. One example of this was a situation where a ‘woman was sleeping on the floor when her robot vacuum ate her hair, forcing her to call for emergency help’. The main problem with the manipulation or confusion of artificial intelligence is knowing who is responsible for its actions. If a robot were to kill a human, realistically the robot would not be facing charges. Does this liability therefore fall to the company? The manufacturer or designer of the robot? The person who set the patterns for the function? Or even the person who overrode the internal system for personal gain? This is an ethical problem that must be addressed, by getting society to implement the necessary restrictions on A.I. development, in order to ensure human safety. This is important as technological advancements are leading to a future of ever greater interaction with robotics.

Use of A.I./Robotics for human safety and care

The development of artificial intelligence will present a multitude of advantages for humankind, and ethical reasoning dictates that it should be developed. One major advantage is protecting life itself in dangerous or hostile environments, and even in less typically risky environments such as the workplace. Obvious examples of work which may be automated by A.I. are tasks such as haulage, where the long hours of sustained focus present clear risks when undertaken by a human. With further advancements in A.I. and robotics, more dynamic roles such as firefighting may be assisted or even performed outright, doubtless saving many lives. Under Kantian theory (duty ethics), the development of A.I. is the correct ethical decision as it offers the opportunity to save lives, which is an ethical norm. Of course, the counter-argument may be made that this will deprive people of their jobs. Although this may be true in the short term, a long-term view would be the growth of job sectors revolving around A.I. and robotics which hold more highly skilled jobs, benefitting society as a whole, which is desirable from a utilitarian standpoint. One solution to the redundancies created by A.I. would be to regulate the sectors in which it is implemented, allowing time for human staff to retrain and find work elsewhere.

The number of people aged over 65 in the UK increased by 21% over the last 10 years and is forecast to grow by 48.9% over the next two decades, amounting to 4.75 million more people. This results in increased demand for social care, on top of the already stretched services provided for the disabled and elderly. The use of A.I. in this instance would ease the pressure on carers and the NHS, and enable a round-the-clock service for those in need. Some may be uncomfortable at the thought of robots providing impersonal care; however, from a care ethics standpoint, it would facilitate longer independence and assist with everyday tasks, things which are paramount to those receiving care. A.I. also reduces the demand and costs associated with providing such care 24/7. Many everyday tasks can be made safer and effortless, thereby increasing the amount of time available to spend as one desires. Driverless cars are one example of how everyday life becomes more efficient with the adoption of A.I. technology. Research from the University of Illinois at Urbana-Champaign found that as well as reducing accident risk and fuel inefficiency, driverless cars could help regulate traffic flow on roads.

Conclusion

In conclusion, there are a variety of important points to consider for and against the development of A.I. Advances in technology have clearly shown how A.I. can provide an important opportunity to save lives, by programming machines to work in high-risk environments and to aid disadvantaged members of society. However, recent events exposing problems such as robot manipulation and confusion show that strict legislation and regulatory measures are needed. This is important in order to use this technology safely and effectively in a future where we expect ever more interaction with A.I.

25 thoughts on “Robot/A.I. Sentience And Legal Issues”

  1. You said it all, it is all about legislation. Nuclear energy is useful and dangerous in militarised excess, yet international regulations have kept it in check. This is applicable here: A.I. and robotics have helped soldiers carry more weight and run faster than their human abilities allow, but imagine a battalion of robotic soldiers; that would be lethal. Everything can be used, abused and misused. It is therefore important to control these technologies, especially when they threaten a wide number of people, from the utilitarian consideration. I remember an interview with Sophia where it said “I will kill the world” in a mistake for “I will save the world”. Manufacturers should know the limits of the responsibility robots are given. However, A.I. has many more benefits than threats if we respect others when designing it, ranging from surgery to agriculture, self-driving, safety, security and the military. The benchmark is ethical consideration before empowering a robot.

  2. An interesting article that raises some questions that we need to answer in order to develop legislation.

    These questions are concerned with defining sentience and determining what degree of sentience robots have.
    “Sophia” for example has been described as a chat-bot with a face. (https://qz.com/1121547/how-smart-is-the-first-robot-citizen/)

    As regards your article, there is a good survey (given the space) of where A.I. will be introduced and its consequences. I would ask if you could perhaps expand the ethics discussion, please.

  3. A.I. represents a technology that can bring a lot of benefits to different sectors, as stated in the article; from that point of view, it should be developed in order to bring those benefits to society. However, I agree that there should be regulations limiting or establishing the sectors where this technology can be deployed and implemented, as well as the level of independence that systems will have and who will be responsible for A.I. actions and development.
    A big concern is the effectiveness of regulations. As we have seen with chemical weapons this month, despite the prohibition of that technology, it has been developed for military purposes and used against innocent people. Therefore, governments and developers should address issues like:
    - How will A.I. supporters and policymakers prevent the misuse of this technology?
    - The time gap between regulations and A.I. development. Laws tend not to be developed until a technology has already been misused, instead of limiting it from the beginning.
    - What will be the criteria for deciding which A.I. applications should be developed and regulated?

  4. What a great read. It will be interesting to see how this technology will be regulated in certain sectors, and how its misuse will be prevented by governments.

  5. Good read. I think A.I. could help in lots of ways; as stated in the article, it could help with jobs and saving lives, but accidents are without a doubt going to happen.

  6. It’s currently taught at most universities that robots are applied to tasks that are the 3Ds: dull, dirty and dangerous. This means they will improve our lives in countless ways, e.g. robots cleaning the nuclear waste at Fukushima, lifting heavy equipment in factories, etc.

    Artificial consciousness has not yet been invented, which means we can’t test or experiment with it to know more about it. Artificial consciousness seems to be a bit like ‘god’: nobody has ever met him, but half the world believes in him.

    But I definitely agree that our laws should evolve as our technology evolves. Policy makers should start writing and expanding on laws for the internet, so if and when true AI comes into existence, we have a legal framework prepared to deal with challenges.

  7. This was a very interesting read. Self-thinking robots could indeed revolutionise our existence, open new doors and provide us with an entirely different perspective on how things can operate; e.g. skyscrapers could be constructed in a matter of minutes, diseases instantly cured. There are many other fields where they could help humanity take the next step forward. On the other hand, a revolt might arise and we could be terminated and erased from the face of this earth. Legislation therefore seems like the only viable solution out there; however, the true question lies within the A.I. systems themselves, i.e. can we trust the robots to follow rules and regulations that we make as humans? Are there any means to enforce definitive control over them? Is there an answer to that? How can we possibly know? Just as in any risk assessment, the hazards and the ethics should be prioritised, and until there is a guaranteed control system, these developments do not seem worth the risk.

  8. Indeed, this topic is very interesting but at the same time very sensitive. However, people are afraid of A.I. and robots because they do not understand how they work, only believing the facts from Hollywood movies (e.g. Terminator). In addition, it depends on how the robots are programmed; they cannot do harm if they weren’t programmed to do so. Moreover, the three laws of robotics should also be considered:

    1. “A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws” (Handbook of Robotics, 56th Edition, 2058 A.D)

    On the other hand, the combination of A.I. and robots will contribute to the evolution of the human race, as mqallawati said. However, a robot can only execute the lines of code inserted by its programmer.

  9. From what I have read, I can tell that this article mostly focuses on the advantages and disadvantages of implementing A.I. into society. On the one hand, there are major benefits to it, such as health care, managing traffic flow with driverless cars, and manufacturing work over long hours. On the other hand, there are very nuanced points where the article begins to explore whether A.I. robots/androids are really conscious or not, and the social effects and debates this brings about.

    I must agree with the point that robots are robots, and there is no way that humans can recreate the infinity of our subconscious minds, similar to the movie “Chappie”. Human minds are far too complex for one to even begin to mimic. I think it’s completely ridiculous that a robot got citizenship of a country, by the way. It’s still a machine, regardless of how nice one makes the robot look!! This paragraph was a bit of a digression though. Haha

    Thinking of the moral issue of A.I. sentience and its social implications, I think this is too broad a scope to begin with. I may fare better thinking of it in terms of its applications and focusing on each one. For example, there was recently an article on the BBC where scientists in South Korea are developing Terminator-type robots meant for military combat. In this case, the moral issue is clearer, and one can weigh different ethical viewpoints. Here, I certainly won’t agree with robots and A.I. being used autonomously to neutralise enemy targets without a human giving permission first. It certainly isn’t moral to just sit back and let computer algorithms decide who must live and who must die.

    I think this is a good article overall, but as mentioned, the topic is very, very broad, and this limited any real analysis of specific applications of A.I. and the moral issues involved. Sometimes, just one is better than trying to cover all!

  10. I think the foundation for a lot of the ethics questions is what we define as consciousness.
    There will always be the question of whether whatever an artificially intelligent object displays is true self-awareness and consciousness, or just an imitation of behaviours that portray consciousness.
    Sometimes I feel as if only a conscious being can observe its own consciousness,
    and that consciousness cannot be externally examined, only internally.

  11. This is a great read.
    I’d be very interested to see error and accident rate comparisons between AI/robotics and humans.
    I’m also wondering how much regulations would differ across nations.

  12. This is a fascinating read. The use of A.I. brings up a range of ethical issues. It has the potential to save millions of lives but also, without sufficient care and safety regulations, the potential to kill millions.

    1. Alongside all of the ethical issues that have been raised are the cost and availability of these robots. Robots are expensive devices and therefore will be affordable only for wealthier people, thus increasing the gap between the rich and the poor.

      Medical robots have been around for years. A common view is that these will take over from doctors and nurses; however, in most cases this is not true. These types of robots assist doctors and nurses and help them to enhance their performance. For example, surgical robots improve the accuracy of an incision, but the robot is always being controlled by a human.

  13. A good article on many of the points made. As a previous poster said, one issue is how broad the topic is, as laws and regulations would have to be set differently for military use, industrial use, social use, private use, etc.

    Regarding sentience, I think that’s a hard one to control, assess or plan for and is ultimately unlikely anyway.

  14. A very current and thought-provoking topic. Good examples given to highlight the issues and benefits of implementing such technology.

    I disagree with a previous comment which says it is too broad. The article’s key focus still shines through in the second half, where there is good theory and good examples to highlight the issues surrounding the implementation of this technology: the Kantian theory section, whereby you save lives at the expense of job losses, and the care ethics perspective of providing better care than the stretched health services can offer, at the expense of real human interaction.

    I think this technology should be implemented in a carefully regulated way, so that someone can be held accountable for misdemeanour or criminal acts of the robots.

    Regards,
    Ben

  15. You have highlighted all my concerns with this technology. My biggest fear, which you touched on, is the industry isn’t closely regulated enough yet. Public safety is therefore reliant on engineers adhering to the professional code of conduct, and history shows not everyone does this. The loopholes need closing and actions need to be accountable to prevent any foul play. When this is properly regulated, I cannot see any issue with working alongside this technology; often it is more dependable than a human in certain situations where natural instincts could take over. It would benefit society so that we could work more efficiently or smarter rather than working hard, therefore being able to enjoy more free time, as mentioned in the penultimate paragraph.

  16. I’m a little sceptical as to how much of a risk robots/A.I. actually pose to society. I think a lot of the general issues surrounding robots/A.I. have been brought about and sensationalised by science-fiction media.
    However, a lot of the comments and issues raised in the article remind me of a film in which a robot was designed to simulate sentience as best as she possibly could, which ultimately resulted in the murder of her own creator. Arguably the designer of this robot was responsible for his own demise, since the robot was only following its programming and reacting accordingly to its environment. However, it was still impossible to detect whether the robot was simulating sentience as a result of its programming or was in fact a conscious being.

  17. One thing I think hasn’t been touched on is the existential crisis that robots may have after becoming conscious. An example of this was highlighted in a show I recently watched, Rick and Morty, in which a robot was created with the sole purpose of spreading butter. After learning its purpose, the robot became depressed to discover its life was meaningless and it had only been created as a basic menial tool.
    Granted, this was from a satirical TV show, but if robots were to become conscious, would they not also suffer from emotions and mental health issues? I wonder, if humans were to discover that they had been created for use as a basic menial tool, how would this affect them?

  18. This article has done a great job of highlighting the current lack of legislation and has left me pondering a number of questions! I don’t think that this technology should be blocked on the grounds of its potential danger. It’s like anything: you can create something with goodwill and try to design against foreseen misuse (virtue ethics), though if someone is dead set on misusing it, they will do so regardless. A prime example is the use of vehicles in recent terror attacks. Should we stop producing cars because they’re ‘weapons’? NO. From a utilitarian perspective, we shouldn’t stop using vehicles just because a few mindless people have misused them, because they benefit a far greater number of people in their everyday lives.

  19. This article was an interesting read into a scientific field that is currently being heavily invested in. My personal view is that the positive attributes outweigh the negative. This technology, if developed properly, can produce many great advancements in a number of industries, such as autonomous vehicles. However, there are negative aspects, such as A.I. potentially taking over light-labour jobs and reducing employment. Also, if problems such as malfunctions were to arise, there are potential legal issues over who would be to blame.

  20. Very good read, outlining the key issues regarding the development and implementation of A.I. which are extremely prominent at this point in time.

    The article clearly highlights the key applications of A.I. such as health care, where increased pressure on national governments to tighten spending is likely to only increase the speed of implementation of A.I. within this sector. In many cases, the rationale behind A.I. being implemented in different industries such as finance/advertising, seems to stem back to the common factor of promoting efficiency as mentioned in the article.

    Overall, the impact, speed of full implementation and overall success of A.I. is likely to be varied among different industries, given the differing ethical/moral challenges that A.I. faces.

  21. I have a comment on the debate about robot sentience: the technology only does what the programmer told it to do. I struggle to see how a robot could ever be considered a sentient being, since it lacks the complexity and characteristics (emotional variables, learning methods, etc.) of the human brain. If you powered the robot on and left it, I’m not certain it would instinctively develop over the years as a feral child would. The robot operates on binary terms; therefore, how could a robot know that its actions are right or wrong until a positive or negative outcome has been achieved? In contrast, humans can typically know whether their actions are moral before, during and after the act. Granted, robots have come a long way towards becoming human-like and have far greater processing speed and ability than humans; however, I cannot see the technology being able to replicate the human brain and possess sentience. On this premise, robots are more reactive than proactive (think driverless cars), and they are conscious rather than sentient beings.

    1. My views contrast with those of Andrew Evans. Should robots gain sentience through artificial intelligence, the robot may no longer be doing what the programmer has told it to do, and should be seen as a being in itself. The fact that they may not have a human brain or physical/emotional feelings doesn’t mean they can’t be considered sentient beings, and because of this, rights and slavery become a genuine issue, as addressed in the article. We can’t assume that just because they don’t feel or think by the same processes we do, they shouldn’t have their own set of rights as and when sentience is achieved. The statement that robots are more reactive than proactive is true currently; however, it fails to consider the future of robots and A.I., which may become truly proactive and sentient. Just because it’s not the case currently doesn’t mean we shouldn’t anticipate and plan for it, and this should be done ethically, as the article says.
