In the past few decades, robotic technology has developed at a remarkable rate, particularly with the growth of artificial intelligence. Current uses of A.I. have been largely beneficial, with examples including voice-operated smartphones and early automated driving systems. However, as this technology progresses in fields such as machine learning and autonomy, so must our awareness of possible dangers and unexpected outcomes. As a society, we must prepare for these risks or be forced to limit future development.
Robot/A.I. sentience and legal issues
Many notable and accomplished scientists and engineers have expressed concern over the rapid development of A.I., believing it could lead to a loss of control or even robot sentience. The late physicist Stephen Hawking claimed that a self-improving A.I. could indirectly threaten humanity’s survival if its aims conflicted with ours. Elon Musk has made similar claims: without appropriate safety measures introduced by governments and global society, A.I. could pose a serious danger to humanity, especially when used for military purposes. A.I. is currently advancing faster than most expected, producing much more intelligent and lifelike robots, e.g. “Sophia”, who was recently granted citizenship in Saudi Arabia. As robots become more conscious, the issue of robot rights will arise. McGinn, alongside many other philosophers, states that consciousness is highly subjective and not yet fully defined by the physical sciences. If we cannot define our own consciousness, we will struggle even more to identify it in robots or A.I. Thus, we cannot accurately decide whether robots deserve certain rights or punishments, or at what point their usage becomes akin to slavery.
Robots with A.I. are highly complex machines that require system training in order to detect the correct patterns and carry out their functions. As this training cannot cover every scenario that may occur in the real world, the system can be manipulated or confused, with potentially serious consequences: harm to humans, significant loss of profits in industry, or even the release of sensitive security information. One reported example involved a situation where a ‘woman was sleeping on the floor when her robot vacuum ate her hair, forcing her to call for emergency help’. The central problem with the manipulation or confusion of artificial intelligence is knowing who is responsible for its actions. If a robot were to kill a human, realistically the robot would not face charges. Does liability therefore fall on the company? The manufacturer or designer of the robot? The person who trained the system’s patterns? Or even the person who subverted the internal system for personal gain? This is an ethical problem that must be addressed, by having society implement the necessary restrictions on A.I. development, in order to ensure human safety. This matters because technological advances are leading to a future of ever greater interaction with robotics.
Use of A.I./Robotics for human safety and care
The development of artificial intelligence will present a multitude of advantages for humankind, and ethical reasoning dictates that it should be developed. One major advantage is the protection of life itself in dangerous or hostile environments, and even in less obviously risky settings such as the workplace. Obvious candidates for automation by A.I. are tasks such as haulage, where long hours of sustained focus present clear risks when undertaken by a human. With further advances in A.I. and robotics, more dynamic roles such as firefighting may be assisted or even performed autonomously, doubtless saving many lives. Under Kantian theory (duty ethics), developing A.I. is the correct ethical decision because it offers the opportunity to save lives, which is an ethical norm. Of course, the counter-argument may be made that this will deprive people of their jobs. Although this may be true in the short term, the long-term view is the growth of job sectors built around A.I. and robotics, offering more highly skilled work and benefitting society as a whole, which is desirable from a utilitarian standpoint. One solution to the redundancies created by A.I. would be to regulate the sectors in which it is implemented, allowing time for human staff to retrain and find work elsewhere.
The number of people aged over 65 in the UK increased by 21% over the last 10 years and is forecast to grow by 48.9% over the next two decades, amounting to 4.75 million additional people. This will increase the demand for social care on top of the already stretched services provided for the disabled and elderly. The use of A.I. in this instance would ease the pressure on carers and the NHS, and enable a round-the-clock service for those in need. Some may be uncomfortable at the thought of robots providing impersonal care; however, from a care-ethics standpoint, it would facilitate longer independence and assist with everyday tasks, things which are paramount to those receiving care. A.I. also reduces the demand for, and the costs of, providing such care 24/7. Many everyday tasks can be made safer and less effortful, increasing the time available to spend as one desires. Driverless cars are one example of how everyday life becomes more efficient with the adoption of A.I. technology. Research from the University of Illinois at Urbana-Champaign found that, as well as reducing accident risk and fuel inefficiency, driverless cars could help regulate traffic flow on roads.
In conclusion, there are a variety of important points to consider both for and against the development of A.I. Advances in the technology have clearly shown how A.I. can provide an important opportunity to save lives, by programming machines to work in high-risk environments and to aid disadvantaged members of society. However, recent events exposing problems such as robotic manipulation and confusion show that strict legislation and regulatory measures are needed. These are essential if this technology is to be used safely and effectively in a future where we expect ever more interaction with A.I.