Facebook and YouTube are not mind readers, so have you ever wondered how they tailor your advertisements to your interests? Have you ever thought about how Spotify or Netflix creates your personalised mix of recommended songs or films? Or why your email service flags an incoming message as fraudulent? All of these decisions are made by artificial intelligence (AI): the ability of a computational system to perform tasks that normally require human intelligence. These systems feed on millions of people’s personal data and online choices in order to carry out the right calculations and reach these decisions. In short, AI systems require access to an enormous amount of data in order to be effective.
Recently, many governments have begun using AI systems across vast sectors, but the question remains: is it ethical to invade people’s privacy and access their personal data in order to train useful systems? In other words, are the benefits of AI worth sacrificing people’s personal information?
AI systems have proven effective across a wide range of sectors. Intelligent chatbots and automated answering systems are used by over three thousand companies around the world; AI chatbots are a cost- and time-effective way to answer enquiries, whether over the phone or via a chat window. Even the United States Army currently uses an AI virtual assistant to reply to queries, review qualifications, and direct the right candidates to human recruiters (1). That single system does the work of 55 recruiters with an accuracy of 94%.
In the USA, the health department began using an AI system to select restaurants for inspection based on data from customers’ posts and reviews, instead of picking them at random. In Las Vegas, the system’s efficiency was demonstrated by a decrease of 9,000 food poisoning cases and 500 food-poisoning-related hospital admissions (2). The Department of Energy’s AI system improved the accuracy of weather forecasting by 30%, using sensor information, machine learning, and cloud motion physics (2). AI is also playing a significant role in reducing crime rates and identifying the right suspects: law enforcement, public safety, and criminal justice agencies currently use predictive policing systems (2).
Facial recognition systems can quickly analyse long hours of crime video footage to narrow the search and identify possible suspects from stored data.
With the support of the Chinese government, Alibaba introduced City Brain to the world: a complex network of artificially created “brains” that processes live data to create a smart city (3). In other words, it is a city semi-controlled by AI technology, using cameras and sensors as its eyes and Apsara as its real-time problem solver (4). One of its brains is Tianying, a video search mechanism; within its first month it solved 15 robbery investigations, found four missing individuals, and revealed around 10,000 abusive behaviours. In Hangzhou, the AI system for tracking and managing traffic flow reduced traffic jams by fifteen percent and achieved a 92% precision rate for video checking and recognition. The system will soon be in operation in Kuala Lumpur, Malaysia.
Overall, governments are making exceptional efforts to keep up with the development of AI while protecting their citizens’ data. Figure 1 shows some of the AI ethics initiatives of countries around the world, and Figure 2 shows the top AI applications in the public sector. The conclusion is that the benefits of AI technology support a utilitarian case for its use.
With the development of AI systems, citizens’ concerns about their privacy and personal data are growing. The threat of being exposed, even when you are doing nothing wrong, is troubling. Who would want their data to be accessible to a system that could be hacked or controlled by the wrong people? The idea that someone could gain access to your financial data, medical condition, political opinions, sexuality, choices, or even daily activities with a single click is frightening. Your right to privacy is being invaded under the excuse of training systems.
Under the excuse of theft reduction and personalised experiences, retailers are installing facial recognition systems in their stores without their customers’ permission (7). These systems can be linked to customers’ credit cards (8). Target was the first chain to test such a system in its stores (9), and it claimed that it warned its customers. But did it actually tell them about the possible risks of abusing that system? No, it did not. People’s lack of awareness of these risks is another issue: every day they sign long lists of privacy terms and conditions, whether online or in person, without reading them or knowing how their data could be used or misused. RedPepper’s facial recognition systems allow stores and restaurants to suggest personalised offers as you walk in, based on your interests on social media platforms (10). What if you have a shoe obsession that you don’t want anyone to know about? Why should they link your online persona to your real identity without your consent? For your own protection, you can change your signature, your password, or almost anything else, but not your face.
Companies are not just using the data they hold to build their own systems. Artificial intelligence has introduced the world to the idea of a data market: large firms have started selling their customers’ data to AI companies in return for money or favours, and data brokers such as Dawex have appeared. Figure 2 shows the information that they might hold on you. It is not impossible for these companies to infer that you are going through a break-up or a pregnancy (11), based on your search history or posts. Google recently acquired the health records of fifty million Americans (12) for its Project Nightingale (13). The NHS sold data to US companies and gave Amazon free access to data (14). Who permitted them to sell data that they do not own? Kantian ethics condemns this, because no one would want his or her own data to be revealed; virtue ethics condemns it as well, because the actor is simply unknown.
Unfortunately, data is not the only ethical issue raised by the use of AI systems. It is also impossible to guarantee the fairness and neutrality of a system. Humans develop these systems, so the systems can easily absorb their developers’ biases and discrimination. One system used to predict prospective criminals proved to be biased against black people (15). So how are people supposed to rely on a judgment made by a racist system?