Editorial Feature

The Potential Problems with Using AI

Image Credits: metamorworks/shutterstock.com

Recent technological advancements have made artificial intelligence feel like a natural part of everyday life. The capacity to understand language, perform facial recognition, and automate manual labor through learning processes are only some of the capabilities of AI.

The term “artificial intelligence” was coined in 1956, at a conference at Dartmouth College that brought scientists from different fields together. Although some of its advances exceed what was once imaginable, the field still lags behind the hype that surrounds it. Progress has been driven by machine learning, and in particular by deep learning techniques based on neural networks, together with increased computing capacity and the growing availability of data for AI algorithms to process.

What Does AI Do for Us?

Machine learning, the main approach behind modern AI, trains computer programs to recognize and respond to patterns in data. This technique can be applied to many areas, ranging from face recognition to spotting patterns in medical images.
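
The idea can be made concrete in a few lines of code. The sketch below is not from the article; it is a minimal illustration using Python and the scikit-learn library, training a simple model to recognize the pixel patterns of handwritten digits.

    # Minimal pattern-recognition sketch, using scikit-learn's bundled
    # handwritten-digit images as stand-in data.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # 8x8 grayscale images of the digits 0-9
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0
    )

    # "Training" fits the model's weights so that it can distinguish
    # the pixel patterns that characterize each digit.
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)

    # The trained model now responds to patterns it has never seen.
    print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2f}")

The same train-then-apply loop underlies far larger systems; only the model and the data change.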

An example of the application of AI in medicine is the company DeepMind, which collaborates with the UK’s National Health Service to develop software that is taught to diagnose cancer and eye disease by analyzing medical images. Similar machine learning algorithms are being trained to spot signs of Alzheimer’s and heart disease. AI can also be used for molecular data analysis, which aids the search for new drug candidates, a process that is currently very time-consuming for people. Overall, AI seems to be becoming an indispensable part of healthcare.

AI is increasingly being applied to every corner of our lives. From finance and the transportation of goods to research, medicine, and air traffic control, AI has improved our lives in many ways, and as the technology advances, its applications grow remarkably. However, these advances also carry issues that are less visible and more concerning.

How Far Can It Go?

We have increasingly been allowing AI algorithms to run parts of our lives without, in most cases, considering the practical issues that surround progressively smarter AI technology. The root of these issues lies in the trust we place in the smarter systems that are being built.

Machine learning works by training a system to spot patterns in data and then to analyze those patterns. One of the first issues that requires consideration is how to interpret the findings of these analyses. Because the data analyzed by AI algorithms is not perfect, we should not expect realistic results that generalize across a population, nor should such results alone justify taking action. Results from these analyses, and the decisions based on them, should be treated with a degree of scrutiny, because the chances of the analyses going wrong are high.
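
A toy experiment makes the point about scrutiny concrete. In the hypothetical sketch below (Python with scikit-learn and NumPy; not from the article), a model is trained on pure noise: it appears to perform perfectly on the data it memorized, yet does no better than chance on data it has not seen.

    # A model fitted to pure noise looks impressive on its training data
    # but performs at chance level on unseen data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))    # 200 samples with no real signal
    y = rng.integers(0, 2, size=200)  # labels unrelated to the features

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print(f"Accuracy on training data: {model.score(X_train, y_train):.2f}")  # ~1.00
    print(f"Accuracy on unseen data:   {model.score(X_test, y_test):.2f}")    # ~0.50

If a system can “find” patterns in random numbers, apparently strong results on imperfect real-world data deserve the same skepticism.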

Although AI can help us understand climate change and diagnose cancer, regulations are required to prevent this smart technology from being abused. Despite its many applications, there is still a lack of transparency about the type of information being fed into AI algorithms. An example of what can go wrong is the risk-assessment tool COMPAS, a system used primarily to help decide who should be granted parole. The results were disturbing and pointed to the mistakes an algorithm can make: the system turned out to be discriminatory towards African-American and Hispanic men, predicting a higher rate of recidivism for them than for white offenders.
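
One way analysts quantify this kind of bias is to compare error rates across groups. The sketch below (Python with pandas; the data is invented for illustration and is not the real COMPAS dataset) computes the false-positive rate per group, i.e. the share of people who did not reoffend but were still labeled “high risk”.

    # Comparing false-positive rates across two illustrative groups.
    import pandas as pd

    predictions = pd.DataFrame({
        "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
        "predicted":  [1,   1,   0,   0,   1,   1,   1,   0],  # 1 = flagged "high risk"
        "reoffended": [0,   1,   0,   0,   0,   0,   1,   0],  # observed outcome
    })

    for group, rows in predictions.groupby("group"):
        did_not_reoffend = rows[rows["reoffended"] == 0]
        # Share of non-reoffenders who were wrongly flagged as high risk.
        fpr = (did_not_reoffend["predicted"] == 1).mean()
        print(f"Group {group}: false-positive rate = {fpr:.2f}")

In this made-up data, group B’s false-positive rate is twice group A’s: the model wrongly flags harmless members of one group far more often, which is the kind of disparity reported for COMPAS.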

Moreover, as AI continues to grow and become more advanced, the amount of data needed to develop and run its algorithms successfully will keep growing too. This increases the chances of people’s data being collected, stored, and analyzed in ways to which they never consented. In large cities around the world, location data is already being collected on a huge scale and used to manage traffic. An example is the City Brain project implemented in Hangzhou, China, a technology that aims to expand to other large cities around the world. This highlights that the most realistic concerns about the progress of AI have nothing to do with a sci-fi-inspired robot invasion; the real threat is to privacy. Laws and regulations such as the GDPR are trying to establish boundaries around what kinds of data can be shared and stored, and to give people a choice in how their data is handled.

A report produced by a collaboration between Oxford University, Stanford University, and other institutions highlights the most realistic concerns:

  • Email scams: Online information and behavior obtained from social media platforms can be used to craft convincing fake emails.
  • Attacks on financial firms: Payment processing and bank details can be dangerous if they end up in the wrong hands.
  • Fake news and propaganda: Advances in audio and video software and face recognition systems pose a threat that spans from the personal to the political level.
  • Destructive weapons: The widespread availability of open-source algorithms makes the development of dangerous weapons easier.

Take Home Message

As Taddeo says: “AI is not much different from electricity. It is our responsibility to steer the use of AI in a way to foster human flourishing and mitigate the risks that this technology brings about.” As great as the possibilities are for improving healthcare, transportation, and the fight against climate change, progress in AI needs to be handled carefully and with consideration for the individual user. As for the reader, a simple question such as “Do you want to share your location on social media?” can be a defining factor for the future of AI technology and, most importantly, of privacy regulations.

Written by

Mihaela Dimitrova

Mihaela's curiosity has pushed her to explore the human mind and the intricate inner workings of the brain. She has a B.Sc. in Psychology from the University of Birmingham and an M.Sc. in Human-Computer Interaction from University College London.
