May 11th, 2022

We love new technology; it is what drives the human race forward to find solutions to life’s biggest problems. Technology like facial recognition streamlines simple tasks like unlocking your phone.


But it also creates space for nefarious uses. In 2016, an alarming realization surfaced: Apple’s iPhone couldn’t recognize users with darker skin tones.

The technology had been tested predominantly on white people and people with light skin tones, excluding consumers who don’t embody those physical traits because the software was simply unable to recognize their faces.

The paper Racial Discrimination in Face Recognition Technology by Alex Najibi delves into why this is a problem, and how software engineers came to create something so exclusionary.

Najibi specifically looks at the work produced by the Gender Shades project.

The project evaluates the accuracy of AI-powered gender classification and facial recognition products used around the world.

Its findings show that Black faces and female faces are far more often misidentified by facial recognition.

The research aims to show that machine learning algorithms can discriminate, and that the biases in their training data generalize into their results.
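The evaluation method behind findings like these can be sketched in a few lines. The numbers below are hypothetical, not Gender Shades data, but the technique is the same: compute accuracy per demographic subgroup rather than relying on a single headline figure, which can hide a large disparity.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute overall and per-subgroup accuracy for a classifier's output.

    records: list of (subgroup, true_label, predicted_label) tuples.
    Returns (overall_accuracy, {subgroup: accuracy}).
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        hits[group] += (truth == pred)
    overall = sum(hits.values()) / sum(totals.values())
    per_group = {g: hits[g] / totals[g] for g in totals}
    return overall, per_group

# Hypothetical audit data: a classifier that looks accurate overall
# but fails disproportionately on one subgroup.
records = (
    [("lighter-skinned male", "male", "male")] * 95
    + [("lighter-skinned male", "male", "female")] * 5
    + [("darker-skinned female", "female", "female")] * 65
    + [("darker-skinned female", "female", "male")] * 35
)

overall, per_group = accuracy_by_group(records)
print(f"overall: {overall:.0%}")  # the headline number that hides the gap
for group, acc in sorted(per_group.items()):
    print(f"{group}: {acc:.0%}")
```

Here the overall accuracy is 80%, yet the two subgroups sit at 95% and 65% — exactly the kind of gap that only disaggregated evaluation exposes.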

This points to a worrying trend: a subsection of people becomes unidentifiable to the software, and in a law-enforcement scenario a misidentification can escalate into a physical confrontation.

The ethical implications are twofold. First, people can be searched against facial recognition databases without their knowledge; second, when someone unidentifiable shows up in a database, it can raise a red flag.

In many situations, a person is unidentifiable simply because facial recognition still struggles to recognize Black and female faces.

This sort of technology cannot just be rolled out en masse without considering how it will impact humanity.

Facial recognition laws

As technology advances, those creating new software that relies on applications like facial recognition need to be aware that laws might change.

This constraint can spur creativity, but it also means that ethics needs to be at the heart of any future project that might use the technology.

The paper Facial recognition law in China highlights the specific problems and the more ominous side of data-capture technologies like facial recognition.

According to the paper, China established itself as a leader in the adoption and application of facial recognition, as well as other AI technologies, during the Covid-19 pandemic.

The technology was used for the prevention and control of the virus: facial recognition could track infected individuals and monitor their movement, while cameras also captured people being temperature-checked.

The public, and even the government, don’t always know who has access to this kind of data.

According to the paper, there are sparse laws governing this data and in the case of China, laws are too vague on who can use and administer technology like facial recognition.

As a point of reference, in 2018 pop superstar Taylor Swift used AI technology during the world tour for her studio album Reputation.

Ticketholders were unaware that cameras set up at many stadium entry points were running facial recognition to screen for alleged stalkers of Swift as well as known criminals.

While it was a safety measure, it calls into question who has a right to the data, and whether concertgoers should have been told they were being monitored in this way.

The authors of Facial recognition law in China also highlight that the well-intentioned appeal of AI for Covid-19 monitoring does not account for unintended consequences.

The paper argues that biometric data is a personal right, but the line blurs because such data is also treated as a matter of national defense.

AI technology, especially facial recognition, is increasingly used to create deepfake content, including pornography that exploits children and public figures.

This content is made possible by the tracking and capture of facial data, and when the worst happens, it strips a person of their dignity.

This is why engineers need to be cautious about relying on facial recognition as a norm: laws can change quickly, forcing programmers and new technologies to abandon their reliance on facial recognition, or altering how it can legally be used.

Why ethics is at a tipping point

Raffaella Ocone wrote the paper Ethics in Engineering and the Role of Responsible Technology.

Ethics in engineering is not new. Codes of conduct, professionalism, and workplace ethics have always been part of the engineering profession, like any other profession.

But this is a limited view. The task of an engineer is to find solutions, so a technology like facial recognition falls squarely within an engineer’s remit.

The lens of ethics is now shifting to what engineers can do, beyond merely staying within the law, to be socially responsible and not infringe on human rights.

Ocone asserts that responsible technologies involve the adoption and employment of engineering technology into a framework of responsibility.

In the same breath, if a specific technology has not yet been developed or emerged, responsibility should be built directly into its development.

Current ethical approaches in engineering generally only consider ethics after something new has been developed, or take it into account for existing technologies.

The problem is that the ethics of emerging technologies can only draw on speculative data about future products and processes, their envisaged use, and their impact.

She distills the problem into two points:

  • For entrenched technology, intervention can come too late.
  • For emerging technologies, the chain of consequences can be unclear, and intervention can be unwelcome because it delays development.

Entrenched technologies require careful consideration, specifically of the improvements they bring to humanity.

These improvements must be weighed within their discipline and then against their wider implications.

If a technology is skewed by racist data sets, it cannot and should not be used, and further development is required.


Finding inspiration in cartoons

Pop culture can’t escape engineering and technology.

This is evident in Rick and Morty, an American animated adult comedy about space-travelling grandfather Rick and his hapless grandson Morty going on adventures together.

The way technology is used in their fictional world asks important questions about the real-life applications of technology.

The paper Ethics and Technology: An Analysis of Rick and Morty explores why taking ethics seriously is important for engineers.

The paper uses concepts in the philosophy of technology to look at how the show brings ontological and ethical assumptions and problems to the table.

The paper puts a microscope on technological advancement and on how technological societies need to adapt to ensure technology doesn’t endanger life.

In the paper, Rick is explored as a technopolitical thinker with a deep understanding of the uses and counter-uses of technology across a variety of domains.

Morty, on the other hand, is the layman with a traditional way of thinking about technology: he is outright scared of it, refuses to use it, or uses it only for personal gain.

Far-Reaching Consequences of Widespread Usage of AI

On top of reports of the prolific use of much-discussed Chinese facial-recognition software and the concept of social credit, news from every part of the world is raising the alarm about what happens when AI-powered decisions go into effect unchecked.

Back in January 2021, the Dutch government was forced to resign over what’s known as the “Childcare benefits scandal”.

Allegedly, 26,000 families were wrongly accused of fraudulently claiming childcare benefits and forced to repay thousands of euros to the tax office, sometimes leading to bankruptcy, unemployment, and personal crises.

The tax authority admitted that as many as 11,000 families were targeted on the basis of their ethnic background and other unsubstantiated signals, which were then fed into an AI algorithm that predicted the likelihood of a person committing fraud.
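The mechanics of such a failure are easy to reproduce. The sketch below is purely illustrative — the feature names and weights are invented, not drawn from the Dutch system — but it shows how a single protected attribute used as a model input shifts the risk score for an entire group, regardless of individual behaviour.

```python
import math

def fraud_risk(features, weights, bias=-2.0):
    """Toy logistic risk score: sigmoid of a weighted feature sum."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented weights for an illustrative model. The model has 'learned' a
# large weight on a protected attribute (here a dual-nationality flag),
# so that flag alone dominates the score.
weights = {"claims_per_year": 0.1, "late_filings": 0.3, "dual_nationality": 2.5}

# Two applicants with identical benefit-claiming behaviour.
applicant_a = {"claims_per_year": 1, "late_filings": 0, "dual_nationality": 0}
applicant_b = {"claims_per_year": 1, "late_filings": 0, "dual_nationality": 1}

risk_a = fraud_risk(applicant_a, weights)
risk_b = fraud_risk(applicant_b, weights)
# Same behaviour, very different scores: the protected attribute alone
# can push applicant B over an investigation threshold.
print(f"applicant A: {risk_a:.2f}, applicant B: {risk_b:.2f}")
```

The point of the sketch is that discrimination here requires no malicious intent at inference time: once a protected attribute (or a proxy for it) carries weight in the model, every member of that group is scored as higher-risk before any evidence about them individually is considered.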

References:

Ocone, R. (2020). Ethics in Engineering and the Role of Responsible Technology. Energy and AI, 2, 100019. doi:10.1016/j.egyai.2020.100019.

Su, Z., Cheshmehzangi, A., McDonnell, D., Bentley, B., Veiga, C., & Xiang, Y.-T. (2022). Facial recognition law in China. Journal of Medical Ethics. doi:10.1136/medethics-2022-108130. Available from: https://www.researchgate.net/publication/359749765_Facial_recognition_law_in_China [accessed Apr 28, 2022].

Ethics and Technology: An Analysis of Rick and Morty. Available from: https://www.researchgate.net/publication/357006511_Ethics_and_Technology_An_Analysis_of_Rick_and_Morty [accessed Apr 28, 2022].

Engineering Institute of Technology