Daniel Halliday
Jun 7 · Last update 11 days ago.

Does face recognition technology pose a societal risk?

Surveillance cameras have long been widespread throughout most cities, but with the advent of facial recognition software augmented by machine learning algorithms, does this area of technology require greater public debate before it too becomes widely used? We are already tracked all over the internet; should we not be at liberty to decide whether we want a society in which every corner of the public sphere is covered by security cameras scanning people's faces? What are the risks and benefits of facial recognition technology, and should there be some form of debate before it becomes ubiquitous?
Viewpoints

There are also inherent racial biases

Facial recognition software that uses artificial intelligence poses a particular risk to marginalised communities, as these systems produce markedly higher error rates for women and ethnic minorities. Black women are still misidentified by AI facial recognition far more often than other groups, so using this technology in law enforcement and security may further marginalise people who are already under-represented in society. It may also lead to a frightening reality in which mistaken identity results in death or disaster; consider the case of Jean Charles de Menezes, who was profiled by London's Metropolitan Police in 2005, mistakenly identified as a suicide bomber, and shot. If people follow orders from machines that carry inherent biases, problems such as police bias, racial profiling, and fatal errors will only become more severe and more common.

news.bbc.co.uk/2/hi/uk_news/7764882.stm veridiumid.com/blog/racial-profiling-and-biometrics


There desperately needs to be public debate on this moving forward

Capturing such sensitive biometric data without consent is much like taking fingerprints or DNA swabs from a person without their knowledge as they walk through a public area, except that with this technology it happens in real time to everyone within range of the cameras. Police are already using facial recognition in cities in the UK and China without any public debate on questions of purpose, governance, intrusion, oppression, or even the ethics and efficacy of machine learning itself. These risks have led San Francisco to ban the technology, despite the city long being at the heart of technological development; other cities should follow suit, as we should all be allowed to decide what direction technology takes our respective societies in.

nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html
