Blog: Rethinking Risk

Facial Recognition Technologies and Law Enforcement from a Risk Innovation Perspective

Jul 6, 2020


Just how safe or risky are facial recognition technologies? As the cost of computing power decreases and the sophistication of machine learning algorithms continues to rise, facial recognition technology is finding more and more uses. Yet its successful and socially responsible use faces an increasingly complex risk landscape, one that is littered with orphan risks. As a result, organizations looking to use the technology find themselves grappling with decisions that conventional risk management approaches leave them ill-equipped to make.

You have most likely seen facial recognition in headlines like “Proposed legislation banning governments from using it,” “Hawaiian airports under fire for using it to prevent the spread of COVID-19,” and “Customs and Border Protection capturing 270 hours of drone footage from Black Lives Matter protests” in 15 cities in the wake of George Floyd’s death. Maybe you even saw John Oliver’s recent segment on it. With so much at stake, we will use this post to explore the state of play in facial recognition technologies, the business of predictive policing, and how a Risk Innovation approach can help organizations understand what is important to stakeholders, not just shareholders, and how social, organizational, and technological risks can converge to threaten critical areas of value.


The State of Play
Simply put, today’s facial recognition technology uses machine learning (one aspect of artificial intelligence) to allow a computer to detect, and in some applications identify, a human face in a digital image or video frame. Computers are able to do this with a few key ingredients: (1) training data samples (often in the tens of thousands), (2) algorithms, the rules and processes that analyze these data, and (3) goals for the algorithms to accomplish (for example, suggesting tags of familiar faces in your Facebook photos).
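
To make the detection step concrete, here is a minimal sketch using OpenCV’s bundled, pre-trained Haar-cascade face detector. This is an illustrative sketch, not any vendor’s production pipeline: the image path is a placeholder, and modern systems typically use deep neural networks trained on far larger datasets.

```python
# Minimal face-detection sketch using OpenCV's pre-trained Haar cascade.
import cv2

# Load the frontal-face model that ships with the opencv-python package.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")  # "photo.jpg" is a placeholder path
if image is None:
    raise FileNotFoundError("supply a real image in place of photo.jpg")

# Detection runs on grayscale pixel intensities.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is a bounding box: (x, y, width, height).
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```

In practice, the quality of ingredient (1), the training data, largely determines how well detection and matching work across different faces, which is exactly where the problems described next arise.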

But it’s not that simple. A known and well-documented failure of some facial recognition technologies is their inability to recognize non-white faces with accuracy, or in some cases at all. This bias stems from skewed training samples that overrepresent white faces, and it can lead to outcomes with dire consequences for people of color — especially where facial recognition is used to guide police actions, or to determine guilt or innocence in trials.
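
One way such disparities are surfaced is by disaggregating evaluation results by demographic group, as benchmark studies like NIST’s FRVT have done. The sketch below uses hypothetical records and group labels purely for illustration:

```python
# Hedged sketch of a per-group accuracy audit; the records below are
# hypothetical stand-ins for a real labeled evaluation set.
from collections import defaultdict

# Each record: (demographic group, predicted match, ground-truth match).
records = [
    ("group_a", True, True),
    ("group_a", False, False),
    ("group_b", True, False),  # a false match: predicted but not true
    ("group_b", True, True),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, predicted, actual in records:
    tallies[group][0] += int(predicted != actual)
    tallies[group][1] += 1

# A much higher error rate for one group signals a biased model or a
# skewed training set, and warrants investigation before deployment.
for group, (errors, total) in sorted(tallies.items()):
    print(f"{group}: error rate {errors / total:.0%} ({errors}/{total})")
```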

The Business of Predictive Policing Technologies
Among today’s most contentious applications of facial recognition, law enforcement agencies at the federal, state, and local levels have invested heavily in the facial biometric market and put the technology into widespread use. With one-half of American adults mapped, to varying degrees of accuracy, in government databases, mass surveillance raises serious civil liberties concerns and risks becoming another way to deepen the imbalance of over-policing people of color and disenfranchised groups in America.

Over-reliance on the accuracy of facial recognition, and of artificial intelligence in general, led to the wrongful arrest of a Detroit man, Robert Julian-Borchak Williams, who was incorrectly matched to a photo of a 2018 shoplifting suspect. This inaccuracy is not a one-off: it is a predictable outcome of the kind of widespread bias demonstrated by Amazon’s now-infamous Rekognition tool, which misidentified 28 members of Congress as criminal suspects (11 of whom were people of color).

In the wake of the killing of George Floyd and the resurgence of the Black Lives Matter movement in mainstream discourse, companies developing this technology, including Amazon, IBM, and Microsoft, have announced pauses and moratoriums of varying scope on their facial recognition work. All three companies have released statements encouraging Congress to pass legislation governing the use of facial recognition by law enforcement, demonstrating initial steps toward considering the values of their community and customer stakeholders. For example:

“IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” - IBM CEO Arvind Krishna

In a rapidly evolving landscape requiring new levels of accountability and transparency, companies are now being judged not only by the projects they take on, but by the ones they turn down.

Risk Innovation for Emerging Technology
The orphan risks associated with facial recognition technology use by police forces intersect the Risk Innovation Nexus’ three domains in a number of ways:

In Unintended Consequences of Emerging Technology, companies are threatened by law enforcement Co-Opting the Technology (undermining the original intentions of the developers) and by Loss of Agency (reducing the autonomy of individuals as civil liberties are infringed upon).

In Social & Ethical Factors, facial recognition technologies threaten community Privacy at a moment when the social climate has brought issues of Social Justice & Equity to light. Shifting social norms and evolving cultural pressures mean that Social Trends, crystallized in the rallying cry of Black Lives Matter, are naming and shining a light on systemic racism.

In Organizations & Systems, technology companies face threats to value such as Bad Actors (for example, Clearview AI, which has scraped 3 billion images from the web and works with law enforcement). And in considering their own Reputation & Trust, major companies are walking away from facial recognition technology (IBM) or stepping back from it (Amazon).

Our approach to developing and leveraging a risk innovation mindset means identifying your values, your stakeholders, your orphan risks, and the actions you can take. Technology companies at the cutting edge of software development are being held to new standards for the impacts of their tools.

The initial steps corporations are taking, pausing development while calling for increased federal regulation, are useful, but these companies must do more to protect the values of their enterprise, investors, customers, and communities. By identifying stakeholders by name, articulating what is important to these groups, and understanding how their technology can threaten it, big tech companies can innovate responsibly with this powerful technology. At the Risk Innovation Nexus we recognize that this is only the beginning of the rise of facial recognition and similar technologies, and therefore a narrow window for planning and navigating their possible applications.

While there is no prescriptive method for navigating a time of uncertainty like this, companies and organizations are expected to uphold their own organizational values while also anticipating and protecting the values of their communities and customers.

For more information on the Risk Innovation Nexus approach, our resources, and our tools and services, please sign up below to join our mailing list.
