Equitable tech: Companies pause facial recognition, but major questions remain
In response to protests calling for police reform around the country, some of the biggest names in tech are taking a closer look at their facial recognition technologies.
In recent weeks, protests calling for police and criminal justice reform have continued to sweep the United States. These demonstrations have also spurred conversations surrounding flawed, racially biased technologies used by law enforcement agencies, namely facial-recognition systems.
In recent years, police forces have developed quite the appetite for facial recognition technology, with massive federal databases dedicated to these efforts. An estimated one-in-two US adults is logged in a law enforcement facial recognition network, according to a report from the Center on Privacy & Technology at Georgetown Law.
Now, as calls for institutional reform grow, some of the biggest names in tech, including Amazon, IBM, and Microsoft, are pulling their facial recognition products from the market outright or banning law enforcement from using the technologies altogether.
Flaws in facial recognition
A 2019 National Institute of Standards and Technology (NIST) study found that facial recognition algorithms are riddled with flaws. Minority groups, including Asians and African Americans, had higher false-positive rates in one-to-one matching; at times, false positives increased by a factor of up to 100 for these groups. The report also discovered gender biases: African American women had a higher rate of false positives in one-to-many matching.
"In a one-to-one search, a false negative might be merely an inconvenience—you can't get into your phone, but the issue can usually be remediated by a second attempt," said Patrick Grother, the primary author of the report and NIST computer scientist. "But a false positive in a one-to-many search puts an incorrect match on a list of candidates that warrant further scrutiny."
Unfounded scrutiny only increases the risk of misuse for communities disproportionately affected by police violence. Prior to the NIST research, no other study had focused on demographic effects in one-to-many matching, according to the researchers. By the time of the study, these same systems had already been used by law enforcement agencies for years.
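The gap Grother describes between one-to-one and one-to-many searches can be illustrated with a back-of-the-envelope probability sketch (this is a simplified illustration, not NIST's methodology): if each individual comparison has some small false match rate, the chance of at least one false positive compounds with the size of the gallery being searched.

```python
def prob_false_match(f: float, n: int) -> float:
    """Probability of at least one false positive when a probe face is
    compared against a gallery of n enrolled faces, assuming each
    independent comparison has false match rate f (an idealized model).
    """
    return 1 - (1 - f) ** n

# Even a tiny per-comparison error rate balloons as the gallery grows,
# which is why errors in large law enforcement databases matter so much.
for gallery_size in (1, 1_000, 100_000):
    print(gallery_size, prob_false_match(1e-5, gallery_size))
```

Under this toy model, a per-comparison false match rate of 1 in 100,000 is negligible for a single phone unlock, but a search against a gallery of 100,000 faces produces a false match more often than not.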
"It is important for technology companies to understand there are ethical considerations to the applications that they develop. They are not making these advancements in a vacuum," said José Morey, M.D. and Eisenhower Fellow.
SEE: Balancing public safety and privacy during COVID-19: The rise of mass surveillance (TechRepublic)
Responsible innovation in the private sector
In recent years, major players in the tech sector have come under increased scrutiny for turning a blind eye to the misuse of their technology. Silicon Valley's cutting-edge solutions routinely fetch top dollar, and the market for facial recognition has been piping hot for years, currently estimated at $8 billion. Once these technologies are introduced into the real world, unforeseen or overlooked flaws can have dire consequences and ripple effects for decades.
"The tech industry mantra has been 'move fast, break things.' That way of doing business is unsustainable, encroaches on our long-cherished values, and is tearing our society at the seams," Hypergiant Industries' Chief Ethics Officer, Will Griffin said.
Oversight committees
Clashes between people and law enforcement have refocused attention on facial recognition in recent weeks. The inherent racial biases in these systems, considered alongside the increased risk of misuse and abuse, have led more people to call for federal regulatory oversight. Morey suggested creating a federal agency akin to the FDA in its role of monitoring the safety of pharmaceutical medications.
"In healthcare, we wouldn't allow the use of an untested drug. We need to know that it will first do no harm and then that it is truly efficacious. The same should be considered for law enforcement technology," Morey said. "Both industries have similar potential consequences for societies, so it would be prudent to consider having similar regulatory processes."
Facial recognition bans and the coronavirus
Last year, San Francisco made waves when it became the first city to ban the use of facial recognition technology, citing unreliability and civil liberties concerns. Earlier this year, the European Union considered groundbreaking legislation that would have banned the technology, though officials eventually dropped the proposed ban.
Concurrently, more governments and health agencies have looked to implement these technologies for contact tracing to mitigate the spread of the coronavirus. A tricky paradox indeed.
SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)
Ethical vetting process
For now, the recent moves by IBM, Amazon, and Microsoft mark a rare moment of self-regulation within the private sector. Regardless, a larger question remains wholly unanswered: Why were these technologies allowed to disproportionately target minority populations for so long? Moving forward, even larger concerns remain.
"The move raises several existential questions: What was the ethical vetting used when they developed and designed their facial recognition products? How broadly did they think about potential use cases and their impact on Americans and their own company values? What did they really learn from this backlash and what will be their ethical vetting process going forward?" Griffin said.