"It [facial recognition regulation] can be immediate, but maybe there's a waiting period before we really think about how it's being used," Pichai said. "It's up to governments to chart the course [for the use of such technology]." He spoke in favor of a five-year moratorium the European Commission is considering in Brussels. Later in the week, at the World Economic Forum in Davis, he spoke in favor of the regulation of AI while emphasizing that AI needs to get used to get better.
Microsoft, on the other hand, doesn't appear to like the idea of a delay in the use of facial recognition. Microsoft chief legal officer and president Brad Smith pointed to positive use cases like finding missing children, and seemed to contend that bans and moratoriums can stand in the way of progress. "There is only one way at the end of the day to make technology better, and that is to use it," he said.
It's important to understand that facial recognition is a burgeoning industry that includes some of the best-funded AI startups in the world. And the AI industry as a whole is seeing huge growth. CB Insights and the National Venture Capital Association found that AI startups raised record amounts in 2019 in the U.S. and worldwide. Analysis in the 2019 AI Index shows that more than $70 billion was invested in artificial intelligence businesses last year, with facial recognition ranked high among areas of investment. Businesses are exploring use cases for AI that range from patient or employee verification to analysis of job candidates in video interviews, but another moratorium demand this week points out its use in surveillance.
The United Nations called for a moratorium on private surveillance technology worldwide following an investigation into the May 2018 hacking of Amazon CEO Jeff Bezos' iPhone X. The investigation concluded that hackers gained access to the entirety of his phone's data through the delivery of a malicious MP4 video by Saudi Arabian crown prince Mohammed bin Salman via WhatsApp. The hack was part of an attempt to influence Washington Post coverage. Five months later, Washington Post columnist Jamal Khashoggi was killed by Saudi operatives.
PBS NewsHour's Nick Schifrin asked UN special rapporteur on extrajudicial executions Agnès Callamard if it's too late for a moratorium – if the spread of surveillance technology has already crossed the Rubicon. Callamard replied that the world has no choice but to try to control these technologies. "We have to rein it in, in the same way that we have tried and sometimes succeeded in reining in some of the weapons thought to be unlawful or illegal," she said.
"We have here an example of the richest man on Earth with unlimited resources, and yet it took him several months to realize his phone was hacked, and it took three months by top notch experts to uncover the source of the hacking. So this technology is a danger to all. It's a danger to national security and democratic processes in the United States."
Her point was underlined in a New York Times op-ed earlier this week that asserts that a focus on bans or moratoriums of facial recognition misses the larger point that digital data brokerage firms operate with virtually no regulation today, and that real change requires more comprehensive privacy regulation. Pressing pause can buy time, but it doesn't resolve the root problem.
It's ironic to think that Jeff Bezos, a man whose company secretly sells facial recognition to governments, is the chief victim of an alleged surveillance crime committed by the head of a state, a crime so severe that it prompted the United Nations to say we must take immediate action.
To Callamard's point, technology that threatens or challenges the principles underpinning democratic societies shouldn't be allowed to spread without scrutiny.
In response to government analysis that demonstrates bias, and to fears that oppressive surveillance can be used to control people, lawmakers in about a dozen U.S. states are currently considering some form of facial recognition regulation. A Congressional committee may soon propose legislation regulating the use of facial recognition software by government, law enforcement, or the private sector. Restrictions suggested in hearings last week include prohibiting its use at protests or political rallies, so as not to stifle First Amendment rights, and requiring law enforcement to disclose when the tech is used to arrest or convict a suspect.
If federal policy like the kind being considered becomes law, facial recognition regulation could protect individual rights without the need for a moratorium. Standards for how it can be used, tests like the kind NIST performs and Microsoft endorses that allow third-party vendors to verify performance results, and limitations on AI's use in hiring practices and public places need to be defined and enforced.
In debates about how to regulate facial recognition, you almost always find the claim that regulation will stifle innovation. That's a straw man argument: curbing the use of the technology in public settings does not necessarily stifle its innovation, because use of the technology in lab or limited settings can still lead to improvements.
People and their elected representatives can choose to delay a technology, but that's not to say there won't be consequences: What would happen if a market for facial recognition is allowed to grow only in China or other more permissive parts of the world?
But the question also needs to be asked: How could business and society change for the worse if no limits are placed on the use of facial recognition?
Some facial recognition systems have shown progress in performance, but analysis by the National Institute of Standards and Technology (NIST) shows inequity persists. At what point is a technology known to work best on middle-aged white men, and worse for virtually everyone else in society, considered a civil rights issue?
A foundation like the kind Microsoft supported then and now – built on consent, testing by third-parties, and human review – seems like part of what's needed to avoid a "race to the bottom."
Recent statements by Smith and Pichai about a need to use technology for it to get better seem to suggest that society has to adapt to technology instead of the other way around. But much remains unanswered or unregulated about how facial recognition and surveillance technology can be used in society today. Extraction of people's data can fuel highly profitable predictive machines. The idea of a moratorium must remain on the table: people need to be shielded from surveillance and the potential infringement of their rights, and a pause would allow time to establish regulation, standards, and tests to measure results.
Action is necessary even if it seems too late to stop the spread of private surveillance. Things are moving fast in facial recognition right now, both for lawmakers anxious to act and for businesses interested in deploying or selling the technology.
IBM called for rules aimed at eliminating bias in artificial intelligence to ease concerns that the technology relies on data that bakes in past discriminatory practices and could harm women, minorities, the disabled, older Americans and others. (via Bloomberg)
With black box AI, people are denied or granted loans, accepted or rejected for university admission, offered a lower or higher price on car insurance, and more, all at the hands of AI systems that usually offer no explanations.