Microsoft has followed competitors Amazon and IBM in restricting how third parties, in particular law enforcement agencies, can obtain its facial recognition technology. The company says it does not currently provide the technology to police, and that it will not do so until there are federal laws governing how it can be deployed safely, without infringing on human rights or civil liberties.
IBM said earlier this week that it will cease all sales, development, and research of the controversial technology. Amazon said Wednesday that it would stop providing the technology to police for one year, to give Congress time to put in place “stronger regulations governing the ethical use of facial recognition technology.”
Microsoft President Brad Smith closely mirrored Amazon’s stance on Thursday when describing the company’s latest approach to facial recognition: not ruling out one day marketing the software to police, but pushing for legislation first.
“We have decided that we will not offer facial recognition to U.S. police forces until we have a comprehensive civil rights statute in effect that regulates this technology,” Smith said.
Smith added that Microsoft will still “put in place several new evaluation criteria to look for some possible applications of technology that go beyond what we currently have.” Research has shown that facial recognition systems, because they are trained on data sets consisting mainly of white male faces, have major difficulty recognizing darker-skinned people and correctly identifying them.
Artificial intelligence experts, campaigners, and politicians have worried for years about supplying these technologies to governments, warning not only of racial profiling but also of the civil rights and privacy abuses implicit in technologies that could fuel the growth of authoritarian states.
Although Microsoft has historically sold access to these technologies to police forces, the company has since taken a more ethical stance. Last year, Microsoft denied California law enforcement access to its facial recognition tech over human rights concerns. In March, it also announced that it would no longer invest in third-party companies developing the technology, after reports that an Israeli startup it had invested in was providing facial recognition to the Israeli government to surveil Palestinians.
Smith himself has publicly expressed concern about the dangers of unregulated facial recognition since at least 2018. Even so, the company took a large facial recognition training dataset offline only after an investigation by the Financial Times.
According to the American Civil Liberties Union (ACLU), as recently as this year Microsoft supported legislation in California that would have allowed police departments and private companies to purchase and use such systems, a stance at odds with the laws passed last year in San Francisco, Oakland, and other California cities prohibiting police and government use of the technology. That bill, AB 2261, was defeated last week after the ACLU and a coalition of 65 organizations came together to fight it.
Microsoft has been a vocal supporter of federal regulations that would dictate how such systems can be used and what safeguards are in place to protect privacy and prevent discrimination.