San Francisco Bans Facial Recognition Technology

In a closely watched vote, the San Francisco Board of Supervisors banned city departments—including the police—from using facial recognition surveillance technology. The decision makes San Francisco the first U.S. city to ban the technology.

The board’s measure also requires city departments to disclose the surveillance technology they already use and to obtain Board of Supervisors approval before adopting any new technology that collects or stores someone’s data.

“This is really about saying we can have security without being a security state,” said Supervisor Aaron Peskin, who sponsored the legislation, in an interview with the San Francisco Chronicle. “We can have good policing without being a police state. Part of that is building trust with the community.”

Other jurisdictions are considering similar bans, including Oakland, California, and Somerville, Massachusetts.

“In Massachusetts, a bill in the State Legislature would put a moratorium on facial recognition and other remote biometric surveillance systems,” according to The New York Times. “On Capitol Hill, a bill introduced last month would ban users of commercial face recognition technology from collecting and sharing data for identifying or tracking consumers without their consent, although it does not address the government’s uses of the technology.”

The use of facial recognition technology has increased rapidly over the past several years, with heightened interest from law enforcement agencies seeking to identify persons of interest and from security practitioners tracking suspicious persons. Pop musician Taylor Swift, for instance, has used facial recognition technology at her concerts since 2018 to identify known stalkers.

In recent research, the National Institute of Standards and Technology (NIST) found that between 2014 and 2018 facial recognition technology became 20 times more accurate in searching a database to find a matching photograph.

“The top-performing algorithms from the latest round of testing make use of a type of machine-learning architecture called convolutional neural networks,” according to “Good with Faces” in the March 2019 issue of Security Management. “Such machine-learning tools have been rapidly advancing in the last few years, and this has had a major impact on the facial recognition industry.”
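To illustrate the approach described above, here is a minimal, untrained sketch of how a convolutional neural network can be used for face matching: the network maps each aligned face crop to an embedding vector, and two faces are compared by the cosine similarity of their embeddings. The PyTorch framework, the FaceEmbeddingNet architecture, the input size, and the 0.6 threshold are illustrative assumptions, not details of the algorithms NIST evaluated or of any production system.

```python
# Sketch: a convolutional network maps a face crop to an embedding vector;
# two faces "match" when their embeddings are close (high cosine similarity).
# Architecture, input size, and threshold are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbeddingNet(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        # Convolutional layers learn local facial features (edges, textures, shapes).
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embedding_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x).flatten(1)
        # L2-normalize so cosine similarity reduces to a simple dot product.
        return F.normalize(self.fc(x), dim=1)

def match_score(model: nn.Module, face_a: torch.Tensor, face_b: torch.Tensor) -> float:
    """Cosine similarity between two face embeddings (1.0 = identical)."""
    with torch.no_grad():
        emb_a, emb_b = model(face_a), model(face_b)
    return F.cosine_similarity(emb_a, emb_b).item()

if __name__ == "__main__":
    model = FaceEmbeddingNet().eval()
    # Stand-ins for two aligned 112x112 RGB face crops (batch of one each).
    face_a = torch.rand(1, 3, 112, 112)
    face_b = torch.rand(1, 3, 112, 112)
    score = match_score(model, face_a, face_b)
    print(f"similarity: {score:.3f}  match: {score > 0.6}")  # 0.6 threshold is arbitrary
```

In practice, such a network would be trained on very large sets of labeled face images so that embeddings of the same person cluster together; the sketch above shows only the comparison machinery, not the training that makes it accurate.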

Critics, however, have expressed concerns that the technology in the wrong hands can be a dangerous tool that allows real-time surveillance. The New York Times examined this in an April exclusive about how China is using facial recognition technology to crack down on Uighurs—a Muslim minority.

“The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review,” according to the Times. “The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.”