
FTC bans Rite Aid from using facial recognition after false shoplifting accusations

Rite Aid, one of the largest drugstore chains in the U.S., has agreed to stop using facial recognition technology in its stores as part of a proposed settlement with the Federal Trade Commission (FTC). The FTC alleged that Rite Aid violated the privacy and civil rights of its customers by using faulty and discriminatory facial recognition systems that falsely flagged thousands of people as shoplifters.

Rite Aid’s facial recognition program

According to a Reuters investigation, Rite Aid deployed facial recognition systems in 200 stores across the U.S. from 2012 to 2020, without informing its customers or obtaining their consent. The systems were installed mostly in lower-income, non-white neighborhoods, where the risk of theft and violence was perceived to be higher.

The facial recognition systems were supposed to identify potential shoplifters, based on a database of previous offenders, and alert store staff. However, the systems were prone to errors and biases, and often misidentified innocent customers as matches. Some of the systems were also supplied by a company with links to China and its authoritarian government, raising concerns about data security and human rights.


FTC’s charges and settlement

The FTC filed a complaint against Rite Aid, accusing the company of engaging in unfair and deceptive practices that harmed consumers. The FTC claimed that Rite Aid:

  • Failed to disclose its use of facial recognition to its customers, and did not provide them with any choice or control over the collection and use of their biometric data.
  • Failed to ensure the accuracy and reliability of its facial recognition systems, and did not test or audit them for potential errors or biases.
  • Failed to protect the privacy and security of its customers’ biometric data, and did not implement reasonable safeguards to prevent unauthorized access or disclosure.
  • Failed to comply with the laws and regulations of several states that restrict or prohibit the use of facial recognition without consent or notice.

The FTC proposed a settlement that would require Rite Aid to:

  • Cease using facial recognition technology in its stores, and remove or disable any existing systems or devices.
  • Delete or destroy any biometric data collected from its customers, and certify that it has done so to the FTC.
  • Obtain express consent from its customers before collecting or using any biometric data in the future, and provide clear and conspicuous notice of its practices and policies.
  • Implement a comprehensive biometric privacy program, and conduct regular assessments and audits to ensure compliance.
  • Cooperate with the FTC’s investigation and oversight, and report any violations or breaches.

The settlement is subject to public comment for 30 days, after which the FTC will decide whether to make it final.

Rite Aid’s response and implications

Rite Aid confirmed that it had ended its facial recognition program in July 2020, after Reuters informed the company of its findings. The company said that the program had nothing to do with race, and that it was intended to deter theft and protect staff and customers from violence. The company also said that it had no evidence that its data was sent to China.

Rite Aid said that it agreed to the settlement with the FTC to avoid the cost and distraction of litigation, and that it did not admit any wrongdoing or liability. The company said that it respected the privacy and civil rights of its customers, and that it would comply with the terms of the settlement.

The FTC’s action against Rite Aid is one of the first of its kind in the U.S., and could have significant implications for the use of facial recognition technology by other retailers and businesses. The FTC has the authority to enforce consumer protection laws and regulate privacy and data security practices in the U.S., and it has signaled its intention to crack down on the misuse and abuse of facial recognition and other biometric technologies.

The FTC’s action also reflects the growing public awareness and concern about the potential harms and risks of facial recognition, especially for marginalized and vulnerable communities. Several states and cities have enacted or proposed laws and regulations to limit or ban the use of facial recognition by government agencies and private entities, citing issues such as privacy, consent, accuracy, bias, discrimination, and accountability. Civil rights and advocacy groups have also called for a moratorium or an outright ban on facial recognition, arguing that it poses a threat to human dignity and democracy.

Facial recognition technology is a powerful and controversial tool that can have both positive and negative impacts on society. As the technology evolves and becomes more widespread, it is important to ensure that it is used in a responsible and ethical manner, with respect for the rights and interests of all stakeholders.
