Meta in row after workers who say they saw smart glasses users having sex lose jobs
Meta is facing mounting pressure to justify its decision to terminate a significant contract with Sama, an AI training company, following allegations from Kenya-based workers who claimed they were exposed to graphic content captured by Meta's smart glasses.
The controversy began in February when Sama employees disclosed to two Swedish newspapers that they had witnessed intimate moments through the glasses' recordings, including users engaged in private activities such as using the bathroom and sexual encounters.
Within two months of these revelations, Meta severed its relationship with Sama, leading to the planned redundancy of 1,108 workers, according to the company.
Meta maintains its decision was based on Sama's failure to meet established standards—a claim Sama categorically disputes. However, a Kenyan workers' organization suggests the contract cancellation was actually retribution for employees speaking publicly about their experiences.
In response to BBC News inquiries, Meta stated: "We decided to end our work with Sama because they don't meet our standards." The company has not directly addressed allegations that the termination was connected to worker testimonies.
Sama has vigorously defended its performance, stating: "Sama has consistently met the operational, security and quality standards required across our client engagements, including with Meta. At no point were we notified of any failure to meet those standards, and we stand firmly behind the quality and integrity of our work."
Disturbing Revelations
The Swedish publications Svenska Dagbladet and Göteborgs-Posten published their investigative findings in late February, featuring accounts from unnamed workers tasked with reviewing videos captured by Meta's smart glasses.
One worker described the scope of their exposure: "We see everything - from living rooms to naked bodies."
Meta acknowledged at the time that subcontracted workers might occasionally review content filmed on its smart glasses when users shared it with Meta AI. The company characterized this practice as standard industry procedure aimed at enhancing customer experience.
The revelations have prompted a swift regulatory response. The UK's Information Commissioner's Office contacted Meta regarding what it termed a "concerning" report, while Kenya's Office of the Data Protection Commissioner launched an investigation into privacy concerns associated with the glasses.
A Meta spokesperson addressed the redundancies, explaining: "Last month, we paused our work with Sama while we looked into these claims. We take them seriously. Photos and videos are private to users. Humans review AI content to improve product performance, for which we get clear user consent."
Transparency and Privacy Concerns
Meta introduced its AI-powered glasses in September through partnerships with Ray-Ban and Oakley. The devices offer features including text translation and visual question-answering capabilities, proving particularly valuable for users with visual impairments.
However, as adoption has increased, so have concerns about potential misuse. The Sama workers functioned as data annotators, manually labeling content to train Meta's AI systems in image interpretation. They also reviewed transcripts of AI interactions to evaluate response quality.
One particularly troubling incident involved glasses left recording in a bedroom, subsequently capturing footage of a woman undressing without her apparent knowledge. While Meta's glasses include an indicator light that activates during recording, concerns persist about non-consensual recording, including reports of women being secretly filmed in Kenya.
Sama, a US-based outsourcing company that evolved from a non-profit focused on creating technology employment opportunities, now operates as a certified B Corporation emphasizing ethical business practices.
This is not Sama's first controversial Meta contract. A previous arrangement involving Facebook content moderation faced criticism and legal action from former employees who described exposure to traumatizing graphic content. Sama subsequently expressed regret about accepting that work.
Industry Implications
Naftali Wambalo of the Africa Tech Workers Movement, who is involved in ongoing legal action related to the earlier Facebook moderation case, has spoken with workers from the smart glasses contract. He believes Meta's decision to end the relationship was motivated by a desire to prevent workers from discussing human review of glasses-captured content.
"What I think are the standards they are talking about here are standards of secrecy," Wambalo told BBC News.
Meta has previously stated that users are informed about potential human review through its terms of service.
Mercy Mutemi, a lawyer representing affected workers and executive director of campaign group the Oversight Lab, views Meta's actions as a cautionary tale for the Kenyan government.
"We've been told that this is our entry route into the AI ecosystem," she explained to the BBC. "This is a very flimsy foundation to build your entire industry on."
The situation highlights broader questions about transparency, worker protection, and privacy in the rapidly expanding AI industry, particularly regarding the human labor that supports artificial intelligence systems.