Meta’s Smart Glasses and the Consent Nobody Asked For

A journalist spends a month wearing Meta’s Ray-Ban smart glasses and comes away with an unsettling conclusion: not only did the glasses make her feel like a creep, they made her think like one. Elle Hunt’s account in The Guardian is a candid and at times uncomfortable read — one that cuts to the heart of a problem that goes far beyond awkward technology. At its core, this story is about consent, and who gets to decide whether they’re being watched.

Meta sold more than seven million pairs of these glasses globally in 2025. They look, to most people, like ordinary Ray-Ban Wayfarers. They record video and photos, stream calls in first-person, and pipe AI responses directly into the wearer’s ear. The recording indicator — a small blinking LED — is easy to miss in bright conditions, and workarounds to disable it entirely are widely shared online. Bystanders have no reliable way of knowing whether the person across from them on the tube, in a café, or in an IKEA aisle is filming them.

A Camera That Changes How You Think

What makes Hunt’s account particularly striking is not the technology itself, but what it does to behaviour. Within weeks, the glasses had become second nature. She found herself instinctively wishing she had been recording when she ran into an ex-partner, or when she spotted a stranger walking a dog that looked just like them. She describes the feeling as the possibilities of the technology overriding her better judgment — and even basic decency.

This is not a fringe reaction. The tech commentator Henry Fisher at Techlore makes a similar observation: when you wear something on your face, your entire relationship with the world shifts. A phone in your pocket is one thing. Something permanently between you and reality — recording the world on behalf of a company with a long track record of data exploitation — is quite another.

Meta has already confirmed that content captured through the glasses may be used to train its AI. Swedish journalists recently reported that human moderators employed by Meta review intimate footage from the devices, including footage of people using the toilet and having sex. Meta’s stated defence is that media stays on the user’s device unless shared — but that does little to address the deeper structural problem.

The Missing Consent

Every digital service you sign up for involves some form of consent: you click accept, enter into a relationship with the company, and agree to the terms on offer. That relationship is often unfair and exploitative, but at least you chose to enter it. The people being filmed by Meta glasses chose nothing. They accepted no terms. They handed over nothing. And yet their faces, their conversations, and their private moments are being captured by a device they may never even have noticed.

There are no meaningful legal protections for bystanders in most countries. Filming in public is generally permitted. And there is nothing in Meta’s current product design that gives a stranger any meaningful recourse. Professor Iain Rice of Birmingham City University, quoted in the Guardian piece, puts it plainly: if you don’t want to be recorded, the only reliable option is to move out of the way. That is not a workable standard for a society that values privacy.

The technology also intersects with Meta’s known plans for facial recognition, which the company has reportedly been developing for integration into the glasses. Combine that with seven million pairs already in the wild, and the scale of potential intrusion becomes difficult to overstate. Every wearer becomes a potential node in a passive surveillance network — one feeding data back to a company whose entire revenue model depends on knowing as much about people as possible.

What Can Actually Be Done

Rice suggests that Meta could implement technical safeguards, for example blurring or removing unapproved faces at the preprocessing stage, before any data is stored or transmitted. That Meta has not done this is telling. It reflects a company that places responsibility entirely on the wearer, while harvesting whatever data passes through the camera in the meantime.
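To make the proposal concrete, the safeguard Rice describes amounts to a redaction step that runs before any frame is persisted. The sketch below is a toy illustration only: `detect_faces` is a hypothetical stub (a real device would run an on-board detection model), frames are plain 2D lists of grayscale values rather than camera output, and the "blur" is a crude box average. Nothing here reflects how Meta's glasses actually work.

```python
def detect_faces(frame):
    """Hypothetical face detector: returns bounding boxes as
    (row, col, height, width). Stubbed with a fixed region for this sketch;
    a real pipeline would use an on-device detection model."""
    return [(1, 1, 2, 2)]

def blur_region(frame, row, col, h, w):
    """Replace a rectangular region with its mean value -- a crude box blur
    standing in for real redaction."""
    pixels = [frame[r][c] for r in range(row, row + h)
                          for c in range(col, col + w)]
    mean = sum(pixels) // len(pixels)
    for r in range(row, row + h):
        for c in range(col, col + w):
            frame[r][c] = mean
    return frame

def preprocess(frame, approved_boxes=()):
    """Blur every detected face that the wearer has not explicitly approved,
    BEFORE the frame is stored or transmitted anywhere."""
    for box in detect_faces(frame):
        if box not in approved_boxes:
            frame = blur_region(frame, *box)
    return frame

# A 4x4 "frame"; the stubbed detector flags the central 2x2 region.
frame = [[10, 20, 30, 40],
         [50, 60, 70, 80],
         [90, 100, 110, 120],
         [130, 140, 150, 160]]
redacted = preprocess(frame)
# The detected region is flattened to its mean; pixels outside it are untouched.
```

The point of the design is ordering: redaction happens at the capture stage, so unapproved faces never exist in stored or transmitted data at all, rather than being filtered later on a server the bystander has to trust.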

On an individual level, both Hunt and Fisher suggest that social pushback is legitimate and worth normalising. Ask friends who wear these glasses to take them off around you. Ask strangers you suspect of filming you to stop. Remind people, firmly and politely, that they are in the real world with other people in it. These feel like small gestures against a large problem, but norms shift through exactly this kind of repeated friction.

The broader ask is regulatory. The legal frameworks governing surveillance, consent, and data collection were not designed for a world where a pair of glasses can passively record everything within sight. Getting regulation to keep pace with technology is genuinely difficult — but it is necessary. The alternative is a slow normalisation of a world where you never quite know whether the person standing next to you is filing a report to Zuckerberg.

Key Takeaways

  • Meta’s Ray-Ban smart glasses record video and photos with an indicator light that is easy to miss and can be disabled via widely shared workarounds.
  • Bystanders filmed by these glasses have consented to nothing — they have no relationship with Meta and no meaningful legal recourse.
  • The technology changes wearer behaviour: a Guardian journalist reported feeling and thinking like a creep within weeks of using them.
  • Meta has confirmed that footage from the glasses may be used to train its AI, and has reportedly developed facial recognition capabilities for future integration.
  • Legal frameworks have not kept pace. Regulatory intervention is needed to protect bystanders, not just users.

Further reading and viewing:
The Guardian: I wore Meta’s smartglasses for a month
Techlore: Meta’s ‘Pervert Glasses’ Are Turning People Into Creeps

Photo: Mikhail Nilov via Pexels
