Meta Is Turning Its Smart Glasses Into A Mass Surveillance Tool… And You Can’t Stop It

If not Palantir, why Palantir-shaped??

Palantir builds spy tech for the CIA, DHS, and ICE. It aggregates data, maps your life, and tells governments who to watch. Meta is building something with the same bones. It’s called Name Tag, a facial recognition feature coming to Ray-Ban smart glasses that lets a wearer look at a stranger in public and have an AI identify them in real time, pulling their name and profile directly from Facebook and Instagram. The surveillance hardware is a $300 fashion accessory, the database was built by 3 billion people tagging photos for free, and the targets are anyone, anywhere, who never agreed to any of it.

A leaked internal memo from May 2025, obtained by The New York Times, laid out the full scope: the feature is planned for every pair of Meta’s glasses, from Ray-Bans to the Oakley Meta HSTN sports line. Meta’s official response was a practiced non-denial: “we’re still thinking through options and will take a thoughtful approach if and before we roll anything out.” Companies that aren’t building something just say they’re not building it. Meta is not saying that.

The Database Was Being Built Before the Glasses Existed

Facebook turned on automatic photo tagging in 2010 with zero opt-in, and for eleven years, every time you tagged a friend’s face in a photo, you were feeding their facial recognition model. When Meta “deleted” over a billion faceprints in 2021 under lawsuit pressure, they kept the photos. They kept the social graph. They kept the engineers who built the whole thing. Name Tag isn’t a new product concept; it’s a previously mothballed capability getting a second run, this time with a camera on your face instead of a server in Menlo Park.

Anyone with a public Instagram account is immediately a potential target (it’s not like making your account private makes you any safer), which covers hundreds of millions of people who signed up to share photos, not to be enrolled in a real-world biometric identification system. Remember Portal, Meta’s smart home display with a face-tracking camera? It launched in 2018 right in the middle of the Cambridge Analytica fallout, and consumers collectively declined to put a Facebook camera in their living room. Meta discontinued it by 2022. The lesson they apparently took wasn’t “don’t build surveillance hardware.” It was “make sure the camera comes in wearing someone else’s face.”

They Know Exactly How We’ll React

“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” That’s a sentence taken directly from an internal planning document from Meta’s Reality Labs, dated May 2025 and reviewed by The New York Times. The company was explicitly planning to exploit civic chaos as a launch window, timing the rollout of a mass surveillance feature to coincide with whatever crisis happened to be occupying public attention. Sleight of hand, with a dash of corporate evil. There’s no ethical framework in which that sentence represents good-faith product development.

Their original rollout plan was to debut Name Tag at a conference for the blind, wrapping a mass-surveillance tool in the language of accessibility before expanding it to the general public. That plan was eventually shelved, but the thinking behind it is the more revealing part. The accessibility framing was a softening mechanism, a way to generate human-interest coverage before the obvious misuse cases took over the conversation. Privacy advocates, abuse charities, and civil liberties groups were going to come for this feature regardless. The strategy was never to address their concerns. It was to buy a news cycle of goodwill first.

Your Face Is Being Reviewed in a Nairobi Office Park Right Now

Swedish newspapers Svenska Dagbladet and Göteborgs-Posten tracked Meta’s data pipeline from Ray-Ban glasses worn in Western homes to a company called Sama, operating out of an office park in Nairobi, Kenya. Workers there are paid to watch footage captured by glasses users and label what they see, teaching Meta’s AI to understand and interpret the visual world. The footage includes people on the toilet, naked bodies, couples in bed, bank card details accidentally filmed, and intimate conversations between people who had no idea they were being recorded, let alone reviewed by a contractor on another continent.

Meta’s defense was to point at a clause buried in their terms of service permitting “manual (human)” review of AI interactions, which is technically accurate and practically worthless as a justification, because no person buying a pair of fashion-forward smart glasses understands that clause to mean workers in Kenya are watching them undress. The April 2025 privacy policy update for the glasses silently expanded Meta’s right to use all captured photos, videos, and audio for AI training, with no prominent notification to existing owners. A class action lawsuit filed in San Francisco federal court in March 2026 argues this constitutes consumer fraud, given that Meta’s own marketing described the glasses as “designed for privacy, controlled by you.” The UK’s Information Commissioner’s Office wrote to Meta characterizing the situation as “concerning,” which in British regulatory language lands somewhere between “deeply troubled” and “genuinely alarmed.”

$2.1 Billion in Fines and Still Going

The fine history reads like a repeat offender’s rap sheet. Meta paid $650 million to settle an Illinois class action over collecting facial geometry without consent through Facebook’s “Tag Suggestion” feature. They paid another $68.5 million for the same BIPA violation in 2023. In 2024, Texas extracted $1.4 billion from them for capturing biometric data on millions of Texans “for commercial purposes” without informed consent, with the lawsuit specifically alleging Meta was disclosing that data for profit. That’s over $2.1 billion in biometric privacy penalties across four years, all for variations of the same violation, against the same company, building the same technology.

None of it changed the product roadmap. The Texas settlement of $1.4 billion represents roughly one percent of Meta’s $134 billion in 2023 revenue. The Electronic Privacy Information Center has filed complaints with the FTC calling Name Tag a direct facilitator of “stalking, harassment, doxxing and worse.” The EU’s AI Act classifies real-time remote biometric identification in public spaces as high-risk AI and prohibits it for most commercial applications. The fines and the regulatory pressure are clearly baked into Meta’s planning rather than functioning as deterrents. They paid $2.1 billion to establish what a decade of biometric data collection actually costs, looked at that number next to their revenue, and decided it wasn’t a fine. It was an investment.

The Glasses Are Just the Beginning

Name Tag as currently designed still requires the wearer to deliberately trigger an identification query. The next product removes even that minimal friction. Internal documents describe “super sensing” glasses with always-on cameras and microphones that record continuously for the entire duration they’re worn, feeding an unbroken stream to an AI assistant that builds a fully searchable log of the wearer’s day. The surveillance model shifts from opt-in query to permanent ambient default. Every person who passes within the glasses’ field of view gets their face processed, regardless of whether they’ve opted out, regardless of whether they even know the technology exists.

The threat model was demonstrated in 2024 by two Harvard students, AnhPhu Nguyen and Caine Ardayfio, using nothing but current, available hardware. They connected Ray-Ban Meta Gen 2 glasses to PimEyes, a commercial facial recognition engine, alongside LLM data extraction tools, FastPeopleSearch, and Cloaked.com for social security lookups. Streaming the feed to Instagram Live, they identified strangers on the Boston subway and pulled names, home addresses, phone numbers, and social security numbers in seconds. They approached a woman on the street, told her they’d met at a Cambridge Community Foundation event, and she believed them. They told a female student her Atlanta home address and her parents’ names; she confirmed they were right. Name Tag doesn’t make this possible. It already is possible. Name Tag just makes it Meta’s official product.

What “Opt-Out” Actually Means

Meta’s proposed safeguards rely on limiting identification to connected contacts or public accounts, and offering an opt-out toggle buried in Instagram settings. The connected-contacts restriction doesn’t address the most statistically common danger. Stalkers, abusers, and harassers overwhelmingly target people they already know. Limiting the feature to existing connections doesn’t reduce the risk to the most vulnerable users; it focuses it on them. Domestic abuse charities in the UK raised this point directly, noting that abusers could use Name Tag to locate survivors who have relocated, changed their appearance, or created entirely new digital identities to stay safe.

The opt-out toggle is available to Instagram’s roughly 2 billion monthly active users, almost none of whom will encounter it organically. Privacy protections that require the potential victim to proactively locate and activate a setting are not privacy protections. They are liability documentation. Abuse survivors, journalists, political dissidents, undocumented individuals, people in witness protection: these are the people with the highest stakes, and also the people with the least bandwidth to hunt through app settings on the off chance that facial recognition has been added to a device they don’t even own. The toggle protects Meta in a courtroom. It protects its users in no meaningful sense at all.

We Were Free Labor All Along

Twenty years of tagging photos, liking posts, following accounts, and uploading selfies. Every interaction trained the model. Every tagged face sharpened the database. Meta framed all of it as self-expression and social connection, and it was, but it was also free labor on the world’s largest biometric mapping project. The glasses are the hardware layer that connects that digital registry to the physical world. The data collection phase is largely complete. The deployment phase is now.

Reddit ran the same playbook with text and nobody stopped them either. In early 2024, Reddit signed a $60 million-per-year deal with Google to license user-generated content for AI training, then struck a separate deal with OpenAI estimated at $70 million annually. Two decades of forum posts, niche expertise, personal advice, and community-built knowledge that users created for each other got packaged and sold to the highest bidder. Users built the database. Reddit sold it. The users got nothing except the knowledge that their words now live inside a model they don’t control. Meta’s version is identical in structure and more intimate in substance, because the asset being extracted isn’t something you typed. It’s your face, your home, and the faces of everyone in your immediate vicinity.

While all of this unfolds on the hardware and data side, Meta is simultaneously stripping privacy from the software side. End-to-end encryption for Instagram DMs dies on May 8, 2026. Meta’s stated justification is that “very few people” were using it, which is a direct consequence of never making it the default and never promoting it. After May 8, Meta retains full technical access to message content, which means any contractor, government request, or legal process with sufficient leverage can access it too. The feature was specifically extended to users in Ukraine and Russia during the war as a safety measure for people in genuine danger. Those users are now being told to download their chats before the cutoff. The facial recognition is the front door. The unencrypted message access is the unlocked safe. At some point the question stops being “is Meta building a surveillance company?” and starts being “why are we still acting like it isn’t one?”

The post Meta Is Turning Its Smart Glasses Into A Mass Surveillance Tool… And You Can’t Stop It first appeared on Yanko Design.

Apple Finally Rounded the MacBook’s Corners After 18 Years

For about 18 years, every aluminum MacBook has looked more or less the same. Silver. Angular. Quietly serious. There’s nothing wrong with that. Apple’s unibody aluminum design, introduced in October 2008 and carved from a single block of metal, was genuinely elegant and set the template for an entire industry. But it also retired something along the way: the idea that a Mac laptop could feel chosen rather than just defaulted to.

The MacBook Neo, announced March 4 and starting at just $599, is the first real crack in that template. It comes in four colors (blush, indigo, silver, and a yellow-green called citrus) with enclosure corners that are noticeably softer than any aluminum Mac in recent memory. Whether that adds up to a proper design statement or just smart positioning is worth thinking through.

Designer: Apple

What happened to Apple’s color confidence

iBook G3 Clamshell (courtesy of Wikipedia)

Apple’s fondness for color didn’t always live inside an iPhone. The iBook G3, launched in 1999, came in tangerine and blueberry, and later in indigo and key lime. It was rounded, slightly toy-like, and completely unapologetic about being a consumer product. When the aluminum unibody arrived in 2008, Apple traded that warmth for precision machining and sharp rectilinear edges. Right call for the MacBook Pro. Default for everything else, apparently, for nearly two decades.

The result was a color drought in aluminum Mac laptops that has lasted until now. Silver, space gray, midnight, starlight: all variations on the same mood of professional restraint. The Neo’s citrus and blush aren’t just options on a spec page. They’re a quiet admission that not every laptop buyer wants a device that looks like it belongs in a boardroom. For Apple, that’s actually not a small thing to say at the product level.

Two different stories about corners

M1 MacBook Pro (2021)

There’s a distinction worth making here, because “rounded corners” gets used loosely when describing the Neo. MacBook displays have had rounded screen corners since 2021, which is a display-level detail and nothing new. What’s different on the Neo is the chassis itself. The physical aluminum enclosure is softer at the edges and corners than any aluminum Mac before it, and Apple’s own press materials describe “soft, rounded corners” specifically in terms of how the device feels to hold and carry.

That’s a real shift in the design language. The 2008 unibody was celebrated for machined sharpness, corners you could feel were engineered. The Neo softens that deliberately. It’s not a revival of the iBook, and it’s not trying to be, but the instinct is similar: a consumer Mac that feels a little more like it belongs to you. The notch is also gone, making this the first notchless MacBook since 2020, which quietly tidies up the one thing that made recent Airs feel slightly unfinished.

The repairability angle is actually a design story too

One thing that got a little buried under the color conversation: the Neo is the most repairable Mac laptop in years, and that’s partly a design decision worth noting. Teardowns showed the whole machine coming apart in just a few minutes using standard Torx screws throughout. No tape, no adhesive, anywhere inside. That’s a first for a modern Mac. The USB-C ports, speakers, and headphone jack are all modular. The keyboard can be replaced on its own, without swapping the entire top case, which on the MacBook Air currently costs over $370 in parts.

The internal simplicity isn’t accidental. The A18 Pro chip runs so efficiently that the Neo needs no fan at all, which removes a whole layer of thermal engineering that usually clutters a laptop’s interior. The result is a cleaner, more logical internal layout. Whether Apple arrived here from genuine design philosophy or from regulatory pressure (the EU’s right-to-repair push has been building for years) is an open question, but the outcome is real either way.

What it doesn’t fix, and what might come next

It’s not all sunshine and rainbows, of course. The base model has 8GB of non-upgradable RAM, one USB-C port runs at USB 2.0 speeds, and there’s no backlit keyboard. These are calculated trade-offs for the price point, not mistakes, but they matter depending on what you actually need the machine for. And repairability, for all the justified enthusiasm, is still partial: the RAM and storage are fixed at purchase, just like every other current Mac.

Still, the Neo feels like Apple designing for a specific person it had previously ignored: someone who was never going to spend $1,000 on a MacBook Air and wasn’t particularly well served by anything else Apple made. The color, the softer form, the price, the clean internals, all of it points at the same person. What’s genuinely interesting is whether any of this travels upmarket. If a future MacBook Air gets a color story this confident, the Neo might end up looking less like an entry-level product and more like Apple quietly figuring out what comes next.


Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It

Mark Zuckerberg changed his company’s name to Meta in October 2021 because he believed the future was virtual. Not just sort-of virtual, like Instagram filters or Zoom calls, but capital-V Virtual: immersive 3D worlds where you’d work, socialize, and live a parallel digital life through a VR headset. Four years and roughly $70 billion in cumulative Reality Labs losses later, Meta is quietly dismantling that vision. In January 2026, the company laid off around 1,500 people from its metaverse division, shut down multiple VR game studios, killed its VR meeting app Workrooms, and effectively admitted that the grand bet on virtual reality had failed. Investors barely blinked. The stock went up.

The official line now is that Meta is pivoting to AI and wearables. Zuckerberg spent much of 2025 building what he calls a “superintelligence” lab, hiring top-tier AI talent with eye-watering compensation packages that are now one of the largest drivers of Meta’s 2026 expense growth. The company released Llama models that benchmark decently against OpenAI and Google, embedded chatbots into WhatsApp and Instagram, and talks constantly about “AI agents” and “new media formats.” But from a product and profit perspective, Meta’s AI strategy looks suspiciously like its metaverse strategy: lots of spending, vague promises, and no breakout consumer experience that people actually love. Meanwhile, the thing that is quietly working, the thing people are buying and using in the real world, is a pair of $300 smart glasses that Meta barely talks about. If this sounds like a pattern, that’s because it is. Meta has now misread the future twice in a row, and both times the answer was hiding in plain sight.

The Metaverse Was a $70 Billion Fantasy

Reality Labs has been hemorrhaging money since late 2020. As of early 2026, cumulative operating losses sit somewhere between $70 and $80 billion, depending on how you slice the quarters. In the third quarter of 2025 alone, Reality Labs posted a $4.4 billion loss on $470 million in revenue. For 2025 as a whole, the division lost more than $19 billion. These are not rounding errors or R&D investments that will pay off next year. These are structural losses tied to a product category, VR headsets and metaverse platforms, that the market simply does not want at the scale Meta imagined.

The vision sounded compelling in a keynote. You would strap on a Quest headset, meet your coworkers in a virtual conference room with floating whiteboards, then hop over to Horizon Worlds to hang out with friends as legless avatars. The problem was that almost no one wanted to do any of that for more than a demo. VR remained a niche gaming platform with occasional fitness and entertainment use cases, not the next paradigm shift in human interaction. Zuckerberg kept insisting the breakthrough was just around the corner. He was wrong, and the January 2026 layoffs and studio closures were the formal acknowledgment that Reality Labs as originally conceived was dead.

The irony is that Meta actually had a potential killer app inside Reality Labs, and it murdered it. Supernatural, a VR fitness game whose studio Meta acquired for roughly $400 million in 2023, was one of the few pieces of Quest software that generated genuine user loyalty and recurring revenue. People who used Supernatural regularly described it as the most effective home workout they had ever done, combining rhythm-based gameplay with full-body movement in a way that treadmills and Peloton bikes could not replicate. It had a subscription model, a dedicated community, and real retention. In January 2026, Meta moved Supernatural into “maintenance mode,” which is corporate speak for “we fired almost everyone and it will get no new content.” If you are trying to prove that VR has mainstream utility beyond gaming, fitness is one of the most obvious wedges. Meta had that wedge, and it chose to kill it in the same round of cuts that shuttered studios working on Batman VR games and other prestige titles. The message was clear: Zuckerberg had lost interest in Quest, even the parts that worked.

The AI Bet That Looks Like the Metaverse Bust 2.0

After spending years insisting the future was virtual worlds, Meta pivoted hard to AI in 2023 and 2024. Zuckerberg now talks about AI the way he used to talk about the metaverse: with sweeping language about paradigm shifts and transformative platforms. The company stood up an AI division focused on building what it calls “superintelligence,” hired aggressively from OpenAI and Anthropic, and made technical talent compensation the second-largest contributor to Meta’s 2026 expense growth behind infrastructure. This is not a side project. Meta is spending billions on AI research, training, and deployment, and Zuckerberg expects losses to remain near 2025 levels in 2026 before they start to taper.

From a technical standpoint, Meta’s AI work is solid. The Llama family of models is legitimately competitive with GPT-4 class systems and has found real adoption among developers who want open-source alternatives to OpenAI and Google. Meta’s internal AI is also driving real business value in ad targeting, content ranking, and moderation. Those systems work, and they contribute directly to Meta’s core revenue. But from a consumer product perspective, Meta’s AI feels scattered and often unnecessary. The company has embedded “Meta AI” chatbots into WhatsApp, Instagram, Messenger, and Facebook, none of which feel like natural places for a chatbot. Instagram’s feed is increasingly stuffed with AI-generated images and engagement bait that users actively complain about. Meta has launched character-based AI bots tied to influencers and celebrities, and approximately no one uses them. The gap between “we have impressive models” and “we have a product people love” is enormous, and it is the exact same gap that sank the metaverse.

What Meta is missing, again, is product intuition. OpenAI built ChatGPT and made it feel like the future because the interface was simple, the use cases were obvious, and it delivered consistent value. Google integrated Gemini into Search and productivity tools where users were already working. Meta, by contrast, seems to be throwing AI at every surface it controls and hoping something sticks. Zuckerberg talks about “an explosion of new media formats” and “more interactive feeds,” which in practice means more algorithmic slop and fewer posts from people you actually know. Analysts are starting to notice. One Bernstein note from early 2026 argued that the criteria for “winning” in AI are shifting from model quality to product usage, which is a polite way of saying that having a great model does not matter if your product is annoying. Meta has a great model. Its products are annoying.

The financial picture is also murkier than Meta would like to admit. Reality Labs is still losing close to $20 billion a year, and while AI is not a separate reporting segment, the talent and infrastructure costs are clearly rising. Meta’s overall revenue growth is strong, driven by advertising, but the company is not yet showing a clear path to AI profitability outside of ad optimization. That puts Meta in the awkward position of having pivoted from one unprofitable moonshot (metaverse) to another potentially unprofitable moonshot (consumer AI products) while the actual profitable parts of the business, social ads and engagement, keep the lights on. This is a pattern, and it is not a good one.

The Smart Glasses Lead That Meta Is Poised to Lose

Meta talks about the Ray-Ban smart glasses constantly. Zuckerberg calls them the “ultimate incarnation” of the company’s AI vision, and the pitch is relentless: sales more than tripled in 2025, the glasses represent the future of ambient computing, this is the post-smartphone platform. The problem is not that Meta is ignoring the glasses. The problem is that Meta is about to squander a massive early lead, and the competition is closing in fast. 2026 is shaping up to be a blockbuster year for smart glasses. Samsung confirmed its AR glasses are launching this year. Google is releasing its first pair of smart glasses since 2013, an audio-only pair similar to the Ray-Ban Meta glasses. Apple is reportedly pursuing its own smart glasses and shelved plans for a cheaper Vision Pro to prioritize the project. Meta dominated VR because it was early, cheap, and had no real competition. In smart glasses, that window is closing fast, and the field is getting crowded with all kinds of names, from smaller players like Looktech and Xgimi’s MemoMind to mid-sized brands like Xreal, to even larger ones like Google, TCL, and Xiaomi.

The Ray-Ban Meta glasses work because they are simple and focused. They take photos and videos, play music, make calls, and provide real-time answers through an AI assistant. Parents use them to record their kids hands-free. Travelers use them for translation. The form factor, actual Ray-Ban Wayfarers that cost around $300, means they do not scream “I am wearing a computer on my face.” This is the rare Meta hardware product that feels intuitive rather than forced, and it is selling because it solves boring, everyday problems without requiring users to change their behavior.

Then Meta made a critical mistake. To use the glasses, you have to route everything through the Meta AI app, which means you cannot get the most out of the hardware without engaging with Meta’s AI-slop ecosystem. Want to access your photos? Meta AI. Want to tweak settings? Meta AI. The app is the mandatory gateway, and it is stuffed with the same kind of algorithmic recommendations and AI-generated suggestions that clutter Instagram and Facebook. Instead of letting the glasses be a clean, utilitarian tool, Meta is using them as another vector to push its AI products. Google and Samsung are not going to make that mistake. Their glasses will integrate with Android XR and existing ecosystems without forcing users into a single AI app. Apple, if and when it launches, will almost certainly take a similar approach: clean hardware, seamless OS integration, optional AI features. Meta had a head start, Ray-Ban branding, and a product people actually liked. It is on track to waste all of that by prioritizing AI evangelism over product discipline, and the competition is going to eat its lunch.

What Happens When You Chase Narratives Instead of Products

The pattern across metaverse and AI is that Meta keeps betting on big, abstract visions rather than iterating on the things that work. Zuckerberg is a narrative-driven founder. He wants to define the future, not respond to it. That impulse gave us Facebook in 2004, when no one else saw the potential of real-identity social networks, but it has led Meta astray repeatedly in the 2020s. The metaverse was a narrative, not a product. The idea that billions of people would strap on headsets to work and socialize in 3D was always more science fiction than product roadmap, but Zuckerberg committed so hard to it that he renamed the company.

AI feels like the same mistake. The narrative is that foundation models and “agents” will transform every part of computing, and Meta wants to be seen as a leader in that transformation. The actual products, chatbots in WhatsApp and AI-generated feed content, do not meaningfully improve the user experience and in many cases make it worse. Meanwhile, the thing that is working, smart glasses, does not fit cleanly into the AI or metaverse narrative, so it gets less attention and investment than it deserves. Meta’s 2026 strategy, “shifting investment from metaverse to wearables,” is a tacit admission of this, but it is couched in language that still emphasizes AI rather than the hardware itself.

The other pattern is that Meta is willing to kill its own successes if they do not fit the broader narrative. Supernatural, Meta’s hit VR fitness game, was working. It had subscribers, retention, and cultural momentum within the VR fitness community. It was also a relatively small, specific product rather than a platform play, and that made it expendable when Meta decided to scale back Reality Labs. The same logic applies to Quest more broadly. The headset had carved out a niche in gaming and fitness, and with sustained investment in content and ecosystem development, it could have grown into a meaningful adjacent business. Instead, Meta is deprioritizing it because Zuckerberg has decided the future is AI and lightweight wearables. That might turn out to be correct, but the way Meta is executing the pivot, by shuttering studios and putting products in maintenance mode rather than spinning them out or finding partners, suggests a lack of product discipline.

Why Smart Glasses Might Actually Be the Next Facebook

If you step back and ask what Meta is actually good at, the answer is not virtual reality or language models. Meta is good at building social products with massive scale, capturing and distributing content, and monetizing attention through ads. The Ray-Ban Meta glasses fit all of those strengths. They make it easier to capture photos and video, which feeds into Instagram and Facebook. They use AI to provide contextual information, which ties into Meta’s model development. And they are a physical product that people wear in public, which is a form of distribution and branding that Meta has never had before.

The bigger story is that smart glasses as a category are exploding, and Meta happened to be early. It is not just Samsung, Google, and Apple entering the space. Meta itself is expanding the Ray-Ban line with the Display model (which adds a heads-up display) and partnering with Oakley on the HSTN, a sportier model aimed at action sports. Google is teaming up with Warby Parker for its glasses, which gives it instant credibility in eyewear design. And then there are the startups: Even Realities, Xiaomi, Looktech, MemoMind, and dozens more, all slated for 2026 releases. This feels exactly like the moment AirPods sparked the true wireless earbud movement. Apple defined the format, then everyone from Samsung to Sony to no-name brands flooded the market, and now you can buy ANC earbuds from HMD for $28. Smart glasses are following the same trajectory, which means the form factor itself is validated, and Meta’s early lead matters less than whether it can keep iterating faster than everyone else.

The other underrated piece is that having an instant camera on your face is genuinely useful in ways that VR headsets never were. People are using Ray-Ban Meta glasses as GoPro alternatives while skateboarding, cycling, and doing action sports, because POV capture without holding a phone or mounting a camera is frictionless. Content creators are using them to shoot hands-free B-roll at events like CES. Parents are using them to record their kids playing without the weird “I am holding my phone up at the playground” vibe. Pet owners are capturing spontaneous moments with dogs and cats that would be impossible to get with a phone. These are not sci-fi use cases or metaverse fantasies. They are boring, real-world problems that the glasses solve immediately, and that is why they are selling. Meta has spent a decade chasing grand visions of the future, and it accidentally built a product that people want right now. The challenge is whether it can resist the urge to over-complicate it before Google, Samsung, and Apple catch up.

The Real Lesson Is About Focus

Meta has spent the last five years chasing grand visions, first the metaverse and now AI, while neglecting the products that actually work. The Ray-Ban Meta glasses are proof that when Meta focuses on solving real problems with tangible products, it can still build things people want. The metaverse failed because it was a solution in search of a problem, and the AI push is struggling because Meta is shipping features rather than products. Smart glasses, by contrast, are succeeding because they make everyday tasks easier without requiring users to change their behavior or buy into a futuristic narrative.

If Zuckerberg can internalize that lesson, Meta might actually have a shot at owning the next platform. But that requires a level of product discipline and restraint that Meta has not shown in years. It means resisting the urge to turn every product into a platform, admitting when a bet has failed rather than pouring another $10 billion into it, and focusing on iteration over narration. The irony is that Meta already has the right product. It just needs to stop looking past it.

The post Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It first appeared on Yanko Design.

Top 5 Ways E Ink Displays Are Transforming Modern Design

Traditional screens have become the digital equivalent of energy vampires, constantly draining batteries while bombarding our eyes with harsh blue light that leaves us squinting and tired. LCD and OLED displays demand constant power to maintain their bright, flashy visuals, creating a world where we’re always hunting for charging cables and dealing with screens that become unreadable the moment we step into sunlight.

E Ink displays offer a refreshingly different approach to this screen fatigue problem. By mimicking the look and feel of actual ink on paper, this technology flips the script on what we expect from digital displays. E Ink dominates the ePaper market, though other electronic paper technologies exist alongside it. The result feels like reading a book instead of staring at a glowing rectangle.

What Makes E Ink Different

Unlike traditional displays that blast light at your face, E Ink reflects ambient light just like a printed page would. The technology uses tiny microcapsules filled with charged particles that rearrange themselves to form text and images. Once an image appears, it stays there without using any power at all, which explains why e-readers can last for weeks on a single charge.

The benefits extend far beyond just battery life. E Ink displays remain perfectly readable in bright sunlight, where your smartphone screen would become a useless mirror. The flexible nature of the technology means displays can bend, curve, and even fold without breaking. For designers tired of working around the rigid constraints of glass screens, E Ink opens up entirely new possibilities.

Designer: Montblanc

The Current Limitations

E Ink comes with certain trade-offs that designers need to understand. Colors remain somewhat muted compared to the vibrant displays we’re used to, though recent advances have brought more life to ePaper screens. Refresh rates are slower, so you won’t be watching Netflix on an E Ink display anytime soon. Large panels can still be pricey, though costs keep dropping as production scales up.

These constraints haven’t stopped designers from finding creative ways to harness E Ink’s strengths. Smart product teams have learned to work within these limitations, focusing on applications where the technology’s benefits far outweigh its drawbacks. The results often surprise people with their elegance and practicality, proving that constraints can spark innovation.

Designer: BOOX

Five Industries Embracing E Ink Innovation

The real magic happens when you see E Ink displays in action across different industries. Each sector has found unique ways to leverage the technology’s strengths, creating products that simply wouldn’t be possible with traditional screens. Here are five concrete examples that show how E Ink is changing the design game.

Laptops: Your Lid Becomes a Canvas

Designer: ASUS

Laptop lids have been boring black rectangles for decades, but E Ink is changing that in fascinating ways. ASUS’s Project Dali concept turns the back of your laptop into a customizable display where you can showcase artwork, display your calendar, or show off your company logo during meetings. It’s like having a digital tattoo for your computer that changes whenever you want it to.

Designer: Lenovo

Lenovo took this concept to market with their ThinkBook 13x Gen 4 SPE, which features an actual E Ink display built into the lid. You can switch between personal artwork during coffee breaks and professional branding during client presentations. The display sips so little power that it barely affects battery life, yet it transforms your laptop from anonymous tech into a personal statement piece.

Transportation: Solar-Powered Information That Actually Works

Public transit signs have always been a nightmare to power and maintain, especially at remote bus stops without electrical connections. Boston’s MBTA solved this problem elegantly by deploying solar-powered E Ink signs throughout the city’s bus stops and Green Line stations. These displays show real-time arrival information, service alerts, and schedules without requiring a single wire to be run.

Designer: MBTA, E Ink

The beauty of these installations becomes obvious during New England winters, when the signs keep working despite snow, ice, and sub-zero temperatures. Solar panels provide enough juice to keep the displays running continuously, while the E Ink technology ensures perfect readability whether you’re squinting through morning glare or trying to read in dim evening light.

Makers: DIY Dreams Made Accessible

The maker community has embraced E Ink displays with the enthusiasm typically reserved for new Arduino boards or 3D printing breakthroughs. Waveshare offers dozens of different E Ink modules that work seamlessly with Raspberry Pi, Arduino, and other popular platforms. Suddenly, creating a custom weather station or smart home dashboard doesn’t require a computer science degree or a massive budget.

Designer: Waveshare

Hobbyists use these displays to build everything from digital art installations to battery-powered information kiosks that can run for months without maintenance. The paper-like appearance means these creations blend naturally into homes and offices, avoiding the harsh, obviously digital look of traditional screens. It’s democratized display technology in ways that would have seemed impossible just a few years ago.
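Most hobby-grade E Ink panels, including the Waveshare modules mentioned above, accept a 1-bit black/white framebuffer, so the first step in almost any DIY project is dithering a grayscale image down to pure black and white. The vendor driver call itself varies by panel model, so the sketch below stops at the dithering step; it is a minimal, pure-Python illustration of Floyd-Steinberg error diffusion, with all names illustrative rather than taken from any particular library.

```python
# Floyd-Steinberg dithering: reduce 8-bit grayscale to the 1-bit
# black/white buffer that typical E Ink panels expect. Pure-Python
# sketch; a real project would hand the result to the panel vendor's
# display driver, which is omitted here because it varies per model.

def dither_to_1bit(pixels, width, height):
    """pixels: flat row-major list of grayscale values 0-255.
    Returns a flat list of 0 (black) or 1 (white) per pixel."""
    buf = [float(p) for p in pixels]  # working copy accumulates error
    out = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = buf[i]
            new = 255.0 if old >= 128 else 0.0  # threshold at mid-gray
            out[i] = 1 if new else 0
            err = old - new
            # Diffuse the quantization error to unprocessed neighbours
            # using the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < width:
                buf[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    buf[i + width - 1] += err * 3 / 16
                buf[i + width] += err * 5 / 16
                if x + 1 < width:
                    buf[i + width + 1] += err * 1 / 16
    return out
```

Feeding a uniform mid-gray region through this produces an even scatter of black and white pixels that reads as gray from arm's length, which is exactly how photos end up looking acceptable on a two-color panel.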

Fashion: Accessories That Change With Your Mood

Fashion has always been about self-expression, but E Ink takes personalization to an entirely new level. The Tago Arc bracelet demonstrates this beautifully, featuring a flexible E Ink display that lets you cycle through hundreds of different patterns using your smartphone. One moment you’re wearing geometric shapes, the next you’re sporting flowing organic patterns that match your outfit perfectly.

Designer: LIBR8TECH

The bracelet never needs charging because it draws power through NFC only when changing patterns. This means you get infinite customization without the hassle of yet another device to plug in every night. It’s the kind of accessory that makes people do double-takes, wondering how your jewelry just changed designs right before their eyes.

Consumer Electronics: Devices That Respect Your Attention

Designer: reMarkable

E Ink devices like the BOOX Note Max and reMarkable Paper Pro Move have created an entirely new category of devices focused on thoughtful interaction. These tablets feel remarkably similar to writing on paper, making them favorites among designers, writers, and anyone who takes handwritten notes seriously. The screens don’t strain your eyes during long reading sessions, unlike their LCD counterparts.

The BOOX Palma takes this concept in a different direction by creating a phone-sized E Ink device that looks and feels like a smartphone but focuses entirely on reading and productivity. This pocket-sized e-reader runs Android, giving you access to reading apps, note-taking tools, and basic communication functions without the distracting elements that make regular smartphones so addictive. It’s like carrying a digital book that happens to connect to the internet, perfect for people who want to stay connected without getting sucked into endless social media scrolling.

Accessibility Revolution

E Ink technology has become surprisingly accessible to individual designers and small companies over the past few years. Development kits and reference designs are readily available from multiple suppliers, while costs have dropped to levels that make experimentation feasible for creative projects and startup ventures. You no longer need deep pockets or specialized engineering knowledge to explore ePaper possibilities.

This democratization has accelerated innovation across multiple industries. Designers can prototype E Ink applications quickly and affordably, leading to creative solutions that might never have emerged from traditional corporate research and development cycles. The growing ecosystem of compatible components and software libraries continues to lower barriers while expanding creative possibilities for everyone.

Designer: Pedro Luraschi

Designer: Ashtf

Technical Progress Continues

Recent advances have addressed many of E Ink’s early limitations while opening up new application areas. Color reproduction has improved dramatically, though it still requires thoughtful design consideration. Refresh rates have increased enough to support interactive applications, while manufacturing improvements have reduced costs and increased reliability across the board.

Research into advanced ePaper technologies continues at a rapid pace. Flexible displays that can fold, roll, or stretch are becoming practical for commercial applications. Integration with touch sensors and other interactive elements keeps improving, making E Ink displays suitable for sophisticated user interface design that goes beyond simple text and images.

Designer: Sony (FES U Watch)

A Different Design Philosophy

E Ink represents a fundamentally different approach to digital interaction, one that prioritizes sustainability, comfort, and thoughtful engagement over flashy visuals and constant stimulation. This philosophy resonates with designers who want to create products that enhance human experience without competing aggressively for attention. The technology encourages restraint and purposefulness in ways that feel refreshing in our cluttered digital landscape.

Products built around E Ink often exhibit a deliberate, focused quality that stands out from the noise. The constraints imposed by the technology force designers to think carefully about essential functions and user needs, often resulting in elegant solutions. The influence of E Ink thinking extends beyond products that actually use the technology, shaping broader conversations about conscious design practices.

As E Ink continues to mature, these ideas will likely influence how we think about digital interaction across many different product categories and industries. The technology has already proven that displays don’t need to be bright, fast, and power-hungry to be effective. Sometimes the best solution involves stepping back from the latest and greatest to focus on what actually serves people well.

Designer: E Ink

The post Top 5 Ways E Ink Displays Are Transforming Modern Design first appeared on Yanko Design.

The iPhone Air is NOT a precursor to Apple Glasses… Here’s why

I recently read a Digital Trends piece arguing that the ‘iPhone Air is setting us up for Apple Smart Glasses’, and I couldn’t help but think about how journalists who look at the iPhone Air have one of two reactions. One group of bloggers believes this particular launch is just a stepping stone to a foldable phone… while the other group, marveling at how all the computing of the iPhone Air exists inside the bump, believes this is actually leading to Apple building smart glasses. The latter are wrong, but before I tell you my spicy take, let me just preface by declaring that Apple almost certainly could be working on both foldables and smart glasses. I just don’t think the iPhone Air is leading to Apple Glasses – because there’s already a device that’s been leading to it. The Watch.

Something about Apple launching a new product really makes journalists lose all sense of objectivity. I’m not being rude, I’m saying this because I’ve found myself doing this too. I was genuinely excited when Apple unveiled the Touch Bar, the Dynamic Island, and Camera Control. Each felt ground-breaking for precisely 4 minutes before I reminded myself… the Touch Bar was first put on a Lenovo laptop 2 years before Apple, the Dynamic Island is still larger than most hole-punch cameras, and the Camera Control, while great, doesn’t beat the innovation that Sony had in its ‘camera phone’ era. I’m not dunking on Apple, but hear me out – it’s impressive how Apple managed to fit an entire smartphone into the iPhone Air’s camera bump, but Apple’s done this before – the Apple Watch is essentially a computer crammed into a wristwatch. Saying the iPhone Air is building up to smart glasses means completely ignoring all of Apple’s work in the Watch category.

Will the iPhone Air’s innovations lead to wearable breakthroughs? Absolutely. The Watch’s heart rate monitor led to breakthroughs in heart-tracking tech that made it to the AirPods Pro 3. The Center Stage camera on the Mac made it to the iPhone. Innovation always travels in multiple directions. But nobody looked at the heart rate monitor on the Apple Watch and thought, wow, this is definitely going to go into my ear someday.

The point is, Apple’s been on track for making powerful wearable devices. The Watch is essentially a computer that’s only limited by its chipset and OS. Bump the S-series chipset’s capabilities to match the A-series and the smartwatch essentially becomes a powerful computing device. It’s already got a gyroscope, it has a battery, tracks health, is ridiculously water-resistant, and could easily pack a camera if you remove the entire sensor array on the bottom for calculating blood oxygen, wrist temperature, respiratory rate, and heart rate variability.

So what’s the iPhone Air hinting at? To be honest, I think the most logical conclusion is a foldable. Apple’s built the iPhone Air to be ridiculously strong, thin, and capable of matching up to the performance of regular flagships. The iPhone Air’s thin design still has a massive battery, which obviously doesn’t port to a pair of smart glasses. Digital Trends asks this exact same question, wondering if Apple’s glasses will have a tethered battery pack like the Vision Pro. But then again, this is exactly what I find so amusing – journalists forgetting that Apple’s been making powerful computing devices with tiny batteries. All you need to do is look at your wrist!

The post The iPhone Air is NOT a precursor to Apple Glasses… Here’s why first appeared on Yanko Design.
