
BEYOND Expo 2026: Asia’s Biggest Tech Event Just Told the World That AI Software Was Only the Warm-Up

By: Sarang Sheth
May 10, 2026 at 23:30

Every major tech conference eventually finds its thesis statement. CES landed on “everything is connected.” SXSW staked out culture-meets-technology. BEYOND Expo’s thesis for 2026 is more specific, and honestly more timely: AI has spent years proving itself in software, and the interesting question now is what happens when it leaves the screen. The official theme, “AI: Digital to Physical,” takes over from last year’s “Transforming Uncertainty into a Trigger for Innovation.” Timed perfectly against the global speculation that AI is a bubble, it’s a genuine reflection of where the most consequential AI work is actually happening right now: in robotics labs, automotive platforms, wearables, and manufacturing floors across the Greater Bay Area.

BEYOND has been building toward this moment since Dr. Lu Gang launched it during a global lockdown in 2021, a decision he’s called delusional in hindsight during an interview with Yanko Design, but with the kind of grin that says he’d do it again. The original problem he was solving was simpler than people realize: Asia’s most interesting founders kept showing up at CES and Web Summit as attendees rather than headliners. A hardware startup out of Shenzhen with genuinely world-class AI chops would get a 3×3 booth on a back wall while the stage went to the usual suspects. BEYOND was built to fix that imbalance, and five years in, it’s working.

Click here to learn more about BEYOND Expo 2026.

The 2026 edition is aiming for 30,000 attendees, a significant jump from 2024’s 20,000, and the programming reflects a maturing event that knows its own strengths. The summit lineup spans Humanoid Robotics and Embodied AI, Enterprise Agentic Workflows, Autonomous Driving, AI-Integrated Wearables, and a PayFi and Decentralized AI track that will either feel prescient or premature depending on your priors. What ties all of it together is the through-line of AI becoming something you interact with physically, not just through a chat interface. That’s a meaningful editorial choice, and one that puts BEYOND in a different conversation than conferences still treating large language models as the whole story.

The most interesting addition this year is the GBA Innovation Tour, which gives international attendees direct access to Greater Bay Area manufacturing infrastructure. This matters more than it might sound. Lu Gang has argued for years that what makes Asia’s tech ecosystem genuinely different isn’t just the innovation pipeline, it’s the compression of the distance between idea and physical product. Watching an AI concept move from prototype to production in a Shenzhen facility in weeks rather than months is something you can describe in a keynote, but apparently you need to see it to really understand the scale and speed involved. The tour is BEYOND’s way of making that argument visceral rather than theoretical.

Last year’s theme, “Unveiling Possibilities,” was about reframing uncertainty as creative fuel, which was the right message for a chaotic moment. “AI: Digital to Physical” is more declarative, more confident. It names a specific transition that the industry is mid-stride through, and plants BEYOND squarely in the middle of it. Registration and exhibition details are live at beyondexpo.com.



The Roomba Guy Just Built a Robot Pet You Might Actually Love

By: Ida Torres
May 9, 2026 at 22:30

If you’ve ever watched your Roomba bump helplessly into a chair leg for the third time and thought, “I deserve better from my robots,” you’re not alone. Apparently, Colin Angle thought so too. The co-founder of iRobot, the man who essentially put a hockey puck-shaped vacuum in millions of homes, left the company in 2024 with a new question rattling around in his head: what if a robot could actually feel like it cares about you? The answer is the Familiar, the first prototype from his new startup, Familiar Machines & Magic. And it is not your average robot.

Picture a creature somewhere between a soft-eared dog and a round, slightly abstract bear. It has four legs, huge paws, and doe eyes that make it immediately charming in a way that no Roomba ever attempted to be. It’s furry, expressive, and was designed with the help of former Disney Imagineers, which explains why it looks like it belongs in an animated feature rather than a tech showcase. The Familiar has 23 degrees of freedom, meaning it can wiggle its ears, tilt its head, and wag a small nub of a tail with the kind of fluidity that feels less mechanical and more… alive. Its coat is touch-sensitive, built specifically to encourage physical interaction between you and it.

Designer: Familiar Machines & Magic

It also doesn’t talk. That detail feels deliberate and, to me, very smart. Voice assistants have trained us to think of robots as tools we command. The Familiar is going for something completely different. It’s designed to read your tone of voice, your body language, your overall energy, and respond accordingly. Angle calls it “Consumer Physical AI,” and while the label sounds like something off a product white paper, the idea behind it is genuinely compelling.

The name itself is worth noting. A “familiar” in folklore refers to the supernatural animal companion of a witch or magical figure, a creature bonded to a person not through ownership but through genuine connection. Angle’s team chose that name deliberately, and I think it sets the tone for what they’re trying to build. The goal isn’t to sell you a novelty gadget. It’s to create a new kind of relationship between humans and machines, one built on trust, attentiveness, and something approaching care.

Now, I’ll be upfront: I have feelings about this. Part of me finds it genuinely beautiful as a design concept. The Familiar was clearly approached the way good industrial design should be, with deep thought about how an object makes you feel, not just what it does. The choice to make it animal-like rather than humanoid is interesting, too. There’s far less of the uncanny valley unease that tends to follow humanoid robots around, and more of the universal warmth that most people already extend toward animals.

But another part of me wonders about the emotional stakes here. We’re already watching people form attachments to AI chatbots. A touch-sensitive, furry, expressive robot that mirrors your emotional state is a much more potent version of that. Angle has said he wants it to feel like the machine actually cares about him. That’s a lovely vision. It’s also a design brief that puts enormous responsibility on the creators to get it right, because the flip side of emotional bonding is emotional dependence.

Still, I’d be lying if I said the Familiar didn’t make me curious in the best possible way. The prototype images are almost disarmingly sweet. It looks like something you’d want sitting on the couch next to you while you read, or settled quietly in the corner while you work. If any robot was ever designed to move through your life rather than just function within it, this might be it.

The Familiar is still in the prototype stage, with no confirmed price or release date. But as debut concepts go, it’s a strong one. Whether or not it ever makes it into our homes, it raises questions about what we actually want from the machines we live with. And those questions feel well overdue.


AI-powered earbuds with built-in camera expand your capabilities in the real world

By: Gaurav Sood
May 8, 2026 at 15:20

Headphones and earbuds have, over the last couple of years, become staples of this fast-paced world for good reason. These little audio essentials can handle a multitude of tasks with just the push of a tactile button or a pinch gesture. They can already summon smart assistant support, and if this concept were to become reality, they could give you a pair of smart eyes, too.

The idea of a pair of earbuds with integrated cameras is not new; Emil Lukas imagined one before, and now another concept reinforces the merit of putting a pair of lenses on earbuds. Dubbed Lightwear, the earbuds look like something straight out of a sci-fi flick, but underneath, they are a pair of smart assistant earphones that enhance your environmental perception in real time.

Designer: Suosi Design

Touted as the world’s first AI-powered earphones, Lightwear pairs a set of HD camera lenses with a vision module that interprets the surroundings in real time. You can ask the buds, via voice command, for whatever information you need, and they respond with detailed instructions or navigation guidance. With gesture control on board, the buds can also operate connected home devices remotely. All the data fed into the system is end-to-end encrypted and stored locally, and for added privacy and protection, sensitive data is automatically cleared on a scheduled cycle.

Compared to Emil’s version, these earbuds have a far more downplayed camera presence, which I prefer. They look and feel like any normal earbud, yet they offer something that sets them apart from most pairs with their predictable feature sets. Unlike other AI-powered earbuds, these live up to their billing, with the added visual apparatus delivering better results. The AI functions aren’t limited to the earbuds either; the charging case has them too, so the system isn’t only useful while the buds are being worn. Loaded with highly sensitive microphones, the AI features can be triggered whenever the user wants, and privacy is covered by an optional fingerprint unlock module that prevents unauthorized use.
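To make that privacy story concrete, here is a minimal sketch of how the described model (locally stored, encrypted captures, scheduled clearing of sensitive data, fingerprint-gated access) could fit together. It is purely illustrative; the class and method names are hypothetical and nothing here comes from Suosi Design.

```python
# Conceptual sketch of the Lightwear privacy model described above.
# All names are hypothetical; Suosi Design has not published an API.
import time
from cryptography.fernet import Fernet  # symmetric encryption as a stand-in for "end-to-end encrypted"

class LocalCaptureStore:
    def __init__(self, retention_seconds: int = 24 * 3600):
        self._key = Fernet.generate_key()          # key never leaves the device in this sketch
        self._cipher = Fernet(self._key)
        self._retention = retention_seconds
        self._entries: list[tuple[float, bytes]] = []   # (timestamp, encrypted blob)

    def save(self, capture: bytes) -> None:
        """Encrypt a captured frame or audio snippet and keep it locally."""
        self._entries.append((time.time(), self._cipher.encrypt(capture)))

    def purge_expired(self) -> int:
        """Scheduled cleanup: drop anything older than the retention window."""
        cutoff = time.time() - self._retention
        before = len(self._entries)
        self._entries = [(t, blob) for t, blob in self._entries if t >= cutoff]
        return before - len(self._entries)

    def read_all(self, fingerprint_ok: bool) -> list[bytes]:
        """Access is gated on a (stubbed) fingerprint check."""
        if not fingerprint_ok:
            raise PermissionError("fingerprint unlock required")
        return [self._cipher.decrypt(blob) for _, blob in self._entries]
```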

The buds have an over-the-ear design, reminiscent of the way IEMs sit flush against the ears. The battery resides in those over-ear lobes, and although the designers don’t specify usage time, these should last longer than typical TWS earbuds. There’s no specific word on sound quality, ANC levels, or app features either, but then this is just a concept centered on form factor and usability.


Forget Smarter AI, This Robot Thinks Presence Is the Point

By: Ida Torres
May 2, 2026 at 20:45

We keep building AI to do more. More answers, more speed, more certainty. Designer Mehrnaz Amouei looked at that trajectory and asked a fundamentally different question: what if we built AI to be more present instead? The result is POCO, a soft robotic companion that might be one of the most quietly radical design concepts to emerge in recent years. It doesn’t talk over you, doesn’t flood you with information, and it doesn’t pretend to know things it doesn’t know. POCO sits with you. Literally.

At its core, POCO is a soft, tactile object that pairs with a smartphone, which serves as its computational brain and face. A soft textile body wraps around the device, transforming rigid, glass-and-metal technology into something that moves, breathes, and gestures in response to your presence. Together, they create something that sits somewhere between object, creature, and companion, and that deliberate ambiguity is very much intentional. You’re not quite sure what to call it, and that’s entirely the point.

Designer: Mehrnaz Amouei

Amouei developed POCO through research at the University of Illinois at Chicago, grounding the project in studies on loneliness and trust. Her findings indicated that people don’t actually want AI that projects certainty or control. They want availability and responsiveness. They want something that shows up without taking over. From those findings came the concept of “constructive interdependence,” a design philosophy where POCO’s limitations aren’t bugs to be patched but features embedded directly into the interaction model itself. The robot communicates what it can and cannot do through its behavior and physical states, which is a level of honesty you don’t often get from technology that typically overpromises and underdelivers.

I think that matters more than it might initially seem. The dominant conversation around AI right now is almost entirely about expansion: more capability, more integration, more autonomy. POCO pushes back on that without being preachy about it. It reframes the question of what good AI design actually looks like, and the answer it offers isn’t “smarter,” it’s “more trustworthy.” That is a genuinely different value system, and it feels overdue.

The sustainability dimension is also worth paying attention to. Rather than introducing new hardware and generating more electronic waste, POCO repurposes a device most people already own. That decision isn’t just a nice bonus; it’s built into the concept from the start, aligning with the UN’s Sustainable Development Goals around mental well-being and responsible consumption. In product design terms, that means the project was developed with a broader cultural and environmental context in mind, not just a user persona sitting in a lab.

Physically, POCO responds to touch, movement, and environmental cues. It adapts to a user’s preferences while maintaining a consistent identity, which is a surprisingly nuanced balance to strike in any product, let alone one sitting at the intersection of soft robotics and emotional design. Because interaction happens through touch rather than voice commands or screen taps, there’s an intentional slowing down embedded in the experience. You can’t rush a tactile exchange the same way you can type faster or speak louder. That shift from speed to presence feels like a meaningful counter-proposal to how most tech is currently designed. We’ve grown so accustomed to interfaces that demand our attention that a device asking only for our company reads almost as radical.

POCO has already earned an Honorable Mention from the International Design Awards and drawn coverage from major design publications. Whether it ever moves into consumer production remains an open question. But as a design statement, it’s doing exactly what the best concept work should: prompting us to reconsider what we actually want from the technology we live with, and whether expanding capability was ever really the right goal. Maybe the most interesting AI isn’t the one that knows the most. Maybe it’s the one that knows when to just stay close.


A Tiny Pinwheel Is Doing What AI Giants Won’t

By: Ida Torres
April 7, 2026 at 16:20

Every time you type a prompt into ChatGPT, something happens somewhere far away. Servers spin up. Electricity moves. Carbon gets generated. The whole transaction is so clean and invisible on your end that it might as well not be happening. That’s by design, and it’s worth thinking about, because with the way we use technology these days, we seldom consider the consequences for our environment.

London-based creative studio Oio wants to change that, starting with a small 3D-printed box and a bright yellow pinwheel. Their project, the Hot Air Factory, is a domestic AI device that processes your questions and requests locally, without connecting to the cloud, and every time it thinks, it physically exhales. Hot air pushes out of the top of the device and spins that cheerful little pinwheel. The harder it thinks, the faster it spins. You’re watching computation happen in real time, which turns out to be a surprisingly powerful thing.

Designer: Oio

The concept is simple: make the invisible visible. We know AI uses energy. We’ve read the headlines. But knowing abstractly that data centers are energy-hungry is different from watching a pinwheel turn every time you ask your AI assistant to summarize something. One is a statistic. The other is a moment of honest accountability.

What makes the Hot Air Factory smart, beyond its obvious design appeal, is how it translates cost into human-readable terms. It doesn’t give you kilowatt-hours because most people have no idea what that means. Instead, it tells you something like “that prompt cost the equivalent of brewing a cup of tea” or “watching Netflix for five minutes.” Suddenly the math becomes personal. Suddenly you start wondering whether you really needed a 500-word AI response to a question you could have Googled.
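The conversion itself is just arithmetic: divide a prompt’s estimated energy use by the energy of the everyday activity you are comparing it to. A rough sketch of that translation, with the caveat that the per-activity constants below are assumed ballpark figures, not numbers published by Oio:

```python
# Rough sketch of translating a prompt's energy use into everyday equivalents.
# The constants are assumed ballpark figures, not Oio's published numbers.
WH_PER_CUP_OF_TEA = 30.0       # ~1.5 kW kettle running for a bit over a minute
WH_PER_MIN_OF_STREAMING = 1.3  # rough per-minute figure for video streaming

def describe_prompt_cost(prompt_wh: float) -> str:
    """Express a prompt's estimated energy use in human-readable terms."""
    teas = prompt_wh / WH_PER_CUP_OF_TEA
    streaming_min = prompt_wh / WH_PER_MIN_OF_STREAMING
    return (f"That prompt used roughly {prompt_wh:.1f} Wh: "
            f"about {teas:.2f} cups of tea, or {streaming_min:.1f} minutes of streaming.")

print(describe_prompt_cost(3.0))   # e.g. a heavier local-model response
```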

Oio co-founder Matteo Loglio describes it as “a small, domestic AI that reveals the hidden energy cost behind every prompt.” The factory also lets you dial up or down the level of intelligence it uses. Want a quick answer? Use a lighter model, spend less energy. Need something more complex? Crank it up, and watch that pinwheel work for it. You can even schedule your heavier prompts for the night shift, when energy is cleaner and the grid is quieter. These are design decisions that carry real ethical weight, and they’re baked in with zero condescension.
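That “dial the intelligence up or down” control, plus the night-shift scheduling, amounts to a small piece of routing logic. A hedged sketch of the idea, with hypothetical model names and energy figures (this is not Oio’s actual software):

```python
# Sketch of routing a prompt to a lighter or heavier local model, optionally
# deferring heavy work to off-peak hours when grid energy tends to be cleaner.
# Model names and energy estimates are assumptions for illustration.
from datetime import datetime, time as dtime

MODELS = {
    "light": {"name": "small-local-model", "est_wh_per_reply": 0.5},
    "heavy": {"name": "large-local-model", "est_wh_per_reply": 5.0},
}

def off_peak(now: datetime) -> bool:
    """Treat 23:00-06:00 as the cleaner, quieter 'night shift'."""
    return now.time() >= dtime(23, 0) or now.time() < dtime(6, 0)

def route_prompt(prompt: str, effort: str, defer_heavy: bool, now: datetime) -> dict:
    """Pick a model tier, and queue heavy work for off-peak hours if asked to."""
    tier = "heavy" if effort == "high" else "light"
    model = MODELS[tier]
    if tier == "heavy" and defer_heavy and not off_peak(now):
        return {"action": "queued", "prompt": prompt, "model": model["name"], "run_after": "23:00"}
    return {"action": "run_now", "prompt": prompt, "model": model["name"],
            "est_wh": model["est_wh_per_reply"]}

print(route_prompt("Summarize this report", effort="high",
                   defer_heavy=True, now=datetime(2026, 4, 7, 14, 30)))
```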

The playfulness and the seriousness aren’t in conflict here. They’re exactly the point. The Hot Air Factory is built in a Frutiger Aero visual language, all soft curves and clean optimism, the kind of aesthetic that makes you want to put it on a shelf next to your plants. But underneath that approachable exterior is a genuinely complicated machine running open-source large language models on a local GPU. It looks like something a friendly robot would carry. It functions like a small act of protest.

AI companies have very little incentive to make their energy costs legible to users. Invisibility is convenient. It keeps things frictionless. It keeps you prompting without thinking about the bill. A report from the US Department of Energy projected that by 2028, data centers could account for 12% of total electricity consumed in the US. That’s not a small number, and it keeps growing every time we treat AI like it runs on good intentions and cloud magic.

The Hot Air Factory isn’t saying AI is bad. It isn’t demanding you stop using it. What it’s doing is quieter and more persuasive than that. It’s asking you to look. To see. To feel, just a little, what your digital habits cost in the physical world. That’s the argument made not through a lecture or a campaign, but through a yellow pinwheel spinning in your living room.

Design can do that. Sometimes a small, well-made object says more than a policy paper ever could. The Hot Air Factory is currently looking for collaborators to help bring it to a wider audience, still working its way from experiment to something anyone can own. If the goal is conscious computing, the first step might just be this: a tiny box, a spinning fan, and the quiet discomfort of watching a machine breathe.


OMO X self-balancing electric scooter employs AI and Robotics to refresh urban riding experience

By: Gaurav Sood
March 25, 2026 at 00:30

Two-wheelers have always demanded a certain level of skill and balance from riders, especially at low speeds or when navigating crowded city streets. OMO X by Omoway attempts to change that equation by introducing what the company describes as the world’s first mass-produced self-balancing electric motorcycle. Designed around advanced robotics and artificial intelligence, the new-age electric bike blends traditional scooter convenience with autonomous technology that aims to make urban mobility easier and safer.

At the core of the two-wheeler is Omoway’s newly introduced OMO-ROBOT architecture, a full-stack control platform that integrates sensors, perception systems, decision-making software, and mechanical actuation into a unified framework. The system combines aerospace-grade gyroscope technology with reinforcement-learning models to continuously stabilize the motorcycle. This architecture allows the OMO X to maintain balance on its own, even when stationary, eliminating one of the biggest challenges riders face on two-wheeled vehicles.

Designer: Omoway

The balancing capability is achieved through a Control Moment Gyroscope (CMG) module. Using the principle of angular momentum, the spinning gyroscope actively stabilizes the vehicle, keeping it upright without rider input. Beyond simply preventing tip-overs, the system also supports a range of riding assistance features. These include slip prevention on wet surfaces, assistance while cornering, and obstacle-avoidance capabilities designed to enhance safety during everyday riding.
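The underlying control idea is well established in balance robotics: measure how far and how fast the vehicle is leaning, then command the gyroscope’s gimbal so that its reaction torque pushes the chassis back upright. The sketch below illustrates only that principle with a simple PD law; Omoway’s production controller pairs the CMG with reinforcement-learning models and is far more sophisticated, and every gain and limit here is an assumption.

```python
# Simplified sketch of a CMG-style balance loop: a PD controller on the lean
# angle commands a gimbal rate whose reaction torque rights the vehicle.
# Gains and limits are assumed; this is not Omoway's controller.
import math

KP, KD = 40.0, 8.0           # assumed proportional/derivative gains
MAX_GIMBAL_RATE = 2.0        # rad/s, assumed actuator limit

def gimbal_rate_command(lean_angle: float, lean_rate: float) -> float:
    """PD law: return a gimbal rate (rad/s) that counters the current lean."""
    u = KP * lean_angle + KD * lean_rate
    return max(-MAX_GIMBAL_RATE, min(MAX_GIMBAL_RATE, u))

# Example: the bike is leaning 3 degrees to the right and still falling slowly.
cmd = gimbal_rate_command(lean_angle=math.radians(3), lean_rate=0.1)
print(f"gimbal rate command: {cmd:.2f} rad/s")
```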

Omoway is also positioning the OMO X as a highly intelligent mobility device. The scooter incorporates a network of sensors and cameras that continuously monitor the surrounding environment and feed data into an AI-based riding system. This enables features such as adaptive speed adjustments, hazard detection, and automated safety responses if the system identifies a potential risk. Some demonstrations have even shown the scooter maneuvering on its own, driving onto a stage without a rider, and responding to remote commands through a smartphone app.

Another notable capability is automated parking. Instead of requiring riders to maneuver the bike into tight urban spaces manually, the OMO X can guide itself into a parking spot once a location is selected. The system relies on its self-balancing capability and onboard sensors to navigate safely, a feature that reflects the growing overlap between robotics and personal transportation.

The electric scooter’s futuristic design further reinforces its technological identity. Its sharp, angular styling and distinctive lighting signature give it a modern aesthetic that stands apart from traditional scooters. In a way, it carries the Tesla Cybertruck aesthetic, with a continuous front light bar replacing a conventional headlamp and creating a visually striking presence on the road.

Production plans for the OMO X are already underway. The company announced that the model has entered mass production following its global launch event in Singapore, with pre-orders expected to open in April 2026. Indonesia has been selected as the first launch market, where the electric scooter will debut commercially in Jakarta shortly afterward. Omoway is reportedly working with multiple regional distributors and plans to establish a dealer network of more than 100 locations in the country.


Rokid’s Smart Glasses Let You Pick Your AI: Gemini or ChatGPT

By: JC Torres
March 10, 2026 at 19:15

Most wearable tech that puts an AI assistant in your ear assumes you want only theirs. The earpiece, the speaker, the entire software stack, all funneled through one model chosen for you before you even open the box. Rokid’s latest update to the AI Glasses Style takes a different position entirely, turning the glasses into what is effectively an open platform where you pick the brain behind the voice.

The update makes the Style the first smart glasses to natively support Google’s Gemini, sitting alongside OpenAI’s ChatGPT, DeepSeek, and Alibaba’s Qwen in a unified interface. Users can toggle between them freely, reaching for Gemini for a quick Google Maps query one moment and switching to ChatGPT for something else entirely the next.
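In software terms, a “pick your brain” feature like this usually reduces to one request interface with interchangeable backends. A minimal sketch of that shape, using hypothetical names rather than anything Rokid has published:

```python
# Sketch of a unified assistant interface with swappable model backends, the
# kind of abstraction a "pick your AI" feature implies. Hypothetical names only.
from abc import ABC, abstractmethod

class AssistantBackend(ABC):
    @abstractmethod
    def ask(self, prompt: str) -> str: ...

class GeminiBackend(AssistantBackend):
    def ask(self, prompt: str) -> str:
        return f"[Gemini] answer to: {prompt}"     # placeholder, no real API call

class ChatGPTBackend(AssistantBackend):
    def ask(self, prompt: str) -> str:
        return f"[ChatGPT] answer to: {prompt}"    # placeholder, no real API call

class Glasses:
    def __init__(self):
        self._backends = {"gemini": GeminiBackend(), "chatgpt": ChatGPTBackend()}
        self._active = "gemini"

    def toggle(self, name: str) -> None:
        """Two-finger tap / settings toggle: switch the active model."""
        if name not in self._backends:
            raise ValueError(f"unknown backend: {name}")
        self._active = name

    def ask(self, prompt: str) -> str:
        return self._backends[self._active].ask(prompt)

glasses = Glasses()
print(glasses.ask("Navigate to the nearest cafe"))
glasses.toggle("chatgpt")
print(glasses.ask("Draft a reply to this email"))
```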

Designer: Rokid

The glasses themselves debuted at CES 2026 in January, and the hardware makes a reasonable case for the category. At 38.5 grams, with a TR90 frame and titanium alloy hinges, they sit closer to a regular pair of prescription glasses than anything resembling a prototype. The frame takes prescription lenses directly, with a fitting service starting at $79, including photochromic options in over 200 colors that darken within 25 seconds.

Powering the AI and imaging workload is a dual-chip setup: an NXP RT600 handles always-on, low-power tasks, while a Qualcomm AR1 manages heavier processing. The same Qualcomm chip is in Meta’s Ray-Ban glasses, though the battery life here runs to 12 hours, noticeably longer than Meta’s. A 12MP Sony-sensor camera sits at the bridge, capturing 4K stills and 3K 30fps video with up to 10 minutes of continuous recording. A privacy indicator light signals to people nearby when the camera is active.

Audio comes through directional AAC speakers built into the temples, focused toward the ears with minimal bleed. The AI interaction itself works through a two-finger tap to summon any of the four models, head gestures for call management, and voice prompts in 12 supported languages. Real-time translation, navigation, photo recognition, and AI-generated meeting summaries are all part of the feature set, fed through whichever model the user has selected.

For anyone already oriented around a specific AI assistant, the practical appeal is straightforward. Someone in Google’s ecosystem gets Gemini in their glasses without compromise; someone who prefers ChatGPT for writing picks that instead. At $299 to start, with a lens fitting service folding in prescription and photochromic options, the Style has cleared 15,000 units sold ahead of its formal global rollout, which is a reasonable early signal for a category still working out what it wants to be.

