

Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It

By: Sarang Sheth
January 31, 2026 at 02:45

Mark Zuckerberg changed his company’s name to Meta in October 2021 because he believed the future was virtual. Not just sort-of virtual, like Instagram filters or Zoom calls, but capital-V Virtual: immersive 3D worlds where you’d work, socialize, and live a parallel digital life through a VR headset. Four years and roughly $70 billion in cumulative Reality Labs losses later, Meta is quietly dismantling that vision. In January 2026, the company laid off around 1,500 people from its metaverse division, shut down multiple VR game studios, killed its VR meeting app Workrooms, and effectively admitted that the grand bet on virtual reality had failed. Investors barely blinked. The stock went up.

The official line now is that Meta is pivoting to AI and wearables. Zuckerberg spent much of 2025 building what he calls a “superintelligence” lab, hiring top-tier AI talent with eye-watering compensation packages that are now one of the largest drivers of Meta’s 2026 expense growth. The company released Llama models that benchmark decently against OpenAI and Google, embedded chatbots into WhatsApp and Instagram, and talks constantly about “AI agents” and “new media formats.” But from a product and profit perspective, Meta’s AI strategy looks suspiciously like its metaverse strategy: lots of spending, vague promises, and no breakout consumer experience that people actually love. Meanwhile, the thing that is quietly working, the thing people are buying and using in the real world, is a pair of $300 smart glasses that Meta barely talks about. If this sounds like a pattern, that’s because it is. Meta has now misread the future twice in a row, and both times the answer was hiding in plain sight.

The Metaverse Was a $70 Billion Fantasy

Reality Labs has been hemorrhaging money since late 2020. As of early 2026, cumulative operating losses sit somewhere between $70 and $80 billion, depending on how you slice the quarters. In the third quarter of 2025 alone, Reality Labs posted a $4.4 billion loss on $470 million in revenue. For 2025 as a whole, the division lost more than $19 billion. These are not rounding errors or R&D investments that will pay off next year. These are structural losses tied to a product category, VR headsets and metaverse platforms, that the market simply does not want at the scale Meta imagined.

The vision sounded compelling in a keynote. You would strap on a Quest headset, meet your coworkers in a virtual conference room with floating whiteboards, then hop over to Horizon Worlds to hang out with friends as legless avatars. The problem was that almost no one wanted to do any of that for more than a demo. VR remained a niche gaming platform with occasional fitness and entertainment use cases, not the next paradigm shift in human interaction. Zuckerberg kept insisting the breakthrough was just around the corner. He was wrong, and the January 2026 layoffs and studio closures were the formal acknowledgment that Reality Labs as originally conceived was dead.

The irony is that Meta actually had a potential killer app inside Reality Labs, and it murdered it. Supernatural, a VR fitness game that Meta acquired for $400 million in 2023, was one of the few pieces of Quest software that generated genuine user loyalty and recurring revenue. People who used Supernatural regularly described it as the most effective home workout they had ever done, combining rhythm-based gameplay with full-body movement in a way that treadmills and Peloton bikes could not replicate. It had a subscription model, a dedicated community, and real retention. In January 2026, Meta moved Supernatural into “maintenance mode,” which is corporate speak for “we fired almost everyone and it will get no new content.” If you are trying to prove that VR has mainstream utility beyond gaming, fitness is one of the most obvious wedges. Meta had that wedge, and it chose to kill it in the same round of cuts that shuttered studios working on Batman VR games and other prestige titles. The message was clear: Zuckerberg had lost interest in Quest, even the parts that worked.

The AI Bet That Looks Like the Metaverse Bust 2.0

After spending years insisting the future was virtual worlds, Meta pivoted hard to AI in 2023 and 2024. Zuckerberg now talks about AI the way he used to talk about the metaverse: with sweeping language about paradigm shifts and transformative platforms. The company stood up an AI division focused on building what it calls “superintelligence,” hired aggressively from OpenAI and Anthropic, and made technical talent compensation the second-largest contributor to Meta’s 2026 expense growth behind infrastructure. This is not a side project. Meta is spending billions on AI research, training, and deployment, and Zuckerberg expects losses to remain near 2025 levels in 2026 before they start to taper.

From a technical standpoint, Meta’s AI work is solid. The Llama family of models is legitimately competitive with GPT-4 class systems and has found real adoption among developers who want open-source alternatives to OpenAI and Google. Meta’s internal AI is also driving real business value in ad targeting, content ranking, and moderation. Those systems work, and they contribute directly to Meta’s core revenue. But from a consumer product perspective, Meta’s AI feels scattered and often unnecessary. The company has embedded “Meta AI” chatbots into WhatsApp, Instagram, Messenger, and Facebook, none of which feel like natural places for a chatbot. Instagram’s feed is increasingly stuffed with AI-generated images and engagement bait that users actively complain about. Meta has launched character-based AI bots tied to influencers and celebrities, and approximately no one uses them. The gap between “we have impressive models” and “we have a product people love” is enormous, and it is the exact same gap that sank the metaverse.

What Meta is missing, again, is product intuition. OpenAI built ChatGPT and made it feel like the future because the interface was simple, the use cases were obvious, and it delivered consistent value. Google integrated Gemini into Search and productivity tools where users were already working. Meta, by contrast, seems to be throwing AI at every surface it controls and hoping something sticks. Zuckerberg talks about “an explosion of new media formats” and “more interactive feeds,” which in practice means more algorithmic slop and fewer posts from people you actually know. Analysts are starting to notice. One Bernstein note from early 2026 argued that the “winner” criterion in AI is shifting from model quality to product usage, which is a polite way of saying that having a great model does not matter if your product is annoying. Meta has a great model. Its products are annoying.

The financial picture is also murkier than Meta would like to admit. Reality Labs is still losing close to $20 billion a year, and while AI is not a separate reporting segment, the talent and infrastructure costs are clearly rising. Meta’s overall revenue growth is strong, driven by advertising, but the company is not yet showing a clear path to AI profitability outside of “ad optimization.” That puts Meta in the awkward position of having pivoted from one unprofitable moonshot (metaverse) to another potentially unprofitable moonshot (consumer AI products) while the actual profitable parts of the business, social ads and engagement, keep the lights on. This is a pattern, and it is not a good one.

The Smart Glasses Lead That Meta Is Poised to Lose

Meta talks about the Ray-Ban smart glasses constantly. Zuckerberg calls them the “ultimate incarnation” of the company’s AI vision, and the pitch is relentless: sales more than tripled in 2025, the glasses represent the future of ambient computing, this is the post-smartphone platform. The problem is not that Meta is ignoring the glasses. The problem is that Meta is about to squander a massive early lead, and the competition is closing in fast. 2026 is shaping up to be a blockbuster year for smart glasses. Samsung confirmed its AR glasses are launching this year. Google is releasing its first pair of smart glasses since 2013, an audio-only pair similar to the Ray-Ban Meta glasses. Apple is reportedly pursuing its own smart glasses and shelved plans for a cheaper Vision Pro to prioritize the project. Meta dominated VR because it was early, cheap, and faced no real competition. In smart glasses, that window is closing, and the field is getting crowded with all kinds of names, from smaller players like Looktech and Xgimi’s Memomind to mid-sized brands like Xreal, to even larger ones like Google, TCL, and Xiaomi.

The Ray-Ban Meta glasses work because they are simple and focused. They take photos and videos, play music, make calls, and provide real-time answers through an AI assistant. Parents use them to record their kids hands-free. Travelers use them for translation. The form factor, actual Ray-Ban Wayfarers that cost around $300, means they do not scream “I am wearing a computer on my face.” This is the rare Meta hardware product that feels intuitive rather than forced, and it is selling because it solves boring, everyday problems without requiring users to change their behavior.

Then Meta made a critical mistake. To use the glasses, you have to route everything through the Meta AI app, which means you cannot get full use out of the hardware without engaging with Meta’s AI-slop ecosystem. Want to access your photos? Meta AI. Want to tweak settings? Meta AI. The app is the mandatory gateway, and it is stuffed with the same kind of algorithmic recommendations and AI-generated suggestions that clutter Instagram and Facebook. Instead of letting the glasses be a clean, utilitarian tool, Meta is using them as another vector to push its AI products. Google and Samsung are not going to make that mistake. Their glasses will integrate with Android XR and existing ecosystems without forcing users into a single AI app. Apple, if and when it launches, will almost certainly take a similar approach: clean hardware, seamless OS integration, optional AI features. Meta had a head start, Ray-Ban branding, and a product people actually liked. It is on track to waste all of that by prioritizing AI evangelism over product discipline, and the competition is going to eat its lunch.

What Happens When You Chase Narratives Instead of Products

The pattern across metaverse and AI is that Meta keeps betting on big, abstract visions rather than iterating on the things that work. Zuckerberg is a narrative-driven founder. He wants to define the future, not respond to it. That impulse gave us Facebook in 2004, when no one else saw the potential of real-identity social networks, but it has led Meta astray repeatedly in the 2020s. The metaverse was a narrative, not a product. The idea that billions of people would strap on headsets to work and socialize in 3D was always more science fiction than product roadmap, but Zuckerberg committed so hard to it that he renamed the company.

AI feels like the same mistake. The narrative is that foundation models and “agents” will transform every part of computing, and Meta wants to be seen as a leader in that transformation. The actual products, chatbots in WhatsApp and AI-generated feed content, do not meaningfully improve the user experience and in many cases make it worse. Meanwhile, the thing that is working, smart glasses, does not fit cleanly into the AI or metaverse narrative, so it gets less attention and investment than it deserves. Meta’s 2026 strategy, “shifting investment from metaverse to wearables,” is a tacit admission of this, but it is couched in language that still emphasizes AI rather than the hardware itself.

The other pattern is that Meta is willing to kill its own successes if they do not fit the broader narrative. Supernatural, Meta’s hit VR fitness game, was working. It had subscribers, retention, and cultural momentum within the VR fitness community. It was also a relatively small, specific product rather than a platform play, and that made it expendable when Meta decided to scale back Reality Labs. The same logic applies to Quest more broadly. The headset had carved out a niche in gaming and fitness, and with sustained investment in content and ecosystem development, it could have grown into a meaningful adjacent business. Instead, Meta is deprioritizing it because Zuckerberg has decided the future is AI and lightweight wearables. That might turn out to be correct, but the way Meta is executing the pivot, by shuttering studios and putting products in maintenance mode rather than spinning them out or finding partners, suggests a lack of product discipline.

Why Smart Glasses Might Actually Be the Next Facebook

If you step back and ask what Meta is actually good at, the answer is not virtual reality or language models. Meta is good at building social products with massive scale, capturing and distributing content, and monetizing attention through ads. The Ray-Ban Meta glasses fit all of those strengths. They make it easier to capture photos and video, which feeds into Instagram and Facebook. They use AI to provide contextual information, which ties into Meta’s model development. And they are a physical product that people wear in public, which is a form of distribution and branding that Meta has never had before.

The bigger story is that smart glasses as a category are exploding, and Meta happened to be early. It is not just Samsung, Google, and Apple entering the space. Meta itself is expanding the Ray-Ban line with the Display model (which adds a heads-up display) and partnering with Oakley on the HSTN, a sportier model aimed at action sports. Google is teaming up with Warby Parker for its glasses, which gives it instant credibility in eyewear design. And then there are the startups: Even Realities, Xiaomi, Looktech, MemoMind, and dozens more, all slated for 2026 releases. This feels exactly like the moment AirPods sparked the true wireless earbud movement. Apple defined the format, then everyone from Samsung to Sony to no-name brands flooded the market, and now you can buy HMD ANC earbuds for $28. Smart glasses are following the same trajectory, which means the form factor itself is validated, and Meta’s early lead matters less than whether it can keep iterating faster than everyone else.

The other underrated piece is that having an instant camera on your face is genuinely useful in ways that VR headsets never were. People are using Ray-Ban Meta glasses as GoPro alternatives while skateboarding, cycling, and doing action sports, because POV capture without holding a phone or mounting a camera is frictionless. Content creators are using them to shoot hands-free B-roll at events like CES. Parents are using them to record their kids playing without the weird “I am holding my phone up at the playground” vibe. Pet owners are capturing spontaneous moments with dogs and cats that would be impossible to get with a phone. These are not sci-fi use cases or metaverse fantasies. They are boring, real-world problems that the glasses solve immediately, and that is why they are selling. Meta has spent a decade chasing grand visions of the future, and it accidentally built a product that people want right now. The challenge is whether it can resist the urge to over-complicate it before Google, Samsung, and Apple catch up.

The Real Lesson Is About Focus

Meta has spent the last five years oscillating between grand visions, metaverse and AI, and neglecting the products that actually work. The Ray-Ban Meta glasses are proof that when Meta focuses on solving real problems with tangible products, it can still build things people want. The metaverse failed because it was a solution in search of a problem, and the AI push is struggling because Meta is shipping features rather than products. Smart glasses, by contrast, are succeeding because they make everyday tasks easier without requiring users to change their behavior or buy into a futuristic narrative.

If Zuckerberg can internalize that lesson, Meta might actually have a shot at owning the next platform. But that requires a level of product discipline and restraint that Meta has not shown in years. It means resisting the urge to turn every product into a platform, admitting when a bet has failed rather than pouring another $10 billion into it, and focusing on iteration over narration. The irony is that Meta already has the right product. It just needs to stop looking past it.



Ban-Rays - The glasses that detect smart glasses

By: Korben
November 29, 2025 at 06:42

These days, when some sketchy guy in sketchy glasses stares at us, we no longer know whether it’s because he finds us irresistible or because he’s feeding our face to an AI to figure out who we are. OK, for you the question may come up less often, but you get the idea ^^.

Fortunately, to fight back against this, there’s now an open-source project for spotting these nosy types equipped with Ray-Ban Meta or other camera glasses. The project is called Ban-Rays (a pun on “banned”, har har har) and the goal is to build glasses capable of detecting camera-equipped smart glasses.

And to get there, the dev behind the project uses two complementary approaches.

The first is an optical approach based on a rather amusing physical principle. The CMOS sensors in cameras have the peculiarity of bouncing infrared light straight back toward its source. That’s what’s known as the “cat-eye” effect, or retro-reflectivity. So by firing IR pulses at a suspicious pair of glasses and analyzing the reflected signal, you can in theory detect the presence of a camera. And the sensors produce sharp, fast signal spikes, unlike ordinary reflective surfaces, which generate longer, broader waveforms.

For now, tests with the Ray-Ban Meta show somewhat inconsistent results at short range (around 10 cm), but the principle holds and it’s improving. Oh, and the hardware involved is an Arduino Uno, infrared LEDs (940 nm and 850 nm), a photodiode, and a transistor. So nothing too scary budget-wise.
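To make the heuristic concrete, here’s a minimal sketch, in Python with entirely made-up sample data, of how you might separate a camera’s retro-reflective spike from an ordinary reflection once the photodiode readings have been pulled off the Arduino. This is not the project’s actual code, just the peak-width idea described above, with guessed thresholds:

import numpy as np

def classify_reflection(samples, rate_hz=10_000, threshold=0.5):
    # Camera sensors bounce IR back as short, sharp spikes; ordinary
    # shiny surfaces return broader, slower humps. Measure the width
    # of every region above `threshold` and flag the narrow ones.
    samples = np.asarray(samples, dtype=float)
    above = (samples > threshold).astype(int)
    edges = np.flatnonzero(np.diff(above))  # rising/falling edge indices
    widths_ms = [(stop - start) / rate_hz * 1000.0
                 for start, stop in zip(edges[::2], edges[1::2])]
    # Guessed cutoff: retro-reflective spikes last well under 1 ms.
    return any(w < 0.5 for w in widths_ms), widths_ms

# Synthetic 10 kHz capture: one sharp spike plus one broad hump.
t = np.linspace(0, 0.05, 500)
capture = np.exp(-((t - 0.01) / 0.0002) ** 2) \
        + 0.8 * np.exp(-((t - 0.03) / 0.004) ** 2)
camera_like, widths = classify_reflection(capture)
print(camera_like, [round(w, 2) for w in widths])  # True, [0.3, 5.5]

The numbers are pure guesswork; the point is only that spike duration, not amplitude, is what gives a CMOS sensor away.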

The second approach is on the network side, with Bluetooth Low Energy detection. The Ray-Ban Meta use a specific manufacturer identifier (0x01AB for Meta) and a very particular Service UUID (0xFD5F). The catch is that, for now, this only detects the glasses during power-on or pairing mode. For continuous detection during normal use, you’d need beefier hardware, like nRF modules to sniff CONNECT_REQ packets. But that’ll come, since it’s on the project’s roadmap.
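If you want to play with the BLE side yourself, here’s a small sketch using the bleak library (not the project’s code; the manufacturer ID and service UUID are the ones quoted above, and as noted it will only catch the glasses while they’re powering on or pairing):

import asyncio
from bleak import BleakScanner  # pip install bleak

META_COMPANY_ID = 0x01AB  # manufacturer ID seen in the adverts
META_SERVICE_UUID = "0000fd5f-0000-1000-8000-00805f9b34fb"  # 0xFD5F, 128-bit form

async def scan(timeout=10.0):
    # return_adv=True yields {address: (BLEDevice, AdvertisementData)}
    found = await BleakScanner.discover(timeout=timeout, return_adv=True)
    for address, (device, adv) in found.items():
        meta_id = META_COMPANY_ID in adv.manufacturer_data
        meta_uuid = META_SERVICE_UUID in (u.lower() for u in adv.service_uuids)
        if meta_id or meta_uuid:
            print(f"Possible camera glasses: {address} "
                  f"name={device.name!r} rssi={adv.rssi}")

asyncio.run(scan())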

Now, you’ll tell me the Ray-Ban Meta have a little LED that lights up when they’re recording, so they’re not exactly stealthy. In theory, yes, except that LED is so tiny that Ireland’s Data Protection Commission has outright questioned its effectiveness as a privacy safeguard. And above all, a tinkerer now offers to disable the LED for about sixty dollars. Meta did build in a safeguard that stops the glasses from working if you cover the LED with tape, but the guy found a way around it, and his client list keeps growing…

And the other thing I’ve noticed with these connected glasses is that they trigger all the time, for anything and everything. Since they’re constantly listening for voice commands, it’s impossible to have a normal conversation without the thing reacting to any word that vaguely sounds like “Hey Meta”. It’s even worse than Siri or Alexa, which already misfire plenty. Personally, that’s why I don’t want this kind of glasses, even though I admit they’re handy for photographing or filming things (for work, of course…)

And the concerns are all the more justified given that a 2024 study showed that by combining hacked Ray-Ban Meta with real-time facial recognition, you could identify strangers in the street. More recently still, the University of San Francisco had to alert its students after a mysterious individual used these glasses to film women on campus and share the videos online. Lovely, paranoid vibes all around.

In short, if this worries you (or you’re just privacy-minded), the Ban-Rays project is on GitHub with all the code in C++, Python, and a bit of C. It’s still experimental, but both approaches are promising, and if you want to contribute, there’s plenty to improve: IR sweep patterns, multi-wavelength data fusion, active BLE interrogation…

Source

ChronoFrame - Take back control of your photos

By: Korben
November 6, 2025 at 11:00

OK, if you’ve been reading me for a looong time, you’re bound to know the risk posed by the metadata embedded in the images you share online. Yes, I’m talking about the infamous EXIF data, which contains the camera model used, the precise time the shot was taken down to the second, the lens settings, sometimes even the altitude, and above all the exact GPS coordinates of where you were.
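To see for yourself what your shots give away, here’s a minimal Python sketch with Pillow that digs the GPS coordinates out of a photo’s EXIF block (vacation.jpg is obviously a placeholder name):

from PIL import Image, ExifTags  # pip install Pillow (recent version)

def read_gps(path):
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(ExifTags.IFD.GPSInfo)  # GPS sub-directory, if any
    if not gps:
        return None
    # Tag IDs: 1/2 = latitude ref + value, 3/4 = longitude ref + value,
    # each value being a (degrees, minutes, seconds) triple.
    def to_deg(dms, ref):
        d, m, s = (float(x) for x in dms)
        return (d + m / 60 + s / 3600) * (-1 if ref in ("S", "W") else 1)
    return to_deg(gps[2], gps[1]), to_deg(gps[4], gps[3])

print(read_gps("vacation.jpg"))  # e.g. (48.8584, 2.2945)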

And all that data, if you put your photos online with, say, Google or Apple, well, they collect it and use it. Which is a shame, because it’s data that’s genuinely useful, as long as you keep it local on your own machine.

So what can you do?

Well, there’s an open-source, MIT-licensed piece of software called ChronoFrame. It’s a photo gallery you can self-host, which automatically parses all the EXIF data from your shots, extracts the geolocation, does reverse geocoding to identify the exact place, and displays it all on a kind of interactive map you can browse to relive your travel memories.

Basically, it’s like Google Photos, except you manage your own data and you control who gets access to what.

The appeal of ChronoFrame is that it makes the invisible visible. You upload an image, ChronoFrame reads the metadata, extracts the GPS coordinates if they exist, and calls the Mapbox or MapLibre API to do reverse geocoding. That means turning GPS coordinates (48.8584, 2.2945) into a readable address (“Tour Eiffel, Paris, France”).
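The principle is easy to reproduce. Here’s a sketch that swaps Mapbox/MapLibre for OpenStreetMap’s free Nominatim endpoint, which does the same job without an API token:

import requests

def reverse_geocode(lat, lon):
    # Nominatim politely requires an identifying User-Agent header.
    resp = requests.get(
        "https://nominatim.openstreetmap.org/reverse",
        params={"format": "jsonv2", "lat": lat, "lon": lon},
        headers={"User-Agent": "exif-gallery-demo/0.1"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("display_name")

print(reverse_geocode(48.8584, 2.2945))  # "Tour Eiffel, ... Paris ... France"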

Et surtout, ChronoFrame supporte les Live Photos d’Apple ET les Motion Photos de Google. La génération de miniatures, quand à elle, utilise ThumbHash , un algorithme de placeholder ultra-compact créé par Evan Wallace (cofondateur de Figma). Ainsi au lieu de générer plusieurs tailles de miniatures (100x100, 200x200, 400x400…etc), ThumbHash encode une version floue de l’image dans moins de 100 bytes et comme ça, les vignettes se chargent instantanément, et l’affichage est ensuite progressif (flou -> net) jusqu’à ce que l’image full résolution arrive.
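The trick is easier to grasp with a toy version. The sketch below is NOT ThumbHash (the real algorithm packs a DCT of the image into roughly 25 bytes); it just shows the underlying idea of shipping a handful of raw pixels and blowing them back up into a blurry preview:

from PIL import Image  # pip install Pillow

def tiny_placeholder(path, size=(4, 3)):
    # 4 x 3 pixels x 3 RGB channels = 36 bytes, small enough to inline
    # in an HTML page or an API response.
    im = Image.open(path).convert("RGB").resize(size)
    return size, im.tobytes()

def placeholder_preview(size, blob, display=(400, 300)):
    # Re-inflate the bytes and let bilinear resampling do the blurring.
    return Image.frombytes("RGB", size, blob).resize(display, Image.BILINEAR)

size, blob = tiny_placeholder("photo.jpg")  # hypothetical file
print(len(blob))                            # 36
placeholder_preview(size, blob).save("preview.png")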

The interface is of course responsive, supports touch and gesture navigation, and delivers an experience close to a native app. To deploy it, you create a .env file with your environment variables (admin email, password, storage provider, Mapbox token, etc.), you run docker pull ghcr.io/hoshinosuzumi/chronoframe:latest, and boom, it runs right away.

The getting-started guide details the whole process, and it’ll take you 5 minutes flat.

Here’s an example of a minimal configuration:

CFRAME_ADMIN_EMAIL=admin@example.com
CFRAME_ADMIN_PASSWORD=YourPassword
NUXT_PUBLIC_MAP_PROVIDER=maplibre
NUXT_PUBLIC_MAP_MAPLIBRE_TOKEN=your_maptiler_token
NUXT_STORAGE_PROVIDER=local
NUXT_PROVIDER_LOCAL_PATH=/app/data/storage
NUXT_SESSION_PASSWORD=$(openssl rand -base64 32)

You can also use S3 instead of local storage:

NUXT_STORAGE_PROVIDER=s3
NUXT_PROVIDER_S3_ENDPOINT=https://s3.amazonaws.com
NUXT_PROVIDER_S3_BUCKET=your-bucket
NUXT_PROVIDER_S3_REGION=eu-west-1
NUXT_PROVIDER_S3_ACCESS_KEY_ID=your_key
NUXT_PROVIDER_S3_SECRET_ACCESS_KEY=your_secret

Once it’s up, you open the web interface, log in with your email/password (or via GitHub OAuth if configured), head to /dashboard, and upload your photos.

So there you go. I found this cool because taking back control of your photos doesn’t necessarily mean deleting the metadata, as I’ve often advised. It can also mean deciding who gets access to it. It’s valuable information, after all, and it would be a shame to do without it, so you might as well host your photos yourself; that way you can make use of them however you see fit.

Note that ChronoFrame won’t help you strip your EXIF data, but there are tools for that, like ExifTool or mat2. You can also script it before uploading anything to social networks, but most people don’t, because they don’t even know the data is there. I also know that sites like X.com strip some of the metadata before publishing your photo, but that doesn’t mean they don’t exploit it upstream, to serve you ads for example…
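With ExifTool, the canonical one-liner is exiftool -all= photo.jpg. If you’d rather script it in Python, here’s a minimal sketch with Pillow; note that unlike ExifTool, which edits the metadata without touching the pixels, this one re-encodes the image:

from PIL import Image  # pip install Pillow

def strip_metadata(src, dst):
    # Rebuild the image from its pixels alone: EXIF, GPS, XMP and
    # friends never make it into the new file.
    im = Image.open(src)
    clean = Image.new(im.mode, im.size)
    clean.putdata(list(im.getdata()))
    clean.save(dst)  # caveat: JPEG output gets re-compressed

strip_metadata("photo.jpg", "photo_clean.jpg")  # hypothetical filenames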

Anyway, if you want to see what it looks like, there’s a demo site where you can see the interface in action!

Thanks to Lorenper for sharing this app!

PNG - The big comeback of the format that refuses to die

By: Korben
June 25, 2025 at 06:18

OK, I know what you’re going to say: “PNG? Seriously, Korben? It’s 2025 and you’re talking about an image format that dates back to when we were still listening to Britney Spears on our CD players?” Well, it turns out this good old PNG format, which had been sleeping peacefully for 20 years, has just woken up, and it clearly took some vitamins during its nap!

Meta spies on you even in incognito mode!

By: Korben
June 10, 2025 at 21:11

You thought you were invisible in incognito mode with your VPN on?

Well, Meta has just proved you were about as discreet as a rhinoceros in a china shop. Their latest technical trick turns your smartphone into a snitch, and this time it could cost them a cool €32 billion in fines.

The affair blew up in June 2025, when a team of five researchers exposed Meta’s “localhost tracking”. Tim Vlummens, Narseo Vallina-Rodriguez, Nipuna Weerasekara, Gunes Acar and Aniketh Girish discovered that Facebook and Instagram had found a way to bypass all of Android’s protections to identify you, even when you do everything to stay anonymous. VPN on, incognito mode, cookies wiped every session… Meta couldn’t have cared less.
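To give an idea of why this works, here’s a deliberately naive Python sketch of the loopback trick. The real scheme, as the researchers describe it, is more devious (it reportedly abused WebRTC), but the core idea is that a logged-in native app listens on 127.0.0.1 and the tracking script in your browser simply hands it your web identifiers:

import socket
import threading
import time

PORT = 12387  # arbitrary demo port

def fake_native_app():
    # Stands in for a logged-in mobile app quietly listening on loopback.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    print("app linked web visit to your account:", conn.recv(1024).decode())
    conn.close()
    srv.close()

t = threading.Thread(target=fake_native_app)
t.start()
time.sleep(0.2)  # give the listener a moment to come up

# Stands in for the tracking script in the browser: VPN, incognito and
# cookie wipes are irrelevant, because this never leaves the phone.
cli = socket.create_connection(("127.0.0.1", PORT))
cli.sendall(b"_fbp=fb.1.1717000000.123456789")  # made-up cookie value
cli.close()
t.join()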

When Geohot gives Tenstorrent some constructive criticism

By: Korben
May 26, 2025 at 08:18

When George Hotz, aka geohot, decides to hand out advice to a semiconductor company, you get a 100-line README on GitHub that opens with “If you want to get acquired / become scam IP licensing co…I can’t help you.” “Subtle” ^^. The hacker who jailbroke the first iPhone and now makes cars drive themselves with comma.ai has just published his “advice” for Tenstorrent.

For those who don’t follow the AI chip market closely, Tenstorrent is THE company everyone’s excited about right now. Led by Jim Keller (yes, THE Jim Keller behind AMD’s x86-64 architectures and Apple’s A4/A5 chips), the company is developing processors built specifically for AI, with a “dataflow” approach rather than the classic SIMD architecture of GPUs.

OpenAI is reportedly building an AI-powered social network

By: Korben
April 16, 2025 at 12:34

If there’s one thing I’m sure of, it’s that the world does not need another social network. And yet that’s exactly what Sam Altman, OpenAI’s CEO, is cooking up, in what looks like a new season of Game of Thrones, Silicon Valley edition.

So if you’re a nerd who’s into the strategies of the tech titans, you’re going to love this brewing battle.

Meta to Take EU Regulation Concerns Directly to Trump, Says Global Affairs Chief

February 18, 2025 at 14:00
“When companies are treated differently and in a way that is discriminatory against them, then that should be highlighted to that company’s home government,” Meta’s global affairs head said.

Linux Foundation Announces Initiative to Support Chromium Ecosystem

By: Megan Crouse
January 13, 2025 at 22:06
The open-source Chromium architecture is the backbone of many internet browsers, including Google’s Chrome. Google, Microsoft, and others have expressed support for the initiative.