France
1K followers
500+ connections
Articles by Maxime
-
Ready to Rocket: Joining MadKudu as CTO!
Pop pop! 🎉 The word on the street is true—I am the CTO at MadKudu, baby! And it's finally time to tell you the story……
136
30 comments
Activity
1K followers
-
Maxime Gaudin shared this:
Tomorrow, April 2, 4 PM CET. I'm going live to benchmark STT models on your audio. Not ours. Yours. If you haven't uploaded your audio yet, do it now. I want the files you're afraid to test. Heavy accents, industry jargon, background noise, crosstalk. The one recording that breaks everything. I'll run them live. Side by side. Across the top providers. No cherry-picked samples. No controlled conditions. Your data, under pressure. If you think your audio is too messy to transcribe well, perfect. That's exactly what I want. See you tomorrow!
-
Maxime Gaudin shared this:
What happens when you remove brand names from speech-to-text benchmarks? See for yourself: the leaderboard is live! Last week, I asked you to blindly judge 6 speech-to-text providers. Hundreds of you did. The results are in. Gladia is #2. Right behind Mistral. I can't tell you how proud I am of the team. We're a 50-person startup, up against Deepgram, AssemblyAI, ElevenLabs, Speechmatics, and Mistral. And when real users evaluate blindly, we're right at the top. Shoutout to Mistral AI! Seriously impressive work. Being second to you is no small thing. But this isn't static. The leaderboard updates with every vote. Models improve. Rankings shift. That's the whole point. And we know exactly where we need to get better. Mistral, we're coming for you 😉 Think we don't deserve #2? Go vote.
-
Maxime Gaudin shared this:
OpenAI just killed Sora. 6 months after launch. Disney's $1B deal? Gone overnight. Half the internet is calling this the AI bubble finally popping. I think they're reading it wrong. Sora wasn't killed by bad tech. It was killed by GPU economics. The thing burned so much compute it was starving OpenAI's own teams. No retention. No monetization. Just viral demos and infinite infrastructure bills. That's what happens when you ship a consumer AI product with no business model beyond "look how cool this is." The AI that actually survives is not flashy. It's infrastructure. APIs. Developer tools. Boring stuff that solves measurable problems for businesses willing to pay. We live this at Gladia every day. Every millisecond of real-time transcription has a cost. You learn fast that "impressive demo" and "sustainable product" are two very different conversations. Sora dying isn't a bubble bursting. It's the AI industry growing up. The winners won't have the best demos. They'll have the best unit economics. What's the most overhyped AI product you've seen disappear this year?
-
Maxime Gaudin reposted this:
Scaling STT systems isn't just a model problem. It's a scale, cost, and latency problem. In this episode with Maxime Gaudin, CTO at Gladia, we get into what breaks in production. Not just models, but infrastructure, GPUs, and economics. Here's what stood out 👇
- Winning isn't just about model quality; it is surviving brutal tradeoffs between latency, cost, and scale.
- The real challenge is not training one great model; it is running it cheaply enough to meet market pricing without breaking performance.
- STT is getting commoditized so fast that providers have to chase better accuracy while selling at margins that keep shrinking.
- Big models don't matter if they are too expensive to run at scale.
- Real-time voice AI lives or dies under a hard latency budget, and staying under 300 milliseconds leaves little room for mistakes.
- The industry obsession with one model that does everything may be the wrong path if smaller specialist models can outperform it in the moments that matter.
- Every model upgrade is risky because improving one language or task can make another one worse.
- Testing speech systems is harder than people admit because teams know something broke, but don't know what.
- General transcription errors can be patched by an LLM, but once a name, phone number, email, or address is lost, it is gone.
- The next edge in voice AI may come from tiny models trained for high-value details like PII, not from one giant model trying to handle everything.
- Email addresses sound simple until real accents, pauses, corrections, and spelling cues expose how messy spoken language really is.
- The companies that win enterprise voice AI will be the ones that orchestrate many narrow models well, not the ones chasing a single universal model.
- Infrastructure strategy is becoming a product decision because legal rules, traffic spikes, and customer use cases all change what "best" deployment looks like.
- Cloud scaling breaks in real-time spikes, like emergency calls.
- Using managed infra and large DevOps teams at once wastes money.
- Customers want one vendor for everything, even if quality drops.
- The market will reward depth over breadth if a vendor can become truly exceptional in one painful, business-critical part of the voice stack.
If STT is becoming commoditized, does the real advantage shift to specialized models that win on PII?
-
Maxime Gaudin shared this:
Fun fact: when I am not building AI, I spend time with my llamas! 🦙 And they have a strong opinion on STT benchmarks. Don't get left behind 🫵
-
Maxime Gaudin shared this:
Benchmarks in speech AI are broken. Everyone knows it. Nobody talks about it. When I joined Gladia as CTO, one thing quickly became painfully obvious: no one could reproduce speech benchmarks. Two teams could test the same model and get completely different results. So, the team and I went digging, and we realized that normalization was a big part of the explanation. As a result, we open-sourced the whole methodology. Normalization rules, scripts, all of it. Reproduce our results. Challenge them. Watch to see what we found 😏 Full repo + evaluation pipeline in comments.
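The reproducibility problem the post describes is easy to demonstrate: two teams scoring the same hypothesis against the same reference can disagree purely because of casing and punctuation. Below is a minimal illustrative sketch of transcript normalization; the rules shown (lowercasing, punctuation stripping, whitespace collapsing) are generic examples, not Gladia's actual open-sourced pipeline.

```python
import re
import string

def normalize(text: str) -> str:
    """Apply simple normalization rules before scoring a transcript.

    Illustrative only: real benchmark pipelines define many more rules
    (number formatting, abbreviations, disfluencies, etc.).
    """
    text = text.lower()
    # Replace every punctuation character with a space.
    text = re.sub("[" + re.escape(string.punctuation) + "]", " ", text)
    # Collapse runs of whitespace left behind by the substitution.
    text = re.sub(r"\s+", " ", text).strip()
    return text

ref = "Hello, Dr. Smith!"
hyp = "hello dr smith"
# Without normalization, a naive word-level comparison counts every
# word as an error; after normalization the two strings are identical.
print(normalize(ref) == normalize(hyp))  # True
```

Whether a scorer applies rules like these before computing word error rate can swing the reported number by several points, which is exactly why unpublished normalization makes benchmarks irreproducible.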
-
Maxime Gaudin shared this:
I've been chasing local text-to-speech for months. Every model I tried was either robotic, painfully slow, or needed a GPU I don't have. I genuinely started to believe that good voice synthesis would always require a cloud API and a credit card. The open-source TTS space hasn't made it easy to stay optimistic. Coqui AI, the team behind the most popular open-source voice model, shut down last year. The maintainer said it plainly: deep learning is too expensive to sustain without funding. Other projects required hardware most developers don't have sitting on their desk. I was ready to accept the trade-off. Keep paying. Keep depending on someone else's servers. Yesterday, KittenTTS dropped their latest release (https://lnkd.in/enwwDdKD). Under 25MB. Runs on CPU. No GPU. No waiting. No API key. No internet connection 🤯 I hit generate and just sat there. The quality had no right to be that good at that size. I attached a video so you can hear it yourself. My voice agent now runs 100% on my MacBook. No tokens to burn. No latency from network round-trips. Fully private. Fully mine. A year ago this wasn't possible. Today it fits in a model smaller than most profile pictures. This is what open source does. People said local TTS couldn't compete with cloud APIs. The KittenTTS team didn't argue. They just shipped. What's something you assumed required the cloud that you've recently been able to run locally? #opensource #voiceai
-
Maxime Gaudin shared this:
I left my previous job to bet on voice AI. Turns out I wasn't crazy! 😎 VivaTech just named Gladia one of Europe's Top 100 Rising Startups selected by Accel, Eurazeo, HV, Northzone, and Partech. Here's what bugs me about the AI conversation right now: everybody is obsessed with LLMs, image generation, coding assistants. I get it, that's where the hype is. But look at what's actually happening on the ground! Every major tech company is building voice agents. Call centers are automating at scale. Every device is getting a voice interface. Healthcare, finance, legal, all moving to voice-first workflows. Voice isn't a feature. It's becoming the primary way humans interact with machines. At Gladia we saw this early. We built speech models in-house. We run real-time and async transcription at scale. We've been building while the spotlight was elsewhere. Incredibly proud of the team. The work is paying off 🥳 But seriously, why is voice still the underdog in the European AI conversation? #VivaTech #VoiceAI #Gladia
-
Maxime Gaudin shared this:
DeepSeek just dropped DeepSeek-V3.2, and I'm genuinely trying to wrap my head around the efficiency-to-performance ratio here... If you haven't looked at the specs yet, stop what you're doing. This isn't just an "incremental update." This is a wake-up call for every closed-source frontier model out there. We are talking about a model that is matching Gemini 3 Pro level performance on reasoning benchmarks... for a fraction of the inference cost! When you can run frontier-level reasoning for $0.40 per million tokens, the economics of building AI products change overnight. 🤯 Why does this matter to me as a CTO? At Gladia, we obsess over efficiency and latency. We know that the best model isn't always the one with the highest parameter count; it's the one that delivers intelligence where and when you need it, without bankrupting your unit economics. DeepSeek proves that you don't need a trillion-dollar cluster to compete at the frontier. You need better architecture and smarter training runs. The gap between "open weights" and "closed frontier" didn't just shrink today. It arguably vanished. The question I keep asking myself today (I would love to know your take on it): with inference costs dropping this low, what features have you been holding back on that are suddenly viable today?
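To put the quoted $0.40 per million tokens in perspective, here is a back-of-the-envelope cost calculation; the traffic and per-request token counts are assumed values for illustration, not figures from the post.

```python
# Back-of-the-envelope inference cost at the quoted price.
price_per_million_tokens = 0.40   # USD, from the post
tokens_per_request = 2_000        # assumed: prompt + completion combined
requests_per_day = 100_000        # assumed daily traffic

daily_tokens = tokens_per_request * requests_per_day      # 200M tokens/day
daily_cost = daily_tokens / 1_000_000 * price_per_million_tokens

print(f"${daily_cost:.2f} per day")  # $80.00 per day
```

At these assumptions, serving 100K requests a day costs about $80, which is the kind of arithmetic that turns previously shelved features into viable ones.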
-
Maxime Gaudin liked this:
BREAKING NEWS: Google just re-entered the game 🔥🔥 They want to take the crown 👑 back from Chinese open-source AI. And... Gemma 4 is FINALLY Apache 2.0, aka real-open-source-licensed. From what I've seen it's going to be a pretty significant model. But give it a try yourself today:
brew upgrade llama.cpp
(you might need to install from source until build 8637 is in your package manager later today: brew install llama.cpp --HEAD)
🔴 My personal recommendation: if you have at least 24GB of RAM or VRAM, run the (very good) 26B MOE:
llama-server -hf ggml-org/gemma-4-26B-A4B-it-GGUF:Q4_K_M
If you have 16GB of RAM or VRAM, run the dense E4B:
llama-server -hf ggml-org/gemma-4-E4B-it-GGUF:Q8_0
-
Maxime Gaudin liked this:
The worst cold email I ever received had a perfect open rate. Great subject line. Terrible everything else. The moment I opened it, I knew I was being sold to. We spend so much time optimizing for what the company wants to see. The question nobody asks: what does this actually feel like to receive? Having enough taste to recognize when your own work fails that test, that's the actual job. And AI just made it a lot easier to fail at scale.
-
Maxime Gaudin reacted to this:
I bet you weren't expecting this 😅 We're continuing the introductions of the Uneed Residency residents, today with: Hugo Lassiège, co-founder of Malt and co-founder of Writizzy! Hugo's path is a bit atypical in the indie hacking world. In 2012, he co-founded Malt, the freelance platform that needs no introduction, now with 700 employees. The kind of trajectory many would consider a culmination! Except that a year ago, he went the other way. He left Malt: back to solo, back to code, back to indie hacking. Since then, he has been sharing the whole experience (successes, struggles, lessons learned) on his blog and his YouTube channel. His current project I know well, since we work on it together: Writizzy, a European blogging platform, accessible, AI-free, and a direct competitor to Ghost, Medium, etc. 👀 In 6 months: 360 blogs created, 65,000 visits to users' blogs last month, and €200 of MRR. His biggest challenge? Distribution. As for many developer-founders, building the product is the easy part. The platform has recently become "complete": now it's time to tackle marketing! What is he looking for at the residency? Good vibes, plain and simple. Hugo loves building, and he loves talking about it with people doing the same thing even more. Fun fact: he spent a year in Tokyo, but the best ramen he ever ate was in Paris 😎. More soon 👋🏻
-
Maxime Gaudin reacted to this:
Introducing Willow Atlas 1, our new frontier speech-to-text model. Atlas 1 outperforms systems from ElevenLabs, Deepgram, OpenAI, and others by a MASSIVE margin, but the real difference is how it's built. It runs on the first scalable, human-powered transcription infrastructure designed for real-time dictation. Most models achieve 5–7% word error rate on clean audio and drop to 10–15% in real-world conditions. Atlas 1 holds at 1.2% on clean audio and 2.1% in production, with an even larger gap in noisy environments. The result is simple: voice dictation that works without constant corrections. Atlas 1 is rolling out to all Willow users today. Try it at WillowVoice.com
-
Maxime Gaudin reacted to this:
Exciting news on the latest performance benchmarks of our model Solaria 1! Across Switchboard, Gladia shows on average ~29% lower WER than other providers. Why this matters: if performance only looks good on clean speech, it won't hold up in conversation. Switchboard is among the most challenging datasets out there: noisy backgrounds, overlapping speech. True conversational audio. Which goes to show that our models are designed for real, messy customer audio, not academic datasets. You can test it on your own audio in our playground. (Link to our open-source benchmarks in the comments)
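For readers unfamiliar with the metric behind numbers like "~29% lower WER": word error rate is the word-level edit distance (substitutions + insertions + deletions) between a hypothesis transcript and the reference, divided by the reference length. A minimal sketch of the standard computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[-1][-1] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the") over 6 words.
print(round(wer("the cat sat on the mat", "the cat sit on mat"), 3))  # 0.333
```

A "29% lower WER" claim is relative: for example, 5% WER versus a competitor's 7% is roughly a 29% relative reduction, even though the absolute gap is 2 points.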
-
Maxime Gaudin reacted to this:
What a deal! OVHcloud acquires Lakekeeper (Vakamo), RisingWave and starlake.ai to create Europe's most open, real-time, AI-native data platform. Lille & Paris: #OVHcloud today announced the acquisition of three leading #opensource technologies, Lakekeeper from Vakamo, RisingWave and Starlake.ai, to accelerate its vision of building the most interoperable and AI-powered European data platform. These strategic acquisitions will enable OVHcloud to extend its AI Endpoints offer with a fully integrated, next-generation data platform designed for real-time intelligence, open interoperability and full data sovereignty. Lakekeeper, developed by Vakamo, acts as a universal bridge between modern data clouds such as Snowflake, Databricks, and Google GCP, ensuring seamless interoperability across ecosystems without vendor lock-in. RisingWave, a breakthrough in #streaming architecture, merges #OLTP and #OLAP workloads under a unified layer, making real-time analytics as natural as batch processing. #Starlake.ai automates the most complex data ingestion and migration workflows across #datalakes, warehouses and #lakehouses, reducing time-to-data and operational complexity. Negotiations began in June 2025 in Paris, coordinated under the supervision of DATANOSCO, acting as strategic advisor to align open innovation, European regulation and AI ethics requirements. With this new foundation, OVHcloud positions itself as a European global leader in next-generation data platforms, where AI meets interoperability and sovereignty. "Europe deserves a #dataplatform that's open, trusted and ready for the AI era, and today, we're building it," declared OVHcloud's CEO. The integration roadmap will kick off this quarter, with joint engineering teams already collaborating across San Francisco, Lille, Berlin and Warsaw. Well done OVHcloud, unbelievable!
-
Maxime Gaudin reacted to this:
🔊 Reminder: "Bring your own audio" is happening tomorrow. Over the past few days, developers have been sending us the recordings that usually break speech-to-text systems. Some of them are... brutal. Noisy calls. People talking over each other. Mics that sound like they came from 2007. Tomorrow we'll run those files through multiple STT APIs live and see what actually happens. If you submitted audio, this is where you find out how your file performs. If not, it should still be a fun one to watch. (Link in the comments)
Experience and education
-
Gladia
Languages
-
English
-
Recommendations received
4 people have recommended Maxime