San Francisco, California, United States
195K followers
500+ connections
Experience & Education
-
The Peterman Pod
Explore more posts
-
Nemanja Divjak
From coding my first lines of… • 6K followers
Google VP just called out the AI wrapper apocalypse 🔥 Been saying this for months - if you're just slapping a UI on GPT, you're toast. Real builders ship differentiated IP, not thin wrappers. The shakeout is coming faster than most think. https://lnkd.in/d4TQi77M #AIStartups #BuildReal
-
Alec Borman
Hydrolix • 672 followers
TL;DR: I built a professional-grade music system (Tenuto) with Rust/Wasm, a decoupled architecture, and AI as my integration engineer, proving that one engineer can now deliver what once required a team.
An AI review of my codebase pointed out something I hadn't considered: Tenuto shares architectural DNA with platforms like Photoshop and Figma: a decoupled kernel, a multi-language runtime, and a rigorous file format that acts as the single source of truth. Seeing my name mentioned alongside those giants felt surreal. They are the work of thousands; I'm a solo developer with a GitHub repo and zero funding. But it made me realize that the principles I applied (specification first, strict decoupling) are exactly the ones that scale.
My approach:
• Specification first: I wrote the full Tenuto language spec before a single line of Rust. This gave me a blueprint that AI could follow, not guess at.
• Decoupled "narrow waist": The compiler (Rust/Wasm) knows nothing about audio; the audio engine (WebAudio worklet) knows nothing about the compiler. They communicate via a lock-free ring buffer.
• AI as co-architect: I used Google AI Studio with a custom RAG blueprint that indexed my entire codebase. When I needed to refactor the core math engine, I didn't need a team: I described the change, and the AI synchronised the Rust compiler, TypeScript workers, and hardware emitters in one session. The result worked immediately because the architecture was designed for that kind of decoupling.
Tenuto is my "crown jewel." It proves that with a rigid spec and a "sovereign architect" methodology, one person can own the entire vertical stack, from compiler math to WebGL pixels.
I'm now open to new challenges in systems architecture, developer tools, or business infrastructure. If your team values industrial-grade rigor and high-velocity engineering, let's talk.
https://lnkd.in/gpYnZqvY #SystemsArchitecture #Rust #WebAssembly #BlackBoxMindset #SoftwareEngineering #AI #Innovation #SoloDev #MusicTech #GoogleAIStudio #OpenToWork
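The "narrow waist" described in the post (a producer thread filling a lock-free ring buffer that a consumer thread drains) can be sketched in a few lines. This is a generic single-producer/single-consumer illustration, not Tenuto's actual code; the `RingBuffer` class and its names are invented for the example, and a real-time Rust implementation would use atomic indices with acquire/release ordering rather than relying on Python's GIL.

```python
class RingBuffer:
    """Single-producer/single-consumer (SPSC) ring buffer sketch.

    Lock-free in the SPSC sense: the producer only ever writes `head`,
    the consumer only ever writes `tail`; each side merely reads the
    other's index. Capacity must be a power of two so indices wrap
    with a cheap bitmask instead of a modulo.
    """

    def __init__(self, capacity=1024):
        assert capacity & (capacity - 1) == 0, "capacity must be a power of two"
        self.buf = [None] * capacity
        self.mask = capacity - 1
        self.head = 0  # written by the producer only
        self.tail = 0  # written by the consumer only

    def push(self, item):
        if self.head - self.tail > self.mask:  # full: would overwrite unread data
            return False
        self.buf[self.head & self.mask] = item
        self.head += 1  # publish the slot after the write
        return True

    def pop(self):
        if self.tail == self.head:  # empty
            return None
        item = self.buf[self.tail & self.mask]
        self.tail += 1  # release the slot after the read
        return item
```

The design point the post makes still holds in the sketch: the producer (compiler) and consumer (audio engine) share nothing but this buffer, so either side can be rewritten without the other noticing.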
-
Yaron Pdut
Varonis • 2K followers
Boris Cherny's View on Vibe Coding Limitations (from the Podcast)
• Best for throwaway/prototype code: "I do this all the time [vibe coding], but it's definitely not the thing you want to do all the time." It's ideal for non-critical-path code, quick experiments, or prototypes that might get discarded.
• Lacks maintainability and thoughtfulness: "You want maintainable code sometimes. You want to be very thoughtful about every line sometimes." AI can produce working code fast, but it often results in messy, hard-to-maintain output (e.g., inconsistent style, over-engineering, or subtle issues).
• Requires human oversight for quality: On the Claude Code team, they hold AI-generated code to the exact same bar as human-written code. If it "sucks," they don't merge it; they ask the model to improve it.
• Preferred approach, pair programming with the model: For important code, align on a plan first (e.g., using Claude Code's plan mode), iterate incrementally, review and clean up, or even hand-write critical parts. Boris still manually codes core sections where he has strong opinions (e.g., parameter names).
• Models aren't perfect yet: "The models are still overall not great at coding… this is the worst it's ever going to be." Massive improvements are coming, but current limitations mean vibe coding alone isn't reliable for production.
Broader Limitations of Vibe Coding (from Industry Discussion & Karpathy's Own Reflections)
Karpathy popularized the term but has since highlighted drawbacks, even abandoning pure vibe coding for complex projects due to persistent bugs and "slop":
• Produces "slop" or low-quality code: hallucinations, duplicated logic, bloated or over-engineered solutions, and inconsistent architecture lead to technical debt and hard-to-debug systems.
• Security vulnerabilities: AI can introduce exploits (e.g., insecure dependencies, leaked keys) that humans might miss if not reviewing deeply.
• Hard to debug and maintain long-term: lack of deep understanding means subtle, non-obvious problems pile up; fixing them requires trial and error.
• Not suitable for production/critical software: fine for weekend projects or demos, but risky for anything with real stakes (e.g., scalability, reliability).
• Paradox: best for experienced coders. Experts can spot and fix issues; novices risk building fragile apps without realizing it.
In summary, vibe coding is a powerful tool for speed and creativity (especially prototypes), but its limitations make it unsuitable as a full replacement for thoughtful, reviewed coding, especially in professional settings. As Boris puts it, treat AI like a pairing partner: collaborate actively, enforce high standards, and intervene where needed. Models will get better rapidly, narrowing these gaps over the next months and years. https://lnkd.in/e446EFR3
-
Alexander Nevedovsky
Audos.com • 25K followers
"We’re hiring - in person in SF." 🚩 🚩 🚩 My unpopular take: the San Franciscan (and Bay Area) talent pool is way overhyped. There’s nothing wrong with building hubs in cheaper (and sometimes more effective) places. If Claude Code solved coding, let’s not forget that it’s also solving the need for people to be in specific places at specific times. You can build things with remote teams. You just need to know how. And contrary to what you might think from this post, it's also about spending time together, in person. But not necessarily all the time or even every week (hybrid models). Only hiring in SF is definitely not the solution (only if you're a super hyped up research lab able to afford a million per head). -- Follow me (Alexander Nevedovsky) for unfiltered repeat founder thoughts (no AI slop) on startups, investing & AI agents.
-
Russell Jurney
Graphlet AI • 7K followers
An interesting project for fans [like me] of Boundary (YC W23) BAML: https://lnkd.in/gwqRX3qn
> LLMs are powerful but their outputs are unpredictable. Most solutions attempt to fix bad outputs after generation using parsing, regex, or fragile code that breaks easily.
>
> - Outlines guarantees structured outputs during generation, directly from any LLM.
> - Works with any model: the same code runs across OpenAI, Ollama, vLLM, and more
> - Simple integration: just pass your desired output type: model(prompt, output_type)
> - Guaranteed valid structure: no more parsing headaches or broken JSON
> - Provider independence: switch models without changing code
> - Rich structure definition: use JSON Schema, regular expressions, or context-free grammars
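The "guaranteed during generation" idea is worth making concrete. Below is a toy sketch of the masking trick behind Outlines-style structured generation; it is not the Outlines API, and `constrained_generate`, `allowed`, and the scoring function are invented for illustration. The point is that invalid tokens are filtered out before each decoding step, so the output is valid by construction even when the "model" prefers something else.

```python
def constrained_generate(score_fn, vocab, allowed_fn, max_steps=10):
    """Greedy decoding with invalid tokens masked *before* each step,
    so the result is structurally valid by construction."""
    out = []
    for _ in range(max_steps):
        # Keep only tokens that leave the output a valid partial match.
        candidates = [t for t in vocab if allowed_fn(out, t)]
        if not candidates:
            break  # the structure is complete: nothing more may be emitted
        out.append(max(candidates, key=lambda t: score_fn(out, t)))
    return "".join(out)

# Toy structure: the output must be one to three digits.
def allowed(prefix, tok):
    text = "".join(prefix) + tok
    return text.isdigit() and len(text) <= 3

vocab = list("0123456789") + ["cat", "{", "}"]
# A fake "model" that would much rather say "cat" if it were allowed to.
score = lambda out, tok: {"cat": 5.0, "7": 4.0}.get(tok, 1.0)

print(constrained_generate(score, vocab, allowed))  # "777"
```

Real structured-generation libraries do the same filtering against a compiled regex, JSON Schema, or grammar automaton over the model's actual token logits, but the control flow is this one.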
-
Skultety Bendeguz
Gaem Labs • 3K followers
Anyone wondering how to pull off being a solo game dev in today's economy:
Step 1: Get a job (possibly part time)
Step 2: Learn how to gamedev (optional)
Step 3: Develop a game (make it XR and multiplayer if you are a masochist like myself)
Step 4: Join a builder community (to avoid being alone and going insane, and to have enough playtesters on hand at any time; my recommendation is mesh.)
Step 5: Make videos/content of said game
Step 6: No idea, haven't made it this far
That's it. The picture shows how I do administrative work for my job, debug Colocation multiplayer in Unreal, and cut my next round of videos while the project is packaging. I've been running this setup for over a year now, and while it is hard juggling so many things, at least I can work on something I enjoy that lets me grow.
P.S.: If you are not as dumb as me, you would get a job with much less responsibility. (Being a CTO takes a lot of brainpower even as a consultant, who would have thought, right?)
-
Valerio Velardo
Autonomo • 17K followers
Here's a list of papers I'd study before planning or developing a generative music model. They'll give you a strong foundation. You'll also get an idea of the state of the art, and what to expect.
1. Museformer: Transformer with Fine- and Coarse-Grained Attention for Music Generation
2. Mousai: Text-to-Music Generation with Long-Context Latent Diffusion
3. MusicLM: Generating Music From Text
4. Simple and Controllable Music Generation
5. Masked Audio Generation using a Single Non-Autoregressive Transformer
6. Long-form music generation with latent diffusion
7. Video2Music: Suitable Music Generation from Videos Using an Affective Multimodal Transformer Model
8. ACE-Step: A Step Towards Music Generation Foundation Model
You have to be comfortable with the topics below to follow these papers:
- Transformers
- LLMs
- Diffusion / flow models
- Encoder architectures
- Spectrogram / Mel-spectrogram representations
💼 Check my music tech advisor service: https://lnkd.in/divEFrTA
📩 Follow my content: https://lnkd.in/d7MPrPQT
#GenAI #AIMusic #GenerativeMusic
-
Kha Ngo
Herond Browser • 722 followers
Why Chative.io Isn’t Just Another GPT Wrapper In today’s AI-driven hype cycle, many startups look like “GPT wrappers.” Chative.IO defies that label. Its AI agent isn’t a standalone chat—it’s embedded in real workflows. It unifies customer chats, automates abandoned cart reminders, delivers product recommendations from real data, personalizes follow-ups, powers dashboards, and scales across multiple sales channels. That’s what smart VC firms should recognize: deep integration over superficial signage. For VCs chasing winners, Chative.io is a standout: not just about hype—but execution, depth, and real returns for e-commerce businesses. #Startup #ScaleUp #VentureCapital #InvestInTech #AIStartups #SaaS #ProductLedGrowth #BusinessGrowth #ArtificialIntelligence #AIForBusiness #AIChatbot #ConversationalAI #Omnichannel #EcommerceTech #FutureOfWork #Automation
-
Aish /
Tech For Good • 7K followers
Woah, LinkedIn is filled with YC and vibe coding. Why are they pushing it so much? Turns out, it's not just hype, it's capital 😉 they haven't launched a "vibe fund", but they’ve quietly backed dozens of AI-native startups through their standard $500k checks. That’s tens of millions indirectly fueling vibe-coded teams. Nearly 25% of startups in recent batches are building with 95%+ of their code written by AI. Teams of 10 doing the work of 50. Some pulling $1M–$10M ARR. Solar (Lumenary) is one example. But it's bigger than one startup, it’s a pattern. Vibe coding isn’t a trend. It’s becoming a YC lens on the future of product velocity. Still early. But the signal is strong.
-
Ville Rauma
Empires Not Vampires… • 2K followers
Just a friendly note to all the people dissing GenAI whose "proof" is one or a few prompts producing a bad or uninspired result: you're not getting it. You're not going to learn or understand anything about LLMs, diffusion models, or the larger AI ecosystem with a few quick tests; furthermore, you're denying yourself the biggest X-factor I have ever seen in my 20 years in tech/games if you approach it like that. GenAI is a complex, growing, and fast-evolving set of new technologies and development practices which require daily use over an extended period to understand and use effectively. It's not just ChatGPT, and it's not just Gemini or Claude; this goes much deeper than that. Just as an example, effective GenAI use requires that you learn disciplined context building and control, and integrated design-documentation practices, as part of your daily development cycle.
-
Jesse Landry
Vention • 13K followers
$55 million in seed funding just hit the Radical AI books, led by RTX Ventures, with a crew stacked deeper than an ISS launch manifest: nVentures (NVIDIA's VC arm), Eni Next (of Eni S.p.A.), noa, Infinite Capital, and AlleyCorp. That's not just a round, that's a declaration. And the message is clear: #materialsscience isn't slow and academic anymore. It's autonomous, atomically precise, and scaling faster than most folks can spell "interatomic potential."
Founded in 2024 and now running full throttle from Midtown Manhattan, Radical AI is rebuilding the entire scientific process from the molecular level up. Joseph F. Krause, PhD, ex-Army National Guard CBRN Specialist, Army Research Lab fellow, and AlleyCorp investor, partnered with serial builder and AlleyCorp Board Partner Jorge Colindres to construct something we don't see enough of: a deep tech company with actual technical depth. Add Gerbrand Ceder to the founding trio, Samsung Distinguished Chair at University of California, Berkeley, 550+ publications, 130,000 citations, and more patents than most folks have unread emails, and you've got a team calibrated for moonshots, not MVPs.
Their product? A fully autonomous lab, an A-Lab, capable of running 50 to 100 times more materials science experiments than traditional setups, without needing a human to refill the coffee pot. It's AI, #robotics, and molecular #quantum mechanics choreographed to a single tempo. Inside is TorchSim, their PyTorch-native #simulation engine running atomistic calcs at speeds that make DFT look like dial-up. They've built their own supercomputer. Created the world's fastest #MachineLearning Interatomic Potential. Dropped the largest known #datasets in materials science. Then mapped out a nine-tool full-stack robotic lab and launched the OS to run it. And they did all this in a year.
Radical AI isn't playing the game. They're changing the field. Their platform doesn't treat discovery, creation, and deployment as a sequence. It's one flow, one intelligence, merging #physics with robotics and computation into an autonomous loop. That's not disruption. That's reconstitution. And the industries paying attention? #Defense. #Energy. #Biotech. #Space. #Semiconductors. All the arenas where slow science bottlenecks fast futures.
#Startups #StartupFunding #VentureCapital #SeedRound #AI #Materials #MaterialsTech #Research #Robotics #RobotTech #DeepTech #Infrastructure #Technology #Innovation #TechEcosystem #StartupEcosystem
If engineering peace of mind is what you crave, Vention is your zen.
-
Hugo Lebreton
Usermade • 814 followers
I learned to code right when LLMs showed up in 2023. I remember going back and forth between Stack Overflow (RIP) and ChatGPT, pasting between the two, half the time not sure which one was actually helping. Today, building software costs a fraction of what it did three years ago. That's going to change who hires developers. Accounting firms, nonprofits, event agencies... places that never had a developer on payroll are going to hire one. Someone who gets the business problem and ships the fix. Devs are becoming product managers, product managers are becoming devs.
-
Vic Singh
RRE Ventures • 5K followers
A Founder's Guide to The Long Build™ - Physical AI Thesis Part III
This one is for the builders. In this final post, my partner Will Porteous and I, along with insights from builders in the trenches and investors who play the long game, share our guide for founders building intelligence in the real world. A few core takeaways:
• Sim before steel: runway is precious, so model the system behavior first
• Capital intensity is not the enemy; misaligned capital is
• Build complete solutions before you talk platform
• Hardware opens the door, software wins the room
• Trust is earned through resilience, not hype
-
Justin Weiss
Machine Mythology • 5K followers
I just saw that California Governor Gavin Newsom signed SB 53, the first law in the country that requires big AI labs—OpenAI, Anthropic, Meta, and the usual suspects—to lay their cards on the table. They’ll now have to publicly disclose their safety practices and, more importantly, report any AI-related risks straight to the state. On the one hand, that's pretty reassuring. Finally, someone is asking these companies to be a little more transparent instead of just trusting them to self-police in a cutthroat race. On the other hand, California is just one state. The rest of the country is still a wide-open playing field where labs can push models with little oversight. That kind of patchwork could cause compliance headaches and friction between states. So, do companies slow down to comply with California’s rules, or do they just shift their riskiest work elsewhere? And if each state makes its own version of AI law, it could either keep the industry honest or tie it up in so much red tape that innovation drags. It makes you wonder: is SB 53 setting the standard for how AI will be governed, or is it about to spark a messy race where states compete to either regulate hard or adopt a "move fast and break things" attitude? Credit: Dr. Laura Caroli.
-
John Burkey
Wonderrush.ai • 6K followers
I don't think Apple remembers C, C++, Rust, or Java, which in almost all circumstances perform better than Swift, have more experts to hire, have better tools, etc. From swift.org: "The only language that can span from embedded and kernel, to server and apps. Swift excels no matter where it's used: from constrained environments like firmware where every byte counts, to cloud services handling billions of requests a day." And I like Swift, and use it every day! And we don't even use Swift anymore for native iOS UI development, because Flutter is so much faster to get stuff done, and you get a cross-platform UI out of it.
-
Zachary Alexander
Enduring Advantage • 2K followers
This talk surfaces a number of important topics. One of them is, do you have to fight the AI Revolution with the whole company? To channel Clayton Christensen, why not pick a small, underserved sub-market and build a wholly-owned AI-first spin-off to serve it? Then use what you've learned to improve the entire business. The reason is that Foundation models double their capabilities every 7 months. Chances are, the company will have to shrink to survive. --Zachary
-
Sai Kolasani
PlotViews • 1K followers
Anthropic just released Claude Sonnet 4.5, and the headline isn't a flashy demo; it's endurance + reliability. In launch tests and early coverage, teams have run it autonomously for ~30 hours (previously ~7), and Anthropic says it even coded a full chat app during trials (about 11,000 lines). That matters for real work where agents need to stay focused across apps, tabs, and long tasks. A couple of things that jumped out at me:
- Agents that finish, not just start. Reports (and Anthropic's post) say Claude 4.5 can work on its own for about 30 hours. In practice, that means you can give it a coding or research task, leave it alone, and come back later to see real progress, without babysitting it the whole time.
- Better at actually using a computer. Reports say Claude 4.5 clicks through websites and apps more accurately, opens the right tools, and sticks with the same plan for much longer.
Can't wait to try it out in Cursor. #AI #GenerativeAI #AIAgents #SoftwareEngineering #DeveloperTools #Anthropic #Claude
-
Ryan Estes
Hampton • 19K followers
LLMs are trained on transcripts. Podcast transcripts, specifically, are some of the cleanest, most context-rich training data available. Real conversations. Named experts. Specific claims. Timestamps. It's structured gold for model training. Which means if you're a founder and you're showing up in podcast conversations, not just as a host but as a referenced expert, you're quietly building a presence inside the models themselves. AEO is the practice of getting your name, your framework, your opinion cited when an LLM answers a question in your space. Podcast mentions are one of the most underrated vectors for this. Here's why it compounds:
→ Transcripts get indexed AND ingested
→ A third-party mention is a higher credibility signal than self-published content
→ Guest appearances create topical clustering around your name + category
→ The more niche the show, the more precise the association
The brands and founders who will dominate AI search in 6 months are the ones being talked about right now. #kitcaster
-
Hashir Abdi
FDA • 317 followers
Open source's real edge isn't code; it's the cost of curiosity. Winners don't just ship models: they lower the unit cost of trying. Two hidden taxes on curiosity: compute price and license friction. Remove both, and experiments explode. (Simple as that, no, really!)
Each fork is a cheap hypothesis test. Scale the tests, not the press release. Breakthroughs follow a power law: more shots → fatter tail → outsized wins. Standards aren't set by the "best model." They're set by whoever hosts the most experiments. (This repeated lesson is based on my long and cherished history with open source.)
Cheap power + open weights turns a nation into a Monte Carlo engine for discovery. Closed systems hoard capability; open ecosystems hoard error signals, and improve faster. (Fail fast, fail often is what OSS is all about, after all.) Protocol power outlives product power.
Quiet lesson: don't fight to win the leaderboard. Fight to own the gradient of the world's curiosity.
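The "more shots → fatter tail → outsized wins" claim can be checked with a few lines of Monte Carlo. A sketch under an illustrative assumption: breakthrough payoffs follow a Pareto distribution with tail index 1.5 (the distribution, parameters, and `expected_best` are invented for the example; the point is only that under a heavy tail, the expected best-of-n grows steeply with n).

```python
import random

def expected_best(n_experiments, alpha=1.5, runs=2000, seed=0):
    """Average, over many simulated ecosystems, of the single best payoff
    among n_experiments Pareto(alpha)-distributed attempts."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        total += max(rng.paretovariate(alpha) for _ in range(n_experiments))
    return total / runs

# More cheap experiments -> a fatter sampled tail -> a bigger best outcome.
print(expected_best(10), expected_best(1000))
```

For a tail index of 1.5 the expected maximum grows roughly like n^(2/3), so a hundredfold increase in experiments buys on the order of a twentyfold better best outcome: cheap curiosity pays superlinearly in the tail.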