1752vc (Formerly Pegasus)

Venture Capital and Private Equity Principals

Santa Monica, CA 19,946 followers

More than a VC firm: we are your catalyst for growth.

About us

More than just a source of funding, we are a comprehensive platform designed to accelerate success for startups, investors, and emerging venture leaders. Our mission is to empower ambitious founders and aspiring investors by providing access to the tools, knowledge, and community they need to thrive in a competitive market. Through our programs, we help startups scale effectively, teach emerging angels the art of investing, and support venture fellows in building their careers.

Our accelerator selects the most promising startups from thousands of applicants annually, offering hands-on mentorship, operational guidance, and access to a robust network of 850+ investors and industry leaders. Meanwhile, our sales program equips founders with critical skills to grow revenue, while our angel and fellow programs develop the next generation of investors and venture capital professionals.

Website
https://www.1752.vc
Industry
Venture Capital and Private Equity Principals
Company size
2-10 employees
Headquarters
Santa Monica, CA
Type
Privately Held
Founded
2024
Specialties
Venture Capital, Startups, Accelerators, Fundraising, Mentors, and Investment

Updates

  • Check out this fascinating new AI paper 👀 "MSA: Memory Sparse Attention." It might be a big step toward true long-term memory in AI. Today's LLMs still struggle beyond ~1M tokens; MSA pushes that boundary to 100M tokens while keeping precision stable (less than 9% degradation 🤯) and compute efficient.

    A few things that stood out:
    • Linear scaling in both training and inference
    • 100M-token inference on just 2 GPUs
    • Memory that's actually modifiable and end-to-end trainable
    • Strong gains over RAG and memory agents on long-context tasks

    The bigger idea: decoupling memory from reasoning. Instead of cramming everything into attention, MSA treats memory as a scalable, structured system, closer to how humans operate over lifetime knowledge.

    If this direction holds, it could unlock:
    → Persistent AI personas (real "digital twins")
    → Massive-corpus reasoning without retrieval hacks
    → Long-horizon agents that actually remember

    This feels like a shift from simply expanding context windows toward fundamentally better memory architectures: less about how much you can fit in context, and more about how intelligently models store, update, and reason over information at scale. Paper link in the comments 👇️

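To make the "decoupling memory from reasoning" idea concrete, here is a minimal, purely illustrative sketch. It is not the MSA architecture from the paper; it only shows the general pattern of a writable external memory, queried by similarity, that grows independently of whatever fits in the model's attention context. All names (`ExternalMemory`, `write`, `read`) are hypothetical.

```python
import numpy as np

class ExternalMemory:
    """A writable store of (key, value) pairs, queried by vector similarity.

    Illustrative only: real systems like MSA integrate memory into the
    model end to end; this toy just separates storage from "reasoning".
    """

    def __init__(self, dim):
        self.keys = np.empty((0, dim))
        self.values = []

    def write(self, key, value):
        # Memory is modifiable: new entries can be appended at any time.
        self.keys = np.vstack([self.keys, key])
        self.values.append(value)

    def read(self, query, top_k=2):
        # Cosine similarity of the query against every stored key.
        sims = self.keys @ query / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-9
        )
        idx = np.argsort(-sims)[:top_k]
        return [self.values[i] for i in idx]

# Usage: the store grows without enlarging any "context window".
dim = 4
mem = ExternalMemory(dim)
rng = np.random.default_rng(0)
facts = ["fact A", "fact B", "fact C"]
keys = rng.normal(size=(3, dim))
for k, v in zip(keys, facts):
    mem.write(k, v)

retrieved = mem.read(keys[1], top_k=1)
print(retrieved)  # keys[1] is most similar to itself → ["fact B"]
```

The point of the design is that reads cost a similarity search over stored keys rather than attention over every past token, which is the kind of separation that makes 100M-token horizons even conceivable.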
  • Founders, your next move starts here. Applications are open.

    Ignite has supported a wide range of founders in refining their business models, strengthening unit economics, and navigating fundraising with greater clarity and conviction. We've consistently seen founders gain real traction through the program by doubling down on the fundamentals that truly drive progress.

    Ignite is our self-paced startup academy designed for builders who want to execute. It's a natural next step for founders coming out of programs like YC Startup School or Founder University who are looking for more structure and hands-on, practical guidance.

    What founders get:
    • A clear path to a $100K investment
    • $1M+ in perks and partner tools
    • Access to a strong founder and operator network
    • Proven systems and frameworks
    • The flexibility to execute at your own pace

    If you're serious about building and scaling, Ignite gives you the playbook and the support to do it right. Application in comments 👇

  • A huge moment for one of our portfolio companies: Genloop is officially number one on Spider2 🏆

    Spider2 is widely considered one of the toughest data-reasoning benchmarks, and Genloop is now leading globally with a 96.7% score. For perspective:
    Snowflake → 75%
    ByteDance → 84%
    AT&T → 86%

    This isn't a toy benchmark. Spider2 tests real-world complexity: 150+ databases, 13,000+ tables, and over 500,000 columns. This is exactly the problem Genloop was built for: turning messy, distributed enterprise data into reliable insights through true reasoning, not just querying.

    Proud to back Ayush Gupta and the team as they push the frontier forward. Read the article here: https://lnkd.in/gCYc2w4R

  • Memory is becoming a defining challenge, and a defining opportunity, in building intelligent agents. Agents aren't just tools anymore. They're evolving systems that operate over time, make decisions across contexts, and improve through accumulated experience. At the center of that evolution is memory.

    Memory is what allows agents to:
    • Retain context across interactions
    • Build on prior knowledge
    • Adapt behavior based on experience
    • Move from reactive to truly intelligent systems

    In our latest VC Unfiltered article, "Memory: The Architecture of Intelligence," we explore why memory is emerging as the core layer in modern AI, and how it's reshaping what it means to build intelligent agents. This shift doesn't just improve AI; it redefines intelligence itself. Read the full article 👇

  • Fascinating AI paper we came across this week 🔥 "MiroThinker-1.7 & H1: Towards Heavy-Duty Research Agents via Verification"

    MiroThinker introduces a new class of AI: research agents that can reason over long horizons and verify themselves along the way. At first glance, that might not sound new; agents already "self-check," right? Sort of. Most current systems rely on prompting tricks or after-the-fact reviews (generate → critique → regenerate), which is often fragile and inconsistent. What's different here:

    1. Structured multi-step thinking. The model is trained to plan, use tools, and stay coherent across complex workflows.
    2. Built-in verification (the real unlock). Verification is integrated into the reasoning process itself: checks happen at each step (local) and across the full reasoning chain (global). This helps prevent the biggest failure mode in AI today: errors compounding across long workflows.

    The bigger shift:
    → We're moving from "AI that answers" to AI that reasons over time
    → Systems that can sustain thinking across complex tasks
    → Outputs that are not just fluent, but internally consistent and evidence-backed

    AI isn't just getting smarter; it's getting more accountable.

    Research credits: MiroMind.ai; Song Bai; Lidong Zhao; Carson Chen; Guanzheng Chen; Yuntao Chen; Zhe Chen; Ziyi CHEN; Jifeng Dai; Xuan Dong; Yue Deng; Yu Fu; Junqi Ge

    This paper was surfaced by our portfolio company Genloop through their LLM Research Hub, which regularly highlights important developments in AI research. Paper linked in the comments 👇

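The local-versus-global verification pattern described above can be sketched in a few lines. This is a toy, not the MiroThinker implementation: each "step" here is just a claimed arithmetic fact, checked in isolation as it appears (local) and then as a whole chain against the final answer (global). All function names are hypothetical.

```python
# Toy verification loop: local checks per step, one global check at the end,
# so a single bad step is caught before later steps can build on it.

def local_check(step):
    """Verify one step in isolation: (a, b, claimed) must satisfy a + b == claimed."""
    a, b, claimed = step
    return a + b == claimed

def global_check(steps, expected_total):
    """Verify the whole chain: the step results must sum to the final answer."""
    return sum(claimed for _, _, claimed in steps) == expected_total

def run_agent(steps, expected_total):
    for i, step in enumerate(steps):
        if not local_check(step):
            return f"rejected at step {i}"   # error caught before it compounds
    if not global_check(steps, expected_total):
        return "rejected globally"
    return "accepted"

# Every step locally correct, and the chain globally consistent:
good = [(1, 2, 3), (3, 4, 7)]
print(run_agent(good, 10))   # accepted

# One wrong intermediate claim is rejected locally, not at the very end:
bad = [(1, 2, 3), (3, 4, 8)]
print(run_agent(bad, 11))    # rejected at step 1
```

The design point mirrors the post: after-the-fact review only sees the finished output, while interleaved local checks bound how far an early error can propagate.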
  • Founders, we're actively investing. We write $100K checks to early-stage companies. We're generalists with a strong focus on AI and frontier tech, and we're open to exceptional teams in any industry.

    At 1752vc (Formerly Pegasus), we don't just invest. We work alongside founders on GTM, sales, and execution to turn early momentum into real traction and fundable growth.

    If you're building, let's talk. Application in comments 👇

  • Thank you to University of Southern California for having us at this year's VCIC (Venture Capital Investment Competition). It was great to have our Principal, Ben C. Kahan, join as a judge. We always value the opportunity to stay close to the university ecosystem and support the next generation of investors.

    VCIC is a great hands-on way to learn venture capital. Student teams from ~60 universities across the U.S., Asia, and Europe step into the role of investors, evaluating real startups that are actively raising. At the USC regional, six teams spent the day watching founders pitch, running live diligence sessions, and ultimately presenting a term sheet and investment memo for the company they believed was most venture-backable.

    What stood out most was the intensity of the process: students were consistently pressure-tested on their assumptions while receiving direct, unfiltered feedback in real time. Impressive talent across all the teams, and congrats to UC Berkeley on taking first place!

    University of Southern California USC Marshall School of Business Lloyd Greif Center for Entrepreneurial Studies - USC Marshall VCIC (Venture Capital Investment Competition)

  • The biggest shift in AI isn't getting talked about enough. The next wave isn't about better responses; it's about systems that can learn, reason, and make decisions on their own.

    Part 2 of our 14-part AI series, "Cognition: How Intelligent Agents Learn, Reason, and Decide," explores how AI is moving beyond static outputs into dynamic systems that improve with context, memory, and experience. This is the real unlock: not better answers, but better thinking. Link to the full article in the comments 👇

