Zero Shot’s first signal: ex‑OpenAI engineers will back applied AI and steer clear of vibe coding, digital twins, and shaky robotics data
Zero Shot, a new VC launched by former OpenAI engineers, has quietly closed $20 million toward a $100 million target and is sending a single clear market signal: deployable, revenue‑adjacent AI gets priority; “vibe” ideas and speculative foundational plays do not.
A technical gatekeeper, not a hype fund
Founding partners Evan Morikawa (ex‑head of applied engineering at OpenAI), Andrew Mayne (early prompt engineer turned deployment consultant), Shawn Jain (ex‑OpenAI researcher and generative‑AI founder), Kelly Kovacs, and Brett Rounsaville built Zero Shot on insider technical experience. With that team, plus advisors who previously led people, communications, and product at OpenAI (and who receive carried interest), investment judgments will be informed by hands‑on deployment tradeoffs rather than headline narratives.
Their stated rule is concrete: prioritize startups that can be integrated into customer workflows or factory floors on measurable timelines, and avoid categories the fund judges to face structural technical or market barriers. The examples it names explicitly are “vibe coding” platforms, firms promising robust transfer from synthetic “embodiment” datasets to real robots, and many “digital twin” plays where language models already deliver the claimed workspace value.
How to verify Zero Shot’s screening in practice
Check three observable signals when evaluating whether Zero Shot’s thesis is being applied: (1) portfolio selection—early checks should show companies with paying pilots, customer integrations, or hardware proof‑of‑concepts; (2) co‑investor set—participation from other deployment‑focused investors such as OpenAI’s fund or Mira Murati indicates alignment; and (3) governance—advisors with operational roles and carry imply active post‑investment support rather than passive endorsement.
To make those checks faster, use the table below, which maps the sectors Zero Shot has flagged to the fund’s reasons for caution and to the concrete signals that would overturn its judgment for a given startup.
| Sector | Why Zero Shot is wary | What would change the fund’s view |
|---|---|---|
| Vibe coding / subscription creativity tools | Likely to be subsumed by general large models’ code/text generation; weak defensibility. | Strong enterprise lock‑in, proprietary domain data, or unique inference latency/accuracy that general LLMs cannot match. |
| Embodiment training data for robotics | High sim‑to‑real transfer risk; uncertain timelines to functional robots. | Replicable real‑world robot deployments with measurable task success across sites. |
| Digital twins | Complex, expensive to maintain; LLMs can often provide similar decision support without full twin fidelity. | Clear ROI from twin integration (e.g., reduced downtime measured in field trials) or indispensability to regulated workflows. |
Early portfolio choices that illustrate the thesis
Alongside news of the $20 million close, Zero Shot announced three investments: Worktrace AI (enterprise AI management software), Foundry Robotics (factory robotics that augment human operators), and one undisclosed company. Both named companies raised seed rounds that included other notable backers; reporting shows participation by OpenAI’s fund and Mira Murati, which signals shared confidence in deployment‑first approaches rather than speculative model construction.
Governance choices mirror that operational bent: the fund’s advisors, former heads of people, communications, and product at OpenAI, receive carried interest and are expected to provide hands‑on guidance. That arrangement increases the likelihood Zero Shot will push portfolio companies toward measurable deployments, customer pilots, and integration work rather than open‑ended research milestones.
What founders and investors should do differently now
Founders targeting Zero Shot or similar technically driven funds should make two shifts immediately: present live integrations, pilot metrics, and customer economics rather than roadmap promises; and document transferability and maintenance costs for any simulated or synthetic training approach. For example, a robotics startup relying on “embodiment” data should show multi‑site robot performance benchmarks, not just simulation accuracy.
For other VCs and corporate investors, the practical decision lens is to prioritize proof of production readiness: require real‑world KPIs, ask for co‑investor references (look for participation from deployment‑oriented funds), and set a 12–24 month checkpoint to evaluate whether an investment’s path to integration is progressing. Watch how Zero Shot deploys its remaining capital and whether its portfolio skews exclusively toward applied plays or eventually reaches into foundational research—that will be the clearest test of the fund’s long‑term thesis.
Short Q&A
Is Zero Shot just another AI fund chasing OpenAI credentials? No. The founders’ operational backgrounds (Morikawa, Mayne, Jain) and the advisor carry structure point to an investment practice built around deployment expertise, not reputation alone.
When will the fund deploy the rest of its $100M target? The fund has closed $20M so far; pace is unknown. Track subsequent closings and follow‑on investments in current portfolio companies for the best signal of tempo.
How should a founder get their attention? Demonstrate customer pilots with measurable business outcomes, documented integration plans, and technical evidence that your solution can’t be replaced by a vanilla LLM or off‑the‑shelf cloud service.