Former GitHub CEO Launches Checkpoints To Document AI Generated Code Logic

Thomas Dohmke spent four years running GitHub, the platform where most of the world’s open-source code lives. He watched as AI tools began writing software faster than humans could reasonably review it, flooding projects with the low-quality output the industry calls “AI slop.” Now, he is betting that the current systems for managing code, which rely on humans reading line by line, are about to break under the pressure of machine-generated volume. His new answer is a startup called Entire.

Key Takeaways

  • Former GitHub CEO Thomas Dohmke raised a $60 million seed round for startup Entire.
  • Entire reached a $300 million valuation following its initial funding round.
  • Entire’s first product, Checkpoints, provides open-source management for AI-generated code.

Dohmke left GitHub in August 2025 to solve a specific problem: volume. We are in an “agent boom,” where AI programs write software autonomously. The issue is that our current tools were built for human speed, not machine speed. When an AI submits a thousand lines of code, a human developer still has to figure out if it works.

Entire has raised $60 million to build a new set of tools designed specifically for this era. The raise, the largest seed round ever recorded for a developer tool startup, values the company at $300 million right out of the gate.

The big deal

If you do not write code, you might not realize how much manual labor goes into maintaining software. When a contributor submits code to a project, a human has to review it, test it, and approve it. This system works when humans are writing the code. It breaks down when AI agents flood a project with suggestions.

Open-source projects are currently overwhelmed by what the industry calls “AI slop”—poorly designed, often unusable code generated by AI models. It looks like code, but it often fails in subtle ways. Maintainers are wasting hours sifting through this noise.

Entire aims to fix this by changing how we track AI contributions. Instead of just dumping code into a file, the system tracks the logic behind it. If this works, it could stop open-source projects from drowning in low-quality automated spam and make AI coding agents actually useful for large enterprises.

How it works

Entire is building a “universal semantic reasoning layer” that sits on top of existing code databases. Its first product, Checkpoints, is an open-source tool that changes what gets saved when an AI writes a program.

Think of it like grading a math test. If a student just writes the final answer, the teacher has no idea if they understood the material or just guessed. You need to see their work. Checkpoints forces the AI to “show its work” by saving the prompt, the transcript, and the context alongside the final code.

By pairing the software with the context that created it, a human developer can look at a confusing piece of code and instantly see why the AI wrote it that way. The system also includes a database compatible with Git, the industry-standard version control system, so it plugs into workflows developers already use.
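To make the idea concrete, here is a minimal sketch of what a “checkpoint” record could look like. This is a hypothetical illustration, not Entire’s actual format or API: the function name, fields, and content-addressing scheme are all assumptions, chosen to show the principle of pairing an AI-generated change with the prompt, transcript, and context that produced it.

```python
import hashlib
import json

def make_checkpoint(prompt, transcript, diff, model="example-model"):
    """Hypothetical sketch (not Entire's real schema): bundle an
    AI-generated diff with the context that produced it, so a reviewer
    can later see *why* the code looks the way it does."""
    record = {
        "model": model,        # which agent/model produced the change
        "prompt": prompt,      # the instruction given to the agent
        "transcript": transcript,  # the agent's reasoning transcript
        "diff": diff,          # the resulting code change
    }
    # Content-address the record Git-style so it could be stored
    # alongside the commit it explains (e.g. via git notes).
    payload = json.dumps(record, sort_keys=True).encode()
    record["checkpoint_id"] = hashlib.sha1(payload).hexdigest()
    return record

cp = make_checkpoint(
    prompt="Add retry logic to the HTTP client",
    transcript="Agent: I wrapped the request in a 3-attempt loop...",
    diff="+    for attempt in range(3):\n+        ...",
)
print(cp["checkpoint_id"])
```

Because the record is content-addressed, identical context always yields the same identifier, which is what lets such metadata slot naturally into a Git-compatible store.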

The catch

Entire has not yet disclosed specific technical limitations or performance figures for the new tool. The clearest challenge, though, is the sheer scale of the problem: the manual machinery of software production (issues, repositories, and pull requests) was never designed for AI, and Entire is trying to retrofit a massive, established ecosystem.

There is also the matter of the “AI slop” itself. Checkpoints documents where code came from, but the company has not explained how the tool would prevent bad code from being written in the first place; it helps humans manage and review the volume rather than reduce it.

What now?

Entire is releasing Checkpoints immediately as an open-source tool. Developers can begin using it to track the “thought process” of their AI coding agents. The company plans to use its funding to build out the rest of its platform, including a user interface designed specifically for humans to collaborate with AI agents.

If you manage a software team, you should watch to see if this tool actually reduces review time or just adds more data to look at. The next test is whether major open-source projects adopt this standard to filter out the noise.
