Securing the World’s Software in the Age of AI
AI will be the most consequential technology of our lifetime. It may also be the most dangerous.
Not because AI itself is the threat, but because everything we depend on runs on software, and software has never been secure enough for what is coming. AI is already creating a clear divide between the organizations that harness its power and those that fail to confront its risks.
Daniele, Andrea, and I have spent our careers at the intersection of AI and security. We have seen what this technology can do, and we are excited by it. But we also see what is at stake. Current security was built for a slower, simpler world. That world is ending.
We started depthfirst because we believe security is one of the defining challenges of the age of abundant intelligence.
AI Only Moves in One Direction
AI capability is accelerating. Models are getting smarter, cheaper, and more accessible. The cost per unit of intelligence is dropping by orders of magnitude, and capabilities that required a frontier lab months ago now run on open-source models.
This will not slow down. The economics do not allow it. The research does not allow it. Every breakthrough compounds the next.
For security, what matters is that this acceleration is symmetric. Every gain in AI capability is a gain for attackers too. AI is already being used to find weaknesses faster, chain exploits more creatively, and run reconnaissance at unprecedented scale. These attacks are getting more sophisticated and cheaper to execute at the same time.
This is not a future risk. It is already happening.
Software Runs Everything
This would matter less if software were peripheral. It is not.
Hospitals, power grids, financial markets, telecommunications, supply chains, air traffic control. All of them run on deeply interconnected software. And every year, more of the infrastructure society depends on moves into code.
A vulnerability in the right place is not an inconvenience. It is a threat to the systems that keep modern life running. Securing software is no longer an IT problem. It is inseparable from securing society itself.
At the same time, the attack surface is exploding. AI is changing how software gets built. Engineering teams are shipping more code, more services, and more infrastructure changes than ever before. More code means more complexity. More complexity means more places where things can go wrong.
The attacks are getting smarter, the targets are getting more critical, and the surface area is growing. All at once. All accelerating.
The Gap That Cannot Be Closed by Humans
Current security was not built for this.
Most security tools analyze code the way a spell checker reads a novel. They can catch isolated mistakes on the page, but they cannot follow the plot. They do not understand how one component changes the meaning of another, how permissions shape what is possible, how systems interact over time, or how a detail that looks harmless in isolation becomes dangerous when it connects with something else later. The result is predictable: a flood of alerts, most of them false, while the vulnerabilities that actually matter go unnoticed.
Real security requires system-level reasoning. It means tracing how data moves across services, modeling how an attacker would chain weaknesses together, and proving, not guessing, that a vulnerability is exploitable. It means seeing the full picture: the code, the cloud infrastructure it runs on, and the live environment serving real users. Attackers do not limit themselves to one layer, and neither can defense. This is the work elite security researchers do.
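The cross-component reasoning described above can be made concrete with a toy example. In the sketch below (all names and the scenario are hypothetical, for illustration only), neither function looks obviously wrong on its own; only tracing data flow from the request handler into the query builder reveals an injection:

```python
# Contrived two-component example: each function seems reasonable in
# isolation, but chaining them yields SQL injection.
import sqlite3

def fetch_note(db, note_id: str):
    # Sink: assumes note_id is an internal, already-validated identifier.
    # Reviewed alone, that assumption is invisible.
    return db.execute(f"SELECT body FROM notes WHERE id = {note_id}").fetchall()

def handle_request(db, params: dict):
    # Source: forwards a user-controlled parameter straight to the sink.
    return fetch_note(db, params["id"])

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (id INTEGER, body TEXT, owner TEXT)")
db.executemany("INSERT INTO notes VALUES (?, ?, ?)",
               [(1, "public note", "alice"), (2, "secret note", "bob")])

# Benign use returns one row; the injected payload returns every row,
# including another user's data.
benign = handle_request(db, {"id": "1"})
injected = handle_request(db, {"id": "1 OR 1=1"})
print(len(benign), len(injected))  # → 1 2
```

A tool that inspects `fetch_note` in isolation sees a query over an "internal" ID; only following the data from `handle_request` exposes the exploit path.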
There are not enough of these people in the world. And even the best human teams cannot keep pace with the volume of software being produced today, let alone what is coming.
This is the gap. Attacks are scaling with AI. Software complexity is scaling with AI. Defense is still scaling with humans. That equation has only one outcome.
The Only Way Out
There is only one way to close this gap: AI that can do the security reasoning itself.
Not AI layered on top of existing tools. Not AI that generates more alerts for humans to review. AI that can reason through vulnerabilities, verify whether they are real, and determine what to do about them. AI that operates at the same speed and scale as the threats it faces.
We believe this is not only possible, but inevitable. Security is one of the few domains where training custom AI models has a natural advantage. The tasks are well-scoped and measurable. Given a codebase, you can objectively evaluate whether a model found a real vulnerability and produced a valid proof of exploitability. You can score its work. That means you can train it. The model tries, gets feedback, and improves. Every iteration makes it better.
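The scoring loop described above can be sketched in a few lines. This is a deliberately minimal, hypothetical setup, not depthfirst's actual training pipeline: the point is only that a finding is rewarded by executing its proof of exploitability, which gives an objective signal a model can be trained against.

```python
# Minimal sketch of a verifiable reward: score a candidate exploit by
# running it against the target, not by pattern-matching its description.

def vulnerable_parse(s: str) -> int:
    # Toy target under test: crashes (IndexError) on empty input.
    return ord(s[0])

def reward(candidate_input: str) -> float:
    # 1.0 only if the candidate demonstrably triggers a fault.
    try:
        vulnerable_parse(candidate_input)
        return 0.0          # ran fine: no verified vulnerability
    except Exception:
        return 1.0          # reproducible failure: verified finding

# A model's proposed exploits are scored objectively; feedback like this
# is what lets each training iteration improve on the last.
print(reward("hello"), reward(""))  # → 0.0 1.0
```

Because the reward comes from execution, a model cannot be credited for a plausible-sounding but false finding, which is exactly the property that makes the task trainable.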
But building this requires AI agents that can reason through complex, multi-step security problems. Very few teams in the world have the expertise to do it.
depthfirst
That is why we built depthfirst as an applied AI lab, backed by a research team with world-class expertise in reinforcement learning.
We train our own models to reason through vulnerabilities the way an expert would: forming hypotheses, testing them, iterating, and arriving at verified conclusions.
We recently saw what this approach can deliver. Our first vulnerability discovery agent, trained on cryptocurrency smart contracts, outperformed every frontier model we tested, including Opus 4.6, at a fraction of the cost. It was built on an open-source base model that is seven months old and 30x cheaper to run.
But training better models is only part of the problem. To find real vulnerabilities, those models need to reason across the full system: application code, cloud infrastructure, the enterprise environment, runtime behavior, and the connections between them. And finding vulnerabilities is not enough. The platform has to fit into how teams actually work, surfacing issues in the right place, generating fixes developers can apply directly, and verifying that those fixes hold. That is what the depthfirst platform does.
The early results confirm what we believed. Since launching in late 2025, Fortune 500 companies and fast-growing businesses like Ripple, Chainguard, ClickUp, Lovable, and Supabase have adopted depthfirst. We already achieve 8x higher vulnerability recall with 85% less noise than traditional tools. And because our platform understands the full context of a customer’s systems, it generates fix recommendations developers trust, with an 80% acceptance rate.
We are still early. But the signal is clear.
The Road Ahead
Our long-term ambition is to build AI systems capable of securing all types of software, from low-level systems to web, mobile, and AI agents, autonomously from design through production.
AI capability will keep accelerating. Attacks will keep getting smarter. Software will keep getting more complex and more critical. These trends are not reversing. The only variable is whether defense keeps up.
Securing the world’s software in the age of AI is one of the most important challenges of our generation.
We’re here to do it.
Qasim, Daniele & Andrea