The Real Housewives of Silicon Valley
17 Jul 2025
•
Baptiste Fernandez
Out of the lab, into the war room
The AI tech talent war has hit fever pitch. OpenAI’s $3 billion bid for Windsurf collapsed spectacularly last week when Google waded in with an offer of its own – but rather than buying the AI coding platform outright, Google DeepMind poached Windsurf CEO Varun Mohan, co-founder Douglas Chen, and key members of its R&D team.
Google is reportedly paying $2.4 billion to license Windsurf’s technology without acquiring any direct stake in the startup, while Cognition – the company behind AI coding agent Devin – has now stepped in to buy the remaining Windsurf assets.
The deal raises all manner of questions for rank-and-file workers deemed “not essential” in acqui-hire scenarios like this. As Decibel Partners’ founder Jon Sakoda noted, it “rewrites the traditional cap table ‘rules’” for early hires who had hoped to capitalize on stock options through a more traditional exit.

This has become a familiar story, as the main protagonists in the AI arms race wrestle for the leaders, engineers and researchers equipped to push the frontier of AI infrastructure and developer tooling.
While it’s true that such deals upend the traditional startup playbook around equity and acquisition outcomes, they also underscore a deeper shift: in today’s AI landscape, the most valuable asset isn’t just IP – it’s people.
“This isn’t a war for companies – it’s a war for minds,” Bain Capital partner Saanya Ojha wrote. “Big Tech is chasing the few individuals capable of bending the curve of progress. And they’re willing to pay billions to do it.”
It’s worth taking a look at some of the more notable hires and acqui-hires over the past year to get a handle on where the industry is placing its bets – and what that might reveal about the future shape of AI teams.
From papers to products: The new AI-hiring gold rush
In March 2024, Microsoft hired the core team behind AI startup Inflection AI. This included Mustafa Suleyman, one of DeepMind’s original founders, who transitioned over to lead the all-new Microsoft AI unit; and Karén Simonyan, who is now Chief Scientist of Microsoft AI. Other notable members to join Microsoft from Inflection included Jordan Hoffmann, formerly a research scientist at DeepMind.
In Suleyman, Microsoft hired someone with real experience in shipping AI – at DeepMind he led the applied AI division where he was charged with integrating the company’s technology across Google’s products. And in Simonyan and Hoffmann, Microsoft acquired raw model-building and research firepower.
This hints at a fundamental shift in what “tech talent” looks like at the AI frontier. Breakthrough research and novel models are still essential, of course, but integration and deployment at scale is equally critical – and often the bigger challenge. And this is reflected elsewhere across the AI landscape.
In June, Meta acquired a 49% stake in Scale AI, a company that provides data labeling, model evaluation, and infrastructure services for AI applications. Meta stopped short of a full acquisition, but it did hire Scale AI CEO Alexandr Wang to head up a new “superintelligence” division at Facebook’s parent company.
Wang has a reputation as a leader who understands both AI’s technical complexities and how to build a business – after all, he “scaled” Scale AI into a $14 billion core infrastructure provider. Co-leading Meta’s new superintelligence unit is another fresh hire: Nat Friedman, the ex-GitHub CEO who most recently ran early-stage VC firm NFDG in partnership with Daniel Gross, and who is widely regarded as one of the most influential people in AI today. Meta, for what it’s worth, has also hired Gross himself, luring him away from stealthy AI startup Safe Superintelligence.
It doesn’t stop there, either – far from it. Meta hired Trapit Bansal, followed by Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai – all from OpenAI, and all high-calibre AI and ML researchers. Meta has also now hired Apple’s Ruoming Pang, a distinguished engineer and AI executive with a background in developing foundational AI models.
As with Microsoft, this shows that Meta isn’t just betting on one kind of talent — it’s assembling a full-stack AI dream team, from research through production.
While OpenAI has been fending off Meta’s aggressive advances, the ChatGPT hitmaker has itself been busy poaching key personnel from rivals. This includes Uday Ruddarraju, director of infrastructure engineering at Elon Musk’s xAI; Mike Dalton, infrastructure engineer at xAI; David Lau, VP of software engineering at Tesla; and Angela Fan, an AI researcher at Meta.
The skills on show here are notable, insofar as they include specific infrastructure expertise. This gives OpenAI the hands-on experience to run bigger AI models more efficiently by improving the systems that handle compute, data, and deployment at scale. Fan, meanwhile, brings a deeper understanding of how large models behave during training – the data flows, memory usage, and compute patterns they need.
In short, these hires help OpenAI build systems that are better tuned for real-world model performance.
Also this month, Anysphere — maker of the AI coding assistant Cursor — made two standout hires from Anthropic. Boris Cherny, who led the engineering behind Claude Code, joins as chief architect and head of engineering, while Catherine Wu, the product manager behind Claude Code, steps in as head of product at Anysphere.
Together, these hires bring a blend of deep technical insight and product intuition – the kind of talent that knows how to turn cutting-edge AI into tools developers actually want to use. It’s also another signal that companies aren’t just hiring researchers; they’re prioritizing people who can shape AI into workflow-friendly products.
The talent taper
Coding cul-de-sac: A quiet crisis in skill-building
The talent-grab frenzy shows that companies are no longer hiring in isolated silos. They’re chasing multi-disciplinary teams that reflect a fundamental reality – AI products require deep integration between science, systems, and strategy.
But there’s another stark reality facing the less-experienced segment of the technology workforce. Many corporate leaders assume that AI will replace junior-level engineers and scientists, and are cutting headcount or freezing hiring accordingly. As one Senior Applied Scientist at Amazon put it, this jeopardizes a pipeline that requires consistent investment in junior talent to create the next generation of skilled engineers and leaders – talent that, by most estimations, we will still need even in an AI-driven future.

As more jobs succumb to automation, and junior engineering roles contract, a longer-term problem emerges: even if certain tasks can be automated today, what happens to the people who once relied on those roles to hone their craft and grow into senior talent?
On the upside, the Big Tech talent grab underscores that AI is now core infrastructure – it has moved far beyond the R&D playground it once was. That means companies are investing for the long term, and even if macro conditions are shaky, demand for strong talent is still there.
So the direction of travel is clear: the most valuable talent will be those who can bridge disciplines, combining technical depth with product thinking, systems awareness, and adaptability. For budding developers and AI practitioners, staying in the game means getting sharp on systems, savvy with models, and hands-on with the tools that glue it all together.
System-centric thinking – versus model-centric thinking – is the name of the game.