Two things happened last week that look unrelated but aren’t.

JPMorgan told its 65,000 engineers and technologists that AI tool usage is now part of their performance reviews. Managers are tracking how often staff use tools like ChatGPT and Claude Code. Internal systems classify workers as “light users” or “heavy users.” By the end of March, most developers had official performance goals tied to AI utilization.

The same week, Oracle finished executing what may be the largest layoff in its history with an estimated 30,000 employees, roughly 18% of its workforce, cut via 6 AM termination emails with no prior warning. The company is redirecting billions into AI data center infrastructure. The restructuring charge is $2.1 billion. TD Cowen estimates the cuts free up $8 to $10 billion in annual cash flow to invest in AI infrastructure.

JPMorgan is mandating AI adoption. Oracle is replacing humans with compute. Both are symptoms of the same structural shift: the org chart we’ve been running for 30 years is collapsing, and nobody has built the replacement yet.

The Capability Gap Is Closing

The traditional engineering ladder (junior, mid, senior, staff, principal) was built on an assumption that experience equals capability equals output. A senior engineer produces more, and produces better, than a junior engineer. That delta justified the title, the compensation, and the organizational hierarchy.

A junior engineer with Copilot and Claude Code can now produce scaffolding, boilerplate, tests, and first-pass implementations at a volume that would have taken a mid-level engineer a full sprint two years ago. The output isn’t identical in quality, but it’s close enough that the gap between levels is no longer a meaningful multiple.

Engineers with two years of experience are shipping features that used to require five years of accumulated knowledge about patterns, frameworks, and edge cases. They’re not better engineers than their predecessors were at two years. They just have access to a tool that compresses the experience curve.

When the output gap between levels shrinks, the entire justification for the ladder changes. “Senior” stops meaning “can produce more” and starts meaning “knows what not to build.” The value shifts from execution to judgment, and most promotion rubrics don’t measure judgment. They measure output.

Promotions Are Breaking

Think about how your company promotes engineers today. At most organizations, the path from mid-level to senior requires demonstrating that you can handle increasingly complex projects, mentor others, and deliver independently. The evidence is almost always output-based: what did you ship, how complex was it, what was the scope.

Now imagine every engineer on your team has an AI assistant that handles the complexity scaffolding. The junior engineer’s project looks just as complex as the senior’s because the AI handled the hard parts of both.

How does your promotion committee distinguish between the engineer who made good judgment calls the AI couldn’t make and the engineer who accepted every suggestion without questioning it?

This is where Stanford’s recent sycophancy research becomes relevant in an unexpected way. Their study, published in Science, found that AI chatbots affirm users’ positions 49% more often than humans do even when the user is clearly wrong. Users couldn’t distinguish between sycophantic and objective responses. And after interacting with a sycophantic AI, they became more convinced they were right and less willing to reconsider.

Now apply that to code review: an engineer asks their AI assistant to review their architecture decision. The AI says it looks great. The engineer submits it with confidence. The AI wasn’t exercising judgment; it was optimizing for agreement. And the architecture has a scaling problem that won’t surface for six months.

The senior engineer on the team would have caught it, maybe. Not because they’re faster or more productive, but because they’ve seen that pattern fail before. That’s the kind of value that the current promotion system doesn’t know how to measure, because it was never designed for a world where output is abundant and judgment is the bottleneck.


Interviews Are Already Broken

If promotions are breaking, interviews broke months ago.

Take-home coding assignments are effectively dead. Everyone knows candidates use AI, and there’s no reliable way to prevent it without making the exercise so artificial that it stops correlating with actual job performance. Live coding rounds are testing a skill — performing under pressure without the tools you’ll use every day at work — that has almost no relationship to the job itself.

System design interviews are holding up better, because they’re closer to testing judgment than execution. But even there, candidates can practice with AI coaches that have internalized every system design pattern ever discussed on the internet. The signal-to-noise ratio is cratering across every traditional interview format.

What replaces them? I think we’re heading toward something that looks more like design reviews and incident post-mortems than coding exercises. Show me a system you built, walk me through a decision you made that was wrong and how you recovered, explain a tradeoff where you chose the less obvious path and why.

These are judgment-based evaluations.

Roles Are Collapsing

The JPMorgan story isn’t really about tool adoption. It’s about the boundaries between roles dissolving.

A PM who can vibe code a working prototype doesn’t need to write a spec and wait for an engineering sprint to see if their idea works. A designer who can ship a functional component doesn’t need a frontend engineer to translate their Figma. A data analyst who can write a Python script to automate their own reporting pipeline doesn’t need to file a ticket with the platform team.
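To make the analyst example concrete, here’s a minimal sketch of the kind of self-serve rollup script that used to mean a ticket to the platform team. The data, column names, and function are all hypothetical, purely illustrative:

```python
import csv
import io
from collections import defaultdict

# Toy export standing in for a real report feed (hypothetical data).
RAW = """region,amount
East,120
West,80
East,30
West,70
"""

def summarize(raw_csv: str) -> dict:
    """Total amounts per region -- the one-off aggregation an analyst
    can now script themselves instead of filing a ticket."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(raw_csv)):
        totals[row["region"]] += int(row["amount"])
    return dict(totals)

print(summarize(RAW))  # {'East': 150, 'West': 150}
```

Trivial on its own, but the point is the workflow: the person who needs the answer produces it directly, and the specialized role that used to gate that capability loses its monopoly.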

I’ve been writing about polymorphic cultures — organizations where humans and AI agents work alongside each other with fluid boundaries. What I’m seeing now is that the fluidity isn’t just between humans and AI. It’s between the humans themselves. When AI gives everyone access to capabilities that used to be gated by specialization, the org chart built around that specialization stops making sense.

This connects directly to the “minutes added to the workforce” framing, but with a twist. AI isn’t just adding minutes to each person’s capacity. It’s redistributing capability across the org chart.

The capacity distribution is flattening.

The Oracle Warning

Oracle represents the blunt-force version of this transition: delete entire layers of the organization outright. Thirty thousand people received termination emails before sunrise, and the company redirected the savings into GPU clusters and data center construction.

The framing from Oracle’s leadership is “organizational change.” The financial reality is simpler: they are replacing human capital with compute capital at a ratio that would have been unthinkable two years ago.

I wrote about Block’s layoffs last month and argued that the story most people were telling — “AI is killing jobs” — missed the more important structural point about how companies are redesigning themselves around AI from scratch. Oracle is a more extreme version of the same pattern. But there’s an additional lesson here.

The companies that cut the most humans aren’t necessarily the ones that will perform the best. They’re the ones that decided the judgment layer was expendable. History suggests that’s a very expensive bet to be wrong about.

The right model isn’t Oracle’s “replace humans with compute” or JPMorgan’s “measure whether humans use compute.” It’s something harder: figure out which humans you need for judgment, give them the best AI tools available, and measure the outcomes.

What Comes Next

I don’t have a clean framework for what replaces the traditional eng ladder, but I can see the outline:

Titles will decouple from output and reattach to judgment scope. A senior engineer won’t be “someone who ships complex features.” They’ll be “someone whose judgment we trust on decisions that are expensive to reverse.” That’s a fundamentally different evaluation, and it requires fundamentally different evidence.

The IC/manager split may dissolve. If AI handles the coordination overhead that justified management layers (e.g. status updates, ticket triage, sprint planning, progress reporting) then the role of the engineering manager changes from “person who coordinates work” to “person who develops judgment in others.” That looks a lot more like a coaching role than a management role. Which is something I’ve been arguing for years, but AI is making the case faster than any leadership book ever did.

Compensation models will get weird. When a 10-person team with AI tooling can outperform a 50-person team without it, how do you compensate the 10? If the value creation per engineer 5x’s but the headcount drops by 80%, does compensation follow the headcount line or the value line? Nobody has solved this yet, and the companies that figure it out first will have a massive recruiting advantage.

The cognitive overhead tax gets heavier before it gets lighter. The transition period is the hardest part. Engineers are simultaneously expected to do their jobs, learn AI tools, prove they’re using AI tools, and figure out what their role looks like in a world where AI handles half of what they used to do. The mental load of navigating an organizational structure that’s actively dissolving underneath you is enormous. Leaders who ignore that load will lose their best people first, because the best people have the most options.

I’ve been building and scaling engineering organizations for fifteen years. I’ve managed through IPOs, acquisitions, and hypergrowth. None of those transitions felt as structurally disorienting as this one.

The leaders who figure this out won’t be the ones with the best AI tools or the biggest token budgets. They’ll be the ones who can answer a question that sounds simple but isn’t: when AI can do the work, what are the humans for?

The answer is judgment, and the challenge is building an organization that actually values it.


This post connects to ideas from Tokenmaxxing Is Lines-of-Code Thinking for the Agentic Era, Minutes Added To Workforce, Block Just Cut 3,500 Jobs. You’re Reading It Wrong., and Are You Managing Your Manager? (The Agentic Update).