In 2015, I stood in front of a room full of engineers at Code Driven NYC and laid out a formula for how individual contributors could take charge of their own careers. The framework was simple: Dreams, Alignment, Goals, Feedback. Know what you want, make sure you and your boss want the same things, set specific objectives, and demand mentorship.
That talk came from my experience as VP of Engineering at ShopKeep, where I watched talented engineers stall out not because they lacked ability but because they had no system for drawing great management out of their managers. But that talk was written for a world where the IC’s primary currency was execution. The manager was the gatekeeper of career progression, and your job was to navigate that relationship skillfully.
What Changed
I have had three very different seats since giving that talk. Each role changed my perspective on the manager-IC dynamic, but nothing has changed it as fundamentally as watching AI reshape what individual contributors can actually do.
In Awe in an AI World, I wrote about individuals operating at enterprise scale. A single engineer building, testing, deploying, and monitoring systems that would have required a squad two years ago. When an IC can operate at that scale, the entire premise of “managing your manager” shifts. You are no longer trying to prove you can handle more scope. The agents already expanded your scope. You are trying to prove something harder: that you know what to build and why it matters. The original formula needs an update.
Step 1: Dreams → Capabilities
The 2015 version: Know what you want. CTO? Staff engineer? Founder? Pick a destination and work toward it.
The problem now: The destinations themselves are shifting. When I gave that talk, nobody had “Chief AI Officer” on their career roadmap. The IC who wanted to become a staff engineer had a well-understood set of milestones: own larger systems, mentor more people, drive cross-team initiatives. Those milestones made sense when seniority was defined by the complexity of systems you could hold in your head and the code you could ship with your hands.
AI collapsed that definition. A mid-level engineer with strong agentic workflows can now ship what used to require staff-level scope. The junior-senior gap that I talked about on the InfoQ podcast is closing fast, at least on the execution dimension.
So “know what you want” is no longer enough. The better question is: what can you do that AI cannot?
This is not a doom question. AI cannot exercise judgment about what is worth building. It cannot navigate organizational politics. It cannot build trust with a skeptical stakeholder. It cannot decide when the technically elegant solution is wrong because it does not account for how the sales team actually works. It cannot hold the tension between shipping fast and building right, and know when to lean which way.
These are capabilities, not job titles. And the IC who develops them deliberately, rather than hoping they emerge as a byproduct of writing a lot of code, is the one who will thrive regardless of how the career ladders get redrawn.
The updated step one is not “dream about a title.” It is to identify the capabilities that compound over time and that AI amplifies rather than replaces. Then build your career around developing those capabilities, not around climbing a ladder that might not exist in three years.
Step 2: Alignment → Co-Piloting
The 2015 version: Find the overlap between your ambitions and your manager’s priorities.
The problem now: The alignment conversation has a third participant, and it changes the power dynamic entirely.
In the old model, your manager had something you needed: context, resources, air cover, promotion authority. Alignment was about trading your execution for their support. You mentor junior devs (which helps your manager’s team metrics) and in return you get management experience on your resume.
In the agentic model, something subtle but important has shifted. The IC who masters AI-powered workflows might have more operational insight than their manager. You are the one spending eight hours a day working alongside agents, discovering what they are good at, where they break, how to orchestrate them for complex tasks. Your manager is getting this information secondhand, in standups and status updates.
This means alignment is no longer a negotiation between someone with context and someone with execution. It is a co-piloting relationship where both people bring essential and different knowledge.
Your manager brings organizational awareness, strategic framing, stakeholder relationships, and pattern recognition from managing through multiple cycles. You bring hands-on knowledge of what is actually possible with the tools, where the real bottlenecks are, and what the team’s AI-augmented capacity can realistically deliver.
The updated conversation is not “here is what I want, and here is how it helps you.” It is “here is what I am seeing on the ground with these tools, and here is how it changes what we can commit to.” That is a fundamentally more valuable conversation, and it positions you as a strategic partner rather than a skilled executor asking for a promotion.
In my last post on the Four Burner Problem, I talked about the cognitive overhead tax, the mental cost of all the background processes running in your head. Your manager has their own version of that tax, and a huge part of it comes from uncertainty about what the team can actually deliver. An IC who reduces that uncertainty by surfacing clear, grounded signal about AI-augmented capacity is providing value that no amount of shipped features can match.
Step 3: Goals → Intent
The 2015 version: Set measurable, time-bound objectives at both micro and macro levels. Ship feature X this sprint. Complete four mentorship sessions this quarter.
The problem now: When AI collapses execution timelines, output-based goals lose their meaning as differentiators.
I wrote about this indirectly in The Next Platform Giant Is Hiding in Plain Sight when I described the Intent Layer, the idea that in an agentic world, the code is increasingly an artifact and the intent is the real intellectual property. When I tell Claude Code to refactor a billing module, the value is not in the code it produces. It is in knowing that the billing module needed refactoring, understanding the business constraints, and articulating what “done” looks like clearly enough for an agent to execute.
The 2015 version of goals was about proving you could execute. Ship this. Build that. Hit this velocity target. In a world where an AI agent can ship and build at machine speed, those goals are table stakes. Every IC has access to the same execution leverage.
The updated goals should center on intent quality. Can you identify the right problems? Can you articulate requirements clearly enough for agents to execute without extensive rework? Can you evaluate whether agent-generated output actually solves the problem you described, or just the problem you literally asked for? Can you make architectural decisions that hold up as the system scales?
Concretely, this means the goal-setting conversation with your manager shifts from “I will ship features X, Y, and Z this quarter” to something more like “I will own the technical strategy for domain X, including identifying the highest-impact problems, defining success criteria, directing agent-assisted implementation, and validating outcomes.” The deliverable is not the code, it is the judgment applied to the code.
And here is the part that would have sounded absurd in 2015: the IC who can write a perfect problem statement might be more valuable than the IC who can write perfect code. Because the problem statement scales through agents. The code does not scale beyond the person writing it.
Step 4: Feedback → Signal
The 2015 version: Demand regular feedback, and use your manager as a mentor who has “probably done your role” and “has more experience than you.”
The problem now: Your manager probably has not done your current role, because the role is being redefined faster than anyone’s experience can keep up with.
This is the step that changes the most, and it is the one I think about the most because I have been on both sides.
When I gave the 2015 talk, I could draw on my own experience as an engineer to give specific, actionable feedback to my reports. I had done their job. I knew what good looked like. The mentorship model assumed a knowledge gradient flowing downhill from manager to IC.
That gradient has flattened in the agentic era. A manager who last wrote production code in 2020 does not have a visceral understanding of what it means to work alongside AI agents all day. They do not know the failure modes, the productivity patterns, the new categories of risk. The IC living in that world eight hours a day has operational knowledge the manager simply cannot have.
This does not make the manager useless. Far from it. What it means is that feedback becomes bidirectional in a way it never was before. The manager provides signal about organizational direction, political dynamics, career positioning, and pattern recognition across teams. The IC provides signal about ground-level reality, tool capabilities, workflow innovations, and emerging risks.
The word I keep coming back to is signal, not feedback. In a world where AI generates enormous amounts of output and data, the scarcest resource is signal: knowing what matters, what is noise, what to pay attention to, what to ignore. A good manager helps you find signal in the organizational layer. A good IC helps their manager find signal in the technical layer.
The updated version of step four is not “make sure your manager is a mentor.” It is to build a signal exchange where you are both learning from each other in real time. Regular one-on-ones still matter. But the content shifts from “here is how you are doing against your goals” to “here is what I am seeing, here is what you are seeing, and here is what we think it means together.”
The Updated Formula
Ten years later, here is the revised framework:
Capabilities, not dreams. Build a career around what compounds and what AI amplifies, not around titles that might not exist tomorrow.
Co-piloting, not alignment. You and your manager are both navigating uncharted territory. Bring your operational knowledge to the partnership, not just your ambitions.
Intent, not output. Your value is in articulating what to build and why. The execution layer is increasingly shared with agents.
Signal, not feedback. In a noisy world, the most valuable thing you can do for each other is surface what actually matters.
The Meta Point
The original talk was about empowering ICs to take ownership of their careers instead of waiting for their manager to develop them. That principle has not changed. If anything, it matters more now.
In a world where AI agents are expanding what individuals can do, the IC who waits passively for direction will be outpaced by the one who actively shapes what they work on, how they work, and what capabilities they develop. The managers I work with today are overwhelmed. They are trying to figure out how AI changes team structures, career ladders, performance evaluation, and hiring, all at once. They need ICs who bring clarity to the relationship, not ICs who add to the noise.
The best engineers I am working with right now are not the ones writing the most code. They are the ones who have figured out what deserves to be built and can articulate it clearly enough that a combination of agents and humans can execute against it. They are managing their managers not through politics but through signal. They are making their manager’s job easier by being the person in the room who understands both the technology and the problem deeply enough to connect them.
That is the update to managing your manager: stop trying to impress them with output. Start making them smarter with signal.
The original “Are You Managing Your Manager?” talk was given at Code Driven NYC in November 2015 and later published as a blog post. This post builds on ideas from Awe in an AI World, The Next Platform Giant Is Hiding in Plain Sight, and Can AI Solve the Four Burner Problem?.
Read more:
- Can AI Solve the Four Burner Problem?
- The Next Platform Giant Is Hiding in Plain Sight
- Awe in an AI World
- Are You Managing Your Manager? (2015 Original)
About the Author:
Duncan Grazier is a CTO focused on AI. He has scaled companies from 9 to 300+ engineers through IPO and helped lead multiple acquisitions. Duncan thinks, talks and writes about practical AI and the changing model for engineering leadership.