
Knowledge Work Leverage

How to multiply the impact of knowledge work through systematic leverage

Last year I wrote a script that reformats physics simulation benchmarks into a standard table. It took maybe 40 minutes. Since then, that script has run over 300 times. It saves about 5 minutes per run. That’s 25 hours of work produced by 40 minutes of effort, a 37x return, and the number keeps growing.
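The arithmetic behind that return is worth making explicit. A minimal sketch, using only the figures above (the variable names are my own):

```python
# Return on a one-time automation investment, using the numbers above.
build_minutes = 40   # one-time cost to write the script
runs = 300           # times the script has run so far
saved_per_run = 5    # minutes of manual work avoided per run

total_saved = runs * saved_per_run   # 1500 minutes
roi = total_saved / build_minutes    # ~37.5x, and it grows with every run

print(f"Saved {total_saved / 60:.0f} hours for a {roi:.1f}x return")
```

The key property is that `roi` has no ceiling: the cost is fixed, but `runs` keeps increasing.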

Meanwhile, I also spent about 40 minutes in a meeting last year explaining the same benchmark format to a colleague verbally. That conversation helped exactly one person, one time. Same 40 minutes. Wildly different returns.

This is the core idea behind leverage in knowledge work, and most knowledge workers completely ignore it. They spend their careers doing the equivalent of that meeting: producing output that helps one person, once, then evaporates. They optimize for hours worked when they should be optimizing for impact per hour. The difference between a 1x knowledge worker and a 10x knowledge worker is almost never “works 10 times harder.” It’s “built systems that multiply every hour of effort by 10.”

The leverage framework (stolen and extended)

Naval Ravikant laid out a framework for leverage that’s become famous in tech circles. He identified four types:

| Type | Example | Permission needed? | Marginal cost of replication |
|---|---|---|---|
| Labor | Hiring people to do work | Yes (people must agree to work for you) | High (salaries) |
| Capital | Investing money behind problems | Yes (someone must give you capital) | Variable |
| Media | Books, podcasts, videos, blog posts | No | Near zero |
| Code | Software, scripts, automation | No | Near zero |

The critical distinction: labor and capital are permissioned leverage. You need someone’s approval. You need a boss to give you headcount. You need investors to write checks. You need a bank to approve a loan.

Media and code are permissionless leverage. Nobody has to approve your blog post. Nobody has to fund your open source project. Nobody has to authorize your shell script. You can create these at 2 AM in your apartment and deploy them to the world by morning.

Naval’s framework is from 2018. It’s missing a category that barely existed then but dominates now.

The fifth type: AI leverage

There are now five types of leverage available to knowledge workers, and the fifth type is unlike anything that came before it.

| Type | What it multiplies | 2020 state | 2026 state |
|---|---|---|---|
| Labor | Hours of human effort | Unchanged for centuries | Still permissioned, still expensive |
| Capital | Money behind a problem | Requires permission | Requires permission |
| Media | Distribution of ideas | YouTube, podcasts, blogs | Same + AI-generated content at scale |
| Code | Automation of procedures | Write it yourself or hire someone | AI writes most of it, you review and direct |
| AI | Cognition itself | GPT-3 existed, barely usable | Agents that write code, research, analyze, and execute autonomously |

AI leverage is different from code leverage in a fundamental way. Code automates procedures. You define the steps, the computer executes them. AI automates cognition. You define the goal, the AI figures out the steps. Code leverage requires you to know the solution. AI leverage requires you to know the problem.

This changes the economics of knowledge work completely. Before AI, the bottleneck was always skilled human cognition. You could have all the code and media leverage in the world, but someone still had to think through the hard problems, write the initial code, design the architecture, make the judgment calls. That person was the bottleneck, and their time was the constraint.

Now that constraint is loosening. Not disappearing. Loosening.

Linear work vs. leveraged work

Here’s a mental model that changed how I think about my time. Every task falls somewhere on this spectrum:

LINEAR                                          LEVERAGED
|                                               |
Answering one email      Writing docs           Building a tool
Attending a meeting      Writing a blog post    Creating a template
Explaining something     Recording a video      Writing a library
once, verbally           Teaching a class       Training an AI agent
                                                Writing an orchestration spec

Linear work produces value once. You answer the email, the value is delivered, it’s done. Leveraged work produces value repeatedly. You write the documentation, and every future person with that question gets their answer without your involvement.

The math on this is brutal. Suppose you’re a senior engineer who spends 60% of your time on linear work and 40% on leveraged work. Your colleague with identical skills spends 30% on linear work and 70% on leveraged work. After one year, you’ve both worked the same number of hours. But your colleague’s leveraged work is still producing value: the tools they built are still saving time, the documentation they wrote is still answering questions, the templates they created are still shipping features faster.

After two years, the gap is enormous. After five years, it’s career-defining. And the person who worked “harder” (more hours on linear work) ends up with less total impact.
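A toy model makes the compounding visible. This is a sketch under loudly stated assumptions, not a measurement: assume 2,000 working hours per year, an hour of linear work pays out once, and an hour of leveraged work pays out once now plus half a unit in every later year (the tool keeps running, the doc keeps answering):

```python
# Toy model: cumulative value of linear vs. leveraged time allocation.
# All parameters are illustrative assumptions, not data.
HOURS = 2000     # working hours per year
RESIDUAL = 0.5   # value per accumulated leveraged hour, per subsequent year

def cumulative_value(leveraged_frac: float, years: int) -> float:
    total = 0.0
    leveraged_bank = 0.0  # leveraged hours accumulated in prior years
    for _ in range(years):
        total += leveraged_bank * RESIDUAL  # past leveraged work still paying out
        total += HOURS                      # this year's hours, valued once each
        leveraged_bank += HOURS * leveraged_frac
    return total

for years in (1, 2, 5):
    a = cumulative_value(0.40, years)  # 40% leveraged time
    b = cumulative_value(0.70, years)  # 70% leveraged time
    print(f"{years}y: 40% -> {a:,.0f}, 70% -> {b:,.0f}")
```

In year one the two allocations are identical; by year five the 70% allocation is ahead by three full years' worth of residual output, and the gap widens every year after.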

The most productive knowledge workers I’ve observed share a specific habit: when they encounter a problem, their first instinct is not “how do I solve this?” It’s “how do I solve this in a way that solves it for everyone, forever?” The second question takes longer to answer. It also produces 100x the value.

The five sources of permanent leverage

Not all leveraged work is equal. Some leveraged work compounds over time. Other leveraged work provides a one-time multiplier and then stops. The compounding variety is what you want.

1. Writing

Andy Matuschak has a phrase I think about constantly: “knowledge work should accrete.” Most knowledge work doesn’t. You have a great insight in a meeting, it influences three people’s thinking for a week, and then it’s gone. You write a careful email explaining a design decision, it sits in someone’s inbox, never to be found again.

Writing that is published, organized, and discoverable is a different animal. A well-written technical blog post from 2019 still teaches people in 2026. A clear RFC from three years ago still prevents the same mistake from being made again. Richard Hamming put it this way: “Knowledge and productivity are like compound interest.” But the compounding only works if the knowledge is captured in a durable, accessible form.

The leverage from writing is not just about distribution (though distribution matters). It’s about thinking. Writing forces you to make your reasoning explicit. It exposes gaps you didn’t know you had. The process of writing a technical post about something you built will reliably improve the thing you built, because writing reveals the parts where your understanding is shallow.

Writing compounds in three ways:

  1. Direct leverage: the piece teaches 1,000 people something you’d otherwise explain one at a time
  2. Thinking leverage: the writing process itself improves your understanding
  3. Reputation leverage: good writing attracts opportunities, collaborators, and serendipity

2. Code and tools

Every script, library, template, and automation you build is a robot that works for you while you sleep. The 40-minute benchmark script from my opening paragraph is a small example. Larger examples include:

The internal tool that auto-generates API documentation from code comments. Took two weeks to build. Saves the team 4 hours per week. Over two years, that’s 416 hours saved, or about 10 full work weeks. The tool didn’t get tired. It didn’t ask for a raise. It didn’t take vacation. It just ran.

But the real leverage from code isn’t just time saved. It’s error prevention. The tool produces consistent output. It doesn’t forget a field. It doesn’t use the wrong format. It doesn’t get distracted. Every time a human does a repetitive task manually, they introduce variance. Every time a tool does it, the variance is zero.

The best engineers I know have a personal toolkit of scripts, templates, and automations that make them abnormally fast. Not because any single tool is revolutionary, but because the collection of 50 small tools eliminates 50 small frictions, and the time saved compounds.

3. Systems and processes

A system is a tool for groups of people. If code is a personal robot, a system is an organizational robot.

Consider the difference between “I review PRs carefully” and “We have a PR review checklist that catches 90% of common issues before a human reviewer looks at it.” The first scales with one person’s attention. The second scales with the organization.

Systems leverage is underrated because it’s invisible. When a system works, nobody notices. When the deploy process is smooth, nobody thinks about the three weeks someone spent making it smooth. When the incident response runbook handles the 3 AM page correctly, nobody appreciates the afternoon someone spent writing it. Systems produce leverage by preventing work, and prevented work is invisible.

The most valuable systems eliminate decisions, not just tasks. A style guide eliminates thousands of individual formatting decisions. A deploy checklist eliminates the decision about what to verify before shipping. An architecture decision record (ADR) eliminates re-litigating the same debate every six months.

4. Teaching and mentoring

When you teach one person something, you get a 1:1 return. When you teach them how to teach it to others, you get exponential returns.

The most leveraged form of teaching isn’t lectures or documentation (though both help). It’s changing how someone thinks about a class of problems. If you teach a junior engineer the specific fix for a bug, you’ve solved one bug. If you teach them how to read a stack trace, how to form hypotheses, how to systematically narrow down the root cause, you’ve solved every future bug they’ll encounter.

This is the difference between giving someone a fish and teaching them to fish, and it’s the reason that some senior engineers are worth 10x their salary. They don’t produce 10x the code. They make everyone around them 2x more effective, and in a team of 10, that’s equivalent to adding 10 more engineers.

5. AI orchestration (the new frontier)

This is the newest form of leverage, and the one I’m most excited about: the ability to direct AI agents to perform knowledge work on your behalf.

A well-written prompt is leverage. A well-designed agent spec is more leverage. A full orchestration protocol for a team of AI agents is maximum leverage. You write the spec once, and it can execute complex cognitive work repeatedly.

I recently built a 14-agent system to audit a physics engine. The orchestration protocol (the document describing how the agents coordinate) took me maybe 15 hours to write. The system then ran for 3 hours and produced the most thorough audit I’ve ever received. If I had done the same audit manually, it would have taken weeks. And the protocol is reusable. With modifications, I can run a similar audit on a different codebase.

The leverage math:

Manual audit:                    ~80 hours of my time
Agent system (first run):        15 hours (protocol) + 3 hours (execution) = 18 hours
Agent system (subsequent runs):  2 hours (adaptation) + 3 hours (execution) = 5 hours

That’s a 4.4x multiplier on the first run and a 16x multiplier on subsequent runs. And those numbers assume I’m doing the manual audit at full speed without breaks, which is unrealistic.
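Those multipliers fall straight out of the hour estimates above (variable names are mine):

```python
# Multipliers implied by the hour estimates above.
manual_hours = 80        # estimated time for a fully manual audit
first_run = 15 + 3       # protocol writing + agent execution
subsequent_run = 2 + 3   # protocol adaptation + agent execution

print(f"first run:  {manual_hours / first_run:.1f}x")       # 80/18 ≈ 4.4x
print(f"later runs: {manual_hours / subsequent_run:.1f}x")  # 80/5  = 16.0x
```

Note the structure: the one-time cost (writing the protocol) is amortized, so the multiplier on every subsequent run depends only on the cheap adaptation step.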

Why most people optimize for the wrong thing

There’s a reason most knowledge workers don’t think this way, and it’s not stupidity. It’s incentives.

Most workplaces reward visible activity. Answering emails quickly. Being in meetings. Responding to Slack within minutes. Shipping the ticket in front of you. These are all linear work. They’re visible, countable, and immediately attributable to you.

Leveraged work is the opposite. Building a tool takes a week and produces no visible output until it’s done. Writing documentation feels like you’re “not doing real work.” Designing a system means you’re thinking instead of typing, and thinking looks identical to staring out the window.

The result is a systematic bias toward linear work in most organizations. People who do the most visible work get promoted, even when people who do the most leveraged work create more total value.

| What gets rewarded | What creates value |
|---|---|
| Responding to Slack fast | Building systems that prevent the question from being asked |
| Closing tickets quickly | Creating tools that auto-close entire categories of tickets |
| Being in every meeting | Writing the document that makes the meeting unnecessary |
| Working late | Building automation that does the work while you sleep |
| Knowing the answer | Writing the answer where everyone can find it |

This is not a rant about bad management (though bad management makes it worse). It’s a structural problem. Leveraged work has delayed, diffuse, and hard-to-attribute returns. Linear work has immediate, concentrated, and easy-to-attribute returns. Every incentive system in every company is biased toward the latter.

If you want to do leveraged work, you have to do it on purpose, often despite your incentives, not because of them.

The 10x myth and the leverage reality

The “10x programmer” is one of the most debated ideas in software engineering. The original research (Sackman, Erikson, and Grant, 1968, later extended by Curtis, Mills, DeMarco and Lister, and others) found 10-fold differences in productivity between programmers with the same levels of experience. This finding has been replicated across multiple studies over multiple decades. It is one of the most consistently reproduced results in software engineering research.

But the framing is wrong. The question isn’t “are some programmers 10x faster at writing code?” The question is “why does one person’s output create 10x more value than another’s?”

The answer is almost always leverage. The 10x programmer isn’t typing 10x faster. They’re:

  1. Solving the right problem (leverage through problem selection)
  2. Building reusable solutions instead of one-off fixes (leverage through code)
  3. Writing it up so others learn from it (leverage through writing)
  4. Creating tools that prevent entire categories of future bugs (leverage through systems)
  5. Making architectural decisions that save thousands of hours downstream (leverage through design)

Each of these is a leverage multiplier. Stack five 2x multipliers and you get 32x. The “10x programmer” is really a “2x at five different leverage points” programmer.

Construx’s Steve McConnell, who has studied this for decades, points out that the variation exists at the team level too: the best teams are 10x more productive than the worst. The team-level leverage comes from the same place. Clear processes (systems leverage). Shared tools (code leverage). Good documentation (writing leverage). Knowledge sharing (teaching leverage).

The AI leverage landscape in 2026

The state of AI-assisted knowledge work has changed dramatically in the last two years. Let me give you the specific lay of the land, because vague “AI is changing everything” doesn’t help anyone decide what to actually do.

The tools

| Tool | What it’s good at | Where it falls short |
|---|---|---|
| Claude Code | Deep reasoning, multi-file refactors, agent orchestration, 200K context window. SWE-bench score of 80.9%. Hit $1B annualized revenue by Nov 2025. | Token-expensive for long sessions. Agent Teams feature is powerful but requires careful spec writing. |
| Cursor | Flow state. Autocomplete feels natural, inline chat is fast, small-to-medium scoped tasks handled well. | Struggles with large architectural changes. Less capable on novel reasoning. |
| GitHub Copilot | Ubiquitous, frictionless inline suggestions, good enough for most repo-level tasks. Agent and workspace features improved significantly in 2025. | Lower ceiling on complex tasks. Enterprise-friendly but not power-user optimized. |
| Codex (OpenAI) | Async task execution, sandboxed environments, PR generation. | Newer entrant, less battle-tested at scale. |

About 82% of developers now use AI coding assistants daily or weekly. These tools are no longer early-adopter territory. They’re baseline.

The nuance nobody talks about

Here’s where it gets interesting. METR (a research organization focused on measuring AI capabilities) ran a randomized controlled trial in 2025: 16 experienced open-source developers completed 246 tasks on repositories they’d maintained for an average of 5 years. Half the tasks were randomly assigned to allow AI tools, half were not.

The result: developers using AI took 19% longer to complete tasks.

Read that again. Experienced developers, working on their own codebases, got slower with AI assistance. Not faster. Slower.

And here’s the kicker: after the study, those same developers estimated they’d been 20% faster with AI. Their perception was the exact opposite of reality.

Before you conclude “AI tools are useless,” understand why this happened. The developers accepted less than 44% of AI-generated code. They spent time prompting, reviewing, testing, and ultimately rejecting AI output. For experts working on familiar codebases, the overhead of managing AI exceeded the benefit.

This result tells you something crucial about AI leverage: it’s not automatic. You don’t get leverage just by turning on Copilot. You get leverage by knowing when and how to use AI, and equally importantly, when not to use it.

When AI creates leverage (and when it doesn’t)

HIGH AI LEVERAGE                       LOW AI LEVERAGE
|                                      |
Unfamiliar codebase                    Your own code you wrote last week
Boilerplate generation                 Novel algorithm design
Test writing                           Subtle debugging
Documentation generation               Architecture decisions
Refactoring patterns                   Performance optimization
Exploring solution spaces              Security-critical code review
First drafts of anything               Final review of anything

The pattern: AI creates leverage when you need breadth, speed, or exploration. It destroys leverage when you need depth, precision, or judgment. The METR study caught experienced developers in the “low leverage” zone: they already had depth and context. The AI was adding noise to a signal they already had.

Contrast this with the other side of the research. Studies on junior developers and developers working on unfamiliar codebases consistently show large productivity gains, sometimes 55% faster task completion. The AI fills knowledge gaps they actually have.

The practical implication: the leverage of AI is inversely proportional to your existing expertise in the specific task. For tasks where you’re a beginner, AI is a massive multiplier. For tasks where you’re an expert, AI is often overhead.

The real AI leverage play

Given everything above, the highest-leverage use of AI isn’t “turn on autocomplete and hope for the best.” It’s strategic deployment at specific leverage points:

1. Expanding your surface area. You’re an expert in Python but need to write some Rust. AI closes the gap between “I understand the concepts” and “I know the syntax and idioms.” This used to require days of documentation reading. Now it takes minutes.

2. First drafts at scale. Need 15 test cases? Need API documentation for 30 endpoints? Need to refactor the same pattern across 50 files? AI handles the 80% that’s mechanical, you handle the 20% that requires judgment.

3. Agent orchestration. This is the big one. Instead of using AI as a co-pilot (reactive, one task at a time), you use it as a team (proactive, parallel, specialized). The orchestration spec I wrote for my physics engine audit is an example. You define the cognitive architecture, the AI executes it.

Gartner forecasts that by the end of 2026, 40% of enterprise applications will feature task-specific AI agents, up from less than 5% in 2025. The shift from “AI as autocomplete” to “AI as agent team” is the single biggest leverage expansion happening right now.

4. Leveraging AI to create leverage. This is meta, but it’s the most important pattern. Use AI to help you write the documentation. Use AI to help you build the internal tool. Use AI to help you create the templates and systems. AI doesn’t just provide leverage directly. It accelerates the creation of all other forms of leverage.

The leverage stack: putting it all together

The highest-performing knowledge workers I know (and the pattern I try to follow) stack leverage types. They don’t pick one. They combine them.

                    ┌──────────────────┐
                    │ AI Orchestration │
                    │ (amplifies all   │
                    │  layers below)   │
                    └────────┬─────────┘
                             │
               ┌─────────────┴─────────────┐
               │                           │
        ┌──────┴──────┐             ┌──────┴──────┐
        │  Systems    │             │  Teaching   │
        │  & Process  │             │ & Mentoring │
        └──────┬──────┘             └──────┬──────┘
               │                           │
        ┌──────┴───────────────────────────┴──────┐
        │                                         │
  ┌─────┴─────┐                           ┌───────┴──────┐
  │   Code    │                           │   Writing    │
  │  & Tools  │                           │   & Media    │
  └───────────┘                           └──────────────┘

Writing and code form the base layer. They’re the most accessible forms of permissionless leverage. Anyone can start writing today. Anyone can start building tools today. No permission needed.

Systems and teaching sit on top. They require more organizational context but produce greater multiplier effects. A tool helps one person. A system helps a team. Teaching helps everyone you’ll ever work with.

AI orchestration sits on top because it amplifies everything beneath it. AI helps you write faster. AI helps you code faster. AI helps you build systems faster. AI helps you create better teaching materials. It’s a leverage multiplier applied to leverage multipliers.

The stack compounds. A person who writes well, builds tools, creates systems, teaches others, and orchestrates AI agents isn’t just additively more productive. They’re multiplicatively more productive. Each layer amplifies the others.

How to actually shift your ratio

Knowing that leveraged work matters is easy. Actually doing more of it is hard. Here’s what’s worked for me.

The 70/30 rule

Try to spend at least 30% of your working time on leveraged work. Not 100% (you still have to answer emails and attend some meetings). But at least 30%. If you’re currently at 10%, getting to 30% will transform your output within a year.

Block the time. Put “tool building” or “documentation” or “writing” on your calendar. Protect it the way you’d protect a meeting with your CEO. Because the compounding returns of that time will exceed anything you’d accomplish by responding to Slack 15 minutes faster.

The “build the second time” heuristic

The first time you encounter a problem, solve it. The second time you encounter the same problem, build the tool, write the document, or create the system. Don’t do it the first time (you don’t know if it’ll recur). Don’t wait until the tenth time (you’ve already wasted nine repetitions). The second time is the sweet spot.

The “would this help someone in six months?” test

Before you solve a problem, ask: “If someone has this exact problem six months from now, will my solution help them?” If the answer is no (because you solved it in a meeting, or in a private Slack DM, or in your head), consider solving it in a way where the answer is yes. Write it up. Build a tool. Create a template.

Keep a leverage journal

At the end of each week, write down: what did I do this week that will still be producing value in six months? If the answer is “nothing,” you spent the whole week on linear work. That’s not sustainable. Not because you’ll burn out (though you might), but because you’re leaving compounding returns on the table.

Use AI to accelerate the shift

The biggest barrier to leveraged work is the upfront time investment. Writing documentation takes time. Building tools takes time. Creating systems takes time. And when you’re buried in linear work, carving out that time feels impossible.

AI collapses the upfront cost. That internal tool that would have taken two weeks to build? It takes three days with AI assistance. That documentation that would have taken a full day to write? Two hours. That orchestration spec that would have required a week of design? Two days.

AI doesn’t change the value of leveraged work. It changes the cost. And when the cost drops, the ROI becomes impossible to ignore.

The compounding machine

Here’s what the long game looks like. You spend a year building leverage. You write 20 technical posts. You build 30 small tools. You create 5 organizational systems. You mentor 3 junior engineers. You write 2 agent orchestration specs.

By the end of the year, those 20 posts are still teaching. Those 30 tools are still running. Those 5 systems are still preventing work. Those 3 engineers are still growing (and starting to mentor others). Those 2 agent specs are still executing.

You haven’t done 10x more work than someone who spent the year on linear tasks. You’ve done the same amount of work. But your work is still working. Theirs is done.

That’s the compounding machine. It doesn’t require genius. It doesn’t require working 80-hour weeks. It doesn’t require permission from anyone. It requires a consistent, deliberate choice to spend your time on things that keep producing value after you’re done spending time on them.

Richard Hamming gave a talk in 1986 called “You and Your Research.” He said: “Knowledge and productivity are like compound interest. Given two people of approximately the same ability, the one who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.”

He was talking about thinking. But the principle applies to leverage more broadly. Given two people of approximately the same ability, the one who consistently creates leveraged output will be tremendously more impactful over a career. Not because they’re smarter. Not because they work harder. Because they learned to make their work outlast the day it was done.
