Interdisciplinary Thinking
Why the most interesting problems live at the intersection of fields
In 2024, half of the Nobel Prize in Chemistry went to Demis Hassabis and John Jumper for AlphaFold2 (the other half to David Baker, for computational protein design). Not structural biologists. A neuroscience PhD and a physicist-turned-computational-chemist who had spent years working on AI systems at DeepMind. They solved one of the hardest problems in structural biology by importing ideas from deep learning, attention mechanisms, and geometric reasoning that had nothing to do with chemistry as a discipline. The protein folding community had been stuck for decades. The breakthrough came from outside.
This pattern repeats so often that you’d think people would stop being surprised by it. They don’t. And the reason they don’t is more interesting than the pattern itself.
The graveyard of well-defined problems
There’s a useful distinction that David Epstein draws in Range, borrowed from psychologist Robin Hogarth: “kind” learning environments versus “wicked” ones. In kind environments, the rules are clear, the feedback is immediate, and patterns repeat. Chess. Golf. Classical music performance. Deliberate practice works here. Ten thousand hours works here. Early specialization works here.
Most of the problems worth solving aren’t kind. They’re wicked. The rules shift. The feedback is delayed, noisy, or misleading. The relevant variables aren’t obvious. Climate change is wicked. Product design is wicked. Building software systems that actually serve human needs is wicked. Making sense of geopolitics is wicked.
Epstein’s central argument is that in wicked environments, breadth beats depth. Not because depth doesn’t matter, but because depth alone leaves you pattern-matching against a library of solutions that may not contain the right one. The person who has seen problems in five different domains recognizes structural similarities that the specialist, buried in the conventions of a single field, simply cannot see.
This isn’t motivational hand-waving. Philip Tetlock ran one of the longest-running forecasting studies in social science, tracking 284 experts making 28,000 predictions over 18 years. His finding: the best forecasters weren’t the ones with the deepest expertise in a single domain. They were the “foxes,” people who drew on diverse strands of evidence, thought probabilistically, and synthesized ideas from multiple frameworks. The “hedgehogs,” experts with one grand theory that they extended to every domain, performed worse than chance. Worse than dart-throwing chimps, as Tetlock put it.
The hedgehogs were more confident, more articulate, and more wrong. The foxes were more uncertain, more self-correcting, and more accurate.
Why institutions kill cross-pollination
If interdisciplinary thinking is so powerful, why is it so rare? The answer is structural, not cognitive. People aren’t bad at thinking across domains. The systems they operate in actively prevent it.
Universities are organized into departments. Departments have budgets tied to student enrollment. Faculty get tenure by publishing in journals that serve specific disciplines. A cognitive scientist who publishes in an AI journal and a philosophy journal and a design journal has a weaker tenure case than someone who publishes the same number of papers in one discipline’s top three venues. The incentive structure punishes breadth.
Industry is the same. Job titles are narrow. “Machine learning engineer” is different from “product designer” is different from “cognitive scientist.” Hiring pipelines filter for depth: years of experience in X, familiarity with Y framework, publications in Z subfield. The person who spent three years doing computational neuroscience, two years doing product design, and two years doing systems engineering gets rejected by all three pipelines for not having “enough” experience in any one.
Funding agencies compound the problem. NIH study sections are organized by disease. NSF panels are organized by discipline. A proposal that sits at the intersection of two fields often gets reviewed by experts in neither, or worse, by experts in one who dismiss the other as irrelevant.
The result is a system that produces deep specialists who are excellent at solving problems that look like previous problems, and terrible at solving problems that don’t. The system works fine in kind environments. It fails catastrophically in wicked ones.
And here’s the thing. The ratio of wicked problems to kind problems is increasing, not decreasing. As domains mature, the easy problems within each domain get solved first. What’s left are the problems that live at the boundaries, the ones that require combining tools and concepts from multiple fields in ways that nobody in any single field would think to try.
Structure-mapping: the cognitive engine of cross-domain transfer
The question of how people actually transfer ideas across domains has a real answer, not just anecdotes. Dedre Gentner, a cognitive scientist at Northwestern, spent decades studying analogical reasoning. Her structure-mapping theory explains both why cross-domain transfer is powerful and why it’s hard.
The core idea: when you encounter a new problem, your brain searches memory for situations with similar relational structure. Not surface features. Structure. A heart is “like a pump” not because they look alike, but because the relational pattern (fluid pushed through channels by a periodic compression mechanism) is the same.
Gentner’s research reveals an important asymmetry. Novices retrieve analogies based on surface similarity. They remember previous problems that looked similar: same domain, same objects, same context. Experts retrieve analogies based on structural similarity. They remember previous problems that worked the same way, even if they came from completely different domains.
This means that expertise in one domain, real expertise, not just familiarity, creates retrieval cues that fire when you encounter structurally similar problems in other domains. A physicist who deeply understands damped harmonic oscillators can recognize that pattern in economic cycles, population dynamics, neural oscillations, and mechanical suspension systems. The surface features are completely different. The relational structure is identical.
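The surface/structure distinction can be made concrete with a toy sketch. This is not Gentner’s actual structure-mapping engine, just an illustration of the idea: rename the objects to anonymous role variables, and only the relational pattern survives. All names here are hypothetical.

```python
def structural_signature(facts):
    """Abstract away the objects; keep the pattern of relations.

    Objects are renamed to role variables (x0, x1, ...) in order of
    first appearance, so two situations match iff their relational
    structure matches, regardless of surface features.
    """
    roles = {}
    signature = []
    for relation, *args in facts:
        renamed = []
        for obj in args:
            roles.setdefault(obj, f"x{len(roles)}")
            renamed.append(roles[obj])
        signature.append((relation, tuple(renamed)))
    return frozenset(signature)

def surface_overlap(a, b):
    """Count shared surface features (objects) between two situations."""
    objects = lambda facts: {obj for _, *args in facts for obj in args}
    return len(objects(a) & objects(b))

# A heart and a pump: zero shared objects, identical relational structure.
heart = [("pushes", "heart", "blood"),
         ("flows_through", "blood", "vessels"),
         ("periodic", "heart")]
pump  = [("pushes", "pump", "water"),
         ("flows_through", "water", "pipes"),
         ("periodic", "pump")]
```

The novice’s retrieval keys on `surface_overlap` (which is zero here); the expert’s retrieval keys on `structural_signature` (which matches exactly).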
But there’s a catch, and it’s a big one. Gentner’s lab studies show that people routinely fail to retrieve structurally similar cases when the surface features differ. You can teach someone a principle using one example, give them a structurally identical problem in a different domain five minutes later, and they won’t make the connection. Unless you explicitly prompt them to compare.
This is why reading broadly, by itself, isn’t enough. The comparison has to be active. You have to deliberately ask: “Where have I seen this pattern before? What’s the underlying structure? What domain does this remind me of, and why?” Without that active comparison, the knowledge sits in separate compartments and never connects.
Gentner calls the active version “analogical encoding.” When people are asked to compare two cases from different domains and extract the common principle, they transfer that principle to new problems at dramatically higher rates than people who study the same cases independently. The act of comparison creates an abstract schema that’s indexed by structure, not surface features, and that schema becomes retrievable across domains.
This has a practical implication that I take seriously: the value of reading across domains isn’t in the reading itself. It’s in the deliberate comparison work you do afterward. Five minutes of asking “how does this connect to what I already know?” after reading a paper from an unfamiliar field is worth more than an hour of passive reading.
The historical record is not subtle about this
The evidence for cross-domain breakthroughs isn’t limited to one or two famous examples. It’s pervasive.
| Breakthrough | Domains combined | The connection |
|---|---|---|
| Darwin’s theory of natural selection | Geology (Lyell’s gradualism) + economics (Malthus on population) + taxonomy + biogeography | Malthus’s insight about population pressure, applied to species rather than humans, gave Darwin the mechanism of selection |
| Shannon’s information theory | Boolean algebra + electrical engineering + thermodynamics | Shannon recognized that switching circuits could implement Boolean logic, then borrowed entropy from thermodynamics as a measure of information |
| PageRank (Google) | Citation analysis from academic publishing + linear algebra + web graph theory | Brin and Page imported the citation-as-endorsement metaphor from academic publishing to rank web pages |
| CRISPR gene editing | Microbiology (bacterial immune systems) + molecular biology + bioinformatics | Jennifer Doudna and Emmanuelle Charpentier recognized that a bacterial defense mechanism could be repurposed as a precision editing tool |
| AlphaFold2 | Attention mechanisms from NLP + geometric deep learning + structural biology | Imported transformer-style attention and equivariant neural networks into a domain that had been using physics-based energy minimization |
| Transformer architecture | Sequence-to-sequence translation + attention (from neuroscience/cognitive science via Bahdanau) + positional encoding | Vaswani et al. eliminated recurrence entirely, borrowing the attention concept and scaling it in ways the original cognitive science framing never imagined |
Notice the pattern. In every case, the breakthrough didn’t come from going deeper into one domain. It came from importing a concept or mechanism from another domain that reframed the problem. Darwin didn’t become a better taxonomist. He imported an economic idea (population pressure and competition for scarce resources) and applied it to biology. Shannon didn’t become a better electrical engineer. He imported Boolean algebra and thermodynamic entropy and created an entirely new field.
Herbert Simon is maybe the purest example of an interdisciplinary thinker in the 20th century. He won the Nobel Prize in Economics for bounded rationality, the A.M. Turing Award for his contributions to AI, and the National Medal of Science for his work in cognitive psychology. He also made foundational contributions to political science, organizational theory, and philosophy of science. His key insight, satisficing (that real decision-makers settle for “good enough” rather than searching for the optimum), came directly from observing how actual humans make decisions across multiple institutional contexts. A pure economist would never have formulated it, because the assumption of rational optimization was too deeply embedded in the discipline’s framework.
Doug Engelbart, who invented the computer mouse, hypertext, networked computing, and the graphical user interface, was trained as an electrical engineer but spent his career thinking about “augmenting human intellect.” His research program was fundamentally interdisciplinary: he combined ideas from radar systems, cybernetics, library science, and cognitive psychology. The people who were “just” building better hardware couldn’t see the product he saw, because they weren’t thinking about the human cognitive system that the hardware was supposed to serve.
Foxes, hedgehogs, and the forecasting evidence
Tetlock’s research deserves deeper treatment because it puts actual numbers on the advantage of interdisciplinary thinking.
In his original study (published as Expert Political Judgment in 2005), Tetlock classified experts as foxes or hedgehogs based on their cognitive style.
| Trait | Hedgehog | Fox |
|---|---|---|
| Core approach | One big theory, applied broadly | Many frameworks, applied selectively |
| Response to disconfirming evidence | Explain it away | Update beliefs |
| Confidence calibration | Overconfident | Better calibrated |
| Prediction accuracy | Worse than chance (in long-range forecasts) | Significantly better than chance |
| Style of reasoning | Deductive from first principles | Inductive, integrative, probabilistic |
| Relationship to own expertise | Extend it everywhere | Recognize its limits |
The hedgehogs were specialists. They had deep knowledge of one domain and one theoretical framework, and they applied that framework to everything. The foxes were integrators. They pulled from multiple frameworks, weighted evidence from different sources, and updated their beliefs when reality disagreed with their predictions.
The effect size was not small. Foxes significantly outperformed hedgehogs on calibration (their confidence matched their accuracy) and discrimination (they could tell the difference between more and less likely outcomes). Hedgehogs were actually worse than a simple algorithm that just predicted “no change” from the status quo.
In the follow-up work (Superforecasting, 2015), Tetlock identified the characteristics of the very best forecasters. They were:
- Actively open-minded (sought out disconfirming evidence)
- Comfortable with numbers and probabilities
- Pragmatic rather than ideological
- Intellectually curious across many domains
- Willing to update incrementally rather than in dramatic reversals
Notice that “deep expertise in the relevant domain” isn’t on the list. The best forecasters were people who could synthesize information from many domains, not people who knew one domain exhaustively.
This doesn’t mean expertise is useless. Tetlock is careful about this. You need to know enough about a domain to understand its key dynamics. But beyond a moderate level of domain knowledge, additional depth shows diminishing returns while additional breadth continues to pay off. The function is something like:
Forecasting accuracy = f(breadth, depth)

Where:
- the marginal value of breadth remains positive for a long time
- the marginal value of depth plateaus quickly
- the interaction term (breadth × depth) is positive: breadth is more valuable when you have moderate depth
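That qualitative shape can be sketched as a toy function. The functional form and the coefficients below are illustrative assumptions chosen to exhibit the claimed shape; they are not fitted to Tetlock’s data:

```python
import math

def forecasting_accuracy(breadth, depth):
    """Toy model of the qualitative claims above (illustrative only).

    - log term: marginal value of breadth shrinks slowly but stays positive
    - saturating exponential: marginal value of depth plateaus quickly
    - product term: breadth pays off more once you have moderate depth
    """
    breadth_term = math.log1p(breadth)              # slowly diminishing
    depth_term = 1.0 - math.exp(-depth)             # plateaus fast
    interaction = 0.25 * breadth_term * depth_term  # positive interaction
    return breadth_term + depth_term + interaction

# The shape matches the claims: piling on depth beyond a moderate level
# barely moves the score, while adding breadth keeps helping.
deep_specialist = forecasting_accuracy(breadth=1, depth=10)
moderate_generalist = forecasting_accuracy(breadth=5, depth=2)
```

Under this sketch the moderate generalist outscores the deep specialist, and going from depth 3 to depth 10 is worth far less than adding one more domain of breadth.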
The optimal profile isn’t maximum breadth or maximum depth. It’s moderate-to-deep expertise in a primary domain combined with genuine competence across several adjacent domains. Which brings us to the shape metaphors.
Beyond T-shaped: the architecture of a useful generalist
The T-shaped model, attributed to Tim Brown and IDEO, is the standard framework for thinking about interdisciplinary skills. The vertical bar is deep expertise in one domain. The horizontal bar is broad familiarity across many domains. It’s a useful starting point, but it has a problem: it implies that the breadth is shallow. A thin horizontal line. Just enough to communicate with specialists in other fields. Not enough to actually do work in those fields.
A more useful model is pi-shaped (or comb-shaped, depending on who you ask): deep expertise in two or three domains, with genuine breadth across several more.
```
         Breadth (genuine competence, not just awareness)
 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
      ┃                  ┃                  ┃
      ┃                  ┃                  ┃
      ┃                  ┃                  ┃
   Domain A           Domain B           Domain C
  (primary)          (secondary)         (tertiary)
```

- T-shaped: one deep bar, thin horizontal
- Pi-shaped: two deep bars, thicker horizontal
- Comb-shaped: three or more deep bars, substantial horizontal
The difference between the T and the pi/comb matters for a specific reason. Surface-level familiarity with a field doesn’t activate the structural retrieval that Gentner’s research describes. You need enough depth in a second domain to have internalized its relational patterns, its core abstractions, its characteristic failure modes. Then those patterns become available as retrieval cues when you encounter structurally similar problems in your primary domain.
This means the path to useful interdisciplinary thinking isn’t “read a blog post about everything.” It’s “get genuinely good at two or three things that are far enough apart to have different core abstractions, and then actively look for structural connections between them.”
The “far enough apart” condition matters. Cognitive science and neuroscience are close. The abstractions overlap heavily. Cognitive science and product design are farther apart. Cognitive science and thermodynamics are very far apart. The farther apart the domains, the more surprising (and potentially valuable) the structural connections, but also the harder they are to find.
The Medici Effect and its limits
Frans Johansson’s Medici Effect argues that innovation happens at the intersection of diverse fields, cultures, and industries. He’s right about the mechanism, but the framing can be misleading. It makes cross-domain innovation sound like something that happens by accident, like you wander into a Renaissance-era salon, overhear a conversation between an artist and an engineer, and suddenly invent perspective drawing.
The reality is more disciplined than that. The people who consistently produce cross-domain breakthroughs aren’t dilettantes who dabble in everything. They’re people who invest serious effort in multiple fields and then do the hard cognitive work of finding structural connections.
Linus Pauling, who won two Nobel Prizes (Chemistry and Peace), was asked how he came up with good ideas. His answer: “You have a lot of ideas and throw away the bad ones.” But what he didn’t say explicitly is that having “a lot of ideas” requires having a lot of source material. Pauling read voraciously across chemistry, physics, biology, and medicine. His structural chemistry work drew on quantum mechanics (physics), his vitamin C research drew on epidemiology (medicine), and his work on sickle-cell anemia drew on genetics (biology). Each domain gave him a different set of conceptual tools, and the breakthroughs came from applying tools from one domain to problems in another.
The limit of the Medici Effect framing is that it under-emphasizes the depth requirement. You can’t import ideas from a field you don’t understand. You can import surface-level metaphors (and people do this all the time, usually badly), but surface-level metaphors don’t transfer the relational structure that actually solves problems. When someone says “let’s apply natural selection to our business strategy,” that’s usually a surface metaphor that doesn’t carry any of the actual mechanism (variation, selection pressure, heritability, fitness landscapes) into the business context. When someone says “this optimization landscape has local minima that our gradient-based approach will get stuck in, so we need something like simulated annealing,” that’s a structural transfer from statistical mechanics that actually works.
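The simulated-annealing example is worth making concrete, because the transferred mechanism fits in a few lines. This is a minimal textbook-style sketch (not any particular library’s implementation, and the cooling schedule and parameters are arbitrary choices): the piece of statistical-mechanics structure being imported is the rule that worse moves are occasionally accepted, with probability governed by a falling “temperature,” which is exactly what lets the search escape local minima.

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, temp0=2.0, step_size=0.5, seed=0):
    """Minimize f by a random walk that sometimes accepts worse moves.

    Acceptance probability exp(-delta / temperature) is the Metropolis
    rule borrowed from statistical mechanics; as temperature falls, the
    walk hardens into greedy descent.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    for step in range(steps):
        temperature = temp0 * (1 - step / steps) + 1e-9  # linear cooling
        candidate = x + rng.uniform(-step_size, step_size)
        fc = f(candidate)
        delta = fc - fx
        if delta < 0 or rng.random() < math.exp(-delta / temperature):
            x, fx = candidate, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
    return best_x, best_fx

# A landscape riddled with local minima; the global minimum is at x = 0.
bumpy = lambda x: x * x + 3.0 * math.sin(5.0 * x) ** 2

x_best, f_best = simulated_annealing(bumpy, x0=4.0)
```

A pure greedy descent started at `x0=4.0` would settle into the nearest local basin; the annealed walk, by accepting uphill moves early, ends up near the global minimum. That mechanism, not the metaphor, is what transfers.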
The difference between useful cross-domain thinking and useless cross-domain thinking is depth. You need enough depth to transfer structure, not just labels.
A personal system for reading across fields
Since this is a first-person piece, let me describe the system that works for me. The fields I read across are cognitive science, AI/ML, product design, philosophy (epistemology and philosophy of mind), and physics. That’s probably too many. But each one feeds the others in ways that justify the time investment.
Here’s the practical system.
Intake is structured by time horizon. Daily, I read papers and blog posts (30-60 minutes). Weekly, I go deeper into one topic that caught my attention during the daily reading. Monthly, I read a book or long-form treatment that synthesizes across a field.
Notes are organized by concept, not by source. When I read something, the note doesn’t go into a folder labeled “cognitive science” or “AI.” It goes into a note organized around the concept or pattern. If Gentner’s structure-mapping theory and a paper on transfer learning in neural networks are both about “how learned representations generalize to new domains,” they go in the same note. This forces the comparison that makes cross-domain transfer happen.
Every note has a “connections” section. After writing up what I learned, I spend five minutes listing connections to other things I know. This is the analogical encoding step. “This is structurally similar to X because Y.” Most of these connections are bad. Some of them are not. The ratio doesn’t matter. The practice of looking for connections is what builds the structural retrieval capability over time.
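For what it’s worth, the concept-indexed store plus connections section can be modeled in a few lines. Everything here (the class, field names, example strings) is a hypothetical sketch of my paper-and-text-file system, not a real tool:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptNote:
    """One note per concept or pattern, not per source.

    Sources from any field accumulate under the concept they share;
    the connections list is the analogical-encoding step.
    """
    concept: str
    sources: list = field(default_factory=list)       # (field, reference) pairs
    connections: list = field(default_factory=list)   # "similar to X because Y"

notes = {}

def file_under_concept(concept, field_name, reference):
    """File a reading under its concept, creating the note if needed."""
    note = notes.setdefault(concept, ConceptNote(concept))
    note.sources.append((field_name, reference))
    return note

# Gentner and a transfer-learning paper land in the SAME note:
note = file_under_concept("how learned representations generalize",
                          "cognitive science", "Gentner, structure-mapping")
file_under_concept("how learned representations generalize",
                   "AI/ML", "transfer learning in neural networks")
note.connections.append("structurally similar: both index knowledge by "
                        "relational pattern, not by surface domain")
```

The point of the design is the key: filing by concept forces the comparison at write time, whereas filing by source (a “cognitive science” folder) defers it forever.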
Reading across fields has diminishing returns within a session but compounding returns across sessions. Reading three cognitive science papers in a row has declining marginal value by the third paper. Reading one cognitive science paper, then one AI paper, then one design paper has increasing marginal value because each one creates new connection opportunities with the previous ones. So daily reading sessions rotate across fields.
The most valuable reading is primary sources from unfamiliar fields. Blog posts summarizing research are fine for discovering things. But the actual papers (or textbooks, or technical documentation) contain the mechanistic detail that enables structural transfer. Reading a summary of AlphaFold2 tells you “they used attention mechanisms for protein structure prediction.” Reading the actual paper tells you how they encoded spatial relationships using invariant point attention, which is a geometric deep learning idea that has nothing to do with NLP-style attention, and that specific architectural insight is transferable to any problem involving 3D structure prediction.
The uncomfortable middle period is real. When you start reading in a new field, there’s a phase where you understand enough to be confused but not enough to be useful. This phase lasts months, sometimes years. The temptation is to retreat to your primary domain where everything makes sense. Resisting that temptation is the entire game. The value of cross-domain knowledge is precisely that it’s hard-won, which means most people won’t do it, which means the insights available at the intersection are under-exploited.
When specialization wins (because it does)
Everything I’ve said so far could be read as “generalists are better than specialists.” That’s not what the evidence says. The evidence says something more specific and more useful: generalists have an advantage in novel, wicked, complex problem spaces. Specialists have an advantage in well-defined, kind, repeatable problem spaces.
If you’re doing surgery, you want the surgeon who has done this exact procedure 3,000 times, not the one who has a fascinating cross-disciplinary perspective on the biomechanics of tissue. If you’re debugging a performance issue in a specific database, you want the DBA who has seen this class of problem a hundred times, not the generalist who thinks it might be “like a traffic flow problem.”
The domains where specialization dominates share certain features:
- Clear rules that don’t change
- Tight feedback loops (you know quickly if you’re right or wrong)
- Patterns that repeat reliably
- Well-defined evaluation criteria
- Low ambiguity about what “good” looks like
The domains where interdisciplinary thinking dominates share different features:
- Ambiguous or evolving rules
- Delayed, noisy, or misleading feedback
- Novel situations that don’t match previous patterns
- Multiple valid evaluation criteria that conflict
- High uncertainty about what “good” looks like
The strategic question isn’t “should I be a generalist or a specialist?” It’s “what kind of problems do I want to solve, and what profile optimizes for those problems?” If you’re drawn to problems at the intersection of fields, problems that nobody has a clean solution for, problems where the right framing is more valuable than the right execution, then the pi-shaped profile is the one to build.
And increasingly, those are the problems that matter most. The well-defined problems are the ones most susceptible to automation. If a problem has clear rules, tight feedback, and repeating patterns, an AI system will eventually solve it better than any human specialist. The problems that remain valuable for humans are precisely the wicked, ambiguous, cross-domain ones where framing and synthesis matter more than execution speed.
The integration problem: why reading broadly isn’t the same as thinking broadly
There’s a failure mode that afflicts a lot of self-described “interdisciplinary thinkers,” and I want to name it directly because I’ve fallen into it myself. The failure mode is collecting ideas from multiple domains without actually integrating them.
You can read cognitive science and AI and philosophy and never once find a connection between them. You can attend talks in three different departments and walk away with three separate sets of notes that never reference each other. You can describe yourself as “interdisciplinary” on your LinkedIn profile while doing work that draws on exactly one domain.
The difference between reading broadly and thinking broadly is active integration. It’s the deliberate search for structural parallels. It’s the willingness to take a concept from one field, strip away its domain-specific details, extract the abstract principle, and apply that principle to a problem in another field.
This is cognitively expensive. It requires holding two different conceptual frameworks in working memory simultaneously and finding the mapping between them. It’s much easier to just read one more paper in your primary field, where the concepts are already familiar and the connections are already mapped.
The people who actually produce cross-domain insights aren’t just broad readers. They’re active integrators. They ask, with almost annoying frequency: “What does this remind me of? Where have I seen this structure before? What would this look like if I translated it into domain X?”
Richard Feynman was famous for this. When he learned about a new phenomenon in any field, his first instinct was to try to derive it from first principles, which usually meant translating it into the conceptual framework of physics. Sometimes this worked brilliantly (his contributions to quantum electrodynamics). Sometimes it didn’t (his biological research was interesting but not paradigm-shifting). But the practice of always trying to translate across domains was what made his intuition so powerful in his primary field.
Building the practice: concrete steps
If this resonates, here’s how to actually build cross-domain thinking as a skill rather than an aspiration.
Pick two to three domains that are far enough apart to have genuinely different core abstractions. “Machine learning” and “statistics” are too close. “Machine learning” and “cognitive science” are better. “Machine learning” and “evolutionary biology” are very far apart and potentially very generative. The further apart, the harder the connections are to find, but the more valuable they are when you find them.
In your secondary domains, aim for textbook-level understanding, not survey-level. Read the actual introductory textbook. Do the exercises. Work through the math. You need to internalize the relational patterns of the field, not just its vocabulary. This takes months per domain. There’s no shortcut.
Keep a cross-domain connection log. Every time you notice a structural similarity between two domains, write it down. Most of these will be surface-level or wrong. That’s fine. The practice of looking for connections is what trains the pattern-matching system. Over time, the ratio of deep to surface connections improves.
Find people who work at intersections. The fastest way to develop cross-domain intuition is to talk with people who have already done the integration work. Not people who read broadly (lots of people read broadly), but people who have actually produced work that combines ideas from multiple fields. Ask them how they found the connections. Ask them what failed. Ask them which concepts from field A turned out to be most useful in field B, and why.
Present your work to audiences outside your primary domain. Nothing forces you to find the abstract structure of your own ideas like explaining them to someone who doesn’t share your domain-specific vocabulary. If you can explain a machine learning concept to a biologist in terms they find meaningful (not dumbed down, but translated into their framework), you’ve found the abstract structure.
Tolerate the discomfort of being a beginner. The biggest barrier to cross-domain learning is ego. If you’re an expert in one field, being a beginner in another field feels terrible. You’re used to understanding things quickly. Now everything takes longer. You make mistakes that would embarrass an undergraduate. The temptation to retreat to the comfort of your primary domain is strong. Resist it. The discomfort is the signal that you’re learning something genuinely new, which is the only way to build the kind of cross-domain knowledge that produces novel insights.
The meta-argument: why this matters now more than it ever has
There’s a reason I’m writing about this now, and it’s not because interdisciplinary thinking is a timeless virtue. It’s because the problems we’re facing in 2026 are more wicked, more cross-domain, and more novel than the problems of any previous decade.
AI alignment is not a computer science problem. It’s a problem that requires computer science, cognitive science, philosophy of mind, game theory, political science, and ethics, at minimum. Nobody trained in any one of those fields has the full picture.
Climate change is not an atmospheric science problem. It’s a problem that requires atmospheric science, economics, political science, materials science, behavioral psychology, and engineering. The atmospheric scientists can tell you what’s happening. They can’t tell you how to coordinate eight billion people to change their behavior.
Pandemic preparedness is not a virology problem. It’s a problem that requires virology, epidemiology, supply chain engineering, behavioral economics, public health communication, and political science. We learned this the hard way.
The common thread: the most consequential problems of our time sit at the intersection of multiple fields, and the institutions we’ve built (universities, companies, funding agencies, professional societies) are optimized for depth within fields, not synthesis across them.
This creates an opportunity for individuals who are willing to do the hard work of cross-domain integration. The institutional barriers are real, but they’re barriers for everyone. If you build the practice of cross-domain thinking while most people don’t, you’ll have access to insights and connections that are structurally invisible to specialists.
That’s not a guarantee of success. Cross-domain thinking is a necessary condition for solving wicked problems, not a sufficient one. You still need the discipline to go deep, the rigor to check your analogies, the humility to recognize when a connection is surface-level rather than structural. You need the fox’s breadth and the hedgehog’s depth, which is exactly the combination that most institutions make it hard to develop.
But the opportunity is real. The most interesting problems are the ones nobody owns, the ones that sit in the gaps between departments and disciplines and job titles. The people who solve those problems will be the ones who learned to think across boundaries, not because it was easy or rewarded, but because the problems demanded it.