by Shyamal Anadkat
work will re‑orient around what cannot be sped up by more compute. in other words, as general intelligence arrives, the work humans do will migrate to the handful of domains where an extra teraFLOP confers little or no advantage.
the thesis is simple: as the cost of exploring ideas approaches zero, human focus inevitably shifts to areas where raw computational power provides diminishing returns. the true bottleneck becomes judgment - when endless possibilities can be explored instantly, choosing what to build next becomes the essential skill. recognizing the difference between an output that’s merely interesting and one that’s genuinely valuable isn’t something easily automated; taste is profoundly underrated in technology.
we have a pattern for this. each major tech wave has been a collapse in the “cost of action”. steam freed us from animal muscle; labor moved from pulling plows to coordinating factories. electrification let us run those factories all night; value shifted from turning cranks to designing systems. the microprocessor made logic essentially free; software ate the world and the draftsperson became a CAD operator, the typesetter a UX designer. GPUs and now specialized AI accelerators took things like rendering, simulation, and gradient descent - once impractically slow - and made them commodities. each time it wasn’t just that we made existing tasks cheaper; we re‑priced entire labor markets and pushed attention to whatever remained scarce.
the curve under this is not only Moore’s doubling of transistors, now bumping up against atomic and speed‑of‑light limits, but Wright’s law: for every cumulative doubling of units produced, costs fall by a constant percentage. in practice, as we scale these systems, the cost per useful unit of intelligence is on a learning curve that hasn’t yet shown signs of bending. as the cost of trying collapses, the bottleneck becomes knowing what to try.
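to make the learning‑curve point concrete, here is a minimal sketch of Wright’s law in Python (the $100 first‑unit cost and the 0.8 progress ratio are made‑up illustrative numbers, not real pricing for compute or tokens):

```python
import math

# Wright's law: every cumulative doubling of units produced multiplies unit
# cost by a constant "progress ratio" (e.g. 0.8 = 20% cheaper per doubling).
# Equivalent closed form: cost(x) = first_unit_cost * x ** (-b).
def wrights_law_cost(cumulative_units: float, first_unit_cost: float, progress_ratio: float) -> float:
    b = -math.log2(progress_ratio)  # learning exponent
    return first_unit_cost * cumulative_units ** (-b)

# illustrative numbers only: first unit costs $100, costs fall 20% per doubling
for doublings in range(11):
    units = 2 ** doublings
    print(f"{units:>6} cumulative units -> ${wrights_law_cost(units, 100.0, 0.8):7.2f} per unit")
```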
not every loop in the economy compresses equally under compute. It’s useful to think of this as a gradient.
On one end are fully compressible loops: These are pure compute problems under uncertainty. Simulation and search tasks fall here. Run a million simulations to optimize a wing shape, search through protein folds, match supply and demand, flag fraud. The difference between running ten and ten million of these is cost, not possibility. Nothing about enumerating chess positions conceptually requires human slowness. This is the domain of brute‑force, of Monte Carlo, of embarrassingly parallel work. Give it FLOPs and it goes faster.
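a toy sketch of a fully compressible loop - here `simulate` is a hypothetical stand‑in for a wing simulation, a protein‑fold score, or a fraud model; because every trial is independent, the only difference between 10 and 10,000,000 runs is the compute bill:

```python
import random

def simulate(design: float) -> float:
    """Hypothetical scoring function for a candidate design (noisy, peaked near 3.2)."""
    return -(design - 3.2) ** 2 + random.gauss(0, 0.1)

def brute_force_search(n_trials: int) -> tuple[float, float]:
    """Sample n_trials random designs and keep the best.
    Trials share no state - embarrassingly parallel, so more FLOPs means more trials."""
    best_design, best_score = 0.0, float("-inf")
    for _ in range(n_trials):
        d = random.uniform(0.0, 10.0)
        s = simulate(d)
        if s > best_score:
            best_design, best_score = d, s
    return best_design, best_score

print(brute_force_search(10))         # cheap and rough
print(brute_force_search(1_000_000))  # same code, just more compute
```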
In the middle are partially compressible loops. Here we have creative or multi-step tasks that AI can assist with but not finish autonomously. creative synthesis - writing an essay, designing a logo, composing music - has elements that models are increasingly good at, though the outputs are hard to verify. multi‑step strategy and multi‑modal design that blend aesthetics, constraints, and human factors can be accelerated, but the last mile still requires taste. you can ask a model to draft 100 variations of a marketing campaign in ten seconds, but deciding which one actually fits your brand and won’t blow up on social media is slower. As Steve Jobs insisted, “Technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.”
On the other end are incompressible loops. these are tasks that run on human time and relationship dynamics, which no amount of computing power can speed up. trust between two people doesn’t form faster because your GPUs are cheaper. getting a teenager to care about anything, persuading a regulator to approve a new therapy, building a deep partnership in a fraught geopolitical climate - these run on human time. biology runs on biological time (as drug developers say, “nine women can’t make a baby in one month”); you can simulate all night, but the clinical trial still takes years. democracy and geopolitics have their own latency. There is a deep analogue in physics here: some systems are computationally irreducible (Wolfram) - the only way to know the state after n steps is to actually run n steps. Human development, cultural trust, and biological evolution are such systems. parallelism doesn’t let you skip ahead.
Thought experiment: hand GPT‑n to Shakespeare and to a room of bestselling authors. both can now “generate” infinite sonnets. only one will produce Hamlet. The scarce input wasn’t syntax; it was Shakespeare’s taste in deciding which output was worth keeping.
When everything that can be accelerated is, the scarce inputs become:
Taste and judgment. when the space of what can be built explodes because the cost of trying goes to zero, deciding what to build becomes the bottleneck. knowing which model output is merely plausible and which is actually good is not a commodity skill. people underrate taste in technology. It will be a defining advantage.
Relationship energy. persuasion, mentoring, conflict resolution, the intangible “vibes” that make a team cohere - more compute doesn’t make these go faster. you can prep better for a hard conversation with a model’s help, but the conversation still takes as long as it takes.
Frontier intuition. The people who invent new questions and knowledge, define new constraints, and wander out past the existing data distribution to find the next thing worth doing are operating where models are least helpful. Demis Hassabis highlights that many scientific advances will come from “collaboration between people and algorithms”, where AI finds patterns but humans decide which unexplored questions to pursue. The “edges of the map” of knowledge are where human curiosity and intuition lead the way.
Embodied work. robotics will improve, but we are still a long way from a general‑purpose machine that can navigate the messy physical world with human dexterity. Artisanship, caregiving, lab bench science, field work, cooking - these have their own rhythm and carry meaning precisely because they are not infinitely scalable.
a metaphor from optimization that i recently came across and that is worth holding onto: simulated annealing. If you cool a system too fast - if you try to lock in an answer without giving it time to explore - you get stuck in a local minimum. Good annealing schedules start hot (explore many possibilities cheaply) and then slow down (give time for taste to choose). The next decade could be an exercise in collective annealing. We will be able to explore the design space orders of magnitude faster. Will we cool too quickly and lock in bad taste? Or will we deliberately slow down where it matters?
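for the curious, a minimal simulated annealing sketch (toy objective and made‑up schedule, just to show the shape of the idea): stay hot long enough to explore, cool slowly enough that good choices have time to stick.

```python
import math
import random

def energy(x: float) -> float:
    """Toy bumpy landscape with many local minima."""
    return x ** 2 + 10 * math.sin(3 * x)

def anneal(t_start=10.0, t_end=1e-3, cooling=0.995, steps_per_temp=20):
    x = random.uniform(-10, 10)
    t = t_start
    while t > t_end:
        for _ in range(steps_per_temp):
            candidate = x + random.gauss(0, 1.0)
            delta = energy(candidate) - energy(x)
            # always accept improvements; while hot, sometimes accept regressions
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = candidate
        t *= cooling  # the schedule is the whole point: cool slowly
    return x, energy(x)

print(anneal())             # slow cooling: usually lands near the global minimum
print(anneal(cooling=0.5))  # quenched: often frozen into a local minimum
```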
If you’re building or running a company, this bifurcation will show up on your org chart. parts of your organization will become “compute‑native loops”: data cleaning, model training, code generation, A/B testing, logistics optimization. these will be run by AI with small, high‑leverage crews to set objectives, verify outputs, and handle exceptions. The rest will be “scarcity loops”: product management, brand, sales, partnerships, policy, engineering at the edge where the spec is not yet clear. Those will be staffed by humans with augmented leverage.
A new managerial meta‑skill will be orchestrating latency layers. You’ll have to know when to wait - when a slow process like trust‑building is on the critical path - and when to parallelize aggressively because your bottleneck is now compute. you’ll have to know when to hand something off to a model that can run a thousand iterations overnight and when to get five people in a room for an afternoon because no amount of GPU time will shortcut the discussion. This is systems thinking applied to time: design your organization like you would design a multi‑stage pipeline whose stages have different throughputs.
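one way to picture those latency layers - a back‑of‑the‑envelope sketch with made‑up throughput numbers, where the end‑to‑end rate is capped by the slowest stage no matter how fast the compute‑native stages get:

```python
# steady-state throughput of a pipeline is set by its slowest stage.
# made-up numbers: items each stage can process per week.
stages = {
    "model-generated drafts": 100_000,  # compute-native: scales with FLOPs
    "automated evals": 50_000,          # compute-native
    "human taste review": 40,           # scarcity loop: runs on human time
    "regulator sign-off": 2,            # incompressible: external latency
}

bottleneck = min(stages, key=stages.get)
print(f"end-to-end throughput ~ {stages[bottleneck]} items/week, limited by {bottleneck!r}")
# buying 10x more GPUs multiplies the first two lines, not the last two.
```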
Capital allocation will shift from headcount to compute budgets. a company that once spent 70% of its opex on people and 30% on everything else may invert that. talent allocation will shift from functional silos (“the marketing department”) to unsolved bottlenecks (“the place we’re waiting on FDA approval,” “the product we haven’t nailed because we don’t know what customers actually want”). the best people will roam to the constraint.
Education will have to change. Most formal education today is about teaching answers. In a world where you can ask a model almost anything and get a reasonable response, the value shifts to knowing what to ask, how to sense‑check an answer, how to integrate conflicting information, and how to reason morally at unprecedented speed. we need to rewrite curricula to cultivate promptable curiosity, sense‑making, and the ability to hold a view under uncertainty.
Career moats move from “I can do X” to “people trust me with X when it matters.” Credentials matter less when competence is cheap and verifiable by model. Trust and reputation matter more. the half‑life of skills will continue to shrink; continuous retraining is survival. The good news is that the tools to retrain will be in your pocket.
Policy will be hard. There is a potential divergence between nations abundant in compute and those whose economies lean on un‑accelerable, human‑intensive work like healthcare, elder care, and diplomacy. If you have all the GPUs and the ability to deploy them safely, you can compound wealth quickly. if you don’t, you will have an advantage in the un‑accelerable, but capital flows may not automatically find you. GPUs use energy and water; gigawatt‑scale data centers could compete with agriculture and cities for resources. We are trading one scarcity (labor) for another (cooling capacity, carbon budget).
What i find interesting is the concept of a “compute dividend” - a way to channel some of the economic surplus generated by cheap, abundant intelligence into funding for sectors where we want more human time spent: caregiving, education, basic science. it’s a variant of universal basic income tuned to the reality of how time gets repriced.
As @sama put it: “I wonder if the future looks something more like universal basic compute than universal basic income.”
Whether or not UBC becomes policy, it highlights a consensus: raw FLOPs (floating-point operations per second) are becoming cheap and abundant, but human wisdom about how to use those FLOPs is the new bottleneck.
We also need guardrails in sectors where speed itself amplifies risk. High‑frequency trading got us flash crashes when the loops were too fast for humans to intervene. Biology and cyber are now subject to tools that let you design and deploy at computer speed. We will need regulation, red‑teaming, and cultural norms to slow down where appropriate. not everything that can be made faster should be.
the playbook for thriving in this post‑acceleration economy looks different.
Lean into the un‑accelerable skills. Spend time cultivating empathy. Develop your taste. Build relationships over years. These are not “soft” skills; they are scarce skills.
Co‑train with your AI companion. use them to go faster at everything that can be sped up so you can invest your saved time in the spaces where speed stops. offload rote analysis so you can sit longer with messy, ambiguous problems. ask your model to draft ten ideas so you can spend your energy evaluating which resonates.
Embrace “slow” practices as competitive advantage. Deep reading, real‑world tinkering, walking without your phone, building something with your hands - all of these wire your brain in ways that large language models can’t replicate and that will make you better at the human parts of the job.
Thought experiment: you can now simulate a universe at arbitrary resolution. You can ask “what if” for a trillion scenarios. But you still have to pick a life to live. Compute can’t tell you which partner to love, which theory to devote a decade to, which poem to keep. That act of selection is taste. That act of selection is where meaning lives.
The punchline is not that work disappears. It relocates to the edges of compute’s capability. The next decade’s breakthroughs will come from people comfortable partnering with machines for speed and disciplined enough to take that saved time and invest it where it cannot be compressed. for all the talk of exponential curves and gentle singularities, the future of meaningful work may depend on our willingness to co-work with agents and to exercise taste where it counts.
Welcome to the age of taste, where human judgment, intuition, and knowing what to do become the scarcest inputs. Taste is our coarse‑graining function. It will decide which of the trillion possible futures we actually instantiate. Cultivate it.
tags: Startups - AI - Taste - Agents - AGI