shyamal's space


25 June 2025

Age of Taste: where the cost of trying is ~zero, but knowing what to do is everything

by Shyamal Anadkat


Image: Age of Taste (shyamal x gpt-4o)

work will re‑orient around what cannot be sped up by more compute. in other words, with general intelligence, the work humans do will migrate to the handful of domains where an extra teraFLOP confers little or no advantage.

the thesis is simple: as the cost of exploring ideas approaches zero, human focus inevitably shifts to areas where raw computational power provides diminishing returns. the true bottleneck becomes judgment - when endless possibilities can be explored instantly, choosing what to build next becomes the essential skill. recognizing the difference between an output that’s merely interesting and one that’s genuinely valuable isn’t something easily automated; taste is profoundly underrated in technology.

we have a pattern for this. each major tech wave has been a collapse in the “cost of action”. steam freed us from animal muscle; labor moved from pulling plows to coordinating factories. electrification let us run those factories all night; value shifted from turning cranks to designing systems. the microprocessor made logic essentially free; software ate the world and the draftsperson became a CAD operator, the typesetter a UX designer. GPUs and now specialized AI accelerators took things like rendering, simulation, and gradient descent - once impractically slow - and made them commodities. each time it wasn’t just that we made existing tasks cheaper; we re‑priced entire labor markets and pushed attention to whatever remained scarce.

the curve under this is not only Moore’s doubling of transistors, now bumping up against atomic and speed‑of‑light limits, but Wright’s: for every cumulative doubling of units produced, costs fall by a constant percentage. in practice, as we scale these systems, the cost per useful unit of intelligence is on a learning curve that hasn’t yet shown signs of bending. as the cost of trying collapses, the bottleneck becomes knowing what to try.

Figure: Wright's law

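a minimal sketch of what that learning curve implies - the first‑unit cost and the 20% learning rate below are made‑up numbers, purely illustrative:

```python
# Wright's law: each cumulative doubling of units produced cuts unit cost
# by a constant percentage. With progress ratio p (cost multiplier per
# doubling), the cost of the nth unit is C(n) = C(1) * n ** log2(p).
import math

def unit_cost(n, first_unit_cost, progress_ratio):
    """Cost of the nth cumulative unit under Wright's law."""
    exponent = math.log2(progress_ratio)  # negative, since progress_ratio < 1
    return first_unit_cost * n ** exponent

# Hypothetical numbers: $1.00 per "unit of intelligence" at unit 1,
# 20% cost reduction per doubling (progress ratio 0.8).
for n in [1, 2, 4, 8, 1_000_000]:
    print(f"unit {n:>9,}: ${unit_cost(n, 1.00, 0.8):.4f}")
```

nothing about the real cost curve is claimed here; the point is only the shape - costs fall as a power law in cumulative volume, not on a calendar.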

not every loop in the economy compresses equally under compute. It’s useful to think about a gradient.

Figure: Loops

Thought experiment: hand GPT‑n to Shakespeare and to a room of bestselling authors. both can now “generate” infinite sonnets, but only one of them will produce Hamlet. The scarce input wasn’t syntax; it was Shakespeare’s taste in deciding which output was worth keeping.

Taste, Trust, Intuition, Dexterity

When everything that can be accelerated is, the scarce inputs become taste, trust, intuition, and dexterity.

a metaphor from optimization that i recently came across is worth holding: simulated annealing. If you cool a system too fast - if you try to lock in an answer without giving it time to explore - you get stuck in a local minimum. Good annealing schedules start hot (explore many possibilities cheaply) and then slow down (give time for taste to choose). The next decade could be an exercise in collective annealing. We will be able to explore the design space orders of magnitude faster. Will we cool too quickly and lock in bad taste? Or will we deliberately slow down where it matters?
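if you haven’t met the algorithm, here’s a toy version of the idea - the bumpy objective function and the cooling schedule below are arbitrary stand‑ins, not anything load‑bearing:

```python
# Toy simulated annealing on a 1-D function with many local minima.
# Start hot (accept lots of worse moves, i.e. explore), cool slowly
# (become picky, i.e. let "taste" lock in an answer). Cool too fast
# and you freeze into whatever valley you happen to be standing in.
import math
import random

def objective(x):
    # A bumpy landscape: one global minimum, several local ones.
    return x * x + 10 * math.sin(3 * x)

def anneal(steps=20_000, start_temp=10.0, cooling=0.9995):
    x = random.uniform(-10, 10)
    best_x, best_val = x, objective(x)
    temp = start_temp
    for _ in range(steps):
        candidate = x + random.gauss(0, 1)
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if objective(x) < best_val:
                best_x, best_val = x, objective(x)
        temp *= cooling  # the annealing schedule: slow, deliberate cooling
    return best_x, best_val

print(anneal())
```

set `cooling` close to 0 and the search freezes almost immediately in whatever valley it started in - the algorithmic version of locking in bad taste.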

Organizing for a post-acceleration world

If you’re building or running a company, this bifurcation will show up on your org chart. parts of your organization will become “compute‑native loops”: data cleaning, model training, code generation, A/B testing, logistics optimization. these will be run by AI with small, high‑leverage crews to set objectives, verify outputs, and handle exceptions. The rest will be “scarcity loops”: product management, brand, sales, partnerships, policy, engineering at the edge where the spec is not yet clear. Those will be staffed by humans with augmented leverage.

A new managerial meta‑skill will be orchestrating latency layers. You’ll have to know when to wait - when a slow process like trust‑building is on the critical path - and when to parallelize aggressively because your bottleneck is now compute. you’ll have to know when to hand something off to a model that can run a thousand iterations overnight and when to get five people in a room for an afternoon because no amount of GPU time will shortcut the discussion. This is systems thinking applied to time: design your organization like you would design a multi‑stage pipeline whose stages have different throughputs.
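a crude way to make that concrete - the stages and throughput numbers below are invented for illustration:

```python
# Treat the org as a multi-stage pipeline. Steady-state throughput is set by
# the slowest stage, so that's the only stage where extra effort pays off.
# Stage names and rates are hypothetical, purely to illustrate the point.
stages = {
    "generate options (model, overnight)": 1000.0,  # items per week
    "verify & evaluate (small crew)":       120.0,
    "decide what ships (taste)":             15.0,
    "build trust with customers":             3.0,
}

bottleneck = min(stages, key=stages.get)
throughput = stages[bottleneck]
print(f"pipeline throughput: {throughput:.0f} items/week")
print(f"bottleneck: {bottleneck!r} - speeding up any other stage changes nothing")
```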

Capital allocation will shift from headcount to compute budgets. a company that once spent 70% of its opex on people and 30% on everything else may invert that. talent allocation will shift from functional silos (“the marketing department”) to unsolved bottlenecks (“the place we’re waiting on FDA approval,” “the product we haven’t nailed because we don’t know what customers actually want”). the best people will roam to the constraint.

Education will have to change. Most formal education today is about teaching answers. In a world where you can ask a model almost anything and get a reasonable response, the value shifts to knowing what to ask, how to sense‑check an answer, how to integrate conflicting information, and how to reason morally at unprecedented speed. we need to rewrite curricula to cultivate promptable curiosity, sense‑making, and the ability to hold a view under uncertainty.

Career moats move from “I can do X” to “people trust me with X when it matters.” Credentials matter less when competence is cheap and verifiable by model. Trust and reputation matter more. the half‑life of skills will continue to shrink; continuous retraining is survival. The good news is that the tools to retrain will be in your pocket.

Policy will be hard. There is a potential divergence between nations abundant in compute and those whose economies lean on un‑accelerable, human‑intensive work like healthcare, elder care, and diplomacy. If you have all the GPUs and the ability to deploy them safely, you can compound wealth quickly. if you don’t, you will have an advantage in the un‑accelerable, but capital flows may not automatically find you. GPUs use energy and water; gigawatt‑scale data centers could compete with agriculture and cities for resources. We are trading one scarcity (labor) for another (cooling capacity, carbon budget).

What i find interesting is the concept of a “compute dividend” - a way to channel some of the economic surplus generated by cheap, abundant intelligence into funding for sectors where we want more human time spent: caregiving, education, basic science. it’s a variant of universal basic income tuned to this new reality and the repricing of time.

@sama: “I wonder if the future looks something more like universal basic compute than universal basic income”

Whether or not UBC becomes policy, it highlights a consensus: raw FLOPs (floating-point operations per second) are becoming cheap and abundant, but human wisdom about how to use those FLOPs is the new bottleneck.

We also need guardrails in sectors where speed itself amplifies risk. High‑frequency trading got us flash crashes when the loops were too fast for humans to intervene. Biology and cyber are now subject to tools that let you design and deploy at computer speed. We will need regulation, red‑teaming, and cultural norms to slow down where appropriate. not everything that can be made faster should be.

Playbook for the taste economy

the playbook for thriving in this post‑acceleration economy looks different.

Lean into the un‑accelerable skills. Spend time cultivating empathy. Develop your taste. Build relationships over years. These are not “soft” skills; they are scarce skills.

Co‑train with your AI companion. use them to go faster at everything that can be sped up so you can invest your saved time in the spaces where speed stops. offload rote analysis so you can sit longer with messy, ambiguous problems. ask your model to draft ten ideas so you can spend your energy evaluating which resonates.

Embrace “slow” practices as competitive advantage. Deep reading, real‑world tinkering, walking without your phone, building something with your hands - all of these wire your brain in ways that large language models can’t replicate and that will make you better at the human parts of the job.

Thought experiment: you can now simulate a universe at arbitrary resolution. You can ask “what if” for a trillion scenarios. But you still have to pick a life to live. Compute can’t tell you which partner to love, which theory to devote a decade to, which poem to keep. That act of selection is taste. That act of selection is where meaning lives.

The punchline is not that work disappears; it relocates to the edges of compute’s capability. The next decade’s breakthroughs will come from people comfortable partnering with machines for speed and disciplined enough to take that saved time and invest it where it cannot be compressed. for all the talk of exponential curves and gentle singularities, the future of meaningful work may depend on our willingness to co-work with agents and exercise taste where it counts.

Welcome to the age of taste, where human judgment, intuition, and knowing what to do become the scarcest inputs. Taste is our coarse‑graining function. It will decide which of the trillion possible futures we actually instantiate. Cultivate it.

tags: Startups - AI - Taste - Agents - AGI