Choose your reading length: a short version, a medium version, and the full version of this essay follow.
Short version

When I think about what AI does for me, I keep returning to a German word: Handlungsimpulsverstärker — an action-impulse amplifier.
Here’s the underlying idea. We like to believe that our actions begin with thought — that we reason our way toward decisions. But the sequence runs the other way. A stimulus triggers an emotion. The emotion generates an impulse toward action. Only then does conscious thought arrive, not as the origin of the impulse, but as its rationalizer and modulator. Thought can shape the impulse, refine it, even abort it — but it cannot truly initiate it. The impulse is already underway.
This has consequences for what we build. Before AI, translating an impulse into reality required sustained motivation across days, weeks, months. The cognitive load of figuring out each step, holding the structure in mind, debugging, iterating — all of this demanded fuel. Many impulses died not because they were bad ideas, but because the distance between impulse and completion was too great.
AI changes that distance. When I work with an AI assistant, I offload cognitive scaffolding. I can reason at a higher level of abstraction because I’m not consumed by implementation details. The impulse travels further before exhausting its motivational fuel.
This is what I mean by Handlungsimpulsverstärker: AI doesn’t create new desires or replace human judgment. It amplifies the reach of impulses that were already there, allowing more of them to become real.
Further reading: Daniel Kahneman’s Thinking, Fast and Slow (2011) offers the foundational framework for understanding how intuitive, automatic cognition (System 1) operates before deliberate thought (System 2) can intervene — and why this matters for how we work with tools that process patterns faster than we consciously can.
Medium version

When I think about what AI does for me, I keep returning to a German word: Handlungsimpulsverstärker — an action-impulse amplifier.
Here’s the underlying idea. We like to believe that our actions begin with thought — that we reason our way toward decisions. But the sequence runs the other way. A stimulus triggers an emotion. The emotion generates an impulse toward action. Only then does conscious thought arrive, not as the origin of the impulse, but as its rationalizer and modulator. Thought can shape the impulse, refine it, even abort it — but it cannot truly initiate it. The impulse is already underway.
Daniel Kahneman’s Thinking, Fast and Slow captures this with his distinction between System 1 and System 2. System 1 is fast, automatic, intuitive — it operates before we’re consciously aware. System 2 is slow, deliberate, effortful — the thinking we notice ourselves doing. We like to imagine System 2 is in charge. But System 2 is largely a reviewer of proposals that System 1 has already generated. It can veto, but it rarely originates.
This isn’t just a conceptual model — it’s neurologically observable. In the 1980s, Benjamin Libet measured the timing of brain activity during voluntary movement. He found that the brain’s “readiness potential” — the neural preparation for action — begins approximately 550 milliseconds before movement. But conscious awareness of the decision to move only emerges around 200 milliseconds before movement. The brain has already committed to a direction before we experience ourselves as deciding. What we call “conscious will” arrives after the fact, with just enough time to modulate or abort — but not to originate.
This has consequences for what we build. Before AI, translating an impulse into reality required sustained motivation across days, weeks, months. The cognitive load of figuring out each step, holding the structure in mind, debugging, iterating — all of this demanded fuel. Many impulses died not because they were bad ideas, but because the distance between impulse and completion was too great. The motivational energy depleted before the work was done.
AI changes that distance. When I work with an AI assistant, I offload cognitive scaffolding. I can think at a higher level of abstraction because the implementation details are no longer mine alone to hold. The impulse travels further before exhausting its motivational fuel.
Consider a concrete example. I’m currently building a browser-based Mars viewer — a way for anyone to stand on the surface, look around, and feel the awe that Carl Sagan imagined when he spoke of browsing another world with a child. Without AI, this impulse would demand weeks of sustained effort: coordinate systems, panoramic libraries, transformation pipelines. Each step an opportunity for the impulse to die. With AI, I can iterate through architectural decisions and offload the cognitive scaffolding. The impulse travels further because the path shortened.
This is what I mean by Handlungsimpulsverstärker: AI doesn’t create new desires or replace human judgment. It amplifies the reach of impulses that were already there, allowing more of them to become real. The motivation was always mine. AI just made it go further.
Further reading:
Daniel Kahneman, Thinking, Fast and Slow (2011) — The foundational framework for understanding how intuitive cognition (System 1) operates before deliberate thought (System 2) can intervene.
Benjamin Libet et al., “Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act,” Brain 106 (1983): 623–642 — The landmark study demonstrating that neural preparation for action precedes conscious awareness of the decision to act.
Full version

When I think about what AI does for me, I keep returning to a German word: Handlungsimpulsverstärker — an action-impulse amplifier.
Here’s the underlying idea. We like to believe that our actions begin with thought — that we reason our way toward decisions. But the sequence runs the other way. A stimulus triggers an emotion. The emotion generates an impulse toward action. Only then does conscious thought arrive, not as the origin of the impulse, but as its rationalizer and modulator. Thought can shape the impulse, refine it, even abort it — but it cannot truly initiate it. The impulse is already underway.
Daniel Kahneman’s Thinking, Fast and Slow offers a useful shorthand for this with his distinction between System 1 and System 2. System 1 is fast, automatic, intuitive — it operates before we’re consciously aware. System 2 is slow, deliberate, effortful — the thinking we notice ourselves doing. But this binary can obscure the richness beneath it. What Kahneman calls "System 1" is not one thing — it includes bodily sensation, emotional response, and intuitive pattern-recognition, each with its own character. System 2, the rational layer, is real but limited: it arrives late to the process, reviewing proposals already in motion. We like to imagine it is in charge. More often, it is a latecomer offering commentary.
This isn’t just a conceptual model — it’s neurologically observable. In the 1980s, Benjamin Libet measured the timing of brain activity during voluntary movement. He found that the brain’s "readiness potential" — the neural preparation for action — begins approximately 550 milliseconds before movement. But conscious awareness of the decision to move only emerges around 200 milliseconds before movement. The brain has already committed to a direction before we experience ourselves as deciding. What we call "conscious will" arrives after the fact, with just enough time to modulate or abort — but not to originate.
Libet himself saw this not as an argument against free will, but as a reframing of it. Our freedom lies not in initiating action from pure reason, but in the power of veto — the ability to stop an impulse that has already begun. This is a more modest but perhaps more honest picture of human agency. It does not mean that the impulse is somehow "not us" and the veto is the "real self." The emotional, the intuitive, and the rational are all equally part of who we are — they simply arrive in consciousness at different times after a stimulus. Rational thought is not the origin; it is a late participant in a process already underway.
Yet there is a cost to editing. Aborting an impulse creates friction: the emotional driver remains, and only the rational counterforce opposes it. Sustaining that opposition is effortful and depleting, which is why relying heavily on veto power is psychologically expensive — rational effort tires long before the emotional driver does. The healthier path, when possible, is not to abort impulses but to shape the conditions that generate them in the first place.
This has consequences for what we build. Before AI, translating an impulse into reality required sustained motivation across days, weeks, months. The cognitive load of figuring out each step, holding the structure in mind, debugging, iterating — all of this demanded fuel. Many impulses died not because they were bad ideas, but because the distance between impulse and completion was too great. The motivational energy depleted before the work was done.
AI changes that distance. When I work with an AI assistant, I offload cognitive scaffolding. I can think at a higher level of abstraction because the implementation details are no longer mine alone to hold. The impulse travels further before exhausting its motivational fuel.
This is where the philosophy of mind becomes practical. Andy Clark and David Chalmers, in their 1998 paper "The Extended Mind," argued that cognition does not stop at the skull. When we use a notebook to store information we would otherwise hold in memory, that notebook becomes part of our cognitive system. The same applies to a calculator, a diagram, a well-organized workspace. If a process in the external world functions the way a cognitive process would if it happened in the head, then it is part of cognition — just distributed beyond the brain.
AI fits this pattern, but with a difference. A notebook stores; AI processes. When I work with an AI assistant, I’m not just extending my memory — I’m extending my capacity to reason at scale. The assistant holds context I would otherwise lose, explores possibilities I would otherwise miss, and returns structured outputs I would otherwise have to construct piece by piece. It functions like a junior colleague: less experienced than me, but capable of exceptional results when mentored well. The collaboration is genuinely cognitive — not in the sense that the AI is conscious, but in the sense that together we form a system that thinks better than either of us alone.
I would go further: what we call "artificial intelligence" is better understood as artificial intuition. Large language models are pattern-matchers of extraordinary refinement. They operate the way intuition does — fast, automatic, drawing on vast accumulated experience to produce responses that feel intelligent. But they lack the deliberate, self-correcting quality of rational thought. And they lack entirely the deeper layers: bodily sensation, emotional resonance, the mammalian inheritance that grounds human judgment in something felt. Their intuition is powerful but untethered — which is why, with the right structural guidance from a human, they can accomplish remarkable things, but without that guidance, they drift.
This is the collaboration: I provide the structure, the goal, the quality check. The AI provides the intuition, the pattern-completion, the tireless generation of possibilities. Together, we cover more ground than I could alone — not because the AI replaces my thinking, but because it amplifies my capacity to act on what I already wanted to do.
Consider a concrete example. I’m currently building a browser-based Mars viewer — a way for anyone to stand on the surface, look around, and feel the awe that Carl Sagan imagined when he spoke of browsing another world with a child. Without AI, this impulse would demand weeks of sustained effort: coordinate systems, panoramic libraries, transformation pipelines. Each step an opportunity for the impulse to die. With AI, I can iterate through architectural decisions and offload the cognitive scaffolding. The impulse travels further because the path shortened.
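One small piece of such a pipeline is turning a viewing direction (azimuth and elevation) into the 3D vector a renderer can use. A minimal sketch in Python, with a hypothetical function name and axis conventions of my own choosing, not code from the actual project:

```python
import math

def look_direction(azimuth_deg: float, elevation_deg: float) -> tuple[float, float, float]:
    """Unit vector for a camera standing on a planetary surface.

    Hypothetical conventions for illustration: azimuth is measured
    clockwise from north, elevation upward from the horizon, and the
    axes are x = east, y = up, z = north.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = math.cos(el) * math.sin(az)  # east component
    y = math.sin(el)                 # up component
    z = math.cos(el) * math.cos(az)  # north component
    return (x, y, z)
```

A viewer would feed a vector like this into its camera each frame; dozens of such small, unglamorous transformations are exactly the scaffolding an assistant can take off your hands.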
There is also a second mode of amplification. In the Mars project, AI extends my reach — I can build more with less. But when I write, AI amplifies the strength of my communication. Writing to reach a broad audience is a skill I find difficult to develop alone. I know what I mean, but translating that into words that land for others requires an intuition about how people read, what they expect, where they lose the thread. AI has that intuition. It has absorbed more writing than I will ever read. When I collaborate with it, my point lands more powerfully — not because the AI supplies the ideas, but because it helps me find the form that carries them.
This is what I mean by Handlungsimpulsverstärker: AI doesn’t create new desires or replace human judgment. It amplifies the reach of impulses that were already there, allowing more of them to become real. The motivation was always mine. AI just made it go further.
Further reading:
Daniel Kahneman, Thinking, Fast and Slow (2011) — The foundational framework for understanding how intuitive cognition (System 1) operates before deliberate thought (System 2) can intervene.
Benjamin Libet et al., "Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act," Brain 106 (1983): 623–642 — The landmark study demonstrating that neural preparation for action precedes conscious awareness of the decision to act.
Andy Clark and David Chalmers, "The Extended Mind," Analysis 58, no. 1 (1998): 7–19 — The philosophical argument that cognitive processes extend beyond the brain into tools and environment. Available at: https://consc.net/papers/extended.html
For an accessible discussion of how large language models mirror System 1 cognition — rapid pattern recognition without deliberate reasoning — see the growing literature on AI and dual-process theory.