
    To The Prompter Go The Spoils

    December 26, 2025
    I have noticed an interesting pattern in business and government: we only acknowledge a problem once it comes into focus. And when problems do come into focus, they are often treated as new and novel; we privilege new information over existing information, as if it adds more value to the conversation simply by virtue of being new. But when looking to the future, the past has far more value than we care to admit. The "modern" problems of AI aren't actually new; they are simply the latest chapter in a very old book, one where we can finally apply what we have learnt. We've been chewing over the "how" and the "why" of machine intelligence for centuries, building a foundation of first principles that many people are simply not aware of.

    Consider the timeline:

    - **16th century:** The Golem of Prague enters our folklore. It is one of the first real debates on machine intelligence: a powerful, man-made entity created to serve, but one that lacks a soul or moral compass. It was a warning that a tool without strict parameters eventually becomes a threat to its creator. Note how similar this sounds to Mary Shelley's *Frankenstein*.
    - **1843:** Ada Lovelace publishes her notes on Babbage's Analytical Engine, containing what is widely regarded as the first computer program. She wasn't just building a tool; she was mapping out the logical steps (loops, conditionals, memory management) for a machine that didn't even exist yet. She understood the logic of the problem before the hardware existed.
    - **1942:** Isaac Asimov introduces the Three Laws of Robotics in "Runaround", later collected in *I, Robot*: effectively the first manual for AI ethics. A sci-fi author, writing before the first electronic computers, was already trying to solve the problem of how to keep an "autonomous" mind aligned with human safety.

    Why does this matter now? Because you do yourself a disservice by ignoring the thoughts others have already had, thoughts unbiased by the AI mania of the past few years.
    These pioneers weren't focused on the "machine"; they were focused on the logical problem space, and on how to navigate the concept of AI without being drawn into the model, breakthrough, or framework of the week.

    To bring the conversation into the here and now, let's look at two conflicting metaphors that capture our current relationship with machine intelligence.

    In Douglas Adams' *The Hitchhiker's Guide to the Galaxy*, the supercomputer Deep Thought is built to calculate the "Answer to Life, the Universe, and Everything." After 7.5 million years of processing, it delivers the answer: **"42."** When its creators respond in confusion, the machine replies simply: *"I think the problem, to be quite honest with you, is that you've never actually known what the question is."*

    Contrast this with the **Infinite Monkey Theorem**: the idea that given enough monkeys on typewriters and infinite time, they will eventually produce the complete works of Shakespeare.

    Here lies the friction: **Deep Thought** represents a failure of **definition**. It possesses infinite resources, but the output is useless because the prompter cannot frame the problem. **The monkeys** represent a failure of **intent**. They have a clear goal (Shakespeare) but no methodology. They are simply agents of chaos, hitting keys until probability leads to an answer.

    It strikes me that this is exactly where we are trapped today. We are repeating the mistake of Deep Thought's creators: we have been handed a distillation of all human knowledge, yet we haven't spent the time to understand the question. Users feel disillusioned because they are asking vague, undefined questions of a probabilistic engine. They are receiving "42" or typewriter garbage in return and blaming the machine, rather than their own inability to use the system.
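    The monkey half of that friction is easy to quantify. A minimal back-of-envelope sketch (the target phrase and the 27-key alphabet of lowercase letters plus space are my assumptions, purely for illustration):

    ```python
    # Infinite Monkey Theorem, back of the envelope: a monkey hitting
    # keys uniformly at random on a 27-key typewriter (a-z plus space).
    target = "to be or not to be"   # 18 characters, chosen for illustration
    alphabet_size = 27

    # Probability that one random 18-character burst matches exactly,
    # and the expected number of bursts before the first match.
    p_match = alphabet_size ** -float(len(target))
    expected_tries = alphabet_size ** len(target)

    print(f"chance per attempt: {p_match:.1e}")
    print(f"expected attempts:  {expected_tries:.1e}")
    ```

    Eighteen characters of Shakespeare already demand on the order of 10^25 attempts; chaos without methodology does not scale.
    
    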
    Currently, the majority uses AI for surface-level chores: *"Fix this spreadsheet," "Summarize this email," "Research this topic."* These are valuable, but they barely scratch the surface. They treat AI as a magic box rather than what it actually is: an **amplifier of your cognitive ability and logical capability.** If you aren't sharpening the mind behind the machine (the same logical mind Ada Lovelace was using in the 1840s), you will be replaced. Not by AI, but by a human who has learned to use it as a lever.

    In a world where AI executes tasks faster than you ever could, where does your value lie? Many suggest moving "up" to become an "orchestrator," but they rarely explain the *how*. The "how" is rooted in the transition from an artisan of your craft to an operator of a machine. In the Industrial Revolution, we moved from manual craft to machine operation. The machine performed the labour, but the operator had to know what "good" looked like. If the operator didn't understand the parameters of a high-quality textile, the machine simply produced high-speed trash.

    This is the danger of our current corporate strategy. By "top-loading" companies with experienced staff while neglecting the junior pipeline, we are destroying the training ground where people learn what "good" actually looks like. If you haven't spent time in the low-level execution trenches, you lack the context to manage the AI's output. You can't be a manager if you don't understand the work you are managing.

    Current AI is essentially the world's most sophisticated version of **autocorrect.** It predicts the most likely next word, but it does not understand your intent, your company's culture, or the wider context of your industry. It is a Golem: it will do exactly what you tell it to do, even if what you told it to do is a mistake. Your job is no longer to learn *execution*; it is to learn *judgment*.
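    To make the "sophisticated autocorrect" point concrete, here is a deliberately tiny sketch (the corpus and the `next_word` helper are my inventions, not how production models work): a bigram counter that always emits the most frequent continuation, with no notion of intent or context.

    ```python
    # Toy "autocorrect": predict the next word purely from how often
    # each word followed each other word in a tiny training text.
    from collections import Counter, defaultdict

    corpus = "the machine does what you tell the machine to do".split()

    # Count which word follows which.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(word):
        # Greedy: take the single most common continuation. There is no
        # understanding here, only frequency.
        return following[word].most_common(1)[0][0]

    print(next_word("the"))  # "machine" - the most frequent continuation
    ```

    Real models are vastly more capable, but the shape of the operation is the same: the prompter supplies all of the intent.
    
    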
    Start asking questions like:

    - "If I execute this task this way, what does it mean for the wider system?"
    - "Can I look at this AI output and identify its relative quality?"

    If you cannot understand the problem space at this level, you aren't "managing" AI; you are gambling that the AI knows what it is doing, which we know it doesn't. AI amplifies both your strengths and your failures. If you don't know where your blind spots are, you can assume the AI is currently amplifying them.

    To the prompter go the spoils. AI isn't coming for your job, but the person who understands how to define problem spaces and what good looks like is. The race is on.
    · · ·