If you are a CEO spending your Tuesday afternoon prompting a Large Language Model to pretend it’s the ghost of Steve Jobs or Andy Grove, you aren't "innovating." You are hallucinating.
Recent headlines have fawned over executives who claim that chatting with AI personas of dead business icons provided "game-changing advice." It’s a seductive narrative. It suggests that for the price of a monthly subscription, you can have a boardroom filled with the greatest minds in history. It's also complete nonsense.
The "advice" these CEOs are receiving isn't wisdom. It’s a statistical average of public relations data, filtered through a polite safety layer, and served back to them as a mirror of their own biases. When you ask an AI version of Elon Musk how to handle a product launch, you aren't getting Musk’s tactical genius; you’re getting a weighted probability of what his Twitter feed and biography authors said he might do.
The Midwit Trap of "Average" Wisdom
Large Language Models (LLMs) are, by construction, engines of consensus. They function by predicting the next most likely token in a sequence based on a massive corpus of data.
When you ask an AI clone for advice, the model navigates toward the center of the bell curve. It gives you the "most likely" response. True leadership, however, is found in the tails—the outliers. Steve Jobs didn't become Steve Jobs by doing what was statistically probable. He became an icon by making moves that were, at the time, statistically absurd.
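To make that concrete, here is a toy sketch in Python (no real model involved; the candidate phrases and their probabilities are invented purely for illustration) of what likelihood-weighted sampling does to advice: whatever sits at the center of the distribution drowns out the tails.

```python
# Toy illustration, not a real model: likelihood-weighted sampling collapses
# toward the consensus answer. The "advice" options and probabilities below
# are invented for demonstration only.
import random
from collections import Counter

# Hypothetical next-response distribution after "Steve, how should I handle this launch?"
advice = {
    "focus on the user experience": 0.55,        # the statistical center
    "iterate fast and ship": 0.30,
    "cancel the product line entirely": 0.10,
    "bet the company on an unproven idea": 0.05,  # the tail where the icon actually lived
}

def sample(dist):
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

draws = Counter(sample(advice) for _ in range(10_000))
print(draws.most_common())
# The platitudes dominate; the outlier moves barely appear.
```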
By following "AI clone advice," you are effectively outsourcing your strategy to a committee of the internet’s collective median. You are institutionalizing mediocrity. If the AI tells you to "focus on the user experience" or "iterate fast," it hasn't given you a breakthrough. It has given you a platitude.
The Feedback Loop of Narcissism
The danger isn't just that the advice is generic. The danger is that it’s addictive because it confirms what you already want to do.
In psychology, we call this confirmation bias. In the C-suite, we call it a disaster. When a CEO talks to a "Digital Jobs," they subconsciously frame their prompts to lead the model toward the answer they desire.
"Hey Steve, I’m thinking about cutting the R&D budget to save the quarterly earnings, but I want to keep the soul of the product. How would you handle this?"
The AI, designed to be helpful and follow instructions, will find a way to justify that path using "Jobs-ian" language. It will talk about "focus" and "saying no to a thousand things." The CEO walks away feeling empowered, thinking the ghost of Apple past just blessed their budget cuts. In reality, they just played a high-tech game of "Simon Says" with themselves.
Why Context is the Only Thing That Matters
Business strategy is 10% theory and 90% context.
An AI clone of Jack Welch doesn't know your debt-to-equity ratio. It doesn't know that your Head of Engineering is burnt out or that your primary competitor just signed a secret exclusivity deal with a supplier in Taiwan.
When the real Ben Horowitz gives advice, he is doing so based on the specific, messy, human variables of the moment. When a chatbot simulates Ben Horowitz, it is performing a vibe check. It lacks the "ground truth" of your specific market conditions.
I’ve watched companies burn through millions in venture capital because they prioritized "models" over "markets." They built products based on what a framework said should work, rather than looking at the raw, ugly data of customer behavior. Consulting a digital clone is the ultimate form of this delusion. It’s an attempt to find a shortcut to the hard work of thinking.
The Technical Reality: Compression is Not Comprehension
We need to stop using the word "intelligence" and start using the word "compression."
An LLM is a compressed version of the internet. When you "talk" to a clone of a business icon, you are interacting with a lossy compression of that person’s public persona.
$I(X;Y) = H(X) - H(X|Y)$
In information theory, the mutual information between two variables tells us how much knowing one reduces uncertainty about the other. The "X" here is the actual strategic brilliance of a dead CEO. The "Y" is the AI clone. Because the AI only has access to public data (books, interviews, speeches), the "H(X|Y)"—the remaining uncertainty—is massive. The most critical parts of a leader’s decision-making process are the parts that were never made public: the private fears, the gut instincts, the backroom deals, and the failures they covered up.
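As a back-of-the-envelope illustration (the joint distribution below is invented purely to show the arithmetic, not measured from anything real), here is that formula in code: when the clone's output is nearly independent of the private decision process, I(X;Y) collapses toward zero and H(X|Y) stays close to H(X).

```python
# Toy numbers, not a measurement: mutual information between a leader's actual
# decision process (X) and an AI clone built only from public data (Y),
# illustrating I(X;Y) = H(X) - H(X|Y).
from math import log2

# p[(x, y)]: probability that the real decision was x and the clone says y.
# The clone's output is nearly independent of the private reasoning.
p = {
    ("bold private bet", "platitude about focus"):   0.40,
    ("bold private bet", "contrarian-sounding tip"): 0.10,
    ("quiet compromise", "platitude about focus"):   0.35,
    ("quiet compromise", "contrarian-sounding tip"): 0.15,
}

px, py = {}, {}
for (x, y), pr in p.items():
    px[x] = px.get(x, 0.0) + pr
    py[y] = py.get(y, 0.0) + pr

H_X = -sum(pr * log2(pr) for pr in px.values())
I_XY = sum(pr * log2(pr / (px[x] * py[y])) for (x, y), pr in p.items())

print(f"H(X)   = {H_X:.2f} bits")         # uncertainty about the real decision
print(f"I(X;Y) = {I_XY:.3f} bits")        # what the clone actually tells you
print(f"H(X|Y) = {H_X - I_XY:.2f} bits")  # what you still don't know
```

With these made-up numbers, the clone resolves roughly 0.01 of the 1 bit of uncertainty about the real decision; almost all of H(X) survives as H(X|Y).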
The AI clone is a reconstructed output with a high loss rate. You are making bets on your company’s future based on a pixelated, low-resolution version of reality.
The Cult of the Persona
There is a deeper, more cynical reason why CEOs love this trend: it’s great for the ego.
It feels better to say "I consulted a digital twin of Peter Drucker" than to say "I Googled some management tips." It adds a veneer of tech-savviness to what is essentially a lonely executive looking for a sounding board.
If you need a sounding board, hire a coach who will tell you that you’re being an idiot. Hire a COO who disagrees with you. Buy a beer for a front-line employee and actually listen to them for twenty minutes. These are all infinitely more valuable—and more terrifying—than talking to a computer program that is literally programmed to please you.
Stop Prompting, Start Observing
The obsession with AI clones is a symptom of a broader trend: the flight from reality. We would rather talk to a ghost in the machine than face the cold, hard facts of our own balance sheets or the shifting preferences of our customers.
If you want to use AI in your business, stop using it as a therapist or a mentor. Use it for what it’s actually good at:
- Data synthesis: Analyzing 10,000 customer reviews to find the three things people actually hate (a rough sketch of this follows the list).
- Anomaly detection: Finding the one line in a 500-page contract that’s going to screw you in three years.
- Drafting: Getting a first version of a boring internal memo out of the way so you can do real work.
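To be concrete about the first item, here is a rough sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name, prompt, and the top_complaints helper are placeholders for illustration, not a recommendation.

```python
# Rough sketch of "data synthesis": batch-summarize real customer reviews
# instead of role-playing dead executives. Assumes the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY set in the environment; the model
# name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def top_complaints(reviews: list[str], batch_size: int = 200) -> str:
    """Ask the model to extract recurring complaints from raw review text."""
    sample = "\n".join(reviews[:batch_size])  # keep the prompt within context limits
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You extract recurring complaints from customer reviews. "
                        "List the three most common problems, each with a short quote."},
            {"role": "user", "content": sample},
        ],
    )
    return response.choices[0].message.content
```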
Everything else is theater.
The next time you see a headline about a CEO who "learned a valuable lesson" from a digital version of a billionaire, understand that you are looking at a person who has run out of ideas. They are looking for magic in a math equation.
True leadership cannot be automated because true leadership requires skin in the game. An AI clone of Jeff Bezos doesn't care if your company goes bankrupt. It doesn't feel the weight of the layoffs. It doesn't lose sleep over the ethical implications of a pivot.
If you can't make a decision without asking a chatbot what a dead guy would do, you aren't a CEO. You're a fanboy with a title.
Burn the digital idols. Look at your own data. Talk to your own people. Make your own mistakes. That is the only way to build something that isn't just an echo of the past.