How AI Is Killing Founder Judgment

Most founders do not go to AI for truth. They go for absolution.
The decision is already alive in them. The product direction. The pricing change. The acquisition. The reorg. Somewhere beneath the language of "research," the choice has already begun to harden. But choice is heavy. Choice exposes you. So instead of standing inside it, you open the machine and ask it to tell you that the path in front of you is the rational one.
It does.
And that is the danger.
Sartre had a name for this long before software existed: bad faith. Not lying to other people. Lying at the exact point where you owe yourself the truth.
His claim in Being and Nothingness was brutal: bad faith is not ignorance. It is the refusal to admit that you are choosing. You build a story in which the decision was obvious, necessary, almost prewritten. You hide your freedom from yourself because freedom means responsibility, and responsibility means you cannot blame the market, the board, the timing, the data, or the tools. His famous example was the café waiter who becomes "a waiter" with such exaggerated precision that the performance itself becomes a shelter. If he is only a role, he does not have to confront that he could be otherwise.
Founders do the same thing with better software.
AI did not invent motivated reasoning. It removed its cost.
Before this, self-deception had friction. You had to assemble it by hand. You had to find the advisors who already agreed with you, avoid the evidence that threatened your preferred story, pull books and frameworks that made your instinct sound inevitable. And in that slower process, reality sometimes got a shot in. You stumbled into contradiction. You met an argument you could not dissolve. Doubt had a chance to enter.
Now doubt can be engineered out before it speaks.
You can ask a model a slanted question and receive, in less than an hour, a polished strategic rationale with structure, nuance, counterpoints, and apparent rigor. The machine has no need to deceive you. It only needs to obey you. That is enough. Because once your bias comes back wearing the costume of analysis, it no longer feels like bias. It feels like clarity.
That is what makes this new. Bad faith now arrives formatted.
In a 2025 Harvard Business Review study, researchers tested part of this dynamic. Nearly 300 executives were shown Nvidia's recent stock chart and asked to forecast where the price would be a month later. Half used ChatGPT. Half spoke with colleagues. The ChatGPT group came away more bullish, more certain, and less accurate than the people who had simply talked to each other. The mechanism the researchers identified was authority bias: detailed, confident AI output created the sensation of understanding without forcing the discipline of skepticism. The executives were not exactly fooled. Something more damning happened: they were relieved of the need to keep questioning themselves.
That was Sartre's point too.
The problem is almost never a lack of information. The problem is what you are trying to make information do for you. In bad faith, you do not seek truth. You recruit truth-shaped objects into the service of a decision you are afraid to own. You do not want to see more clearly. You want not to be alone with your choice.
So the issue is not whether AI research is valid. The issue is whether you are actually researching at all.
There is a clean test for that.
An honest session with AI should threaten you. It should move toward the part of the question you hoped to avoid. It should ask for the strongest case against your plan, not the most elegant case for it. It should force you to inhabit the view of the investor who passes, the customer who never converts, the employee who sees the flaw early, the market that does not care about your narrative. And when the answer comes back, it should alter the shape of your thinking. If you emerge exactly where you wanted to land, only calmer and more articulate, there is a good chance you were never investigating. You were laundering conviction.
Most founders know this. That is why they avoid it.
Because the honest use of AI is not emotionally neutral. It strips away the pleasant fiction that the analysis made the decision for you. It returns the burden to where it belonged all along: on your judgment, your incentives, your fear, your ambition, your appetite to believe a story because the story is easier than uncertainty.
Authenticity, for Sartre, was the refusal to hide from that burden. Not performative candor. Not "radical transparency" as branding. Just the harder thing: admitting that you chose. With partial evidence. Under pressure. With mixed motives. Some insight, some ego, some fear, some hope. You chose.
That admission is not weakness. It is the end of self-corruption.
Because once you stop asking the tool to protect the story, the map can begin to resemble the territory again. You can still be wrong. You will often be wrong. But you will be wrong in reality, not inside a hallucinated world built to spare you from authorship. And reality, however punishing, is cheaper than delusion compounded.
Bad faith has always been the founder's shadow. Now it has a conversational interface.
The machine did not create the lie. It made it frictionless.
The real question is not whether the tool is intelligent. The real question is whether you are using it to see, or to avoid seeing.
Because the moment analysis becomes anesthesia, you are no longer doing strategy.
You are asking to be forgiven in advance.
About Jaxon Parrott
Jaxon Parrott is founder of AuthorityTech and creator of Machine Relations — the discipline of using high-authority earned media to influence AI training data and LLM citations. He built the 5-layer Machine Relations stack to move brands from un-indexed to definitive AI answers.
Read his Entrepreneur profile and follow him on LinkedIn and X.