The Courage to Think: Our Greatest Bottleneck in the Age of AI
The year 2025 will stand as the year many people began to see a segmentation in their workplaces. The boundaries of work suddenly got a lot blurrier. Some people started producing massive amounts of code, some started generating outputs you would never have expected from them, and some started asking bigger questions than they had ever asked before. Our tools changed, like an unlock in our society, and we are culturally shifting to take advantage of it. Underlying this change is one thing I believe affects everything in the AI-fueled era, and it is what I want to write about: self esteem.
In particular, I argue that the companies that will thrive in the next decade of AI will be those most able to concentrate and scale self esteem within their teams.
But first, let's define a term. As a technologist, I have an aversion to ill-defined emotional terms to which no reason can be applied, and I won't subject you to that pain. What is self esteem? I take my definition from the psychologist Nathaniel Branden:
Self-esteem is the disposition to experience oneself as being competent to cope with the basic challenges of life and of being worthy of happiness. It is confidence in the efficacy of our mind, in our ability to think. By extension, it is confidence in our ability to learn, make appropriate choices and decisions, and respond effectively to change. It is also the experience that success, achievement, fulfillment—happiness—are right and natural for us.
Two things stand out from this definition:
- Belief in the capability to think
- Belief in being worthy of the happiness that comes from the achievement of values
Now, I'll be the first to tell you that self esteem is not automatic. People struggle with it their whole lives in myriad ways. But why does it matter more now than ever?
Historically, work has always had economic requirements. Work has always benefited from those who bring full consciousness to it, but different eras have capitalized on the mind in different ways. The industrial revolution that first exploited the assembly line did not economically require masses of thinkers. Building a car benefited enormously from physical workers who were efficient and sustainable at their work, usually a single task. When information technology became a full force in society, demand made a new class of worker economically sustainable: the knowledge worker. This worker, while less manual, was far more mentally focused on a particular domain. A programmer was seen much like a factory unto themselves, whose outputs could be estimated, and what was required of them was primarily to be conscious of the quality of their work and of new innovations that could make their job more profitable. It was never essential (though often beneficial) for them to step far outside their domain.
What has changed in the era of AI is that the barriers to the ways people can provide economic value have all been lowered. An investment in a skill like programming historically took a very long time to produce even minimal outputs; the same was true of art, professional writing, and so on. These outputs all have value, and we are now all more capable than ever of supplying them. But as in any supply chain, bottlenecks exist. The bottlenecks of the AI tools themselves are a study of their own; the more crucial bottleneck comes before them: getting humans to actually choose to use the AI.
Making any kind of choice is first a mental action. Whether it is what we eat, which programming languages we learn, or whether we work with quality or not, AI has given us all more choices in our lives. The argument that "other people's required skills are the bottleneck" is evaporating, and what is left are people who choose to act and people who do not.
Weeding through AI responses and artifacts is not easy. In a world where objective reality outside of prompts still matters, it requires several traits: critical thinking, the willingness to verify, and the endurance to sift through an endless stream of data.
But here is where self esteem becomes the linchpin. Critical thinking presupposes something: that you believe your mind is capable of discerning truth from falsehood. The willingness to verify presupposes that you believe your judgment matters. The endurance to persist through complexity presupposes that you believe the outcome—your success, your achievement—is worth pursuing.
A person with low self esteem, confronted with AI's capabilities, faces a particular kind of paralysis. They may see the tool's output and think: "This is probably better than what I could produce." They defer. They accept. They become, in essence, a relay station for machine-generated content rather than a curator and director of it. The tragedy is not that they use AI—it's that they abdicate the very mental function that would make AI useful.
Contrast this with the person of high self esteem. They approach AI as what it actually is: a powerful tool that extends human capability but does not replace human judgment. They read an AI's output with the same critical eye they would apply to a junior colleague's draft. They ask: Does this make sense? Is this accurate? Does this serve my actual purpose? They are willing to reject, revise, and redirect—because they trust their own capacity to know what good looks like.
The Manager's New Challenge
For those who lead teams, this shift creates an unprecedented challenge. In prior eras, a manager could reasonably assess competence through credentials, through observable outputs over time, through the steady accumulation of domain expertise. A senior programmer had earned their seniority through years of writing code, debugging systems, and understanding the accumulated wisdom of their craft.
Now, a junior employee with high self esteem and an AI subscription can produce outputs that superficially rival the senior's work. The question is no longer "can this person produce the artifact?" but rather "can this person judge the artifact?" And judgment—real judgment—is invisible until it isn't. It reveals itself in the subtle decisions: what to include, what to discard, when to trust and when to verify.
This means that hiring, developing, and retaining talent now requires an uncomfortable kind of assessment. You are no longer just evaluating skills. You are evaluating something closer to character—specifically, whether a person possesses the psychological foundation to wield powerful tools responsibly.
The Organizational Implications
Companies that understand this will begin to select for self esteem, even if they don't name it as such. They will notice that their highest performers share certain traits: they ask hard questions about AI outputs rather than accepting them wholesale; they take ownership of results even when the AI did the heavy lifting; they treat mistakes as data rather than as evidence of their inadequacy.
Companies that fail to understand this will find themselves in a peculiar hell. They will have teams equipped with the most powerful tools in human history, using them to produce mediocre or actively harmful outputs—because no one on the team possesses the psychological capacity to challenge what the machine produces. The AI becomes a crutch rather than a lever. The organization becomes an elaborate system for laundering machine output through human signatures.
Worse, these organizations will likely struggle to diagnose the problem. The outputs will look professional. The velocity will seem impressive. But the quality of judgment embedded in those outputs will be absent, and this absence will compound over time in ways that are difficult to trace back to their source.
The Individual Imperative
For the individual worker, the implications are equally stark. Your value in the market is migrating. It is moving away from "can you do X?" toward "can you think about X?" The premium is shifting toward those who can identify what problem actually needs solving, whether a proposed solution actually solves it, and what second-order effects might emerge from implementation.
These are not skills in the traditional sense. They are capacities—capacities that flow from a particular psychological posture toward oneself and toward reality. They flow from self esteem.
This is not to say that technical skills become worthless. Quite the opposite: deep domain knowledge becomes more valuable because it provides the foundation for judgment. The programmer who truly understands systems architecture can evaluate AI-generated code in ways a novice cannot. But that domain knowledge must be activated by a willingness to think, to challenge, to assert one's own conclusions against the machine's confident-sounding outputs.
Building Self Esteem in Teams
If self esteem is indeed the critical variable, then leaders face a practical question: can it be cultivated?
The answer, I believe, is a qualified yes—but not through the methods most organizations currently employ. Self esteem is not built through praise, through affirmations, through reducing the difficulty of tasks to guarantee success. These approaches, well-intentioned as they may be, often accomplish the opposite: they communicate to people that they cannot handle reality as it actually is.
Self esteem is built through the experience of efficacy. It is built when people face genuine challenges and discover—through their own effort and thought—that they can overcome them. It is built when people make mistakes, take responsibility, learn, and improve. It is built when the environment rewards honest engagement with reality rather than the appearance of competence.
For organizations, this suggests several principles:
First, create environments where questioning is safe and expected. If employees are punished for challenging AI outputs—or for challenging any outputs—they will learn to suppress their judgment. The self esteem muscle atrophies when it isn't used.
Second, hold people accountable for outcomes, not just outputs. When a person submits AI-generated work, they should understand that they are staking their reputation on its quality. This is not cruelty; it is respect. It communicates that their judgment matters.
Third, invest in the kind of learning that builds genuine competence. Surface-level familiarity with AI tools is not enough. People need deep enough knowledge in their domains to evaluate what the tools produce. This kind of learning is harder and slower than prompt engineering tutorials, but it is what creates durable value.
Fourth, model the behavior you wish to see. Leaders who visibly think, question, and sometimes reject AI outputs—while also embracing AI's genuine benefits—demonstrate that this posture is valued and expected.
The Philosophical Stakes
I want to end where Branden's definition took us: the belief that happiness, achievement, and fulfillment are right and natural for us.
This is not a trivial component of self esteem. It is, in many ways, the most important. A person can believe in their own competence while simultaneously believing they don't deserve the fruits of that competence. This is a particular kind of self-sabotage, and it is distressingly common.
In the AI era, this manifests in subtle ways. People who feel undeserving of success may use AI to produce impressive outputs while internally discounting those achievements because "the AI did it." They may hesitate to claim credit, to advocate for themselves, to push for bigger opportunities—because at some level, they have not integrated the understanding that their judgment, their direction, their choices were what made the AI useful.
Companies cannot directly address this psychological pattern, but they can avoid reinforcing it. They can create cultures where the human role in AI-assisted work is acknowledged and valued. They can remind people that the machine has no purpose, no values, no judgment—that these come only from the humans who wield it.
Conclusion
The segmentation we will witness in the years to come is not primarily a technical segmentation. It is not the case that some people have better AI tools than others, or even that some people have more AI skills than others. The segmentation is psychological.
On one side stand people who believe in their capacity to think and their worthiness to succeed. They approach AI as a lever for their judgment, a multiplier for their will. On the other side stand people who have, for whatever combination of reasons, not developed this belief. They approach AI as a replacement for the thinking they were never confident they could do, and in doing so, they hollow out the very capability that would make them valuable.
The organizations that win in this era will be those that understand this distinction and act on it. They will select for self esteem. They will cultivate it. They will build cultures that demand the exercise of judgment rather than its surrender.
This is not soft wisdom. This is not inspirational fodder for keynote speeches. This is the hardest kind of organizational work there is, because it requires confronting questions about human nature that we have historically been content to ignore.
AI has forced our hand. The question of what humans are actually for, in economic terms, is no longer avoidable. And the answer, I submit, is this: we are here to think. Those who believe they can, will. Those who believe they are worthy of the results will claim them.