The growing use of artificial intelligence (AI) in professional work raises an important question: even when AI supports decision-making, could it weaken certain professions in the long run? The author of this article argues that the benefits of AI tools do not come without hidden consequences, illustrating this through a chess experiment.
Teams in the chess experiment consisted of pairs made up of a strong AI and a weaker, human-like system. Surprisingly, the winning teams were not those with the most powerful AI, but those whose AI was best adapted to collaborating with its partner. The conclusion is that it is not enough for a system to be powerful; it must also be useful in cooperation with humans.
This also changes how we think about the “interpretability” of AI systems. It is not only important to understand the output an AI provides, but also to be able to act meaningfully based on it.
The author highlights two forms of skill erosion. The first is individual: people may lose skills over time if they rely too heavily on AI. For example, research has shown that after using AI tools, doctors’ ability to independently detect certain conditions declined. The second is collective and less visible: entire professions may gradually lose the ability to question their own goals if AI systems “embed” particular ways of thinking.
The problem is especially evident in situations that require judgment and ethical decision-making. When AI reduces every uncertainty to a number or a percentage, it falls short in cases that demand careful weighing, such as assessing risk in sensitive social contexts. Such decisions cannot be reduced to numbers alone.
Unlike chess, fields such as healthcare, law, and education do not operate under fixed rules; their goals are constantly reassessed and redefined. This creates a risk that AI, while useful to individuals, may narrow the space for professional debate.
For this reason, the author emphasizes that AI systems should not be designed solely for individual users. It is important to enable professional communities to participate in their development and adaptation, for example through clear feedback processes and procurement requirements.
In conclusion, developing AI tools that truly support people is not just a technical matter of efficiency; it is also about preserving how professions think, make decisions, and evolve.