I recently reacted, rather negatively (1), to Yuval Harari's speech at Davos 2026 (2). My reaction was not quite on the points it should have been. In this blog I reflect on what I should have said: that the real #AI debate is about governance, power, and responsibility, not apocalypse. This reflection continues the line of argument developed in my recent book "The AI Paradox" (3), where I explore why the future of AI depends less on superintelligence and more on human cooperation and institutional design. Read the blog here: https://lnkd.in/dUnCxSSZ. I welcome your comments. #AI #AIfutures #responsibleAI #AIparadox #davos2026 #AIgovernance #AIapocalypse (1) https://lnkd.in/drvfxvD8 (2) https://lnkd.in/da-MUdai (3) https://lnkd.in/eywK_m7x
Like any technology, "AI" in itself does not "do" anything. We do not have to "adapt to it". People and organizations make choices and take (or do not take) decisions. The diffusion and adoption of AI technologies should remain a means, not an end in itself. And the debate should indeed focus more on the ends…
You are so right, governance and responsibility, not apocalypse.
I would not worry. Yuval has no credentials in AI or product development, and offers no solutions for mankind. He is a talker. Stick to your path and don't worry about this type of folks. People don't listen to them; they want a voice and solutions. I have never seen loving kindness in his talks, or actions feeding the poor. I would happily debate him in front of a public audience.
The solution to the key AI problem you pose, "Concentration of AI capability in a small number of corporations and states poses immediate democratic questions," is the same as the one needed to prevent loss of control over AI. So there is no competition between the risks.
Governance, not apocalypse. Three words that should frame every AI policy discussion this year. The institutional design point resonates - we pour resources into debating what AI might become while underinvesting in the structures that decide how it is deployed today.
Fully agree (IMO the knife analogy is not successful); we need responsible and ethical use with proper checks and balances.
Interesting, thank you! 🙏 Seems we're onto something with Enterprise-wide AI Risk Management® (EW-AiRM®) 🤔💪
Thanks. These are important points. I especially liked the section on "Human intelligence emerges from cooperation, shared norms, moral development, and social embedding."
Fully agree. Just like inequality is a product of human decisions rather than an economic necessity, human choices will shape whether AI technologies will be a blessing or a curse. But as long as greed and moral bankruptcy prevail over responsibility, it is hard to be optimistic. AI technologies are not the problem. Humans are. As always.