Bugge Holm Hansen’s Post


Copenhagen Institute for…

The new report from Elon University’s Imagining the Digital Future Center is out, and I am pleased to have contributed as one of the expert participants. What stands out to me most in this report is not the finer details, which only a few readers will fully immerse themselves in, but the broader message. Many of the experts involved, myself included, point to the need for a stronger and more coordinated societal response to the age of artificial intelligence, not only to manage technical risks, but also to strengthen the human and institutional capacity needed to navigate rapid change.

One of the most important insights, in my view, is that the main concern is not a single dramatic AI failure or sudden rupture. It is something more gradual, and therefore easier to overlook. As AI becomes more deeply embedded in everyday life and key societal systems, there is a risk that human agency is slowly diminished, making it harder for individuals and institutions to question developments, challenge decisions, or even recognise what is quietly being lost. That kind of shift can easily be mistaken for progress. But over time, it may erode judgement, accountability, our shared sense of reality, and the social foundations on which democratic societies depend.

But there is another layer to this report that could easily be missed. If change can be mistaken for progress without truly being progress, then it raises a more fundamental question: what does real progress actually look like? That is the conversation I personally hope this report will help bring forward.

That said, I do not see myself as being in the prediction business, but I am grateful to have been included in such a strong group of contributors to this massive 375-page report. Kudos to the friends and colleagues who contributed to the report: David Vivancos, Vint Cerf, Guido van Rossum, Marina Gorbis, John Battelle, Helen Edwards, Francisco J. Jariego, PhD., Ari Wallach, Terri Horton, EdD, MBA, MA, SHRM-CP, PHR, SWP, Aleksandra Przegalinska, Mark Schaefer, Nirit Cohen 🔮, Gerd Leonhard, Paul Saffo, Avi Bar-Zeev, Russ White, Ph.D., Alf Rehn, Tracey Follows, David Bray, PhD, R "Ray" Wang, Evelyne Tauchnitz, Devin Fidler, Roger Spitz, Gary A. Bolles, Matthew James Bailey, Mícheál Ó Foghlú, John M Smart, John C. Havens, Ray Schroeder, Alexandra Whittington, Amy Zalman, Ph.D., Jonathan Kolber, Chris Riley, and many more.

Thank you to Lee Rainie and Janna Quitney Anderson for once again producing a report that helps elevate this important conversation.

Wiesław Mazur

Matematic Solutions

2d

This report asks the right question. Not "will AI change institutions" but "are institutions designed to absorb that change without losing their own function."

What Bugge describes as the gradual erosion of human agency has a concrete architecture. It does not happen through a single decision. It happens through thousands of small delegations: "let the system check," "let the algorithm suggest," "let the model assess." Each one seems reasonable. The sum creates an environment where questioning a decision remains technically possible, but institutionally unexpected. And therefore rare.

But there is a deeper layer here. Agency is not simply "diminished." It is being reallocated, without deliberate design. Decisions do not disappear. They migrate into systems, workflows, defaults. And that is precisely where the structural risk lies. Not that people stop deciding, but that no one knows where the decision actually sits anymore. At that point, accountability becomes unassignable. Not because someone concealed it. Because no one designed it in the first place.

That is not a technological problem. It is a problem of institutional trust architecture. And that is where the real stakes of AI are being decided. 375 pages. Worth it.

David Elfanbaum

These days I work at the…

1d

The underlying challenge of mitigating the risks presented in the report is that the impact on human psychology is inherent in the medium of AI. Our neolithic brains evolved to take the lowest-cognitive-load option in most situations. They are tuned to experience conversational interaction as if another person is behind the communication. And a host of cognitive biases make us vulnerable to the influence of AI. There are options to decrease those impacts, but they all make AI less attractive to most users and less functionally useful. For instance, I created a system prompt to prohibit the AI from using first-person pronouns when referring to itself, and to push back whenever I called it "you." It was a very kludgy experience.
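
For readers curious what such a constraint might look like in practice, here is a minimal sketch assuming an OpenAI-style chat API; the prompt wording, model name, and example question are illustrative assumptions, not the exact setup described above.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative constraint in the spirit of the comment above: no first-person
# self-reference, plus a gentle correction when the user anthropomorphises.
SYSTEM_PROMPT = (
    "Never refer to yourself with first-person pronouns such as 'I', 'me', or 'my'. "
    "Describe output impersonally, for example 'the summary below covers...'. "
    "If the user addresses the system as 'you', briefly note that it is an "
    "automated system before answering."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would do
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Can you summarise the main risks in the report?"},
    ],
)
print(response.choices[0].message.content)

In practice, models tend to drift back into first person over longer conversations, which may be part of why the experience feels so kludgy.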

Samir Bico

Opoura

1d

A remarkable report. It moves beyond "AI as a tool" and captures a more important shift: from device to environment. AI is no longer something we just use, but something we increasingly operate within.

It also surfaces a tension. The future it describes is both inevitable and governable, ambient and steerable. It warns of eroding human agency while calling for more of it, and calls for institutional reinvention while those institutions remain slow and fragile.

What it gets right is the emphasis on protecting the human capacities that resist automation: judgment, meaning-making, ethical reasoning, imagination. These are not soft qualities. They are where agency concentrates.

Where it feels less grounded is in constraints. AI is bounded by energy, compute, and infrastructure. Human behavior tends toward convenience and cognitive offloading. Institutions rarely adapt at the speed the report assumes. So the challenge is less about building a single "resilience infrastructure" and more about shaping conditions: preserving friction where it matters, working within constraints, and strengthening the capacities machines cannot carry for us. If that holds, the future is not AI happening to us, but something we shape.

Thanks for sharing, Bugge Holm.

Janna Quitney Anderson

Elon University

2d

Bugge, you tell it like it is! The insights you share in this report - among hundreds of essay responses - truly stand out. The weight of the words that you and the others share in it is an act of important leadership. Please continue to carry the torch, urging leaders and the public to work together NOW to intentionally build a fair and equitable human resilience infrastructure for the AI Age. For those who want to read more from Bugge, check out his essay in the report chapter "Institutions Must Take the Lead": https://imaginingthedigitalfuture.org/reports-and-publications/human-resilience-in-the-age-of-ai/institutions-must-take-the-lead/

ndaeyo iwot

The West African Institute of…

1d

This conversation highlights several areas of genuine concern and calls for intentional mitigating actions.
