Asking Good Questions in an Age of AI

Every so often I am asked to participate in a survey fielded by Elon University's Center for Imagining the Digital Future. As you might expect, this year's survey focuses on the impact of AI, and includes this prompt:

  • If you do think it is likely that AI systems will begin to play a much more significant role in shaping our decisions, work and daily lives: How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?

It's rare that a survey asks its respondents to actually write something cogent and long form, so I figured I'd publish my response here. I'd be curious to hear your thoughts! If you'd like to participate, the link to the survey is here.

--

The keys to engaging with and learning from information systems such as AI are similar to those we encountered with the rise of search (i.e., Google) and the broader World Wide Web. In short, we must prize the formation of high-quality questions, and the ability to critically evaluate and act upon machine-generated responses to those questions.

This statement presumes that society revises the approach of its academic institutions - particularly early schooling - toward teaching critical thinking, with an emphasis on the values that drive scientific methodology. In short, critical thinking becomes foundational in an age of AI. Those with a highly developed sense of rational inquiry will prosper in a world of ambient artificial intelligence. We already see this playing out: the most fruitful applications of AI are found in medicine, finance, and other research-intensive fields.

Beyond critical thinking, another crucial action we must take is to intelligently regulate digital systems (AI-driven platforms in particular) to encourage a distributed architecture of power and control over data and ownership rights. The prevailing architecture of today's commercial Internet cedes most power, control, and leverage over data to corporate interests (companies like Meta, Google, Apple, Amazon, Netflix, et al.). Through complicated and opaque terms of service and related policies, these companies produce, store, and leverage consumer data in a centralized architecture that delivers digital services back to the edge but retains power and control at the center. A central question of the AI era will be whether power and control can migrate to the edge.

Another way of thinking about this issue is by asking this question: Who does the AI ultimately work for? Is it controlled by the end user, or is the AI ultimately controlled by a centralized platform like OpenAI, Google, or Meta? 

The "surveillance capitalism" model developed over the past 25 years of Internet history is currently shaping the business and product decisions of AI-first companies. Whether that model continues to prevail will have immense implications for the kind of society we live in five to ten years from now. Regulatory frameworks that move data provenance and ownership rights to the edge of the network - to users - could unleash extraordinary innovation and flourishing in our economy. But maintaining the status quo will concentrate power and profit in the hands of the few, portending significant societal rupture.

You can follow whatever I’m doing next by signing up for my site newsletter here. Thanks for reading.

