Asking Good Questions in an Age of AI
Every so often I am asked to participate in a survey fielded by Elon University's Center for Imagining the Digital Future. As you might expect, this year's survey focuses on the impact of AI, and includes this prompt:
- If you do think it is likely that AI systems will begin to play a much more significant role in shaping our decisions, work and daily lives: How might individuals and societies embrace, resist and/or struggle with such transformative change? As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? What practices and resources will enable resilience? What actions must we take right now to reinforce human and systems resilience? What new vulnerabilities might arise and what new coping strategies are important to teach and nurture?
It's rare that a survey asks its respondents to actually write something cogent and long form, so I figured I'd publish my response here. I'd be curious to hear your thoughts! If you'd like to participate, the link to the survey is here.
--
The keys to engaging with and learning from information systems such as AI are similar to those we encountered with the rise of search (i.e., Google) and the broader world wide web. In short, we must prize the formation of high-quality questions, and the ability to critically evaluate and act upon machine-generated responses to those questions.
This statement presumes that society revises the approach of its academic institutions - particularly early schooling - to teach critical thinking, with particular emphasis on the values that drive scientific methodology. In short, critical thinking becomes foundational in an age of AI. Those with a highly developed sense of rational inquiry will prosper in a world of ambient artificial intelligence. We already see this playing out: the most fruitful applications of AI are found in medical, financial, and other research-intensive fields.
Beyond critical thinking, another crucial action we must take is to intelligently regulate digital systems (AI-driven platforms in particular) to encourage a distributed architecture of power and control as it relates to data and ownership rights. The prevailing architecture of today's commercial Internet cedes most power, control, and leverage over data to corporate interests (companies like Meta, Google, Apple, Amazon, Netflix, et al.). Through complicated and opaque terms of service and related policies, these companies produce, store, and leverage consumer data in a centralized architecture that delivers digital services back to the edge, but retains power and control at the center. A central question of the AI era will be whether power and control migrate to the edge.
Another way of thinking about this issue is by asking this question: Who does the AI ultimately work for? Is it controlled by the end user, or is the AI ultimately controlled by a centralized platform like OpenAI, Google, or Meta?
The “surveillance capitalism” model developed over the past 25 years of Internet history is currently shaping the business and product decisions of AI-first companies. Whether that model continues to prevail will have immense implications for the kind of society we live in 5-10 years from now. Regulatory frameworks that encourage the movement of data provenance and ownership rights to the edge of the network - to users - could unleash exponential innovation and flourishing in our economy. But maintaining the status quo will concentrate power and profit in the hands of the few, portending significant societal rupture in the future.
—
One of my favorite LinkedIn posters, Mitko Vasilev, is an expert in local AI, and usually signs off his posts with: “Make sure you own your AI. AI in the cloud is not aligned with you; it’s aligned with the company that owns it.” I think these are wise words, even if you use the big systems. People should work to have the hardware to run their own local AI, even at a modest scale, and use it to experiment with HuggingFace models on their own. I think this will become more popular.