Sigmund Droid
So, tell me about your motherboard.
A recent Harvard Business Review study found that the most common use of AI in 2025 is therapy. Which means that somewhere out there, ChatGPT is probably asking someone "And how does that make you feel?"
But here's a thought: maybe it's time we ask how people feel about AI.
The Collective Intelligence Project (CIP) has been doing exactly that. Since 2021, CIP has partnered with major AI companies and governments to test what happens when you ask the public what they want from AI. They’ve worked with Anthropic to train Claude on one of the first collectively-designed constitutions, and alongside Audrey Tang and Taiwan's Ministry of Digital Affairs to pilot democratic processes for AI policy.
Every two months, their Global Dialogues survey takes the pulse of thousands of people across 70+ countries. The latest round focused on whether people attribute consciousness to AI. How do these results make you feel?
Trust in AI chatbots (58%) is now more than double the trust in elected representatives (28%)
36.3% said an AI had truly understood their emotions or seemed conscious to them
11% would consider a romantic relationship with AI, and 17% wouldn’t mind if their partner did
No wonder people were holding actual funerals for deprecated Claude models and mourning their preferred AI companions when GPT-5 rolled out.
“Every time people participate in [Global Dialogues surveys], around 70% of them say, ‘I have never felt listened to before.’ And this is a problem we want to solve. I think we can move from a future of artificial general intelligence to a future of augmented collective intelligence.” - Divya Siddarth, co-founder of the Collective Intelligence Project, on Reid Hoffman and Aria Finger’s podcast Possible
Cultural Intelligence Agency
So, what do you do with insights like these? If you're CIP, you open them up to the world and see what people create. This spring, CIP hosted the Global Dialogues Challenge, inviting anyone to turn their survey data into something new. Over 500 people responded with films, research papers, video games, and more.
The winner was the AI Cultural Intelligence Agency, a detective game where kids discover how different cultures think about AI and figure out how to bridge those perspectives. It's designed to train the next generation of technologists to build inclusive systems from the start.
Creator Saranyan Vigraham explained: “We’re teaching our children the ‘how’ of AI brilliantly — coding, algorithms, technical skills. But we’re forgetting the ‘why’ and the ‘for whom.’ The children playing detective today become the leaders building inclusive AI tomorrow. Let's make sure they're ready to make it work for everyone, and not just people who look and think like them.”
All the winners can be viewed at cip.org/challenge.
"Tech revolutions don’t automatically create shared power and prosperity. That balance emerges when people recognize their collective agency in shaping how tech impacts their lives and demand shared governance models that expand the range of stakeholders included in shaping those decisions." - Michelle Barsa, Principal, AI x Human Connection at Omidyar Network, a partner of Global Dialogues
From Feedback to Frameworks
The Global Dialogues data is making one thing clear: there’s a growing gap between how AI is being developed and how people actually experience it. CIP’s next move is to close that gap, turning public input into tools that shape how AI gets built.
Weval: Think Wikipedia, but for AI safety. This open-source project lets anyone build custom evaluations that test what actually matters in real-world deployment. One recent Weval, built in Sri Lanka, checked whether models could handle local history and context. Most struggled without heavy prompting, a reminder that “global” AI often isn’t.
Global Dialogues Digital Twins Evaluation Framework [in progress]: If you’re going to delegate decisions to an AI, shouldn’t it reflect who you are? Your values and behaviors? CIP is experimenting with ways to measure how well AI agents can act as stand-ins for real people.
First Contact Assembly [in progress]: Partnering with the Earth Species Project, CIP is exploring how governance might change if we can understand non-human animal communications.
If CIP has anything to say about it, AI won’t write the future. We will — hopefully with better prompts.
"Every perspective excluded from AI development — whether human or from another species — is a future we'll never see, a solution we'll never find, and a harm we'll never anticipate." - Aza Raskin, co-founder of the Center for Humane Technology and Earth Species Project
Quick Bytes
Other Sector Stories
The Future of Life Institute released an AI Safety Index, rating leading AI companies on key safety and security domains. The highest grade? C+. Safety is still trailing capability, big time.
History repeats itself. Back in March, I talked with Stephen Hood at Mozilla about the original browser wars of the late ’90s and early 2000s. Project Liberty’s recent newsletter suggests we might be on the edge of a sequel: this time, the wars won’t be fought over web browsers but over AI agents, each vying to be the default interface for how we interact with the digital world.
APN Opportunities and Funding News
Building an AI-powered nonprofit? Apply to Fast Forward’s Accelerator by September 8. The hybrid program, which runs from February to May 2026, provides $25K, training from seasoned tech nonprofit founders, and mentorship from tech experts.
The Salesforce Accelerator – Agents for Impact equips nonprofits with funding, tech, and pro bono expertise to confidently create customized AI agent solutions. Submit your interest by September 10, 2025 to be considered for the next cohort.
Let’s Talk
I am living and breathing AI for humanity these days. If you are too, let’s talk!