Integrating AI into citizen science for health: risks, rewards, and strategies
Featured image credit: Citizen Science for Health
This blog post recaps Pietro Michelucci’s keynote talk from the Citizen Science for Health 2025 Conference, held in November in Zurich, Switzerland. Ravish Dussaruth, Maggie Lane, Luca Michelucci, Caroline Nickerson, and Gretė Vaičaitytė also contributed to the development of the talk.
Pietro Michelucci, Executive Director of the Human Computation Institute, kicked off the conference as keynote speaker, with a talk titled “Integrating AI into Citizen Science for Health: Risks, Rewards, and Strategies.”
The recording is below.
Summary
He began by reminding the audience just how extraordinary the human brain is.
“It is a massive predictive engine and extremely efficient,” Pietro said. “Supercomputers with equivalent processing power require about 20 megawatts of power, while the human brain does the same computation with only 20 watts — a marvel of evolution.”
As amazing as the brain is, it’s more focused on keeping us alive than on analytic tasks like adding two numbers together. “When it comes to logic or arithmetic, the human brain doesn’t do as well. That’s why we created computing machines in the first place,” said Pietro.
Pietro revealed an AI Timeline, tracing how artificial intelligence has evolved from what he called “old AI” — rigid, rule-based systems — to “new AI,” represented by today’s large language models like ChatGPT.
According to Pietro, old AI was about machines mediating processes with humans in the loop, which means that AI systems rely on human judgment at key points. This includes the frameworks of microtasking (computers distributing small tasks to people), workflows (software orchestrating sequences of human decisions), and ecosystems (open platforms where people collaborate at scale).
We can see these shifts in our own projects, and Stall Catchers is a prime example. Stall Catchers is a project in which 140,000+ volunteers help Cornell University researchers identify capillary stalling in the brains of mice in order to advance Alzheimer’s disease research.
“Because of this crowd effort, Cornell’s [Alzheimer’s] research now moves about five times faster,” he noted. “What once took six months can now take a month or two.”
But we wanted to take it further! You might remember our Crowdbots study, in which our partners at DrivenData held a machine learning competition with Stall Catchers data.
“We discovered something interesting,” Pietro shared. “If you looked at groups of humans, their collective answer was better than any single bot. But when we looked at hybrid cohorts — humans and bots working together — we got an even better answer. That suggested a new kind of complementarity, a new kind of diversity of perspectives that was actually beneficial.”
The Crowdbots Study provided evidence for Pietro’s argument: the future of citizen science for health depends on this human–AI complementarity.
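The intuition behind hybrid cohorts can be sketched as a simple voting ensemble: pool human and bot judgments and take the majority answer. The snippet below is a toy illustration with made-up votes and accuracy numbers (it is not the actual Stall Catchers or Crowdbots pipeline), but it shows how bots can compensate for a case the human crowd gets wrong:

```python
# Toy illustration of hybrid human-bot crowds (hypothetical data,
# not the actual Stall Catchers / Crowdbots analysis).
# Each annotator labels a video as 1 ("stalled") or 0 ("flowing").
from statistics import mean

def crowd_answer(votes, threshold=0.5):
    """Aggregate binary votes into a single crowd answer by majority."""
    return 1 if mean(votes) >= threshold else 0

# Hypothetical ground truth for five videos
truth = [1, 0, 1, 1, 0]

# Simulated votes: humans and bots err on different videos
human_votes = [[1, 1, 0], [0, 0, 1], [1, 0, 0], [1, 1, 1], [0, 1, 0]]
bot_votes   = [[1, 0],    [0, 0],    [1, 1],    [1, 1],    [0, 0]]

def accuracy(vote_sets):
    answers = [crowd_answer(v) for v in vote_sets]
    return sum(a == t for a, t in zip(answers, truth)) / len(truth)

humans_only = accuracy(human_votes)
hybrid = accuracy([h + b for h, b in zip(human_votes, bot_votes)])
print(f"humans only: {humans_only:.0%}, hybrid: {hybrid:.0%}")
# → humans only: 80%, hybrid: 100%
```

On the third video the human crowd narrowly votes wrong, but adding the bots’ votes flips the pooled answer back to correct; errors made for different reasons cancel out, which is the complementarity Pietro described.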
But AI still has gaps, and Pietro identified at least five of them:
- Physical grounding – AI doesn’t have a model of the real world. Pietro showed an image of a generative failure – an old woman with fingers coming out of her face. It can produce pixels consistent with text but inconsistent with lived experience.
- Causal reasoning – Pietro told ChatGPT, “I put a rain sensor in my yard to turn off sprinklers when it gets wet—how much water can I save in Phoenix?” It said, “enough to fill a swimming pool.” But it missed that when the sprinkler turns on, it rains on the sensor, and the sensor turns the sprinkler off.
- Active abstraction – Pietro asked it to cluster the 20 brightest stars by proximity to each other. It grouped them by their apparent positions in the night sky, not by actual distance in space, and only corrected course when he clarified what he meant.
- Episodic memory – It doesn’t remember context. You can tell it you’re taking warfarin in the morning, and later it’ll tell you to take ibuprofen, which is bad for people on warfarin, because it forgot your history.
- Metacognition – AI doesn’t reflect on its own recent thinking.
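The causal-reasoning gap in the sprinkler example is easy to miss in prose but obvious in simulation. This is my own toy model, not from the talk: the sprinkler starts, its spray wets the sensor, and the sensor immediately shuts the sprinkler off, every single day. The system never actually waters the lawn at all, a dynamic the chatbot’s “swimming pool” estimate overlooked:

```python
# Toy simulation of the rain-sensor feedback loop (illustrative
# assumptions: two rain days a year, 40 liters of watering per day).
RAIN_DAYS = {10, 200}
SPRINKLER_LITERS_PER_DAY = 40

def sensor_is_wet(rained_today, sprinkler_running):
    # The causal link ChatGPT missed: sprinkler spray also wets the sensor.
    return rained_today or sprinkler_running

water_used = 0
for day in range(365):
    rained = day in RAIN_DAYS
    sprinkler_on = True                    # sprinkler starts every morning...
    if sensor_is_wet(rained, sprinkler_on):
        sprinkler_on = False               # ...and is immediately shut off
    if sprinkler_on:
        water_used += SPRINKLER_LITERS_PER_DAY

print(water_used)  # → 0: the feedback loop disables watering entirely
```

Reasoning about the question requires tracing how one action (watering) feeds back into the very signal (sensor wetness) that controls it; a system without causal reasoning treats the sensor as if it only ever detects real rain.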
Pietro proposed a next step that is mindful of these gaps, especially in the context of participatory science (citizen science): Precision AI, an individualized approach that pairs AI with humans to contextualize projects and help them play off each other’s strengths through adaptive dialogues.
Pietro’s team is already building this with the Polymath Plus project, which will help citizen participants of all levels work alongside professional mathematicians to prove new mathematical theorems, thanks to personalized AI tools and assistants. AI is poor at causal reasoning (much like humans), so one of the plans for Polymath Plus is to enlist a "Formal Theorem Prover" to check our work, even when the AI says it’s correct.
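To make the prover’s role concrete: a proof assistant such as Lean only accepts a theorem once every step checks mechanically, no matter how confident the AI (or human) who proposed it was. A trivial sketch, assuming Lean 4 with its core library:

```lean
-- A machine-checked fact: addition on natural numbers is commutative.
-- Lean rejects the file unless the proof term actually checks.
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

This is what makes formal verification a useful backstop: an AI-suggested proof that doesn’t type-check is simply refused, rather than accepted on plausibility.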
Pietro made clear that this approach also applies in the health space, ending his talk with this thought: precision AI can strengthen health-based citizen science by aligning with participants’ interests, adapting to their individual needs and contexts, and making better use of our growing understanding of the complementary strengths of humans and machines.
Work with us!
Want to learn more about our experience at the conference? Read our conference recap blog post, and also check out another blog post summarizing one of Pietro’s other talks from the conference.
If you were part of the CS4Health community — or if you’re just curious about human computation — we’d love to continue the conversation. Email us at info@humancomputation.org to say hello and discuss collaborations. Maybe we can come to a city near you – invite us to your next conference!