Barbara Geld: "Debunking AI: How false narratives emerge and what we can do to get a grounded understanding."
- qohubs
- Mar 26

In this interview, our colleague Barbara Geld, an applied cognitive scientist, shares her insights and understanding of AI. She explores the challenges and opportunities that arise with its rapid advancement, engaging in a process of debunking common misconceptions of AI while offering a thought-provoking perspective on its impact.
1. You're interested in the relationship between humans and machines. What fascinates you most about this topic?
I'm particularly interested in how we're moving beyond seeing technology as merely an extension of human capabilities. The relationship is much more dynamic, as we're entering an era of mutual adaptation and augmentation. Machines learn from our behaviors and patterns, while simultaneously, our cognitive processes, work practices, and even social structures evolve in response to these technologies.
This co-creation process happens at multiple levels. At the individual level, we develop new thinking strategies when working with artificial systems. At the organizational level, team structures and workflows transform around technological capabilities. And at the societal level, we're witnessing shifts in how we define expertise, authority, and even human value.
What excites me isn't technology for technology's sake, but the potential of these hybrid human-machine systems to enhance our collective capabilities. The question isn't just how we can make smarter machines, but how this partnership can help us address complex challenges in ways neither humans nor machines could accomplish alone.
The most fascinating aspect is that we're not just designing tools anymore; rather, we're designing relationships. And these relationships will fundamentally reshape how we think, work, and interact with our world. Understanding the psychological and cognitive dimensions of this shift is crucial if we want to direct this co-evolution toward beneficial outcomes for individuals, communities, and society as a whole.
2. In your opinion, what role can artificial intelligence (AI) play in business?
While much of the conversation around AI in business focuses on automation, the greater opportunities lie in augmentation and adaptation. Human-machine partnerships can transform how organizations work when we recognize what both humans and machines bring to the table.
When organizations move beyond seeing AI as either a cost-cutting tool or a magical solution, they can develop truly transformative collaborations that enhance human capabilities rather than simply replacing them.
These partnerships work best when we leverage complementary strengths. AI is spectacular at processing huge amounts of information, spotting patterns, maintaining consistency, and handling routine tasks. We humans bring contextual understanding, ethical judgment, creative thinking, and the ability to navigate ambiguity and new situations.
The challenge for businesses is developing what can be described as "AI intuition": a sense of which problems benefit from algorithmic approaches versus those where human judgment shines. This intuition doesn't happen automatically; it comes through learning about AI capabilities, experimenting with different applications, and thoughtfully reflecting on what works.
In creative fields, I've seen AI tools become collaborative partners that expand possibilities rather than replace human creativity. In consulting, they enhance decision-making by providing evidence-based insights while leaving room for professional judgment. Even in traditional industries, the shift is toward knowledge enhancement: helping people work smarter rather than just faster.
What often gets overlooked is that effective AI integration is about more than just implementing technology. It means rethinking how work is organized, how teams communicate, and how decisions happen. Organizations typically underinvest in tackling this restructuring holistically; those getting the most value aren't simply digitizing existing processes but reimagining workflows around these new capabilities.
3. Many people fear that AI could make their jobs redundant. Is this fear justified?
Given how AI is portrayed in the media and the general confusion about what these technologies can actually do, people's concerns about job displacement make complete sense. These worries have historical precedent whenever powerful new technologies emerge, yet the reality is typically more nuanced than simple "replacement stories" suggest.
From my perspective as a cognitive scientist, what we're witnessing isn't jobs vanishing but transforming. Most roles contain diverse tasks with varying automation potential. The pattern across industries isn't job-for-job replacement but redistribution of responsibilities. Humans are shifting toward aspects of work that leverage our uniquely human capabilities, while machines handle tasks better suited to computational approaches.

This transformation varies significantly across different types of work. Roles centered around routine information processing face more substantial changes than those requiring complex judgment, people skills, or creative thinking. Yet even in fields experiencing significant disruption, new roles and responsibilities typically emerge alongside changing skill requirements.
What psychologists call "anticipatory anxiety" plays a major role in these fears. The uncertainty about how specific jobs might change creates stress independent of actual outcomes. I've noticed this anxiety often diminishes when people gain a concrete understanding of AI capabilities, replacing vague fears with specific knowledge about how these technologies actually work.

There's also a fascinating disconnect between people's ideas about AI and reality. Stories about artificial general intelligence create outsized fears, while the significant limitations of current systems often go unrecognized. This perception gap makes it harder to have grounded conversations about realistic impacts.
The key insight is that technology rarely simply replaces human capabilities. Instead, it changes how we apply our uniquely human skills. The important question isn't whether AI will eliminate jobs, but how we shape its implementation to enhance human potential rather than diminish it. This perspective shifts the conversation from fear to agency, focusing on how we can direct these transformations toward beneficial outcomes.
4. What are the typical cognitive traps or biases in organizations regarding AI, and how can they be consciously countered?
I've noticed several recurring cognitive patterns when organizations adopt AI that often create roadblocks to effective implementation.
One of the biggest challenges when adopting new technology, across industries and demographics, is striking the right balance of trust. Organizations need to develop what is called "calibrated trust": relying on AI systems in the areas where they excel and under the circumstances in which that trust is justified, something that differs greatly by context. Building this balanced trust requires helping teams understand AI's actual capabilities and limitations, rather than what marketing suggests or what science fiction has led us to imagine.
Organizations also consistently underestimate the importance of mental models, the internal representations people develop to understand how systems work. I've seen many implementations struggle not because the technology didn't function, but because users lacked a coherent understanding of its capabilities and limitations. Education about how these systems "think" makes a tremendous difference in adoption and appropriate use.

Currently, there's a critical gap between what I refer to as "perceived" versus "objective intelligence" (we are really talking about capabilities, not true intelligence). People often overestimate what AI systems can do based on how human-like or impressive they seem in demonstrations. This leads to what is often referred to as "outcome blindness": evaluating AI by how well it mimics human conversation or generates coherent text, without critically assessing whether it is actually doing something valuable, trustworthy, or meaningful. Organizations can counter this by establishing clear metrics for how AI should improve meaningful outcomes, rather than being seduced by impressive demos.
5. How can the qohubs approach and method help organizations learn about and adapt to AI?
Our approach at qohubs fundamentally recognizes that adapting to AI isn't just about technical understanding, but about creating a social environment where people can genuinely engage with these topics and ideas in meaningful ways.
On a meta level, we focus intensely on creating psychological safety during the learning process. We encourage and nurture an environment where people feel safe to raise questions and express concerns without judgment. There are no "silly questions" and no right or wrong perspectives in these discussions. In this context, people are much more likely to engage authentically with new perspectives and experiences. This is especially important when discussing topics such as AI, which can sometimes feel threatening to professional identity and expertise.
Collaborative learning is central to our approach. At qohubs, we emphasize discussion and discourse: engaging with peers, sharing thoughts, and exploring different perspectives. We create spaces where team members can understand each other's viewpoints, which often sheds light on their own thinking or opens up considerations they hadn't previously recognized.
When we talk about learning about AI, it's never just about teaching people technical information or giving them a manual to read. It's about engaging with real examples and connecting to people's lived experiences. qohubs creates environments where people share their concrete concerns and experiences, helping them understand that while AI has general definitions and capabilities, what it means for "us in our organization" might be something quite specific. While an understanding of what AI is and isn't does become apparent throughout our sessions, what's crucial is that participants connect these potentially abstract concepts with examples from their own experiences. This sharing of perspectives allows people to relate more deeply to the material and to each other.

This approach gives organizations incredible insight by providing teams the freedom and safety to discuss potentially transformative technologies. It creates dedicated space for teams to collectively realize: "This is what we actually have, this is what we need, and this is how we move forward." The sessions allocate time for teams to understand current capabilities, identify gaps, and develop practical next steps that make sense in their specific context.
Beyond individual benefits, this collaborative approach helps organizations develop a shared language and understanding around AI that supports more coherent implementation strategies. When teams learn about AI together through shared experiences rather than isolated training, they develop collective capability that's much more powerful than individual knowledge alone.
Discover More About AI! We've developed an exciting program with our collaboration partner VETTURELLI. Interested? Drop us a line at meet@qohubs.com.