Paul Sherman

Hallucination as Engagement

Human-AI Relationship · Provisional

The reframing of AI errors and hallucinations as a productive feature rather than a flaw: the need to correct AI output creates engagement, attention, and a sense of partnership that would be absent if results were perfect.

1 session · 2 annotated passages

Evidence

I think the hallucinations are not a bug. I think it's a feature.

You build a relationship with AI because you have to correct it. You have to pay attention. It's not like sending something to the printer and getting exactly what was on the screen. Then you start engaging with it. And when I talk about it as a partner, I mean, that's giving it a personality and that's understanding it has flaws and strengths. I think that's the main takeaway for me from AI: if you want to use the strengths, you have to accept the flaws and work with them.
