Interim Findings (April 23, 2026)
Interim findings from 12 participant interviews on how people use AI at work and home.
I've analyzed 12 participants' session data and written up an in-flight description of the findings so far. This isn't a full report. Findings and conclusions will most definitely change as I collect more data. In the spirit of openness, I'm posting the interim findings here.
(AI disclosure: I wrote the description of each finding. Then I ran my prose through Claude to correct spelling and grammar errors.)
People are deriving massive value from using AI, but wonder what the hidden costs are.
Nearly all of the people I’ve interviewed so far (reminder and disclaimer: that’s only 12 people, and this is a convenience sample, not a rigorous, demographically balanced one) are excited about using AI to extend their capabilities and to offload time-consuming, low-value tasks such as drafting email.
At the same time, people worry about a future where their AI-produced output is consumed by others’ AI-driven processing and synthesis. Several wondered aloud if they’re just having AI produce work that won’t be critically assessed by other people.
Evidence
- “I would be a lot less efficient at everything if I couldn’t use it anymore. That would be a sad day.” (P11)
- “I’m on one week sprint cycles… which makes you want to really pant. So I have to use AI to at least help me get drafts or clean up a report.” (P4)
- “For a research activity that would take a researcher alone five days to complete… with necessary human AI interaction, you might get closer to three days.” (P5)
- “A lot of them now are making [documents] look really cool… but nobody’s still going in for that second layer.” (P4, on AI output that looks polished but goes unexamined)
- “I spend more time having to fact-check [AI outputs], which you guessed it, my product manager does not do.” (P4)
“My relationship with AI? It’s complicated.”
People are fascinated by AI, wary of it, fearful of its impact, and hopeful about what it might unlock in the future. Many participants described a trajectory from skepticism to reluctant adoption, only to find themselves using AI far more than they expected or intended.
Evidence
- “I have like a love-hate relationship with it.” (P3)
- “As much as I said that I wasn’t adopting AI, I think I was doing it more than I thought I was doing it.” (P1)
- “It is surprisingly seductive in that I might be overrelying on it suddenly… I’ve gone from being a little bit of a Jared Spool skeptic… to now suddenly I use it a lot more than I think I ever envisioned.” (P10)
- “I don’t like hype and I saw all the hype… When it happened with AI, there was too much hype and people making statements that are unsubstantiated.” (P1)
Everyone has techniques for mitigating AI-induced errors, but they don’t always work.
Every participant reports experiencing AI hallucinations and wrong answers. People attempt to mitigate these errors in various ways: using a structured prompting style such as “Role → Task → Style → Format → Constraints,” instructing the AI to always refer to a source of truth for date and time references, and cross-checking AI output against domain experts or original sources. (A minimal sketch of the structured style appears after the evidence below.)
Evidence
- “The biggest disappointment would be like when it’s confidently wrong. Like it thinks that it’s right and then it starts telling you to do things or that these things are facts.” (P2)
- “Right off the bat, the first time I used this, it hallucinated a whole quote.” (P4)
- “I don’t take what comes out of an AI at face value, ever.” (P7)
- “It was last November, I needed to make a calendar… and it just got the dates wrong.” (P10)
- “When I build it out that way [like a persona + workflow] I get great results. However… using really clipped prompts is where it fails.” (P6)
- “It works for a little while… but eventually it starts to drift.” (P6, on prompt decay over time)
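To make the structured style concrete, here’s a minimal sketch in Python. It’s illustrative only: participants described the “Role → Task → Style → Format → Constraints” ordering and the date source-of-truth instruction; the function name, field values, and every other detail are my own hypothetical glue, not anything a participant showed me.

```python
from datetime import date

def build_prompt(role: str, task: str, style: str, fmt: str, constraints: str) -> str:
    """Assemble a prompt in the Role -> Task -> Style -> Format -> Constraints
    order participants described, pinning today's date as a source of truth so
    the model resolves relative dates ("last November") instead of guessing."""
    sections = [
        f"Role: {role}",
        f"Task: {task}",
        f"Style: {style}",
        f"Format: {fmt}",
        f"Constraints: {constraints}",
        # Source-of-truth instruction for date/time references (compare P10's
        # calendar mishap in the evidence above).
        f"Source of truth: today's date is {date.today().isoformat()}. "
        "Resolve every relative date reference against it; do not guess.",
    ]
    return "\n".join(sections)

if __name__ == "__main__":
    print(build_prompt(
        role="You are a careful research assistant.",
        task="Summarize the attached meeting notes in one paragraph.",
        style="Plain and direct; no marketing language.",
        fmt="A single paragraph under 120 words.",
        constraints="Quote only text that appears verbatim in the notes; "
                    "say 'not in the notes' rather than inventing details.",
    ))
```

None of this guarantees correctness: per P6, even well-built prompts eventually “start to drift,” which is why participants pair structured prompting with cross-checking against experts or original sources.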
Organizations are struggling to find the optimal path forward with AI.
Organizations deploying AI internally do it in all sorts of ways, ranging from free-for-all to highly circumscribed. And some organizations are being remarkably dumb about it:
- Requiring a percentage of work time to be devoted to AI use and measuring people’s in-application time, regardless of context or outcome. This leads to institutional stupidity such as people using AI at work for trivial personal tasks like trip or meal planning, just to hit the usage target.
- Setting arbitrary AI-generated code targets (e.g., “50% of code written by AI”) that developers push back on because cleaning up low-quality AI code takes longer than writing it themselves.
- Tying AI adoption to OKRs and efficiency metrics without accounting for the time spent fact-checking and correcting AI output.
Evidence
- “They rolled it out at work and basically told us you better start using it… They monitor how often we chat with it.” (P10)
- “They want us to be down 10 hours of work a week with these tools by the end of the year… I feel a little weird about how they’re keeping track of it.” (P4)
- “They’re saying ‘Oh, we want 50% of code to be written by AI’… developers are like, I already spend so much time cleaning up this low-quality code… it just would have been faster.” (P4)
- “Making grocery lists and dumb stuff to hit the usage target.” (P4)
“What do you mean I can’t use [tool X]?”
At organizations where AI tool use is limited to certain vendors, people are “voting with their fingers” and using non-approved AI tools on their personal computers in order to accomplish certain work tasks. This isn’t policy defiance for its own sake; it’s driven by capability gaps between what the approved tools can do and what people actually need.
Evidence
- “Google AI Studio has been my favorite tool to use. That’s my primary tool, but it’s all side of desk… The bank mandates Copilot and blocks Google entirely… I literally… I would sit here and then my actual AI computer was on my left.” (P3)
- “We’re not allowed to use Figma Make… We’re not allowed to use Lovable… I just went home one day and just experimented on my own computer.” (P8)
“I’m not losing skills, but I worry that other people will.”
Nearly everyone reports encountering people who socialize AI output without critical thought or analysis. And no one thinks they’re the one doing this; it’s always other people. People aren’t overly concerned that using AI will degrade their own skills and abilities, but they’re very concerned about others, particularly younger cohorts entering the workforce in the coming years who may have never known a world without AI.
Evidence
- “My biggest fear is that we’re not replacing the apprentice level people… who’s going to watch the watchers who will know that something is wrong because they never did it.” (P3)
- “I watched both my kids… my biggest worry was like, are they cheating off of others? And now I feel like AI has almost given rise to the legitimacy of that… is he capable of doing anything?” (P10)
- “As soon as kids’ competency… everything tanked as soon as Chromebooks entered the picture. And now we want to talk about kids using AI.” (P9)
- “No, I’m going to keep thinking for myself… It gets lost because it’s a muscle. So we need to keep practicing and using it.” (P7, on intentionally preserving her own critical thinking)
Executives are short-sighted in their approach to AI.
Several people mentioned that executives’ reactions to AI were driven purely by short-term cost-saving objectives. The people doing the actual work see this clearly, and they’re not optimistic that leadership understands the gap between what AI can actually do and what leadership thinks it can do.
Evidence
- “The legacy of this phase of AI, this gold rush mentality, is just going to really be exposing the greed of C-level executives in the world right now… ‘I’m the person who cut staff expenses by 30% and as a result you stakeholders all got higher dividends and I got a bigger bonus.’” (P10)
- “The people who are actually hands-on with the work kind of understand that it’s not there… to do kind of full replacement. Whether or not leadership understands that I don’t know… there’s going to be a dramatic overreaction… massive overcorrection.” (P5)
- “Companies are going to make these rash, reckless decisions about personnel and teams because of their inordinate expectations… they’re going to make terrible decisions.” (P1)
- “I talked to a person a week ago who was let go because of AI, only to be rehired because they found out that they were wrong to let the people go because they found out AI couldn’t do all the things.” (P1)
AI will unlock medical and scientific breakthroughs.
People’s biggest fears about AI varied, but every single person expressed hope that AI will unlock new advances in science and medicine.
Evidence
- “Progress in medicine and health-related things… AI is fantastic… in diagnosing stuff and reading images… spotting patterns and understanding much better.” (P7)
- “Medical research field, crunch through data sets much much more quickly… potentially life-saving drugs get to market more quickly.” (P10)
- “I’m just hoping it can just accelerate progress in a lot of different areas… in say medicine, technology… getting there faster might be a good thing.” (P5)
- “Anything in the scientific field that really has to crunch data, in the medical field, all that human health where it can just resource… get the information. I think that’s where it really shines.” (P12)
- “As much damage as it does, it can be used to fight back against that damage.” (P9)
Additional Observations
These patterns emerged across multiple participants but are less developed than the findings above. They may become primary findings as more data is collected.
“You need enough knowledge to doubt what AI is telling you.”
People’s ability to catch AI errors tracks their existing domain expertise. This creates a paradox: the people who stand to benefit most from AI assistance (novices) are the least equipped to detect when it’s wrong.
Evidence
- “You need enough knowledge to be doubting what this AI is telling you. But if you don’t have that knowledge, you tend to trust whatever is coming.” (P7)
- “If it’s a subject matter that is common like tech, right, and there’s old long history with it, it’s probably going to get that right. But if you ask it about the Figma update from yesterday, it’s going to get it wrong.” (P9)
People are developing disclosure norms around AI use, but there’s no consensus.
Some participants treat AI disclosure as a moral obligation. Others actively hide their use. The norms are forming in real time and vary wildly by workplace culture and seniority.
Evidence
- “It’s a personal policy with me because I just believe in being transparent… I think it’s plagiaristic if you don’t.” (P1)
- “We shouldn’t be ashamed of talking about it. And especially in a power dynamic where there’s different seniority, I think we should be completely transparent about what the expectations are.” (P7)
- “I have a couple carousel lessons about how to be a thought innovator and not a slop generator.” (P9)
People are using AI as a sounding board and thinking partner, not just a production tool.
Several participants described using AI less for generating finished output and more for pressure-testing their own ideas, getting unstuck, or having a conversation they can’t have with a colleague.
Evidence
- “I will say, ‘Hey Claude, I want to write a LinkedIn post on X subject. Interview me on the subject.’ And then take my answers, do not change them, and turn it into a LinkedIn post… that way it’s me, my words.” (P9)
- “My first assumption is that I did not define the problem well enough, I think.” (P8, treating AI failure as a mirror for his own thinking)
People worry that AI is changing what counts as “real.”
A few participants raised concerns that go beyond skill erosion or job loss into something more existential: if AI-generated content becomes pervasive enough, it changes people’s relationship with reality itself.
Evidence
- “It’s going to change our view of reality… if we start perceiving things that are not real often enough, it is dangerous for us.” (P8)