Paul Sherman

Research: People and AI

An ongoing investigation into how people use AI, and how it's affecting their work, their thinking, and their lives.

About This Research

AI tools are becoming part of how people work, think, and relate to each other. But most of what we hear about AI adoption is either hype or fear. There's remarkably little grounded research into the actual lived experience of people who are integrating these tools into their daily routines.

This project looks beyond the hot takes. Through in-depth interviews with people across roles, industries, and levels of AI experience, I'm building a rich, evidence-based picture of how AI adoption actually works: what drives it, what it changes, what hidden costs it imposes, and what it enables.

Methods

The study uses two complementary data-gathering techniques. First, a short pre-interview survey collects organizational and demographic context, along with initial descriptions of how participants and their organizations are using AI. Then, a 30-minute semi-structured interview digs deeper into the stories behind those survey responses.

The interviews follow behavioral interviewing techniques, asking for specific stories and concrete examples rather than general opinions. Topics cover the full arc of AI adoption, from first use through current patterns: successes and failures, changes to work and personal activities, trust calibration, social dynamics around disclosure, and cognitive and emotional effects. All sessions are recorded, transcribed, and de-identified for analysis.

Goals

The goal is to gather, synthesize, and share publicly the results of this research. Outputs will include a synthesis of key findings and themes, a framework for understanding AI adoption patterns, a typology of how work practices are changing, and a map of the social norms emerging around AI use and disclosure. All findings will be published here as the research progresses, making this a living, evolving body of work rather than a single report.

To that end, everything on this site is published under a Creative Commons BY-NC-SA 4.0 license. The study materials, analytical methods, codebook, and findings are all freely available to use, adapt, and build on with attribution. Public research about how people actually use AI should itself be public, and the CC license makes that concrete: other researchers, designers, and teams working on these same questions can take what's here and run with it.

Emerging Themes

As sessions accumulate, patterns are starting to emerge. So far, 34 themes have been identified across 5 categories, covering everything from adoption patterns and trust calibration to concerns about skill erosion and job displacement.

As noted above, everything here is published under a Creative Commons BY-NC-SA 4.0 license. In that spirit, I built a lightweight theme explorer so you can dig into the codebook, browse themes by category, and follow each theme back to the sessions where it appeared. Try it out.

Explore Themes
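
For the curious, the explorer boils down to a simple structure: each theme carries a category and the list of sessions where it was coded. Here's a minimal sketch of that structure in Python. The field names, category labels, and example values are illustrative only, not the explorer's actual schema.

```python
# Minimal sketch of a codebook structure like the one behind the theme
# explorer. Field names, categories, and values are illustrative only.
from collections import defaultdict

codebook = [
    {"theme": "Trust Calibration", "category": "Cognition",
     "sessions": ["P1", "P2", "P5"]},
    {"theme": "Corporate Tooling Gap", "category": "Organizations",
     "sessions": ["P3", "P8", "P13"]},
]

# Browse themes by category, as the explorer does.
by_category = defaultdict(list)
for record in codebook:
    by_category[record["category"]].append(record["theme"])

# Follow a theme back to the sessions where it appeared.
def sessions_for(theme: str) -> list[str]:
    return next((r["sessions"] for r in codebook if r["theme"] == theme), [])

print(by_category["Cognition"])           # ['Trust Calibration']
print(sessions_for("Trust Calibration"))  # ['P1', 'P2', 'P5']
```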

Become a Participant

I'd love to talk with you for 30-45 minutes about your experiences with AI. Whether you're a power user, a cautious experimenter, or somewhere in between, I want to hear from you.

This is a personal research project. There are no sponsors and no corporate agenda, just genuine curiosity from someone who has spent more than 20 years studying how people adapt to new technology.

What's in it for you: a transcript and analysis of your session, plus access to upcoming research readouts and highlights. All results are de-identified. Your personal information won't be shared with anyone.

Session Writeups

Apr 20, 2026

P15 - Senior Developer, Telecommunications

A senior developer and DevOps engineer at a telecommunications company serving defense and government clients, whose company went all-in on AI tooling with Claude Code, Cursor, mandatory training, and weekly success stories, and who now watches a colleague prototype in 48 hours what took legacy teams years to build, while wondering who will accept AI-built software when the government asks for an audit trail.

Are the AI vendors going to be like crack dealers that say, 'Oh, the first taste is affordable,' and then they're going to turn the screws on us and now what? We're stuck with a codebase that no monkey in the company can grok, right?

Themes: Organizational AI Adoption Challenges · Apprenticeship Erosion · Job Security Anxiety · Knowledge Displacement · Vendor Lock-in Anxiety · Trust Calibration · Augmentation Not Replacement

Apr 20, 2026

P14 - Head of Design, Healthcare Software

The sole designer at a startup building an AI-powered application platform for a regulated industry, who taught himself Claude Code to produce front-end prototypes after realizing his engineering counterpart's AI-accelerated pace had left him two or three steps behind, and who now contends with subject matter experts vibe coding interfaces that look finished on the surface but lack design system alignment, documented intent, or user-centered reasoning.

I don't want to know what Claude thinks about this. I just want to know what you think. Like, here's why this doesn't make sense. Tell me what you think.

Themes: Trust Calibration · Organizational AI Adoption Challenges · AI Slop Detection · Disclosure Norms · Apprenticeship Erosion · Knowledge Displacement · Job Security Anxiety · Augmentation Not Replacement · Vibe Code Governance

Apr 20, 2026

P13 - UX Design Consultant, Consumer Finance

A UX design consultant with deep financial services experience who adopted AI through personal experimentation, then built it into her professional workflow for executive communications, job search automation, and portfolio development, all while navigating heavily regulated environments where AI use happened entirely in the shadows, with employees and VPs alike emailing themselves AI-generated work disguised as late-night inspiration.

And because we could not use it at work, I was using my personal account on my phone and then I was emailing myself at work, saying 'midnight ideas, insomnia crisis.' But it was funny because most of the VPs were doing the same thing.

Themes: Corporate Tooling Gap · Pervasive Integration · AI as Equalizer · AI as Learning Partner · Organizational AI Adoption Challenges · AI Learning Resource Fragmentation · Trust Calibration · Knowledge Displacement · Disclosure Norms · Useful AI Techniques · Job Security Anxiety · AI Governance Anxiety

Apr 20, 2026

P12 - UX Designer/Researcher, Advertising & Design

A UX designer and researcher with a visual advertising background who entered AI through Midjourney image generation, consolidated around Gemini as a primary tool, and treats a structured prompting framework learned in an AI training class as his most significant unlock for getting reliable, audience-appropriate output.

I think critical thinking is so so important because I will notice in this ongoing chat things that are left out, I will question and they'll act like they forgot. I don't know where that disconnect is, but I would say if you're not really critically thinking about the information you're getting, it's going to probably let you down in some ways.

Themes: Trust Calibration · Useful AI Techniques · AI as Equalizer · Skill Erosion · Disclosure Norms · Knowledge Displacement

Apr 17, 2026

P11 - CTE Program Manager, K-12 Education

A career technical education program manager in a large urban school district who enthusiastically uses AI across work, doctoral studies, meal planning, and freelance design describes circumventing her district's approved tool restrictions by switching to the guest network, while pushing to establish the very AI usage norms that she herself works around.

I serve a majority minority district and what are the sources? What's the input? Can I see the data set that AI was trained on? Because I want to know that when it's giving a teacher an answer, if it gives them something that's wrong I want them to be able to identify it.

Themes: Pervasive Integration · Corporate Tooling Gap · Trust Calibration · Knowledge Displacement · Disclosure Norms · Apprenticeship Erosion · AI Bias Amplification

Apr 17, 2026

P10 - UX Manager, Insurance

A UX manager at an insurance company whose AI adoption was explicitly mandated by his employer describes settling into 'low-risk ideation' as his only use case after a hallucination incident shattered his trust, while worrying that AI's seductive pull is eroding both his own thinking and his sons' capacity to learn without it.

It's almost kind of like having a small council of different personalities or different backgrounds or different perspectives to kind of push against my default way of doing things.

Themes: Trust Calibration · Hallucination Frustration · Knowledge Displacement · Apprenticeship Erosion · AI as Sounding Board · Disclosure Norms · Job Security Anxiety · Organizational AI Adoption Challenges

Apr 17, 2026

P9 - UX Researcher and AI Specialist, Independent

An independent UX researcher and AI specialist who has woven Claude into nearly every facet of her daily life, from an 8 a.m. planning brief that manages her ADHD to building her own SaaS replacements, describes a feedback loop where AI-powered organization creates a false sense of capacity, and is actively planning her 'Plan B' for when vendor lock-in becomes a liability.

That gift of being able to get my thoughts out, put it in a suitcase, know it's safe, and visit with it whenever I want to work on that.

Themes: AI as Cognitive Prosthetic · Pervasive Integration · Vendor Lock-in Anxiety · Radical Transparency · Useful AI Techniques · Knowledge Displacement · AI Digital Divide · Trust Calibration · Self-Maintenance · AI as Learning Partner

Apr 16, 2026

P8 - UX Researcher/Designer, Electric Utilities

A UX researcher/designer at a large electric utility company navigates some of the strictest AI security restrictions in the study, going home to experiment on personal equipment when workplace tools are blocked, and arrives at a distinctive reframing of AI hallucinations as a feature that creates engagement and partnership rather than a flaw that undermines trust.

You build a relationship with AI because you have to correct it. You have to pay attention. It's not like sending something to the printer and you get exactly what was on the screen.

Themes: Corporate Tooling Gap · Trust Calibration · Augmentation Not Replacement · AI Slop Detection · Disclosure Norms · Prompt Drift · AI as Equalizer · Hallucination as Engagement · Organizational AI Adoption Challenges

Apr 16, 2026

P7 - Principal Design Researcher, Software Consulting

A principal design researcher at a software consultancy ran a controlled comparison of AI-assisted vs. unassisted transcript analysis and found that AI excels at targeted retrieval but creates tunnel vision that strips away the contextual understanding where the most valuable implicit insights live.

I think it's part of the process for us as researchers to immerse ourselves in the data. If we skip that, we don't understand the data afterwards, and we are not retaining that important knowledge that is kind of layering in the back of your mind.

Themes: Trust Calibration · Augmentation Not Replacement · Knowledge Displacement · Disclosure Norms · Self-Maintenance · Hallucination Frustration · Apprenticeship Erosion · Pervasive Integration

Apr 15, 2026

P6 - Senior Technical Product Manager, Consumer Finance

A senior technical product manager at a global credit card processor describes building AI-powered fraud detection tools by day and wrestling with Copilot's prompt drift by night, arguing that AI has become a necessary counterweight to the information environment humans created, while worrying that the same convenience will erode the critical thinking skills the next generation needs.

In a way, we've created an information environment where we need it. We need AI. And I really think that that's the biggest promise of it, is to stop using it as a potential replacement for humans and use it as a way for us to manage this infosphere that we've built ourselves.

Themes: Trust Calibration · Augmentation Not Replacement · Knowledge Displacement · Prompt Drift · AI as Learning Partner · Corporate Tooling Gap · AI as Cognitive Prosthetic · Invisible AI

Apr 15, 2026

P5 - Sr. Manager, UX Research, Software

A senior UX research manager with 25+ years of experience describes AI as the most energizing change to their discipline in three decades, while detailing how every win required substantial human oversight to produce trustworthy results, and predicting that leadership will dramatically overreact to AI's capabilities before a massive overcorrection.

For a research activity that would take a researcher alone five days to complete, if you look at it with AI alone, it might take a day, but in order to do a good job of it, the necessary human AI interaction, you might get closer to three days.

Themes: Trust Calibration · Augmentation Not Replacement · Hallucination Frustration · Knowledge Displacement · Job Security Anxiety · AI as Sounding Board · Disclosure Norms · AI Slop Detection

Apr 15, 2026

P4 - Senior UX Researcher, Software

A senior UX researcher on one-week sprint cycles describes an organization simultaneously investing in custom AI research tools and mandating AI usage through performance bonuses, while fabricated pain points flow unchecked through vibe-coded PRDs and data centers consume farmland in their community.

My bonus, my performance, is attached to how much I use AI at work. So I have to [use it]... if I don't I might not get my bonus.

Themes: Trust Calibration · Hallucination Frustration · Expectation Escalation · Corporate Tooling Gap · Job Security Anxiety · Apprenticeship Erosion · Organizational AI Adoption Challenges · Infrastructure Anxiety

Apr 14, 2026

P3 - Head of Design, Banking

A design director at a major bank who managed a 45% team reduction while maintaining delivery across 14 pods, building AI tools on a personal machine because corporate tooling cannot do what the work demands.

My biggest fear is that we're not replacing the apprentice level people. They still need a fundamental of whatever their craft is without AI. Who's going to watch the watchers?

Themes: Expectation Escalation · Corporate Tooling Gap · Apprenticeship Erosion · AI for Strategic Positioning · Creative IP Concern · Organizational AI Adoption Challenges · Job Security Anxiety

Apr 14, 2026

P2 - IT Business Analyst & Adjunct Professor, Healthcare

An IT business analyst and adjunct statistics professor who uses AI almost exclusively through ChatGPT, primarily for personal tasks: learning Spanish through gaming communities, translating medical jargon during a family hospitalization, and recovering thousands of dollars in pet insurance claims.

The biggest disappointment would be like when it's confidently wrong. Like it thinks that it's right and then it starts telling you to do things or that these things are facts.

Themes: Trust Calibration · Hallucination Frustration · AI as Equalizer · Skill Erosion · Augmentation Not Replacement · Job Security Anxiety

Apr 14, 2026

P1 - Principal UX Designer, Insuretech

A veteran UX practitioner with 25+ years of experience and a freshly completed PhD initially resisted AI due to the hype cycle, then became one of its most prolific adopters.

I will never 100% trust AI ever because I don't think it will earn that. It's an oxymoron.

Themes: Hype Resistance · Pervasive Integration · Trust Calibration · Self-Maintenance · AI as Validator · Expectation Escalation · Radical Transparency · Knowledge Displacement · Organizational AI Adoption Challenges · Hallucination Frustration

Interim Findings

(April 23, 2026) I've analyzed 12 participants' session data and written up an in-flight description of the findings so far. This isn't a full report. Findings and conclusions will most definitely change as I collect more data. In the spirit of openness, I'm posting the interim findings here.

(AI disclosure: I wrote the description of each finding. Then I ran my prose through Claude to correct spelling and grammar errors.)

Interim Findings (v1, April 23, 2026)

Under the Hood

Glad you scrolled this far. Here's the part I'm genuinely excited about: I designed the entire analysis pipeline for this project around AI-assisted workflows, and the speed is sort of absurd. A raw interview recording becomes a de-identified, coded, and published session writeup with pull quotes, theme annotations, and a full analytical narrative in under an hour. That turnaround would have been unthinkable two years ago.
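
To make that concrete, here's a rough sketch of the pipeline's stages in Python. The function names, the stub implementations, and the naive keyword matching are mine for illustration; the actual workflow relies on real speech-to-text and LLM tooling, not keyword matching, and its details may differ from this shape.

```python
# Rough sketch of the recording-to-writeup pipeline, with every stage
# stubbed out. Names and structure are illustrative, not the actual tooling.
import re

def transcribe(recording_path: str) -> str:
    # Stage 1: speech-to-text. Stubbed; in practice an AI transcription step.
    return "Participant: At Acme Corp, a lot of this is trust calibration..."

def deidentify(transcript: str) -> str:
    # Stage 2: strip names, employers, and other identifying details.
    return re.sub(r"Acme Corp", "[EMPLOYER]", transcript)

def code_themes(transcript: str, codebook: list[str]) -> list[str]:
    # Stage 3: tag the transcript with codebook themes. Stubbed as a naive
    # keyword match; in practice an AI-assisted coding pass.
    return [t for t in codebook if t.lower() in transcript.lower()]

def build_writeup(transcript: str, themes: list[str]) -> str:
    # Stage 4: assemble theme annotations, pull quotes, and the narrative.
    return f"Themes: {', '.join(themes)}\n\nExcerpt: {transcript[:60]}..."

if __name__ == "__main__":
    clean = deidentify(transcribe("session.m4a"))
    themes = code_themes(clean, ["Trust Calibration", "Vendor Lock-in Anxiety"])
    print(build_writeup(clean, themes))
```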

The study design, the analysis workflow, and the tooling are all still evolving as I run sessions and learn what works. This is a living project, not a frozen archive. I'm iterating on the process in the open because I think the methods are as interesting as the findings, and because other researchers might find them useful.

Everything here is published under a Creative Commons BY-NC-SA 4.0 license. Creative Commons is a nonprofit that provides free, standardized licenses for sharing creative and academic work. BY-NC-SA means you can use, adapt, and build on anything here as long as you give attribution, keep it non-commercial, and share your work under the same terms. The study materials, the analytical process, the findings: all of it is yours to learn from and build on.

Study Materials

AI Use Disclosure

I used AI to analyze the data collected via interviews and surveys. How? There's a bit more to it than I want to cram in here, so reach out if you'd like to talk through my AI-assisted workflow, which I'm still evolving as I go.