Paul Sherman
April 14, 2026

The Pragmatic Equalizer

P2 - IT Business Analyst & Adjunct Professor, Healthcare

An IT business analyst and adjunct statistics professor who uses AI almost exclusively through ChatGPT, primarily for personal tasks: learning Spanish through gaming communities, translating medical jargon during a family hospitalization, and recovering thousands of dollars in pet insurance claims.

The biggest disappointment would be like when it's confidently wrong. Like it thinks that it's right and then it starts telling you to do things or that these things are facts.

P2: Survey Data and Session Summary

Survey Responses

Age: 25-34
Highest Level of Education: Master's degree
Current Role / Position Level: Individual contributor
Job Title: IT Business Analyst II
Years of Professional Experience: 8-15 years
Organization Description: We provide health care for the community.
Industry Sector: Information technology (software, hardware, semiconductors, and IT consulting)
Individual AI Tools Used: Text generation (e.g., creating documents, emails, summaries), Media creation (images, audio, video), Data analysis and synthesis
Organizational AI Tools: Customer-facing chatbots or virtual assistants; Internal search and knowledge summarization; Security, fraud detection, or anomaly monitoring; Predictive analytics for business forecasting; Content moderation or filtering systems; Code generation and developer tools
AI Adoption Involvement: No direct involvement in adoption or deployment (mostly a user of a deployed AI system)
Biggest Work Win: My biggest win professionally comes from moonlighting as an adjunct professor. I use AI to create data sets for students when we are doing statistics-based assignments. This gives them a scenario that is a lot more realistic and in-depth than I would be able to on my own. IE: create 100 fictitious IT tickets for productivity analysis and staffing recommendations.
Biggest Work Disappointment: Professionally, I haven't had a disappointment. Personally, when I catch it being "positively wrong" - where it really acts like it knows what it is saying but it's giving false information.
Org's Biggest AI Success: Based purely off my limited utilization knowledge, I would say the forecasting reports that leadership uses for staffing, etc. Our BI team made the dashboards with predictive analytics "before AI was cool".
Org's Biggest AI Challenge: Biggest challenge has been navigating sensitive information. Our BI team is in the process of developing their own GPT system so they can prevent it from getting out "into the wild".

Background

P2 is an IT business analyst at a hospital who moonlights as an adjunct professor teaching statistics and IT courses. He holds a master's degree and has 8-15 years of professional experience. His AI use centers almost entirely on ChatGPT and tilts heavily personal rather than professional. At work, his hospital's business intelligence team has been doing predictive analytics "before AI was cool," building staffing forecasting dashboards for leadership, but P2 himself has had no direct involvement in AI adoption or deployment.

P2's personal AI use is where the session gets interesting. He uses ChatGPT to learn Spanish through gaming communities and Puerto Rican Facebook groups, replacing paid language lessons he could no longer afford. He played Runescape in a Venezuelan guild for a month before anyone realized he was American. He used it to translate medical jargon during his sons' hospitalization, copying doctor's notes from MyChart into ChatGPT to understand what was happening. And he used it to navigate pet insurance claims, recovering thousands of dollars he believes he would have lost without the tool's help in matching the insurer's specialized language.

P2 presents a notably pragmatic relationship with AI. He draws a clear line between using AI as a "learning tool" versus a "do it for me tool," a distinction he applies both personally and in his classroom. His trust calibration is stakes-based: medical information gets cross-checked, collectible card inventories do not. And his biggest fear is job security, particularly around web development being automated, which has him considering going back to school for "more human-centric" skills.

Key Findings

Stakes-Based Trust Calibration

P2 describes a verification heuristic that scales effort to the importance of the question. High-stakes information (his sons' medical care) gets cross-referenced against other sources. Low-stakes queries (inventorying collectible cards) get taken at face value. This is less systematic than P1's two-layer methodology but arguably more representative of how most people actually calibrate trust in practice.

"I would say that it has to do with how important what you're asking it is. So I think it's a great first step. But then depending on what it is, I would always try to also corroborate like okay this is indeed true or not."

The pet insurance claim story adds a critical data point: P2 actually followed AI advice that turned out to be wrong, and it had real consequences. The insurer pushed back on language the AI had told him to use. When he fed the pushback back into ChatGPT, it reversed itself and admitted it shouldn't have given that guidance. This is a stronger hallucination example than most in the dataset because the harm was financial, not just inconvenient.

AI as Institutional Equalizer

P2's most concrete use cases involve navigating asymmetric relationships with institutions. Hospitals speak in medical jargon. Insurance companies use specialized language that can disqualify claims if you get the wording wrong. In both cases, AI closed the knowledge gap between an individual and an institution that holds all the domain expertise.

"It helps bridge the gap between organizations and people... it helps me not feel like I get railroaded by insurance claims and stuff as easily."

The financial impact is tangible. P2 estimates he recovered thousands of dollars in pet insurance claims that he would have lost without AI's help in matching the insurer's expected language and navigating the appeals process.

"If I wasn't able to use it at all, I would have a lot of thousands of dollars worth of vet bills that I wasn't able to get covered just purely because of not saying the right things in the submissions."

The "Learning Tool, Not a Do It for Me Tool" Line

P2 draws a sharp distinction between augmentation and delegation, and applies it both personally and pedagogically. He hasn't offloaded any tasks entirely to AI. He uses it to revisit and improve past work (deciphering his late grandmother's handwritten recipe cards), not to avoid doing work in the first place. In his classroom, he tells students the same thing: AI is fine as long as you're learning from the process, not skipping it.

"I don't mind you using AI to do your work because that's how, you know, it's a learning tool. But use it as a learning tool and not a do it for me tool. If I can tell that you've clearly just thrown it in there and said, 'Do this for me,' then you kind of lose that personal credibility in my eyes."

Job Security and the Human-Centric Hedge

P2 watched AI spin up a website in five minutes and recognized his own specialty shrinking in real time. His response is strategic: he's considering going back to school to develop skills that are harder to automate. The framing is not despair but calculation. He sees the direction of travel and is planning accordingly.

"I was kind of looking at maybe going back to school to learn an extra skill that maybe is more human centric because that human touch I think is going to be more of what keeps people employed."

Emerging Themes

Trust Calibration: Deliberate practices for evaluating AI trustworthiness. Key quote: "I would say that it has to do with how important what you're asking it is. So I think it's a great first step."
Hallucination Frustration: Disappointment at AI confidently producing fabricated content. Key quote: "The biggest disappointment would be like when it's confidently wrong. Like it thinks that it's right and then it starts telling you to do things or that these things are facts."
AI as Equalizer: Using AI to bridge knowledge gaps between individuals and institutions. Key quote: "It helps bridge the gap between organizations and people... it helps me not feel like I get railroaded by insurance claims and stuff as easily."
Skill Erosion: Observed atrophy of skills attributed to AI handling those tasks. Key quote: "My ability to spell certain things has kind of went down because I'm just so used to, oh, okay, it sees what I'm trying to do and it fixes it for me."
Augmentation Not Replacement: Using AI to enhance rather than offload tasks. Key quote: "I haven't completely offloaded any tasks for it. I pretty much just use it to augment what I do."
Job Security Anxiety: Fear that AI will reduce the number of people needed for a given type of work. Key quote: "I was kind of looking at maybe going back to school to learn an extra skill that maybe is more human centric."

P2's trust calibration introduces a "proportional verification" variant. Where P1 described a two-layer methodology (organic reaction plus deliberate adversarial testing), P2 scales verification effort to the stakes of the question. Medical information about his hospitalized sons gets cross-checked against other sources. Collectible card inventories get taken at face value. Less systematic than P1, but arguably more representative of how most people actually calibrate trust.

"Like if I was asking it about my son's inpatient stay and it would tell me things... I'd say okay well let me take what you've given me and I'll go out and then I'll look it up and see if I can find anything else about that just to kind of make extra sure. But then for things like I was using it to do an inventory of like old collectible cards that I had, you know, that isn't as we'll say important."

P2's hallucination frustration adds a consequence-bearing example. P1 caught hallucinations before they mattered (reviewing meeting summaries). P2 actually followed bad advice and experienced real-world financial harm when an insurance company pushed back on language the AI had told him to use. The AI then reversed itself and admitted the guidance was wrong. This is a distinct sub-pattern: actionable bad advice that causes harm, not just fabricated content that gets caught during review.

"I was trying to get it to help me with it. And I included something in the description of what was going wrong that it told me to do and then they gave me some push back on it. So then I took it and pasted their push back into it and I said what's going on here? What should I do now? And it said well you shouldn't have said this. And I said well you told me to do that."

P2's AI as equalizer evidence is the strongest in the dataset for individual-versus-institution use cases. The medical jargon translation during his sons' hospitalization and the insurance claim navigation both involve the same dynamic: an institution holds all the domain expertise and specialized language, and an individual uses AI to close that gap. The financial impact is concrete: thousands of dollars in recovered vet insurance claims.

"I would copy that and paste it into ChatGPT and say, 'What does this mean in English?' basically and it would kind of dumb it down for me so I could understand better."

P2's skill erosion is the flip side of P1's self-maintenance theme. P1 described actively counteracting erosion by rereading and re-learning. P2 reports the erosion itself as already happening, using spellcheck as an example of a skill that has atrophied because the tool handles it automatically. He doesn't describe counter-practices.

"My ability to spell certain things has kind of went down because I'm just so used to, oh, okay, it sees what I'm trying to do and it fixes it for me."

P2's augmentation-not-replacement stance is explicit and philosophical. He hasn't offloaded any tasks entirely. He uses AI to revisit past work (grandmother's recipe cards) and to improve what he already does, not to avoid doing it. The same principle shows up in his teaching: "use it as a learning tool and not a do it for me tool." When he detects that someone has used AI as a shortcut rather than a supplement, it costs them credibility in his eyes.

"Use it as a learning tool and not a do it for me tool. If I can tell that you've clearly just thrown it in there and said, 'Do this for me,' then you kind of lose that personal credibility in my eyes."

P2's job security anxiety is personal and economic. He watched AI build a website in five minutes and recognized his own specialty being automated. His response is strategic rather than despairing: he's considering going back to school for skills that are harder to automate, framing "the human touch" as the durable competitive advantage.

"It spun up a website, you know, five minutes. And I get that, you know, it makes errors and we still need people to check those errors and everything, but the amount of people that you need to check that is going to be less."

Interview Transcript

00:03:34

Paul: So, start at the beginning. What was the first AI tool that you remember trying and adopting?

P2: The first AI tool I used was called Craiyon. So basically you tell it what you want it to make a picture of and then it created that image. I had spent a few years trying to get somebody to make a portrait of my dogs, like a watercolor, and I wanted it to be like a Star Wars Jedi robe and everything. But nobody was taking consignments, so my friend said, "Hey, try this out." And that helped me get... it wasn't the best, but I think it was still pretty early. It helped me get the dogs in some robes and then, you know, with the lightsabers, like they were fighting. So then, after I got that, I took it into Paint.NET.

00:05:34

P2: It was like a free Photoshop and I kind of edited a little bit from there. But that was my first real experience with AI.

Paul: That's so I don't have to ask the followup. Was this for work or personal use?

P2: Personal.

Paul: What other AI tools are you using now and what have you stuck with? What did you stop using along the way?

00:06:52

P2: I haven't used Craiyon since that first time that I used it. I use ChatGPT for pretty much everything now. I use it more for personal than I do work, just because it's easier for me to identify things that I can do with it in my personal life. But I mean, I use ChatGPT for a lot of different things. Probably my favorite thing that I use it for is to help learn Spanish. I was doing personal lessons with a lady in Argentina for a while, but then time got crazy and, you know, I had to scale back on my budget. So, one thing I do is I play video games in Spanish and then I say like, "Hey, this is something I kind of got stuck on," and it helps explain, you know, the wording or what this means. And then I have Facebook communities for different games that I play, and I talk to them in Spanish. They're from, like, Puerto Rican Facebook groups. And it kind of helps. So, I'll say what they said and it tells me if I basically interpreted it right, and then I'll tell it what I want to say and it gives me pointers to help correct it.

Paul: So is this for conversing in those Facebook groups or is this for chat in your games?

00:08:11

P2: It's for both. It's more for the Facebook groups just because I don't have as much time to play the games anymore. But actually, RuneScape is the game that I play, and I was in a Venezuelan group for a month before they realized that I was actually American. And so that was pretty cool, just kind of, you know, what I could do, you know, in-game translating, and then I'd throw it in there if I was stuck.

Paul: Okay. So, I'm guessing that or maybe I don't want to make any assumptions. Do the games that you play also have the voice interface so that you can speak not just chat text chat?

P2: Not the games that I play. Mine are pretty... so this one, they call it Old School RuneScape, and it's basically just the way that it was in 2007, but they, you know, add a little bit of stuff to it from there.

Paul: I believe it's a D&D offshoot?

P2: Yeah, it's pretty similar. It's like that in World of Warcraft.

Paul: I still remember the OG RuneScape box at the hobby store in the 80s or 90s next to the D&D books and boxes. Well, cool.

00:09:12

Paul: What do you think has been your biggest win or success or efficiency gain with AI at work? So this is specifically work; we'll cover personal after that.

P2: So, for work... I work as an IT business analyst full-time and then I moonlight as an adjunct professor. So my biggest professional win would be from the adjunct professor role. I do statistics classes and IT classes. For my statistics classes, I use it to make more realistic scenarios for students. So instead of saying, you know, go out and make a box-and-whisker chart, and now you can see some fake data, me making maybe 10 or 20 fake, you know, lines that are only as good as my imagination, I can go out and say, "Hey, create a hundred fake IT tickets," and it'll do that, and then I have my students analyze that data and, you know, make recommendations based on it. So I think it's really helped improve what I've been able to give them lab-wise.

00:10:27

Paul: Thanks. And how about biggest win or success or gain personal-wise?

P2: Personally, I've used it a lot for medical. I think that would be my best, like, win. So my sons were in the hospital for a couple months, and during that time, I would go into MyChart and I'd see, like, the doctor's notes about what they were saying was going on. And then I would copy that and paste it into ChatGPT and say, "What does this mean in English?" basically, and it would kind of dumb it down for me so I could understand better.

I would copy that and paste it into ChatGPT and say, "What does this mean in English?" basically and it would kind of dumb it down for me so I could understand better.

Paul: What's been the biggest disappointment or failure you've experienced with AI?

00:11:42

P2: The biggest disappointment would be like when it's, I'll say, confidently wrong. Like it thinks that it's right and then it starts telling you to do things or that these things are facts, and then you're like, hey, that's not really right. That, I would say, was my biggest frustration, just because you have to know so much or else you can find yourself in a bad position from it.

The biggest disappointment would be like when it's I'll say confidently wrong. Like it thinks that it's right and then it starts telling you to do things or that these things are facts.

Paul: Yeah. And I'm going to jump down to some of my other questions because this has been coming up right at the point where you brought it up. How do you decide when and whether to trust AI? How do you diagnose when you think it's off the map?

00:13:00

P2: That is a really good question. And I would say that it has to do with how important what you're asking it is. So I think it's a great first step. But then you know depending on what it is, I would always try to also corroborate like okay this is indeed true or not.

I would say that it has to do with how important what you're asking it is. So I think it's a great first step.

P2: So like if I was asking it about my son's inpatient stay, and it would tell me things... like if I say, you know, what do you think he has or whatever, it might say, well, these are the five most likely things, and then it would kind of explain it. And then I'd say, okay, well let me take what you've given me and I'll go out and look it up and see if I can find anything else about that, just to kind of make extra sure. But then for things like... I was using it to do an inventory of, like, old collectible cards that I had, you know, that isn't as, we'll say, important. So I kind of just... if it tells me one thing and I know better, then I'll correct it. Otherwise I just take it for what it says.

Paul: Okay. Is there any specific times you can recall where you trusted AI and then in retrospect you shouldn't have?

00:14:22

P2: There is definitely a time. I'm trying to think... I think I was doing an insurance claim for my pet and I was trying to get it to, you know, help me with it. And I included something in the, like, description of what was going wrong that it told me to do, and then they gave me some push back on it. So then I took it and, you know, pasted their push back into it and I said, what's going on here? What should I do now? And it said, well, you shouldn't have said this. And I said, well, you told me to do that, so what was wrong with that? And then it, you know, apologized and said it shouldn't have told me that. But that was the one time that it got me.

Paul: Let's go back to some of your organization's successes and challenges using AI. Every organization is either running multiple AI pilots or deployments, or thinking about it because it's deep in the hype cycle. You could elaborate on what you wrote in the survey, but is there anything you want to talk about regarding your organization's successes and challenges? What were they trying to solve for, and how did it go?

00:15:32

P2: So I used to be on the team that was the business intelligence team. And they were doing AI before AI was cool, is what I say. So like the first thing that they did... and I was not working directly on that.

00:16:50

P2: I was kind of transitioned to another team when they started. But I think that would still be their biggest success with it: they do predictive analytics for forecasting staffing needs for the hospital. However it's doing that, it displays on dashboards for leadership, and then they use that to try to help decide what staffing levels should be. And then the biggest impediment that they've run into is just, you know, all of the legal things that are involved with hospitals and trying to make sure that the information doesn't get out in the wild. So to try to help with that, they've made, like, their own; they call it Gen GPT because Genesis is the name of the group, like the hospital. But that way everything stays in-house and it doesn't get out. So that's slowed the progress that they've wanted to make with that product. They haven't been able to get as far as they've wanted to yet, but they've also been able to do it a lot more safely and make sure everything is confined.

Paul: Yeah, there's a lot of regulation in health and medical. I want to dig into a little bit about how AI has changed how you do certain things. So, is there any activity or task you look back on and feel, oh, I'm doing this really differently now, purely because of AI?

P2: I would have to say, definitely personally. It helps me do things differently more so than professionally. And I think it helps bridge the gap between, you know, organizations and people. So like I said, I was using it for insurance. That's been a big thing, cuz like, you know, before, I would have tried to figure out, this is what I need to go read about, and it would have just been a mess, and then I'd try to communicate and they'd come back with, oh, you said this, so then it disqualifies you. So it's been kind of a good tag team partner to just say, "Hey, this is what I want to do. Where should I start?" And then as I get through the process, I bounce ideas off of it, and it kind of helps me through those. And, you know, it helps me not feel like I get railroaded by insurance claims and stuff as easily.

It helps bridge the gap between organizations and people... it helps me not feel like I get railroaded by insurance claims and stuff as easily.

Paul: You already covered the drawbacks to using AI for this particular use case, where it gave you the wordings that were actually disqualifying. What would it be like if you couldn't use AI for this anymore?

00:19:35

P2: If I couldn't use AI for this anymore... I'm sure that... so, it had a little goof where I got some push back, but I was able to recover from that with its help. It kind of course corrected me. If I wasn't able to use it at all, I would have a lot of thousands of dollars worth of vet bills that I wasn't able to get covered, just purely because of not saying the right things in the submissions; accidentally, you know, trying to explain it to them in layman's terms the way I understand it, and by saying that, it muddies the waters. So I would say, without it, if I were trying to do it going forward, I wouldn't have a lot of the claim money coming back to me that should be.

Paul: Is there anything in your life, personal or work, that you've completely stopped doing because AI does it for you now?

P2: No, I haven't completely offloaded any tasks for it. I pretty much just use it to augment what I do.

I haven't completely offloaded any tasks for it. I pretty much just use it to augment what I do.

Paul: Anything that you've started doing that maybe you wouldn't or couldn't have done before you started using AI? Now, you already mentioned the insurance claim stuff, but is there anything that, now that you've got this tool, you decided, I'm going to take this on; never would have done it before?

00:20:52

P2: I don't necessarily know that I've started, you know, like a new activity or routine just because of it. But because of it, I've been able to go back and revisit things that I've previously done, to improve them. So like my grandma, when she passed, I got all of her recipe cards. And so I scanned them all into the computer, put them on the cloud, and then I, you know, named each one what made sense just so I could go in and reference them. And for the most part, you can read everything and interpret it. But then you get some where I had to say, this is a mystery and this is a mystery. So because of it, I started going back through all of the cards and, if I couldn't read one, having it help interpret what it might be that she was writing there.

00:22:05

Paul: Yeah. Huh. That's a good use. What norms are developing in work and in personal life about disclosing the use of AI? Can you think of any unwritten rules or norms forming around AI at work, or in personal life? And if so, how have they been evolving?

P2: I'll start with work just because I'm a little more familiar with that. So one thing that my hospital does is they use a system called DAX, which, to my understanding, basically listens to the visit and then helps the provider make notes of what happened so it's not as reliant on their memory. And that may be completely off, but that's just kind of the way I understand it. And so for each visit, they have to sign a disclosure that says, "We are allowed to use the DAX program for this visit," or "We're not." So it's kind of added that extra layer of, "Hey, just so you know, there's a robot listening," and making sure it's okay.

00:23:30

Paul: Have you ever run into a situation where you're listening to a presentation or someone's going through some work product and it feels like it was low-effort, low-quality AI slop, for lack of a better word? What happened in that situation?

P2: So I know that I've encountered it before, but I can't point to a specific, this happened and, you know, what it was about. But I will say that when I have encountered that before, it kind of takes the credibility away from the person. And so I don't mind you using AI, and this is what I tell my students as well: I don't mind you using AI to do your work, because, you know, it's a learning tool. But use it as a learning tool and not a do-it-for-me tool. So, if I can tell that you've clearly just thrown it in there and said, "Do this for me," then you kind of lose that personal credibility in my eyes.

Use it as a learning tool and not a do it for me tool. If I can tell that you've clearly just thrown it in there and said, "Do this for me," then you kind of lose that personal credibility in my eyes.

Paul: Do you think AI is changing how people approach solving problems?

00:24:36

P2: I think people that were always going to half-effort do things are still always going to be those that half-effort do the things. And then people that really, really care, I think, are still going and saying, hey, please, you know, help me with this, but then making sure that they're still putting forth a valid effort. I think it's just been an added tool for those people, and then for the people that half-effort do things, it's just kind of been another way for them to skirt by, if that makes sense.

Paul: Yeah, it does.

Paul: I wrote down the word, or I tried to write down the word, conscientiousness, and I realized I had no idea how to spell it, which is weird because I'm pretty good at spelling most words. But I'm glad you mentioned that, because I think there is something there to unpack and uncover as I go forward with this work. There is an internal... and now it's just you and me talking; I'm a little off the script, and that's okay, I give myself permission. Of course I'll need to go back and look at the latest research, but I did my graduate work in social and personality psychology, which was the framework where we approached human factors in aviation and medicine, and conscientiousness is, or was, one of the so-called Big Five personality traits. So when you mentioned it, I spelled it wrong and underlined it twice, and I'm going to go back and have a think about that, shall we say. Okay. Do you feel like you're getting better at certain things, or worse at others on the flip side, because of AI? And what would those be, and why do you think?

00:27:15

P2: So we can use conscientiousness as an example. I don't know if you consider it AI or not, but I kind of do: spellcheck. Because we have spellcheck now, you know, I've noticed that my ability to spell certain things has kind of went down, because I'm just so used to, oh, okay, it sees what I'm trying to do and it fixes it for me. So I think that has been an area where I've gotten worse at something.

My ability to spell certain things has kind of went down because I'm just so used to, oh, okay, it sees what I'm trying to do and it fixes it for me.

Have I gotten better? I can't point to an exact thing that, you know, I've gotten better at. But, you know, like when I was saying that I use it for my kids' medical stuff, just trying to translate... I guess it made me a better caregiver, because I can understand their care better. But I don't know if that's necessarily what you're looking for there.

00:28:34

Paul: It was really open-ended, so it counts. I know we've got less than 5 minutes left, so I want to make sure I've covered nearly everything. I've got a section here that I just titled hopes and fears. So, first, what do you think is the most significant breakthrough or positive outcome that AI might enable within the next decade?

P2: I'm really looking forward to having the little Star Wars droid kind of roll beside you, and, you know, you could just say, "Hey, you know, what's the answer to this question?" And I know that Amazon had kind of done it with one of their... it was like a security droid, and I think they discontinued it because it wasn't getting adopted well enough or something. But I'm really looking forward to kind of having that, to where you don't even have to pull up your phone and go into an app. It's just kind of, hey, help me out here.

Paul: What's your single biggest concern or fear related to the increasing use of AI?

P2: That's definitely got to be job security. So, programming websites and stuff is my specialty. And, you know, just yesterday I was looking at it: it spun up a website in, you know, five minutes. And I get that, you know, it makes errors and we still need people to check those errors and everything, but the amount of people that you need to check that is going to be less. You know, they maybe won't need as many people to do one job. So I was kind of looking at maybe going back to school to learn an extra skill that maybe is more human-centric, because that human touch, I think, is going to be more of what keeps people employed. And so it's a little worrying there.

I was kind of looking at maybe going back to school to learn an extra skill that maybe is more human centric because that human touch I think is going to be more of what keeps people employed.

Paul: Yeah. Makes sense. Last question: is there something you think I should have asked you in this type of session, but I didn't? Do you have any good suggestions for questions?

P2: Off the top of my head, I can't really think of anything. I mean, I think you did a good job getting all of the things that might have been helpful out of me.

Paul: Yeah, I appreciate that. I built this first draft of the session script aiming for a half hour, and we're pretty much there. Of course, everyone's mileage varies. Okay. I'm going to stop the recording and then we'll just do a quick wrap-up.

AI Use Disclosure

I used AI to analyze the data collected via interviews and surveys. How?

  • I took notes after each session.
  • I fed those notes to several AIs, along with the moderator guide, project proposal, session transcript, the participant's survey responses, and a codebook of tags and themes I've been iterating as I collect data.
  • I prompted each to write a background, findings, and emerging themes section.
  • Then I iterated on each AI's draft, challenging the AI where appropriate and removing what I'm euphemistically calling "hallucinatory content" :-).
  • I collected each AI's drafts, added them to the project I've set up in Claude Cowork, and prompted it to draft the background, findings, and emerging themes section, pushing back as appropriate.
  • Then I edited the content, because "human in the loop" means "I have final edit." At least to me it does.
  • I then published each session writeup.

There's a bit more to it, but I'm trying to keep this short. Reach out if you want to talk about my AI-assisted workflow, which I'm still evolving as I go.