Paul Sherman
April 14, 2026

The Reluctant Early Adopter

P1 - Principal UX Designer, Insuretech

A veteran UX practitioner with 25+ years of experience and a freshly completed PhD initially resisted AI due to the hype cycle, then became one of its most prolific adopters.

I will never 100% trust AI ever because I don't think it will earn that. It's an oxymoron.

P1: Survey Data and Session Summary

Survey Responses

Age: 65+
Education: Doctorate or professional degree (e.g., PhD, JD, MD)
Role / Level: Individual contributor
Job title: Principal UX Designer
Years of experience: More than 25 years
Organization description: (not provided)
Industry: Software
Individual AI tools used: Text generation (e.g., creating documents, emails, summaries), search and information retrieval, data analysis and synthesis, workflow and process automation, agentic opportunity analysis
Organizational AI tools: Customer-facing chatbots or virtual assistants, internal search and knowledge summarization, predictive analytics for business forecasting
AI adoption involvement: Contributed to technical design, requirements gathering, or implementation; led the project, strategy, or initiative (project manager, initiative owner)
Biggest work win with AI: My biggest win has involved using AI to analyze data to discover actionable insights from customer feedback.
Biggest work disappointment with AI: Hallucinations and sycophantic responses.
Organization's biggest AI success: A feature called ProjectMAX, which uses AI to query and analyze existing data to augment and optimize task completion.
Organization's biggest AI challenge: Not using AI enough (LOL)

Background

P1 is a principal UX designer working in insuretech with more than 25 years of professional experience and a recently completed doctorate. They self-identify as a habitual early adopter who is typically "the first person with everything," yet they initially resisted AI specifically because the hype surrounding it felt unsubstantiated. Their adoption path began with Grammarly for personal writing, then expanded rapidly once they encountered tools that delivered concrete value in their research work.

At the time of the interview, P1 was using AI across an unusually wide range of contexts: professional UX research (Marvin, Dovetail, Max QDA), workflow documentation (Claude), job search and educational planning (Anthropic), podcast production (Descript), photography (sky replacements and enhancements), and personal health tracking through multiple biohacking devices feeding into AI-powered health aggregators (Aura, Whoop, Ultra Human, Bevel, Athletic, Liberty). By their own admission, they were "doing it more than I thought I was doing it."

Their organization uses AI for customer-facing chatbots, internal search, and predictive analytics, and P1 has been involved in both technical design and leading AI initiatives. Their survey response identified their organization's biggest AI challenge succinctly: "Not using AI enough (LOL)."

Key Findings

From Hype Resistance to Pervasive Adoption

P1's adoption arc is notable for the gap between their initial resistance and their current depth of use. They described AI as "a crutch" and were put off by "people making statements that are unsubstantiated." The turning point came during their PhD work, when research repository tools introduced AI-powered summarization and theme development. Once the value was concrete and demonstrated rather than marketed, adoption accelerated across nearly every domain of their life.

This pattern suggests that experienced practitioners may need to encounter AI solving their specific problems before they adopt, rather than responding to generalized enthusiasm. P1's resistance was not technophobia but calibrated skepticism about unproven claims.

"I don't like hype and I saw all the hype... I'm an early adopter. I'm the first person with everything. ... When it happened with AI, there was too much hype and people making statements that are unsubstantiated."

AI as Professional Validator

One of the most distinctive findings from this session was P1's use of AI to validate their professional judgment against workplace criticism. When colleagues criticized a survey P1 had prepared, they fed it question by question into Claude for an independent critique. Claude recommended only a single change, to the fifth question of the five-question survey, which P1 then used to demonstrate that the criticism was unfounded.

This represents a use of AI that rarely appears in productivity narratives: not as a tool to do work faster, but as a credible third-party evaluator to settle professional disputes. P1 explicitly valued that Claude "doesn't have any horses in the race" and would not give a sycophantic response, using the tool's perceived objectivity as a counterweight to workplace politics.

"I took the survey question by question went into Claude and asked Claude to critique and Claude doesn't know and Claude doesn't have any horses in the race."

Two-Layered Trust Calibration

P1 described a structured approach to evaluating AI trustworthiness. The first layer is ongoing supervisory evaluation during normal use, where they review AI output as the "expert" responsible for the final decision. The second layer is deliberate adversarial testing: asking AI questions to which they already know the answers, purely to gauge reliability. They described asking multiple AI models "Tell me about P1" and finding that "nobody was 100% right."

Their conclusion frames trust not as a binary state but as a spectrum that requires continuous recalibration. The framing draws on their Harvard AI coursework about "agentic risk" and positions the human as a coach or supervisor of an imperfect but useful system.

"I will never 100% trust AI ever because I don't think it will earn that. It's hard. I say it's an oxymoron."

The Expectation Escalation Paradox

P1's biggest professional win involved using Claude and Marvin to document and annotate 25 to 30 task flows across their organization's products, identifying optimal points for AI integration in each workflow. By their own description, this was an unreasonable scope: "I thought they asked me to do too much and I kept saying as much." Yet AI made it possible to deliver.

The paradox is that the same AI tools that made the task possible also changed the expectation landscape. P1 noted that stakeholders now have "dysfunctional expectations" and described the asks as "completely ridiculous" and "insane." AI solves the immediate problem while reinforcing the conditions that created it.

"I think the thing I'm getting better at is responding to terrible dysfunctional expectations of stakeholders because I can do things faster."

Deliberate Self-Maintenance Against Skill Erosion

Asked whether AI was causing skill atrophy, P1 described an active practice of "sharpening the saw" through rereading books and deliberately re-exposing themselves to known material. They referenced a concept of "emptying yourself of yourself" to approach familiar content with fresh eyes. This practice predates AI and was not developed in response to it, but P1 explicitly connects it to the skill-erosion risk: they see the "dumbing down of self" as a known threat that can be managed through intentional practice.

"I will never let that dumbing down of self that everybody says the risk of AI is. I'll never let that happen because of the way that I maintain myself."

Emerging Themes

Hype Resistance: Active skepticism toward AI marketing claims; adoption delayed by distrust of hype, not of technology. Key quote: "I don't like hype and I saw all the hype... I'm an early adopter."
Pervasive Integration: AI adoption spanning many life domains, sometimes without the participant fully recognizing the extent. Key quote: "I think I was doing it more than I thought I was doing it."
Trust Calibration: Deliberate, ongoing practices for evaluating AI trustworthiness on a spectrum. Key quote: "I will never 100% trust AI ever because I don't think it will earn that."
Self-Maintenance: Deliberate practices to prevent skill atrophy from AI dependency. Key quote: "I will never let that dumbing down of self... I'll never let that happen."
AI as Validator: Using AI to independently evaluate or defend existing work against criticism. Key quote: "Claude doesn't know and Claude doesn't have any horses in the race."
Expectation Escalation: AI enabling faster delivery while simultaneously raising stakeholder expectations. Key quote: "I'm getting better at responding to terrible dysfunctional expectations... because I can do things faster."
Radical Transparency: A personal policy of always disclosing AI use, framed as an ethical obligation. Key quote: "It's a personal policy with me because I just believe in being transparent. ... I think it's plagiaristic if you don't."
Knowledge Displacement: Concern that decision-makers deploy AI to reduce labor costs without understanding the loss of institutional knowledge. Key quote: "Let go because of AI, only to be rehired because they found out AI couldn't do all the things."
Organizational AI Adoption Challenges: Organizations making premature personnel decisions based on overestimation of AI capabilities. Key quote: "Let go because of AI, only to be rehired because they found out AI couldn't do all the things."
Hallucination Frustration: Disappointment at AI confidently fabricating content, especially when source data should prevent it. Key quote: "I hate the hallucinations because it seems like there's no excuse for a lot of them."

Interview Transcript

00:03:14

Paul: What was the first AI tool that you remember trying?

P1: Oh, wow. Man, the first I'm not going to be accurate. More than likely,

Paul: That's okay. If you don't know the exact one, that's fine.

P1: the first one that comes is Grammarly.

Paul: What were you hoping it would do for you?

P1: I would hope it would sort of do some little janitorial work as I was writing to help me catch certain things that I was missing to suggest things that might give me a better way of expressing myself and writing properly and did a great job to the point I had the free version and I fell in love with it and I went ahead and got the paid version of it.

Paul: Okay. Was this for work or personal use or both?

P1: I'll use that at work now too.

Paul: What other AI tools have you tried? Which ones have stuck with you, and which ones have you just kind of left by the side of the road?

00:04:20

P1: I'm going to go as they come to mind. I just fell in love with anthropic. I fell in love with it. it was something I started pushing into. I actually negle resisted AI a little bit at

Paul: Why was that?

P1: first because I think that's the only fair way to honest way to put it cuz I saw it as a crutch. I don't like hype and I saw all the hype. I'm all for I'm an early adopter. I'm the first person with everything. I used to have my own usability lab. I'm I'm on top of everything. As soon as it comes out, I've got it. when a when it happened with AI, there was too much hype and people making statements that are unsubstantiated. I just can't I just can't get with that. So then I started trying to find ways to incorporate it into my work and into my personal life.

I don't like hype and I saw all the hype... I'm an early adopter. I'm the first person with everything. ... When it happened with AI, there was too much hype and people making statements that are unsubstantiated.

00:05:11

P1: And it wasn't until we probably actually as I started to wrap up my PhD I started to ramp up start to see more opportunities and I was using in Marvin hey Marvin Dovetail I was evaluating research repository tools at work everybody's using AI now so loop panel didn't get theirs going but I started that was sort of like my main inroads because Grammarly was a different animal because I wasn't using it for heavy lifting so to speak. So, I started using it to go through the summaries, the transcripts for my research. I started using it to develop themes. I started Max QDA, the AI stuff in Max QDA. I started doing that. So, a lot of tools anthropic huge. Absolutely. Anthropic actually put together. It asked me if I wanted an agent to help me with job search. And what it rolled out was mindblowing to me to find could go into my LinkedIn profile, look at my background, look at like five different job boards, go through all of them.

00:06:28

P1: It would have taken me three, four, five weeks to do what to do what Anthropic did. And then it came back with a list of jobs, ranked them by the ones that were closest fit, gave me the links to apply for the job. This is a mindblowing to me. And then the other thing I'm trying to ramp up my knowledge. I know I've got gaps. So I also use Anthropic and it put together an AI learning path for me. It put it got I got dashboard. I got everything. I can map my progress and all that. This is just they really they nailed it when they labeled it as a as an AI assistant. It's assisting me and I knew that AI was not going to replace us if it's if used properly, but as an assistant, I couldn't ask for a better assistant. So, those are a few things. There's more. I'm just missing them. I use AI in my photography. Use AI for sky replacements.

00:07:22

P1: I use AI for enhancements of the photography. man, there's so many. I use AI tools because I lost a lot of my weight through biohacking and I've got all these, you know. Oh, here's one. So, I've got one. Here's my Put my Aura ring back on and my ultra human and my Apple Watch and my Whoop band. And then I'm I've got my I'm not diabetic, but I track my blood sugar so I learn how to how to eat the right way and things that nature. And all this information goes into my phone. Then I have four different health aggregators, all of which tap into AI to give me recommendations on how to when to slow down, when to work out, what to do best. Hey, you know what? You should start shutting down right now. So I get all these AI recaps and recommendations. So I've actually surv as much as I said that I wasn't adopting AI, I think I was doing it more than I thought I was doing it.

As much as I said that I wasn't adopting AI, I think I was doing it more than I thought I was doing it.

00:08:16

P1: It was just wasn't doing it in my UX work. I think that's where it was lacking. So these are just a few. So, I'm using, man, I have to look at my phone to get the to get the names of these things. There's one called Peak Watch. there's Aura is using AI now. Whoop is using AI now. I'm using Bevel as a health aggregator. I'm using Athletic as a health aggregator. I'm using Liberty. So, I'm all over the place with surrounded by AI right now.

Paul: What about the biggest win or success or efficiency gain that you would attribute to AI in your work life?

P1: The biggest one is we're being tasked to identify our work processes and we're also being asked to find opportunities to identify the workflows in our products that we roll out. And I'm working in insuretech now. So, they came to me.

00:09:23

P1: I'm a principal, so they came to me. I thought it was they asked me to do too much and I kept saying as much and it finally came out, especially one who's never had an opportunity to deep dive our products and services. So, I'm going to say, "So, how do I do that?" Well, AI for the win. AI for the win. I actually use I'm using Claude now for that at work, but I was using Marvin to go into our all our research data to identify pain points across the entire landscape of our products. I put together a whole set of insights that I generated that all of it was AI done and then I went in and start to evaluate different things using AI for research summaries. I even did a thing. I just actually wrapped up delivered it to my boss because my our boss my boss's boss just came back from leave today to go through all of these optimal places to identify our work and they want to go end to end.

00:10:23

P1: So, it's not really a true journey map, but it's it's reflective of one. and to go in and look at all these different tasks and I asked Claude to identify the optimal places to identify the workflow. So, I delivered that. So, we've got all of these flows, all of these task flows and they're all annotated where here's a here's an opportunity for AI, here's an opportunity for AI, and then here's exactly what you can do with AI at each one of these steps. I just delivered that for 25 to 30 tasks that I did. I documented and delivered to my boss for his review because he's in Arizona. So he can review then but then when our then when his boss comes back they can talk about it and then we'll get together and go through the whole thing. But that's the biggest win

I thought it was they asked me to do too much and I kept saying as much. ... I just delivered that for 25 to 30 tasks that I did.

Paul: What about the biggest disappointment or failure or unexpected and negative outcome you've experienced while trying to use AI tools for work?

P1: Yeah, that's the easy one. and it's probably gonna be a lot of people's biggest one. Hallucinations. I hate hallucinations. when it just says I had AI. I asked AI to do a summary. I actually have a product manager now who goes to meetings. If I can't attend, even if I do attend, he sends me the recording. I upload the recording into Marvin. Marvin generates the summary. If if I was there, I watched the video. I go back through it. And it's amazing how often I will see AI insert things that did not happen or say things that did not happen. And so I'm glad to know it's not perfect and they tell you this all the time, but I hate the hallucinations because it seems like there's no excuse for a lot of them, but it happens anyway.

I hate the hallucinations because it seems like there's no excuse for a lot of them, but it happens anyway.

Paul: Let's talk about how AI has changed how you do certain things.

00:12:20

Paul: So this could be in your work or your personal life, whatever is most top of mind for you. You've already mentioned your biohacking and a couple of other things, but what task or activity do you do very differently now because of AI? Walk me through what you used to do versus what you do now.

P1: Okay, I'll use the example I'll use is when I trying to decide what school to go to. When I was getting my mA my masters when I went to Syracuse I went to US News and World Report I go through there they've got their ratings of all the different schools and you spend hours and I looked at no fewer I evaluated no fewer than a hundred schools when deciding where to go to get to get my first master's degree. and it's funny cuz you wear that effort as a badge of honor because you know all the rigor and all the effort and you feel good about that decision and you go when you get the degree you remember all the rigor and I did all of this stuff and it took me so long to do now after I got my PhD and you know what I want to learn more about AI.

00:13:35

P1: I believe in education. I know it fills gaps. I know how important it is. Instead of going through back to news and world report and all of this time and all of this effort which I do not have time to do, I just asked AI what are the best online programs for AI strategy and innovation things of that nature. It recommended five institutions. I did it with different AI models. I used different ones. They you agreed for the most part on some of them but then I also found some of my own. So said, "Compare, please compare Purdue, Georgetown, Grand Canyon," which was only there because I had a past relationship with Grand Canyon. and I had written off Wake Forest. I didn't want to go to Wake Forest because that's too much money. But then when I talked to Georgetown and I was just floored at what I saw at Georgetown and my original interview with on my initial interview with them, well, if I'm willing to pay for Georgetown, I should be willing to pay for Wake Forest.

00:14:37

P1: So, let me bring weight for back into it. I asked AI to do all the comparisons. It put all the information side by side and I ended up deciding to go I was just admitted to Georgetown. I got my admittance letter yesterday to get a masters in AI strategy and in innovation and I did all of that the same thing I did when I got when I went to Syracuse and all that effort. I did the same thing in wow like almost a hundredth of the time. I got all that time back to apply to other things.

Paul: Do you feel like there were any drawbacks to using AI for this school search?

P1: Nope. It was perfect because of the finite science-oriented I always say the finite science-oriented thing. These are not moving targets. It's data that already exists, standards that already exist, faculty already exist. All of these things are locked and in place. And I think that's where AI excels.

00:15:35

Paul: Is there anything that you've completely stopped doing because AI does it for you now? This could be work or personal. Is there anything that you think back on and say, "Yeah, I used to do that pretty regularly. Now I don't do it anymore."

P1: There's one. I use the AI feature in when I'm doing my podcast. I'm I just I just produced episode number 309 of my of my podcast and I used to have a workflow where I would record, go into Adobe Audition, go through everything bit by bit, edit, edit, edit, edit, edit, edit, edit, go through all of these things, and then go through, put all these EQ filters on it to make it podcast ready from a sound perspective. I now do that with one click in with two clicks in Descript. I can with one click remove all the filler words. It doesn't do a good job of it, but I hope they fix that.

00:16:32

P1: But but it does do part of it. And with a second click, I make the sound studio ready. Two clicks.

Paul: That's impressive. Well, this is something that people really don't talk about a lot, but is there any new tasks or activities you've started doing that you wouldn't have done before you started using AI? And the followup is why are you doing this new thing?

P1: I can't think of any new things offhand. I think it's replaced me with some of the things that I did so allowed to free me up to do other things. I can't think of any It probably is. I'm just not thinking of it. I know there's something I'm about to do.

Paul: What's that?

P1: It's that Gemini, I found out that Gemini can access your email, your calendar, and manage it for you to help to make sure that things are set. and so I haven't I saw it on the commercial.

00:17:39

P1: I saw the prompt to do it. I just thought it was interesting that after tapping into all this AI, somebody tried to hack into my Gmail last night. I don't know if that's just happenstance, but somebody in Las Vegas or at least using the VPN to say they were in Las Vegas were trying to and I've I've never really had that happen before that I can remember. I don't know if I opened up the door by doing some of these things and letting some of these apps have access. but that is something I would like to do because I'm I'm big on time management and I'm curious to see how AI can help me be even better.

Paul: Has AI affected how you interact with people professionally?

P1: There's one thing it did for the most part. No. For the most part, no.

P1: I didn't used to give myself credits for work because I didn't realize I was doing it in 95. so, but I it's sad and I know you've experienced this, I'm sure, too. You think people when you work with people and they see how long you've been doing it, they see your work track record and you would think they're happy to work with you. And you're always happy to You've always had a huge heart. You've always been dynamite to me. I love you. and you know, I come across people and I'm always putting myself out there and I'm always helping people. I'm like, you're here to help vault the team forward. Team team, not me, me. Team team. and I the volatility I'm exposed to is insane.

Paul: In what way?

P1: There were workplaces that were not hostile that become hostile when I arrive because people become hostile toward me and then people would tell me well that person's never been like that before. Well, I was never here. They they're threatened by me. They sit around and they look for opportunities to take pot shots. It it's insane. the lynching that I took on LinkedIn by one guy who just dragged me through the mud and put up a picture of Sherman Clump and then went on this tirade attacking me about stuff that he knew nothing about but he just decided to do it. I experienced that at work. So what I use just to insert that it was all necessary. I had to use AI to prove to people that what I said was on point.

I had to use AI to prove to people that what I said was on point.

Paul: How did you do that?

P1: I just started I had a people come to me for surveys. Some of my internal clients come to me to they know that they got a rough draft. They present it to me to massage it to get it ready from a UX perspective and then I go into survey I'm one of the people work as a Survey Monkey license.

00:20:34

P1: I put it in Survey Monkey. I send it back to them to see if they how what they have to say about the changes and then we send it out. So, I sent I did one the other day and then I didn't realize this. The survey was then sent to marketing. Marketing takes a survey and sends it to our internal research guild, which is a hodgepodge setup of people that are aspirational about UX, but one person is a market researcher who lied to get her job as a UX researcher. She doesn't really have the experience and she's threatened and scared to death of me. All I have to do is come into a meeting and they cl you can see it on their face. It's amazing. And it's not because of the way I carry myself because I'm always laughing and joking and helping. And I'm not I'm not a threat to anybody. I'm not I wasn't hostile when I was a kid. I'm not hostile now. and so she gets it from marketing.

00:21:25

P1: she doesn't know what to do. She just wants to be able to find something to criticize me about. So she sends it to like my biggest hater in the office. the hater just manufactures things to say that and then they said this was a terrible survey and P1 helped with it and I'm going okay so what I did was because they were the ones that said we can use AI to fine-tune surveys I'm like and then the light bulb went off so I took the survey question by question went into Claude and asked Claude to critique and Claude doesn't know and Claude doesn't have any horses in the race it's not going to give a it's not going to be sycophantic. I found I love the fact that because that's the other thing I hate about AI can be very sycophantic. It's not sycophantic in this case and it gave a review of each question and it didn't find problems. It only found one recommended solid recommended change with the five question with the fifth question in a five question survey and I use that data to massage that to let them know you know what Claude had no problem with it.

I took the survey question by question went into Claude and asked Claude to critique and Claude doesn't know and Claude doesn't have any horses in the race.

00:22:29

P1: You said it was terrible and Claude didn't have a problem with it.

Paul: That's interesting. And that leads to the next question I have, which is in what situations do you typically disclose your use of AI?

P1: It's a personal policy. It's a personal policy with me because I just believe in being transparent.

It's a personal policy with me because I just believe in being transparent. ... I think it's plagiaristic if you don't.

P1: I believe in pulling people up. I believe in this is how I accomplished it. You can go do it too. So and I think it's plagiaristic if you don't.

Paul: Are there unwritten rules or norms forming about AI use in your work life? And if so, can you can you describe them or is it still formative?

P1: it's still formative. I think it's formative. other than if you did something with AI, illustrate the process. one of the things I'm held responsible for at work is to try to be an example. I am literally on paper tasked as part of my annual evaluation. Be an example. Be so this is how I did it so that people can turn around. So if there's anything that is formalized, it's that. So when the thing that I did for work with Claude, I'm going to end up going back and showing the team exactly how I did that so that everybody can go and do it. I'm actually going through in between now and I don't start Wake Forest till August. I just started and today is the first official day. I'm going through an AI intensive with Harvard data science and I was so I reviewed that one.

00:24:27

P1: I looked at Stanford. I looked at a few, like MIT, and I settled on this one also because I also teach at Brandeis now. We have a union but now I wish we had a union at Kent. I probably still be teaching. You probably still be teaching but the they have a they want somebody to teach an AI course by the way at Brandeis. I didn't think I don't know if you're interested in that but I didn't think about you. The I didn't realize you were doing all this AI stuff but you would be perfect for that. But but we have a union so they can't do anything to us and get away with it. and so and I forgot the question. Can you give me the question? I forgot. I

Paul: How do you decide whether to trust AI's output and what happens when you've gone down the path of trusting it and then find that maybe it was off?

P1: there's two ways I do it. During the standard course of action, I evaluate the resources. And I love what and that's what the Harvard thing back to them again. They say that it doesn't displace us. We serve as coaches. We serve as supervisors. We we evaluate the agentic risk. That's the expert's job. So when the results come back, I go and I look at it. I'm the person that has to sign off on it. I'm the person that has to make the decision. AI is not making the decision. AI supporting the decision, is giving me the data to make the decision.

We serve as coaches. We serve as supervisors. We evaluate the agentic risk. That's the expert's job.

00:26:09

P1: So, so I do that. Then there's the second wave of that. And the second wave is deliberate evaluation. So I do it within the organic course of action, but then I do things that I know the answers to so that I can where I'm the expert on a personal level that it might not be a personal thing.

P1: It might it might be a professional thing, but I'm deliberately going to do things and this is strictly for evaluation purposes. I'm not doing anything with the data you give me. I'm just checking to make sure how much I can trust you. At least we went into when I when I started my official d deep dive in AI, I went to different agents and I asked them or models and I asked them about me. And it get I mean and people told me they had been doing that so I thought I'd do it. Yeah.

00:27:02

P1: Tell me about P1. And it so he told me about myself not knowing that P1 was asking the question and so I got the responses I got were hilarious and nobody was 100% right. But I'm now I'm going to do it again and now that I've dived in and I'm paying for it. I'm going to I'm going to I'm going to ask it certain questions about me and see what I get back. But but those types of deliberate things are the between the two lets me know how to trust, when to trust. I will never 100% trust AI ever because I don't think it will earn that. it's hard.

I will never 100% trust AI ever because I don't think it will earn that. It's hard. I say it's an oxymoron.

00:27:52

Paul: As I mentioned, you're the first person to go through this set of questions. So, I'm going to leave some on the cutting room floor and get to the ones I really want to get to, which are: do you feel like AI is helping you get better at certain things? And on the flip side, are you getting worse at others? Talk about what things you feel like you're getting better at, what you may be getting worse at, and why you think that is.

P1: I think the thing I'm getting better at is responding to terrible, dysfunctional expectations of stakeholders, because I can do things faster. They'll ask me something completely ridiculous, like the ask of going through and getting all of these things. I dropped the help documentation in and asked it to create the flowchart and then do the things. It's insane that they asked me to do that, but I was able to, because they're always trying to find something, even some of our leadership is always trying to find something to criticize me

I think the thing I'm getting better at is responding to terrible dysfunctional expectations of stakeholders because I can do things faster.

00:28:51

P1: about, while they let the other people who are less qualified get credit for breathing. And I think that's insane. They're never held accountable for anything, and I'm held accountable for things that I didn't do, wouldn't do. So, it helps me to do that.

Paul: So, getting better at responding to requests and criticisms. Anything you feel like you may be getting worse at or losing skill at because of AI?

P1: No, because I know that's a risk. I'm always into self-maintenance. I believe in sharpening the saw. I believe in sitting down and rereading books, re-exposing myself multiple times to content that I already know. And there's a concept somebody taught me about years ago: empty yourself of yourself, and forget about what you know. Forget about who you are.

00:29:52

P1: Forget about what you do and just re-expose yourself to something. I've always done that. When I went to Kent State, I already had X number of years of experience, and I would go through thinking, "I already know this." I didn't get anything new until the sixth or seventh week of the course, which actually paved the way for my dissertation topic. So, I just keep doing that. I will never let that dumbing down of self happen, the one everybody says is the risk of AI. I'll never let that happen, because of the way that I maintain myself.

I will never let that dumbing down of self that everybody says is the risk of AI. I'll never let that happen because of the way that I maintain myself.

Paul: Got it. All right. I'm going to ask one more question and then we'll wrap up. Well, two more questions. What's your biggest concern or fear related to the increasing use of AI?

P1: My biggest concern slash fear has to do with UX maturity. I just talked about this on the podcast episode I just put out: UX maturity and the fear of AI.

00:30:45

P1: That was the topic I just covered on my podcast, and it's UX maturity, because you have companies... The best way to answer this: I talked to a person a week ago who was let go from their job because of AI, only to be rehired because the company found out they were wrong to let those people go, because they found out AI couldn't do all the things. So that lack of UX maturity means people are going to make these rash, reckless decisions about personnel and teams because of their inordinate expectations. And UX maturity is bad predominantly across the discipline. Bad decisions are going to happen not only in the C-suite, among the people who make those decisions, but also because the people running the vast majority of UX teams have little to no UX maturity. From a practitioner perspective, they qualify as entry-level people, yet they're running teams, so they're going to make terrible decisions. And that's my biggest

I talked to a person a week ago who was let go from their job because of AI, only to be rehired because the company found out they were wrong to let those people go, because they found out AI couldn't do all the things.

Paul: All right.

AI Use Disclosure

I used AI to analyze the data collected via interviews and surveys. How?

  • I took notes after each session.
  • I fed those notes to several AIs, along with the moderator guide, the project proposal, the session transcript, the participant's survey responses, and a codebook of tags and themes I've been iterating on as I collect data.
  • I prompted each AI to write a background, findings, and emerging themes section.
  • Then I iterated on each AI's draft, challenging the AI where appropriate and removing what I'm euphemistically calling "hallucinatory content" :-).
  • I collected each AI's drafts, added them to the project I've set up in Claude Cowork, and prompted it to draft the background, findings, and emerging themes section, pushing back as appropriate.
  • Then I edited the content, because "human in the loop" means "I have final edit." At least to me it does.
  • I then published each session writeup.

There's a bit more to it, but I'm trying to keep this short. Reach out if you want to talk about my AI-assisted workflow, which I'm still evolving as I go.