Theme Explorer
Emerging themes from the People & AI research, organized by category. Click any theme to see its full evidence. See also the theme-by-participant comparison matrix and the evidence density visualization.
Themes
Adoption Patterns
How and why people start, continue, or stop using AI tools
Active skepticism toward AI marketing claims; adoption delayed or shaped by distrust of the hype rather than distrust of the technology itself
“I don't like hype and I saw all the hype... I'm an early adopter. I'm the first person with everything. ... When it happened with AI, there was too much hype and people making statements that are unsubstantiated.”
P1 - Principal UX Designer, Insuretech
AI adoption that spans many life domains (work, personal, health, creative) to the point where the participant may not fully recognize the extent of their own integration
“As much as I said that I wasn't adopting AI, I think I was doing it more than I thought I was doing it.”
P1 - Principal UX Designer, Insuretech
“There's a blend across work and personal. Right now I'm getting my education doctorate. So I'm doing my whole dissertation on AI's role, like my evolution of my leadership skills in conjunction with gen AI. So I use it probably a lot between work and personal and school. For school I'll even have it read my writing and then give me like a review, and not do it for me but just tell me how to improve it. Or I see what it recommends for clarity and conciseness in the writing and then any kind of grammatical errors, I'll use it for that. I'll use it to brainstorm professional development ideas with me for teachers. As far as like, I use a lot of design thinking in the professional development sessions with teachers that I create, and I will have it reference design thinking protocols or design justice thinking protocols to make PD better than what I could do alone, because I have a very limited amount of time to create.”
P11 - CTE Program Manager, K-12 Education
“Oh my goodness. Well, it's a new role that I started in June and I started the new role into administration in June. So, I'm trying to think, what did I used to do with it? Oh, AI. Okay. So, one thing, does it have to be in my role or can it be in my personal life? Okay. So, I used to meal plan without the use of AI and that was just like looking up recipes and then putting them in an app and the app would tell me what to go buy at the grocery store. Now, I just say, "Perplexity, I am wanting a high protein diet that's low cost. Tell me what to buy at the grocery store. You have all my health data. What should I be eating?" It creates the whole meal plan with the recipes and gives me the shopping list in less than a minute and I like that.”
P11 - CTE Program Manager, K-12 Education
Human-AI Relationship
Trust, reliance, verification, and the evolving dynamic between people and AI systems
Deliberate, ongoing practices for evaluating how much to trust AI output: not a binary trust/don't-trust decision but a spectrum that requires continuous recalibration
“We serve as coaches. We serve as supervisors. We evaluate the agentic risk. That's the expert's job.”
P1 - Principal UX Designer, Insuretech
“I will never 100% trust AI ever because I don't think it will earn that. It's hard. I say it's an oxymoron.”
P1 - Principal UX Designer, Insuretech
“I honestly try to hedge against that by only asking it things where there is no quote unquote wrong answer, because like I said, because of that December calendar incident. I'm not 100% sure that I would trust it if I asked it what 2 plus 2 is half the time. So if I have a big hairy task that again would require processing tens of thousands of rows of data, I don't know that I would, I would probably ironically, and again this goes back to that concern that I have of am I now suddenly seduced into overrelying on it, but I would probably ask it to say, "Okay, process this 10,000-row file but then also tell me how should I double-check your work," which is circular reasoning in the worst sort of way. But yeah, I think ultimately my personal strategy is try not to use it in spaces where there is high risk and or where there is an absolute need for 100% accuracy and then just stick to it more where the spaces are of, like, I know I keep saying it, but the idea generation where there really is no wrong answer per se, it's just input for me.”
P10 - UX Manager, Insurance
Deliberate practices to prevent skill atrophy from AI dependency: rereading, re-learning, staying sharp independent of tools
“I will never let that dumbing down of self that everybody says the risk of AI is. I'll never let that happen because of the way that I maintain myself.”
P1 - Principal UX Designer, Insuretech
“One is it makes me lazy. So I need to intentionally say, "No, I'm going to keep thinking for myself." I need to, again, similar to the person that was learning, retain the critical thinking. Otherwise it gets lost because it's a muscle. So we need to keep practicing and using it. And this is one thing I'm really keen on passing to the more junior colleagues. It's like, you cannot skip that. Forget about that, because otherwise you will be a solo lead really soon and you cannot delegate that critical thinking and problem solving to the machine.”
P7 - Principal Design Researcher, Software Consulting
“I unplug on the weekends. I go down in the art studio. I paint my brains out all weekend. I go in the garden.”
P9 - UX Researcher and AI Specialist, Independent
The observed or perceived atrophy of specific skills (spelling, recall, manual processes) attributed to AI or automation handling those tasks
“Yeah, I think that could happen. You know, instead of going through material myself, notes and sort of collating myself and thinking that out. Yeah, I could see that skill going downhill. It's almost like my handwriting skills gone downhill as I type more and more for text. I noticed that dexterity isn't quite what it should be sometimes.”
P12 - UX Designer/Researcher, Advertising & Design
“I can see the same sort of parallel. Yeah, for sure. And that's not a good thing, especially for aging populations. You know, they need to keep that brain strong.”
P12 - UX Designer/Researcher, Advertising & Design
“My ability to spell certain things has kind of went down because I'm just so used to, oh, okay, it sees what I'm trying to do and it fixes it for me.”
P2 - IT Business Analyst & Adjunct Professor, Healthcare
A deliberate stance of using AI to enhance existing activities rather than offloading tasks entirely, maintaining personal involvement in the work
“Everything we're doing is building the plane as it flies. And I have work in Figma which hasn't been fully translated into our product. And so that work is still there to actually do those refinements, and even to truly implement the designs from Figma into code while we're still building out new features and whatnot. And so sometimes I'm using it to do the work that a front-end engineer might do to clean up our implementation.”
P14 - Head of Design, Healthcare Software
“So we are using it a little bit for that too. But it hasn't gone back into Figma yet. I feel like that's still a work in progress. And then there are times like when I was taking that LLM kind of experience, the chat-based interface project, and I spent just a couple days just working on that and I was really designing as I was building it because I had my engineer's work to start from so I was refining their work, I was cleaning up what they had done. But there would be times when I'd give maybe a general prompt and the output, maybe 50% of the output worked and 50% didn't. So, I say, "Oh, that's a good idea. We'll keep that, but then change these five things." And it's just kind of like an iterative building process. There have been other times where I'm like, "Okay, I'm going to try and use Figma Make because I haven't used it very much" and I'll give it an idea that I'm working on and the output just took a while and it's not helpful at all.”
P14 - Head of Design, Healthcare Software
“So it would need access to GCP and Azure. It would need access to our Bamboo and the other tools that are in the infrastructure that are needed for this production so that it could do the full cycle. So right now I'm the monkey in the middle. I tell it what to do in the tangible code. It does it. Now I have to run the test, take the logs from it, and feed it back and say check the logs, did it work or did it not? So I'm really in the way of the velocity.”
P15 - Senior Developer, Telecommunications
Using AI as a conversational thinking partner for complex decisions, valued for breadth of knowledge and availability rather than authority, with the user retaining full decision-making ownership
“This year, the one-on-ones that I used to conduct with the team members, some were much better than others, but there wasn't a whole lot of consistency or structure to them, and I just felt like overall they could be better as a group. So, I turned that question over to ChatGPT and just asked for some best practice methods there. And it gave some pretty decent ideas. I'll say it gave me the good starts of ideas and then I would hone them myself and then bring it back to ChatGPT for kind of like a final "does this sound like a workable plan" and then say yes or no.”
P10 - UX Manager, Insurance
“The detraction I'll say is it's almost, the word I've used with my wife about it is that it is surprisingly seductive in that I might be overrelying on it suddenly. Have I gone from, because I'd always been a little bit of a kind of a Jared Spool skeptic about, hey, this is just a word association machine, this is like a magic trick, this isn't much substance to it, to now suddenly I do use it a lot more than I think I ever envisioned that I would. And in that ideation space, it's been a lot of help just for me to broaden the approaches that I bring to the work that I've got to do for the rest of my team. So, that's been helpful for me. It's almost kind of like having a small council of different personalities or different backgrounds or different perspectives to kind of push against my default way of doing things.”
P10 - UX Manager, Insurance
“So a lot of them are kind of what I would call purists. They see themselves as, "I work in Figma a lot. I sometimes do some research and that's about it." And so now stakeholder management, influencing without authority, communicating to different levels of audiences, things like that. They all needed that. ChatGPT really did help me with coming up with a 12-month comprehensive training plan that included both paid and free sources. I really went back and forth with it for a while on this one to really kind of hone this into something that was, that thus far has proven to be valuable and also feasible from both a cost and a time perspective. So, I was able to get that done much more quickly and much more comprehensively than I ever would have been able to by myself through just what I'll call old research methods now at this point of me just googling things and talking to people.”
P10 - UX Manager, Insurance
Using AI as a personalized tutor to acquire new skills or understand unfamiliar domains, valued for its ability to adapt explanations to the learner's level, provide immediate feedback, and sustain engagement past frustration points
“And then I started using ChatGPT and from there I moved into Claude. And because of this then I was like, okay, how can I do this on my professional side? Because one of the great things that I did, since last time we connected, I took a certification in neuro-linguistic programming. So I was doing mentoring and coaching and I was running the DEI council and the mentorship program for [former company] for our business unit. So I was like, okay, how am I going to put together the content since English is not my first language? So let's use ChatGPT to polish it off, how to get a tone for the executive level. So that, it was funny because I learned a lot from ChatGPT, like how should I talk to, how should I write something. So I didn't use it in a sense of, okay, do for me and that's it. But I did in a sense of, okay, do once, do twice, and then after that I always start writing my own things and ask ChatGPT to polish it off, or Claude, and the changes were minimal.”
P13 - UX Design Consultant, Consumer Finance
“So I literally used, it was in my last year of my most recent grad program, I literally used AI to teach me how to do R. And I've used it to learn multiple software platforms at this point, specifically data analytics. Tableau was a big one. Because when I would start to encounter resistance and get to that point where I'm frustrated and I'm going to quit, I have something there where I can say, "Okay, this is the kind of visualization I am trying to make. This is where the data is porting in here and how it's set. And for some reason, I'm pulling up donuts. What is going on?" And it has that ability even from a screenshot to look and say, "Oh, well, you need to move this around." And I think something people need to remember is ask it why. Why do you need to do that?”
P6 - Senior Technical Product Manager, Consumer Finance
“It recently sent me down a rabbit hole because I was like, "Okay, what's the difference between Newtonian relativity and Einstein's relativity?" And it starts explaining it. And when I start hitting those barriers that have been there because I'm not a physicist, I can say, "Hey, explain this to me like I'm in eighth grade. Can you use an example? Give me a metaphor for what you're describing here." And the odd thing is coming away with the ability to explain this complex thing but also an interest in it.”
P6 - Senior Technical Product Manager, Consumer Finance
Using AI to compensate for a known personal cognitive weakness or tendency, not as a general productivity tool but as a targeted corrective for a specific, self-identified limitation
“When I come back from Peru, I got a new job. And one of the things that I've noticed is that they always ask you, "What is your weakness?" And my weakness is definitely, I'm almost overly detail-oriented. That is a blessing and a curse because it means you can really get over-involved in the minutia and lose sight of everything that's out here. I find that when I'm controlling AI well and I'm using it to streamline my work or to help me think through a problem or to do affinity mapping, it's great at affinity mapping, the time for me to use it is when I'm over-involved in one little thread because what it'll do is broaden me out and give me 10 different threads that I might not be looking at.”
P6 - Senior Technical Product Manager, Consumer Finance
“I think it helps me zoom out and if I need to zoom back in, helps me zoom in. It has to be accurately prompted to do it. But I really think that that is probably where it benefits me the most: it helps me to see patterns and to see things that I might not otherwise because I'm very close to my work.”
P6 - Senior Technical Product Manager, Consumer Finance
“And it's in my calendar because I integrated Claude with my calendar. So, that's really helped because like I said, I have ADHD and Claude really helps me, is helping me stay on task better because I have 50 squirrel moments a day. That's why I have literally like 40 Claude projects. I love the projects.”
P9 - UX Researcher and AI Specialist, Independent
The reframing of AI errors and hallucinations as a productive feature rather than a flaw, arguing that the need to correct AI output creates engagement, attention, and a sense of partnership that would be absent if results were perfect
“I think the hallucinations are not a bug. I think it's a feature.”
P8 - UX Researcher/Designer, Electric Utilities
“You build a relationship with AI because you have to correct it. You have to pay attention. It's not like sending something to the printer and you get exactly what was on the screen. Then you start engaging with it. And how I talk about it as a partner, I mean, that's giving it a personality and that's understanding it has flaws and strengths, and I think that's the main takeaway for me from AI is that if you want to use the strengths, you have to accept the flaws and work with them.”
P8 - UX Researcher/Designer, Electric Utilities
Specific, replicable workflows and prompting strategies that participants have developed for getting reliable, authentic results from AI tools, shared as practical methods rather than abstract principles
“That's a good question. I don't, no, I don't think it has changed how people value my, the output is, I can say productivity-wise it's definitely sped things up, you know, things that could derail me in the styling of something it can just sort of just get the information presented properly. And then the tone of voice which is really important, it's like I did a test a couple weeks ago. The voice was sort of with an executive assistant compared with an English professor. So same text and I got two different answers. So being able to do that for different audiences I think is really helpful. You know, the whole storytelling thing that is really important, especially talked about in the UX research read-up.”
P12 - UX Designer/Researcher, Advertising & Design
“So, it's RRCC: Role, Result, Context, Constraint. So before I even put in what I want, the information question, I do the role I want it to play, the result I want, you know, the goal, context, constraint. So say, here's the example: role is "act as an expert movie buff," result, "I'm looking for listing of movies playing my area," goal, "to take my family, friends who are fun," context, "I live in such-and-such city," constraint, "limit list to non-rated R movies." So that really helped with certain outputs and that's something I will most likely use predominantly going forward.”
P12 - UX Designer/Researcher, Advertising & Design
“And so, I need you to do, and what tools I'm giving to you. So, like, pretty much I fill those three bullets. So, okay, today you are my financial advisor. You're going to select for me the top 10 stocks and I want them to be in the logistic industry. So I give those specifics. Or, you know, today you're my content creator, I'm creating this email for this audience, needs to communicate this message. So it's like, which hat you wearing, what's the task you need to do, and what are the constraints or, you know, whatever background. So that's the three items on my formula, my three pillars that make my use successful.”
P13 - UX Design Consultant, Consumer Finance
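P12's RRCC formula (Role, Result, Context, Constraint) and P13's three-pillar variant can be sketched as a simple prompt template. This is a hypothetical illustration of the structure the participants describe, not tooling any participant actually uses; the function name and sentence framing are our own assumptions.

```python
def rrcc_prompt(role: str, result: str, context: str, constraint: str, question: str) -> str:
    """Assemble a prompt using the RRCC structure (Role, Result, Context, Constraint)
    described by P12, followed by the actual request."""
    return (
        f"Act as {role}. "
        f"I want {result}. "
        f"Context: {context}. "
        f"Constraint: {constraint}. "
        f"{question}"
    )

# Example mirroring P12's movie-listing prompt
print(rrcc_prompt(
    role="an expert movie buff",
    result="a listing of movies playing in my area to take my family and friends to",
    context="I live in such-and-such city",
    constraint="limit the list to non-R-rated movies",
    question="What should we see this weekend?",
))
```

The point of the structure, as both participants describe it, is that the role, goal, and constraints are stated before the question itself, so the model's output is framed before it generates anything.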
Work Transformation
How AI changes tasks, workflows, roles, and professional identity
Using AI not to produce work but to independently evaluate or defend existing work against criticism: AI as an impartial third-party judge
“I had to use AI to prove to people that what I said was on point.”
P1 - Principal UX Designer, Insuretech
“I took the survey question by question went into Claude and asked Claude to critique and Claude doesn't know and Claude doesn't have any horses in the race.”
P1 - Principal UX Designer, Insuretech
Using AI to bridge knowledge or power gaps between individuals and institutions or expert domains, enabling participation in contexts where the person would otherwise be at a disadvantage
“My biggest win I'd say being able to do the business plans, pitch decks, financials. There's also like formulas that were given that, I'm not a math guy, so having those to be able to put in Excel or what have you is really helpful. Anything math-heavy would be, you know, anything that would help with quant or whatever, big help for me.”
P12 - UX Designer/Researcher, Advertising & Design
“And then I started using ChatGPT and from there I moved into Claude. And because of this then I was like, okay, how can I do this on my professional side? Because one of the great things that I did, since last time we connected, I took a certification in neuro-linguistic programming. So I was doing mentoring and coaching and I was running the DEI council and the mentorship program for [former company] for our business unit. So I was like, okay, how am I going to put together the content since English is not my first language? So let's use ChatGPT to polish it off, how to get a tone for the executive level. So that, it was funny because I learned a lot from ChatGPT, like how should I talk to, how should I write something. So I didn't use it in a sense of, okay, do for me and that's it. But I did in a sense of, okay, do once, do twice, and then after that I always start writing my own things and ask ChatGPT to polish it off, or Claude, and the changes were minimal.”
P13 - UX Design Consultant, Consumer Finance
“I would copy that and paste it into chat GPT and say, "What does this mean in English?" basically and it would kind of dumb it down for me so I could understand better.”
P2 - IT Business Analyst & Adjunct Professor, Healthcare
AI-enabled faster delivery simultaneously raises stakeholder expectations for what individuals can produce, creating a treadmill effect
“I thought it was they asked me to do too much and I kept saying as much. ... I just delivered that for 25 to 30 tasks that I did.”
P1 - Principal UX Designer, Insuretech
“I think the thing I'm getting better at is responding to terrible dysfunctional expectations of stakeholders because I can do things faster.”
P1 - Principal UX Designer, Insuretech
“We had two weeks we lost all the vendors like all of them like 45% reduction but just in the design team. So we were still supporting 14 delivery pods and we're like oh crap how are we going to keep the ship right?”
P3 - Head of Design, Banking
A mismatch between the AI tools an organization officially provides and what individuals actually need to do their work, leading to shadow IT, personal device workarounds, or side-of-desk experimentation
“Yeah. So there's Copilot. We can use Copilot and we can use Canva AI. But I learned a thing. So, if I get off of the school district's network and use the guest network, I can access any AI I want.”
P11 - CTE Program Manager, K-12 Education
“Gemini and Perplexity. Yeah, those are the ones. I really love Gemini, but I use it every time I'm not on the school network.”
P11 - CTE Program Manager, K-12 Education
“So that actually, I used to work at [former company] and I started as just a presentation specialist putting together all the PowerPoints for the senior leadership and then maybe a year and a half into that I got moved to the Chief of Staff team for one of the EVPs that was data and analytics, and then became decisions and analytics, and they started talking about ChatGPT and how [former company] was getting involved with AI. And one of the VPs that I was supporting, Ragu, he was like, to me, the genius in AI, everything, you know, it's that kind of person you look and said, "Oh my gosh." And when they said, "Ragu, where should I go?" It's like, "Well you start playing with ChatGPT, look for Google, some classes." And then I start like dipping a little bit and then my first experiment was, okay, let me, in my personal, let's start with the personal first because [former company] was kind of funny, they were exploring a lot of things in AI but everything was like firewalled so everybody was trying at home but we could not actually try. It was kind of, I never understood that whole rationale behind.”
P13 - UX Design Consultant, Consumer Finance
Using AI to learn and adopt the communication patterns, language, and framing of higher-status roles or audiences in order to advance professionally or increase influence
“I started taking those newsletters and corporate communications and feeding them into Copilot and then had Copilot build me how to write like this executive. What are their key points? How do they say their things?”
P3 - Head of Design, Banking
The range of ways organizations struggle to find an effective path forward with AI: mandating usage with arbitrary targets, deploying without clear strategy, creating perverse incentives, failing to account for verification overhead, making premature personnel decisions, or moving too slowly due to security and governance concerns
“I talked to a person a week ago who was let go because of their job, because of AI, only to be rehired because they found out that they were wrong to let the people go because they found out AI couldn't do all the things.”
P1 - Principal UX Designer, Insuretech
“Then the next chapter is they rolled [generative AI] out at work and basically told us you better start using it. And they even, they don't monitor what we use, what we chat with it about, but they monitor how often we chat with it.”
P10 - UX Manager, Insurance
“And then they kind of look for, all right, what have you done lately that's improved efficiency using ChatGPT, for example, and now Claude. So it's a little bit with a gun to my back that I find I'm dipping my toes into it deeper every day.”
P10 - UX Manager, Insurance
The organizational challenge of evaluating, integrating, and maintaining AI-generated or vibe-coded artifacts produced by non-engineers (e.g., subject matter experts, product managers): the artifacts appear functional on the surface but lack documentation of intent, design-system alignment, or user-centered rationale, creating downstream problems for the teams responsible for quality and coherence
“Yeah, that is also happening and I haven't talked about that yet. I mean this is almost like very detailed product documentation but it's also that we have sort of a process of them going through AI to come up with that and then come up with the technical details to start implementing it. Also they are vibe coding some of those interfaces and sharing them with me and the team, and that's been its own interesting challenge. How so? Tell me more about that. So I think there are a few aspects of it. In [our customers' industry] I think the bar in terms of end UI design is not always terribly high and so when someone vibe codes a design, puts it out there like "oh we're done, look, so and so did it, it's there," and I start looking at it and there are some things on the surface that are fine and they're working and maybe there are a few good ideas that I haven't thought of too.”
P14 - Head of Design, Healthcare Software
“But then you start to peel away the surface and there's so much that doesn't make sense in terms of what we're doing and the layout in addition to just like maybe the design system we're using. It doesn't map to the design system we've already established. And so there's all those aspects of it, but then there's even translating the domain and the intent into the interface that I never, like sometimes I'll just, in the past maybe I'll get handed one of these live coded interfaces without that context and I'll have to go back, either I'll have to do my best to extract that intent out of the interface or I'll have to go back and ask 20 questions just to figure out what was going on. And so this is something I sort of have unsuccessfully proposed which is that we do a better job of documenting our intent if anybody's going to be vibe coding interfaces and put some structure to that so that we can say, okay, so and so made an example of this application, what were you thinking, what were you hoping to accomplish, and with the idea that maybe if that was documented we could assess it together and see whether it was working.”
P14 - Head of Design, Healthcare Software
Concerns & Risks
Fears, disappointments, and perceived threats from AI adoption
Concern that AI dependency displaces foundational knowledge and judgment, whether through organizational leaders making uninformed personnel decisions or through broader erosion of critical thinking and domain skills across practitioners and generations
“I talked to a person a week ago who was let go because of their job, because of AI, only to be rehired because they found out that they were wrong to let the people go because they found out AI couldn't do all the things.”
P1 - Principal UX Designer, Insuretech
“But again, on the detraction side, I kind of worry like how much of myself am I losing through this process because I'm just lazily relying on it now to provide me with all of the perspective. So yeah, that kind of concerns me.”
P10 - UX Manager, Insurance
“And when [his son] hits the job market, is he suddenly going to find that if he, for whatever reason, ChatGPT falls out of vogue, it becomes illegal, something unforeseeable happens. Does he, when you kick that crutch out from underneath him, is he capable of doing anything? Is anyone capable of doing anything? And I've seen some articles about this idea of just the stagnation of human capabilities. The more we lean on something that can do something so comprehensive for us, or at least that we believe to be so comprehensive for us. So that gives me concern for the future. I don't know what to tell either one of my kids about what's an AI-proof, if there even is such a thing. What's an AI-proof field of study for you, field of work for you? Or how should you again responsibly integrate it into your work in a way that's not eroding your own ability to think critically and put two and two together.”
P10 - UX Manager, Insurance
Disappointment or anger at AI confidently producing fabricated content, especially when source data should have prevented it
“I hate the hallucinations because it seems like there's no excuse for a lot of them, but it happens anyway.”
P1 - Principal UX Designer, Insuretech
“I've also run into some situations with ChatGPT where it will just obviously hallucinate something. The chief example I always have of that is there was a time where, literally, it was last November, I needed to make a calendar for like a newsletter that would have been the month of December and I just didn't feel like making the Word table. So I asked it, "Make a Word table that's a calendar for the month of December with two rows for each date," that sort of thing, and it messed the dates up. Like if November started on a Monday, it had it starting on a Tuesday where none of the dates lined up.”
P10 - UX Manager, Insurance
“The biggest disappointment would be like when it's I'll say confidently wrong. Like it thinks that it's right and then it starts telling you to do things or that these things are facts.”
P2 - IT Business Analyst & Adjunct Professor, Healthcare
Fear that AI will reduce the number of people needed for a given type of work, prompting consideration of career pivots toward skills that are harder to automate
“Anxious. Just because of the fact it just seems like it's come on too fast, too strong, too quickly. And without anybody really understanding any of the ramifications, governance, ethics, environmental concerns, economic concerns. Again, when the Sam Altman types of the world will talk about this golden utopia in the future where nobody has to do any sort of like drudgery work anymore. It's like, well, no offense to anybody, but the economy runs on an awful lot of people doing drudgery work. And what happens when all, you're just going to say these people just live a carefree life with no job anymore because there's nothing for them to do and they just have this limitless free time now because all that overhead has been lifted from their lives.”
P10 - UX Manager, Insurance
“So I just don't know. I feel like it's just, I've said to my wife in the past, I think in the future, unfortunately, when we look back 30 years from now on this era of technological advancement, I think the legacy of this phase of AI, this gold rush mentality, is just going to really be exposing the greed of C-level executives in the world right now in that they would put their faith in anything which will allow them to say, "I'm the person who cut staff expenses by 30% and as a result you stakeholders all got higher dividends and I got a bigger bonus, so everybody wins." But that's just not true.”
P10 - UX Manager, Insurance
“I think unless you know how to use it, most of the, I don't see UX designers surviving in 10 years from now. It's sad that I'm saying this, I mean, I'm passionate about that, but AI is taking over. So anybody who has strategic thinking can take over anything. You know, you can use any tool to do graphical design, UX design, anything that was done by a human before as far as creativity can be done by AI.”
P13 - UX Design Consultant, Consumer Finance
Appears in
Concern that AI will prevent junior practitioners from developing foundational skills through hands-on experience, creating a generation that cannot identify errors because they never learned the craft without AI
“I mean, I watched both my kids when I was in school. My parents' biggest worry was, am I on drugs? When my kids were in school, like in high school, five, six years ago, my biggest worry was like, are they cheating off of others? Everyone seemed to be crowdsourcing all the homework. And I'm like, is anyone actually learning anything other than just how to get by in an ethically dubious way? And now I feel like AI has almost given rise to the legitimacy of that now in a way.”
P10 - UX Manager, Insurance
“Well, I kind of think of, I don't know that I have many concerns because when we, my only concern is that we don't start with the basics in school and that we give AI too soon. So I'm talking about elementary years, primary years. Because I'm thinking back to when I learned long division and multiplication and the basics of math, it was like learning a language without knowing you're learning a language.”
P11 - CTE Program Manager, K-12 Education
“There's a logic behind it. And I feel like if we skip over learning [the basics of math, long division and multiplication], maybe we'll have people who can't think for themselves. But we're starting to see that now with students coming up because they're over-tested, just because of over-testing. So I feel like, you know, we used to farm and we used to be really active and walk and now we just go to the gym. So those who have the motivation to hone their creative thinking or critical thinking skills, they will. Those who don't want to won't. And that's where the divide will be, I think.”
P11 - CTE Program Manager, K-12 Education
Appears in
Concern about generative AI models being trained on creators' intellectual property without compensation, and the resulting threat to creative careers when AI-generated output becomes a commodity
“There's no royalty model. We know that a lot of the models were trained on other people's intellectual property... there's no compensation for it.”
P3 - Head of Design, Banking
Appears in
Concern about the physical and environmental costs of AI infrastructure (data centers, power grid stress, water consumption, farmland loss), grounded in direct personal proximity to the impacts rather than abstract environmentalism
“I live in [midwest US city] and there is a data center getting put in [city], which is where [organization] headquarters is, and there's a data center being put in [neighboring city], which is technically the city I live in. And so we're just seeing all these horror stories of people running out of water and we know they're coming for the Midwest because of our water and it makes us worried that it's all some big dumb bubble.”
P4 - Senior UX Researcher, Software
“My husband works [at] our electrical company as a lineman. So he already sees how stressed out the grid is from people just flicking on their air conditioning in the summer. And a lot of these they just get a free pass at a lot of our utilities whether it be water and power without building their own substation because that would cost way too much money.”
P4 - Senior UX Researcher, Software
“Our friend who does the concrete for the [nearby city] data center that there was a big push to get that closed, but there's just not very many laws to protect the rights of what people want. They're already building it in a way that they're like, 'Well, we can turn this into a warehouse or maybe this would just be an Amazon warehouse afterwards.' So they're already kind of predicting like the people that are building it are already like this bubble might pop.”
P4 - Senior UX Researcher, Software
Appears in
The observed degradation of AI output quality over repeated use of the same prompt or workflow, where initially reliable results become inconsistent or off-target without any change in the user's input
“It works for a little while. It may work beautifully for a hundred inquiries using that prompt, but eventually it starts to drift. And I know that I've had this frustration factor on both my personal use as well as sometimes my use at work, particularly with, I've admitted I'm frustrated with Copilot. They get drifty and they get really drifty and you sit there and you're like, "It's not that hard. Why don't you just do your job? I told you what your job is." And one of the things that it's doing in the back end is it's trying to streamline me. Given it a complex set of edits, like, "Where can I cut corners?" And so I would say is kind of where the disappointment is, that it's hard to create a workflow that replicates every single time consistently without it being long and detailed, saying, "You may not move on until this happens." That's I think the big disappointment, that you don't have the, vibe coding is such a thing right now, but you don't have that kind of usage of AI.”
P6 - Senior Technical Product Manager, Consumer Finance
“I'm used to trying things over and over and over again, and once I get it right, I don't have to worry about it anymore. It works. With AI, I'll try things over and over and again and I get it to work, then I try to use it again and I get something different, like complaints, "Oh, I can't access these files" that I just accessed before.”
P8 - UX Researcher/Designer, Electric Utilities
Appears in
Concern that deep dependence on a single AI vendor creates vulnerability to price increases, outages, or platform changes, prompting contingency planning and a desire for portability across the AI stack
“And are the AI vendors going to be like crack dealers that say, "Oh, the first taste is affordable," and then they're going to turn the screws on us and now what? We're stuck with a codebase that no monkey in the company can grok, right? And so you're stuck. You need the AI to support the AI because Pandora's box has been opened and this is just one sphere of it, right? I try to push all that aside because I'm having fun with the shiny new toy and trying to figure out how to best use it.”
P15 - Senior Developer, Telecommunications
“But really I'm about to start writing on "what's your plan B" because I'm so integrated with Claude and Claude products and Anthropic that I don't like it. I feel monopolized. So I am trying to come up with a backup plan for when Claude Code goes down, right? It happens all the time.”
P9 - UX Researcher and AI Specialist, Independent
“Price gouging. Yeah. And I'm a plan B kind of person. You know, it's just back up your backup, your backup because I'm just wired that way. And I think it's an important topic of conversation. Like I am really afraid AI is going to create this massive class divide and what happens when Claude is $500 a month or $800, you know, what's your plan B? So, what happens when it goes down? Do you stop working for the day? Like I see people so dependent on it that they're like, I don't know how to work without it anymore.”
P9 - UX Researcher and AI Specialist, Independent
Appears in
Concern that uneven AI adoption, whether driven by refusal, lack of access, or cost barriers, is creating a new class of disadvantaged individuals and communities. Unlike the traditional digital divide, the gap here is between AI users and non-users rather than between those with and without internet access
“I worry for people who don't understand it. There's a huge, the majority is like, "I'm afraid of AI," or you know, Gen Z is absolutely opposed to it. And I think about, you know, there's also like a gender gap apparently, they say. I don't know if I buy it. In enterprise, women, I think, are leading the way. In general I think women are more cautious. So, the environmental piece is always front and center for me. But the people who are blowing it off without trying it and not like 10xing themselves, like I want to see, you know, women in business thrive. And if they automatically are a hard no on it, they're putting themselves at a disadvantage. So I worry about that a little bit.”
P9 - UX Researcher and AI Specialist, Independent
“But I am a philanthropist at heart, so really for me it's closing those gaps, the gender gaps, the pay gaps, the poverty gaps. That to me would be the best thing that could happen.”
P9 - UX Researcher and AI Specialist, Independent
Concern that AI systems trained on non-representative data reproduce and amplify existing social inequities, particularly affecting minority communities through biased feedback, culturally insensitive outputs, and failures of recognition
“But even, you know, so, and then there's another problem with it on a broader sense where I don't trust, because I was reading research that the UN is really pushing AI in the global south for teaching and learning to create learning management systems and to give students feedback. I don't trust that because the data set that they're using is so westernized. It seems like another version of colonization and cultural, how do I say that?”
P11 - CTE Program Manager, K-12 Education
“Like making culture homogeneous, I guess. So those are some of the things I think about when I think about trust.”
P11 - CTE Program Manager, K-12 Education
“I would just say it's about our own bias. Like, we live in a world that is not fair, it's not just, it's not equitable. And how is AI amplifying that in the world? That would be my only concern. Without checks, is it just, like, I've read a few studies where students were given feedback based on their writing and minority students were given less rigorous feedback from AI than Caucasian students. And then AI didn't recognize different dialects of English except for proper English. And then facial recognition didn't recognize Black students as human when they went in to be recognized for a test that was proctored, it didn't recognize their faces.”
P11 - CTE Program Manager, K-12 Education
Frustration that reliable information about AI tools, best practices, and skill requirements is scattered across platforms with no centralized, authoritative source, creating a barrier to effective adoption and professional development
“But what is very disappointing, and I think that's a common agreement with everybody that I talk to, information about AI tools is always scattered. So you don't have like, say, I'm still trying to learn Figma, okay, and I go there and I start like, okay, and then I go someplace else. And it's the same thing when I say, okay, where is a tool for ChatGPT, where can I find the tips, the tricks, you know, the dos and don'ts? Or what are the top skills, the top AI tools that people are using? Because each company, when they look for jobs, they have different tools they're using for AI. So where do I go, where do I learn, where do you know, are those actual resources? I feel everything is so scattered. Sometimes you find stuff on YouTube, sometimes on Instagram or LinkedIn, or, you know, that's the biggest blocker that I have.”
P13 - UX Design Consultant, Consumer Finance
“And I've seen this on job posts, like the tools required are different by same industry, by different companies. So, I don't have a, like, financial, like, that's the standard for financial is this, or the standard for healthcare. No, like, I was doing, like, for, like, say, presentations for education. Each company is asking for a different tool with AI. So I think that's the biggest gap, the lack of standards. We don't have a go-to. We have too many options and it's almost like you have to be like the jack of all trades, the unicorn of AI.”
P13 - UX Design Consultant, Consumer Finance
Fear that the absence of enforceable rules, penalties, and oversight for AI misuse will lead to unchecked harm, distinct from concerns about specific AI failures and focused instead on the systemic lack of a regulatory framework
“I would say the biggest fear is no guides. Like there's no rules to punish anybody that's using AI to harm the world. Okay? So no matter, you know, to start a war, to contaminate food, whichever, you know, when you're using AI to cause harm and there's no rule to punish those people, there's no way to stop them. So that's my biggest fear, the lack of police per se.”
P13 - UX Design Consultant, Consumer Finance
Comparison Matrix
Which themes appear in which sessions. Filled cells indicate the theme was identified and coded in that session.
Evidence Density
How much annotated evidence supports each theme. Darker bars indicate more coded passages, which can signal either a richer theme or one that participants feel strongly about.
Social Dynamics
Disclosure norms, workplace politics, professional identity, and social effects of AI use
A personal or organizational policy of always disclosing AI use, framed as ethical obligation rather than compliance requirement
Appears in
Emerging informal standards within teams or organizations about when and how to attribute AI contributions, developing organically through practice rather than through formal policy, and shaped by seniority dynamics and perceived professional vulnerability
Appears in
The ability to recognize low-effort AI-generated content from others, and the social and professional consequences of that recognition, including diminished credibility and reduced impact of the message
Appears in
The recognition that AI has long been embedded in everyday services and infrastructure without users' awareness, and that public perception of AI is distorted by conflating 'AI' with recent generative/consumer tools while ignoring the foundational AI that already underpins critical systems
Appears in