P8: Survey Data and Session Summary
Survey Responses
| Question | Response |
|---|---|
| Age | 55-64 |
| Education | Master's degree |
| Role / Level | Individual contributor |
| Job title | UX Researcher/Designer |
| Years of experience | More than 25 years |
| Organization description | UX team at a large electric utility company |
| Industry | Electric Utilities |
| Individual AI tools used | Text generation (creating documents, emails, summaries), Search and information retrieval, Data analysis and synthesis, Code generation and completion |
| Organizational AI tools | Internal search and knowledge summarization, Code generation and developer tools |
| AI adoption involvement | Contributed to technical design, requirements gathering, or implementation |
| Biggest work win with AI | I created a UX analysis agent in Copilot to help me go through interview transcripts and create initial analysis for review and further development by the research team. |
| Biggest work disappointment with AI | Using AI is unpredictable even if using the same prompt. I tried to create an agent that would use our company templates for creating documents. The agent worked once as intended and after that it claimed that it couldn't find a template, or didn't have access to it, or couldn't edit in Word. |
| Organization's biggest AI success | We are in the process of adopting AI in several areas of the organization. I don't think we've experienced great success yet. As a large utility we are slow at adopting new technologies, and security and privacy are the main obstacles for us for moving quickly. |
| Organization's biggest AI challenge | We spent a long time looking for a code assistant tool. We even piloted one that our developers loved, only for it to be rejected by our security team. |
Background
P8 is a UX researcher/designer at a large electric utility company with more than 25 years of professional experience. He holds a master's degree and works as an individual contributor. Before moving into UX, he spent years in industrial product design doing 3D CAD modeling and renderings, a career he left partly because he saw automation eroding the value of those skills. That prior experience of technological displacement gives him a distinctive vantage point on AI adoption: he has already lived through one round of his professional skills being automated away.
P8 works in one of the most security-constrained environments in this study. His employer manages a large portion of the US electric grid, faces millions of cyber attacks daily, and locks down AI access so aggressively that even visiting a website with "AI" in the URL is blocked. The tools he can use at work are limited to Copilot, an internal chatbot called Chat [employer], and Photoshop's built-in generative AI. A promising pilot of Kiro (an AWS coding assistant) was killed by the security team despite developer enthusiasm. P8 responds to these constraints by experimenting on personal equipment at home.
Despite these restrictions, P8 has built a UX analysis agent in Copilot that processes interview transcripts and generates draft reports, which he credits partly to a former colleague who shared AI course materials with him before leaving the company. He approaches AI as a collaborator and partner, consistently framing it as a tool that augments rather than replaces his work, and has developed a distinctive philosophical stance toward AI's imperfections.
Key Findings
The Security Wall and Shadow Experimentation
P8 works behind what may be the most restrictive AI security environment described in this study. His employer's cybersecurity concerns are not abstract policy but grounded in the reality of protecting critical infrastructure. The result is a cascading set of restrictions: previously approved tools like Miro and Figma gain AI features and lose their approval, promising pilots get killed by security review, and even web browsing is filtered for AI-related content. P8's response is to go home and experiment on his own computer, creating a clear separation between sanctioned work use and personal exploration.
"We're not allowed to use Figma Make because of their licensing agreement. And there's other tools. We're not allowed to use Lovable. I mean, right now we're not allowed to use the Google tool, but I just went home one day and just experimented on my own computer. It's not in the [employer] environment."
Trust Through Iterative Constraint
Most participants in this study describe trust calibration as a process of verifying AI output after it's produced: cross-checking facts, applying domain expertise, scaling scrutiny to the stakes. P8 takes a different approach. When AI gives him untrustworthy results, his first assumption is that he failed to define the problem well enough. His trust practice is front-loaded: narrowing the specification, adding constraints, and iterating on the input rather than auditing the output.
The Photoshop example illustrates this concretely. When asked to replace a person in a photo, AI produced nonsensical results. Rather than concluding the tool couldn't do it, P8 decomposed the task into smaller, more precisely defined sub-tasks (change just the face, add specific gear, specify age and appearance) and got progressively better results. He describes this as "a metaphor for how I interact with the different chatbots."
"If I can't trust it, well, my first assumption is that I did not define the problem well enough, I think."
Hallucinations as a Feature, Not a Bug
P8's most distinctive contribution to this study is his reframing of AI hallucinations. Where other participants express frustration or calibrated acceptance, P8 argues that hallucinations are productive because they create engagement. If AI produced perfect results every time, he argues, people would not engage with it as deeply. The need to correct, refine, and iterate is what builds the "partnership" dynamic he values.
He draws an analogy to teaching: a child learning a skill doesn't get it right at first, and the process of correction is how both the teacher and the learner develop. He extends this to a broader philosophical point about accepting flaws alongside strengths, arguing that the willingness to work with AI's imperfections is a prerequisite for accessing its capabilities.
"You build a relationship with AI because you have to correct it. You have to pay attention. It's not like sending something to the printer and you get exactly what was on the screen. Then you start engaging with it."
AI Slop as a Design Quality Problem
P8 uses the term "AI slop" unprompted, before the interviewer introduces it, and offers a practitioner's definition grounded in design craft. His concern is not primarily social (looking bad to colleagues) but substantive: AI-generated deliverables can follow surface-level patterns perfectly while missing the connection to actual end users. The output "looks beautiful and follows some patterns, but it doesn't necessarily provide anything new and it might not relate to the end user."
This framing extends the slop concept beyond the recognition problem (can you tell it's AI-generated?) to a quality problem (does it actually serve the user?). P8 also notes a specific tell for AI-generated text, the em dash, which he references when describing his humorous "edited by AI" email signature.
"It's easy to create an interface that's essentially AI slop. Yeah, it looks beautiful and it follows some patterns, but it doesn't necessarily provide anything new and it might not relate to the end user."
The Perception of Reality Fear
When asked about his biggest concern, P8 offers a fear that differs from most participants' focus on job displacement or skill erosion. His worry is that AI-generated synthetic content, consumed often enough, will alter people's perception of reality itself. He grounds this in a constructivist view of perception: people build their reality from what they experience, and once those beliefs solidify, they become resistant to correction. If synthetic content becomes indistinguishable from real experience, the foundation of shared reality erodes.
"If we start perceiving things that are not real often enough, it is dangerous for us. ... So if we start playing with that perception, it's really going to change the way people can view the world."
Emerging Themes
| Theme | Description | Key Quote |
|---|---|---|
| Corporate Tooling Gap | Mismatch between AI tools an organization provides and what individuals need, leading to shadow IT | "It's very, very locked down to the point that even if I go to a site that has AI, the word, in it, I can't go to it." |
| Trust Calibration | Deliberate, ongoing practices for evaluating AI trustworthiness on a spectrum | "My first assumption is that I did not define the problem well enough." |
| Augmentation Not Replacement | Using AI to enhance existing activities rather than offloading tasks entirely | "To me, it's like working with a partner because it gives me ideas that I couldn't really figure on my own." |
| AI Slop Detection | Recognizing low-effort AI-generated content and its consequences | "It looks beautiful and it follows some patterns, but it doesn't necessarily provide anything new." |
| Disclosure Norms | Emerging standards about when and how to attribute AI contributions | "I recently started signing my emails that I ran through Copilot at the end, 'edited by AI.'" |
| Prompt Drift | Degradation of AI output quality over repeated use of the same workflow | "I get it to work, then I try to use it again and I get something different." |
| AI as Equalizer | Using AI to bridge knowledge or power gaps | "I'm not a native English speaker. When I write, I make grammatical mistakes and I don't find the right words always." |
| Hallucination as Engagement | Reframing AI errors as productive features that create engagement and partnership | "You build a relationship with AI because you have to correct it." |
| Organizational AI Adoption Challenges | Security and governance concerns slowing AI adoption and killing promising pilots | "We even piloted one that our developers loved, only for it to be rejected by our security team." |
Interview Transcript
00:00:00
Paul: So my first question to you is can you think back to, if not your first light bulb moment, to one of the most top of mind initial aha, light bulb goes on moments when it comes to you interacting with generative AI?
P8: So even before I actually interacted myself with generative AI, I knew it's going to change at least the industry I was in before. So I was in product design and for the longest time I did two things. I did renderings and I did modeling, and then software for renderings was getting better and better and better to the point that anybody can do it. So I kind of lost 50% of my value for the company in a way. Then I started reading about smart CAD software that can do a lot of stuff on its own, so I figured I probably should look for something else, which is why I switched to UX. I mean that was one of the reasons, but there's lots of others.
00:01:15
Paul: I believe you came from the graphic and illustrative design work.
P8: Product design, 3D product design.
Paul: Oh, that's right, the industrial design world.
P8: Yeah, I was doing 3D CAD mostly. I was a tactical person and I wanted to be more strategic. I do remember, I don't remember the details, but I remember finding out about ChatGPT and I can go and ask questions, and at first it was just a very unsatisfying experience. Type a prompt, usually something that's not that important, and I get a response and I was like, huh, that's neat. I think my Alexa speaker can do that. But I don't think I realized how much power it had at first.
00:02:10
Paul: Let's fast forward.
P8: It's been a journey to where I'm at now, I think.
Paul: Well, let's talk about where you're at now. So let's fast forward to today. What AI tools and capabilities are you using at work and in non-work?
00:02:27
P8: At work, my two main tools, I'm using two different chat AI tools. I use Copilot and we use our own tool that's called Chat [employer].
Paul: Do you use Copilot because it's approved within the organization, so it's your only other choice besides the internal tool?
00:02:59
P8:
[Employer] is extremely concerned about cybersecurity. We own a third of the electric grid and we get I think millions of cyber attacks a day, literally. So we have ways to, I mean it's very, very locked down to the point that even if I go to a site that has AI, the word, in it, I can't go to it. So the tools I can use at work are limited. Also, if there are new tools, the problem is a lot of the software tools that have been used before, whether it's Miro or Figma or anything, all of a sudden that has an AI component and a lot of them are not approved for different reasons.
So I've also been on a pilot for a coding assistant, which was fun, but it ended up not meeting our security team's levels of what they're looking for. So we're looking for another one.
00:04:22
Paul: How's that?
P8: It's a terrible tool. I mean, I think it's really absolutely awful. Certain things it will do well. So for example, I had a vertical picture. I wanted it to be horizontal, so I just added background and it did a really good job, doing in less than a minute what would have taken me four hours. But other things, I was tasked to replace people in pictures with other people, and that was just miserable until I realized how to use it, which applies to a lot of things I do with AI now: the guardrails have to be very well-defined is what I learned, and be prepared to be iterating a lot.
00:05:05
Paul: That's interesting. I've got some questions about some of your survey answers, but I want to return to your thought about the guardrails have to be well-defined. Can you talk more about that?
00:05:14
P8: Yeah, I mean, let me give you a simple example. So we had a person who left the company and wasn't wearing the right protective equipment in the picture, and I was asked to replace him with another person. So I circled that person, I said replace it. I got three options, none of them were even close to being positioned in a way that made sense, and two of them were minorities, which I think is what they're trying to establish as part of their bias correction. They don't want to be biased, so they're going the other direction. Meanwhile it was just a short Asian woman that was looking towards the camera. It just wasn't at all what I expected. So I realized, okay, I can't just do that. So then I just started focusing on, let's say, the guy's face and said, let's add some protective gear. So I said change the face, add protective gear, it's somebody in their mid forties, some facial hair, stuff like that. I mean,
just really defining, getting narrower and narrower in focus of what I want AI to do, and that got me better results. And that's kind of a metaphor for how I interact with the different chatbots: start with a really good definition of what I'm looking for, give it some background, and I turn it into a conversation.
00:07:06
P8: And sometimes it can do what I want it to, which is frustrating. I mean I know it can, it just doesn't because, whatever. And I think I mentioned it in my survey, the inconsistency of the results is driving me nuts. So I have somewhat of a more technical aptitude, I would say, than most designers. Maybe not, trying not to be too generalized, but
I'm used to trying things over and over and over again, and once I get it right, I don't have to worry about it anymore. It works. With AI, I'll try things over and over again and I get it to work, then I try to use it again and I get something different, like complaints, "Oh, I can't access these files" that I just accessed before.
00:07:57
Paul: I have experienced that inconsistency as well. In your survey answers, you described some successes and some challenges that you personally experienced and your organization experienced while adopting or deploying AI. Is there anything you want to elaborate on from the survey?
00:08:19
P8: I worked with a very talented UX researcher. Unfortunately, she couldn't keep her position at the company because she had to move to an area where we can't employ people, but she was very much into AI. She just loved to learn. She took all these courses. She taught me a lot about how to use AI, basically shared all the course materials with me. So my successes have been because of her. I was able to write a decent, but everything is always improving, a decent UX analysis agent.
00:10:11
Paul: I know that you mentioned in your responses that you had a great proof of concept going, but it wouldn't pass the internal checks.
00:10:26
P8: That's the pilot we did of Kiro. Kiro is an Amazon Web Services code development assistant environment. It works extremely well, and I used it both to code some apps, or basically I used it to create tools to help me with my day-to-day. Like, we have a very convoluted timesheet process and it takes me hours. So I tried to write something to help me with that. It wasn't perfect, but it kind of worked, but I ended up just not using it. But as far as creating prototypes and proof of concepts, I can say I want this type of software, it's supposed to do this and this, and just very general comments, and it just would create an interactive prototype, essentially the front end, and it's coded.
I was really, really looking forward to learning how to use [Kiro] more and more, and then they pulled the rug from under our feet, and so I started researching other tools, but as of now, I can't bring anything in.
00:12:05
Paul: I'm not familiar with that tool. I'm familiar with Google AI Studio, but I will look at Google Stitch.
P8: I think Stitch is newer.
00:12:17
P8: So I'm hoping I can basically try and figure out how to bring it in.
We're not allowed to use Figma Make because of their licensing agreement. And there's other tools. We're not allowed to use Lovable. I mean, right now we're not allowed to use the Google tool, but I just went home one day and just experimented on my own computer. It's not in the [employer] environment.
00:13:37
Paul: I'm going to talk a little bit about how AI's changed how you do things. So are there things that you just no longer do because you've automated it and assigned it to AI? And on the flip side, are there things that you've now started doing because of AI, and this could be, "Well, it's freed me up to do different things that I want to do," or, "Well, now that I've got this wired with AI, now I've got to go do something else I wasn't expecting to do and frankly don't want to do." So tell me how it's changed your tasks and activities.
00:14:14
P8: The biggest time saving for me has been in the analysis phase. So I can run five interviews, get the transcripts, put them into Copilot and run it through my agent that I created for a UX analysis, and I would get at least a draft of the results or even a draft of the report with all the sections. And to me,
it's like working with a partner because it gives me ideas that I couldn't really figure on my own, different insights, but of course I have to double check everything. So that's actually a good example of guardrails. Before, I used the same tool to create personas for users of a data cataloging tool that we're looking to buy. So I gave all the interviews to the AI and said, create these personas, and it created five great personas, but it wasn't based on the data, it was based on general knowledge.
00:15:43
P8: But the analysis, I mean, being able to do that in such short amount of time is phenomenal. It's stuff that would take us hours and days, like moving sticky notes around, because AI is really good at finding patterns, that's what people say, creating affinity maps and themes and all that stuff. It's a lot of work and sometimes it's not fun. I mean for me, the fun part is talking to people. I love that. And usually in a conversation I can come up with one or two really good ways to approach the problem, not a solution, but a good way to approach it. But I can't just go ahead and move forward without actually having the report, which in a way seems like redundant. Why do I have to go and do that now? I think I know where I want to go. But it does help definitely solidify my ideas and provide some backup to what I want to focus on next. So that's been a huge help.
00:16:49
Paul: Has AI changed how you interact with people professionally at work?
00:16:55
P8: Not yet, but as a joke, I think that's kind of a humor I came up with. I think we're going to start corresponding with prompts. Because right now...
Paul: So I thought you were going to say we're going to start corresponding with other people's agents.
00:17:10
P8: Well, that might be too, but I mean a lot of times I need to write an email, type the prompt into AI, it gives me the email, so why do I need to do this? Just type the prompt and then they can figure it out. But how I interact with people, that's a good question. Yes and no. So AI by itself didn't change how I interact with people. I mean, writing actually is a good example.
I'm not a native English speaker. When I write, I make grammatical mistakes and I don't find the right words always, and I'm worried that either my point doesn't come across, which is a problem, or especially when you type, that I might just not look as smart as I think I am. Again, my vocabulary is not as rich as I would like it to be. So that helped me with communications.
00:18:52
P8: And that's because they don't trust it. And I think most of their experiences have been negative. Also, I think there's some fear with it taking over things and reducing the quality of our deliverables.
Paul: Is that a fear that you share?
00:19:17
P8: No, because I think if you use it correctly, if you plan for the risks and mitigate them with human interaction, then it's only going to benefit the workflows. But
if you just take things as they are and not try to refine them, yeah, I mean it's easy to create an interface that's essentially AI slop. Yeah, it looks beautiful and it follows some patterns, but it doesn't necessarily provide anything new and it might not relate to the end user. So it might follow all the rules but miss some key points that are hard to define.
00:20:02
Paul: You mentioned AI slop, and I'm going to set up a situation that's very common to many of us where we've encountered someone sharing something, usually in a work environment, that people suspected was a low effort, low quality AI first effort. So my question to you is not whether you've experienced that because I think that's very common, but two questions. In what situations do you typically disclose your use of AI? And the follow-up is, are you seeing norms and unwritten rules grow about disclosing use of AI?
00:20:45
P8:
I recently started signing my emails that I ran through Copilot at the end, "edited by AI." And again, there's no reason for me to do that. And I kind of do it because I think it's funny, but it's kind of like the "sent from my iPhone" or whatever. But I feel like, beyond [people having to] look for the em dashes, I think it's a good way to disclose it.
00:21:16
Paul: In what situations do you disclose AI? You talked about your email. And are you seeing any kind of unwritten rules or norms?
00:21:24
P8:
I don't see unwritten rules. I always disclose when I use AI, whether it's reporting the results of something or if I do create any kind of visuals.
I'm going to run it through AI first and see if it comes up with some starter ideas instead of me doing a whole exploration. And it came up with something that I thought was pretty neat, and I just rebuilt it in Illustrator and gave it more depth and just more human touch, if you will.
00:22:17
P8: And it's not something I could have thought of on my own, and it saved me time to go through a bunch of iterative stuff. But this was a situation where it's low risk. This is not going to go on websites with 5 million users. This is going to be, they printed like three copies for the end of their aisle. And whenever I walk by it, I still feel it's better than anything else because everybody else, if they have any graphics, it's clip art. This is, I guess, the new clip art, if you will, but it doesn't replace real creativity. Another thing, I mean, it can't, because creativity is about the process, not about the results. And if you bypass the process, then it's not creativity.
00:23:13
Paul: A follow up question about this. The scenario is you're using AI for something. Let's think about it maybe as a research report or some kind of investigative work that you're doing, and you're using AI, and it gives you something that you know is wrong or off topic. What do you do when you are working with AI and all of a sudden it gives you something and you feel like you can't trust it? What's your experience and what do you do and how do you feel?
00:23:51
P8:
If I can't trust it, well, my first assumption is that I did not define the problem well enough, I think.
Paul: I wonder if it's back there.
00:24:19
P8: So it gives me a lot of ideas, and I think, in a way,
I think the hallucinations are not a bug. I think it's a feature.
Paul: Tell me more about that.
00:24:29
P8: I think that, and I just thought of it a few days ago, I'm not fully, the idea is not fully developed, but it is kind of like teaching a young person or a child about something and you tell them, "Here, do this," and they do it, but they're not doing it the right way at first. So you figure, okay, I need to correct this thing. And then slowly you build up their skill level until it's whatever it is that you're teaching them, or a pet if you're training a pet. And I think that if AI was to somehow give us the perfect results every time, we would not engage with it as much.
00:25:19
Paul: Interesting.
00:25:22
P8:
You build a relationship with AI because you have to correct it. You have to pay attention. It's not like sending something to the printer and you get exactly what was on the screen. Then you start engaging with it. And how I talk about it as a partner, I mean, that's giving it a personality and that's understanding it has flaws and strengths, and I think that's the main takeaway for me from AI is that if you want to use the strengths, you have to accept the flaws and work with them.
00:26:04
Paul: That's a great way of putting it. I'm going to use the last minute or two we have left to ask. I can go a little over if you need.
P8: Sure, but I have Copilot training that I kind of have to do even though I don't need to.
00:26:34
Paul: Two related questions. One, the first one is, what's your biggest hope, or what is AI's promise in the future? So by that I mean, do you hope or see the possibility of a significant breakthrough or some positive outcome that AI might enable within the next five, ten years? And if so, what is it?
00:27:05
P8: Yeah, so definitely. I mean, one of the biggest benefits of AI that I see is whenever I hear about something, I just read about the other day, AI discovered a new molecule or hormone that's similar to GLP-1. I've been on GLP-1 for over a year and lost 70 pounds, and I think it's the best drug ever, but it has side effects. And so I just read that scientists used AI to isolate another hormone or molecule or whatever. Again,
that's not something that I can go in and type to Copilot, "Find me a molecule to replace." You have to use it as a tool that augments what you do.
00:27:59
P8: I saw a really good, there was an article and then I saw that there was a really good webinar or something that was put on by one of the think tanks in DC and it had a couple of Nobel Prize winners. Anyhow, they're talking about five different ways AI can go. And really, I think in order for it to work for everybody, it has to support humans rather than replace humans. And the other thing they said that was really interesting is, if you're driving towards a cliff, you're not going to keep driving. You're going to stop or slow down if you feel you're going to drive off the cliff. And they basically said, with AI, we're driving towards a cliff. We need to be able to slow down or stop before we just drive off. I mean, that was their optimism. We should be able to do that.
00:29:02
Paul: That's interesting. And that goes directly to the flip side question, which is what's your biggest concern or fear related to the increasing presence of AI in many aspects of the world?
P8: It's going to change our view of reality.
Paul: Can you say more about that? You said change our view of reality.
00:29:23
P8: I mean, right now reality is based on what we perceive and how we interpret it and how it fits within our beliefs that are learned. If we start perceiving things that are not real often enough, it is dangerous for us. I mean, as it is right now, I do believe that every person creates their own reality, and once they do, and this is kind of a recent thought I had in the last few years, once they do, you can't shift it. It's like your brain can't tell the difference between what you believe in and what's real. So I know I can touch things around me and I believe that I still have family around even if I don't see them. So if we start playing with that perception, it's really going to change the way people can view the world. I mean, it starts with fake news and starts with content that's synthetic, and I mean, not illustrative content like some memes that were floating around recently, but more stuff that looks real and it's very easy for someone to say, "Yeah, of course it's real. I saw it. I experienced it somehow."
00:31:05
P8: For society, that's my biggest fear. But as far as our work and day to day, I mean, on the one hand, it would let people be able to pursue what they love instead of focus on having to spend eight hours a day on something they don't like. But that has to work hand in hand with a different kind of societal model that may exist somewhere. I mean something like universal income, things like that. If we're going to lose millions of positions of employment and we don't allow these people to find other meaning, then that's going to lead to definitely the end of humanity as we know it.
00:32:06
Paul: That's a pretty big fear. Thank you for articulating that. I appreciate that. The last question I had before we wrap up. Is there something, so you've heard what I've asked you, and we've been talking for about a half an hour. Is there something you feel like I should be asking people that I didn't ask you?
00:32:30
P8: Maybe something to the extent of, how do you see your job in five years? It's a discussion. Or how is your job right now compared to what it was five years ago? Then you're not making assumptions, but I don't know if that's beneficial. Maybe it's a good prompt if somebody doesn't know where to go next. But as far as a real question, I mean, it's not a question necessarily, but I have two kids who are almost 20 and 23, and they hate AI.
00:33:24
Paul: I've got a 26 and a 23-year-old, so they...
00:33:30
P8: Absolutely hate anything that has to do with AI. I keep telling them that right now AI is paying my bills because not only do I use it, I'm also on the Chat [employer] project, so building tools for different business units to use using AI. I'm on another huge AI project that I can't talk about, but I see where they're coming from. I mean, there's so many careers that they're not going to embark on because they're going to be gone in just a few years. And if you're not adjusted enough to be able to find a humanity that's still available, then it's going to be very hard.
00:34:24
Paul: Well, that's a great place to wrap up. I'm going to stop the recording and then we'll just finish up.