Paul Sherman
April 20, 2026

The Visual Thinker with a Framework

P12 - UX Designer/Researcher, Advertising & Design

A UX designer and researcher with a visual advertising background who entered AI through Midjourney image generation, consolidated around Gemini as a primary tool, and treats a structured prompting framework learned in an AI training class as his most significant unlock for getting reliable, audience-appropriate output.

I think critical thinking is so so important because I will notice in this ongoing chat things that are left out, I will question and they'll act like they forgot. I don't know where that disconnect is, but I would say if you're not really critically thinking about the information you're getting, it's going to probably let you down in some ways.

P12: Survey Data and Session Summary

Survey Responses

No pre-interview survey response was received for this participant.

Background

P12 is a UX designer and researcher whose career began on the visual side of advertising before transitioning into UX design and research. At the time of the interview, P12 was between roles and described the job market as being in a "weird time." He is currently taking a formal AI prompting course, which has become a significant source of his working practices with AI tools.

P12's entry point into AI was image generation through Midjourney, which aligned naturally with his visual background. He tried several tools after that, including Pi.ai, ChatGPT, Copilot, and ATLAS.ti for UX research analysis. He has since consolidated around Google Gemini as his primary tool, drawn by both its image generation capabilities and its text-based functionality. His tool adoption pattern is pragmatic rather than exploratory: he dropped Pi.ai and Copilot when they no longer fit his workflow, not because of dissatisfaction with the tools themselves.

Key Findings

A Structured Framework from Formal Training

P12's most distinctive contribution to this study is a named prompting framework he learned in an AI training class: RRCC (Role, Result, Context, Constraint). Before entering any query, he defines the role he wants the AI to play, the result he's looking for, the relevant context, and any constraints on the output. He provided a fully worked example during the interview and described it as something he will "most likely use predominantly going forward." This is notable because most participants in the study developed their techniques through trial and error. P12 acquired his through formal instruction, which is a different pathway to prompting competence.

"So, it's RRCC: Role, Result, Context, Constraint. So before I even put in what I want, the information question, I do the role I want it to play, the result I want, you know, the goal, context, constraint."
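The RRCC structure is essentially a prompt template. As a minimal sketch, the four fields could be assembled programmatically before the actual question; the function and field names below are illustrative, not taken from P12's class materials:

```python
def build_rrcc_prompt(role, result, context, constraint, question):
    """Prefix a query with Role, Result, Context, and Constraint lines
    before the actual question, per the RRCC framework."""
    return (
        f"Role: {role}\n"
        f"Result: {result}\n"
        f"Context: {context}\n"
        f"Constraint: {constraint}\n"
        f"Question: {question}"
    )

# P12's own worked example, restated through the template:
prompt = build_rrcc_prompt(
    role="Act as an expert movie buff",
    result="A listing of movies playing in my area",
    context="I live in such-and-such city",
    constraint="Limit the list to non-R-rated movies",
    question="What should my family and I go see?",
)
print(prompt)
```

The point of the structure is that every element is declared before the query itself, which is what P12 credits for the reliability of the output.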

The Atomic Clock Test: Active Trust Monitoring

P12 described an ongoing practice of actively testing Gemini's reliability within long-running chat sessions. He caught Gemini using past tense when referring to the current time, which led him to instruct the system to "always refer to the atomic clock" as a temporal anchor. He also described noticing omissions in ongoing conversations, where the system would "act like they forgot" information that had been established earlier. His framing positions critical thinking not as a nice-to-have but as a necessary safeguard against being let down by AI output.

"I think critical thinking is so so important because I will notice in this ongoing chat things that are left out, I will question and they'll act like they forgot."

Math as the Equalizing Domain

When asked about his biggest professional win with AI, P12 pointed to business plans, pitch decks, and financial formulas. He explicitly identified math as a domain where he lacks native skill ("I'm not a math guy") and described AI as the thing that lets him participate in quantitative work he couldn't do on his own. This is a clean instance of AI bridging a specific competency gap rather than broadly accelerating existing strengths.

"I'm not a math guy, so having those to be able to put in Excel or what have you is really helpful. Anything math-heavy would be, you know, anything that would help with quant or whatever, big help for me."

The Handwriting Parallel: Skill Erosion as Embodied Experience

P12 drew an analogy between AI-driven skill erosion and the way typing has degraded his handwriting. Rather than treating skill loss as an abstract concern, he described it as something he has already experienced physically: dexterity declining as he types more and writes by hand less. He extended the concern beyond the individual to aging populations, arguing that cognitive skills need active maintenance in the same way that motor skills do.

"It's almost like my handwriting skills gone downhill as I type more and more for text. I noticed that dexterity isn't quite what it should be sometimes."

Risk-Based Disclosure: A Threshold Rather Than a Blanket Rule

Unlike several other participants who described disclosure norms developing through social pressure or organizational policy, P12 proposed a risk-based threshold for when disclosure matters. He hasn't encountered formal requirements and doesn't advocate for blanket disclosure. Instead, he argues that disclosure becomes necessary when AI-generated content involves statistical or financial data that could drive significant decisions. The standard is proportional to the stakes.

"I think you need a disclaimer saying these numbers need to be double-checked for accuracy for sure. Anything that goes into the area, you know, we're going to make this big million-dollar decision based on this widget not working correctly, you know, that's what has to be double-checked."

Emerging Themes

  • Trust Calibration: Deliberate, ongoing practices for evaluating AI trustworthiness. Key quote: "I think critical thinking is so so important because I will notice in this ongoing chat things that are left out, I will question and they'll act like they forgot."
  • Useful AI Techniques: Specific, replicable prompting strategies and workflows. Key quote: "So, it's RRCC: Role, Result, Context, Constraint. So before I even put in what I want, the information question, I do the role I want it to play, the result I want, you know, the goal, context, constraint."
  • AI as Equalizer: Using AI to bridge knowledge or skill gaps. Key quote: "I'm not a math guy, so having those to be able to put in Excel or what have you is really helpful."
  • Skill Erosion: Observed or perceived atrophy of specific skills from AI/automation dependency. Key quote: "It's almost like my handwriting skills gone downhill as I type more and more for text."
  • Disclosure Norms: Emerging standards about when and how to attribute AI contributions. Key quote: "I think you need a disclaimer saying these numbers need to be double-checked for accuracy for sure."
  • Knowledge Displacement: Concern that AI dependency erodes critical thinking and foundational judgment. Key quote: "If you're not really critically thinking about the information you're getting, it's going to probably let you down in some ways."

P12's trust calibration is behavioral and ongoing rather than rule-based. He doesn't apply a checklist; he actively monitors AI output within long-running chat sessions, catching errors like incorrect temporal references and questioning omissions. The atomic clock anecdote is a distinctive, concrete example of a user imposing an external accuracy standard on an AI tool. His emphasis on critical thinking as the essential safeguard connects trust calibration to knowledge displacement: without the habit of questioning, the output becomes unreliable and the user becomes vulnerable.

"So an ongoing chat I have in Gemini, I saw past tense being used in a conversation. So I asked at the current time which was off, which was really sort of shocking to me, and it's obviously not a constant, you'd think maybe a computer would be. So I had to ask it to going forward always refer to the atomic clock."

P12 contributes the most formally structured technique in the study so far. Where P9 described persona-based prompting and iterative refinement strategies developed through personal experimentation, P12's RRCC framework came from a formal prompting class. The framework is structured, named, and comes with a worked example. He also described a secondary technique: running the same text through different voice settings (executive assistant vs. English professor) to test how tone changes the output for different audiences.

"So say, here's the example: role is 'act as an expert movie buff,' result, 'I'm looking for listing of movies playing my area,' goal, 'to take my family, friends who are fun,' context, 'I live in such-and-such city,' constraint, 'limit list to non-rated R movies.'"
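P12's secondary technique, running the same text through different voice settings and comparing the results, can be sketched as a simple A/B loop. The voice instructions and the `send_to_model` callable below are hypothetical stand-ins for whatever chat interface or API is in use, not anything P12 described using:

```python
# Two "voice" system instructions drawn from P12's own comparison.
VOICES = {
    "executive assistant": "Respond in the brisk, action-oriented voice of an executive assistant.",
    "English professor": "Respond in the measured, explanatory voice of an English professor.",
}

def tone_variants(text, send_to_model):
    """Send the same text under each voice setting and collect the
    outputs so they can be compared side by side for a given audience."""
    return {
        name: send_to_model(system=instruction, user=text)
        for name, instruction in VOICES.items()
    }

# Usage with a stubbed model call (a real call would go to a chat API):
fake_model = lambda system, user: f"[{system}] {user}"
results = tone_variants("Summarize the meeting notes.", fake_model)
for voice, output in results.items():
    print(voice, "->", output)
```

The comparison step is the technique: same input, two voices, and the differences in the outputs tell you which register fits the audience.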

P12's use of AI as an equalizer is focused specifically on quantitative work. He identifies math and financial modeling as domains where he lacks native ability, and describes AI as what makes participation possible. This is consistent with how P2 and P8 manifested this theme: AI closing a specific competency gap rather than broadly accelerating productivity across all tasks.

"My biggest win I'd say being able to do the business plans, pitch decks, financials. There's also like formulas that were given that, I'm not a math guy, so having those to be able to put in Excel or what have you is really helpful."

P12's contribution to skill erosion is the handwriting-to-typing analogy, which grounds an abstract concern in embodied experience. He has personally felt the loss of manual dexterity from reduced handwriting, and draws a direct parallel to how AI could erode the skill of collating and synthesizing material independently. His extension to aging populations adds a demographic dimension that other participants haven't raised.

"Yeah, I could see that skill going downhill. It's almost like my handwriting skills gone downhill as I type more and more for text. I noticed that dexterity isn't quite what it should be sometimes."

"And that's not a good thing, especially for aging populations. You know, they need to keep that brain strong."

P12's approach to disclosure norms is distinct from other participants in the study. Rather than describing norms that are developing through social pressure (P5, P7, P8) or being formally established from a leadership position (P11), P12 proposes a risk-based threshold. Disclosure matters when the stakes are high, specifically when statistical or financial data could drive significant decisions. Below that threshold, he hasn't seen or felt the need for formal requirements.

"I haven't seen any mention of having to do that, but I'd say for something that was totally certain by either statistical data, I mean, I think you need a disclaimer saying these numbers need to be double-checked for accuracy for sure."

P12 frames knowledge displacement as an individual responsibility problem. His concern is not about organizations or systems failing but about individuals who consume AI output without engaging their own critical thinking. The warning is direct: if you don't question what you're getting, it will let you down. This passage is also coded under trust calibration (a nested annotation in the transcript) because the same behavior, actively questioning AI output, serves both as a trust practice and as a bulwark against knowledge erosion.

"I would say if you're not really critically thinking about the information you're getting, it's going to probably let you down in some ways."

Interview Transcript

00:00:00

Paul: I'd like you to tell me the story of your first "oh wow" moment with AI. So, what was going on that made you try AI? What happened that made the light bulb turn on for you?

P12: I'd say creating imagery, and this was through Midjourney. A former co-worker had been using it and I tried it out and was pretty amazed. I tried random ideas for a while and then got more focused and sort of dove more into that for a while.

00:06:48

P12: So yeah, and I come from sort of the visual side, advertising, before I got into more the UX design and research. So it sort of was easy to just jump back in.

Paul: So, fast forwarding to today, what other AI tools have you adopted since then? And which ones stuck and which ones didn't?

P12: Let's see. Pi.ai, which I'm not sure if that, I guess it still exists. I don't know who really created it, but I really like that one. That was probably the third one I tried after ChatGPT. The focus for the last four or five months has been Gemini. I like the imagery going out of there. I don't just use it for that. I use it for more text-heavy type of subjects. Let's see. I've used ATLAS.ti for UX research analysis. More about sort of getting sentiments out of transcripts for videos. That's helpful even though the UI was funky.

00:08:01

P12: What else I used? I think I mentioned Midjourney, ChatGPT and Gemini. I think that's, oh, I did use Copilot a little bit, but not all that much.

Paul: Are there any applications that you tried and just stopped using, and why?

P12: I'd say Pi.ai, for not any particular reason. Probably because I have a sort of a pro or a paid account through Google Gemini. That's probably one reason. Copilot because I wasn't using Microsoft tools at that one company anymore. Although my spouse says she doesn't really like Copilot there when she works on it. So it really has nothing to do with me, but anyway, just thought about it randomly. Does that answer your question? Did I veer off a certain way?

00:09:21

Paul: I'd like you to think about one thing that you do regularly. It could be for work or personal life, that AI has changed the most. Walk me through what you used to do versus what you do now and how you do it.

P12: I think a powerful tool that has helped is coming up with slide decks using ChatGPT or Gemini. Recently I am taking an AI prompting course. So, there's some new things I didn't know about like going into Canvas through the AI, which was nice. So, I can export the, like say there's a meeting, you can get the synopsis of the meeting notes in a certain tone, can bring those into Gemini and produce actual images, which does a much better job than ChatGPT. So I think the synthesis between those programs, you know, and obviously the omni-channel between devices which is really helpful, I think that's the most powerful. I did do a business plan and a pitch deck through ChatGPT last year which was actually pretty nice. Obviously edited afterwards but that was very helpful.

00:10:39

Paul: Do you feel like using these new ways of doing things has changed how your work is evaluated or what people expect from you?

P12: That's a good question. I don't, no, I don't think it has changed how people value my, the output is, I can say productivity-wise it's definitely sped things up, you know, things that could derail me in the styling of something it can just sort of just get the information presented properly. And then the tone of voice which is really important, it's like I did a test a couple weeks ago. The voice was sort of with an executive assistant compared with an English professor. So same text and I got two different answers. So being able to do that for different audiences I think is really helpful. You know, the whole storytelling thing that is really important, especially talked about in the UX research read-up.

Paul: What's been your biggest win with AI in your professional life so far? And on the flip side, what's been the biggest disappointment or surprise failure?

00:11:59

P12: My biggest win I'd say being able to do the business plans, pitch decks, financials. There's also like formulas that were given that, I'm not a math guy, so having those to be able to put in Excel or what have you is really helpful. Anything math-heavy would be, you know, anything that would help with quant or whatever, big help for me.

P12: Disappointment, I think in the going back to the image side, which I was talking about with Midjourney, I'd say I, and I forgot to mention that earlier. I really liked what Midjourney could output, but I could never get it to do certain characters I wanted really well, or if I wanted to use like myself in some images, it didn't do a nice job. And there's a lot more sort of coding, if you will, in Midjourney than what Gemini has, and I got spectacular results out of Gemini without having to put in certain codes.

00:13:11

Paul: Lots of people have related their experiences with AI when it's just wrong. So my question is, how do you decide whether to trust what AI gives you? What are your detectors and what tips you off that something might be wrong? And what do you do?

P12: I have one great example from a couple years ago, and then I'll switch to another answer, saying it's a two-part answer. The first one is I had some data from a survey come back to me. It was obvious, compared with a two-sentence answer and an open-ended survey question, it was a paragraph that was an obvious [UNCLEAR: "stand up. I could just serve it to you there. answers out after that"]. The second part is, sorry, repeat the question. I think I'm missing part.

Paul: How do you decide whether to trust what AI gives you? And what do you do when you can't trust it?

00:14:10

P12: Okay. So an ongoing chat I have in Gemini, I saw past tense being used in a conversation. So I asked at the current time which was off, which was really sort of shocking to me, and it's obviously not a constant, you'd think maybe a computer would be. So I had to ask it to going forward always refer to the atomic clock. So occasionally I'll ask what time it is and sometimes it'll also tell me what time it is when I answer for a new part of the chat. But I think critical thinking is so so important because I will notice in this ongoing chat things that are left out, I will question and they'll act like they forgot. I don't know where that disconnect is, but I would say if you're not really critically thinking about the information you're getting, it's going to probably let you down in some ways.

00:15:14

Paul: Do you see any norms or, you know, implicit ways of behaving that are starting to define how and when people should disclose that they used AI?

P12: I haven't seen any mention of having to do that, but I'd say for something that was totally certain by either statistical data, I mean, I think you need a disclaimer saying these numbers need to be double-checked for accuracy for sure. Anything that goes into the area, you know, we're going to make this big million-dollar decision based on this widget not working correctly, you know, that's what has to be double-checked.

Paul: Have you ever encountered work product from someone else that felt like a low-effort AI output? And if so, what was your reaction?

P12: I don't quite have that example, but I do think based on a sort of a group little quick project I had, I think based on the experience of somebody and their maybe writing skills, you're going to get a poor outcome.

00:16:53

P12: So think garbage in, garbage out. Maybe, maybe, so I don't know if that's totally true, but that just comes to mind.

Paul: Here's a question I just added this morning, actually. Do you have a really useful AI technique or practice that you've developed or adopted when it comes to using AI that's just something that works especially well for you? Maybe something you would characterize as an unlock or just a big gain.

00:18:17

P12: So, it's RRCC: Role, Result, Context, Constraint. So before I even put in what I want, the information question, I do the role I want it to play, the result I want, you know, the goal, context, constraint. So say, here's the example: role is "act as an expert movie buff," result, "I'm looking for listing of movies playing my area," goal, "to take my family, friends who are fun," context, "I live in such-and-such city," constraint, "limit list to non-rated R movies." So that really helped with certain outputs and that's something I will most likely use predominantly going forward.

00:20:00

Paul: Did you stumble on that technique or read it, adapt it yourself?

P12: No, it was in the prompting class. I don't know where that comes out of. I should really ask, follow up and ask where that came from because that wasn't covered. But that's a powerful one I think, and it goes back to that sort of voice and tone that I was mentioning with the different roles.

Paul: I want you to zoom out and tell me, how does this increasing presence of AI in the world, both work and personal, make you feel?

P12: I think it's exciting, at the same time dangerous. I mean, that can be anything. You know, it's a two-sides thing, how things can be used. When it comes down to it, it's a tool, but the builders are the tool.

00:21:08

P12: You don't know where they're coming from, what their biases are, and that's where sometimes I'd like a little more information. But you know, it's like the sources that you can get, you know, if you ask for sources in material, that sort of makes me more comfortable, and I see that.

Paul: Do you ever worry about losing certain skills because you're leaning on AI? And if so, tell me about that.

P12: Yeah, I think that could happen. You know, instead of going through material myself, notes and sort of collating myself and thinking that out. Yeah, I could see that skill going downhill. It's almost like my handwriting skills gone downhill as I type more and more for text. I noticed that dexterity isn't quite what it should be sometimes.

00:22:12

P12: I can see the same sort of parallel. Yeah, for sure. And that's not a good thing, especially for aging populations. You know, they need to keep that brain strong.

Paul: I want to get at what your biggest fear is with AI, and what breakthrough you think AI might lead to?

P12: I'm gonna start with the negative first because I think this goes back to chatbots in general. Taking the human element out of conversations on the phone is not, and I've had several instances where it's not the greatest. And I think that trend needs to reverse. I know there's, you know, people don't want to spend on staffing, but whatever. You know, especially in the health field, I know there was an onboarding where they wanted to use an AI for a new patient intake, and that was not a good, I mean, I can't say it was done improperly, but you know, you don't have a person where that can repeat the question.

00:23:36

P12: You know, there's no follow-up, just what they baked into it. So that's where I think it needs to be improved and they need to go backwards on a lot of that.

Paul: Okay. And what about on the positive side? What do you think AI might unlock for us?

P12: I think anything in the scientific field that really has to crunch data, in the medical field, all that human health where it can just resource, you know, and do so many searches and get the information. And I think that's where it really shines. And with the computing power behind it, I think that's really where the improvements in a lot of things are.

Paul: What do you think the biggest gap is between what AI can do for you right now and what you actually need it to do or want it to do?

P12: I don't think I've been limited. I think maybe given the limitation is sometimes the quality of the answer, or I want more sometimes. It could just be me not knowing which questions to ask next.

00:24:59

Transcription ended after 00:26:23

AI Use Disclosure

I used AI to analyze the data collected via interviews and surveys. How?

  • I took notes after each session.
  • I fed those notes to several AIs, along with the moderator guide, project proposal, session transcript, the participant's survey responses, and a codebook of tags and themes I've been iterating as I collect data.
  • I prompted each to write a background, findings, and emerging themes section.
  • Then I iterated on each AI's draft, challenging the AI where appropriate and removing what I'm euphemistically calling "hallucinatory content" :-).
  • I collected each AI's drafts, added them to the project I've set up in Claude Cowork, and prompted it to draft the background, findings, and emerging themes section, pushing back as appropriate.
  • Then I edited the content, because "human in the loop" means "I have final edit." At least to me it does.
  • I then published each session writeup.

There's a bit more to it, but I'm trying to keep this short. Reach out if you want to talk about my AI-assisted workflow, which I'm still evolving as I go.