Paul Sherman
April 16, 2026

The Tunnel Vision Experiment

P7 - Principal Design Researcher, Software Consulting

A principal design researcher at a software consultancy ran a controlled comparison of AI-assisted vs. unassisted transcript analysis and found that AI excels at targeted retrieval but creates tunnel vision that strips away the contextual understanding where the most valuable implicit insights live.

I think it's part of the process for us as researchers to immerse ourselves in the data. If we skip that, we don't understand the data afterwards, and we are not retaining that important knowledge that is kind of layering in the back of your mind.

P7: Survey Data and Session Summary

Survey Responses

Age: 45-54
Education: Master's degree
Role / Level: Team lead
Job title: Principal Design Researcher
Years of experience: 16-25 years
Organization description: Consultancy for software development
Industry: Software (IT consulting)
Individual AI tools used: Text generation (creating documents, emails, summaries), Media creation (images, audio, video), Search and information retrieval, Data analysis and synthesis, Workflow automation and process automation
Organizational AI tools: Internal search and knowledge summarization, Predictive analytics for business forecasting, Code generation and developer tools
AI adoption involvement: No direct involvement in adoption or deployment (mostly a user of a deployed AI system)
Biggest work win with AI: Some automation of our workflow, specifically the UX research comms; then experimented with some AI features to analyze data from quant studies and supporting the reports we produced.
Biggest work disappointment with AI: The time it takes to even understand which tool to use and how. The 'generative' part that translates into making up data instead of pinpointing specific data points (happening during analysis).
Organization's biggest AI success: We are transitioning from a consultancy offering our services to a product-led service provider (we launched our proprietary AI platform for software development).
Organization's biggest AI challenge: I'm not sure :)

Background

P7 is a Principal Design Researcher at a software development consultancy with 16 to 25 years of professional experience. She holds a master's degree and leads a team of researchers. Because she works at a consultancy, her access to AI tools depends on the client engagement: some clients restrict tooling, while her own company encourages experimentation. At the time of the interview, she was staffed on a client project where she led a three-person research team with varying levels of seniority.

P7 uses AI across an unusually wide range of personal and professional domains, from trip planning and legal comprehension to cat health research, book translation, storyboard generation, UX research analysis, and research operations automation. She organizes her personal AI use into topical folders and blurs the line between personal and professional use, at one point catching herself categorizing the AI-assisted translation of her book on qualitative research interviews, a professional topic, as "personal" because it was self-published.

Professionally, P7's most significant AI experimentation has been with Condens AI, a research repository tool she values for its discreet, non-invasive integration of AI features. She has also used Gemini, ChatGPT, Recraft, and NotebookLM across different stages of her work. Her approach to AI is characterized by deliberate experimentation: she does not simply adopt tools; she designs informal tests to evaluate whether they add value.

Key Findings

The Tunnel Vision Experiment

P7 conducted a controlled comparison that no other participant in this study has described: she analyzed six transcripts using Condens AI's conversational search feature and six without it, then compared the results. The AI-assisted analysis was faster and more efficient for targeted queries, such as checking whether users noticed a specific page component. But it came at a cost.

The AI search created what P7 calls "tunnel vision." When used on a transcript that hadn't been coded yet, it produced narrow answers to specific questions without building any sense of the session as a whole. The sequential, contextual understanding that comes from reading through a transcript, following the conversation's flow, noticing what was said before and after a key moment, was lost.

"The problem is that that kind of analysis gives you tunnel vision. So you don't get the context in which that is said. You don't get if they said it before or after something else. Because the moment you code in a sequence, you follow the conversation, you follow the flow."

The Implicit Insight AI Missed

In a separate experiment using NotebookLM with Gemini, P7 found that the AI reproduced most of her themes but completely missed the one she considered most valuable: an implicit insight she had read between the lines of a couple of participants' comments. This was not an explicit finding but an unarticulated user need that required interpretive depth to surface. When the team used these insights to pitch ideas to a client, the AI-generated themes produced generic ideas. The one idea rooted in P7's implicit finding was the only one the client loved.

P7 pushed back on colleagues who praised her "good idea," insisting it was not creativity but research: "It's rooted in something that I saw in the data from the research." The distinction matters because it demonstrates that the value of human analysis is not just thoroughness but the capacity for interpretive leaps that, in this comparison, the generative AI did not replicate.

"No, no, that's not a good idea I had. It's just the research. It's rooted in something that I saw in the data from the research."

Navigating Disclosure Across a Seniority Gradient

P7 provided the most detailed account in this study of how AI disclosure norms get negotiated within a team. When she began leading a three-person research team, she discovered three distinct postures toward AI use, each shaped by professional position. The junior trainee was defensively opposed: "Why are you asking this? No, I'm not." She was intentionally avoiding AI to protect her learning. The senior colleague was quietly using AI to generate first drafts. And P7 herself was trying to open a conversation without sounding judgmental.

Her solution was to broker an explicit team agreement: what tools are we using, when do we use AI versus not, and what are the expectations at each seniority level? She validated the junior's instinct to learn the craft first ("Good call. I agree with you") while creating space for the senior to be transparent about AI-assisted work. The result was a set of negotiated norms rather than either a blanket policy or unspoken assumptions.

"And I feel we shouldn't be ashamed of talking about [disclosing AI use]. And especially in a power dynamic where there's different seniority, I think we should be completely transparent about what the expectations are."

Critical Thinking as a Muscle, Not a Trait

P7 frames critical thinking not as an innate personality characteristic but as a skill that must be actively maintained. She uses the metaphor of a muscle: it atrophies without exercise, and AI creates the conditions for atrophy by making it easy to skip the hard cognitive work. She acknowledges that AI "makes me lazy" and describes an intentional practice of resisting that pull.

This framing carries particular weight because P7 extends it from a personal discipline to a mentorship obligation. As a team lead, she feels the "ethical pressure of passing along" the importance of critical thinking and sense-making to junior researchers. She positions her generation (Gen X) as "bridges" who saw the analog world, the digital world, and now AI, and who bear responsibility for transmitting the skills that predate all three.

"One is it makes me lazy. So I need to intentionally say, 'No, I'm going to keep thinking for myself.' ... And this is one thing I'm really keen on passing to the more junior colleagues. It's like, you cannot skip that."

The Generational Credulity Escalation

When asked about her biggest fear related to AI, P7 offered a historical frame that situates AI-driven credulity in a multi-generational pattern. She described her grandmother trusting the radio, her parents trusting television, her own generation trusting the internet, and the current generation trusting AI. Each medium expanded the scale and reduced the friction of accepting information at face value.

P7 argues that AI represents not a new phenomenon but a massive amplification of an existing one. The difference is that previous media left room for alternative verification ("I can go and read a book, or I can go and ask an expert"), while AI's ease and comprehensiveness reduce the motivation to seek independent sources. The effortlessness of access, she argues, devalues the information itself.

"I remember distinctly my grandmother telling me, 'Well, this is true because I've heard it on the radio.' And then my parents saying, 'No, this is true because I've heard it on TV.' Then, 'This is true because I read it on the internet.' And [this is] true, because AI told me."

Emerging Themes

Trust Calibration: Deliberate, ongoing practices for evaluating AI trustworthiness on a spectrum. Key quote: "I don't take what comes out of an AI at face value, ever."
Augmentation Not Replacement: Using AI to enhance existing activities rather than offloading tasks entirely. Key quote: "It's just helping me work with my thinking, or articulate what I'm thinking, or challenging what I'm thinking."
Knowledge Displacement: Concern that AI dependency erodes foundational knowledge and judgment across generations. Key quote: "People stop thinking. Stop critical thinking. That's a huge risk at the population level."
Disclosure Norms: Emerging informal standards about when and how to attribute AI contributions, shaped by seniority dynamics. Key quote: "We shouldn't be ashamed of talking about [disclosing AI use]."
Self-Maintenance: Deliberate practices to prevent skill atrophy from AI dependency. Key quote: "It gets lost because it's a muscle. So we need to keep practicing and using it."
Hallucination Frustration: Disappointment at AI producing fabricated content with false confidence. Key quote: "It even invented places to eat that don't exist, even with a fake website."
Apprenticeship Erosion: Concern that AI prevents juniors from developing foundational skills through hands-on experience. Key quote: "I'm intentionally not using it because I'm learning."
Pervasive Integration: AI adoption spanning many life domains to the point where boundaries blur. Key quote: "It's funny because I considered [using AI to help draft a book] personal and not work."

Interview Transcript

00:00:00

Paul: What was your first light bulb moment with generative AI? And by that I mean, can you recall the first time you used AI and said "oh wow," kind of the "oh wow" moment.

P7: It's been a while. Not because I'm an early adopter, but I need to remember. I think the first use I've done is probably generating some images. And I remember it was quite a while ago, so the images weren't as good as they are today. So imagine that, and it was already powerful in my mind. But I think the really wow moment was the first time I've used it to try and help me plan a trip. Because I've asked probably...I don't even remember which trip, by the way. It's probably one of the first times because I tend to have multiple trips that I would like to do in the future just in parallel somewhere.

00:01:11

P7: And I usually have my document online where I paste stuff that I find around and so on so I don't forget. But I think that time I actually asked a really specific question, like, "Okay, I want to go to Scotland for instance, and probably I will have 10 days, and what I like is this, and my husband loves photography, and we don't really like cities. What can we do? Can you plot out a plan for us?" And I was impressed by how fast it was and how quickly I already saw an output that was like a day-by-day itinerary with multiple options and so on. I think that was the one time. I don't know if you want to know more about that, but...

Paul: No, that's interesting. You are not the first person to mention specifically a trip planning experience as being the aha moment.

00:02:15

Paul: Okay, let's skip forward to today. So what AI, specifically generative AI tools, are you using now and in what contexts? This can be work, personal, or both.

P7: I'm definitely using it for both. On a personal level, compared to the past, I now have folders by topic. So I have definitely my travel stuff, which is probably the most exciting piece. I've started recently using it for understanding legal stuff, because unfortunately something happened in my family and I needed to understand that. So I started using it for, "Okay, can you explain how people can use this particular law in Italy?" and so on. And, "What can I do given the situation?" Just to get more of an understanding rather than legal advice. I have a lawyer, but there's so much you need to absorb and understand, and I was trying to say, "Okay, explain to me what does it mean, what can happen," or things like that.

00:03:32

P7: There's probably many more things. Oh, there's definitely a section about my cat's health specifically. Like, I don't know, what should they eat and so on. And again, it's not a substitute for a vet, it's actually in preparation. I want to go to the vet with good questions and hypotheses and ideas. And it gave me some ideas about, "Oh well, you know, it can be that your cat always has nausea, so an automatic feeder could help. Why don't you try?" So just exploring a few things. There's probably more that I just don't remember, but I think I use it a lot even to understand stuff that I don't understand properly. I just try and say, "Okay, explain it to me in very simple terms, what does it mean?" and so on. That's on a personal level. I've used it for... I've published a book back in 2021 in Italian and I have a very lazy plan of publishing in English at some point. So what I've done is I actually purchased

00:04:44

P7: a subscription to one AI tool to help me do the rough translation of everything. And then I'm planning on hiring a real editor, like a person, to help me edit that into something more readable. But that was also something else that I've done.

So it's funny because I considered [using AI to help draft a book] personal and not work. I listed that as one of the personal uses of AI instead of work, because the book was self-published. So it was really a personal project of mine, but the book is about interviews, qualitative interviews for research.

00:05:38

P7: So for work, I think I've started more recently. Well, depending on what I've had access to. I work for a consultancy, so that means if I'm staffed on a client's project, of course I need to adhere to whatever tools they use and they authorize. Otherwise, with my company, which is [employer], we have a set of tools for internal use. We definitely encourage experimenting a lot. But the thing is, if I didn't have the time or the real opportunity to use it, I wouldn't use it. So again, it's funny that we talk about the past like it was 10 years ago, but it was actually maybe a year and a half ago. But it feels like a long time ago. I think my first interaction was generating storyboards that I was using to articulate a design concept that we came up with.

Paul: What tool did you use to generate the storyboards?

00:06:41

P7: It was a mix of tools because I've tried with Gemini, I think at one point. But before that, wow, I don't even remember. I think it might have been ChatGPT. Then I've tried with Recraft, which I really like. It was really for generating illustrations and so on. The thing is that at the time it was difficult to have consistency across the different steps of the storyboard, the different frames. So we did some funny, really funny stuff with colleagues about like, "It doesn't matter if the person changes a bit across the storyboard, that's not the important point." And we would actually do a patchwork, cutting pieces out of it and mixing it up and putting balloons when needed. It was actually fun and really quick turnaround to illustrate a concept, rather than, you know, in the past we used to actually hand-draw those. But it was powerful.

Paul: Am I understanding correctly that you used the tool to create the storyboard images? And to me it sounds like you took the images at a certain point, and maybe put them in a tool like LucidChart or Mural or FigJam and then arranged them and cut and pasted.

P7: Or a board. An online board. Yes. We even did it I think in PowerPoint at some point because we needed to submit those concepts. So we had to document those concepts and as part of that were attachments and pages. So basically we put it probably in slides just because it was giving us the frame, the space, the layout. So we actually built it, but yeah, we definitely used multiple tools. So I was using a combination. And instead, nowadays, I did something similar more recently and I've used Gemini.

00:08:50

P7: I think there is a... I don't remember, so forgive me about the name. I don't know the skin or the Gem or whatever. But there is a Gem I think that's not supposed to do storybooks but does something really similar. I think it's called Storybook maybe. And the good thing is that it was generating consistent stuff. So it was much easier this time around. I just needed to again cut and paste into our board, but apart from that it was almost done to the level that I could use it. So that's one. Then the other one that I've been experimenting with much more recently is within UX research.

Paul: Tell me about that.

00:09:59

P7: In that case, I've been using it for two different reasons. One, within the research analysis and synthesis, we're using a tool called Condens. I don't know if you know it. It's very similar to Dovetail, or what used to be EnjoyHQ. I mean, it's for coding and... I'm a big fan of this tool, first of all, because I've known it since it was a startup. Now they're pretty much very well known, especially in Europe, I would say. Condens AI. I really like the way they've integrated AI in the tool, which is very discreet. So it's not invasive. It gives you all the agency in the world, and especially when it comes to research, that's for me definitely one of my goals. Because

I really don't trust a synthesis or analysis and synthesis done entirely by AI. But even with the overview, I think it misses a lot of important stuff that's between the lines. And by the way, it's not just that. I think it's part of the process for us as researchers to immerse ourselves in the data. If we skip that, we don't understand the data afterwards, and we are not retaining that important knowledge that is kind of layering in the back of your mind.
You're layering all the research that you have done, and you come with a digested bulk of knowledge that needs time to sediment. I think we cannot skip that, or actually, not that we cannot, we shouldn't.

00:11:31

Paul: You are speaking my language. That idiom of course means you articulated my thoughts better than I could, frankly, about that. Which is that you can't, as a researcher, it is our job to be one with the data and understand it at the level of...

P7: The deep thinking has to come from even staying with the data, staying with the chaos at some point and being there and saying, "You know what, I need this," because at some point it will make sense to you. And you need that. And actually it's not just a lonely kind of activity, it's better if there's more than one researcher actually doing this. And so what I like about this tool is that it doesn't offer you this, at least not quickly. So you're actually inclined to use it just for support on specific things.

00:12:36

P7: And one of the cool things they've developed now is that for every session, every transcript, it does the transcript. I mean, for me that's taken for granted since a long time. So transcript, yes. I usually correct the transcript when I'm analyzing it if I need it. Otherwise, if it's understandable, I will leave it as it is, even if there's some typos here and there. But if I capture something as a code, I will definitely refine it and check it. At the same time, you can have a conversational AI search within that transcript where you ask questions in natural language and it will pinpoint you where in the transcript they talk about that and it will summarize that. Which is not a substitute for coding, but it's a support.

Paul: I think of it as a way of coding in a two-way fashion. So typically, we code as we read and as we edit the transcript and clarify it.

00:13:36

P7: I hear you. Yes. And one interesting thing, because I've done a little experiment. We were trying to understand how much value we're generating if we're using AI in our process. So I've done an experiment: I analyzed six transcripts with this AI search and six without, and started to compare. What happens? And it's not just because the temptation is always like, "Were you faster?" Well, yeah, I was faster.

Paul: Right. So you answered the question, was it faster? What about, was it better, or was it equal to...

P7: So it wasn't. It was different. And the interesting thing is that it was really useful and really fast if you had a very specific question in mind.

00:14:32

P7: So if you're looking for something that you already know, that is definitely more efficient. And I can give you an example: "Okay, did the users notice this particular component in the pages?" And probably, "Did they say anything about it?" That was super cool, because I would do it quickly throughout all the sessions one by one and I would say, "Okay, they didn't. Yes or no." So I was really fast in analyzing that.

The problem is that that kind of analysis gives you tunnel vision. So you don't get the context in which that is said. You don't get if they said it before or after something else. Because the moment you code in a sequence, you follow the conversation, you follow the flow. There is some logic behind it.
And if you do it after you've already coded everything, well, it's different because you've got that sense from what you did previously.

00:15:47

P7: But if you do it with a transcript that you haven't coded yet, you basically don't get a sense of that session. You only get, again, tunnel vision. Like, "Oh, I understand this, but that's it."

Paul: I have a comment on that that I'm trying to articulate. And I know this is not about me, but we're in the same general area. So I like that this is a discussion. Here's my thinking. I realize now that I run a session. Let's take your session. I will run the session. I will go through the transcript like you do. I'll read it. I'll find the clear misspellings and grammar errors and I'll change them. So sometimes you'll see "impatient" instead of "inpatient," which is actually something from another interview yesterday.

00:16:47

Paul: And then I'll remove certain dysfluencies and leave others in, and connect the times where I was a bad moderator and interjected and it splits what you've said. And then I'll code. And then I'll give the transcript to the AI that I'm using, but I won't give it my codes. I will give it sometimes my theme codebook, but sometimes I won't. And what I found is that I have my themes and my codes, and I want to see what it does independent of me. So I guess I'm using it as an independent researcher that hopefully will surface things that I was either biased against seeing, or was, because I'm a human, just didn't see.

P7: Okay.

Paul: So I use it to enhance what I'm doing, but never to replace my synthesis and analysis.

P7: Absolutely. Yeah. I think so. I've used it like that just in one occasion in which again I was trying to compare, to understand if it was worth it or not. And actually the result was rather the opposite.

00:18:03

P7: So it kind of picked up my same themes, the majority of them. But there was one specific insight that I thought was the most valuable one that I got, because it was the typical implicit kind of thing. The one that wasn't explicit at all, that I read between the lines of a couple of participants. And I said, "Oh, I think there's something very interesting here. I want to take it out." And in that case it was Notebook LLM with Gemini, which is pretty good. I find it really good. But it completely missed on this one. And I can tell you that we tried to use it for a pitch to a client. And the number of ideas we came up with... I don't think it's a coincidence that the one more valuable insight was the only idea that the client really loved.

Paul: Why do you think that is?

P7: I think because it was less generic. And some of the colleagues told me, "Oh, you had a really good idea," and I said, "No, no, that's not a good idea I had. It's just the research. It's rooted in something that I saw in the data from the research."

00:19:25

P7: So I wouldn't categorize that as having a good idea. It's just that it's really rooted on something that these users told us that they would do, and I was able to read through and say, "Oh, what their need is this." They didn't articulate it, but what they told us means that they need this. And so it was based on a robust need that again wasn't articulated, wasn't clear, and probably wasn't that obvious compared to other topics that were more or less... other themes that emerged that were pretty, not obvious, but nothing particularly surprising maybe.

Paul: If I understand your point here, I think I derive the same value from using AI, in that it helps me identify one or two or maybe three implicit themes that I didn't identify or think about. But I typically end up throwing out more than 50% of what it suggests, either because it duplicates what I've already discovered, and maybe it helps articulate some of those themes that I've already identified. But at the most, I get one or two valuable insights, but they're valuable. They're not... it's information I might not have identified. So in that way, I find the tool useful. All right, I know we've only got about six minutes left, and I've got so many questions, and I do tend to get...

P7: I can stay for a little longer if you like.

Paul: Oh, great. Thank you. I want to understand how you decide in a dynamic situation when to trust AI. Tell me about your experiences in using it and developing your sense of trust, and what's happened when that trust is broken.

P7: Wow, this is a good question. I need to open with a big statement, which is: generally I don't trust it. So that's my initial stand.

00:21:42

Paul: What do you mean by you don't trust it? Are there contexts in which you trust it more or less?

P7:

I don't take what comes out of an AI at face value, ever. In general.

Paul: And what do you do then to verify, or take what it's given you and...

P7: I try to, well I would say that probably I would resort to people that know about that topic and just check, fact check if that's true or not. And I can give you examples. I mean, I wasn't looking for legal advice. I was looking for legal understanding. And from what it explained to me, it even suggested something like, "Oh, wow. You could do this. You could ask for an expedited procedure," whatever. And I went to my lawyer and said, "Okay, but can we do this?" And he said, "No, because in Rome this doesn't apply," blah blah blah. And I said,

00:22:41

P7: "Yeah, of course." Right? So it's like a spark of an idea, but I don't take it at face value. Similarly, it very much depends. If it's health advice, I'll be extremely careful. I would just maybe use it again to decode some obscure technical jargon, but I would definitely triple check somewhere else, with someone else, and other sources. When it comes to travel, I'm a bit lighter, but then a few times it gave me a bad outcome. Like

it was suggesting that I went to a specific place, I did a plan, I didn't check, and then that place was shut down that day and for a while for renovation. I said, "I wasted one day that I had in this location. Why didn't I think about checking?"
So I would say I generally don't trust it completely. Building trust... I'm not even sure I build trust, now that I think about it.

00:23:51

P7: I'm not sure.

I think I rely a lot on, when there are things about research, things that I know, I use myself as a benchmark and I say, "No, you're not saying the right thing." And then that worries me, though, because I'm thinking, "Okay, all the things that I don't know, which are many, and all the domains that I don't know... should I believe it or not?"
And how can I, what can I do to actually understand if this is any true, or what degree of truth there is in it, or how much it's mistaken? Because
it's generative, right? I mean, we should expect that it invents. But I think the majority of people don't think about that. I try to remind myself, it's generating stuff. So I try to reply, "Please stick to the real thing." And it even invented places to eat that don't exist,

00:24:42

P7: even with a fake website. I went there and this restaurant doesn't exist.

Paul: I want to talk about a related topic, which is disclosing AI use. And this is more of a professional and work capacity. Do you feel you have identified unwritten rules or norms that are evolving about disclosing AI use in your professional life? Tell me about your experience with that. Could you reflect on that?

P7: Absolutely. And I think it's a very interesting one. So in my current client project, I'm basically leading a team of three researchers. And one is sort of a junior trainee because she's transitioning from another field. She's starting to do research and she wants to learn. And another one is a colleague who is senior and is a much more robust researcher and so on. And the funny thing that happened the moment we started working together, we didn't know each other too much, and we didn't know how each would work. I've noticed two things. One, that

the trainee was definitely not using AI at all. And I asked, "Hey, are you using any?" And she went totally on the defensive, like,

00:26:17

P7: "Why are you asking this? No, I'm not." And I'm like, "That's fine. I just want to know. I want to know how you're using it and can we talk about it?" And she said,
"No, absolutely not. I'm intentionally not using it because I'm learning." And I said, "Okay, good call. I agree with you. Since you're trying to learn, maybe you can use it afterwards as a benchmark. The first thing, the first draft of the moderation guide, the first screening, is only you with your thoughts, because you're learning the craft."
And what happened with the senior was rather the opposite.
I've noticed that some things were definitely AI generated. So I just asked myself, "How do I ask this without sounding judgmental?" Which was... I didn't want it to be like, "Hey, are you using AI?" in a judgmental way. It's like, "Hey, can we talk about this? How are you using AI? Which tools are you using?"
I started from also some things about, "We have to be careful because depending on the client, we need to understand and agree on which tools we're using." And then we've decided that we would be much more open about, "Okay, I've used AI to actually do this." And we could also discuss about, "Hey, which prompt have you used?" Or, "How

00:27:36

P7: did you use it? Did you use it as a first draft and then refine the draft, or the opposite, where you threw some initial thoughts into the AI to get something structured?" The two choices: using it as a more elaborated critical thinking tool for what we're doing, but not excluding it.

And I feel we shouldn't be ashamed of talking about [disclosing AI use]. And especially in a power dynamic where there's different seniority, I think we should be completely transparent about what the expectations are.
And we agreed on some things: "Hey, we're not going to use it for this reason, but in this tool, let's try and use it to analyze sessions. Let's see how it goes. There's this time where we have actually much less time to do the study. Can we try and use AI to see if we can speed it up and still get a good enough quality?"

00:28:47

Paul: And that's successful, or is it still an open question?

P7: It's still an open question. Another use I've done, if I can share this, which instead I'm happy about, is on the research operations side. On that I see a lot of potential, and I've been experimenting a bit. The first thing was, first of all, we needed to describe and define our research process. And we did that because we needed that alignment and we needed that transparency towards the rest of the organization. Like, "This is how we work. This is what we need from you. You need to send us a request. We will do this. We will do that." And then the moment that was in place, we started using some automation and AI to, for instance, summarize what's new with our research. So we're sending out a newsletter that is AI generated based on our backlog, for instance, or things like that.

00:29:49

Paul: I've been experimenting, as I mentioned before we started recording. I've been experimenting for this project with research ops automation, and I'm enjoying myself and finding lots of opportunities. But let's talk more about your experiences.

P7: Yeah, sure.

Paul: I'd love to talk after this call and chat with you again offline. But getting back to some of the questions I wanted to ask you: do you think AI is changing how you approach problem solving?

P7: No and yes. Not the first one. No, I don't think it's changing how I do problem solving, because I think that's the one human thing that at least for now I want to keep for myself. I'm a problem solver. I need to look at things holistically. I need to come up with solutions. But it is affecting the way I'm thinking. And it's affecting it in many different ways.

00:30:57

P7:

One is it makes me lazy. So I need to intentionally say, "No, I'm going to keep thinking for myself." I need to, again, similar to the person that was learning, retain the critical thinking. Otherwise it gets lost because it's a muscle. So we need to keep practicing and using it. And this is one thing I'm really keen on passing to the more junior colleagues. It's like, you cannot skip that. Forget about that, because otherwise you will be a solo lead really soon and you cannot delegate that critical thinking and problem solving to the machine.

P7:

So I don't think it impacts the way I'm thinking. It's just helping me work with my thinking, or articulate what I'm thinking, or challenging what I'm thinking. So in that sense, it's not the thinking, it's probably the how I work that changes.
Does that make sense?

Paul: That makes sense. Before we break in a few minutes, I wanted to get at some more emotional questions.

00:32:06

Paul: I want to understand how the increasing presence of AI in every aspect of work and personal life is making you feel. And so I want to get at this by asking you: what do you think is the biggest promise of AI? If you could hope for a fantastic positive outcome, what do you think AI would enable or deliver or unlock in the next 5 to 10 years?

P7: Hope? I don't know. I'm thinking probably progress. The first thing that came to mind when you asked this was progress in medicine and health-related things. Just because, one evident thing where AI is fantastic is in diagnosing stuff and reading images and everything. So that would be... it's already happening. So I'm thinking that on steroids could be fantastic. Which is, I don't know, maybe spotting patterns and understanding much better maybe correlations between things. Like, "Hey, you know what, actually cancer is linked to..." not really, I wish this was true... not really to the environment you live in and your nutrition or something, but it's actually linked to X, Y, Z, so we can act on that. So that's probably the biggest breakthrough I'm hoping is coming.

00:33:47

Paul: And how about what's your single biggest concern or fear related to AI?

P7: So many. So many that it's difficult to pick.

Paul: You can do a top three if that's better.

P7: Maybe. So one thing... I'm sorry, I'm not probably picking the biggest world problems. I'm trying to look at something that's closer to myself. But one thing, by my personal use and how I see people using AI,

the biggest fear is the reverse of the coin that I was mentioning before: people stop thinking. Stop critical thinking. That's a huge risk at the population level, because then we would be unable to do anything, like understanding our lives, making decisions, electing our politicians and whatnot. So that's really a risk that I see, because this is really tapping into our innate laziness
and reluctance to change and anything else. So that's a risk. The other risk is people having... I think it will probably level us into an average situation.

00:35:03

P7: So those who didn't know anything about something would know just as much, more or less. But the problem is we are all going to be at the same level, and some of them, probably many people, will think they are at that average, but in reality they're not really understanding. Just because, what I was telling you, they wouldn't be able to actually understand if that was the truth or not, because they don't have the tools to actually discriminate between, "Is this realistic? Yes or no." So we will probably end up there, which means that just very few people will be able to actually rise up from the average. And I wonder who those might be and why.

Paul: You've touched on something I've been thinking about, and I want to first push back gently on what you said by asking: how is that really different than how things are today, where it's probably a mass of people in the middle that take information that doesn't come from generative AI and accept it at face value and don't think critically?

00:36:22

Paul: And there's people who will go further and investigate or attain mastery. So I don't know. I'm not disagreeing with you at all.

P7: I think it would just be so much more, so much worse than previously. It would be massive. Because I understand what you're saying and I think we've seen that. I was thinking, recently I was talking to someone and I said I see things repeating because I'm that old. And

I remember distinctly my grandmother telling me, "Well, this is true because I've heard it on the radio." And then my parents saying, "No, this is true because I've heard it on TV." Then, "This is true because I read it on the internet." And [this is] true, because AI told me. So unfortunately I think this is just massive and pervasive in ways that we don't really grasp as of now.
And the other thing that really worries me is how much agency is left for you to question that. Because I mean, in the past, "Okay, I've heard this on the radio, but I can go and read a book, or I can go and ask an expert," and so on. I see this reducing and reducing, unfortunately. Especially because it took us so much effort to gain mastery, as you well defined it, or even build that understanding. I remember when I was a kid, reading the encyclopedia or the dictionary and the books, and it was tough. It was difficult to find the things. And
now everything is at the tip of your fingers, but I don't think you're valuing it as much because it's effortless, and you don't question it as much as we did in the past.

00:38:21

P7: Like, "Okay, who is this book? Who's the author of this book?" Or, "Which television channel are you watching?" Or, "Which newspaper are you reading, and who is writing it?" So we were asking ourselves a lot of questions. I don't see that happening in the future as much, and this worries me.

Paul: I'm going to agree with you and, are you familiar with the idiom "yes, and"?

P7: Yes, absolutely.

Paul: I'm going to "yes, and" you by saying, my wife, who also works in our field but is... she's been working in speech recognition and conversation design. Chat and voice are right in her wheelhouse, and have been for 30 years. And she frames it like this: because AI is so fluent and sounds so authoritative, we're at risk of mistaking its fluency and authoritativeness with actual knowledge.

P7: Yes. Absolutely.

00:39:25

P7: That's what scares me. And the thing is, it's kind of a chicken and egg or vicious circle: you need enough knowledge to be doubting what this AI is telling you. But if you don't have that knowledge, you tend to trust whatever is coming, and you don't have the tools or the questions to actually doubt anything that is coming out of there.

Paul: I'm going to let the tape roll even though I'm now the one thinking out loud. I want to talk about... I've been developing ideas around this. And first, before I talk about them: are you familiar with personality traits in general? You don't have to know a lot about the Big Five personality factors or anything like that.

P7: Mhm.

Paul: So there's somewhat stable traits that exist, and then there's states, which we can just call contexts. And so I'm thinking about, well, how much does someone's traits of inquisitiveness and conscientiousness, and all the things that go into critical thinking, how much does that drive people's approach to AI? And how much does context, like your senior researcher who maybe didn't have enough time and did the first draft of the moderator's guide using AI...

00:40:58

Paul: I'm working on and working through these ideas of what stable traits mediate people's interactions with AI, what contextual states mediate people's interactions with AI, and how do these mix and affect each other. I don't have a very fancy presentation or theory, but what are your thoughts on what I've just said?

P7: It's very interesting. I've been thinking about what are the building blocks of critical thinking. I didn't go as far as you did with that, but I was thinking, how do we cultivate that? How do we ensure... because I think that's the one thing that I feel the ethical pressure of passing along: cultivate curiosity, cultivate your ability to inquire. What does it take to do critical thinking in an AI world is different from the past. But I feel like our generation, and I'm assuming we're more or less the same age...

Paul: I am squarely a Generation X person.

P7: I am too.

00:42:14

P7: So exactly. I think we are a bit of the bridges, because we saw the analog world, we saw the digital, and now we're seeing AI. And we feel like we have this responsibility of actually passing along what we believe is important. It's definitely critical thinking. And sense-making is another one that I generally tend to say to researchers in the making. These are the two things you need to cultivate. And well, curiosity always, but maybe it's part of this too. I wouldn't know if it's personality traits. I've always thought it's more of a muscle.

Paul: So a motivation and a habit, maybe.

P7: And a master that helps you, a trainer that helps you.

Paul: Yeah. I brought up traits as sort of a straw man, because first of all, traits as measured by social and personality inventories don't have much predictive validity.

00:43:30

Paul: We can't predict behavior well just from traits. Motivation and situation are much better predictors. No, this is really helpful. I would love to keep you on for so much more time, but I want to respect our time commitment, and I know we've gone 15 minutes over. One last question: after you've heard most of the questions that I wrote in the moderator guide, is there anything you think I should ask people that I didn't ask you?

P7: Maybe: "What would you never use AI for?"

Paul: I think that's a good question. Thank you.

P7: Sorry, I don't know if the English is correct, but it just occurred to me.

Paul: No, that's: "What would you never consider using AI for?" Great.

P7: Just to bring up something strong.

Paul: I appreciate everything you've said. I'm going to stop the recording now and then we can wrap up.

P7: Yes. Great.

AI Use Disclosure

I used AI to analyze the data collected via interviews and surveys. How?

  • I took notes after each session.
  • I fed those notes to several AIs, along with the moderator guide, project proposal, session transcript, the participant's survey responses, and a codebook of tags and themes I've been iterating as I collect data.
  • I prompted each to write a background, findings, and emerging themes section.
  • Then I iterated on each AI's draft, challenging the AI where appropriate and removing what I'm euphemistically calling "hallucinatory content" :-).
  • I collected each AI's drafts, added them to the project I've set up in Claude Cowork, and prompted it to draft the background, findings, and emerging themes section, pushing back as appropriate.
  • Then I edited the content, because "human in the loop" means "I have final edit." At least to me it does.
  • I then published each session writeup.

There's a bit more to it, but I'm trying to keep this short. Reach out if you want to talk about my AI-assisted workflow, which I'm still evolving as I go.
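If it helps to picture the mechanics, below is a minimal sketch of how the per-session context assembly could be scripted. It's illustrative only: the folder layout, file names, and the call_model stub are hypothetical placeholders, not my actual setup or any particular vendor's API.

```python
from pathlib import Path

# Hypothetical per-session folder layout; adjust to your own repository.
SESSION_DIR = Path("sessions/p7")
CONTEXT_FILES = [
    "session_notes.md",      # notes taken right after the session
    "moderator_guide.md",
    "project_proposal.md",
    "session_transcript.md",
    "survey_responses.md",
    "codebook.md",           # tags and themes, iterated as data comes in
]

PROMPT_TEMPLATE = """You are assisting with a qualitative research writeup.
Using ONLY the materials below, draft three sections for this participant:
1) Background, 2) Key Findings, 3) Emerging Themes.
Do not invent quotes or facts that are not present in the materials.

{materials}
"""


def build_prompt(session_dir: Path) -> str:
    """Concatenate whichever session materials exist into one labeled prompt."""
    parts = []
    for name in CONTEXT_FILES:
        path = session_dir / name
        if path.exists():
            parts.append(f"--- {name} ---\n{path.read_text(encoding='utf-8')}")
    return PROMPT_TEMPLATE.format(materials="\n\n".join(parts))


def call_model(prompt: str) -> str:
    """Stub: swap in whichever model or API you actually use."""
    return "(model draft would appear here)"


if __name__ == "__main__":
    draft = call_model(build_prompt(SESSION_DIR))
    print(draft)  # the human-in-the-loop editing pass happens after this point
```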