P5: Survey Data and Session Summary
Survey Responses
| Question | Response |
|---|---|
| Age | 55-64 |
| Education | Master's degree |
| Role / Level | Unemployed |
| Job title | Sr. Manager, UX Research |
| Years of experience | More than 25 years |
| Organization description | Enterprise SaaS software for content management, cybersecurity, etc. |
| Industry | Information technology (software, hardware, semiconductors, and IT consulting) |
| Individual AI tools used | Text generation (creating documents, emails, summaries), Media creation (images, audio, video), Search and information retrieval, Data analysis and synthesis |
| Organizational AI tools | Customer-facing chatbots or virtual assistants, Internal search and knowledge summarization, Security/fraud detection/anomaly monitoring, Predictive analytics for business forecasting, Content moderation or filtering systems, Code generation and developer tools |
| AI adoption involvement | Contributed to technical design, requirements gathering, or implementation; Provided subject matter expertise, requirements, or end-user feedback |
| Biggest work win with AI | We host Innovation Labs at my former company's annual customer/partner events. In these labs, we conduct usability testing sessions with a variety of products over the course of the 3- to 4-day event. This year, we had 9 products in the lab and conducted over 150 testing sessions. As the event typically occurs in mid-November, it was always a challenge for the researchers to synthesize all of the data and report findings for multiple tests before product teams began leaving for holiday breaks. With AI assistance, we were able to provide timely, actionable insights to all teams before they started leaving. Further, AI assistance enabled us to analyze research insights for common elements across all products (e.g., embedded AI elements, advanced search features, etc.). Insights from these analyses impacted common elements in our design system. Thus, we were able to address the needs not only of the individual products, but also of all products using these common features across the portfolio. This type of meta-analysis was previously not possible due to resource limitations and the need to return to day-to-day product support. |
| Biggest work disappointment with AI | There are a couple of things here. The first is related to the same AI assistance that enabled us to achieve our biggest win. We realized early on that simply providing audio recordings to produce transcripts and synthesize findings was in no way sufficient to get a clear and accurate picture of the results. We discovered that the transcripts contained many misattributions and hallucinations that only served to muddy the water. To get useful results, we had to spend significant time cleaning up the transcripts and carefully tagging insights. We also found it important to include moderator notes with the transcripts prior to analysis/synthesis. This was more of a confirmation of low expectations than a true disappointment, but I think of it this way: for a hypothetical research project, if it takes 5 days for a researcher alone to achieve a result, it might take AI alone only 1 day. However, with the necessary human oversight plus AI assistance, it might take 3 days. Still valuable time savings, but not living up to the true promise of AI. The second disappointment came from early attempts at applying AI to create an initial draft of test scripts. The results were disappointing, and we initially abandoned the concept as being more trouble than it was worth. Later, I established a more rigorous process around data collection to feed the AI and a prompt library to produce the needed results. Now, we are able to produce an 80-90% complete initial test script draft. |
| Organization's biggest AI success | Although I don't have direct insight into the details, I know that the Development organization has been achieving real, measurable efficiency gains through AI-assisted coding. In my own UXD organization, the design team sees the potential for efficiency gains through Figma Make. However, they still struggle with achieving results that are consistent with the design system and managing available credits. I should also mention my own research team's efficiency gains in our own processes and deliverables as another candidate for the biggest gains. |
| Organization's biggest AI challenge | I've got to revisit the design team's struggles with Figma Make. It constituted a large investment and came with significant expectations from leadership. However, despite the promise, significant struggles and frustrations still exist. |
Background
P5 is a senior UX research manager with more than 25 years of experience, most recently leading a research team at a large enterprise SaaS company that produces content management, cybersecurity, and related software products. At the time of the interview, P5 was unemployed following layoffs. Their survey responses and interview reflect someone with deep expertise in research operations who had been actively integrating AI tools into their team's workflow before their departure.
P5's relationship with AI began with exploratory use of Copilot and ChatGPT, quickly moving toward operational experiments. Their team's primary AI investment was Dovetail, which they used for transcript summarization, quote extraction, and theme identification across large-scale usability testing events. The organization also sanctioned Copilot as its official LLM, though P5 and their team supplemented with ChatGPT for additional analysis. On the personal side, P5 uses GPT as a sounding board for major life decisions, including home purchasing and navigating the layoff process.
The session's distinctive quality is P5's practitioner-researcher lens. Unlike participants who primarily consume AI output in their own work, P5 has both used AI tools operationally and conducted usability research on AI-powered products, giving them a dual perspective on trust, transparency, and the gap between what AI promises and what it delivers.
Key Findings
The Innovation Labs Success and Its Asterisk
P5's biggest AI win was a clear operational achievement. Their former company runs annual Innovation Labs, a three- to four-day event hosting nine to 15 concurrent usability tests; the most recent event covered nine products across more than 150 sessions. Historically, researchers could not synthesize findings before product teams left for the holidays. With Dovetail's AI-assisted analysis, P5's team delivered actionable results before the break for the first time.
More significantly, the time savings enabled a type of analysis that had never been possible: looking across all tests for common elements like embedded AI features and advanced search patterns. This cross-portfolio meta-analysis fed insights into the design system, expanding the research team's sphere of influence from individual product support to company-wide design decisions.
But the win came with a substantial caveat. Dovetail's transcripts contained misattributions, its auto-generated tags were insufficient, and the team had to invest significant effort in cleaning transcripts and adding moderator notes before the AI analysis became reliable. P5 frames this through a precise cost-benefit arithmetic.
"For a research activity that would take a researcher alone five days to complete, if you look at it with AI alone, it might take a day, but in order to do a good job of it, the necessary human AI interaction, you might get closer to three days."
The George Carlin Trust Model
When asked how they calibrate trust in AI, P5 reached for a comedy routine. George Carlin had a bit about listening to someone who sounds authoritative: you nod along, "Yeah, yeah, yeah, go on," until suddenly you realize the person is full of it. P5 says they've experienced that pattern with AI multiple times.
This analogy captures something the more formal trust frameworks from other participants (P1's two-layer evaluation, P4's role-based gradient) don't: the felt experience of trust eroding in a single moment after gradual accumulation. P5 described the aftermath as necessitating a retrospective review of everything they'd previously accepted, though in most cases it "turns out to be fine." Their recovery strategy is pragmatic: reset the chat, narrow the focus, specify resource types, and pull the conversation back on track.
P5 also brought a researcher's perspective to trust, drawing on usability testing they had conducted on AI-powered enterprise products. Participants in those tests made it clear they would not rely on AI recommendations without visible reasoning about where the outputs came from, particularly in enterprise contexts where decisions carry cost and risk.
"I don't know if you recall, there's an old George Carlin routine that, you'd be talking with someone that sounds like they really know what they're talking about for a while and you're like, 'Yeah, yeah, yeah, go on.' And then there's this, he's full of b.s., I think I've encountered that with AI a few times."
The Attribution Analogy
P5's research team developed a practice of disclosing AI contributions in their collaborative discussions without any formal policy requiring it. Team members would freely say things like "the AI and I put this together," treating the disclosure as unremarkable. P5 compared this to making an attribution when using a quote: acknowledging that the thinking is not entirely your own, without that acknowledgment diminishing the quality of the contribution.
This stands in contrast to P1's radical transparency stance (a deliberate ethical position) and to the nondisclosure patterns described by other participants. P5's team arrived at disclosure organically, in a context of psychological safety and shared research norms. The attribution analogy is notable because it reframes AI disclosure from a confession ("I cheated") to a citation ("I built on this source"), which may explain why the team adopted the practice without friction.
"I think people just are kind of freely admitting is like the AI and I put this together and our thinking is more, I think it's almost along the lines of making an attribution with a quote that you use and it's not completely my own thinking but it doesn't diminish the quality of it just because of that."
Slop from on High
While P5's own team handled AI disclosure well, they observed a different pattern in organizational communications from leadership. Corporate messages that were clearly AI-generated with minimal human thought investment didn't provoke confrontation, but they did erode the message's impact. P5 described the effect as quiet: you can't ignore the communication entirely, but it "doesn't have the same impact" when the reader can tell the sender contributed little beyond a general direction.
This finding captures an underexplored social cost of AI adoption. The credibility erosion P5 describes isn't about factual errors (hallucination) or ethical failures (nondisclosure). It's about effort legibility: when recipients can tell that a communication required minimal cognitive investment from the sender, the message's persuasive force diminishes regardless of its factual accuracy.
"I think it kind of diminishes the impact of the message going forward, is like if it becomes apparent that there is very little of your own thought other than a general direction then it's just, can't really ignore it necessarily but it doesn't have the same impact."
The Overreaction-Overcorrection Prediction
P5 closed the interview with a prediction about organizational AI adoption. Practitioners who work directly with AI understand that it is not yet capable of full replacement. Whether leadership understands this is an open question. P5 predicts a "dramatic overreaction" in which organizations make personnel and structural changes that domain experts would recognize as premature, followed by a "massive overcorrection" when the consequences become apparent.
This prediction carries particular weight coming from someone who is currently unemployed after layoffs at a technology company. P5 is not speculating about hypothetical displacement; they are living through the consequences of organizational decisions about headcount while simultaneously describing AI as the most energizing change to their discipline in 30 years.
"I suspect what's going to happen is there's going to be a dramatic overreaction to the introduction of AI. It's going to make a lot of changes to organizations that probably shouldn't occur. And at some point, we'll probably have a massive overcorrection."
Emerging Themes
| Theme | Description | Key Quote |
|---|---|---|
| Trust Calibration | Deliberate, ongoing practices for evaluating AI trustworthiness on a spectrum | "There's an old George Carlin routine... you'd be talking with someone that sounds like they really know what they're talking about... And then there's this, he's full of b.s." |
| Augmentation Not Replacement | Using AI to enhance existing activities rather than offloading tasks entirely | "For a research activity that would take a researcher alone five days... the necessary human AI interaction, you might get closer to three days." |
| Hallucination Frustration | Disappointment at AI producing fabricated or unreliable content | "I guess I don't know if miserable failure is accurate but not far from that." |
| Knowledge Displacement | Concern that decision-makers lack expertise to evaluate what AI can replace | "Whether or not leadership understands that I don't know." |
| Job Security Anxiety | Fear that AI will reduce headcount | "There's already great concerns about replacement." |
| AI as Sounding Board | Using AI as a thinking partner for complex decisions, retaining full decision-making ownership | "It's kind of a sounding board that's very knowledgeable about a whole lot of topics." |
| Disclosure Norms | Informal team standards for attributing AI contributions, emerging without formal policy | "The AI and I put this together... it's almost along the lines of making an attribution with a quote." |
| AI Slop Detection | Recognizing low-effort AI content from others and the resulting credibility erosion | "It kind of diminishes the impact of the message going forward." |
Interview Transcript
00:00:00
Paul: All right. Well, we were talking before the tape started. Tape, air quotes, tape, before the recording started, about both our backgrounds. So I know you're a UX researcher. Let's go back to the beginnings of you and AI. So what's the first generative AI tool you remember trying and what were you hoping it would do for you?
P5: Oh, so I guess the first one that I really tried I think is either ChatGPT or Copilot. I think the thing maybe is Copilot that I really started with and just kind of an exploratory sort of thing both personal and through work.
Paul: What were you trying to do, or was it just purely exploration?
P5: Yeah. So it was really very much exploration. We started getting the impetus around work, was trying to figure out where we could get some efficiency gains for our processes.
00:02:08
P5: One of my early experiments was seeing, we have test script templates and we've been playing around with whether it would be able to help us to say get oh at least 70% of the way there with producing a script that we could then edit and hopefully save some time in that process. The initial passes that we took were kind of,
I guess I don't know if miserable failure is accurate but not far from that. Essentially figured out that what we were doing wasn't working and that it wasn't getting us where we wanted to be and the cost benefit was just not even close to being there.
Paul: Yeah, that's a common story I've heard. And of course, we're mid hype cycle and the quote that I love is you're using the worst version of X you will ever use now.
00:03:23
Paul: So related to that, what AI tools or capabilities have stuck with you? What are you using now and why? And you can talk about work first or personal first or blend them. It's up to you.
P5: Sure. So well let's start with work. So one of the major tools that we brought in was Dovetail. So we made great use of that and with kind of great success. It enabled us to do a lot of things a lot more quickly than we had in the past. We also discovered that the full promise was not there for Dovetail like everything else.
The way I describe it is that for a research activity that would take a researcher alone five days to complete, if you look at it with AI alone, it might take a day, but in order to do a good job of it, the necessary human AI interaction, you might get closer to three days.
00:04:40
Paul: Sorry, I didn't mean to interrupt. I guess I did because I'm going to keep going. When you were using Dovetail, I recently used it for a project and this was the first time I noticed it with the AI summarization capability. What were you using the Dovetail AI capabilities for, initial tagging and theme identification or something else?
P5: Yeah. So initially it was more kind of summarization and taking a pass at pulling quotes and getting some highlights on the key themes that were going on. If you can give like the example where it was of real benefit, our company does what we call innovation labs at major company events once a year, usually kind of in the mid November time frame, and at those labs we conduct somewhere between nine and 15 different tests going on over the course of three days.
00:05:52
P5: This year we had nine product services in the lab and had over 150 sessions over the course of that time. And usually given the time frame that it occurs and when people start taking off on holidays, our researchers who are responsible for multiple tests usually aren't always able to produce the kind of actionable results before teams go on holidays. Working with Dovetail and doing all of that enabled us to get it in kind of more timely format but also what we weren't able to do previously is to look across the tests that we had in the lab at common elements. One of the big themes this year was kind of embedded AI within projects, also like advanced search features, and this gave us the time to kind of take a look across those tests, pull out key insights related to those common elements that we're then able to not just support the individual teams but kind of across the portfolio by incorporating those insights into our design system that it can be used across the board.
00:07:08
Paul: So this was in Dovetail. Was there a point where you jumped out of Dovetail to other tools?
P5: Well yeah I mean so it was largely for this one it was kind of mostly with Dovetail and with manual stuff, maybe doing a little bit more analysis with, well Copilot was the officially sanctioned LLM within the company, but Copilot or GPT to do some further investigation analysis and I guess they recently came out with an MCP that will facilitate that process.
Paul: What is the MCP connecting to the source transcripts and summarization?
P5: Yeah. So it's kind of across the board for transcripts. So you can not just the transcripts within a certain say the certain person's file or project but you can even look across those to pull that into, I think they have one for both Copilot and GPT at least, probably Claude, but I haven't looked at it.
00:08:28
Paul: Okay, cool. What do you see as the biggest win or success or efficiency gain that you achieve using AI tools in your work?
P5: Yeah. So I think that was probably it to this point. So getting actionable results to teams before the holidays rather than waiting until after the first of the year. That was a big win and it was noticed by the product teams that in past years we haven't been able to make it happen. This year we did. And then I think maybe even bigger than that was being able to look at the common elements. So that just expanded the sphere of influence from individual products to across the portfolio.
Paul: What do you think was the biggest disappointment or failure that you would have experienced personally while using AI tools?
00:09:24
P5:
So, I think, well, I guess disappointment is maybe the right word. So, it was kind of discovering that, maybe not unexpectedly, I guess the bar was pretty low, but discovering that Dovetail still needed a lot of babysitting to get a lot of results. We had to go in and we realized that the transcripts had a lot of misattributions. I mean there were just a lot of things that need to be cleaned up to make it useful beforehand. That allowing Dovetail to kind of create its own tags and apply those was not sufficient. We still needed to do the diligence to go in and apply our own tags to make it more meaningful and real world context.
not just relying on the transcripts alone, but introducing moderator notes to the analysis as well to help get that real world and actual findings.
00:10:30
Paul: Leading up to the organization. So that, I had just asked you about your personal wins and disappointments. What about your organization? What would you say is their biggest success from deploying an AI tool either internal tooling or customer-facing something?
P5: Yeah. So I guess a couple things. So I don't have direct involvement with it but I know that the development organization has been touting and providing some actual measurements around kind of the efficiency and gains that they have seen. So I think just from a product development standpoint it's got to be that. I think there's a lot of promise for our design team. I think they were fairly early on with Figma Make. I think they are still having some growing pains particularly around getting it to work with the existing design system.
00:11:40
P5: Getting good results with that right off the bat. Still struggling with credit usage. Figuring out the best strategies to use to kind of keep that within the realm of reason. And I mean I would throw kind of our own research gains into that as well.
Paul: How about your organization's biggest disappointment or failure when it comes to implementing or relying on AI? Not specific to your work, maybe org-wide.
P5: Yeah. I mean, I think it's really more of a kind of that theme of just not living up to the promise. And I guess it's not unexpected, but it's just kind of that I was just really hoping it would get a little bit farther along than what we're able to do at this point.
Paul: Yeah. I want to talk a little bit about how AI has changed how you do certain things and this could be in your work life or personal life. You can jump between them. Yeah. So, whatever is most top of mind for you. So is there any activity or task you do very differently now because AI is in your workflow?
00:12:42
P5: Yeah. So, I mean, I'll talk about personal life for now. Yeah,
I think I've kind of been using GPT for working through kind of major life events. So like considering purchasing a house for example, kind of working through that and doing tradeoffs and running scenarios and what to consider, and thinking of it as sort of a, I don't want to say co-pilot but the thing is it's kind of a sounding board that's very knowledgeable about a whole lot of topics. Particularly in a context where it involves major life decisions you don't trust it
00:13:58
P5: completely, but you're able to cover a whole lot of ground, cover a whole lot of topics, get a lot of insights and things that you hadn't really considered brought into the conversation.
Paul: Okay, great. Is there anything you've completely stopped doing because AI just does it for you now and it does it well enough?
P5: I don't know that I completely stopped on anything. I think there have been some frustrations that I mentioned earlier maybe before the recording about trying to apply or generate a draft of a test script beforehand and kind of doing that on its own, just getting quickly that that wasn't working.
00:15:18
P5: I've since, one of the things that I was thinking through is exactly how to make that happen. And kind of was successfully able to put together a process around it that would support what the AI would need in order to do that. And kind of similarly with doing mockups in tools like UX Pilot or Stitch, running into just not getting exactly what you want or what you're looking for. So it can be a frustrating process at times.
Paul: Yeah. How about how AI has affected how you interact with people professionally? Have you noticed any effect and why and how?
P5: I'm trying to think just in different context. I think you can see kind of the AI influence from people that you were, say, emails that were crafted with AI, presentations that were crafted with the AI. And I don't necessarily think that's a bad thing. It's just like okay yeah as long as it's based on their thinking.
00:16:46
P5: I think it's okay. And I think other interactions, I think it's enabled maybe a closer tie kind of working with product management to tie data analytics to research and kind of being able to identify what the, some real pain points in products, and being able to apply research to that and figure out the why to go with the what.
Paul: Yeah. You mentioned that there was context where you would see presentations or some content and you kind of knew it was created with AI or you suspected. I've got questions around that. Trying to understand how norms are developing in organizations and among people about disclosing use of AI and setting appropriate expectations for quality. So are there situations where you typically disclose your use of AI, and maybe reflect on that as well as how you've seen other people handle it or not?
00:18:02
P5: Yeah. I mean I don't, I think maybe the one example comes from the interaction with my own research team,
that there were many conversations where we were working on projects together,
most internal things, and that people would contribute their own thoughts but it was really AI assisted thinking going into it and we didn't really think about it, we didn't set any policies about that or any guidelines around discussions. I think people just are kind of freely admitting is like the AI and I put this together and our thinking is more, I think it's almost along the lines of making an attribution with a quote that you use and it's not completely my own thinking but it doesn't diminish the quality of it just because of that.
00:19:24
Paul: P5, have you ever been in a situation where, similar to what you described, can you recall a specific situation where someone shared something at work like a document or presentation that you suspected was a low-effort AI slop? How did you react and did it change how you interacted with the person later?
P5: That's good. Yeah. I mean, I think it's mostly,
I think for the most part those came from say communications from on high. Kind of the organizational level communications that would go out that you could tell there's no thought other than a general direction of I want to communicate this to a whole lot of people in this particular organization and do it. And I think the takeaway I get from that is just like okay, I think it kind of diminishes the impact of the message going forward, is like if it becomes apparent that there is very little of your own thought other than a general direction then it's just, can't really ignore it necessarily but it doesn't have the same impact.
00:21:04
Paul: Okay. Talk a little bit about AI and trust. So, how do you decide whether to trust AI? And this could be generative AI that you're directly interacting with or the output of generative AI that you're consuming that someone else created.
P5: Yeah. I mean, so to start, I can kind of go back to our early rounds of usability testing and what we were seeing in terms of results around trust there.
We had a product that was making recommendations and we were kind of probing on the results that were presented by the AI to the participants in testing, and that was one thing that just came out as huge: that without some insight into where the AI was coming up with that output, and having some indication like just calling out these are the preferences or I'm getting it from our discussion about X, Y and Z, without that, particularly in an enterprise context when decisions can be costly and have risk associated with them,
00:22:48
P5: it was a clear message from the participants that there's no way that they would rely on those outputs without some insight into where they were coming from.
Paul: So visibility and understandability are some of the words that come to mind when you related that context. How about you? What's your personal, I'm trying to think how to phrase this. I'm going to read how I wrote it and then we'll see if it makes sense. How do you decide whether to trust AI? What are your detectors? What pings or lights up or goes off for you in different situations?
P5: Yeah. I mean I think there's certainly some aspects of being able to see when outputs are annotated with sources and I can determine whether those are trustworthy or not. I think sometimes there are situations in which I can be like trust trust trust trust, ooh.
00:24:02
P5:
I don't know if you recall, there's an old George Carlin routine that, you'd be talking with someone that sounds like they really know what they're talking about for a while and you're like, "Yeah, yeah, yeah, go on." And then there's this, he's full of b.s., I think I've encountered that with AI a few times.
Paul: What do you do then? So, does it make you want to go back and revisit everything that you've trusted so far? How do you recalibrate? How do you calibrate? Recalibrate and dynamically adjust, I guess, is my question.
P5: Yeah. So I think it almost necessitates going back and taking a look and in most cases it turns out to be fine. I think maybe at some point there's some hallucination or something that it goes astray and okay,
Paul: Do you have a, like I'm sorry I didn't mean to interrupt.
00:25:05
P5: No no go ahead.
Paul: You made me think of, do you have an explicit process for revisiting and redirecting the AI to help you verify its output?
P5:
Yeah, I mean I think in some cases if I'm not too far down a path I can just go back and kind of confirm but then also kind of reset the chat saying like it's getting off topic, I want to focus more on X Y and Z and I'd like to have it based on these particular types of resources and just kind of pull it pull it back in focus a little.
Paul: Yeah. Okay. I know we're a little bit over time. Do you have five minutes to go?
P5: Oh. Sure. Sure. I didn't realize it.
Paul: No, no. I appreciate every second of our time together and I just want to make sure if I do go over that we are both okay with that.
00:26:16
P5: Yeah. No problem.
Paul: As an aside, you know what it's like when you're creating a guide and you're trying to fit it into a time frame. And the first three or four sessions I did came in really good. The session before you and now you, both user researchers and I've blown the time.
P5: I wonder if that's the guide or the interviewee.
Paul: But yes, something about cobbler children. I don't know. There's some aphorism that applies here. Anyway, I did want to get two themes worth of questions. I think we can wrap this up in five minutes if that's okay with you.
P5: No problem.
Paul: Want to think about and talk about the cognitive effects, lack of a better word. Do you think AI is changing how you approach solving problems?
00:27:10
P5: Yes. So I think it definitely has and I think it's all kind of learning the prompt engineering things. I think with the more experience you have doing prompts and getting the outputs and then figuring out I need to adjust my prompts to get more specific about what I'm looking for. I think that has had a big effect on my own writing. I think it's going to cause me to focus and
Paul: How?
P5: kind of be more specific. Not really going into extraneous details but really focusing on what matters. If that makes sense.
Paul: It sounds like that's helping you improve your writing. Is there anything else that you feel like you're getting actively better at when you can employ AI? And the flip side, is there anything you worry about or feel like you're getting worse at?
00:28:28
P5: So, I think another thing is just kind of like the way my problem solving works. I mentioned working through kind of big life events and doing it that way. I think it's kind of helped me structure that approach to doing it. Maybe you kind of stop and consider what am I not considering. Yeah, I think it has a positive impact on that.
I've been looking out for, I guess, the effect that people discuss about how it kind of detracts from your own thinking or your own creativity to rely on AI to produce outcomes. And I don't think that's happened to me. And I think if it indeed has not happened, I think it may have something to do with how I see the AI interactions.
00:29:55
P5: It's not sort of the replacement, but as an assistant, kind of a sounding board, if you will.
Paul: Yeah, that's an interesting point. It's something I've been kicking around. I do have a background, graduate background in, well my degree was social and personality psychology but I was working in a human factors safety operational safety focused program but I've still got a little bit of that, are there stable traits or pseudo stable traits, and air quotes, that mediate how people interact with AI, approach it, problem solve with it and upskill or downskill.
P5: There we go.
Paul: So I'm early days in thinking about that. Okay, last questions are more about emotional, I don't know why my brain just slipped into neutral but it did. They're more about people's emotional reactions. So at a high level, how does the increasing presence of AI in work world and personal world make you feel?
00:31:15
P5: So actually in the work world I feel like the advent of AI, particularly in the way that research has been accomplished, I think that's the most energizing change to the discipline that I have experienced since I started the job some 30 some odd years ago. So I think the changes that's brought about, it actually energized me quite a bit just like oh we can do stuff this way now. Just all the, a very very shiny new and very useful tool in the toolkit.
Paul: Right. Going further with the emotional reactions and then digging into hopes and fears. What do you think is the most significant breakthrough or positive outcome that AI might enable within the next decade? And in other words, what are you most hopeful that AI can do for the world soon?
00:32:52
Paul: Don't worry, we have the dark side of this question coming.
P5: Yeah. I mean, most positive thing that could possibly, I'm just hoping it can just accelerate progress in a lot of different areas. I think just as it's kind of helped me with thinking about personal issues and helped get efficiency gains in my own work, I think of all the potential gains that can occur in say medicine, technology, and like that. I think that where it said it would be positive, assuming it is, then getting there faster might be a good thing.
Paul: Yeah. Okay. So, more of an accelerate progress is the hope that you expressed.
P5: Yeah. That's the way I would prefer to think about it I guess.
00:34:04
Paul: How about your biggest concerns or fears related to the increasing presence of AI in pretty much every aspect of work and personal life?
P5: Yeah, I mean I think the work aspect is going to be a huge thing.
I mean there's already great concerns about replacement. And I think the people who are actually hands-on with the work kind of understand that it's not there at least not yet to do kind of full replacement.
Whether or not leadership understands that I don't know. So I suspect what's going to happen is there's going to be a dramatic overreaction to the introduction of AI. It's going to make a lot of changes to organizations that probably shouldn't occur. And at some point, we'll probably have a massive overcorrection.
Paul: Got it. Yeah. Is there anything you think I should have asked you during this chat, but I didn't?
P5: Yeah, I mean I think that was pretty well covered actually.
Paul: Hey, I really appreciate you sharing your experiences with me today.