P10: Survey Data and Session Summary
Survey Responses
| Question | Response |
|---|---|
| Age | 55-64 |
| Education | Master's degree |
| Role / Level | Manager |
| Job title | UX Manager |
| Years of experience | More than 25 years |
| Organization description | Financials (banking, insurance, investment funds, and consumer finance) |
| Industry | Insurance |
| Individual AI tools used | Media creation (images, audio, video), Data analysis and synthesis, Code generation and completion |
| Organizational AI tools | Customer-facing chatbots or virtual assistants, Internal search and knowledge summarization, Predictive analytics for business forecasting, Code generation and developer tools |
| AI adoption involvement | Contributed to technical design, requirements gathering, or implementation; Provided subject matter expertise, requirements, or end-user feedback |
| Biggest work win with AI | I have found chat bots to be useful when ideating management activities for my team. For example, I used ChatGPT to help me formalize more structure for my one on ones with each team member. |
| Biggest work disappointment with AI | I find visualizations very underwhelming with chat bots. For example, once I had ChatGPT create an annual schedule of events. It returned this as a list. When I asked it to create a PowerPoint slide with a Gantt chart view, it gave me unusable nonsense. |
| Organization's biggest AI success | For all of the investment and hype where I work, I'm not sure that I can say where the biggest gains or wins have been. They have rolled out tools, but their adoption has struggled with users. I suppose the DevOps team has had the most success thus far with operational coding tasks. |
| Organization's biggest AI challenge | (No response) |
Background
P10 is a UX manager at an insurance company with more than 25 years of professional experience. He holds a master's degree and leads a team of UX practitioners that he has been working to shift from tactical Figma-and-wireframes work toward strategic discovery research and stakeholder influence. His first exposure to AI was recreational, generating absurd images with DALL-E, but his move into regular use was not voluntary: his employer rolled out generative AI tools and began monitoring how often employees engaged with them.
P10 describes himself as "somewhere between a Jared Spool crank and the 'end is near' sign-waving guy" when it comes to AI. He views large language models as fundamentally "word association machines" and harbors deep concerns about the economic motivations driving corporate AI adoption. Yet he simultaneously acknowledges that AI has become "surprisingly seductive," a word he uses three times during the session, and that he relies on it more than he ever expected. His organization uses ChatGPT and has recently begun adopting Claude, including Claude Code for the development team.
What makes P10 distinctive in this study is the sharpness of the contradiction between his ideological skepticism and his practical dependence. He is not gradually warming to AI; he is being pulled into it despite active resistance, and he is aware of and articulate about that pull. His use is deliberately constrained to what he calls "low-risk ideation," a domain where there are no wrong answers and therefore no trust is required.
Key Findings
Low-Risk Ideation: Trust Through Scope Restriction
P10's approach to AI trust is unique in this study. Rather than developing verification practices or cross-referencing strategies, he has eliminated the need for trust entirely by restricting his use to domains where accuracy is irrelevant. This strategy was born from a specific incident: he asked ChatGPT to generate a December calendar as a Word table, and it returned dates that didn't align with the actual days of the week. The mundanity of the failure (this was a calendar, not a complex analysis) is precisely what made it so corrosive to his trust.
His resulting framework is simple: use AI only where there is "no quote unquote wrong answer." Idea generation, brainstorming management approaches, exploring training options. Never data processing, never anything requiring factual accuracy. He acknowledges the circularity of his position: if he did need AI for a large data task, he would "probably ask it to say, 'Okay, process this 10,000-row file but then also tell me how should I double-check your work,' which is circular reasoning in the worst sort of way."
"I'm not 100% sure that I would trust it if I asked it what 2 plus 2 is half the time."
The Small Council: AI as Ideation Sparring Partner
P10's most productive use of AI is as a conversational thinking partner for management challenges. His metaphor for this relationship, "a small council of different personalities or different backgrounds or different perspectives," captures something that goes beyond simple brainstorming. He values AI not for generating answers but for generating pressure against his default assumptions.
The clearest example is his 12-month training plan. Tasked with shifting his UX team's skill set from tactical to strategic, he used ChatGPT iteratively: posing the problem, reviewing suggestions, honing the plan himself, and then returning to ChatGPT for a feasibility check. He says he got it done "much more quickly and much more comprehensively than I ever would have been able to by myself through just what I'll call old research methods now at this point of me just googling things and talking to people." The back-and-forth pattern, not the single-shot query, is what makes this work for him.
"It's almost kind of like having a small council of different personalities or different backgrounds or different perspectives to kind of push against my default way of doing things."
The Gun to My Back: Mandated Adoption and Its Discontents
P10's organization does not merely encourage AI use; it monitors it. Employees are tracked on how often they interact with ChatGPT, and they are periodically asked to demonstrate what efficiency gains they've achieved. P10 describes this as adopting AI "with a gun to my back." The mandate creates a perverse dynamic: the organization simultaneously pushes usage and asks employees to retroactively justify the investment, essentially requiring them to narrate their own productivity gains to validate the licensing costs.
What the organization does not do is establish any norms around disclosure. There is no expectation that reports or deliverables indicate when AI was involved in their creation. P10 finds this absence notable: the incentive structure rewards demonstrating that you used AI, but has nothing to say about transparency in the output.
"So it's a little bit with a gun to my back that I find I'm dipping my toes into it deeper every day."
The Seductive Pull: Skepticism vs. Dependence
The word "seductive" recurs throughout P10's session and captures a dynamic that none of the existing codebook themes fully describe. He began as a skeptic, influenced by Jared Spool's dismissals of AI as a "word association machine" and "magic trick." He remains ideologically skeptical: he worries about hallucinations, environmental impact, executive greed, and the erosion of human capability. And yet he finds AI "blending into my daily life at work and at home more often than I ever probably thought it would or should."
His boss has noticed the effect. Without any explicit conversation about AI, she has commented that P10 is "broadening your perspectives, you're trying new things." The improved output is real. But P10 frames the improvement as a loss as much as a gain: "How much of myself am I losing through this process because I'm just lazily relying on it now to provide me with all of the perspective."
"On top of all that, like I said, seductive is the word that I use in terms of, yet despite all those misgivings, I find it blending into my daily life at work and at home more often than I ever probably thought it would or should."
The Generational Split: Drugs, Cheating, and AI
P10 offers a three-generation framing of parental anxiety that no other participant has articulated. His parents worried he was on drugs. He worried his kids were crowdsourcing their homework. And now he watches his younger son "ChatGPT'ing and Copiloting his way through certain classes" and wonders what the purpose of education is when AI has "given rise to the legitimacy" of shortcutting.
The generational split is visible in his own family. His older son, who graduated from OSU in 2021, treats AI as a novelty at work. His younger son, who just finished his first year at Miami, is "absolutely just absorbed by it." P10 doesn't know what to tell either of them. He cannot identify an "AI-proof" field of study, and he worries about what happens "when you kick that crutch out from underneath him" and whether his son is "capable of doing anything."
"And now I feel like AI has almost given rise to the legitimacy of that now in a way."
Emerging Themes
| Theme | Description | Key Quote |
|---|---|---|
| Trust Calibration | Deliberate practices for evaluating how much to trust AI output | "I'm not 100% sure that I would trust it if I asked it what 2 plus 2 is half the time." |
| Hallucination Frustration | Disappointment at AI confidently producing fabricated content | "Like if November started on a Monday, it had it starting on a Tuesday where none of the dates lined up." |
| Knowledge Displacement | Concern that AI dependency erodes foundational knowledge and judgment | "How much of myself am I losing through this process because I'm just lazily relying on it now to provide me with all of the perspective." |
| Apprenticeship Erosion | Concern that AI prevents developing foundational skills through hands-on experience | "And now I feel like AI has almost given rise to the legitimacy of that now in a way." |
| AI as Sounding Board | Using AI as a conversational thinking partner with user retaining decision ownership | "It's almost kind of like having a small council of different personalities or different backgrounds or different perspectives." |
| Disclosure Norms | Emerging standards about when and how to attribute AI contributions | "So let's say for example when we turn in a report, 'Last quarter, portions of this were created through generative AI means,' nothing like that." |
| Job Security Anxiety | Fear that AI adoption is driven by executive cost-cutting at workers' expense | "I think the legacy of this phase of AI, this gold rush mentality, is just going to really be exposing the greed of C-level executives in the world right now." |
| Organizational AI Adoption Challenges | Organizations struggling to find an effective path forward with AI, from mandating usage and monitoring frequency to overrelying on AI for critical business processes | "Are we building this house of cards now that's just eventually going to doom the company?" |
P10's organization represents one of the more explicit mandate-and-monitor approaches in this study. Employees are not simply encouraged to use AI; their interaction frequency is tracked and they are periodically asked to demonstrate efficiency gains. This creates a feedback loop where adoption is decoupled from utility: P10 uses AI more not because he finds it more useful, but because the organization requires evidence of usage. The absence of any corresponding disclosure norms means the organization incentivizes AI use but is silent on transparency, a combination that P10 notes without fully resolving.
"Then the next chapter is they rolled [generative AI] out at work and basically told us you better start using it. And they even, they don't monitor what we use, what we chat with it about, but they monitor how often we chat with it."
P10's trust calibration strategy is distinctive among the 10 participants coded so far. Where others have developed verification practices (cross-referencing, asking AI to check its own work, testing with known answers), P10 has taken an avoidance approach: he restricts his use to domains where trust is unnecessary because there are no objectively wrong answers. This is calibration through scope restriction rather than through validation, and it was directly shaped by the December calendar hallucination.
"I think ultimately my personal strategy is try not to use it in spaces where there is high risk and or where there is an absolute need for 100% accuracy and then just stick to it more where the spaces are of, like, I know I keep saying it, but the idea generation where there really is no wrong answer per se, it's just input for me."
The December calendar incident serves as P10's foundational AI trust story. He asked ChatGPT for a simple Word table calendar and received one with misaligned dates. The triviality of the task is what made the failure so damaging: if AI can't get calendar dates right, what can it be trusted with? This single incident shaped his entire approach to AI use, pushing him into the "low-risk ideation" framework that defines his current relationship with the technology.
"I've also run into some situations with ChatGPT where it will just obviously hallucinate something. The chief example I always have of that is there was a time where, literally, it was last November, I needed to make a calendar for like a newsletter that would have been the month of December and I just didn't feel like making the Word table. So I asked it, 'Make a Word table that's a calendar for the month of December with two rows for each date,' that sort of thing, and it messed the dates up. Like if November started on a Monday, it had it starting on a Tuesday where none of the dates lined up."
P10 voices knowledge displacement at two levels. Personally, he worries about his own perspective atrophy: the more he relies on AI for ideation, the less he exercises his own capacity for lateral thinking. Generationally, he watches his younger son navigate college with ChatGPT and Copilot and wonders whether the ability to think critically will survive dependence on tools that do the thinking for you. His "kick that crutch out" framing captures the fear that AI dependency is not additive but substitutive, that it displaces rather than augments human capability.
"And when [his son] hits the job market, is he suddenly going to find that if he, for whatever reason, ChatGPT falls out of vogue, it becomes illegal, something unforeseeable happens. Does he, when you kick that crutch out from underneath him, is he capable of doing anything? Is anyone capable of doing anything?"
P10's three-generation framing of parental anxiety (drugs, homework crowdsourcing, AI) provides a historical arc that contextualizes AI-assisted learning as the latest in a series of escalating concerns about whether young people are actually developing competence. His distinctive contribution is the observation that AI has conferred a kind of legitimacy on shortcutting that previous forms of academic dishonesty never achieved: it's no longer cheating if everyone is doing it and the institution encourages it.
"My parents' biggest worry was, am I on drugs? When my kids were in school, like in high school, five, six years ago, my biggest worry was like, are they cheating off of others? Everyone seemed to be crowdsourcing all the homework. And I'm like, is anyone actually learning anything other than just how to get by in an ethically dubious way? And now I feel like AI has almost given rise to the legitimacy of that now in a way."
P10's "small council" metaphor is among the most vivid descriptions of the AI-as-thinking-partner relationship in this study. He values AI not for its answers but for the breadth of perspectives it can surface, perspectives that push against his default assumptions and broaden the approaches he brings to his team. His iterative workflow, posing a problem, reviewing AI suggestions, honing them himself, and returning for a feasibility check, demonstrates a genuine dialogue rather than a one-shot generation pattern.
"It's almost kind of like having a small council of different personalities or different backgrounds or different perspectives to kind of push against my default way of doing things."
P10's organization occupies a distinctive position on the disclosure spectrum: it actively mandates and monitors AI usage but has established zero norms around disclosing AI involvement in deliverables. The incentive structure actually runs counter to disclosure, since employees are rewarded for demonstrating AI-driven efficiency gains but are never asked to indicate which outputs were AI-assisted. P10 notes this gap matter-of-factly rather than critically, suggesting it hasn't yet risen to the level of a felt problem.
"So let's say for example when we turn in a report, 'Last quarter, portions of this were created through generative AI means,' nothing like that."
P10's version of job security anxiety is less personal and more structural than most participants'. He does not express fear for his own position; instead, he articulates a systemic critique of the economic logic driving AI adoption. His argument is that C-level executives will adopt any technology that allows them to claim credit for headcount reduction, regardless of whether the technology actually delivers on its promises, and that the workforce will bear the consequences of that bet.
"I think the legacy of this phase of AI, this gold rush mentality, is just going to really be exposing the greed of C-level executives in the world right now in that they would put their faith in anything which will allow them to say, 'I'm the person who cut staff expenses by 30% and as a result you stakeholders all got higher dividends and I got a bigger bonus, so everybody wins.' But that's just not true."
Interview Transcript
00:00:00
Paul: I'd like you to tell me the story of your first "oh wow" moment with AI. So, what was going on that made you try AI, and what happened that made the light bulb turn on for you?
P10: It's probably kind of two different answers because I tried it for a while before I really had the "oh wow" moment. And so I had been aware of it and just felt like, honestly, I think the first times I ever dipped my toe into any of it would have been like the DALL-E image generation app just for fun, just purely just to see if I could give it ridiculous prompts and it would throw out these weirdly morphed figures with distorted hands and faces and stuff like that. It was just kind of a hoot just to see what you'd get back at random with random text prompts. So that was just kind of fun.
00:00:50
P10: It gave me a little bit of an insight into what it might be able to do that might be useful down the road. But what might be actually useful down the road, and I just started hearing more about generative AI.
00:03:58
P10: This is probably 2023, on just some podcast chatter and things like that. And so then I remember the first use cases for me turned out to be personal uses for travel related things, helping me plan. I'm going to go to Toronto for the weekend, what should I look for, how should I get there, where should I stay, things like that. And I found it to be reasonably helpful. Then the kind of all the news headlines around hallucinations kind of put a hitch in my step. And then obviously the environmental impact of things put a hitch in my step. So, it kind of drew me back from it a little bit.
Then the next chapter is they rolled [generative AI] out at work and basically told us you better start using it. And they even, they don't monitor what we use, what we chat with it about, but they monitor how often we chat with it.
00:04:53
P10: And then they kind of look for, all right, what have you done lately that's improved efficiency using ChatGPT, for example, and now Claude. So it's a little bit with a gun to my back that I find I'm dipping my toes into it deeper every day.
Paul: That's a common theme I've heard. So, fast forward to today. What other tools have you tried and abandoned and what tools have you tried and are sticking with?
P10: ChatGPT, like I said, is basically mandated at work. So, I use that quite a bit. I lead a team of people. And I'll say I have settled into getting comfortable using it for what I call low-risk ideation where I just don't know. Let's say for example, I'll give you the perfect example.
00:05:46
P10: This year, the one-on-ones that I used to conduct with the team members, some were much better than others, but there wasn't a whole lot of consistency or structure to them, and I just felt like overall they could be better as a group. So, I turned that question over to ChatGPT and just asked for some best practice methods there. And it gave some pretty decent ideas. I'll say it gave me the good starts of ideas and then I would hone them myself and then bring it back to ChatGPT for kind of like a final "does this sound like a workable plan" and then say yes or no.
I've also run into some situations with ChatGPT where it will just obviously hallucinate something. The chief example I always have of that is there was a time where, literally, it was last November, I needed to make a calendar for like a newsletter that would have been the month of December and I just didn't feel like making the Word table.
00:06:41
So I asked it, "Make a Word table that's a calendar for the month of December with two rows for each date," that sort of thing, and it messed the dates up. Like if November started on a Monday, it had it starting on a Tuesday where none of the dates lined up.
And I was just like, okay, like I said, so that further cemented my whole thing of low-risk ideation opportunities. I'll give it a fair crack. We've started working with Claude a little bit more at work now, particularly Claude Code, and now I don't know if you saw, even they just announced Claude Design that Anthropic is releasing now. So it's a little bit of a stretch from what I saw in the headline. Take a look. TechCrunch has a good article about it. They call it Claude Design, but really it seems more like it's their version of Canva versus being their version of Figma. So, design's a little bit of a stretch, I think, but I think the fact that they called it that gives a little bit of insight into where they want to go in the future.
00:07:36
P10: So, we'll see.
Paul: That's interesting. I'd like to switch gears a little bit. The stuff that you've been talking about has been really helpful and insightful, but I want you to think about one thing that you do regularly at work or in your personal life that AI has changed the most and walk me through what it used to be like versus what it's like now.
P10: Let me make sure I understand the question. So, just a task that I used to handle in an old way and now I handle it in a new way kind of thing.
Paul: What task or activity has AI impacted the most?
P10: I would say it's that idea generation. I don't do a whole lot of conveyor belt type work where it's, oh, I used to have to manually go through this Excel spreadsheet and sort and filter and pivot table the whole thing, and now it just does it for me.
00:08:26
P10: I don't have a whole lot of opportunity like that. So again, it's really just more sort of a, I'll say this. The other thing that I know 90% of people at work use it for, you'll probably hear this an awful lot in your research, is "oh, I wanted to write an email and sound more professional," things like that. I don't do that very often. Maybe again, I'm definitely old school at this point, but I have always kind of prided myself on my ability to write pretty well. And so I almost kind of take it as an affront towards asking it then to like edit or clean up my words or things like that. I'm just loath to rely on it for that, but I know a lot of people do. So for me it's more the idea generation. Did that answer your question?
Paul: Absolutely. So, what prompted you to bring AI into your idea generation and what's working well?
P10: The detraction I'll say is it's almost, the word I've used with my wife about it is that it is surprisingly seductive in that I might be overrelying on it suddenly. Have I gone from, because I'd always been a little bit of a kind of a Jared Spool skeptic about, hey, this is just a word association machine, this is like a magic trick, this isn't much substance to it, to now suddenly I do use it a lot more than I think I ever envisioned that I would. And in that ideation space, it's been a lot of help just for me to broaden the approaches that I bring to the work that I've got to do for the rest of my team. So, that's been helpful for me. It's almost kind of like having a small council of different personalities or different backgrounds or different perspectives to kind of push against my default way of doing things.
00:10:11
P10: But again, on the detraction side, I kind of worry like how much of myself am I losing through this process because I'm just lazily relying on it now to provide me with all of the perspective. So yeah, that kind of concerns me.
Paul: Do you feel like it's changed how the work that you produce using idea generation is evaluated by others or what people expect from you?
P10: Oh yeah, for sure. I think my boss has definitely, we've never discussed it specifically like, "Hey, are you getting this from gen AI?" But she has noted several times where it's like, "I can definitely see you're broadening your perspectives, you're trying new things." So it has been, you know, objectively beneficial I'd say from my job performance from that standpoint, correct.
Paul: What's been your biggest win with AI at work so far?
00:11:07
Paul: And then on the flip side, what's been the biggest disappointment or surprise failure? This could be with your team or with the organization and how it's affected you.
P10: Sure. Absolutely. I'll say the biggest win for me was, we are attempting to expand what our user experience team does at [organization] right now in a way that's a lot more strategic versus tactical. So we are attempting, we have been attempting for probably the last 18 months now to really shift left into more of the discovery research and teeing up the missed opportunities and the latent risks that our business partners are sitting on that they need to then put onto their road maps and do something about in order to maximize their business. Versus just creating like reactive wireframes whenever they happen to think of us and ask for us to do something for them on a screen level. So it's been very, that required an awful lot of shift in the soft skills for the team.
00:12:03
P10: So a lot of them are kind of what I would call purists. They see themselves as, "I work in Figma a lot. I sometimes do some research and that's about it." And so now stakeholder management, influencing without authority, communicating to different levels of audiences, things like that. They all needed that. ChatGPT really did help me with coming up with a 12-month comprehensive training plan that included both paid and free sources. I really went back and forth with it for a while on this one to really kind of hone this into something that was, that thus far has proven to be valuable and also feasible from both a cost and a time perspective. So, I was able to get that done much more quickly and much more comprehensively than I ever would have been able to by myself through just what I'll call old research methods now at this point of me just googling things and talking to people.
00:13:01
P10: So, it is definitely that ability for it to amalgamate all that information into exactly the question that I pose to it. That's probably been the biggest transformational win for me thus far. I'll say the detraction is not so much in my own space but like you said at the organizational level, I do worry about the inaccuracy. They have a community of practice for gen AI that has like monthly presentations. They had one just yesterday and they showed kind of a warning case of, if you're using Copilot and you don't use the deep thinking model instead of the default model, it will, if you hand it like a form and say "pull all the fields of data out of this form," it'll stop like two thirds of the way through because it's trained to say people like me more when I give you an answer quickly. And so it just cuts it off short and you don't really, if you didn't notice, you wouldn't even know, like, "Oh, I've only got like two thirds of the data set here to work with," that kind of thing.
00:14:04
So I worry more about like what's going to happen the first time we get sued over a claim that we deny that we shouldn't have or something like that, that there's going to be a swing and a miss here. Are we overrelying on it when we're giving away so much of our processing, our manual data entry processing capabilities now over to AI? And I just wonder like are we building this house of cards now that's just eventually going to doom the company?
00:15:45
Paul: The next question is about trust and disclosure and verification. How do you decide whether to trust what AI gives you? Everyone's had that instance where it comes out with something that you know is wrong. So, what are your detectors? What tips you off that something might be wrong? And what do you do?
P10: I honestly try to hedge against that by only asking it things where there is no quote unquote wrong answer, because like I said, because of that December calendar incident. I'm not 100% sure that I would trust it if I asked it what 2 plus 2 is half the time. So if I have a big hairy task that again would require processing tens of thousands of rows of data, I don't know that I would, I would probably ironically, and again this goes back to that concern that I have of am I now suddenly seduced into overrelying on it, but I would probably ask it to say, "Okay, process this 10,000-row file but then also tell me how should I double-check your work," which is circular reasoning in the worst sort of way. But yeah, I think ultimately my personal strategy is try not to use it in spaces where there is high risk and or where there is an absolute
00:17:02
need for 100% accuracy and then just stick to it more where the spaces are of, like, I know I keep saying it, but the idea generation where there really is no wrong answer per se, it's just input for me.
Paul: Are you seeing any norms forming at work or in your personal life around when and how to disclose that someone used AI?
P10: No, not at work at least. Let's put it that way. It seems to be kind of just this gold rush mentality of we're all expected to use it and then they sometimes ask us like, "What benefits have you been getting from it lately?" Just I think as a way really just to justify the cost of the licensing that they do. But beyond that, if you mean like any sort of like disclosure, so let's say for example when we turn in a report, "Last quarter, portions of this were created through generative AI means," nothing like that.
00:17:59
Paul: Are you seeing any differences across roles in how people trust and double-check what they produce with AI? So you've got, let's think devs versus your team versus maybe product owners, product marketers.
P10: Devs are big into it. Devs would, I believe the devs and the ops guys in our organization would drive right over a cliff if Claude told them to. So they are very, they trust it. They're enthusiastic about it beyond words. They're the true evangelicals for this sort of stuff. Business people obviously a lot more suspicion, I think, which starts out which is fear for their own jobs. Is this thing going to replace me? And then the kind of like, "Well how could it replace me? It doesn't even do it right. I do it right. It doesn't know what it's doing." That kind of thing. So there's a lot more of that I think on the business side. So yes, within IT, an awful lot of enthusiasm, very little need to double-check the work.
00:19:09
Paul: How does this increasing presence of AI in the world and work and personal life, how does it make you feel?
P10: Anxious.
Paul: Tell me more about that.
P10: Anxious. Just because of the fact it just seems like it's come on too fast, too strong, too quickly. And without anybody really understanding any of the ramifications, governance, ethics, environmental concerns, economic concerns. Again, when the Sam Altman types of the world will talk about this golden utopia in the future where nobody has to do any sort of like drudgery work anymore. It's like, well, no offense to anybody, but the economy runs on an awful lot of people doing drudgery work. And what happens when all, you're just going to say these people just live a carefree life with no job anymore because there's nothing for them to do and they just have this limitless free time now because all that overhead has been lifted from their lives.
00:20:21
P10: So I just don't know. I feel like it's just, I've said to my wife in the past, I think in the future, unfortunately, when we look back 30 years from now on this era of technological advancement, I think the legacy of this phase of AI, this gold rush mentality, is just going to really be exposing the greed of C-level executives in the world right now in that they would put their faith in anything which will allow them to say, "I'm the person who cut staff expenses by 30% and as a result you stakeholders all got higher dividends and I got a bigger bonus, so everybody wins." But that's just not true.
So yeah, I'm like I said, I'm a little bit of a, somewhere between a Jared Spool crank about the thing and then also the "end is near" sign-waving guy. But on top of all that, like I said, seductive is the word that I use in terms of, yet despite all those misgivings, I find it blending into my daily life at work and at home more often than I ever probably thought it would or should.
00:21:33
Paul: What about on the opposite side of the coin? A big breakthrough or promise that in your most optimistic times you feel like AI might enable in the next 5 to 10 years?
P10: I would say, again, this is not my area of expertise, but since it's got the ability to, data has become the currency of the world for a while now at this point, and you end up with legitimately socially beneficial uses for that data. Could this be a tool which could help people, let's say for example, in the medical research field, crunch through data sets much more quickly than ever? So potentially life-saving drugs get to market more quickly. Now again, that flies in the face of the, "Yeah, but what about the hallucinations and the errors?" So if they can again train models to say, "Take your time. I'll wait 2 minutes if you don't mind."
00:22:37
P10: And if you get it 100% right, then that just seems to me to be a space where, yeah, there could be benefit to the planet, particularly around harnessing all that data into something that's actually useful for people versus just simply a profit leveraging mechanism.
Paul: What about the next generation? And you as a manager are in a mentoring role. So what about people entering the field who've never done the work without AI? What's your concerns about that?
P10: I mean, I watched both my kids when I was in school. My parents' biggest worry was, am I on drugs? When my kids were in school, like in high school, five, six years ago, my biggest worry was like, are they cheating off of others? Everyone seemed to be crowdsourcing all the homework. And I'm like, is anyone actually learning anything other than just how to get by in an ethically dubious way? And now I feel like AI has almost given rise to the legitimacy of that now in a way.
So now I've got one kid still left in school at college and I know he's ChatGPT'ing and Copiloting his way through certain classes and I'm like, what does that mean for the purpose of education?
00:23:56
P10: And when [his son] hits the job market, is he suddenly going to find that if he, for whatever reason, ChatGPT falls out of vogue, it becomes illegal, something unforeseeable happens. Does he, when you kick that crutch out from underneath him, is he capable of doing anything? Is anyone capable of doing anything? And I've seen some articles about this idea of just the stagnation of human capabilities. The more we lean on something that can do something so comprehensive for us, or at least that we believe to be so comprehensive for us. So that gives me concern for the future. I don't know what to tell either one of my kids about what's an AI-proof, if there even is such a thing. What's an AI-proof field of study for you, field of work for you? Or how should you again responsibly integrate it into your work in a way that's not eroding your own ability to think critically and put two and two together.
00:24:56
P10: My older son graduated '21 from OSU and like you said really, he uses it at work now but it's more of a novelty to him. And my younger son just finished his first year at Miami and he's absolutely just absorbed by it. And so you're right, very very different experiences in a short amount of time, and that goes back to that point I said, I think my biggest concern is like everybody is just full steam ahead down the road on this thing without thinking of any of the governance or the future ramifications or anything that needs to be thought about in a new capability like this.
00:25:53
Paul: What's the biggest gap between what AI can do for you right now and what you actually need it to do?
P10: I would like to see it, ChatGPT starting to do a better job of this, but basically not just give me blanket answers and not just even give me a blanket answer and then say, "Well, would you like me to do this next? Would you like me to do that next?" But instead really move into the, "Hey, have you considered this?" It knows what I'm talking about. It's keeping track of what we've talked about even in the past. And so again, ChatGPT is a little bit better, but it's also still kind of a dumb word association engine. So, for example, I worked on a discrete project related to ServiceNow last year. That project's over, but during that time, I asked it for some ideas along the way. Now, to this day, it will then give me an answer to a question that I ask on a totally unrelated topic and say, "Hey, would you like me to like mold that to like the ServiceNow use case?" And I'm like, I don't know how to tell it,
00:26:56
P10: like, stop telling me about that. It doesn't matter anymore. So, I would really like for it to, if it truly wants to be more of kind of a consultative partner for me, start not just answering my discrete question, but uncovering the questions that, hey, this is what you should be asking next.
Paul: I had a similar experience where I was using ChatGPT for the most part this time last year to understand trading and options and then I moved to some other topics and it kept, I had to go in and edit what was then ChatGPT's memory file and manually cut out things I didn't want it to refer to because it was having that same issue you described.
00:27:49
P10: I asked it one time, by the way. I asked ChatGPT one time, "Are you able to, can I instruct you to forget chats that we've had?" And it said that yes, you can. So if I ever asked it something medically sensitive that I never wanted it to refer to ever again, it said at least it said that it will drop that from its future chats. Yet to be seen.
P10: I'm trying to learn just in order to keep head above water because things are charging forward so fast.