P6: Survey Data and Session Summary
Survey Responses
| Question | Response |
|---|---|
| Age | 45-54 |
| Education | Master's degree |
| Role / Level | Individual contributor |
| Job title | Senior Technical Product Manager |
| Years of experience | 16-25 years |
| Organization description | We are a massive, global credit card transaction processor |
| Industry | Consumer Finance |
| Individual AI tools used | Text generation (creating documents, emails, summaries), Search and information retrieval, Data analysis and synthesis, Workflow automation and process automation, Code generation and completion, Building/training/tuning machine learning models |
| Organizational AI tools | Customer-facing chatbots or virtual assistants, Internal search and knowledge summarization, Security/fraud detection/anomaly monitoring, Customer recommendation systems, Predictive analytics for business forecasting, Content moderation or filtering systems, Code generation and developer tools |
| AI adoption involvement | Contributed to technical design, requirements gathering, or implementation; Led the project, strategy, or initiative (project manager, initiative owner); Provided subject matter expertise, requirements, or end-user feedback |
| Biggest work win with AI | I have been working on various systems that effectively encrypt sensitive data allowing for affinity mapping and thematic discovery without the risk of exposing PII. I have also built a translation engine that allows developers to translate UI components and other design elements accurately, but also flags when human review is needed. |
| Biggest work disappointment with AI | My biggest disappointment continues to be our preference for Copilot as a business tool. I think there are much better tools and systems out there. I am also frustrated at the fact that AI development outside the foundational and proprietary AI we use for fraud detection (tools, agents, etc) is very siloed instead of being centralized and made accessible to all. |
| Organization's biggest AI success | We have been using AI for years as part of our fraud detection strategy. I think people have a tendency to assume AI was uncommon up until consumer AI became prevalent when, in fact, it is literally the only reason people's banking credentials have not yet been stolen. AI allows us to provide real time fraud prevention, detection, and prediction, which is tremendously important. |
| Organization's biggest AI challenge | Our biggest challenge is that fraud is a constantly changing landscape and we have to work hard to stay ahead of the curve, more so now that publicly available AI is becoming more sophisticated. That said, I often say that the only way to beat the machine is to have the bigger, badder machine. |
Background
P6 is a Senior Technical Product Manager at a large global credit card processor with 16 to 25 years of professional experience. She holds a master's degree and works as an individual contributor, though her role spans product management, tool building, and data science collaboration. At the time of the interview, she was transitioning to a new position within the same company.
P6 occupies a distinctive position in this study's sample: she is both a builder and a consumer of AI systems. Professionally, she leads the development of internal AI-powered tools, including fraud detection rule engines, a translation confidence system for UI localization, and a PII-encoding engine for thematic analysis of customer contact data. Personally, she uses consumer AI tools for everything from trip planning to learning new software platforms. Her survey responses reflect the broadest tool portfolio in the study so far, including building, training, and tuning machine learning models alongside the more common text generation and search uses.
What makes P6's perspective particularly valuable is her vantage point from inside the "invisible" AI layer. She works on the fraud detection systems that most consumers never think about, then goes home and uses the generative AI tools that dominate public discourse. This dual perspective gives her a sharp read on the gap between what AI actually does in practice and what most people think it does.
Key Findings
Trust as System Design, Not Personal Practice
Most participants in this study describe trust calibration as a personal habit: cross-checking outputs, applying domain knowledge, scaling verification to the stakes of the question. P6 does all of that, but she has also gone a step further by engineering trust calibration directly into the tools she builds. Her translation engine assigns a confidence rating to every output: ratings of 95% or above can be used without review, anything below that threshold requires human oversight, and anything below 70% is routed to the external translation partner.
This represents a qualitative shift from individual trust practices to institutional trust architecture. The confidence tiers encode the same judgment that other participants apply manually, but they do so at scale and without relying on each user to make the right call about when to verify.
"Because we don't want people just translating things who don't understand the language, it also assigns a confidence rating. And our set confidence is if it's a 95% or above confidence rating you can roll with it."
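The tiered routing P6 describes reduces to a small dispatch rule. A minimal sketch follows; only the two thresholds (95% and 70%) come from her account, while the function name, labels, and interface are hypothetical illustrations:

```python
def route_translation(confidence: float) -> str:
    """Route a machine translation by confidence score.

    Tiers per P6's description: >= 0.95 ships without review,
    0.70-0.95 requires human oversight, < 0.70 goes to the
    external translation partner. Names here are illustrative.
    """
    if confidence >= 0.95:
        return "auto-approve"
    elif confidence >= 0.70:
        return "human-review"
    else:
        return "external-partner"

# Example routing decisions
print(route_translation(0.97))  # auto-approve
print(route_translation(0.85))  # human-review
print(route_translation(0.62))  # external-partner
```

The design point is that the verification decision is made by the system, not left to each user's judgment, which is what distinguishes this from the personal trust habits other participants describe.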
The Prompt Drift Problem
P6's central frustration with AI tools is not hallucination (fabricating content) but drift (gradually degrading quality over repeated use of the same workflow). She describes prompts that work beautifully for a hundred inquiries before slowly going off-target, and AI systems that "try to streamline" her instructions by cutting corners on complex edits.
This is a distinct failure mode from what other participants have described. Hallucination is a point failure: the output is wrong right now. Prompt drift is a temporal failure: the output was right and then stopped being right, with no clear signal about when the transition occurred. For someone trying to build repeatable AI-powered workflows, drift is arguably the more dangerous problem because it erodes confidence in the entire pipeline rather than in a single output.
"It works for a little while. It may work beautifully for a hundred inquiries using that prompt, but eventually it starts to drift."
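P6 does not describe a mitigation, but the failure mode she names — quality that was acceptable and then silently stopped being acceptable — is what a rolling quality monitor is designed to surface. The sketch below is a hypothetical illustration, not anything from her workflow; it assumes the caller can score each output somehow (spot-check ratings, an eval model, etc.):

```python
from collections import deque

class DriftMonitor:
    """Flag gradual quality degradation across repeated prompt runs.

    Compares the mean quality of a recent window against a baseline
    window captured when the workflow was known-good. The scoring
    source is assumed to be supplied by the caller.
    """

    def __init__(self, baseline_size: int = 20,
                 window_size: int = 20, tolerance: float = 0.1):
        self.baseline = []                     # first N known-good scores
        self.recent = deque(maxlen=window_size)  # sliding recent window
        self.baseline_size = baseline_size
        self.tolerance = tolerance

    def record(self, quality_score: float) -> bool:
        """Record one run's score; return True if drift is detected."""
        if len(self.baseline) < self.baseline_size:
            self.baseline.append(quality_score)
            return False
        self.recent.append(quality_score)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent data yet
        baseline_mean = sum(self.baseline) / len(self.baseline)
        recent_mean = sum(self.recent) / len(self.recent)
        return recent_mean < baseline_mean - self.tolerance
```

The point of the sketch is the shape of the problem: because drift has no single failing output, detecting it requires comparing aggregates over time rather than inspecting any one result.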
AI as Cognitive Counterweight
When asked how AI affects her problem-solving approach, P6 offered a remarkably self-aware answer. She identified tunnel vision on details as her "Achilles heel" and described deliberately deploying AI as a counterweight to that tendency. When she catches herself over-involved in one thread, she uses AI to broaden her field of view, generating the ten other threads she might not be looking at.
This is not the general-purpose productivity enhancement that most participants describe. It is a targeted cognitive intervention: she has diagnosed a specific limitation in her own thinking style and selected AI as the corrective tool. The self-awareness required to use AI this way, knowing not just what the tool can do but what you specifically need it to compensate for, may represent a more mature form of AI adoption than simply using it to go faster.
"I get bogged down in minutia. That is my Achilles heel... the time for me to use it is when I'm over-involved in one little thread because what it'll do is broaden me out and give me 10 different threads that I might not be looking at."
The Invisible Infrastructure Argument
P6 makes a case that no other participant in this study has made: that the public discourse about AI trust is fundamentally distorted because people only see the visible layer (generative tools, chatbots, social media manipulation) while remaining unaware of the invisible layer (fraud detection, logistics optimization, infrastructure AI) that they already depend on. She points out that the very reason people's banking credentials haven't been stolen is AI-powered fraud detection, but nobody thinks about that when they say "you can't trust AI."
This argument reframes the trust question. The issue is not whether AI can be trusted, since it is already trusted, invisibly, with some of the most consequential decisions in daily life. The issue is that the public conversation about trust is being shaped by the least consequential and most visible applications of AI.
"Something that is a major pain point to me is that at a consumer level, we don't necessarily have any insight into how this works for us and how it's a necessary thing. It streamlines logistics. It streamlines fraud detection on down the line. We're not looking at that."
Learning Past the Quit Point
P6 described using AI to teach herself R, Tableau, and even Newtonian physics, with a specific emphasis on what happens at the moment of frustration. When she hits the point where she would normally quit, she can ask the AI to explain the problem differently, use a metaphor, or walk her through it like she's in eighth grade. The result is not just completing the learning task but developing genuine interest in the subject matter.
The key mechanism here is adaptive persistence. Traditional learning resources (courses, documentation, Stack Overflow) present information at a fixed level. When the learner hits a comprehension wall, the resource doesn't adapt. AI does, and P6 reports that this adaptation sustains her through exactly the moments where learning typically fails.
"When I start hitting those barriers that have been there because I'm not a physicist, I can say, 'Hey, explain this to me like I'm in eighth grade. Can you use an example? Give me a metaphor for what you're describing here.' And the odd thing is coming away with the ability to explain this complex thing but also an interest in it."
Emerging Themes
| Theme | Description | Key Quote |
|---|---|---|
| Trust Calibration | Deliberate, ongoing practices for evaluating AI trustworthiness on a spectrum | "A computer cannot be held accountable and therefore it should not make managerial decisions." |
| Augmentation Not Replacement | Using AI to enhance existing activities rather than offloading tasks entirely | "Stop using it as a potential replacement for humans and use it as a way for us to manage this infosphere that we've built ourselves." |
| Knowledge Displacement | Concern that AI dependency erodes foundational knowledge and judgment across generations | "What happens when we're only going to the AI for the solution? What does it do to human ingenuity?" |
| Prompt Drift | Degradation of AI output quality over repeated use of the same prompt or workflow | "It may work beautifully for a hundred inquiries using that prompt, but eventually it starts to drift." |
| AI as Learning Partner | Using AI as a personalized tutor that adapts to the learner's level and sustains engagement past frustration points | "I literally used AI to teach me how to do R." |
| Corporate Tooling Gap | Mismatch between the AI tools an organization provides and what individuals need | "They have made Copilot available to everybody. I have thoughts on that. I hate Copilot." |
| AI as Cognitive Prosthetic | Using AI to compensate for a known personal cognitive weakness | "The time for me to use it is when I'm over-involved in one little thread because what it'll do is broaden me out." |
| Invisible AI | Recognition that AI is embedded in critical infrastructure without public awareness, distorting discourse about trust | "What they don't realize is that you kind of have to [trust AI]. It's already underneath so much that we rely very heavily on." |
Interview Transcript
00:00:00
Paul: What is the first generative AI tool you remember using?
P6:
I don't know if I actually put a finger on the fact that what I was using is AI, but I know that it has been underneath a lot of services and things that I used probably before I moved into an AI-forward area of my career.
00:04:04
P6: The one that I want to just pull out right away is to repeat the name Amazon because Amazon's been using it for a very long time to generate recommendations and things like that for us. Not necessarily to creatively generate anything. So we're kind of skirting the line a bit, but as far as my own kind of consumer usage, if it's like me monkeying around with AI, my brother-in-law showed me ChatGPT years, like during its first iteration, and he was using it pretty heavily. He's the dean of physics at [a university] and he was using it because he knew it was going to be a thing, and so he showed me and that was the first time that I was ever really playing with it. And I remember sitting at a dinner table with my sister and her husband generating images just because we could. They've gotten scary good. I mean, back in those days, they were fun, but they weren't good.
00:05:14
P6: Now they're good, which is frightening because that was only a couple years ago.
Paul: So let's skip forward to now. What tools are you using that have kind of stuck with you? You can address personal and work separately or blend it all together. It's up to you.
P6: For me the things that I kind of where I start to skirt the line with my job is, we obviously have for everybody at our business, we have a Copilot. But we're actually doing quite a bit of work because we have our own artificial intelligence engines, AI-based engines, and mostly these are not necessarily generative. Generative tends to be more on the creative side. These are things that are used to expedite various and sundry processes and workflows. When we're using those, we're predominantly kind of raw coding them in Python and other programs to make sure that they are functional.
00:06:28
P6: But the way that fraud detection used to work is there were a series of scenarios that you wouldn't want. Like a high-risk person would be somebody with most of their credit limits eaten up. They're from a high-risk country. It's purchase behavior you don't see. And so the machine used to work to align all these things. But to make it smarter and to make it so that people aren't seeing as many false positives, we've started to hone in on things like, Paul Sherman is traveling right now and he's going to Nepal. Did he check in at the airport? So before we reject all of your transactions, we're going to see did he make a purchase at O'Hare or at another big airport? And a lot of that is determined: are we immediately going to decline, or are we going to reach out to him and say, "Hey Paul, are you buying stuff in Kathmandu?" All of that is elegant and complex and it's all kind of homespun in various programming software as far as what we do on a daily basis.
00:07:50
P6: There's also a need for people who are fraud analysts. They're not programmers. They don't fixate on the back end. People who need to be able to implement new rules. Every time Paul's in Kathmandu, he doesn't want his transactions declined. That's going to necessitate a new rule. And so what we are using is artificial intelligence that helps to determine which of these rules should be applied to him and which should not, and what rules are we applying over other clients that we probably should or should not apply to him as well. And we need a system where fraud analysts can go in and click that into place. And so that's something we're working on.
Paul: Did I understand you correctly that you're working on internal tools for the fraud analysts?
00:08:57
P6: We are, yes, we are basically building them a suite of tools that makes it easier for them to do their work, to automate certain things. If they need to go in and change a rule, there's an entire process for doing that. We can use AI to streamline that process. We have a system where customers will call in with customer support issues. This is actually really common for all of Salesforce. Salesforce has something, I don't know if it's always called Einstein. I don't know if that's something that belongs to them or belongs to us, but it's called Einstein and it's a logic where when the customer calls in or writes in, usually it's an email, but in service industries where it's mostly telephone calls it actually carries the transcript, and the transcript can interpret from there and say, "Okay, this is the person's issue," and give it a tier classification of root cause.
It's grossly inaccurate, but I think that kind of points to the human-in-the-loop element: it only gets smarter if whoever's using that system goes back and double checks and says, "Oh, no, it's not that, it's this."
00:10:20
P6: A lot of times they take for granted that it's just, "It's streamlined my life and my process and it's faster now," and they just kind of roll with it. But we also use a lot of,
they have made Copilot available to everybody. I have thoughts on that. I hate Copilot, but our legal team is using it to streamline writing certain kinds of documentation that's repetitive, of course with gross human oversight.
And because we don't want people just translating things who don't understand the language, it also assigns a confidence rating. And our set confidence is if it's a 95% or above confidence rating you can roll with it.
00:11:43
P6:
If it's below that it needs some human oversight, and if it's below a certain level, like once we hit like 70%, it's something that we would want to send to our translation partner.
00:12:51
Paul: What do you think's been your biggest win or success or efficiency gain lately achieved by using AI tools in your personal work?
P6: I deliver a lot of analytical reporting and I have dashboards set up that, they're set up in such a way that my stakeholders could go in to PowerBI or Domo and they could filter out and see exactly the numbers they need to see. They don't want that. They want me to put it together on a monthly basis and give it to them as a report. And it's a lengthy report. I have found a way to effectively have AI read my dashboard on a given day and compare it to data I pulled the previous 3 weeks, the previous month or whatever and say, "Okay, well, we've seen a spike here and a dip here, and these are things we should look into." So I'm no longer having to dig through all of that.
00:14:01
P6: And that's a simple application, but it saves me hours and hours and hours of work. And it's parlayed into me building, I'm working with our data science team right now to build an AI that will allow us, we can't export any kind of customer contact information. Like if there's a contact string of their email, our email, their email, our email, we can't export that and we can't because it has the potential to contain PII. It becomes a liability for us. So we're working on artificial intelligence that can take those strings and encode them such that it can compare, if it encodes it out to numbers, it can compare certain strings of numbers and say these cases all contain some of the same words and those words point back thematically to similarities that they may have. I created a simple version of this as my thesis at Northwestern. Wildly, it worked. I used it to mine Reddit transcripts.
00:15:15
Paul: Okay.
P6: So I wasn't using verbatims from real people without their consent. I was encoding it and just looking at thematically what is it saying. We're still working on it. It's not, it's been only a few months and one thing I have learned about AI in my experience is it's got to run for a while before it gets good.
Paul: How about let's flip this question. So, what's been the biggest disappointment or failure case you've experienced while trying to apply or use AI tools or process or build a new workflow?
P6: I believe that prompting or building agents is going to be kind of the new venue for UX practitioners. And I think of that because if I build a prompt out like I'd build a persona and build a workflow out the way that I would map a workflow or create a service blueprint. When I build it out that way I get great results. However, the thing that I think is a big failure, and it's been a failure across the board of people I've seen experimenting with AI, is using really clipped prompts.
00:16:51
P6: It works for a little while. It may work beautifully for a hundred inquiries using that prompt, but eventually it starts to drift. And I know that I've had this frustration factor on both my personal use as well as sometimes my use at work, particularly with, I've admitted I'm frustrated with Copilot. They get drifty and they get really drifty and you sit there and you're like, "It's not that hard. Why don't you just do your job? I told you what your job is." And one of the things that it's doing in the back end is it's trying to streamline me. I've given it a complex set of edits and it's like, "Where can I cut corners?" And so that, I would say, is kind of where the disappointment is, that it's hard to create a workflow that replicates every single time consistently without it being long and detailed, saying, "You may not move on until this happens." That's I think the big disappointment, that you don't have the, vibe coding is such a thing right now, but you don't have that kind of usage of AI.
00:18:05
P6: If you do that too much your answers can be all over the place. So I don't know if that necessarily answers your question, but that's a big problem.
Paul: Yeah. No, it does. It does. So I've got some questions here about how AI has changed how you do certain things. You related the example of not having to go back and dig and make that report because people can't be bothered to go into PowerBI or flex their SQL chops if they have any. Are there any other tasks that you do very differently now because of the AI tools available to you? And this could be either work or personal. You can blend the two or skip or stick to one. It's up to you.
P6: There's a lot of little things. Initially when I was dabbling with AI the number one thing I was doing is really dumb stuff like, "How much popcorn do you add to a Stir Crazy popcorn popper?"
00:19:16
P6: That's what my mother loves using it for now.
Paul: What was the answer?
P6: Oh, 1/4 cup, one quarter cup, and 3 tablespoons of neutral oil. It looks it up and it tells me and I don't have to be goofing around and running around in the background. So, some of those really quick questions, I love that. I love not having to dig through the entire internet to find the answer I'm looking for, not needing to look at 50 different resources. A good example is I have Machu Picchu behind me because I'm going to be there next weekend. Yeah, I'm taking a little vacation, super excited about it. But one thing is the last three or four years my vacations have either been in the winter or they've been in cold places. So I haven't been to South America in 20 years. I don't know what to pack.
00:20:23
P6: I've never been to Peru and I've never been at that altitude and there's tons of resources and you get just bogged down of like looking here and looking there and you're like, "I found the trekking poles I want to take but then there's 15 other recommendations." Being able to ask it last night I said, "I need a packing list for Peru. I am going to hike to Machu Picchu, after which I'm hiking to another archaeological site. And then I am renting a car in Lima and driving to the Nazca lines where I'm taking a flyover." And I said, "I need to know what to pack for this. These are my dates." And it comes back and it's like, "Well, here's the weather forecast next week and here's what we recommend for your hiking. And you want to keep your pack under x number of pounds. So this is exactly what you should put in it." And it thought of things that I hadn't even thought of. And asking a question, it really streamlined this process because as I began to kind of look into my response, which I'm still double-checking everything, it had concatenated all of this information from the internet into a single list of like, "Most people find it beneficial to take two pairs of trail pants, a pair of shorts," and you know,
00:21:52
P6: four dry wicking shirts and things like that. And one of the nice things about that is if you get stuck on something and you're like, "Okay, what is a dry, recommend to me a good dry wicking shirt?" It'll find it for you. Or "What is loperamide?" I don't know what it was and it's like, "It's Imodium." These little twiddly questions. It's still the popcorn popper, but it's a lot more elegant than a popcorn popper to sit down and I've got a list. I can print my list out or I can put it in an app so that I can just check my stuff off as I pack. But it's made it really streamlined, and I actually did that this winter because I was headed to Finland and I knew it was going to be freezing. So it looks at all this information and gets to the bottom of it. So you aren't wasting 10, 15 minutes reading through somebody's travel blog only to find out that they were in Finland in mid-June.
00:22:53
P6: So it's not going to work for you.
Paul: Right, and also I imagine having 10 or 15 tabs open and trying to figure out where you were.
P6: Exactly.
Paul: Let's talk about AI and trust. How do you decide whether to trust what AI has given you? This can be at work or at home. Or both.
P6:
I am an advocate of continuous human oversight. I saw a quote from IBM today and it was something to the effect of, a computer cannot be held accountable and therefore it should not make managerial decisions. That applies over a lot of different areas. I think, I'm sure you've read about the United Healthcare stuff where it was making accept/reject determinations that resulted in a massive lawsuit. I am a huge advocate for human in the loop.
00:24:04
P6: Especially when we start to consider the compounding effects of, fraud is at an all-time high. And something I tell people about why [my company] uses AI for fraud is because fraudsters use AI for fraud. I think I wrote in my survey that I filled out for you: something I constantly tell people is the only way to beat the machine is to build the bigger, better, smarter machine. And we live and die by that in finance.
Something that is a major pain point to me is that at a consumer level, we don't necessarily have any insight into how this works for us and how it's a necessary thing. It streamlines logistics. It streamlines fraud detection on down the line. We're not looking at that. We see all of the scam attempts of garbage AI scam attempts where it's just wash, rinse, repeat, and they're contacting a gajillion people to see who will bite. We see what it's doing as far as the bad parts of AI, especially with social media.
00:25:30
P6: And I think that that's an absolutely massive pain point. And from a trust aspect, I think that's the number one reason that people are like, "You can't trust this thing. You can't trust it."
What they don't realize is that you kind of have to. It's already underneath so much that we rely very heavily on.
Paul: Yeah, I identify with that. We had a dog sitter visit us to just be oriented about the house and the dog. And she asked, "So, you don't have a smart doorbell? You don't have any cameras in the house?" I said, "No, we don't." She said, "I thought you and your wife worked in tech." I said, "Yes, we do, which is why we don't have that crap in the house."
00:26:30
Paul: I've just got three or four questions. These are, I think you'll find them interesting. First, I want to talk about how, or if, and if so, how AI use is affecting your approach to solving problems.
P6: I get bogged down in minutia. That is my Achilles heel. I'm transitioning to a new position at [my company] as we speak.
Paul: Congrats.
P6: When I come back from Peru, I got a new job. And one of the things that I've noticed is that they always ask you, "What is your weakness?" And my weakness is definitely, I'm almost overly detail-oriented. That is a blessing and a curse because it means you can really get over-involved in the minutia and lose sight of everything that's out here. I find that when I'm controlling AI well and I'm using it to streamline my work or to help me think through a problem or to do affinity mapping, it's great at affinity mapping, the time for me to use it is when I'm over-involved in one little thread because what it'll do is broaden me out and give me 10 different threads that I might not be looking at.
00:28:04
Paul: So it helps you zoom out?
P6:
I think it helps me zoom out and if I need to zoom back in, helps me zoom in. It has to be accurately prompted to do it. But I really think that that is probably where it benefits me the most: it helps me to see patterns and to see things that I might not otherwise because I'm very close to my work.
Paul: Are you concerned about losing any skills because of using AI, or do you feel like your detail orientation helps prevent that?
P6: Oh, where I get concerned is, I never want to lose my humanity, of course. And some of the skills that concern me is that, collectively, we have this thing that provides us quick answers, provides us different ideas, etc. How much critical thinking and creative thinking are we doing? And what worries me about that is, if you look at socialization, kids socialized 100% differently in the '90s than they do now.
00:29:20
P6: And that is the result of social media. I very firmly believe that it hasn't necessarily been a good thing for them. So what happens when we lose our ability to sit and gnaw on a problem or think creatively about something or think outside the box? What happens when we're only going to the AI for the solution? What does it do to human ingenuity? And that's a big concern of mine.
Paul: Yeah, that actually leads right into my last two questions. What do you think is the most significant possible breakthrough or positive outcome that AI might enable in the coming years? Then, what's your single biggest concern or fear related to increasing use of AI?
P6: I definitely think that, I had to learn R and I had taken a couple of the little courses out there available on R and I was just having a really hard time figuring it out.
00:30:38
P6: So I literally used, it was in my last year of my most recent grad program, I literally used AI to teach me how to do R. And I've used it to learn multiple software platforms at this point, specifically data analytics. Tableau was a big one. Because when I would start to encounter resistance and get to that point where I'm frustrated and I'm going to quit, I have something there where I can say, "Okay, this is the kind of visualization I am trying to make. This is where the data is porting in here and how it's set. And for some reason, I'm pulling up donuts. What is going on?" And it has that ability even from a screenshot to look and say, "Oh, well, you need to move this around." And I think something people need to remember is ask it why. Why do you need to do that?
00:31:38
P6:
It recently sent me down a rabbit hole because I was like, "Okay, what's the difference between Newtonian relativity and Einstein's relativity?" And it starts explaining it. And when I start hitting those barriers that have been there because I'm not a physicist, I can say, "Hey, explain this to me like I'm in eighth grade. Can you use an example? Give me a metaphor for what you're describing here." And the odd thing is coming away with the ability to explain this complex thing but also an interest in it.
But I also worry that with some of the streamlining that it does, does anybody need to really know how to do calculus anymore? You can make the AI do it. These are critical skills, though, and they're skills we should have. We should be able to do algebra. It's a pain. Use the AI if you've got it. But I know a lot of teachers who are expressing frustration because their kids aren't learning some of these foundational things that they need to know.
00:32:57
P6: You need to know how trigonometry works. You can't just get to the right answer.
Paul: That's yeah, that's interesting. Let's go to the light side now. What's your biggest hope or what do you see as the biggest promise of AI in the coming years that it might deliver? Not that it will deliver, but in your optimistic vision?
P6: AI needs to function to make us better and more efficient, and safer. I think it's got so much potential, and some of that's biased because I work in fraud. It has so much potential to complement us in crucial ways, to keep us safe, to keep us protected from the fact that there's so much out there that's, for lack of a better term, bullshit. To keep us safe from all of that, to help us to know when what we're seeing is real and know when what we're seeing is false.
00:34:08
P6:
In a way, we've created an information environment where we need it. We need AI. And I really think that that's the biggest promise of it, is to stop using it as a potential replacement for humans and use it as a way for us to manage this infosphere that we've built ourselves.
P6: We, in order to navigate it, we're going to need the machine. But the one thing about it is that if we don't start taking those steps now, I don't think this is going to go well. I really have a lot of reservations about it.
Paul: Well, that's a great place to stop and I'm going to stop the recording now.
P6: I do want to say one thing before you stop your recording and this is just to protect my company. I work for a credit card processor. You could say that, but I don't want to call out [the company].
Paul: I'm going to de-identify. I'm going to make sure I search the transcript for any mention of the company name and remove it.
P6: Yeah. I realized I've been speaking very transparently and I'm like, they might not appreciate that. Because I'm not a, oh.
Paul: Well, you'll be anonymous and so will they.
P6: Super duper.