P13: Survey Data and Session Summary
Survey Responses
| Question | Response |
|---|---|
| Age | 45-54 |
| Education | Master's degree |
| Current role / position level | Consultant |
| Job title | Consultant |
| Years of professional experience | 16-25 years |
| Organization description | I elevate brands and executive communication through strategic storytelling and high-impact design. |
| Industry | Financials (banking, insurance, investment funds, and consumer finance) |
| Individual AI tools used | Text generation (e.g., creating documents, emails, summaries), Media creation (images, audio, video), Data analysis and synthesis, Workflow automation and process automation, Code generation and completion |
| Organizational AI tools deployed | None of the above |
| AI adoption involvement | My organization has not formally deployed an AI project |
| Biggest work win with AI | While at [former company], one of the most impactful efficiency gains I've achieved with AI tools has been in transforming complex, technical content into executive-ready narratives at scale. In my previous role supporting senior leadership (C-suite), I was often tasked with turning dense materials, such as strategy documents, data reports, and operational updates, into clear, compelling PowerPoint presentations for executive and board-level audiences. This process traditionally required significant time to synthesize content, identify the "so what," and structure a cohesive story. By integrating AI into my workflow, I was able to accelerate this process significantly. I used AI to rapidly distill large volumes of information into key insights, draft structured narratives, and generate initial slide frameworks. This allowed me to shift my focus from content creation to higher-value work: refining the story, strengthening the visual hierarchy, and tailoring messaging for executive impact. The outcome was a meaningful increase in both speed and quality. I reduced turnaround time on presentations while maintaining, and often improving, clarity and strategic alignment. This also enabled me to support a higher volume of stakeholders across multiple departments without compromising the level of polish expected at the executive level. Beyond my core role, I've also applied AI to elevate my personal brand and career strategy. I've used it to refine my portfolio narrative, translate complex and compliance-constrained work into compelling case studies, and sharpen my resume for different roles. This has allowed me to position my work more strategically, especially when I can't publicly share full deliverables. I've also experimented with creating lightweight AI-driven workflows to support my job search. This includes building repeatable prompts and "agent-like" processes to scan for relevant roles, summarize job descriptions, and tailor application materials quickly. As a result, I've significantly reduced the time spent on manual search and customization, while improving alignment between my experience and the roles I pursue. More broadly, AI has become a force multiplier in both my professional output and career management. It doesn't replace my role as a storyteller and strategist, but it amplifies it, allowing me to operate with greater speed, clarity, and intention. |
| Biggest disappointment with AI | Lack of tutorials. Finding good (and reliable) resources |
| Organization's biggest AI success | The time spent performing easy tasks has decreased by 40% |
| Organization's biggest AI challenge | At [former company] I often heard: lack of legislation. |
Background
P13 is a UX design consultant with a financial services background, having spent six years at a major US bank and a prior stint at another large financial institution. At the time of the interview, she was navigating a career transition following an acquisition that dissolved her team. She holds a master's degree, a certification in neuro-linguistic programming, and 16-25 years of professional experience. She is Brazilian-born and US-based, with English as her second language.
P13's entry point into AI was personal: she photographed her refrigerator and asked ChatGPT to act as her personal chef and plan a week of meals. That experiment cascaded rapidly into financial analysis, professional communications, job search automation (daily AI agents delivering 15 targeted job listings by 8 a.m.), portfolio case study conversion, and futures trading research. She moved from ChatGPT to Claude and was actively taking courses through Cursive.io to broaden her AI tool literacy at the time of the interview.
Her professional AI use was shaped by the regulatory environment of consumer finance. Both institutions where she worked firewalled AI tools behind security restrictions, creating a situation where employees at all levels, including VPs, used personal devices and workarounds to access AI, then emailed results to themselves under cover stories. This shadow adoption culture is the session's most distinctive thread.
Key Findings
Shadow Adoption in Regulated Industries
The most vivid finding from this session is the picture of AI adoption happening entirely underground in a heavily regulated financial institution. P13 described an environment where the organization was actively exploring AI but had firewalled all the tools, creating a gap between institutional interest and individual access. The workaround culture that developed was not limited to individual contributors; leadership participated in the same shadow behavior.
"And because we could not use it at work, I was using my personal account on my phone and then I was emailing myself at work, saying 'midnight ideas, insomnia crisis.' So people said, 'Oh my gosh, P13 is having brilliant moments, you know, at night.' But it's like, they're blocking our AI use. But it was funny because most of the VPs were doing the same thing."
This extends the corporate tooling gap theme into territory that other participants have not described as fully: not just individual workarounds, but an organizational culture where the workaround is the norm and everyone, from EAs to EVPs, knows it.
AI as Language Scaffolding
P13 used AI to bridge a specific professional gap: producing executive-level English communications as a non-native speaker. What makes her approach distinctive is the learning arc. She did not settle into permanent reliance on AI for language polish. Instead, she treated AI output as a model to study, progressively internalizing the communication patterns until her own drafts required only minimal correction.
"I learned a lot from ChatGPT, like how should I talk to, how should I write something. So I didn't use it in a sense of, okay, do for me and that's it. But I did in a sense of, okay, do once, do twice, and then after that I always start writing my own things and ask ChatGPT to polish it off, or Claude, and the changes were minimal."
This is both an equalizer story (bridging a language gap for high-stakes professional contexts) and a learning partner story (AI as scaffolding that the user progressively removes). The combination is the clearest example in the dataset of AI facilitating genuine skill acquisition rather than permanent dependency.
A Three-Part Prompting Formula
P13 articulated the most explicit prompting framework in the study: a three-part formula of role assignment, task definition, and constraints. She applies it consistently across domains, from cooking to investing to content creation, and credits it as the primary factor in her AI success.
"Which hat you wearing, what's the task you need to do, and what are the constraints or, you know, whatever background. So that's the three items on my formula, my three pillars that make my use successful."
Where most participants described their techniques anecdotally or as something they arrived at through trial and error, P13 has formalized hers into a named, repeatable structure. Her fridge prompt from the opening of the interview is an example of the formula in practice before she had named it: role (personal chef), task (weekly recipes), constraints (use available ingredients, minimize shopping).
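For readers who want to replicate the technique, P13's three-part formula can be sketched as a reusable prompt template. This is an illustrative sketch, not code from the session; the function name and the exact wording of the assembled prompt are assumptions, modeled on her fridge example.

```python
def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    """Assemble a prompt from P13's three pillars:
    role ("which hat"), task, and constraints/background."""
    lines = [
        f"Today you are my {role}.",  # 1. role assignment
        f"Task: {task}",              # 2. task definition
        "Constraints:",               # 3. constraints / background
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# Her fridge experiment, restated through the formula
prompt = build_prompt(
    role="personal chef",
    task="create easy-to-put-together recipes for the week from the ingredients shown",
    constraints=[
        "keep shopping to a minimum",
        "use all or any of the ingredients, not necessarily everything at once",
    ],
)
print(prompt)
```

The value of formalizing the structure this way is consistency: each of the three slots is filled deliberately, so a missing constraint or an unassigned role becomes visible before the prompt is sent.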
Generational Fault Lines in AI Trust
P13 drew on direct observation across a large team to describe a generational pattern in AI trust. Younger workers (late teens, early 20s) accepted AI output uncritically, while workers in their mid-30s and older brought domain experience that served as a check on AI claims. She framed this not as generational stereotyping but as a pattern grounded in whether someone has enough professional experience to recognize when AI output is wrong.
"I noticed that I would say 30s, mid-30s and older, they had more critical thinking, a little more common sense. The younger, like the early career, like the late teens, early 20s, they're more into, 'No, no, no, let's do this. Let's trust AI and that's it.'"
The Scattered Landscape of AI Learning
P13 named AI learning resource fragmentation as both her biggest disappointment and the biggest gap between what AI can do and what she needs. The problem is not that resources don't exist, but that they are scattered across YouTube, Instagram, LinkedIn, and individual courses with no centralized, authoritative source. She extended this to the job market, noting that different companies in the same industry require different AI tools, making it impossible to know what to learn.
"So I think that's the biggest gap, the lack of standards. We don't have a go-to. We have too many options and it's almost like you have to be like the jack of all trades, the unicorn of AI."
Emerging Themes
| Theme | Description | Key Quote |
|---|---|---|
| Corporate Tooling Gap | Mismatch between official AI tools and what individuals need, leading to shadow IT | "...they were exploring a lot of things in AI but everything was firewalled, so everybody was trying at home but we could not actually try at work." |
| Pervasive Integration | AI adoption spanning many life domains | "Google is no longer my BFF, you know, it's just my acquaintance nowadays." |
| AI as Equalizer | Using AI to bridge knowledge or power gaps | "How am I going to put together the content since English is not my first language? So let's use ChatGPT to polish it off, how to get a tone for the executive level." |
| AI as Learning Partner | Using AI as a personalized tutor for skill acquisition | "I learned a lot from ChatGPT, like how should I talk to, how should I write something." |
| Organizational AI Adoption Challenges | Organizations struggling to find an effective AI path forward | "I was emailing myself at work, saying 'midnight ideas, insomnia crisis.'...most of the VPs were doing the same thing." |
| AI Learning Resource Fragmentation | Frustration with scattered, non-centralized AI learning resources | "We don't have a go-to. We have too many options and it's almost like you have to be like the jack of all trades, the unicorn of AI." |
| Trust Calibration | Deliberate practices for evaluating AI trustworthiness | "A lot of times I compare Claude with ChatGPT and I say, okay, you know, this is wrong." |
| Knowledge Displacement | Concern that AI erodes critical thinking across generations | "The younger...they're like, 'Oh, AI said so, it's the right way.' And you check the older people, they had more experience, like, 'No, might be a better way to do that.'" |
| Disclosure Norms | Emerging standards about when to attribute AI contributions | "Over there a lot of places, if they do images with AI, they put an AI credit...Here, very sporadic." |
| Useful AI Techniques | Specific, replicable prompting strategies | "Which hat you wearing, what's the task you need to do, and what are the constraints...my three pillars." |
| Job Security Anxiety | Fear that AI will eliminate specific professions | "I don't see UX designers surviving in 10 years from now. It's sad that I'm saying this, I mean, I'm passionate about that, but AI is taking over." |
| AI Governance Anxiety | Fear of unchecked AI misuse without enforceable regulation | "There's no rules to punish anybody that's using AI to harm the world...that's my biggest fear, the lack of police per se." |
P13's corporate tooling gap evidence is among the strongest in the dataset. Where other participants described friction or individual workarounds, P13 described an entire organizational culture of shadow AI use. The "midnight ideas" cover story and the acknowledgment that "most of the VPs were doing the same thing" paint a picture of systemic dysfunction driven by the collision between regulatory caution and practical demand.
"...they were exploring a lot of things in AI but everything was firewalled, so everybody was trying at home but we could not actually try at work."
P13's pervasive integration spans cooking, professional communications, job search agents, portfolio case study conversion, futures trading, and ongoing self-education. The "Google is no longer my BFF" line captures a wholesale displacement of a prior default tool. She built autonomous agents for daily job delivery and treats AI as foundational infrastructure across her life.
"Then I moved to the financial part of it. So okay, I started testing areas and I was like, okay, this is better than me, you know, that's going to be my new BFF. You know, Google is no longer my BFF, you know, it's just my acquaintance nowadays."
P13's AI as equalizer and AI as learning partner themes are intertwined. She used AI to bridge a language gap when writing executive-level communications, but the arc was scaffolded: she studied AI output to internalize patterns, then progressively wrote her own drafts with decreasing AI correction needed. This is the clearest example in the dataset of AI facilitating genuine skill acquisition rather than creating permanent dependency.
"I didn't use it in a sense of, okay, do for me and that's it. But I did in a sense of, okay, do once, do twice, and then after that I always start writing my own things and ask ChatGPT to polish it off, or Claude, and the changes were minimal."
P13's trust calibration approach combines common-sense sanity checks with cross-model comparison. She runs the same task through Claude and ChatGPT to triangulate, and catches basic mathematical errors through domain intuition. Notably, she places responsibility for AI success squarely on the user rather than on the tool.
"At the end of the day it's not about the AI, it's how can you use it? How can you leverage?"
P13's knowledge displacement contribution is a generational framing grounded in direct observation across a large team. She observed younger workers accepting AI output uncritically while older workers brought domain experience that served as a check. The implication is that if the younger cohort never develops that critical sense independently, the displacement compounds over time.
"I noticed that the younger generation...they're like, 'Oh, AI said so, it's the right way.' And you check the older people, they had more experience, like, 'No, might be a better way to do that.'"
P13 brings a unique cross-national lens to disclosure norms. She compared Brazilian practices (where AI credit on images and movies is becoming standard) with US practices (sporadic at best). She also shifted the framing from voluntary disclosure to regulatory necessity, citing deepfakes and elections as contexts where the absence of rules creates real harm.
"Over there a lot of places, if they do images with AI, they put an AI credit, you know, on the image. So they are disclosing. Movies, anything that's done with AI. Here, very sporadic."
P13's useful AI techniques contribution is the most explicitly formalized prompting framework in the study: role, task, constraints. She applies it across domains and credits it as the primary factor in her AI success.
"Today you are my financial advisor. You're going to select for me the top 10 stocks and I want them to be in the logistic industry. So I give those specifics."
P13's job security anxiety is among the most direct in the dataset. She is a UX design consultant saying her own profession will not survive the decade. The pivot she identifies is not toward a different discipline but toward a meta-skill: strategic thinking as the only durable differentiator.
"I don't see UX designers surviving in 10 years from now. It's sad that I'm saying this, I mean, I'm passionate about that, but AI is taking over."
P13 introduced AI governance anxiety as a new theme, distinct from disclosure norms. Where disclosure is about attribution practices, governance anxiety is about the systemic absence of enforceable rules and penalties for AI misuse. Her concern is not about AI failing but about there being no mechanism to punish those who use AI to cause harm.
"There's no rules to punish anybody that's using AI to harm the world...when you're using AI to cause harm and there's no rule to punish those people, there's no way to stop them."
P13 also introduced AI learning resource fragmentation. She named it as both her biggest disappointment and the biggest gap between what AI can do and what she needs. The frustration is not that resources don't exist but that they're scattered across platforms with no centralized, authoritative source. She extended this to the job market, where different companies in the same industry require different AI tools.
"Information about AI tools is always scattered...where can I find the tips, the tricks, you know, the dos and don'ts?"
Interview Transcript
00:00:00
Paul: I'd like to start off by having you tell me the story of your first "oh wow" moment with AI. So, what was going on that made you try AI and what happened that made the light bulb turn on for you?
P13: So that actually, I used to work at [former company] and I started as just a presentation specialist putting together all the PowerPoints for the senior leadership and then maybe a year and a half into that I got moved to the Chief of Staff team for one of the EVPs that was data and analytics, and then became decisions and analytics, and they started talking about ChatGPT and how [former company] was getting involved with AI. And one of the VPs that I was supporting, Ragu, he was like, to me, the genius in AI, everything, you know, it's that kind of person you look and said, "Oh my gosh." And when they said, "Ragu, where should I go?" It's like, "Well you start playing with ChatGPT, look for Google, some classes." And then I start like dipping a little bit and then my first experiment was, okay, let me, in my personal, let's start with the personal first because [former company] was kind of funny, they were exploring a lot of things in AI but everything was like firewalled so everybody was trying at home but we could not actually try. It was kind of, I never understood that whole rationale behind.
00:01:34
P13: So my first win was I took a picture of my fridge and I gave a prompt saying, "Today, you're my personal chef, create for me easy to put together recipes for the week and keep shopping at minimum. I like this, this and this. You are allowed to use all or any of the ingredients, not necessarily everything at once." So, it was very descriptive of what I needed to do, what kind of task, and then I was like, whoa. I got out the whole menu for like four days and I was like, I like that. Then I was like, okay, I'm going to test my pantry. So I went there and I did, okay, now I need something and now I'm trying to do other things. Then I moved to the financial part of it. So okay, I started testing areas and I was like, okay, this is better than me, you know, that's going to be my new BFF. You know, Google is no longer my BFF, you know, it's just my acquaintance nowadays.
00:02:44
P13: And then I started using ChatGPT and from there I moved into Claude. And because of this then I was like, okay, how can I do this on my professional side? Because one of the great things that I did, since last time we connected, I took a certification in neuro-linguistic programming. So I was doing mentoring and coaching and I was running the DEI council and the mentorship program for [former company] for our business unit. So I was like, okay, how am I going to put together the content since English is not my first language? So let's use ChatGPT to polish it off, how to get a tone for the executive level. So that, it was funny because I learned a lot from ChatGPT, like how should I talk to, how should I write something. So I didn't use it in a sense of, okay, do for me and that's it. But I did in a sense of, okay, do once, do twice, and then after that I always start writing my own things and ask ChatGPT to polish it off, or Claude, and the changes were minimal.
00:03:57
P13: So that was kind of helping the background in my English, like the writing skills, but also kind of making it easy and faster. Okay, that's the communication that I need to send for all the mentees for this week, what they need to do or not. So just give the bullet points and ask them to create. So I start moving like that. And because we could not use this at work, I was doing my personal on my phone and then I was emailing myself at work, said "midnight ideas, insomnia crisis." So people said, "Oh my gosh, P13 is having brilliant moments, you know, at night." But it's like, they're blocking. But it was funny because most of the VPs were doing the same thing.
Paul: Oh, that's interesting. I've talked to several people who said the same thing about using a tool that wasn't approved yet and having to do it on the side of the desk, emailing themselves and things like that.
P13: Yeah, it was kind of crazy because even like with UX they allowed us to use Balsamiq and something else but we could not use Adobe XD. So I had the entire Adobe suite but XD was not safe enough. So it was kind of back and forth. But with the AI that's how we started, you know, ChatGPT. [Current company] acquired [former company] last year. So that was in May and I knew two weeks before because my EVP was one of the first ones to be like, go. And first one is, we closed on the 18th. They had the open announcement to the company that was Sunday, announcement on Monday for everybody, a town hall on Tuesday. Thursday he was out of the door. That was like, and then, so we knew that, you know, we're going to move to a team that already had a whole chief of staff set up. So our team knew there was a matter of time for them to let us go. So what I did, I start improving my skills and I create agents. So okay, now I did, perform my daily job search.
00:06:07
P13: Those are the parameters, deliver for me by 8 a.m. every day the top 15 jobs. So that's what I start doing, you know, find all the blind spots. Now I'm updating my portfolio because everything that I have designed, not just for [former company] but for [previous employer], I cannot publish because the whole confidentiality. I do have the hard copy. So now I'm converting my pieces into case studies and trying to find a way around, because when you submit your portfolio for review, if they don't see the images they automatically disqualify you. So it's like, okay, a site cannot have the images but if I don't have the images I don't get the job. So I started doing those day searches and project management too. So that's how that started and I'm loving it. I'm taking also, this past weekend I got, it's a very basic thing but it's been helping me a lot, it's from Cursive.io. So it's a very basic course for all those most used AI tools, so Lovable, Claude, Midjourney, little classes that teach me the basics so I know a little glimpse of what I can do with each. So that's what I've been up to now.
00:07:33
Paul: What do you think's been your biggest win with AI in your work life so far? And on the flip side, what's been your biggest disappointment or surprising failure?
P13: The biggest win is the time saving.
Paul: Tell me more about that.
P13: So for me it's like, some tasks that are more time consuming, yes, it's saving me. Like for example, okay, where to go to find the job, how many places. Okay, it's fine for me. Give me the list, you know, like doing the filter. Or you know, like even for personal things like a menu or anything. So I don't have to think, not in a sense I don't have to think because it's hard to think, but it's the time consuming, so we can allocate the time for other things. So to me that's the biggest win. I've been doing well. I'm looking for a new job. I've been doing day trading and I'm operating futures market. So it's like, okay, how does that work in a prop firm, how does this, so it's almost like my right hand. So that's my win.
P13: But what is very disappointing, and I think that's a common agreement with everybody that I talk to, information about AI tools is always scattered. So you don't have like, say, I'm still trying to learn Figma, okay, and I go there and I start like, okay, and then I go someplace else. And it's the same thing when I say, okay, where is a tool for ChatGPT, where can I find the tips, the tricks, you know, the dos and don'ts? Or what are the top skills, the top AI tools that people are using? Because each company, when they look for jobs, they have different tools they're using for AI. So where do I go, where do I learn, where do you know, are those actual resources? I feel everything is so scattered. Sometimes you find stuff on YouTube, sometimes on Instagram or LinkedIn, or, you know, that's the biggest blocker that I have.
Paul: I'm thinking through how organizations have implemented AI across the organization and you mentioned the fact that you couldn't use, I think it was ChatGPT. Can you tell me more about what you saw from your perspective about AI adoption and how the organization handled it?
P13: Okay. So, one problem is when you work in the financial industry, you know, banking regulators are super strict. So, we have the feds, they, you know, the feds coming to do all the audits, you know, every year.
00:10:28
P13: So because of that the information must be part of a highly confidential level. Even for me that never had a client interface, I had to take money courses over there. So it becomes like, well, but I don't deal with the public. I'm not a customer service. I don't care. You're a [former company] employee. You need to know about the money laundering. You need to know about everything. So when it comes to, and [previous employer] was the same thing. [Previous employer] was a little crazier in a sense of, not, they didn't have AI but they had a team that, first of all, you could not say a couple words that they found offensive when you're talking on chat. You know, if somebody said "damn it," the manager got a notification that somebody said a bad word, and the word was "damn it." But because something went bad, nobody was cursing anybody. But they had this filter and they had a dedicated team to read everything that was being said through their, I think it was a Slack, whatever they were using at the time.
00:11:40
P13: So [previous employer] was more restricted than [former company] on that level. But they're very protective, make sure that the data, you know, and no secret has been leaked. So I was like, okay, I cannot tell you that we have 13 billion in revenue and then 17, no, you know, nobody can say that until they get published. But even though, that's, I think that's the biggest issue in financial and in healthcare because you need to protect the data. But other industries I feel that they're a little more relaxed.
Paul: This is just an aside. I'm wondering if the filter is set up for Portuguese or Spanish swear words.
P13: For Spanish? Yes. For Portuguese? No. Because I said something in Portuguese that was equivalent and they didn't pick it up. But in Spanish, yes. And in Polish they had.
00:14:05
Paul: Everyone's had this happen where the AI comes out with something that you know is wrong. And my question is, how do you decide whether to trust what AI gives you? What are your detectors? What tips you off that something might be wrong?
P13: Well it's what happens all the time because AI is a tool and a tool that's based on algorithms. So any wrong command, any wrong prompt is going to trigger a not so accurate response. So normally when I think about like, for example, my investments or some of the accounts, I was like, well 1 + 1 equals, why are you giving me 4.2? What's irrational. And a lot of times I compare Claude with ChatGPT and I say, okay, you know, this is wrong, or whatever the situation. And I caught it a lot of times. Say I give a table for you to tell me what's going on and you're not reading the table properly. It's like, okay, do your job as you should do. And my response, because there is, so you have at the end of the day it's not about the AI, it's how can you use it? How can you leverage?
Paul: Do different roles, people in different types of roles, trust AI differently in your experience?
P13: They do. I think one thing that I noticed is the older you get, the more skeptical you are. I noticed that the younger generation, and again not being judgmental, but I was working with the millennials, the new alpha, and those generations, I cannot keep track of which, whatever name they are now, but they're like, "Oh, AI said so, it's the right way." And you check the older people, they had more experience, like, "No, might be a better way to do that." So I noticed that I would say 30s, mid-30s and older, they had more critical thinking, a little more common sense. The younger, like the early career, like the late teens, early 20s, they're more into, "No, no, no, let's do this. Let's trust AI and that's it." So, as a whole, I see the biggest thing with the age group.
00:16:58
P13: It's very unusual to find the other way around, you know, the pattern is pretty clear. And also there's a little more resistance for the older people to learn AI.
Paul: I was going to ask you about that, about have you encountered people who avoid using AI or are resistant to it.
P13: Yes. Because, like, so for example, I'm going to go back to [my time at former company] because it was my last six years there and I had a lot of experience. So being on the chief of staff, my EVP was only three years older than me. So I'm 52, he's 55. My director, she was 40, she was five maybe ten years younger than me. And my manager, she's mid-20s, but she got promoted as a manager because they needed to get the whole diversity thing and then she was forced to grow because you need to make sure that it cannot be just pure white people there. But then each VP, they had their own assistant or, you know, their own EAs. And I had EAs in their 30s and I had ones close to their 60s, close to retirement.
00:18:13
P13: Some of the ones close to retirement, only one that was the Susie, that was from the marketing VP, that was the craziest lady. She was 15 at heart. She would come up with the craziest ideas. Amazing. So, okay, what didn't you learn? But the other ones, like, "Oh, P13, do you think it's safe?" "Oh, P13, this is so complicated." "P13, how do I do this?" Even to save a PDF or how can I sign a PDF. So I'm talking about somebody in like the late 40s up to like 50s, most of those admins, they had some difficulty. Maybe one or two was like, no, I need to learn this, I want to be up to speed. And the younger ones were less resistant, you know, they didn't see much because, and again I think you grow up with certain tools. I also noticed that, again not being judgmental, but I noticed a pattern that the people from India, they're a little way more techy in the technical part, you know, because in India you have to be successful. And I have great friends from India that I made at [former company], and either you're a doctor or in the data science, that's the only two professions they are respected nowadays in India. So that's why, and they finish school over there, they come here for the Masters, and that's how they bring their family.
00:19:42
P13: So they are more in the tech area, the whole technical part, I would say the back end. But with the Asian people, because we were working a lot with Shanghai, we had an office in Shanghai, they were more like the front end. More like, okay, how do we do this; it was a different kind of technology. I would say the Indians are more like, okay, you have to do this code and this code for ChatGPT, and the others say, oh no, to make it pretty we just do this, just ask for it. So it's a different approach. But I noticed that by country, by geographic area, there are different focuses, which is interesting.
Paul: Are you seeing any norms or unwritten rules forming about when people disclose that they've used AI for something?
P13: In some places, yes. I just came back from Brazil; I was there for a whole month with my mom. And over there, in a lot of places, if they make images with AI, they put an AI credit on the image. So they are disclosing. Movies, anything that's done with AI. Here, I see that only sporadically. But I feel that at some point we need to have some norms, some rules. Because of the deepfakes, you know; I mean, we have elections coming up here, we have elections in Brazil. There are so many things AI can do to damage. It needs somehow to have some sort of rule, some kind of criteria, so that, okay, you can only do A if you do B; otherwise it's going to be no man's land.
Paul: My next question is, what's the most useful technique that you've developed when it comes to using AI? What's the biggest efficiency gain or technique or method that you found just works really well when you're incorporating AI into something?
P13: Well, it's the little formula that I have. First, who you are. So pretty much: you're my personal chef, you're my investor, you're my health advisor. Which role, which hat are you wearing? Second, what do I need you to do?
00:22:21
P13: And third, what tools and inputs I'm giving you. So pretty much I fill in those three bullets. Okay, today you are my financial advisor; you're going to select the top 10 stocks for me, and I want them in the logistics industry. I give those specifics. Or, today you're my content creator; I'm creating this email for this audience and it needs to communicate this message. So: which hat are you wearing, what's the task you need to do, and what are the constraints or background. Those are the three items in my formula, my three pillars that make my use successful.
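P13's three-pillar formula (role, task, constraints) can be sketched as a small prompt-template helper. This is an illustrative sketch only; the function name `build_prompt` and the exact wording are assumptions, not something P13 describes:

```python
def build_prompt(role: str, task: str, context: str) -> str:
    """Assemble a prompt from three pillars: which 'hat' the AI
    wears (role), what it needs to do (task), and the constraints
    or background it is given (context). Names are illustrative."""
    return (
        f"You are my {role}.\n"
        f"Task: {task}\n"
        f"Constraints and background: {context}"
    )

# Example using P13's financial-advisor scenario from the interview
prompt = build_prompt(
    role="financial advisor",
    task="select the top 10 stocks for me",
    context="only stocks in the logistics industry",
)
print(prompt)
```

Keeping the three pillars as separate parameters makes it easy to swap the "hat" (content creator, health advisor) while reusing the same structure.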
Paul: I want to zoom out for a second here and ask you, how does this increasing presence of AI in the world, in all aspects of life, how does that make you feel?
P13: The question is not the presence of AI. The question is who's using AI and what they're using it for. AI is a great tool, like any technology. Like a computer, you can use it for good or you can use it for bad. Think about it: you can create amazing videos with AI to educate people, or you can create fake news to destroy people. So when I see AI, I see the future. There's no way back. It's kind of like, once you see it, you can't unsee it. That's game over. AI is the future, no way back. The question is what kind of guidelines we're going to have to make sure it doesn't fall into the wrong hands. That's the main concern: who is going to be handling AI in the future.
Paul: So if you had to define your biggest hope and the biggest promise of AI, and what's your biggest fear, how would you describe those?
00:24:54
P13: I would say the biggest fear is no guidelines. There are no rules to punish anybody who's using AI to harm the world. To start a war, to contaminate food, whatever; when people use AI to cause harm and there's no rule to punish them, there's no way to stop them. So that's my biggest fear, the lack of police, per se.
P13: But the biggest hope, I can see in the medical field. I think AI can help scientists figure out so many things, cures for so many diseases; it can speed up research, maybe surgeries. The biggest hope for me, and I think the biggest win, will be the medical field, if it's used properly. But I'm not saying our industry.
P13: In our industry, unless you know how to use it, I don't see UX designers surviving 10 years from now. It's sad that I'm saying this; I mean, I'm passionate about it, but AI is taking over. Anybody with strategic thinking can take over anything. You can use any tool to do graphic design, UX design; anything that was done by a human before, as far as creativity goes, can be done by AI.
00:26:29
P13: As long as you know what you're asking, as long as you know what prompts you're giving, you can do anything.
Paul: What do you think the biggest gap is between what AI can do for you right now and what you actually need it to do?
P13: The biggest gap, for me personally, is the lack of resources. I mean, I know the resources are there, but where are they? I think the biggest issue is that everything is scattered. All the information is scattered, but that's one thing. Another thing is we have way too many tools doing the same thing. Back in the day, for example, we had Adobe XD and maybe, what else, Balsamiq, or InVision. That's it. And now we have Figma and this tool and that tool, so many tools. And which tool is best? We don't know. And each industry requires different tools.
00:27:40
P13: And I've seen this on job posts: the tools required are different within the same industry, at different companies. There's no standard, like, this is the standard for financial, or this is the standard for healthcare. No. I was doing, say, presentations for education, and each company asks for a different AI tool. So I think that's the biggest gap, the lack of standards. We don't have a go-to. We have too many options, and it's almost like you have to be a jack of all trades, the unicorn of AI.
Paul: That's a good place to stop. I'm going to stop the recording now and we can wrap up.