Explore the Data
Search across all annotated evidence by keyword, theme, industry, or participant role.
Showing 220 passages across 15 participants and 34 themes
“I don't like hype and I saw all the hype... I'm an early adopter. I'm the first person with everything. ... When it happened with AI, there was too much hype and people making statements that are unsubstantiated.”
“As much as I said that I wasn't adopting AI, I think I was doing it more than I thought I was doing it.”
“I thought it was they asked me to do too much and I kept saying as much. ... I just delivered that for 25 to 30 tasks that I did.”
“I hate the hallucinations because it seems like there's no excuse for a lot of them, but it happens anyway.”
“I had to use AI to prove to people that what I said was on point.”
“I took the survey question by question went into Claude and asked Claude to critique and Claude doesn't know and Claude doesn't have any horses in the race.”
“It's a personal policy with me because I just believe in being transparent. ... I think it's plagiaristic if you don't.”
“We serve as coaches. We serve as supervisors. We evaluate the agentic risk. That's the expert's job.”
“I will never 100% trust AI ever because I don't think it will earn that. It's hard. I say it's an oxymoron.”
“I think the thing I'm getting better at is responding to terrible dysfunctional expectations of stakeholders because I can do things faster.”
“I will never let that dumbing down of self that everybody says the risk of AI is. I'll never let that happen because of the way that I maintain myself.”
“I talked to a person a week ago who was let go because of their job, because of AI, only to be rehired because they found out that they were wrong to let the people go because they found out AI couldn't do all the things.”
“I would copy that and paste it into ChatGPT and say, "What does this mean in English?" basically and it would kind of dumb it down for me so I could understand better.”
“The biggest disappointment would be like when it's I'll say confidently wrong. Like it thinks that it's right and then it starts telling you to do things or that these things are facts.”
“I would say that it has to do with how important what you're asking it is. So I think it's a great first step.”
“It helps bridge the gap between organizations and people... it helps me not feel like I get railroaded by insurance claims and stuff as easily.”
“I haven't completely offloaded any tasks for it. I pretty much just use it to augment what I do.”
“Use it as a learning tool and not a do it for me tool. If I can tell that you've clearly just thrown it in there and said, "Do this for me," then you kind of lose that personal credibility in my eyes.”
“My ability to spell certain things has kind of went down because I'm just so used to, oh, okay, it sees what I'm trying to do and it fixes it for me.”
“I was kind of looking at maybe going back to school to learn an extra skill that maybe is more human centric because that human touch I think is going to be more of what keeps people employed.”
“Google AI Studio has been my favorite tool to use. That's my primary tool, but it's all side of desk. I literally when this came out, they didn't even roll it out to everybody... So, I would sit here and then my actual AI computer was on my left.”
“We had two weeks we lost all the vendors like all of them like 45% reduction but just in the design team. So we were still supporting 14 delivery pods and we're like oh crap how are we going to keep the ship right?”
“Well, that's where it gets tricky because the organization bought an AI company and that company is growing rapidly, but we don't have clear line of sight to what everything they're doing. So, there's a lot of back office things they have done...And then there's just been a huge push to build agents. We just don't have line of sight of where that's happening. It's scary because we haven't hired any design for the last two years other than the last the AVP we just got, but that AI company within us has grown to 130 people. So, they're almost double bigger than our design team...”
“There's no royalty model. We know that a lot of the models were trained on other people's intellectual property... there's no compensation for it.”
“I started taking those newsletters and corporate communications and feeding them into Copilot and then had Copilot build me how to write like this executive. What are their key points? How do they say their things?”
“My biggest fear is that we're not replacing the apprentice level people like and they still need a fundamental of whatever their craft is without AI... who's going to watch the watchers who will know that something is wrong because they never did it.”
“Am I inadvertently working myself out of a job? And then if that's the case, then what am I going to make that I can commoditize to stay afloat.”
“I'm on one week sprint cycles here at [organization] on my product, which makes you want to really pant. So I have to use AI to at least help me get drafts or clean up a report.”
“They are throwing every single tool our way. And I feel bad for our designers because they have even more. Like for example Linear and then there's Cursor and then we have all the Figma Make.”
“We already went from Rive mania to Figma Make mania and now we're on like Cursor mania. It seems like there is always a new tool and you have to almost use all of them to feel comfortable.”
“I think they want us to be like down 10 hours of work a week with these tools by the end of the year.”
“My designers are not getting something as thorough as this. We've done little mini pilot sessions before we even get to the training sessions as a group. So we're super lucky because my designers are literally just being told by their manager like, "Here's Figma. Go play with it."”
“We were doing plenty of studies where it's not actually reading all seven. It might have only referenced five. So that's been an issue for us is trusting to be like, okay, did it actually analyze all the calls?”
“Right off the bat, the first time I used this like a month ago, it hallucinated a whole quote.”
“She can spin anything to it being okay and that we don't have to improve it. So that's really hard for me because I know she's coming in here and saying like what are the wins and then if she doesn't find win she's going to like twist it and then I'm like wait did you check that? And she's like I don't have time to check it.”
“They're saying, "Oh, we want 50% of code to be written by AI." And I have some of my locally located developers who are like, I already spend so much time cleaning up this low-quality code from our overseas colleagues and now I have even crappier code in my AI that they have to review and they're like, it just would have been faster.”
“So my accessibility and content designers are a little concerned that as accessibility builds this really cool thing in Cursor to remind everybody to be accessible that they've trained this agent to do they're like okay well are they still going to have me in three years or will there just be less of us?”
“People are even arguing that AI moderated researchers are better than human researchers, which I'm like, I don't know about that.”
“A lot of them now are making them look really cool and have vibe coding but I don't think they ever go back in and add anything just whatever they prompted and told our customers and there's parts where it says supposed to have secondary research and customer research and it's just making up pain points in there and so everything looks put together and there's a lot of words on a page but nobody's still going in for that second layer.”
“Everybody's like, 'Looks cool.' And then I'm like, 'No, but read it.' Does any of this [make sense]?”
“If everybody was on a scale of who it is, research is more on I'm going to do my second pass. We're probably on the highest end and then product managers are way over here.”
“I live in [midwest US city] and there is a data center getting put in [city], which is where [organization] headquarters is, and there's a data center being put in [neighboring city], which is technically the city I live in. And so we're just seeing all these horror stories of people running out of water and we know they're coming for the Midwest because of our water and it makes us worried that it's all some big dumb bubble.”
“My husband works [at] our electrical company as a lineman. So he already sees how stressed out the grid is from people just flicking on their air conditioning in the summer. And a lot of these they just get a free pass at a lot of our utilities whether it be water and power without building their own substation because that would cost way too much money.”
“Our friend who does the concrete for the [nearby city] data center [said] that there was a big push to get that closed, but there's just not very many laws to protect the rights of what people want. They're already building it in a way that they're like, 'Well, we can turn this into a warehouse or maybe this would just be an Amazon warehouse afterwards.' So they're already kind of predicting like the people that are building it are already like this bubble might pop.”
“My bonus, my performance, is attached to how much I use AI at work. So I have to [use it]... if I don't I might not get my bonus. So at first before I was really figuring out how to do it in my workflow. I was just asking it for my grocery list and other dumb stuff and I felt bad because everybody tells you one search is dumping out a water bottle and I'm like oh no I have to do so many searches a day or else I don't get my bonus.”
“I guess I don't know if miserable failure is accurate but not far from that. Essentially figured out that what we were doing wasn't working and that it wasn't getting us where we wanted to be and the cost benefit was just not even close to being there.”
“The way I describe it is that for a research activity that would take a researcher alone five days to complete, if you look at it with AI alone, it might take a day, but in order to do a good job of it, the necessary human AI interaction, you might get closer to three days.”
“So, I think, well, I guess disappointment is maybe the right word. So, it was kind of discovering that, maybe not unexpectedly, I guess the bar was pretty low, but discovering that Dovetail still needed a lot of babysitting to get a lot of results. We had to go in and we realized that the transcripts had a lot of misattributions. I mean there were just a lot of things that need to be cleaned up to make it useful beforehand. That allowing Dovetail to kind of create its own tags and apply those was not sufficient. We still needed to do the diligence to go in and apply our own tags to make it more meaningful and real world context.”
“not just relying on the transcripts alone, but introducing moderator notes to the analysis as well to help get that real world and actual findings.”
“I think I've kind of been using GPT for working through kind of major life events. So like considering purchasing a house for example, kind of working through that and doing tradeoffs and running scenarios and what to consider, and thinking of it as sort of a, I don't want to say co-pilot but the thing is it's kind of a sounding board that's very knowledgeable about a whole lot of topics. Particularly in a context where it involves major life decisions you don't trust it”
“completely, but you're able to cover a whole lot of ground, cover a whole lot of topics, get a lot of insights and things that you hadn't really considered brought into the conversation.”
“that there were many conversations where we were working on projects together,”
“most internal things, and that people would contribute their own thoughts but it was really AI assisted thinking going into it and we didn't really think about it, we didn't set any policies about that or any guidelines around discussions. I think people just are kind of freely admitting is like the AI and I put this together and our thinking is more, I think it's almost along the lines of making an attribution with a quote that you use and it's not completely my own thinking but it doesn't diminish the quality of it just because of that.”
“I think for the most part those came from say communications from on high. Kind of the organizational level communications that would go out that you could tell there's no thought other than a general direction of I want to communicate this to a whole lot of people in this particular organization and do it. And I think the takeaway I get from that is just like okay, I think it kind of diminishes the impact of the message going forward, is like if it becomes apparent that there is very little of your own thought other than a general direction then it's just, can't really ignore it necessarily but it doesn't have the same impact.”
“We had product that was making recommendations and we're kind of proving on the results that were presented by the AI to the participants in testing and that was one thing that just came out as huge, that without some insight into where the AI was coming up with that output, and having some indication like just calling out these are the preferences or I'm getting it from our discussion about X Y and Z, without that, particularly in an enterprise context when decisions can be costly and have risk associated with them,”
“it was a clear message from the participants that there's no way that they would rely on those outputs without some insight into where they were coming from.”
“I don't know if you recall, there's an old George Carlin routine that, you'd be talking with someone that sounds like they really know what they're talking about for a while and you're like, "Yeah, yeah, yeah, go on." And then there's this, he's full of b.s., I think I've encountered that with AI a few times.”
“Yeah, I mean I think in some cases if I'm not too far down a path I can just go back and kind of confirm but then also kind of reset the chat saying like it's getting off topic, I want to focus more on X Y and Z and I'd like to have it based on these particular types of resources and just kind of pull it pull it back in focus a little.”
“I've been looking out for, I guess, the effect that people discuss about how it kind of detracts from your own thinking or your own creativity to rely on AI to produce outcomes. And I don't think that's happened to me. And I think if it indeed has not happened, I think it may have something to do with how I see the AI interactions.”
“It's not sort of the replacement, but as an assistant, kind of a sounding board, if you will.”
“I mean there's already great concerns about replacement. And I think the people who are actually hands-on with the work kind of understand that it's not there at least not yet to do kind of full replacement.”
“Whether or not leadership understands that I don't know. So I suspect what's going to happen is there's going to be a dramatic overreaction to the introduction of AI. It's going to make a lot of changes to organizations that probably shouldn't occur. And at some point, we'll probably have a massive overcorrection.”
“I don't know if I actually put a finger on the fact that what I was using is AI, but I know that it has been underneath a lot of services and things that I used probably before I moved into an AI-forward area of my career.”
“It's grossly inaccurate, but I think that kind of points to the human-in-the-loop element: it only gets smarter if whoever's using that system goes back and double checks and says, "Oh, no, it's not that, it's this."”
“they have made Copilot available to everybody. I have thoughts on that. I hate Copilot, but our legal team is using it to streamline writing certain kinds of documentation that's repetitive, of course with gross human oversight.”
“And because we don't want people just translating things who don't understand the language, it also assigns a confidence rating. And our set confidence is if it's a 95% or above confidence rating you can roll with it.”
“If it's below that it needs some human oversight, and if it's below a certain level, like once we hit like 70%, it's something that we would want to send to our translation partner.”
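The routing policy described in the two passages above can be sketched as a small function. The 95% and 70% thresholds come from the quotes themselves; the function name, return labels, and the assumption that exactly 70% already routes to the translation partner are illustrative, not from the source.

```python
def route_translation(confidence: float) -> str:
    """Route a machine translation by its confidence score.

    Per the policy described in the quotes (labels are hypothetical):
    - 95% or above: use the translation as-is
    - above 70% but below 95%: needs human oversight
    - 70% or below: send to the external translation partner
    """
    if confidence >= 0.95:
        return "use as-is"
    elif confidence > 0.70:
        return "human oversight"
    else:
        return "translation partner"
```

A sketch like this makes the hand-off boundaries explicit, which is the point the participant is making: the score does not replace the human, it decides which human sees the output.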
“It works for a little while. It may work beautifully for a hundred inquiries using that prompt, but eventually it starts to drift. And I know that I've had this frustration factor on both my personal use as well as sometimes my use at work, particularly with, I've admitted I'm frustrated with Copilot. They get drifty and they get really drifty and you sit there and you're like, "It's not that hard. Why don't you just do your job? I told you what your job is." And one of the things that it's doing in the back end is it's trying to streamline me. Given it a complex set of edits, like, "Where can I cut corners?" And so I would say is kind of where the disappointment is, that it's hard to create a workflow that replicates every single time consistently without it being long and detailed, saying, "You may not move on until this happens." That's I think the big disappointment, that you don't have the, vibe coding is such a thing right now, but you don't have that kind of usage of AI.”
“I am an advocate of continuous human oversight. I saw a quote from IBM today and it was something to the effect of, a computer cannot be held accountable and therefore it should not make managerial decisions. That applies over a lot of different areas. I think, I'm sure you've read about the United Healthcare stuff where it was making accept/reject determinations that resulted in a massive lawsuit. I am a huge advocate for human in the loop.”
“Something that is a major pain point to me is that at a consumer level, we don't necessarily have any insight into how this works for us and how it's a necessary thing. It streamlines logistics. It streamlines fraud detection on down the line. We're not looking at that. We see all of the scam attempts of garbage AI scam attempts where it's just wash, rinse, repeat, and they're contacting a gajillion people to see who will bite. We see what it's doing as far as the bad parts of AI, especially with social media.”
“What they don't realize is that you kind of have to. It's already underneath so much that we rely very heavily on.”
“When I come back from Peru, I got a new job. And one of the things that I've noticed is that they always ask you, "What is your weakness?" And my weakness is definitely, I'm almost overly detail-oriented. That is a blessing and a curse because it means you can really get over-involved in the minutia and lose sight of everything that's out here. I find that when I'm controlling AI well and I'm using it to streamline my work or to help me think through a problem or to do affinity mapping, it's great at affinity mapping, the time for me to use it is when I'm over-involved in one little thread because what it'll do is broaden me out and give me 10 different threads that I might not be looking at.”
“I think it helps me zoom out and if I need to zoom back in, helps me zoom in. It has to be accurately prompted to do it. But I really think that that is probably where it benefits me the most: it helps me to see patterns and to see things that I might not otherwise because I'm very close to my work.”
“And that is the result of social media. I very firmly believe that it hasn't necessarily been a good thing for them. So what happens when we lose our ability to sit and gnaw on a problem or think creatively about something or think outside the box? What happens when we're only going to the AI for the solution? What does it do to human ingenuity? And that's a big concern of mine.”
“So I literally used, it was in my last year of my most recent grad program, I literally used AI to teach me how to do R. And I've used it to learn multiple software platforms at this point, specifically data analytics. Tableau was a big one. Because when I would start to encounter resistance and get to that point where I'm frustrated and I'm going to quit, I have something there where I can say, "Okay, this is the kind of visualization I am trying to make. This is where the data is porting in here and how it's set. And for some reason, I'm pulling up donuts. What is going on?" And it has that ability even from a screenshot to look and say, "Oh, well, you need to move this around." And I think something people need to remember is ask it why. Why do you need to do that?”
“It recently sent me down a rabbit hole because I was like, "Okay, what's the difference between Newtonian relativity and Einstein's relativity?" And it starts explaining it. And when I start hitting those barriers that have been there because I'm not a physicist, I can say, "Hey, explain this to me like I'm in eighth grade. Can you use an example? Give me a metaphor for what you're describing here." And the odd thing is coming away with the ability to explain this complex thing but also an interest in it.”
“But I also worry that with some of the streamlining that it does, does anybody need to really know how to do calculus anymore? You can make the AI do it. These are critical skills, though, and they're skills we should have. We should be able to do algebra. It's a pain. Use the AI if you've got it. But I know a lot of teachers who are expressing frustration because their kids aren't learning some of these foundational things that they need to know.”
“In a way, we've created an information environment where we need it. We need AI. And I really think that that's the biggest promise of it, is to stop using it as a potential replacement for humans and use it as a way for us to manage this infosphere that we've built ourselves.”
“So it's funny because I considered [using AI to help draft a book] personal and not work. I listed that as one of the personal uses of AI instead of work, because the book was self-published. So it was really a personal project of mine, but the book is about interviews, qualitative interviews for research.”
“I really don't trust a synthesis or analysis and synthesis done entirely by AI. But even with the overview, I think it misses a lot of important stuff that's between the lines. And by the way, it's not just that. I think it's part of the process for us as researchers to immerse ourselves in the data. If we skip that, we don't understand the data afterwards, and we are not retaining that important knowledge that is kind of layering in the back of your mind.”
“The problem is that that kind of analysis gives you tunnel vision. So you don't get the context in which that is said. You don't get if they said it before or after something else. Because the moment you code in a sequence, you follow the conversation, you follow the flow. There is some logic behind it.”
“I don't take what comes out of an AI at face value, ever. In general.”
“it was suggesting that I went to a specific place, I did a plan, I didn't check, and then that place was shut down that day and for a while for renovation. I said, "I wasted one day that I had in this location. Why didn't I think about checking?"”
“I think I rely a lot on, when there are things about research, things that I know, I use myself as a benchmark and I say, "No, you're not saying the right thing." And then that worries me, though, because I'm thinking, "Okay, all the things that I don't know, which are many, and all the domains that I don't know... should I believe it or not?"”
“it's generative, right? I mean, we should expect that it invents. But I think the majority of people don't think about that. I try to remind myself, it's generating stuff. So I try to reply, "Please stick to the real thing." And it even invented places to eat that don't exist,”
“even with a fake website. I went there and this restaurant doesn't exist.”
“the trainee was definitely not using AI at all. And I asked, "Hey, are you using any?" And she went totally on the defensive, like,”
“Why are you asking this? No, I'm not." And I'm like, "That's fine. I just want to know. I want to know how you're using it and can we talk about it?" And she said,”
“No, absolutely not. I'm intentionally not using it because I'm learning." And I said, "Okay, good call. I agree with you. Since you're trying to learn, maybe you can use it afterwards as a benchmark. The first thing, the first draft of the moderation guide, the first screening, is only you with your thoughts, because you're learning the craft."”
“I've noticed that some things were definitely AI generated. So I just asked myself, "How do I ask this without sounding judgmental?" Which was... I didn't want it to be like, "Hey, are you using AI?" in a judgmental way. It's like, "Hey, can we talk about this? How are you using AI? Which tools are you using?"”
“And I feel we shouldn't be ashamed of talking about [disclosing AI use]. And especially in a power dynamic where there's different seniority, I think we should be completely transparent about what the expectations are.”
“One is it makes me lazy. So I need to intentionally say, "No, I'm going to keep thinking for myself." I need to, again, similar to the person that was learning, retain the critical thinking. Otherwise it gets lost because it's a muscle. So we need to keep practicing and using it. And this is one thing I'm really keen on passing to the more junior colleagues. It's like, you cannot skip that. Forget about that, because otherwise you will be a solo lead really soon and you cannot delegate that critical thinking and problem solving to the machine.”
“So I don't think it impacts the way I'm thinking. It's just helping me work with my thinking, or articulate what I'm thinking, or challenging what I'm thinking. So in that sense, it's not the thinking, it's probably the how I work that changes.”
“the biggest fear is the reverse of the coin that I was mentioning before: people stop thinking. Stop critical thinking. That's a huge risk at the population level, because then we would be unable to do anything, like understanding our lives, making decisions, electing our politicians and whatnot. So that's really a risk that I see, because this is really tapping into our innate laziness”
“I remember distinctly my grandmother telling me, "Well, this is true because I've heard it on the radio." And then my parents saying, "No, this is true because I've heard it on TV." Then, "This is true because I read it on the internet." And [this is] true, because AI told me. So unfortunately I think this is just massive and pervasive in ways that we don't really grasp as of now.”
“now everything is at the tip of your fingers, but I don't think you're valuing it as much because it's effortless, and you don't question it as much as we did in the past.”
“[Employer] is extremely concerned about cybersecurity. We own a third of the electric grid and we get I think millions of cyber attacks a day, literally. So we have ways to, I mean it's very, very locked down to the point that even if I go to a site that has AI, the word, in it, I can't go to it. So the tools I can use at work are limited. Also, if there are new tools, the problem is a lot of the software tools that have been used before, whether it's Miro or Figma or anything, all of a sudden that has an AI component and a lot of them are not approved for different reasons.”
“So I've also been on a pilot for a coding assistant, which was fun, but it ended up not meeting our security team's levels of what they're looking for. So we're looking for another one.”
“just really defining, getting narrower and narrower in focus of what I want AI to do, and that got me better results. And that's kind of a metaphor for how I interact with the different chatbots: start with a really good definition of what I'm looking for, give it some background, and I turn it into a conversation.”
“I'm used to trying things over and over and over again, and once I get it right, I don't have to worry about it anymore. It works. With AI, I'll try things over and over and again and I get it to work, then I try to use it again and I get something different, like complaints, "Oh, I can't access these files" that I just accessed before.”
“I was really, really looking forward to learning how to use [Kiro] more and more, and then they pulled the rug from under our feet, and so I started researching other tools, but as of now, I can't bring anything in.”
“We're not allowed to use Figma Make because of their licensing agreement. And there's other tools. We're not allowed to use Lovable. I mean, right now we're not allowed to use the Google tool, but I just went home one day and just experimented on my own computer. It's not in the [employer] environment.”
“it's like working with a partner because it gives me ideas that I couldn't really figure on my own, different insights, but of course I have to double check everything. So that's actually a good example of guardrails. Before, I used the same tool to create personas for users of a data cataloging tool that we're looking to buy. So I gave all the interviews to the AI and said, create these personas, and it created five great personas, but it wasn't based on the data, it was based on general knowledge.”
“I'm not a native English speaker. When I write, I make grammatical mistakes and I don't find the right words always, and I'm worried that either my point doesn't come across, which is a problem, or especially when you type, that I might just not look as smart as I think I am. Again, my vocabulary is not as rich as I would like it to be. So that helped me with communications.”
“if you just take things as they are and not try to refine them, yeah, I mean it's easy to create an interface that's essentially AI slop. Yeah, it looks beautiful and it follows some patterns, but it doesn't necessarily provide anything new and it might not relate to the end user. So it might follow all the rules but miss some key points that are hard to define.”
“I recently started signing my emails that I ran through Copilot at the end, "edited by AI." And again, there's no reason for me to do that. And I kind of do it because I think it's funny, but it's kind of like the "sent from my iPhone" or whatever. But I feel like, beyond [people having to] look for the em dashes, I think it's a good way to disclose it.”
“I don't see unwritten rules. I always disclose when I use AI, whether it's reporting the results of something or if I do create any kind of visuals.”
“I'm going to run it through AI first and see if it comes up with some starter ideas instead of me doing a whole exploration. And it came up with something that I thought was pretty neat, and I just rebuilt it in Illustrator and gave it more depth and just more human touch, if you will.”
“If I can't trust it, well, my first assumption is that I did not define the problem well enough, I think.”
“I think the hallucinations are not a bug. I think it's a feature.”
“You build a relationship with AI because you have to correct it. You have to pay attention. It's not like sending something to the printer and you get exactly what was on the screen. Then you start engaging with it. And how I talk about it as a partner, I mean, that's giving it a personality and that's understanding it has flaws and strengths, and I think that's the main takeaway for me from AI is that if you want to use the strengths, you have to accept the flaws and work with them.”
“that's not something that I can go in and type to Copilot, "Find me a molecule to replace." You have to use it as a tool that augments what you do.”
“Teach myself everything. The first thing I did with it was I had it generate a massive glossary of terms about AI and conversation design. I still have it. It's on my nonprofit page.”
“There is a section with a glossary in it and that glossary was from like day one with ChatGPT, because I realized I needed to learn more than just NLU NLP. So the first thing I did was just learn all that terminology. It was hard because a lot of that terminology, LLM, you know, like there's just so much terminology that sounds like other terms that you needed to learn. So it's almost like Game of Thrones, right? Like with Tyrion and Tyron, and like I really struggled with those books in the beginning because everybody's name sounded the same. And it was the same thing with AI terminology. Learning all that terminology was the first thing I did with it. It was hard. And then I would have it quiz me.”
“But really I'm about to start writing on "what's your plan B" because I'm so integrated with Claude and Claude products and Anthropic that I don't like it. I feel monopolized. So I am trying to come up with a backup plan for when Claude Code goes down, right? It happens all the time.”
“Price gouging. Yeah. And I'm a plan B kind of person. You know, it's just back up your backup, your backup because I'm just wired that way. And I think it's an important topic of conversation. Like I am really afraid AI is going to create this massive class divide and what happens when Claude is $500 a month or $800, you know, what's your plan B? So, what happens when it goes down? Do you stop working for the day? Like I see people so dependent on it that they're like, I don't know how to work without it anymore.”
“So every morning at 8 a.m. I have a planning brief with Claude and I have it saved as a project inside of Claude.”
“And it's in my calendar because I integrated Claude with my calendar. So, that's really helped because like I said, I have ADHD and Claude really helps me, is helping me stay on task better because I have 50 squirrel moments a day. That's why I have literally like 40 Claude projects. I love the projects.”
“But what the one thing it has cured for me is the way my brain works is I have like six streams of consciousness at all times. Like not voices, but you know.”
“No, no. It's true. But what it's done is be able to allow me to get all my ideas out of my head into a project. And that getting it out and knowing it's safe closes the loop in my brain where I can say, "All right, that project, it might not be done, but it's handled. It's in a place you can close the lid on it and visit it when you need to." And it's uncluttered my brain in a way that I really, because I don't take any medication or anything like that. I work out a lot. That's like my fix for a lot of my neurological oddities. So, that gift of being able to get [my thoughts] out, put it in a suitcase, know it's safe, and visit with it whenever I want to work on that.”
“And then I might, in my 8 a.m. briefing, I'll say, "All right, this is done. This is done. I want to work on this and this." And I know Claude's going to put time in the calendar, and it's already in a project. So, I'm like doubly organized. I'm organized in my calendar, my time, but also in my filing system, and it's all connected.”
“Well, first of all, I started teaching about hallucinations from early on, and you can literally say, "LLM, teach me how to avoid hallucinations." And there is plenty you can do to make sure what you're getting back is real, right? So there are steps you can take, but now my friends are building these things, these AI brains where they're stacking the LLMs on top of each other, pulling a confidence score. But also the hallucination gap is closing too. It's getting less and less. I tell people all the time, if it's a subject matter that is common like tech, right, and there's old long history with it, it's probably going to get that right. But if you ask it about like, you know, the Figma update from yesterday, it's going to get it wrong. So, there's a time piece to it.”
“There's a subject piece to it. There's a prompting piece to it. But you can have the AI, I tell people all the time, have the AI teach you about the AI.”
“I am very out and about my personal usage. I actually teach people. I have some stuff on my LinkedIn in the featured section. I have a couple carousel lessons about how to be a thought innovator and not a slop generator.”
“Yeah, and because it's really true. What's happening though is YouTube is struggling, LinkedIn is struggling because there's so much AI generated stuff. So when I do something that's 100% AI generated, I preface it, like this is 100%, like I'm actually doing something publicly and purposefully 100% AI generated to show people what that looks like.”
“So, what that usually is, is I will say, "Hey, Claude, I want to write a LinkedIn post on X subject. Interview me on the subject." And then take my answers, do not change them, and turn it into a LinkedIn post with this LinkedIn post-writing skill. And that way it's me, my words, they don't get changed, they just get arranged nicely.”
“And then it will tell me you missed this, this, and this. And I'm like, oh, now I need to go study those things, right? So, it's a win and a win and a win, like right after one after the other. I got a great post. I got to use my knowledge, my words, and then also see where my knowledge gaps are and have a place to go study.”
“I worry for people who don't understand it. There's a huge, the majority is like, "I'm afraid of AI," or you know, Gen Z is absolutely opposed to it. And I think about, you know, there's also like a gender gap apparently, they say. I don't know if I buy it. In enterprise, women, I think, are leading the way. In general I think women are more cautious. So, the environmental piece is always front and center for me. But the people who are blowing it off without trying it and not like 10xing themselves, like I want to see, you know, women in business thrive. And if they automatically are a hard no on it, they're putting themselves at a disadvantage. So I worry about that a little bit.”
“I worry about schools. One of the things I really want to, where I want to move my consultancy, is into public schools because they are clueless. Teachers are free to do whatever they want with it. I think there needs to be governance specific to teachers, governance specific to admins, and more importantly special ed students versus typical students. I have a [REDACTED: family member]. An AI tutor probably would have made the last six years of his life a lot less hellish. So I worry about schools implementing it incorrectly and doing like a one-size-fits-all, which is not how it should be.”
“For myself, no. For others, yes. For myself, no. Because of the sheer volume of what I'm able to do now. And at my core, I'm a conversation designer and that is so deeply ingrained in me that even if I got a little rusty, it wouldn't take me long to get right back on the path. But for kids and teachers really freak me out. Like you know, the studies, as soon as kids' competency, everything, everything tanked as soon as Chromebooks entered the picture. And now we want to talk about kids using AI.”
“I unplug on the weekends. I go down in the art studio. I paint my brains out all weekend. I go in the garden.”
“I unplug. I leave my phone in the house. I play with my dogs. Like I purposefully unplug.”
“But I am a philanthropist at heart, so really for me it's closing those gaps, the gender gaps, the pay gaps, the poverty gaps. That to me would be the best thing that could happen.”
“Then the next chapter is they rolled [generative AI] out at work and basically told us you better start using it. And they even, they don't monitor what we use, what we chat with it about, but they monitor how often we chat with it.”
“And then they kind of look for, all right, what have you done lately that's improved efficiency using ChatGPT, for example, and now Claude. So it's a little bit with a gun to my back that I find I'm dipping my toes into it deeper every day.”
“This year, the one-on-ones that I used to conduct with the team members, some were much better than others, but there wasn't a whole lot of consistency or structure to them, and I just felt like overall they could be better as a group. So, I turned that question over to ChatGPT and just asked for some best practice methods there. And it gave some pretty decent ideas. I'll say it gave me the good starts of ideas and then I would hone them myself and then bring it back to ChatGPT for kind of like a final "does this sound like a workable plan" and then say yes or no.”
“I've also run into some situations with ChatGPT where it will just obviously hallucinate something. The chief example I always have of that is there was a time where, literally, it was last November, I needed to make a calendar for like a newsletter that would have been the month of December and I just didn't feel like making the Word table. So I asked it, "Make a Word table that's a calendar for the month of December with two rows for each date," that sort of thing, and it messed the dates up. Like if November started on a Monday, it had it starting on a Tuesday where none of the dates lined up.”
“The detraction I'll say is it's almost, the word I've used with my wife about it is that it is surprisingly seductive in that I might be overrelying on it suddenly. Have I gone from, because I'd always been a little bit of a kind of a Jared Spool skeptic about, hey, this is just a word association machine, this is like a magic trick, this isn't much substance to it, to now suddenly I do use it a lot more than I think I ever envisioned that I would. And in that ideation space, it's been a lot of help just for me to broaden the approaches that I bring to the work that I've got to do for the rest of my team. So, that's been helpful for me. It's almost kind of like having a small council of different personalities or different backgrounds or different perspectives to kind of push against my default way of doing things.”
“But again, on the detraction side, I kind of worry like how much of myself am I losing through this process because I'm just lazily relying on it now to provide me with all of the perspective. So yeah, that kind of concerns me.”
“So a lot of them are kind of what I would call purists. They see themselves as, "I work in Figma a lot. I sometimes do some research and that's about it." And so now stakeholder management, influencing without authority, communicating to different levels of audiences, things like that. They all needed that. ChatGPT really did help me with coming up with a 12-month comprehensive training plan that included both paid and free sources. I really went back and forth with it for a while on this one to really kind of hone this into something that was, that thus far has proven to be valuable and also feasible from both a cost and a time perspective. So, I was able to get that done much more quickly and much more comprehensively than I ever would have been able to by myself through just what I'll call old research methods now at this point of me just googling things and talking to people.”
“So I worry more about like what's going to happen the first time we get sued over a claim that we deny that we shouldn't have or something like that, that there's going to be a swing and a miss here. Are we overrelying on it when we're giving away so much of our processing, our manual data entry processing capabilities now over to AI? And I just wonder like are we building this house of cards now that's just eventually going to doom the company?”
“I honestly try to hedge against that by only asking it things where there is no quote unquote wrong answer, because like I said, because of that December calendar incident. I'm not 100% sure that I would trust it if I asked it what 2 plus 2 is half the time. So if I have a big hairy task that again would require processing tens of thousands of rows of data, I don't know that I would, I would probably ironically, and again this goes back to that concern that I have of am I now suddenly seduced into overrelying on it, but I would probably ask it to say, "Okay, process this 10,000-row file but then also tell me how should I double-check your work," which is circular reasoning in the worst sort of way. But yeah, I think ultimately my personal strategy is try not to use it in spaces where there is high risk and or where there is an absolute need for 100% accuracy and then just stick to it more where the spaces are of, like, I know I keep saying it, but the idea generation where there really is no wrong answer per se, it's just input for me.”
“No, not at work at least. Let's put it that way. It seems to be kind of just this gold rush mentality of we're all expected to use it and then they sometimes ask us like, "What benefits have you been getting from it lately?" Just I think as a way really just to justify the cost of the licensing that they do. But beyond that, if you mean like any sort of like disclosure, so let's say for example when we turn in a report, "Last quarter, portions of this were created through generative AI means," nothing like that.”
“Anxious. Just because of the fact it just seems like it's come on too fast, too strong, too quickly. And without anybody really understanding any of the ramifications, governance, ethics, environmental concerns, economic concerns. Again, when the Sam Altman types of the world will talk about this golden utopia in the future where nobody has to do any sort of like drudgery work anymore. It's like, well, no offense to anybody, but the economy runs on an awful lot of people doing drudgery work. And what happens when all, you're just going to say these people just live a carefree life with no job anymore because there's nothing for them to do and they just have this limitless free time now because all that overhead has been lifted from their lives.”
“So I just don't know. I feel like it's just, I've said to my wife in the past, I think in the future, unfortunately, when we look back 30 years from now on this era of technological advancement, I think the legacy of this phase of AI, this gold rush mentality, is just going to really be exposing the greed of C-level executives in the world right now in that they would put their faith in anything which will allow them to say, "I'm the person who cut staff expenses by 30% and as a result you stakeholders all got higher dividends and I got a bigger bonus, so everybody wins." But that's just not true.”
“I mean, I watched both my kids when I was in school. My parents' biggest worry was, am I on drugs? When my kids were in school, like in high school, five, six years ago, my biggest worry was like, are they cheating off of others? Everyone seemed to be crowdsourcing all the homework. And I'm like, is anyone actually learning anything other than just how to get by in an ethically dubious way? And now I feel like AI has almost given rise to the legitimacy of that now in a way.”
“And when [his son] hits the job market, is he suddenly going to find that if he, for whatever reason, ChatGPT falls out of vogue, it becomes illegal, something unforeseeable happens. Does he, when you kick that crutch out from underneath him, is he capable of doing anything? Is anyone capable of doing anything? And I've seen some articles about this idea of just the stagnation of human capabilities. The more we lean on something that can do something so comprehensive for us, or at least that we believe to be so comprehensive for us. So that gives me concern for the future. I don't know what to tell either one of my kids about what's an AI-proof, if there even is such a thing. What's an AI-proof field of study for you, field of work for you? Or how should you again responsibly integrate it into your work in a way that's not eroding your own ability to think critically and put two and two together.”
“There's a blend across work and personal. Right now I'm getting my education doctorate. So I'm doing my whole dissertation on AI's role, like my evolution of my leadership skills in conjunction with gen AI. So I use it probably a lot between work and personal and school. For school I'll even have it read my writing and then give me like a review, and not do it for me but just tell me how to improve it. Or I see what it recommends for clarity and conciseness in the writing and then any kind of grammatical errors, I'll use it for that. I'll use it to brainstorm professional development ideas with me for teachers. As far as like, I use a lot of design thinking in the professional development sessions with teachers that I create, and I will have it reference design thinking protocols or design justice thinking protocols to make PD better than what I could do alone, because I have a very limited amount of time to create.”
“Yeah. So there's Copilot. We can use Copilot and we can use Canva AI. But I learned a thing. So, if I get off of the school district's network and use the guest network, I can access any AI I want.”
“Gemini and Perplexity. Yeah, those are the ones. I really love Gemini, but I use it every time I'm not on the school network.”
“Oh my goodness. Well, it's a new role that I started in June and I started the new role into administration in June. So, I'm trying to think, what did I used to do with it? Oh, AI. Okay. So, one thing, does it have to be in my role or can it be in my personal life? Okay. So, I used to meal plan without the use of AI and that was just like looking up recipes and then putting them in an app and the app would tell me what to go buy at the grocery store. Now, I just say, "Perplexity, I am wanting a high protein diet that's low cost. Tell me what to buy at the grocery store. You have all my health data. What should I be eating?" It creates the whole meal plan with the recipes and gives me the shopping list in less than a minute and I like that.”
“And then I use it, I still design as a side gig, and with client's approval I will write whole courses for them and just have it reference their writing style on their website. Well, this one client I have, she has a very unique voice that she speaks in and has podcasting and stuff. So, I'm like, just look up this site, write in her voice with this content and go.”
“So, I feel like I'm the only one in my organization, not the only one, I'm one of the ones in my organization that is setting those norms. So, for use, I'm trying to push a guideline for faculty and staff use of AI, like guidelines of what we use AI for, what's good use of AI, what shouldn't we put into AI for output.”
“Sometimes Perplexity will give me a bad link and I always check the links. I always go back and review the work that AI did in the background because I can go back and look at the source links and sometimes the links, like I noticed in Gemini when I was doing some research, I was asking general questions about AI use and I found it was citing sources that weren't as rigorous as others. It was citing blogs. It was just searching the internet. It wasn't doing an academic search of stuff that I could cite. So I would say that's kind of the part I don't trust. There's also another piece of trust that I don't have and that's bias.”
“So, this is something I think about a lot. I serve a majority minority district and what are the sources? What's the input? Because there's so much, like can I see the data set that AI was trained on? Because I want to know that when it's giving a teacher an answer, say they're not very, how do I put it, they're not very culturally sensitive, if it gives them something that's wrong I want them to be able to identify it.”
“But even, you know, so, and then there's another problem with it on a broader sense where I don't trust, because I was reading research that the UN is really pushing AI in the global south for teaching and learning to create learning management systems and to give students feedback. I don't trust that because the data set that they're using is so westernized. It seems like another version of colonization and cultural, how do I say that?”
“Like making culture homogeneous, I guess. So those are some of the things I think about when I think about trust.”
“I would just say it's about our own bias. Like, we live in a world that is not fair, it's not just, it's not equitable. And how is AI amplifying that in the world? That would be my only concern. Without checks, is it just, like, I've read a few studies where students were given feedback based on their writing and minority students were given less rigorous feedback from AI than Caucasian students. And then AI didn't recognize different dialects of English except for proper English. And then facial recognition didn't recognize Black students as human when they went in to be recognized for a test that was proctored, it didn't recognize their faces.”
“So that's the kind of thing that concerns me about AI. But I think as long as everyone has a seat at the design table and as long as we have this type of research and feedback from minority groups is used, I think we can mitigate the risk of bias.”
“Sometimes in my doctorate I worry about losing the ability to just find stuff in the library search engine. And I even went so far as to hire a tutor because I'm in my dissertation phase and I'm like, how do you know what words to search and what's going to bring you back the right research? And they're like, well, it's a process, a learning process that I feel like I'm missing out on. Like when I started design school we started with paper and pencil.”
“Like relearned design from a very non-technical standpoint and I feel like I'm losing out on that process if I just rely on AI to find the stuff for me. So I guess, but I guess maybe that's going to be a skill that's obsolete because I don't know the Dewey Decimal system.”
“Well, I kind of think of, I don't know that I have many concerns because when we, my only concern is that we don't start with the basics in school and that we give AI too soon. So I'm talking about elementary years, primary years. Because I'm thinking back to when I learned long division and multiplication and the basics of math, it was like learning a language without knowing you're learning a language.”
“There's a logic behind it. And I feel like if we skip over learning [the basics of math, long division and multiplication], maybe we'll have people who can't think for themselves. But we're starting to see that now with students coming up because they're over-tested, just because of over-testing. So I feel like, you know, we used to farm and we used to be really active and walk and now we just go to the gym. So those who have the motivation to hone their creative thinking or critical thinking skills, they will. Those who don't want to won't. And that's where the divide will be, I think.”
“That's a good question. I don't, no, I don't think it has changed how people value my, the output is, I can say productivity-wise it's definitely sped things up, you know, things that could derail me in the styling of something it can just sort of just get the information presented properly. And then the tone of voice which is really important, it's like I did a test a couple weeks ago. The voice was sort of with an executive assistant compared with an English professor. So same text and I got two different answers. So being able to do that for different audiences I think is really helpful. You know, the whole storytelling thing that is really important, especially talked about in the UX research read-up.”
“My biggest win I'd say being able to do the business plans, pitch decks, financials. There's also like formulas that were given that, I'm not a math guy, so having those to be able to put in Excel or what have you is really helpful. Anything math-heavy would be, you know, anything that would help with quant or whatever, big help for me.”
“Okay. So an ongoing chat I have in Gemini, I saw past tense being used in a conversation. So I asked at the current time which was off, which was really sort of shocking to me, and it's obviously not a constant, you'd think maybe a computer would be. So I had to ask it to going forward always refer to the atomic clock. So occasionally I'll ask what time it is and sometimes it'll also tell me what time it is when I answer for a new part of the chat. But I think critical thinking is so so important because I will notice in this ongoing chat things that are left out, I will question and they'll act like they forgot. I don't know where that disconnect is, but I would say if you're not really critically thinking about the information you're getting, it's going to probably let you down in some ways.”
“I haven't seen any mention of having to do that, but I'd say for something that was totally certain by either statistical data, I mean, I think you need a disclaimer saying these numbers need to be double-checked for accuracy for sure. Anything that goes into the area, you know, we're going to make this big million-dollar decision based on this widget not working correctly, you know, that's what has to be double-checked.”
“So, it's RRCC: Role, Result, Context, Constraint. So before I even put in what I want, the information question, I do the role I want it to play, the result I want, you know, the goal, context, constraint. So say, here's the example: role is "act as an expert movie buff," result, "I'm looking for listing of movies playing my area," goal, "to take my family, friends who are fun," context, "I live in such-and-such city," constraint, "limit list to non-rated R movies." So that really helped with certain outputs and that's something I will most likely use predominantly going forward.”
“Yeah, I think that could happen. You know, instead of going through material myself, notes and sort of collating myself and thinking that out. Yeah, I could see that skill going downhill. It's almost like my handwriting skills gone downhill as I type more and more for text. I noticed that dexterity isn't quite what it should be sometimes.”
“I can see the same sort of parallel. Yeah, for sure. And that's not a good thing, especially for aging populations. You know, they need to keep that brain strong.”
“So that actually, I used to work at [former company] and I started as just a presentation specialist putting together all the PowerPoints for the senior leadership and then maybe a year and a half into that I got moved to the Chief of Staff team for one of the EVPs that was data and analytics, and then became decisions and analytics, and they started talking about ChatGPT and how [former company] was getting involved with AI. And one of the VPs that I was supporting, Ragu, he was like, to me, the genius in AI, everything, you know, it's that kind of person you look and said, "Oh my gosh." And when they said, "Ragu, where should I go?" It's like, "Well you start playing with ChatGPT, look for Google, some classes." And then I start like dipping a little bit and then my first experiment was, okay, let me, in my personal, let's start with the personal first because [former company] was kind of funny, they were exploring a lot of things in AI but everything was like firewalled so everybody was trying at home but we could not actually try. It was kind of, I never understood that whole rationale behind.”
“So my first win was I took a picture of my fridge and I gave a prompt saying, "Today, you're my personal chef, create for me easy to put together recipes for the week and keep shopping at minimum. I like this, this and this. You are allowed to use all or any of the ingredients, not necessarily everything at once." So, it was very descriptive of what I needed to do, what kind of task, and then I was like, whoa. I got out the whole menu for like four days and I was like, I like that. Then I was like, okay, I'm going to test my pantry. So I went there and I did, okay, now I need something and now I'm trying to do other things. Then I moved to the financial part of it. So okay, I started testing areas and I was like, okay, this is better than me, you know, that's going to be my new BFF. You know, Google is no longer my BFF, you know, it's just my acquaintance nowadays.”
“And then I started using ChatGPT and from there I moved into Claude. And because of this then I was like, okay, how can I do this on my professional side? Because one of the great things that I did, since last time we connected, I took a certification in neuro-linguistic programming. So I was doing mentoring and coaching and I was running the DEI council and the mentorship program for [former company] for our business unit. So I was like, okay, how am I going to put together the content since English is not my first language? So let's use ChatGPT to polish it off, how to get a tone for the executive level. So that, it was funny because I learned a lot from ChatGPT, like how should I talk to, how should I write something. So I didn't use it in a sense of, okay, do for me and that's it. But I did in a sense of, okay, do once, do twice, and then after that I always start writing my own things and ask ChatGPT to polish it off, or Claude, and the changes were minimal.”
“So that was kind of helping the background in my English, like the writing skills, but also kind of making it easy and faster. Okay, that's the communication that I need to send for all the mentees for this week, what they need to do or not. So just give the bullet points and ask them to create. So I start moving like that. And because we could not use this at work, I was doing my personal on my phone and then I was emailing myself at work, said "midnight ideas, insomnia crisis." So people said, "Oh my gosh, P13 is having brilliant moments, you know, at night." But it's like, they're blocking. But it was funny because most of the VPs were doing the same thing.”
“Those are the parameters, deliver for me by 8 a.m. every day the top 15 jobs. So that's what I start doing, you know, find all the blind spots. Now I'm updating my portfolio because everything that I have designed, not just for [former company] but for [previous employer], I cannot publish because the whole confidentiality. I do have the hard copy. So now I'm converting my pieces into case studies and trying to find a way around, because when you submit your portfolio for review, if they don't see the images they automatically disqualify you. So it's like, okay, a site cannot have the images but if I don't have the images I don't get the job. So I started doing those day searches and project management too. So that's how that started and I'm loving it. I'm taking also, this past weekend I got, it's a very basic thing but it's been helping me a lot, it's from Cursive.io. So it's a very basic course for all those most used AI tools, so Lovable, Claude, Midjourney, little classes that teach me the basics so I know a little glimpse of what I can do with each. So that's what I've been up to now.”
“But what is very disappointing, and I think that's a common agreement with everybody that I talk to, information about AI tools is always scattered. So you don't have like, say, I'm still trying to learn Figma, okay, and I go there and I start like, okay, and then I go someplace else. And it's the same thing when I say, okay, where is a tool for ChatGPT, where can I find the tips, the tricks, you know, the dos and don'ts? Or what are the top skills, the top AI tools that people are using? Because each company, when they look for jobs, they have different tools they're using for AI. So where do I go, where do I learn, where do you know, are those actual resources? I feel everything is so scattered. Sometimes you find stuff on YouTube, sometimes on Instagram or LinkedIn, or, you know, that's the biggest blocker that I have.”
“Well it's what happens all the time because AI is a tool and a tool that's based on algorithms. So any wrong command, any wrong prompt is going to trigger a not so accurate response. So normally when I think about like, for example, my investments or some of the accounts, I was like, well 1 + 1 equals, why are you giving me 4.2? That's irrational. And a lot of times I compare Claude with ChatGPT and I say, okay, you know, this is wrong, or whatever the situation. And I caught it a lot of times. Say I give a table for you to tell me what's going on and you're not reading the table properly. It's like, okay, do your job as you should do. And my response, because there is, so you have at the end of the day it's not about the AI, it's how can you use it? How can you leverage?”
“They do. I think one thing that I noticed is the older you get, the more skeptical you are. I noticed that the younger generation, and again not being judgmental, but I was working with the millennials, the new alpha, and those generations, I cannot keep track of which, whatever name they are now, but they're like, "Oh, AI said so, it's the right way." And you check the older people, they had more experience, like, "No, might be a better way to do that." So I noticed that I would say 30s, mid-30s and older, they had more critical thinking, a little more common sense. The younger, like the early career, like the late teens, early 20s, they're more into, "No, no, no, let's do this. Let's trust AI and that's it." So, as a whole, I see the biggest thing with the age group.”
“In some places, yes. I just came back from Brazil. I was there for a whole month with my mom. And over there a lot of places, if they do images with AI, they put an AI credit, you know, on the image. So they are disclosing. Movies, anything that's done with AI. Here, very sporadic. I see that. But I feel that at some point we need to have some norms, some rules. Because the deepfakes, they know, I mean, we have elections coming up here. We have elections in Brazil. I mean, there's so much, so many things that AI can do to damage. I think it needs, it needs somehow to have some sort of rule, some kind of criteria that we can get things, you know, okay, you can only do A if you do B, otherwise it's going to be like nobody's land.”
“And so, I need you to do, and what tools I'm giving to you. So, like, pretty much I fill those three bullets. So, okay, today you are my financial advisor. You're going to select for me the top 10 stocks and I want them to be in the logistic industry. So I give those specifics. Or, you know, today you're my content creator, I'm creating this email for this audience, needs to communicate this message. So it's like, which hat you wearing, what's the task you need to do, and what are the constraints or, you know, whatever background. So that's the three items on my formula, my three pillars that make my use successful.”
“I would say the biggest fear is no guides. Like there's no rules to punish anybody that's using AI to harm the world. Okay? So no matter, you know, to start a war, to contaminate food, whichever, you know, when you're using AI to cause harm and there's no rule to punish those people, there's no way to stop them. So that's my biggest fear, the lack of police per se.”
“I think unless you know how to use it, most of the, I don't see UX designers surviving in 10 years from now. It's sad that I'm saying this, I mean, I'm passionate about that, but AI is taking over. So anybody who has strategic thinking can take over anything. You know, you can use any tool to do graphical design, UX design, anything that was done by a human before as far as creativity can be done by AI.”
“And I've seen this on job posts, like the tools required are different by same industry, by different companies. So, I don't have a, like, financial, like, that's the standard for financial is this, or the standard for healthcare. No, like, I was doing, like, for, like, say, presentations for education. Each company is asking for a different tool with AI. So I think that's the biggest gap, the lack of standards. We don't have a go-to. We have too many options and it's almost like you have to be like the jack of all trades, the unicorn of AI.”
“Yeah. I mean, I think what we're trying to do, which we've been trying to do for the past couple years, is really ambitious, which is to create healthcare applications with a sort of modular application building tool and to create the whole backend so that their applications are possible to be used within the healthcare context. And I think late last year, as we've been struggling through putting this [application] all together, one of our engineers started leaning more into Claude Code at the same time that some of the big advances happened and made a ton of progress and was able to hook up our own instance of an LLM to start creating those applications and it actually worked. We had the building blocks figured out and it was putting it together in a way that was like, oh, we thought that this would come at some point and now it's come and now we have to catch up and work around it and try to figure out. And for me as a designer it was like all of a sudden”
“I'm maybe not one step behind but two or three steps behind. There was one instance where we've been discussing the experience of using conversational AI in our tool and what the engineer had done was working but it was kind of overwhelming in terms of everything that was a part of the UX. And so we were trying to find time for me to collaborate with him because he's been building with AI. And that was I think the first moment I'm like okay I'm just going to see what I can do with Claude Code and start doing it. And I kind of just went in deep for a couple days and was able to rebuild it with Claude Code successfully to, just the front end, but to illustrate the experience that we wanted and it felt like okay now I can kind of play, now it's kind of like fighting fire with fire like I can compete a little bit in that process. And so that was really impressive to me for the first time.”
“I think because we're a startup and we're really like 10 people, day-to-day and we're dealing with AI ourselves, it's been mostly bottom up. I think at some point, well at some point it was a little bit top down. Early this year we sort of refocused our efforts and knowing what AI could do and what our engineer was able to do, we said, "Okay, now we want to be much more ambitious and work through all this backlog that we thought was going to take months in a shorter amount of time." And so everybody needs to be using the cloud and everybody gets a subscription. We're going to do this with the people we have. And so that was one instance of it being top down, but everyone was already dabbling with it before then.”
“Everything we're doing is building the plane as it flies. And I have work in Figma which hasn't been fully translated into our product. And so that work is still there to actually do those refinements, and even to truly implement the designs from Figma into code while we're still building out new features and whatnot. And so sometimes I'm using it to do the work that a front-end engineer might do to clean up our implementation.”
“So we are using it a little bit for that too. But it hasn't gone back into Figma yet. I feel like that's still a work in progress. And then there are times like when I was taking that LLM kind of experience, the chat-based interface project, and I spent just a couple days just working on that and I was really designing as I was building it because I had my engineer's work to start from so I was refining their work, I was cleaning up what they had done. But there would be times when I'd give maybe a general prompt and the output, maybe 50% of the output worked and 50% didn't. So, I say, "Oh, that's a good idea. We'll keep that, but then change these five things." And it's just kind of like an iterative building process. There have been other times where I'm like, "Okay, I'm going to try and use Figma Make because I haven't used it very much" and I'll give it an idea that I'm working on and the output just took a while and it's not helpful at all.”
“I think because my work has been more just front end, I haven't gotten in trouble yet with anything. There's definitely one instance in our company, we have a siloed off instance of Claude that has all of our product context in it that, because we're on Azure, we have to use and I'm on Mac but I have to use a virtual desktop to use, and it has all of the context for our product and so you can ask it questions and it will give pretty good technical answers. And I think one of our salespeople or product people was responding to a client and used a different instance of Claude and it gave him a plausible answer that ended up in a client email that was wrong. And that was not good and that had to be, that was sort of like a step back moment for the company to say please be careful and please validate everything you're seeing coming out of the LLMs.”
“The other thing I've seen us do, which is hard and I don't think it's completely something that we figured out yet, is like we'll have a sort of analyst or subject matter expert in [the industry we serve] who's very technical who will start to build out a concept using AI or in the context of what we're doing and it will get maybe two or three steps before anybody has questioned it and it'll go through maybe our engineer too and start being implemented before we've been able to take a step back and say "maybe that wasn't a good idea."”
“Some are front end, sort of like a dashboard kind of analysis view. And if we come to a new client or a use case that doesn't fit within that schema of building blocks, we sort of have to take a step back and reconsider. Do we need a new one or does it fit? Do we have to broaden the definition of one of them? And definitely a couple people are using AI to try and figure that part of it out. So propose a new building block that fits within our system. I think sometimes I am seeing the result of that work with AI a few steps down the chain and I have to question whether that was a good idea. So maybe the AI proposed a new structure to how our product works and I disagree with it because it doesn't take into the context of whether an end user will be able to make sense of it.”
“Yeah, that is also happening and I haven't talked about that yet. I mean this is almost like very detailed product documentation but it's also that we have sort of a process of them going through AI to come up with that and then come up with the technical details to start implementing it. Also they are vibe coding some of those interfaces and sharing them with me and the team, and that's been its own interesting challenge. How so? Tell me more about that. So I think there are a few aspects of it. In [our customers' industry] I think the bar in terms of end UI design is not always terribly high and so when someone vibe codes a design, puts it out there like "oh we're done, look, so and so did it, it's there," and I start looking at it and there are some things on the surface that are fine and they're working and maybe there are a few good ideas that I haven't thought of too.”
“But then you start to peel away the surface and there's so much that doesn't make sense in terms of what we're doing and the layout in addition to just like maybe the design system we're using. It doesn't map to the design system we've already established. And so there's all those aspects of it, but then there's even translating the domain and the intent into the interface that I never, like sometimes I'll just, in the past maybe I'll get handed one of these vibe-coded interfaces without that context and I'll have to go back, either I'll have to do my best to extract that intent out of the interface or I'll have to go back and ask 20 questions just to figure out what was going on. And so this is something I sort of have unsuccessfully proposed which is that we do a better job of documenting our intent if anybody's going to be vibe coding interfaces and put some structure to that so that we can say, okay, so and so made an example of this application, what were you thinking, what were you hoping to accomplish, and with the idea that maybe if that was documented we could assess it together and see whether it was working.”
“But I would say that any sort of documentation in that process has been unsuccessful so far. It's been more like, okay, you did this, now we have to meet to walk through what you were thinking. And was this intentional or was that intentional?”
“We sort of raised that flag and had to move on because we knew it was just how we were building it. And then until recently we came across a new scenario where we had a whole other kind of, a patient-facing example, and we had to take a step back and our engineer had to basically re-engineer how that was working so that it wasn't generative anymore but it was only generative based on a design system that was already defined, which is I think how my vision of it was already but we weren't there yet. So now that we have that structure in place, there's work for me to go back and make sure that the design system it's referencing makes sense. But at least that's open to us now. And that is more context into why a vibe-coded app, when you don't have that design system in place, has even more weight because there's nothing to ground it.”
“I feel like we're all using it so much. No, I think we all know that we're all using it so much. I think it's not, there's just in the context of our day-to-day work with these 10 people, I mean, I will call out I think I probably use it less for my day-to-day thinking than other people do. So, if we're having a product kind of conversation remotely, somebody might respond to my question with what I know is an AI output. Rather than somebody sending me a few sentences, they'll send me two pages worth of AI output of a concept and I'll have to read through it and see if it makes sense.”
“It's frustrating. Yeah, I think it's frustrating because I think some people have just been more trusting of it. And yeah, kind of phoning it in. There have been maybe a couple times where I've actually called out like, I don't want to know what Claude thinks about this. I just want to know what you think. Like, here's why this doesn't make sense. Tell me what you think. How does that go over when you say that? I think I've received silence as a response to that before, but I feel like because we've had some more visible failures with sort of letting AI move too quickly, that's been happening less. So we've had more, I think we're more aware of where it can fail if we don't watch it in our process.”
“There's an aspect of it that feels very empowering when I'm trying to build out an idea quickly. There's been a couple times where I'm building something in Claude Code and it's felt like it's nice to have an iterative design process in actual code. Which is really cool. Like I used to work in Flash a long long time ago and when we were doing work in Flash, Flash was the output because it would be embedded into a website. So you were building what would be the final product which felt really gratifying and so there's an aspect of that that I appreciate. But it also feels like in general there's so much anxiety around it. And it feels like I don't have a choice. Like I don't have a choice but to fight fire with fire because that's what's going on, to sort of keep up and not be left behind, and that doesn't feel great.”
“Yeah. I mean I think when AI can work right now, I think this is true for engineering but for design because that's what I know, it's because you have somebody with the judgment to know when the output is working or not working or quality or not quality and you can adjust from there, but that comes from experience. And it's almost like managing a type of more junior role except faster and so your brain has to move faster. And so that is a really good question because you don't get that judgment and that experience without doing the work and being hands-on in it in a way.”
“So there are certain things that I don't know if people will need to pick up in the future, like I don't know some of the detailed UI kind of work which the details of were tedious and took a long time in the past and now they're so automatic, or even like responsive design patterns. It was such a tedious thing before and it's so much easier now. And will there be a need for people to learn that? That's a good question. I don't know. But there's another aspect of just designing something to a certain context or problem that will be really important. So how do you train people to do that? Well, it will be interesting. My wife actually teaches English as a part-time lecturer at a university and she's gone from trying to have students use AI in really specific ways to being, like this current class, she's having them write everything by hand in the context of the classroom. She can't trust anything anymore. And it's going to be extremely painful for them, but they're actually, the idea is that they're actually learning and doing the work.”
“So we got Claude Code and Cursor with a whole suite of models, lots of tokens. And they basically trained all of us, sent us to training. And so we started using it with large context availability. I'd say over the last year it's evolved into vibe coding with verification. And now we're realizing, well, the volume of code is not something we can really manually inspect. Although there's some things that we have to inspect because it's going to DoD or military and government. So, we don't know what the policy is yet. It's kind of, we're in this wild frontier of, okay, we're using the tools because we're told to, but what does this mean from a customer's perspective? Who's going to accept it? Who's not? I don't think I'm in a position to really... it's not like my opinions are going to influence the company at all, but those are questions I have as I dive in head first.”
“And that's kind of the goal of the company is to get to that. So in essence, I'm building my replacement, which is going to be this robot. And I say that jokingly, but it's also serious. And I understand that that's really what's happening.”
“That's if AI does everything perfectly. But we all know it hallucinates and it's interesting because when multiple things hallucinate then you've got this chain reaction of everything going off the rails.”
“So it would need access to GCP and Azure. It would need access to our Bamboo and the other tools that are in the infrastructure that are needed for this production so that it could do the full cycle. So right now I'm the monkey in the middle. I tell it what to do in the tangible code. It does it. Now I have to run the test, take the logs from it, and feed it back and say check the logs, did it work or did it not? So I'm really in the way of the velocity.”
“My current work is to meet the security technical implementation guide from DISA, the defense industry. So they have a huge list of requirements for, we'll just call it security hardening, and there are tools out there for scanning. So I have to build an image, scan it, look at it, look at the requirements, and there's not always a mitigation or remediation that is something you could script. Sometimes it's a site policy that has to be manually done, like it could be password complexity rules or checks that have to be done manually by somebody on site. So the process spans both code and policy documentation, and that really varies from customer to customer. So it's not the best example of how to use the AI to take on a bunch of requirements. I can take them one by one and say, "Hey, how do I remediate this?" And sometimes it can be done with scripting or some tweaks in the OS or whatever, but closing the loop of, "Hey, just take this whole document that's a thousand pages and implement everything," well, the government's going to come back and say, "Where's your audit trail of your development?" And I'm going to say, "Oh, well, I just told Claude to do it." I don't know how that's going to fly.”
“Which goes back to my earlier point about, well, what's the position of these customers about us as a provider building software that they're going to buy using these tools. Enterprise customers probably don't care but some of these federal and military ones might have a different opinion.”
“From the top down they're pushing it. So, I think a lot of employees were resistant to it, including myself. Initially, I wanted to play with it, but I didn't know how best to integrate it into my daily workflow. I don't know what initiated their desire to do this. I don't know where the seed of that came from, but once they decided we're all in and they spent the money on the tools, they spent money on training, they've created dynamic forums and everyone is sharing information about how they are using it, best practices. They want everyone to be sharing what they're doing and how they're doing it. We have sharing meetings, weekly AI success stories. Here's what we did with it. We had one person, a senior architect that I know personally, he had domain knowledge of contact centers, not contact centers... email servers, sorry, voicemail systems. All the words are swirling. Maybe it's the whiskey. I don't know.”
“So there's multiple solutions in the company because of the history of acquisitions. And so he thought, well, what if I just start greenfield and say, I know all the requirements. I know all the features. I know everything I want. In 48 hours with Claude Code, he had a working prototype and he spent another week polishing it and it was integrated with all existing systems. And the CEO called a special all-employee meeting to have him present it because it was a wakeup call that this is what we're going to be up against in the marketplace. There are going to be companies, new upstarts, existing companies that are going to be using these tools in this way. And it doesn't matter how good your old product was, right? If you realize the power and speed of AI, then granted, we don't know what the real cost is going to be.”
“And are the AI vendors going to be like crack dealers that say, "Oh, the first taste is affordable," and then they're going to turn the screws on us and now what? We're stuck with a codebase that no monkey in the company can grok, right? And so you're stuck. You need the AI to support the AI because Pandora's box has been opened and this is just one sphere of it, right? I try to push all that aside because I'm having fun with the shiny new toy and trying to figure out how to best use it.”
“Yeah, I think about this every day. That's why I said earlier that I feel fortunate that this is happening at the end of my career because, while I can see there's room for people to use the tools, once they get to the lights out software factory then the entry level coder is not needed anymore.”
“And what do you lose with that? Well, we take for granted our ability to think about software the way we think about it. And I have to think that education has to change because they can't be producing software coders anymore. That's not the skill that's valuable. They need to be thinking higher level. I think that's going to be the next shift. The growing pain of, oh well, this field that has existed for the last 50 years, more than that, I don't know, software has been around longer than I've been alive, so 60 years let's say. It's been more or less the same. Yes, the capability of the computers has improved, the languages have changed, but the principles really haven't. The development methodologies have evolved.”
“Maybe for the worse because I don't like Agile. I miss the old waterfall days where you had a design on paper before you start coding. Actually, iterative development was fun. Prototyping and iterating on that, I think, was my favorite method of work. But I think in the short term it's going to impact the people who just graduated. I feel bad for them because they've been told, just get a degree in software, computer science or software development or whatever. And now that's not the skill that's going to have any value because the value of that career is the experience that you learn over time through coming up through the junior ranks and dealing with all these problems in the field or in your own code. You learn a lot about, well, how do we avoid this? And to some extent some of those learnings are not valuable because the AI is going to take care of that. It's going to be doing the mechanics of writing code and finding the bugs. So that's not the skill that's needed anymore. The skill that's needed is again the higher level, like the solution level capability of going to a customer, getting requirements, building a spec, giving it to AI, and somehow making sure that what goes back to the customer meets that.”
“And that isn't software anymore. That's something else. I don't know. And we're in that period right now where we're learning these tools, trying to make the company successful because we have to use the tools because we're afraid that if we don't find a way to... and I don't know what the end is, how it's going to stabilize, right? It's shifting month by month. It's shifting.”