Paul Sherman
April 20, 2026

Fighting Fire with Fire

P14 - Head of Design, Healthcare Software

The sole designer at a startup building an AI-powered application platform for a regulated industry. He taught himself Claude Code to produce front-end prototypes after realizing his engineering counterpart's AI-accelerated pace had left him two or three steps behind, and he now contends with subject matter experts vibe coding interfaces that look finished on the surface but lack design system alignment, documented intent, or user-centered reasoning.

I don't want to know what Claude thinks about this. I just want to know what you think. Like, here's why this doesn't make sense. Tell me what you think.

P14: Survey Data and Session Summary

Survey Responses

Age: 35-44
Education: Master's degree
Current role / position level: Director / VP
Job title: Head of Design
Years of professional experience: 16-25 years
Organization description: We're building an AI-powered app creation platform for healthcare orgs.
Industry: Health care (Medical, Dental, Mental Health & Vision Services)
Individual AI tools used: Text generation (e.g., creating documents, emails, summaries), Media creation (images, audio, video), Code generation and completion
Organizational AI tools deployed: Internal search and knowledge summarization, Code generation and developer tools
AI adoption involvement: Contributed to technical design, requirements gathering, or implementation; Provided subject matter expertise, requirements, or end-user feedback
Biggest work win with AI: Being able to keep up with the speed engineers can work right now by also creating full fidelity prototypes using Claude Code.
Biggest disappointment with AI: Any sort of refined or close to final design output. Every time I've attempted to speed up the process of designing a new element or feature (like different ideas for a nav bar design), the outputs from AI that early had some pebbles of insight, but very much require additional work and refinement.
Organization's biggest AI success: Speeding up our engineering backlog and being able to see what was just a roadmap idea come to life in a matter of months.
Organization's biggest AI challenge: Employees not questioning the use of AI for strategic direction or client feedback.

Background

P14 is the Head of Design at a ~10-person startup in a regulated industry. The company is building an AI-powered platform that generates modular applications for their customers, meaning AI is not just a tool used within the organization but the core of the product itself. P14 is the sole designer, covering marketing, UX, and product design. He holds a master's degree and has 16-25 years of professional experience.

P14's personal AI use has been minimal: a road trip planned via ChatGPT, some unsuccessful experiments with AI-generated illustrations and video. His professional AI adoption, by contrast, was catalyzed by a specific moment of competitive pressure. Late last year, one of the startup's engineers began using Claude Code and made rapid progress connecting the company's LLM instance to its application-building platform. The resulting acceleration left P14, in his words, "not one step behind but two or three steps behind." He responded by spending a couple of days deep in Claude Code, successfully rebuilding a front-end experience, and describes this as "fighting fire with fire."

The session's distinctive contribution is P14's detailed account of navigating a workplace where AI has democratized the production of design-adjacent artifacts. Subject matter experts are vibe coding interfaces, engineers are building at AI-accelerated speed, and the designer is left evaluating outputs that look finished on the surface but lack the intent documentation, design system alignment, and user-centered rationale that would normally accompany design work.

Key Findings

The Designer's Competitive Pressure

P14 describes a specific form of professional anxiety that differs from the general job-security fears expressed by other participants. His concern is not that AI will replace designers, but that AI has empowered non-designers to produce design-adjacent work, compressing the timeline in which the designer can contribute. When his engineer began building with Claude Code at a pace that outstripped traditional design collaboration, P14's response was not to resist but to adopt the same tools.

"I think late last year, as we've been struggling through putting this [application] all together, one of our engineers started leaning more into Claude Code at the same time that some of the big advances happened and made a ton of progress and was able to hook up our own instance of an LLM to start creating those applications and it actually worked. We had the building blocks figured out and it was putting it together in a way that was like, oh, we thought that this would come at some point and now it's come and now we have to catch up and work around it and try to figure out. And for me as a designer it was like all of a sudden I'm maybe not one step behind but two or three steps behind."

The "fighting fire with fire" framing recurs throughout the session, and P14 is explicit that this adoption was compulsory rather than enthusiastic: "It feels like I don't have a choice. Like I don't have a choice but to fight fire with fire because that's what's going on, to sort of keep up and not be left behind, and that doesn't feel great."

Vibe Coding and the Governance Gap

The session's most distinctive finding is P14's account of subject matter experts vibe coding functional-looking interfaces and presenting them to the team as if they are complete. The problem is not that these artifacts are useless; P14 acknowledges they sometimes contain good ideas. The problem is that they bypass the design process entirely, arriving without documentation of intent, without alignment to the established design system, and without consideration of whether end users will be able to make sense of them.

"But then you start to peel away the surface and there's so much that doesn't make sense in terms of what we're doing and the layout in addition to just like maybe the design system we're using. It doesn't map to the design system we've already established."

P14 has tried to address this by proposing a documentation requirement for vibe-coded interfaces, asking that anyone who produces one also document what they were thinking and what they hoped to accomplish. This proposal has been unsuccessful. The fallback is meetings where P14 has to ask 20 questions to reverse-engineer the intent from the artifact.

"This is something I sort of have unsuccessfully proposed which is that we do a better job of documenting our intent if anybody's going to be vibe coding interfaces and put some structure to that so that we can say, okay, so and so made an example of this application, what were you thinking, what were you hoping to accomplish."

AI-Generated Concepts Gaining Momentum Before Review

P14 describes a pattern where AI-generated product concepts move through implementation before anyone with design judgment has evaluated them. An SME uses AI to propose a new building block for the platform's modular library, an engineer picks it up and begins implementing, and by the time P14 encounters it, the concept has organizational momentum regardless of whether it serves users.

"I think sometimes I am seeing the result of that work with AI a few steps down the chain and I have to question whether that was a good idea. So maybe the AI proposed a new structure to how our product works and I disagree with it because it doesn't take into the context of whether an end user will be able to make sense of it."

This is compounded by the product's own architecture: because the platform uses AI generatively to produce applications, the output could vary between clients given the same guidance. Only recently did the team re-engineer this so that AI output is constrained by a defined design system rather than being purely generative.

"I Don't Want to Know What Claude Thinks"

On the topic of AI slop in workplace communication, P14 is one of the study's more direct participants. When colleagues respond to product questions by sending two pages of unedited AI output instead of their own thinking, P14 has confronted it directly.

"I don't want to know what Claude thinks about this. I just want to know what you think. Like, here's why this doesn't make sense. Tell me what you think."

The social consequence has been silence. But P14 notes that visible failures from letting AI "move too quickly" have begun to shift the norm, making colleagues more cautious about substituting AI output for their own judgment.

Emerging Themes

Trust Calibration - Deliberate practices for evaluating AI trustworthiness. Key quote: "...it gave him a plausible answer that ended up in a client email that was wrong. And that was not good and that had to be...a step back moment for the company to say please be careful and please validate everything."

Organizational AI Adoption Challenges - Organizations struggling to find an effective AI path forward. Key quote: "Everybody needs to be using the cloud and everybody gets a subscription. We're going to do this with the people we have."

AI Slop Detection - Recognizing low-effort AI-generated content and its social consequences. Key quote: "I don't want to know what Claude thinks about this. I just want to know what you think."

Disclosure Norms - Emerging standards about when to attribute AI contributions. Key quote: "I feel like we're all using it so much. No, I think we all know that we're all using it so much."

Apprenticeship Erosion - Concern that AI prevents junior practitioners from developing foundational skills. Key quote: "You don't get that judgment and that experience without doing the work and being hands-on in it in a way."

Knowledge Displacement - Concern that AI erodes foundational knowledge and judgment. Key quote: "Maybe the AI proposed a new structure to how our product works and I disagree with it because it doesn't take into the context of whether an end user will be able to make sense of it."

Job Security Anxiety - Fear of professional irrelevance driven by AI-accelerated peers. Key quote: "It feels like I don't have a choice. Like I don't have a choice but to fight fire with fire because that's what's going on, to sort of keep up and not be left behind."

Augmentation Not Replacement - Using AI to enhance rather than offload tasks. Key quote: "Maybe 50% of the output worked and 50% didn't. So, I say, 'Oh, that's a good idea. We'll keep that, but then change these five things.'"

Vibe Code Governance - The challenge of evaluating AI-generated artifacts from non-designers. Key quote: "You start to peel away the surface and there's so much that doesn't make sense in terms of what we're doing and the layout."

P14's trust calibration evidence centers on a specific organizational incident: a colleague used an uncontextualized Claude instance (rather than the company's product-context-aware instance) and sent a plausible but wrong answer to a client. This became a company-wide calibration moment. P14's personal skepticism has protected him, but the incident illustrates how trust failures in a startup compound quickly because there are fewer review layers.

"I think one of our salespeople or product people was responding to a client and used a different instance of Claude and it gave him a plausible answer that ended up in a client email that was wrong. And that was not good and that had to be, that was sort of like a step back moment for the company to say please be careful and please validate everything you're seeing coming out of the LLMs."

P14's contribution to the organizational AI adoption challenges theme shows the startup variant: the problem is not bureaucratic inertia but speed without review. AI-generated concepts gain momentum through the implementation pipeline before critical evaluation happens. The failed attempt to institute documentation for vibe-coded work is a concrete example of organizational process not keeping pace with AI-enabled velocity.

"We'll have a sort of analyst or subject matter expert in [the industry we serve] who's very technical who will start to build out a concept using AI or in the context of what we're doing and it will get maybe two or three steps before anybody has questioned it and it'll go through maybe our engineer too and start being implemented before we've been able to take a step back and say 'maybe that wasn't a good idea.'"

P14's disclosure norms evidence presents an interesting edge case: a team where universal AI use has made disclosure moot. The norm that's emerging is not about whether to disclose but about the quality bar for AI-assisted contributions. Visible failures have done more to shape behavior than any explicit policy.

"I feel like we're all using it so much. No, I think we all know that we're all using it so much."

P14's response to AI slop is among the most confrontational in the dataset. He has directly told colleagues he does not want to see what Claude thinks; he wants to know what they think. The social fallback is silence, but visible failures are shifting the norm.

"There have been maybe a couple times where I've actually called out like, I don't want to know what Claude thinks about this. I just want to know what you think. Like, here's why this doesn't make sense. Tell me what you think."

P14's apprenticeship erosion contribution brings a nuanced framing. He is not flatly pessimistic but genuinely uncertain about which skills remain essential. The "managing a more junior role except faster" metaphor is distinctive: if experienced practitioners serve as the judgment layer over AI output, the pipeline for developing that judgment in the next generation is unclear. His wife's experience teaching English at a university provides a cross-domain parallel, where the instructor retreated from AI integration to handwriting because she could no longer trust that submitted work represented actual learning.

"When AI can work right now, I think this is true for engineering but for design because that's what I know, it's because you have somebody with the judgment to know when the output is working or not working or quality or not quality and you can adjust from there, but that comes from experience. And it's almost like managing a type of more junior role except faster and so your brain has to move faster."

P14's knowledge displacement evidence is tightly connected to the organizational adoption challenges but approaches from a different angle. The concern is not about process (no review gates) but about substance: AI-proposed architectural decisions are displacing the kind of user-centered critical evaluation that a designer would normally provide.

"I think sometimes I am seeing the result of that work with AI a few steps down the chain and I have to question whether that was a good idea. So maybe the AI proposed a new structure to how our product works and I disagree with it because it doesn't take into the context of whether an end user will be able to make sense of it."

P14's job security anxiety is distinctive because it is about professional relevance rather than employment. The threat is not replacement by AI but displacement by colleagues who can now produce design-adjacent work without design training. His adoption of Claude Code was compulsory, driven by the need to remain a relevant contributor rather than by enthusiasm for the tools.

"It also feels like in general there's so much anxiety around it. And it feels like I don't have a choice. Like I don't have a choice but to fight fire with fire because that's what's going on, to sort of keep up and not be left behind, and that doesn't feel great."

P14's augmentation-not-replacement stance is clear: he uses AI for implementation acceleration, not design judgment. The 50/50 framing (half the output works, half doesn't, curate accordingly) positions AI as raw material to be shaped rather than finished product to be accepted.

"There would be times when I'd give maybe a general prompt and the output, maybe 50% of the output worked and 50% didn't. So, I say, 'Oh, that's a good idea. We'll keep that, but then change these five things.' And it's just kind of like an iterative building process."

P14 introduced vibe code governance as a new theme. The pattern is specific: non-designers producing visually plausible artifacts via AI that gain organizational momentum despite lacking design system alignment, documented intent, or user-centered rationale. P14's failed attempt to institute documentation is a concrete governance gap, not just a quality complaint.

"But then you start to peel away the surface and there's so much that doesn't make sense in terms of what we're doing and the layout in addition to just like maybe the design system we're using. It doesn't map to the design system we've already established."

Interview Transcript

00:00:00

Paul: So, to start off, I'd like you to tell me the story of your first "oh wow" moment with AI. So, what was going on that made you try AI and what happened that made the light bulb turn on for you?

P14: In terms of my day-to-day work and probably personally too, I've been sort of dabbling with it over the past couple years, just sort of checking in on it to see where it's at. I think maybe the first "wow" moment for me personally is using ChatGPT on just asking random questions, and I think the only time I've really used it was for a family road trip. I was putting in destinations and recommending like if we wanted to get from A to B, where should we stop? How long should we take? I was trying to manage a road trip with kids. That was like two years ago.

00:00:58

P14: But I don't use it very much in my day-to-day life. That was sort of like a way for me to test it. In terms of my work, I've been sort of checking in on it just to be aware of where it's at. I can think of some failures like we had to do, I'm a lone designer at a startup and so I cover marketing as well as UX and product design and we had to do some initial branding work and I needed some illustrations and we ultimately worked with an illustrator which was the right decision but I was testing sort of where AI was at in terms of like can AI produce any sort of illustration that would be helpful for us. And I've had to try to put some videos together, too. And the answer was no. Not without a lot of headaches and the output was just so choppy and weird that I abandoned it.

00:01:53

Paul: Have you had any big wins with AI at work or has your organization had any big wins with AI at work?

P14: Yeah. I mean, I think what we're trying to do, which we've been trying to do for the past couple years, is really ambitious, which is to create healthcare applications with a sort of modular application building tool and to create the whole backend so that their applications are possible to be used within the healthcare context. And I think late last year, as we've been struggling through putting this [application] all together, one of our engineers started leaning more into Claude Code at the same time that some of the big advances happened and made a ton of progress and was able to hook up our own instance of an LLM to start creating those applications and it actually worked. We had the building blocks figured out and it was putting it together in a way that was like, oh, we thought that this would come at some point and now it's come and now we have to catch up and work around it and try to figure out. And for me as a designer it was like all of a sudden

00:03:07

P14: I'm maybe not one step behind but two or three steps behind. There was one instance where we've been discussing the experience of using conversational AI in our tool and what the engineer had done was working but it was kind of overwhelming in terms of everything that was a part of the UX. And so we were trying to find time for me to collaborate with him because he's been building with AI. And that was I think the first moment I'm like okay I'm just going to see what I can do with Claude Code and start doing it. And I kind of just went in deep for a couple days and was able to rebuild it with Claude Code successfully to, just the front end, but to illustrate the experience that we wanted and it felt like okay now I can kind of play, now it's kind of like fighting fire with fire like I can compete a little bit in that process. And so that was really impressive to me for the first time.

00:04:06

P14: And so I think since then I don't think I've figured out any step process. Sometimes I'm in Figma because that's where we were before. Sometimes I'm using AI, but I've been more and more like we had another instance over the past couple months where we had to redesign and rebuild something in a short amount of time. And rather than me, I mean it would normally have taken probably at a minimum of a couple weeks, but I knew we had less time. And so I tried to figure out how to again use Claude Code to make the tweaks that I needed to tweak in a way that I could communicate those with our engineering team.

Paul: How did that work out?

P14: We're in the middle of it. So, I think I was able to build the front end the way I think it needed to be built and to make some style and UX changes on the fly, like some easy wins basically for a pretty legacy healthcare application that we were working with from a client.

00:05:13

P14: Translating that to what my engineer is doing has been interesting and harder though because of how he's building. So and I think that's probably, we've been so underwater that that's probably a process we need to figure out even though I can create the front end experience. It's not necessarily that he can just grab it and translate it yet. Part of that is because we're using AI and how we're actually building things. And so there has to be a structure for our AI building in the tool to be able to output an app. Because our platform helps build tools using AI. It's not just our platform that I'm working on. It's the output of our platform which is being generated by an AI also. So there's an extra step in there that makes it a little harder to translate my work.

00:06:22

Paul: How is your organization handling AI adoption? Is it all top down? Is it all bottom up? Is it a mix? Are there people figuring it out as they go on their own?

P14: I think because we're a startup and we're really like 10 people, day-to-day and we're dealing with AI ourselves, it's been mostly bottom up. I think at some point, well at some point it was a little bit top down. Early this year we sort of refocused our efforts and knowing what AI could do and what our engineer was able to do, we said, "Okay, now we want to be much more ambitious and work through all this backlog that we thought was going to take months in a shorter amount of time." And so everybody needs to be using the cloud and everybody gets a subscription. We're going to do this with the people we have. And so that was one instance of it being top down, but everyone was already dabbling with it before then.

00:07:28

Paul: Are you seeing any differences across functional areas in AI adoption or type of AI use?

P14: I think the type of use. Yeah. I mean I think we have subject matter experts who are using it to put together structures of blueprints for domain areas that we will work in because we're sort of trying to put together the building blocks people will use to build applications. And so that use of it, but that actually bleeds into the implementation of it too because they're going beyond just the documentation. And then our engineer's using it to build on a day-to-day basis. And then I'm using it too. I wouldn't say I'm using it to design. I'm using it to help me almost refine front end structures, front end implementation of the UI at the moment. Here and there maybe I'm using it to generate some ideas in terms of design and I've seen that work kind of in a hit and miss way.

00:08:57

Paul: I'm wondering what some of the misses were and then the followup is how are you using it more for refinement rather than design ideation?

P14: Everything we're doing is building the plane as it flies. And I have work in Figma which hasn't been fully translated into our product. And so that work is still there to actually do those refinements, and even to truly implement the designs from Figma into code while we're still building out new features and whatnot. And so sometimes I'm using it to do the work that a front-end engineer might do to clean up our implementation. And then also work with the engineer to start formalizing the design system with it.

00:10:07

P14: So we are using it a little bit for that too. But it hasn't gone back into Figma yet. I feel like that's still a work in progress. And then there are times like when I was taking that LLM kind of experience, the chat-based interface project, and I spent just a couple days just working on that and I was really designing as I was building it because I had my engineer's work to start from so I was refining their work, I was cleaning up what they had done. But there would be times when I'd give maybe a general prompt and the output, maybe 50% of the output worked and 50% didn't. So, I say, "Oh, that's a good idea. We'll keep that, but then change these five things." And it's just kind of like an iterative building process. There have been other times where I'm like, "Okay, I'm going to try and use Figma Make because I haven't used it very much" and I'll give it an idea that I'm working on and the output just took a while and it's not helpful at all.

00:11:13

P14: I was like, "Okay, I'm not going to touch that for a while." I think I had one when using Figma Make and it was such a specific little, I was trying to figure out how to use filter chips in a certain part of our interface and it did that pretty well and I was able to pull some inspiration from that. But generally I'm not using it that early in my design process at all. It's much more the formalizing and cleaning up of the idea that's already there.

Paul: Everyone's experienced some sort of either hallucination or just a wrong answer from AI. Can you tell me about a time you trusted AI and shouldn't have and what happened and how did you decide what to do then and how to trust AI?

P14: I feel like, I don't know if I have gotten, I've been, I feel like my stance has been kind of skeptical from the beginning and so I don't think I've, I mean I've tried some things that haven't worked but I haven't got in trouble for it and maybe that's because I'm a designer, but I'm trying to think if there's something that I've missed in some of the work I've been doing.

00:12:42

P14: I think because my work has been more just front end, I haven't gotten in trouble yet with anything. There's definitely one instance in our company, we have a siloed off instance of Claude that has all of our product context in it that, because we're on Azure, we have to use and I'm on Mac but I have to use a virtual desktop to use, and it has all of the context for our product and so you can ask it questions and it will give pretty good technical answers. And I think one of our salespeople or product people was responding to a client and used a different instance of Claude and it gave him a plausible answer that ended up in a client email that was wrong. And that was not good and that had to be, that was sort of like a step back moment for the company to say please be careful and please validate everything you're seeing coming out of the LLMs.

P14: The other thing I've seen us do, which is hard and I don't think it's completely something that we figured out yet, is like we'll have a sort of analyst or subject matter expert in [the industry we serve] who's very technical who will start to build out a concept using AI or in the context of what we're doing and it will get maybe two or three steps before anybody has questioned it and it'll go through maybe our engineer too and start being implemented before we've been able to take a step back and say "maybe that wasn't a good idea."

00:14:18

Paul: Can you tell me more about that? I've got some thoughts about it, but I want to hear more of your thoughts around that.

P14: Part of it has to do with what we're building because part of our assumption in terms of building applications within healthcare is that we need to build the building blocks correctly so that the applications can be assembled and we're not just building from scratch. And so it's those modular components in this library of healthcare building blocks that we have a backlog of. I wouldn't say hundreds, but maybe between 50 and 100 of these building blocks that we want to build. And we have a sort of schema for the types of building blocks that need to go into an application in healthcare. Some are data processing kind of building blocks.

00:15:27

P14: Some are front end, sort of like a dashboard kind of analysis view. And if we come to a new client or a use case that doesn't fit within that schema of building blocks, we sort of have to take a step back and reconsider. Do we need a new one or does it fit? Do we have to broaden the definition of one of them? And definitely a couple people are using AI to try and figure that part of it out. So propose a new building block that fits within our system. I think sometimes I am seeing the result of that work with AI a few steps down the chain and I have to question whether that was a good idea. So maybe the AI proposed a new structure to how our product works and I disagree with it because it doesn't take into the context of whether an end user will be able to make sense of it.

00:16:32

Paul: That's interesting that you said that sometimes a SME will come up with a concept and then I assume they're coding it to illustrate their idea?

P14: Yeah, that is also happening and I haven't talked about that yet. I mean this is almost like very detailed product documentation but it's also that we have sort of a process of them going through AI to come up with that and then come up with the technical details to start implementing it. Also they are vibe coding some of those interfaces and sharing them with me and the team, and that's been its own interesting challenge.

Paul: How so? Tell me more about that.

P14: So I think there are a few aspects of it. In [our customers' industry] I think the bar in terms of end UI design is not always terribly high and so when someone vibe codes a design, puts it out there like "oh we're done, look, so and so did it, it's there," and I start looking at it and there are some things on the surface that are fine and they're working and maybe there are a few good ideas that I haven't thought of too.

00:17:50

P14: But then you start to peel away the surface and there's so much that doesn't make sense in terms of what we're doing and the layout in addition to just like maybe the design system we're using. It doesn't map to the design system we've already established. And so there's all those aspects of it, but then there's even translating the domain and the intent into the interface that I never, like sometimes I'll just, in the past maybe I'll get handed one of these vibe-coded interfaces without that context and I'll have to go back, either I'll have to do my best to extract that intent out of the interface or I'll have to go back and ask 20 questions just to figure out what was going on. And so this is something I sort of have unsuccessfully proposed which is that we do a better job of documenting our intent if anybody's going to be vibe coding interfaces and put some structure to that so that we can say, okay, so and so made an example of this application, what were you thinking, what were you hoping to accomplish, and with the idea that maybe if that was documented we could assess it together and see whether it was working.

00:19:00

P14: But I would say that any sort of documentation in that process has been unsuccessful so far. It's been more like, okay, you did this, now we have to meet to walk through what you were thinking. And was this intentional or was that intentional?

Paul: I've worked in telecom among other fields and that's a SME-heavy environment as well.

Paul: And I'm thinking back to when I was in telecom 20-something years ago. We would get sort of the same thing, where a really knowledgeable subject matter expert would come up with a way to tackle a problem they're familiar with, they'd propose it, and the next thing you know everyone's running around reacting to something that came from a SME. It takes on its own weight.

Paul: And it sounds to me like this is that same problem but with the additional complexity of having to deal with someone's really cool looking vibe-coded application.

00:20:13

P14: For sure. I mean, I think part of the issue for us is that we're just not terribly mature yet. We're still building this idea. At this point our platform experience is in a pretty good place and pretty intentional, but the platform produces these healthcare domain applications, and those have their own design system, which is separate too. Those are the ones that have been hard to pin down, because we were actually using AI in a generative way to build those, knowing that that would be a problem: the AI is being given maybe the same guidance, but the output might be slightly different each time, or between different clients exporting the same app, basically.

00:21:24

P14: We sort of raised that flag and had to move on, because we knew it was just how we were building it. Then recently we came across a new scenario, a patient-facing example, and we had to take a step back. Our engineer had to basically re-engineer how that was working so that it wasn't freely generative anymore, but only generative based on a design system that was already defined, which is how my vision of it already was, but we weren't there yet. So now that we have that structure in place, there's work for me to go back and make sure that the design system it's referencing makes sense. But at least that's open to us now. And that's more context into why a vibe-coded app, when you don't have that design system in place, has even more weight, because there's nothing to ground it.

Paul: Are you seeing any norms or unwritten rules forming specifically in your workplace about when you disclose that AI helped you with something or not so much?

00:22:39

P14: I feel like we're all using it so much. No, I think we all know that we're all using it so much. In the context of our day-to-day work with these 10 people, I will call out that I probably use it less for my day-to-day thinking than other people do. So if we're having a product kind of conversation remotely, somebody might respond to my question with what I know is an AI output. Rather than sending me a few sentences, they'll send me two pages' worth of AI output of a concept, and I'll have to read through it and see if it makes sense.

Paul: When people do that, especially if it feels like a low effort, phone it in, do it once sort of activity, how does that change your opinion on people who take that approach? How do you manage through that?

00:23:51

P14: It's frustrating. Yeah, I think it's frustrating because I think some people have just been more trusting of it. And yeah, kind of phoning it in. There have been maybe a couple times where I've actually called out like, I don't want to know what Claude thinks about this. I just want to know what you think. Like, here's why this doesn't make sense. Tell me what you think.

Paul: How does that go over when you say that?

P14: I think I've received silence as a response to that before. But because we've had some more visible failures from letting AI move too quickly, that's been happening less. We're more aware of where it can fail if we don't watch it in our process.

00:25:03

Paul: How does this increasing presence of AI in the world, in work and personal lives, how does that make you feel?

P14: There's an aspect of it that feels very empowering when I'm trying to build out an idea quickly. There have been a couple times where I'm building something in Claude Code and it's felt like having an iterative design process in actual code, which is really cool. I used to work in Flash a long, long time ago, and when we were doing work in Flash, Flash was the output, because it would be embedded into a website. You were building what would be the final product, which felt really gratifying, and there's an aspect of that that I appreciate. But it also feels like in general there's so much anxiety around it. And it feels like I don't have a choice, like I don't have a choice but to fight fire with fire, because that's what's going on, to sort of keep up and not be left behind. And that doesn't feel great.

Paul: What about the next generation of people entering the field? It could be design specifically or tech in general who've never done the work without AI. What concerns you or excites you about anything?

P14: Yeah. I mean, I think that when AI can work right now, and I think this is true for engineering but I'll speak to design because that's what I know, it's because you have somebody with the judgment to know when the output is working or not, quality or not, and you can adjust from there. But that comes from experience. It's almost like managing a kind of more junior role, except faster, so your brain has to move faster. And so that is a really good question, because you don't get that judgment and that experience without doing the work and being hands-on in it.

00:27:27

P14: So there are certain things that I don't know if people will need to pick up in the future, like some of the detailed UI work where the details were tedious and took a long time in the past and now are so automatic, or even responsive design patterns. That was such a tedious thing before and it's so much easier now. Will there be a need for people to learn that? That's a good question. I don't know. But there's another aspect of designing something to a certain context or problem that will be really important. So how do you train people to do that? It will be interesting. My wife actually teaches English as a part-time lecturer at a university, and she's gone from trying to have students use AI in really specific ways to, in her current class, having them write everything by hand in the context of the classroom. She can't trust anything anymore. And it's going to be extremely painful for them, but the idea is that they're actually learning and doing the work.

00:28:32

Paul: Is there anything that you think I should ask people when I do these interviews that I haven't asked you?

P14: I think the way people are thinking about their careers right now would be interesting. How are people adjusting sort of their day-to-day and future plans and what are they doing to prepare? I think that would be interesting to know.

AI Use Disclosure

I used AI to analyze the data collected via interviews and surveys. How?

  • I took notes after each session.
  • I fed those notes to several AIs, along with the moderator guide, project proposal, session transcript, the participant's survey responses, and a codebook of tags and themes I've been iterating as I collect data.
  • I prompted each to write a background, findings, and emerging themes section.
  • Then I iterated on each AI's draft, challenging the AI where appropriate and removing what I'm euphemistically calling "hallucinatory content" :-).
  • I collected each AI's drafts, added them to the project I've set up in Claude Cowork, and prompted it to draft the background, findings, and emerging themes section, pushing back as appropriate.
  • Then I edited the content, because "human in the loop" means "I have final edit." At least to me it does.
  • I then published each session writeup.

There's a bit more to it, but I'm trying to keep this short. Reach out if you want to talk about my AI-assisted workflow, which I'm still evolving as I go.