Paul Sherman
April 20, 2026

Building My Own Replacement

P15 - Senior Developer, Telecommunications

A senior developer and DevOps engineer at a telecommunications company serving defense and government clients, whose company went all-in on AI tooling with Claude Code, Cursor, mandatory training, and weekly success stories, and who now watches a colleague prototype in 48 hours what took legacy teams years to build, while wondering who will accept AI-built software when the government asks for an audit trail.

Are the AI vendors going to be like crack dealers that say, 'Oh, the first taste is affordable,' and then they're going to turn the screws on us and now what? We're stuck with a codebase that no monkey in the company can grok, right?

P15: Session Summary

Background

P15 is a senior developer working in DevOps and cloud infrastructure at a large telecommunications company that serves enterprise, defense, and government clients. The company has a history of acquisitions, resulting in multiple overlapping product lines including voicemail and unified communications systems. P15's current work involves building and hardening cloud images for GCP and Azure deployments that must meet the Defense Information Systems Agency's Security Technical Implementation Guide (STIG) requirements.

P15's AI adoption followed a gradual path: ChatGPT as a Google replacement, then GitHub Copilot as "tab completion on steroids" limited to single-file context. The catalytic moment came when he dropped a log file and stack trace into Cursor and it diagnosed a long-standing race condition in 10 seconds, proposing two functionally equivalent fixes. That was when the scope of what was possible became real to him. His company has since gone all-in on AI, investing in Claude Code, Cursor, multiple models, mandatory training, elective modules across departments, and weekly forums where employees share AI success stories.

P15's session is shaped by two facts about his position. First, he's late-career and knows it, describing himself as "building my replacement" without bitterness. Second, his DevOps work sits at the boundary between what AI can do in code and what defense/government customers will accept without a traditional audit trail. He occupies the role of "monkey in the middle," running tests manually and feeding logs back to the AI because the infrastructure tools aren't connected to the AI toolchain. He sees the path to a "lights out software factory" and believes his company is heading there, but the regulatory and auditability questions for his customer base remain unresolved.

Key Findings

The "Oh Wow" Cascade

P15 describes AI adoption not as a single moment but as a series of escalating realizations, each one reframing what the tools could do. Copilot was convenient but limited. Cursor with full codebase context was a qualitative leap. The company's investment in best-in-class tools and training unlocked yet another level. And it keeps accelerating.

"The whole thing's been an 'oh wow' moment every step of improvement and it's coming so fast it's like, okay, I just got used to how it's working as a tool for me and now it's even better, how can I leverage it?"

The 10-second race condition diagnosis was the turning point: a bug no one could figure out, solved by an AI that could hold the entire codebase in context. But what followed was more significant. P15 recognized he was no longer using a productivity tool; he was watching a shift in what software development means.

The Lights Out Software Factory

P15's company is working toward what he calls Level 5 AI coding maturity: a "lights out software factory" where AI agents and sub-agents handle design, coding, and testing in concert, producing software from a spec with no human hands on the code. A senior architect demonstrated the plausibility of this when he prototyped a complete voicemail system in 48 hours with Claude Code, integrated with all existing systems. The CEO called an all-hands meeting to present it as a competitive wakeup call.

"In 48 hours with Claude Code, he had a working prototype and he spent another week polishing it and it was integrated with all existing systems. And the CEO called a special all-employee meeting to have him present it because it was a wakeup call that this is what we're going to be up against in the marketplace."

P15 accepts this trajectory but sees the recursive dependency it creates: once the codebase exceeds human comprehension, you need AI to maintain AI-generated code, and the vendors who provide that AI hold the leverage.

The Audit Trail Problem

P15's defense and government customers present a concrete version of a question most of the study's participants only address abstractly: who vouches for AI-generated work? When the STIG requires a thousand pages of security hardening and the developer's answer to "where's your audit trail?" is "I told Claude to do it," the regulatory framework has no established response.

"The government's going to come back and say, 'Where's your audit trail of your development?' And I'm going to say, 'Oh, well, I just told Claude to do it.' I don't know how that's going to fly."

This is not hypothetical for P15. His daily work involves building images, scanning them, and remediating findings that span both scriptable fixes and site-specific policy documentation. Some of that work can be automated; much of it cannot. He's the human in the loop not by choice but because the regulatory and infrastructure constraints require it.
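The split P15 describes, some findings scriptable, others requiring site-specific human documentation, can be sketched as a simple triage step. This is a hypothetical illustration, not P15's actual tooling; the finding IDs, fields, and the set of automatable categories are invented for the example.

```python
# Hypothetical sketch of the remediation triage P15 describes:
# partition STIG scan findings into scriptable fixes and
# site-specific policy items that need a human on site.
# Categories and finding IDs are illustrative, not from any real
# STIG or scanner output.

AUTOMATABLE = {"file_permissions", "kernel_param", "package_removal"}

def triage(findings):
    """Partition scan findings into (scriptable, manual_policy)."""
    scriptable, manual = [], []
    for f in findings:
        (scriptable if f["category"] in AUTOMATABLE else manual).append(f)
    return scriptable, manual

findings = [
    {"id": "V-0001", "category": "file_permissions"},
    {"id": "V-0002", "category": "site_policy"},   # e.g. password rules
    {"id": "V-0003", "category": "kernel_param"},
]

auto, manual = triage(findings)
print([f["id"] for f in auto])    # scriptable findings
print([f["id"] for f in manual])  # needs human policy documentation
```

The point of the sketch is the boundary itself: everything in the second bucket is exactly the work P15 says cannot be handed to an AI wholesale, because it ends in a site policy document rather than a script.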

"That Isn't Software Anymore"

P15 arrives at a conclusion that none of the other participants in the study have stated as plainly. If the skill that matters is going to a customer, getting requirements, building a spec, and giving it to AI, then the field formerly known as software development has become something else entirely. He can't name what it is.

"The skill that's needed is again the higher level, like the solution level capability of going to a customer, getting requirements, building a spec, giving it to AI, and somehow making sure that what goes back to the customer meets that. And that isn't software anymore. That's something else. I don't know."

He connects this to education: universities are producing software coders, but that's no longer the skill that has value. The experience that made senior developers valuable was accumulated through years of junior-level work, debugging, shipping, and learning from mistakes. When the junior work disappears, the pipeline for producing senior judgment disappears with it.

Emerging Themes

  • Organizational AI Adoption Challenges: Organizations struggling to find an effective AI path forward. Key quote: "We're using the tools because we're told to, but what does this mean from a customer's perspective? Who's going to accept it? Who's not?"
  • Apprenticeship Erosion: Concern that AI prevents junior practitioners from developing foundational skills. Key quote: "I think in the short term it's going to impact the people who just graduated. I feel bad for them because they've been told, just get a degree in software."
  • Job Security Anxiety: Fear of professional irrelevance driven by AI. Key quote: "So in essence, I'm building my replacement, which is going to be this robot. And I say that jokingly, but it's also serious."
  • Knowledge Displacement: Concern that AI erodes foundational knowledge and judgment. Key quote: "And that isn't software anymore. That's something else. I don't know."
  • Vendor Lock-in Anxiety: Concern about irreversible dependence on AI vendors. Key quote: "You need the AI to support the AI because Pandora's box has been opened."
  • Trust Calibration: Deliberate practices for evaluating AI trustworthiness. Key quote: "The volume of code is not something we can really manually inspect. Although there's some things that we have to inspect because it's going to DoD or military and government."
  • Augmentation Not Replacement: Human-in-the-loop role shaped by infrastructure constraints. Key quote: "So right now I'm the monkey in the middle. I tell it what to do in the tangible code. It does it. Now I have to run the test, take the logs from it, and feed it back."

P15's organizational AI adoption challenges evidence shows a company that is doing adoption well by most measures: top-down commitment, investment in best-in-class tools, training across all departments, forums for sharing best practices, and visible executive sponsorship. The unresolved question is what happens when this commitment collides with customers who operate under regulatory frameworks that predate AI-generated code.

"From the top down they're pushing it... once they decided we're all in and they spent the money on the tools, they spent money on training, they've created dynamic forums and everyone is sharing information about how they are using it, best practices."

"Which goes back to my earlier point about, well, what's the position of these customers about us as a provider building software that they're going to buy using these tools. Enterprise customers probably don't care but some of these federal and military ones might have a different opinion."

P15's apprenticeship erosion contribution frames the problem specifically around the software engineering career ladder. The value of a software career has historically been the accumulated experience of working through problems at every level. When AI handles the mechanics of coding and bug-finding, the entry-level rung of that ladder disappears, and with it the pipeline for developing the senior judgment that currently makes AI-assisted work viable.

"Once they get to the lights out software factory then the entry level coder is not needed anymore."

"The value of that career is the experience that you learn over time through coming up through the junior ranks and dealing with all these problems in the field or in your own code. You learn a lot about, well, how do we avoid this? And to some extent some of those learnings are not valuable because the AI is going to take care of that."

P15's job security anxiety is distinctive because it's stated without defensive framing. He describes building his own replacement as a factual observation, not a complaint. The deeper anxiety is not personal (he's near the end of his career) but systemic: the pace of change is accelerating and no one can see where it stabilizes.

"So in essence, I'm building my replacement, which is going to be this robot. And I say that jokingly, but it's also serious. And I understand that that's really what's happening."

"We're in that period right now where we're learning these tools, trying to make the company successful because we have to use the tools because we're afraid that if we don't find a way to... and I don't know what the end is, how it's going to stabilize, right? It's shifting month by month. It's shifting."

P15's knowledge displacement evidence describes a field dissolving into something unnamed. The principles of software development have been stable for 60 years. Languages changed, computers improved, methodologies evolved, but the fundamentals held. AI breaks that continuity. The skill that matters now is requirements translation and solution architecture, and P15 recognizes that this is no longer software development as he has known it.

"We take for granted our ability to think about software the way we think about it. And I have to think that education has to change because they can't be producing software coders anymore. That's not the skill that's valuable."

P15's vendor lock-in anxiety is the most vivid expression of this theme in the dataset. Where P9 framed it as a portability concern, P15 describes an irreversible dependency: once a codebase exceeds human comprehension, the organization is locked into needing AI to maintain AI-generated code, and the vendors hold the leverage. The "crack dealer" analogy is deliberate.

"And are the AI vendors going to be like crack dealers that say, 'Oh, the first taste is affordable,' and then they're going to turn the screws on us and now what? We're stuck with a codebase that no monkey in the company can grok, right? And so you're stuck. You need the AI to support the AI because Pandora's box has been opened."

P15's trust calibration operates at the organizational and regulatory level rather than the personal verification level seen in most other sessions. He trusts the tools for his own work. The unresolved question is whether defense and government customers will trust software built with these tools when the development audit trail consists of conversations with an AI.

"I can take them one by one and say, 'Hey, how do I remediate this?' And sometimes it can be done with scripting or some tweaks in the OS or whatever, but closing the loop of, 'Hey, just take this whole document that's a thousand pages and implement everything,' well, the government's going to come back and say, 'Where's your audit trail of your development?' And I'm going to say, 'Oh, well, I just told Claude to do it.' I don't know how that's going to fly."

"That's if AI does everything perfectly. But we all know it hallucinates and it's interesting because when multiple things hallucinate then you've got this chain reaction of everything going off the rails."

P15's augmentation-not-replacement evidence presents an unusual variant. Most participants who describe human-in-the-loop workflows frame it as a deliberate choice to maintain involvement. P15 frames it as a bottleneck. He wants the full automation but can't get there because the AI doesn't have access to GCP, Azure, Bamboo, and the rest of the infrastructure pipeline. He's "in the way of the velocity," a constraint rather than a curator.

"So right now I'm the monkey in the middle. I tell it what to do in the tangible code. It does it. Now I have to run the test, take the logs from it, and feed it back and say check the logs, did it work or did it not? So I'm really in the way of the velocity."
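The loop P15 is stuck in can be sketched in a few lines: run the test by hand, capture the logs, and package them into something to paste back to the AI. This is a speculative illustration of the workflow shape only; the test command, log path, and prompt format are invented, and nothing here reflects P15's actual GCP/Azure/Bamboo pipeline.

```python
# Hypothetical sketch of the "monkey in the middle" loop: the human
# runs the test, saves the logs, and hand-carries a summary back to
# the AI. All commands and paths are illustrative.
import subprocess
from pathlib import Path

def run_and_capture(test_cmd, log_path="test_run.log"):
    """Run the test, save combined output, and return a snippet to
    paste back into the AI conversation."""
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    Path(log_path).write_text(result.stdout + result.stderr)
    status = "passed" if result.returncode == 0 else "failed"
    return f"The test {status}. Check the logs:\n{result.stdout[-2000:]}"

# The human relays this snippet back to the AI by hand.
snippet = run_and_capture(["echo", "all checks passed"])
print(snippet)
```

Automating away this relay step, giving the AI direct access to the test and deployment infrastructure, is precisely what P15 means by getting out of "the way of the velocity."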

Interview Transcript

00:22:30

Paul: I'd like you to tell me the story of your first "oh wow" moment with AI. So, what was going on that made you try AI and what happened that made the light bulb turn on for you?

P15: Well, I would say my first interaction with AI was ChatGPT, just using it as more or less a Google replacement. And then as Google integrated into the search, giving you AI results at the top as a summary, I started seeing that it's more structured and kind of filters out all the random crap that normally you'd have to go through the results and find ones that look reasonable. And so that wasn't, I wouldn't say, "Oh, wow." But it was like, "Well, that's convenient." So I was kind of in that mode for a while, probably a year or so. And then at work, we started using GitHub with Copilot. And that was probably maybe 18 months ago, and it was tab completion on steroids, right? We are still writing code, but it would start to suggest entire methods and you could tab to accept and you could explore the other options if you wanted to, but most of the time it would use the context of the file and it's like, wow, this is cool.

00:23:53

P15: Like it's using all the same coding style and even naming conventions of variables. And so the context was limited though, right? Because it only knew what you were doing in that file. And I just felt this, well, that's kind of cool and it did accelerate small tasks. So if I needed to do some utility type function, it was great because it would just create boilerplate code and I don't have to think about how to do it. And I think those solutions were very similar to what you would find on maybe a high school or college quiz, right? It's like, oh, write a function to do blah. And so I did find it was useful though, but how often do you do those kinds of things? And in my work, it was like, well, we do a lot of other things and how's AI going to apply to them? So we were just kind of plodding along using it that way, just minimally, but the company was saying use it more and then at some point they went all in and said we're going to get the best-in-class tools.

00:25:11

P15: So we got Claude Code and Cursor with a whole suite of models, lots of tokens. And they basically trained all of us, sent us to training. And so we started using it with large context availability. I'd say over the last year it's evolved into vibe coding with verification. And now we're realizing, well, the volume of code is not something we can really manually inspect. Although there's some things that we have to inspect because it's going to DoD or military and government. So, we don't know what the policy is yet. It's kind of, we're in this wild frontier of, okay, we're using the tools because we're told to, but what does this mean from a customer's perspective? Who's going to accept it? Who's not? I don't think I'm in a position to really... it's not like my opinions are going to influence the company at all, but those are questions I have as I dive in head first.

00:26:32

P15: And I would say my first big "oh wow" moment was when I realized that this tool has context of all the code that you expose to it. It can chew on a large codebase and make much better informed decisions. And I thought, well, hey, there's this bug that's been in the field for a long time. We've never been able to figure it out. We never catch it in time. We don't get the logs for it. And all I had was a log file, a stack trace, and I knew what component was failing. I put it into Cursor and I literally just dropped those files in and said, "What's the problem?" or something like, I don't even know that I had to ask it. I think it figured out that I had a bug and in 10 seconds told me, "Oh, you've got a race condition between these two threads. They're both accessing this." And then it said, "You have two ways to fix it. This way or this way. They're functionally equivalent, but what would you like?" and I said option one please and done. Okay, so that was my first "oh wow" for fixing bugs and it has only gotten better because we're on Opus. We have a bunch of models that we can use but Opus 4.7 just came out. We've been using Opus 4.6 for a while and that has been just another "oh wow" moment because we're doing things where... let me back up a little bit. I'll share this video link with you. There's a video out there about the levels of AI coding maturity, I would call it. They just say what level of AI are you working at? And level five is what they call a lights out software factory. So you have AI agents and sub-agents that are all defined functions. So you could have a design one, a test one, on and on and they work in concert together to produce something for you.

00:29:01

P15: And that's kind of the goal of the company is to get to that. So in essence, I'm building my replacement, which is going to be this robot. And I say that jokingly, but it's also serious. And I understand that that's really what's happening.

P15: I see the role of a senior developer like myself or even a sales engineer, someone who can translate requirements from a customer into a spec. You convert that spec to some standard form that you're going to feed to an agent. As you were talking about before we started recording with flat file human readable. Just update that, now respin or the portions of it that need to change. You're having a conversation, not so much with the AI at that point because right now it's more conversational, like interactive. It does something, I comment on what it did and say, "Well, I don't like this part, but I do like this part," and then it adjusts on the fly in this iterative cycle. I think when you get to the lights out thing, it's like you just provide a spec and it codes it, stubs out all the test interfaces and harnesses, and tests everything and when you're done, boom, you've got what you spec'ed out.

P15: That's if AI does everything perfectly. But we all know it hallucinates and it's interesting because when multiple things hallucinate then you've got this chain reaction of everything going off the rails.

P15: So there's still room for it to go but I feel like it's accelerating because we've gone from tab completion to this in a year. Right. So that's the, I don't know, the whole thing's been an "oh wow" moment every step of improvement and it's coming so fast it's like, okay, I just got used to how it's working as a tool for me and now it's even better, how can I leverage it? And in my current tasks it's not easy to leverage because the systems I'm interfacing with, I inherited all the more or less DevOps stuff like cloud image pipeline building.

00:31:39

P15: So it would need access to GCP and Azure. It would need access to our Bamboo and the other tools that are in the infrastructure that are needed for this production so that it could do the full cycle. So right now I'm the monkey in the middle. I tell it what to do in the tangible code. It does it. Now I have to run the test, take the logs from it, and feed it back and say check the logs, did it work or did it not? So I'm really in the way of the velocity.

00:32:35

P15: My current work is to meet the security technical implementation guide from DISA, the defense industry. So they have a huge list of requirements for, we'll just call it security hardening, and there are tools out there for scanning. So I have to build an image, scan it, look at it, look at the requirements, and there's not always a mitigation or remediation that is something you could script. Sometimes it's a site policy that has to be manually done, like it could be password complexity rules or checks that have to be done manually by somebody on site. So the process spans both code and policy documentation, and that really varies from customer to customer. So it's not the best example of how to use the AI to take on a bunch of requirements. I can take them one by one and say, "Hey, how do I remediate this?" And sometimes it can be done with scripting or some tweaks in the OS or whatever, but closing the loop of, "Hey, just take this whole document that's a thousand pages and implement everything," well, the government's going to come back and say, "Where's your audit trail of your development?" And I'm going to say, "Oh, well, I just told Claude to do it." I don't know how that's going to fly.

P15: Which goes back to my earlier point about, well, what's the position of these customers about us as a provider building software that they're going to buy using these tools. Enterprise customers probably don't care but some of these federal and military ones might have a different opinion.

00:35:27

Paul: That's interesting. And I'm just going to share a personal anecdote because I think it'll be useful for this conversation. When I was at Lucent in the late 90s, we had this big push for traceability for ISO 9001 compliance. And traceability seems simple but when you get into the messy day-to-day it gets not so simple because a lot of decisions are made on the fly and we end up with either not so great or non-existent documentation and I would think that AI and our interactions with it could do a better job at traceability but I'm finding, eh, that's not the case.

P15: Yeah, well I see where the tools are starting to, even Google Meet and the transcription and the summaries, the AI summaries of what has happened, or email threads, the summarizing with AI. I think that's an attempt to boil down the salient points of this meeting and so I think it's headed that way. Whether or not it meets the requirement for an ISO audit I don't know, but ISO is, I do have a background in that as well. In fact, my dad, that's what he did at the end of his career was getting sites ready for audit. And I would go to dinner with him and the lead auditor from the external company that was going to do the audit. And I'd listen to all the politics of, well, is it really a finding?

00:36:41

P15: And trying to talk things off the list. And I think AI has the potential to help there, but yeah, it's a paradigm shift that we're experiencing here at the end of our careers. And so it's fun because it's a cool new shiny new toy to play with. And it has reinvigorated me to some extent, but my micromanager boss is killing all of that excitement because he just doesn't know how to... he's not a leader. He should not be a manager. He should not be a people manager. He'd be a fine technical leader, but even that's questionable at times.

00:37:59

Paul: That's a good jumping off point for this next question that I've got queued up, just how is your organization handling AI adoption? I want to hear more about the thrash and churn and is your organization reacting to it?

P15: From the top down they're pushing it. So, I think a lot of employees were resistant to it, including myself. Initially, I wanted to play with it, but I didn't know how best to integrate it into my daily workflow. I don't know what initiated their desire to do this. I don't know where the seed of that came from, but once they decided we're all in and they spent the money on the tools, they spent money on training, they've created dynamic forums and everyone is sharing information about how they are using it, best practices. They want everyone to be sharing what they're doing and how they're doing it. We have sharing meetings, weekly AI success stories. Here's what we did with it. We had one person, a senior architect that I know personally, he had domain knowledge of contact centers, not contact centers... email servers, sorry, voicemail systems. All the words are swirling. Maybe it's the whiskey. I don't know.

Paul: Have more.

P15: So there's multiple solutions in the company because of the history of acquisitions. And so he thought, well, what if I just start greenfield and say, I know all the requirements. I know all the features. I know everything I want. In 48 hours with Claude Code, he had a working prototype and he spent another week polishing it and it was integrated with all existing systems. And the CEO called a special all-employee meeting to have him present it because it was a wakeup call that this is what we're going to be up against in the marketplace. There are going to be companies, new upstarts, existing companies that are going to be using these tools in this way. And it doesn't matter how good your old product was, right? If you realize the power and speed of AI, then granted, we don't know what the real cost is going to be.

00:40:32

P15: And are the AI vendors going to be like crack dealers that say, "Oh, the first taste is affordable," and then they're going to turn the screws on us and now what? We're stuck with a codebase that no monkey in the company can grok, right? And so you're stuck. You need the AI to support the AI because Pandora's box has been opened and this is just one sphere of it, right? I try to push all that aside because I'm having fun with the shiny new toy and trying to figure out how to best use it.

Paul: Yeah.

P15: And the fact that they do facilitate all this interaction between the wider company tells me that they're serious about it. They're not doing it just to say they're doing it. It's not like they want to put it on their list of, "Oh yeah, we're AI enabled," right? They recognize it's a do or die kind of implication.

00:42:01

Paul: I'm thinking back about what you said about the lights out code shop and there's a certain irony between that phrase and then the concept of dark code, which refers to code that's AI generated that no human has touched. What are your thoughts about that?

P15: My thoughts are I'm fine with it, honestly. I feel like a lot of what we do in coding is reinventing the wheel many times over. The promise of reusability has never really been realized to the level that we all hoped it would. At least not in my experience. And that was the goal of reusability. You do all this low-level coding and then you never have to do it again, but that's wrong. Your requirements change and you're doing the low-level coding over and over and over. I see this as it's going to free us from that and allow us to be higher level thinkers. Solution-based thinking. What we build isn't limited by the underlying programming language. And AI is going to evolve fast enough to keep up with what we ask it to do in regards to building software products.

00:43:14

P15: And I see it as, there was a video, I can't remember if it was someone from OpenAI, they were talking about some analogy of typesetters back in the, I don't know when that was, the 1400s. I don't know. I'm just pulling a date out. But that video is out there and it basically says this paradigm shift, it's a seismic shift. And it's kind of like when the scribes, they used to have, and they were the only ones that knew how to write and read. Then typeset came out and now people were able to gain the power of language and they didn't have to worry about how to print books and it enabled them to just think about the higher level thing, right? They didn't have to think about the mechanics of how do I get this on paper and out to a lot of people.

00:44:20

P15: And I'm paraphrasing. I don't remember the exact parallels that were drawn in that analogy, but it kind of made sense. There's so much mundane boilerplate code that we write that just holds us back from being able to have real velocity in building products. And that's where I think the example of the voicemail service that was prototyped in two days that does more than all the others that we have combined. Like that was mind-blowing, everyone was mind-blown. I mean the CEO on down, they were just like everyone needs to know about this. I have a friend that works for a small company. They do specialized software consulting and medical devices and embedded systems. I talked to him about where are they on the AI maturity scale and he said yeah, we don't use that... complaining about AI. I asked him what model are you using and they were using Gemini. I'm like, well, Gemini is not really a coding model and so I guess that's another point to it. I feel fortunate that our company invested in the best of what we need for what we do, and I know they've gotten other models and other AI tools for the other areas like marketing. They didn't apply their AI mandates to just R&D, it's across the board. So, we had to watch, we went through several AI training sessions that were mandatory and then we had electives that we could take based on our interest, and there were modules on how to produce videos for marketing using AI, generated video, how to build PowerPoint presentations using AI, and not necessarily relying on it as it's going to do everything but it does the big chunk of getting it in shape and then you refine it. And much like we do with code where it's iterative in nature, with the goal of accelerating your output and hopefully improving the quality. So yeah, that's why I was excited to talk to you because I feel like, I didn't know exactly what topics you were going to cover, but I do have some feelings about it.

00:47:47

Paul: Let's talk a little bit about two things. One is your concerns, if any, about losing some basic skills. And also your concerns, if any, about what happens to the next generation, people who do your job nominally.

P15: Yeah, I think about this every day. That's why I said earlier that I feel fortunate that this is happening at the end of my career because, while I can see there's room for people to use the tools, once they get to the lights out software factory then the entry level coder is not needed anymore.

00:49:43

P15: And what do you lose with that? Well, we take for granted our ability to think about software the way we think about it. And I have to think that education has to change because they can't be producing software coders anymore. That's not the skill that's valuable. They need to be thinking higher level. I think that's going to be the next shift. The growing pain of, oh well, this field that has existed for the last 50 years, more than that, I don't know, software has been around longer than I've been alive, so 60 years let's say. It's been more or less the same. Yes, the capability of the computers has improved, the languages have changed, but the principles really haven't. The development methodologies have evolved.

P15: Maybe for the worse because I don't like Agile. I miss the old waterfall days where you had a design on paper before you start coding. Actually, iterative development was fun. Prototyping and iterating on that, I think, was my favorite method of work. But I think in the short term it's going to impact the people who just graduated. I feel bad for them because they've been told, just get a degree in software, computer science or software development or whatever. And now that's not the skill that's going to have any value because the value of that career is the experience that you learn over time through coming up through the junior ranks and dealing with all these problems in the field or in your own code. You learn a lot about, well, how do we avoid this? And to some extent some of those learnings are not valuable because the AI is going to take care of that. It's going to be doing the mechanics of writing code and finding the bugs. So that's not the skill that's needed anymore. The skill that's needed is again the higher level, like the solution level capability of going to a customer, getting requirements, building a spec, giving it to AI, and somehow making sure that what goes back to the customer meets that.

00:52:28

P15: And that isn't software anymore. That's something else. I don't know. And we're in that period right now where we're learning these tools, trying to make the company successful because we have to use the tools because we're afraid that if we don't find a way to... and I don't know what the end is, how it's going to stabilize, right? It's shifting month by month. It's shifting.

AI Use Disclosure

I used AI to analyze the data collected via interviews and surveys. How?

  • I took notes after each session.
  • I fed those notes to several AIs, along with the moderator guide, project proposal, session transcript, the participant's survey responses, and a codebook of tags and themes I've been iterating as I collect data.
  • I prompted each to write a background, findings, and emerging themes section.
  • Then I iterated on each AI's draft, challenging the AI where appropriate and removing what I'm euphemistically calling "hallucinatory content" :-).
  • I collected each AI's drafts, added them to the project I've set up in Claude Cowork, and prompted it to draft the background, findings, and emerging themes section, pushing back as appropriate.
  • Then I edited the content, because "human in the loop" means "I have final edit." At least to me it does.
  • I then published each session writeup.

There's a bit more to it, but I'm trying to keep this short. Reach out if you want to talk about my AI-assisted workflow, which I'm still evolving as I go.