P11: Survey Data and Session Summary
Survey Responses
No pre-interview survey response was received for this participant.
Background
P11 is a CTE (Career and Technical Education) program manager in a large urban school district. She transitioned into administration in June 2025 after working as a career technical education teacher, where she first encountered AI through a professional development session on Magic School AI. She is currently pursuing an education doctorate, with her dissertation focused on the evolution of her leadership skills in conjunction with generative AI. She runs a freelance design practice on the side.
P11's AI adoption story begins with skepticism toward ChatGPT ("Yeah, whatever. That's kind of not going to be a thing") and a conversion moment during a structured PD session that showed her how to use AI for unpacking educational standards and building aligned assessments. She has since settled on Google Gemini as her primary tool and Perplexity for scholarly research, abandoning ChatGPT entirely. Her district officially approves only Copilot and Canva AI, but P11 routinely switches to the guest network to access her preferred tools.
What makes P11 distinctive in this study is the combination of her position as someone actively pushing AI usage guidelines for faculty and staff while simultaneously working around the institutional restrictions herself. She also brings a perspective grounded in serving a majority-minority student population, which shapes her concerns about AI bias in ways that are unique among participants so far.
Key Findings
The Guest Network Workaround: Policy Gaps in Practice
P11's district restricts AI access on its network to Copilot and Canva AI. Her response is pragmatic: she switches to the guest network, where she can access Gemini and Perplexity without restriction. The workaround is described casually, without any indication of conflict or concern about policy compliance. This matters because P11 is simultaneously the person in her organization who is pushing to establish formal AI usage guidelines for faculty and staff. She occupies both roles, norm-setter and norm-circumventer, without apparent contradiction.
"So there's Copilot. We can use Copilot and we can use Canva AI. But I learned a thing. So, if I get off of the school district's network and use the guest network, I can access any AI I want."
AI Everywhere: Four Domains, One User
P11 uses AI across four distinct life domains with fluid transitions between them. At work, she generates professional development materials and is drafting AI usage guidelines. In her doctoral program, she uses AI to review her writing for clarity and grammatical errors. In her personal life, she has replaced her meal planning workflow entirely with Perplexity. And in her freelance design practice, she uses Gemini to write courses in her clients' voices by referencing their websites and podcasts.
"Now, I just say, 'Perplexity, I am wanting a high protein diet that's low cost. Tell me what to buy at the grocery store. You have all my health data. What should I be eating?' It creates the whole meal plan with the recipes and gives me the shopping list in less than a minute and I like that."
Bias as the Core Trust Problem
While most participants in this study frame trust in terms of accuracy (hallucinations, bad links, incorrect facts), P11 frames trust primarily in terms of bias. She articulates this concern at three levels. Locally, she worries about AI giving culturally insensitive outputs to teachers in her majority-minority district. Globally, she sees the UN's push to deploy westernized AI systems in the Global South as "another version of colonization." And empirically, she cites specific research on differential feedback quality for minority students, dialect recognition failures, and facial recognition systems that failed to recognize Black students during proctored exams.
"I serve a majority minority district and what are the sources? What's the input? Can I see the data set that AI was trained on?"
The Depth-for-Speed Tradeoff
P11 names a quality cost to AI-assisted work that she attributes to herself rather than to the tool. When she uses Gemini or Perplexity to create professional development presentations, the output comes fast, but her delivery suffers because she hasn't spent enough time with the material to internalize it. This self-directed quality critique is notable: it suggests that the efficiency gains from AI can come at the cost of practitioner depth, even when the practitioner is aware of the tradeoff.
"My delivery lacks sometimes substance because I created it all using Gemini or Perplexity and I didn't spend a lot of time with it."
Motivation as the Great Divider
When asked about the next generation entering her field without ever having done the work without AI, P11 introduces motivation as the differentiating variable. She argues that the opportunities to develop critical thinking and creative skills will still exist, but only those with the motivation to pursue them will do so. She draws an analogy to physical activity: we used to farm and walk; now we go to the gym. The gym is available to everyone, but not everyone goes.
"Those who have the motivation to hone their creative thinking or critical thinking skills, they will. Those who don't want to won't. And that's where the divide will be, I think."
Emerging Themes
| Theme | Description | Key Quote |
|---|---|---|
| Pervasive Integration | AI adoption spanning many life domains | "There's a blend across work and personal... I use it probably a lot between work and personal and school." |
| Corporate Tooling Gap | Mismatch between approved tools and actual needs | "If I get off of the school district's network and use the guest network, I can access any AI I want." |
| Trust Calibration | Deliberate practices for evaluating AI trustworthiness | "I always go back and review the work that AI did in the background because I can go back and look at the source links." |
| Knowledge Displacement | Concern that AI dependency erodes foundational knowledge | "I worry about losing the ability to just find stuff in the library search engine." |
| Disclosure Norms | Emerging standards about AI attribution and usage guidelines | "I'm one of the ones in my organization that is setting those norms." |
| Apprenticeship Erosion | Concern about skipping foundational learning | "I feel like if we skip over learning [the basics], maybe we'll have people who can't think for themselves." |
| AI Bias Amplification | Concern that AI reproduces and amplifies social inequities | "It seems like another version of colonization and cultural, how do I say that? Like making culture homogeneous, I guess." |
Pervasive Integration
P11's AI integration spans four distinct domains: work (professional development creation, norms-setting), education (dissertation research, writing review), personal life (meal planning, health), and freelance design (writing courses in clients' voices). The transitions between domains are fluid and unselfconscious. Her meal planning example is particularly vivid: what used to be a multi-step process of searching recipes, entering them into an app, and generating a grocery list is now a single Perplexity prompt that incorporates her health data and budget constraints.
"Now, I just say, 'Perplexity, I am wanting a high protein diet that's low cost. Tell me what to buy at the grocery store. You have all my health data. What should I be eating?' It creates the whole meal plan with the recipes and gives me the shopping list in less than a minute and I like that."
Corporate Tooling Gap
P11's district provides Copilot and Canva AI as approved tools, but her preferred tools are Gemini and Perplexity. Her workaround, switching from the district network to the guest network, is described with zero friction or moral weight. She simply learned a thing. This is one of the most casual instances of the corporate tooling gap in the study: no agonizing over policy, no covert behavior, just a straightforward network switch that has become part of her routine.
"So there's Copilot. We can use Copilot and we can use Canva AI. But I learned a thing. So, if I get off of the school district's network and use the guest network, I can access any AI I want."
Trust Calibration
P11's trust calibration operates at two levels. The first is a standard source-verification practice: she checks links from Perplexity, evaluates the rigor of sources cited by Gemini (distinguishing between blogs and academic sources), and reviews the work AI did in the background. The second level is more unusual in this study: she calibrates trust not just for her own use but on behalf of the teachers she manages, worrying about whether they have the cultural sensitivity to catch outputs that are wrong in ways that go beyond factual accuracy.
"Sometimes Perplexity will give me a bad link and I always check the links. I always go back and review the work that AI did in the background because I can go back and look at the source links."
"Can I see the data set that AI was trained on? Because I want to know that when it's giving a teacher an answer, say they're not very culturally sensitive, if it gives them something that's wrong I want them to be able to identify it."
Knowledge Displacement
P11 names knowledge displacement at two levels. Personally, she is actively experiencing it in her doctorate: she hired a tutor specifically to learn the library research skills that she feels AI has allowed her to skip. She draws an analogy to her design education, where she started with paper and pencil before moving to digital tools. Generationally, she worries about giving children AI before they've learned foundational skills like long division, arguing that the logic behind basic math is "like learning a language without knowing you're learning a language."
"Sometimes in my doctorate I worry about losing the ability to just find stuff in the library search engine. And I even went so far as to hire a tutor because I'm in my dissertation phase and I'm like, how do you know what words to search and what's going to bring you back the right research?"
Disclosure Norms
P11 occupies a distinctive position on disclosure norms: she is actively working to create them from a leadership role rather than navigating norms established by others. She describes herself as "one of the ones" in her organization pushing for formal guidelines on what AI should and shouldn't be used for. This top-down norm-setting is a different dynamic from the bottom-up emergence seen in P5, P7, P8, and P10, where norms developed informally through practice and peer observation.
"I feel like I'm the only one in my organization, not the only one, I'm one of the ones in my organization that is setting those norms. So, for use, I'm trying to push a guideline for faculty and staff use of AI, like guidelines of what we use AI for, what's good use of AI, what shouldn't we put into AI for output."
Apprenticeship Erosion
P11 frames apprenticeship erosion in terms of primary education rather than professional training, which extends the theme's scope beyond what previous sessions have addressed. Her concern is that introducing AI before children have learned foundational skills (long division, multiplication, basic logic) will produce people who can't think for themselves. Her distinctive contribution is the introduction of motivation as the differentiating factor: the opportunities to learn will still exist, but only those motivated to pursue them will take them. Her gym analogy captures this neatly.
"Those who have the motivation to hone their creative thinking or critical thinking skills, they will. Those who don't want to won't. And that's where the divide will be, I think."
AI Bias Amplification
P11 provides the most developed treatment of algorithmic bias in the study so far, grounded in her experience serving a majority-minority school district. She articulates the concern at three levels: local (teachers in her district receiving culturally insensitive AI outputs without the awareness to catch them), global (the UN's deployment of westernized AI systems in the Global South as a form of cultural homogenization), and empirical (citing studies on differential feedback quality for minority students, dialect recognition failures, and facial recognition systems that didn't recognize Black students during proctored testing).
Despite these concerns, P11 is hopeful rather than fatalistic. She believes that inclusive design and research feedback from minority communities can mitigate the risk of bias, "as long as everyone has a seat at the design table."
"It seems like another version of colonization and cultural, how do I say that? Like making culture homogeneous, I guess."
"I've read a few studies where students were given feedback based on their writing and minority students were given less rigorous feedback from AI than Caucasian students. And then AI didn't recognize different dialects of English except for proper English. And then facial recognition didn't recognize Black students as human when they went in to be recognized for a test that was proctored, it didn't recognize their faces."
Interview Transcript
00:00:00
Paul: I'd like you to tell me the story of your first "oh wow" moment with generative AI. So, what was going on that made you try AI and what happened that made the light bulb go on for you?
P11: I was working as a career technical education teacher and we had a professional development on Magic School AI. I had toyed with ChatGPT a little bit and was unimpressed. It was early on and so I was like, "Yeah, whatever. That's kind of not going to be a thing." And so then at this PD I was like, "Oh, wow." Because I was teaching a whole new set of things. I was like, "I can unpack all these standards that I don't necessarily use that often or really ever had ever used." And I was teaching design, but it was interactive media. And so that was the first time I used it to help me with my scope and sequence, make unit exams that were aligned to the web exam blueprint.
00:01:14
P11: And it kind of scaffolded how to prompt AI for me. So then after using Magic School AI, I was like, "Oh wow, look at all this." And then I went back to ChatGPT for a while for social media stuff. But then when Google Gemini came out with the deep research model, I really went hardcore into that.
P11: I work for a large urban school district and I'm a CTE program manager now.
Paul: Fast forward to now. So, which tools have you stuck with and which ones have you abandoned?
P11: I abandoned ChatGPT and kind of went over to Google Gemini. It was back maybe two years ago when they started the deep research model and they gave you links and started citing references, everything I was citing, so you could see it's like thinking. And then I started using recently Perplexity for scholarly research, to find studies.
Paul: Is this for work or is there a blend across work and personal?
P11: There's a blend across work and personal. Right now I'm getting my education doctorate. So I'm doing my whole dissertation on AI's role, like my evolution of my leadership skills in conjunction with gen AI. So I use it probably a lot between work and personal and school. For school I'll even have it read my writing and then give me like a review, and not do it for me but just tell me how to improve it. Or I see what it recommends for clarity and conciseness in the writing and then any kind of grammatical errors, I'll use it for that. I'll use it to brainstorm professional development ideas with me for teachers. As far as like, I use a lot of design thinking in the professional development sessions with teachers that I create, and I will have it reference design thinking protocols or design justice thinking protocols to make PD better than what I could do alone, because I have a very limited amount of time to create.
00:04:29
P11: So that creativity just gets expanded upon, I guess.
Paul: Are there restricted and approved AI tools in your organization?
P11: Yeah. So there's Copilot. We can use Copilot and we can use Canva AI. But I learned a thing. So, if I get off of the school district's network and use the guest network, I can access any AI I want.
Paul: What do you tend to find you want to use that you're not officially allowed to use?
P11: Gemini and Perplexity. Yeah, those are the ones. I really love Gemini, but I use it every time I'm not on the school network.
Paul: Think about one thing that you have done regularly in your life at work or in personal life that AI has changed the most and walk me through what you used to do versus how you do it now with AI.
00:05:46
P11: Oh my goodness. Well, it's a new role that I started in June and I started the new role into administration in June. So, I'm trying to think, what did I used to do with it? Oh, AI. Okay. So, one thing, does it have to be in my role or can it be in my personal life? Okay. So, I used to meal plan without the use of AI and that was just like looking up recipes and then putting them in an app and the app would tell me what to go buy at the grocery store. Now, I just say, "Perplexity, I am wanting a high protein diet that's low cost. Tell me what to buy at the grocery store. You have all my health data. What should I be eating?" It creates the whole meal plan with the recipes and gives me the shopping list in less than a minute and I like that.
00:06:47
Paul: What if you couldn't use it anymore? How would that feel?
P11: I would be a lot less efficient at everything if I couldn't use it anymore. That would be a sad day. I mean, I've learned, like the way I use it most of the time for work, I feel like I've learned from it, too. And it's learned from me. It's just because it gives me back stuff in my own voice that I would have written, like for work say.
P11: And then I use it, I still design as a side gig, and with client's approval I will write whole courses for them and just have it reference their writing style on their website. Well, this one client I have, she has a very unique voice that she speaks in and has podcasting and stuff. So, I'm like, just look up this site, write in her voice with this content and go.
00:07:59
P11: And she's like, "Oh my gosh, how did you do that so fast?" I'm like, "Google Gemini, it knows you." But if I didn't have that, it would take me so long to do things.
Paul: I'm curious about what's been your biggest win with AI at work so far and then on the flip side, what's been your biggest disappointment or surprise failure with AI at work? So, we'll start with the biggest win or efficiency gain or success.
P11: Coming up with the PD and the presentation ideas, like it'll give me side by side what to create for professional development.
Paul: What does PD stand for in this context again? Yeah. So, professional development plan.
P11: Yeah. And then the flip side of that, on the other side of that coin, is with that inefficiency, like when I go to deliver it, it really, my delivery lacks sometimes substance because I created it all using Gemini or Perplexity and I didn't spend a lot of time with it.
00:09:12
Paul: Is this your perspective or feedback from other people?
P11: So that's my perspective. Like I could have done a more robust presentation or facilitation job if I hadn't used AI.
Paul: Are there any sort of norms that are being established at your workplace about using AI, and disclosing the use of AI? And just for simplicity, let's keep it to staff and faculty rather than students.
P11: So, I feel like I'm the only one in my organization, not the only one, I'm one of the ones in my organization that is setting those norms. So, for use, I'm trying to push a guideline for faculty and staff use of AI, like guidelines of what we use AI for, what's good use of AI, what shouldn't we put into AI for output.
00:10:30
Paul: How do you decide whether to trust what AI gives you? What tips you off that something might be wrong and what do you do?
P11: Sometimes Perplexity will give me a bad link and I always check the links. I always go back and review the work that AI did in the background because I can go back and look at the source links and sometimes the links, like I noticed in Gemini when I was doing some research, I was asking general questions about AI use and I found it was citing sources that weren't as rigorous as others. It was citing blogs. It was just searching the internet. It wasn't doing an academic search of stuff that I could cite. So I would say that's kind of the part I don't trust. There's also another piece of trust that I don't have and that's bias.
00:11:47
P11: So, this is something I think about a lot. I serve a majority minority district and what are the sources? What's the input? Because there's so much, like can I see the data set that AI was trained on? Because I want to know that when it's giving a teacher an answer, say they're not very, how do I put it, they're not very culturally sensitive, if it gives them something that's wrong I want them to be able to identify it.
P11: But even, you know, so, and then there's another problem with it on a broader sense where I don't trust, because I was reading research that the UN is really pushing AI in the global south for teaching and learning to create learning management systems and to give students feedback. I don't trust that because the data set that they're using is so westernized. It seems like another version of colonization and cultural, how do I say that?
00:13:17
P11: Like making culture homogeneous, I guess. So those are some of the things I think about when I think about trust.
Paul: How does this increasing presence of AI in your world, in everyone's world, make you feel?
P11: You know, I feel rather hopeful about it. I don't have a negative, I have hope for it but then that's kind of metered by the realities of the world that we're in and the context in which we live. I feel like it's input, you know, what goes in comes out. Representation of research and viewpoints other than completely the western world in large language models, I guess, are the models that are used to train AI. But in general I'm hopeful and I think I don't have any doomsday predictions about it or anything. I don't feel that.
Paul: What is your biggest hope for what AI might unlock or provide say in the next 5 to 10 years?
P11: More work life balance.
Paul: You said you're not really too negative about things and you're optimistic. If there was one big fear or concern you had about AI, how would you articulate that?
P11: I would just say it's about our own bias. Like, we live in a world that is not fair, it's not just, it's not equitable. And how is AI amplifying that in the world? That would be my only concern. Without checks, is it just, like, I've read a few studies where students were given feedback based on their writing and minority students were given less rigorous feedback from AI than Caucasian students. And then AI didn't recognize different dialects of English except for proper English. And then facial recognition didn't recognize Black students as human when they went in to be recognized for a test that was proctored, it didn't recognize their faces.
00:16:23
P11: So that's the kind of thing that concerns me about AI. But I think as long as everyone has a seat at the design table and as long as we have this type of research and feedback from minority groups is used, I think we can mitigate the risk of bias.
Paul: Do you worry about losing certain skills because you're leaning on AI? And if so, what?
P11: Sometimes in my doctorate I worry about losing the ability to just find stuff in the library search engine. And I even went so far as to hire a tutor because I'm in my dissertation phase and I'm like, how do you know what words to search and what's going to bring you back the right research? And they're like, well, it's a process, a learning process that I feel like I'm missing out on. Like when I started design school we started with paper and pencil.
00:17:33
P11: Like relearned design from a very non-technical standpoint and I feel like I'm losing out on that process if I just rely on AI to find the stuff for me. So I guess, but I guess maybe that's going to be a skill that's obsolete because I don't know the Dewey Decimal system.
Paul: What about the next generation? So people entering your field who've never done the work without AI.
P11: Who've never done the work without AI?
Paul: Yes. So maybe fast forward a few years from now, what concerns would you have?
P11: Well, I kind of think of, I don't know that I have many concerns because when we, my only concern is that we don't start with the basics in school and that we give AI too soon. So I'm talking about elementary years, primary years. Because I'm thinking back to when I learned long division and multiplication and the basics of math, it was like learning a language without knowing you're learning a language.
00:18:57
P11: There's a logic behind it. And I feel like if we skip over learning [the basics of math, long division and multiplication], maybe we'll have people who can't think for themselves. But we're starting to see that now with students coming up because they're over-tested, just because of over-testing. So I feel like, you know, we used to farm and we used to be really active and walk and now we just go to the gym. So those who have the motivation to hone their creative thinking or critical thinking skills, they will. Those who don't want to won't. And that's where the divide will be, I think.
Paul: That is similar to some ideas that I've been working through about how motivation has got to be a big mediator for attaining skills, keeping skills, and using AI with a critical eye. So, I'm still just working through my thoughts there.
00:20:07
P11: Yeah.
Paul: I do have one more question about the gap between what AI can do for you right now and what you actually need it to do. So, what's the biggest gap that you continually experience when you're using AI and you want it to do a thing and it just can't do the thing yet?
P11: Oh, it can't design a visual for me in any way, shape, or form. Like, okay, I had one visual designed but I gave it the sketch. I fed it the sketch and I said, you know, and I gave it instructions, but without that sketch, I would have never gotten to the visual I needed with just prompting it.