Disclosure Norms
Social Dynamics · Provisional
Emerging informal standards within teams or organizations about when and how to attribute AI contributions. These norms develop organically through practice rather than formal policy, and are shaped by seniority dynamics and perceived professional vulnerability.
Evidence
“that there were many conversations where we were working on projects together, most internal things, and that people would contribute their own thoughts but it was really AI assisted thinking going into it and we didn't really think about it, we didn't set any policies about that or any guidelines around discussions. I think people just are kind of freely admitting is like the AI and I put this together and our thinking is more, I think it's almost along the lines of making an attribution with a quote that you use and it's not completely my own thinking but it doesn't diminish the quality of it just because of that.”
“the trainee was definitely not using AI at all. And I asked, "Hey, are you using any?" And she went totally on the defensive, like, "Why are you asking this? No, I'm not." And I'm like, "That's fine. I just want to know. I want to know how you're using it and can we talk about it?" And she said,”
“I've noticed that some things were definitely AI generated. So I just asked myself, "How do I ask this without sounding judgmental?" Which was... I didn't want it to be like, "Hey, are you using AI?" in a judgmental way. It's like, "Hey, can we talk about this? How are you using AI? Which tools are you using?”
“And I feel we shouldn't be ashamed of talking about [disclosing AI use]. And especially in a power dynamic where there's different seniority, I think we should be completely transparent about what the expectations are.”
“I recently started signing my emails that I ran through Copilot at the end, "edited by AI." And again, there's no reason for me to do that. And I kind of do it because I think it's funny, but it's kind of like the "sent from my iPhone" or whatever. But I feel like, beyond [people having to] look for the em dashes, I think it's a good way to disclose it.”
“I don't see unwritten rules. I always disclose when I use AI, whether it's reporting the results of something or if I do create any kind of visuals.”
“No, not at work at least. Let's put it that way. It seems to be kind of just this gold rush mentality of we're all expected to use it and then they sometimes ask us like, "What benefits have you been getting from it lately?" Just I think as a way really just to justify the cost of the licensing that they do. But beyond that, if you mean like any sort of like disclosure, so let's say for example when we turn in a report, "Last quarter, portions of this were created through generative AI means," nothing like that.”
“So, I feel like I'm the only one in my organization, not the only one, I'm one of the ones in my organization that is setting those norms. So, for use, I'm trying to push a guideline for faculty and staff use of AI, like guidelines of what we use AI for, what's good use of AI, what shouldn't we put into AI for output.”
“I haven't seen any mention of having to do that, but I'd say for something that was totally certain by either statistical data, I mean, I think you need a disclaimer saying these numbers need to be double-checked for accuracy for sure. Anything that goes into the area, you know, we're going to make this big million-dollar decision based on this widget not working correctly, you know, that's what has to be double-checked.”
“In some places, yes. I just came back from Brazil. I was there for a whole month with my mom. And over there a lot of places, if they do images with AI, they put an AI credit, you know, on the image. So they are disclosing. Movies, anything that's done with AI. Here, very sporadic. I see that. But I feel that at some point we need to have some norms, some rules. Because the deepfakes, they know, I mean, we have elections coming up here. We have elections in Brazil. I mean, there's so much, so many things that AI can do to damage. I think it needs, it needs somehow to have some sort of rule, some kind of criteria that we can get things, you know, okay, you can only do A if you do B, otherwise it's going to be like nobody's land.”
“I feel like we're all using it so much. No, I think we all know that we're all using it so much. I think it's not, there's just in the context of our day-to-day work with these 10 people, I mean, I will call out I think I probably use it less for my day-to-day thinking than other people do. So, if we're having a product kind of conversation remotely, somebody might respond to my question with what I know is an AI output. Rather than somebody sending me a few sentences, they'll send me two pages worth of AI output of a concept and I'll have to read through it and see if it makes sense.”
Sessions
The Five-Day, One-Day, Three-Day Problem
P5 - Sr. Manager, UX Research · Software · Apr 15, 2026
The Tunnel Vision Experiment
P7 - Principal Design Researcher · Software Consulting · Apr 16, 2026
Hallucinations Are a Feature
P8 - UX Researcher/Designer · Electric Utilities · Apr 16, 2026
The Seductive Skeptic
P10 - UX Manager · Insurance · Apr 17, 2026
The Norm-Setter on the Guest Network
P11 - CTE Program Manager · K-12 Education · Apr 17, 2026
The Visual Thinker with a Framework
P12 - UX Designer/Researcher · Advertising & Design · Apr 20, 2026
Midnight Ideas and Shadow Adoption
P13 - UX Design Consultant · Consumer Finance · Apr 20, 2026
Fighting Fire with Fire
P14 - Head of Design · Healthcare Software · Apr 20, 2026