About Me
I started my career studying how pilots and air traffic controllers work with automated systems. My PhD at UT Austin investigated what happens to human performance and decision-making when automation is introduced into high-stakes operational environments. The short answer: it depends entirely on whether the humans can build an accurate mental model of what the automation is doing and when to trust it.
That question has followed me for 25 years. After finishing the PhD I moved into applied UX work at Intuit, then led a growing UX practice at Sage Software, then spent a decade building research programs at companies from healthcare (UnitedHealth Group) to legal tech (Evisort) to SaaS procurement (Vendr). I also built and directed Kent State University's Master of Science in User Experience Design program from 2014 to 2021, which gave me a different vantage point on the field: not just practicing research, but designing the curriculum that trains the next generation of researchers.
Throughout all of it, the consulting practice I started in 2004 (ShermanUX) has kept me connected to a wide range of industries and problem types. Clients have included Microsoft, Verizon, Dell, the Federal Reserve Bank, Golden 1 Credit Union, and organizations across fintech, insurance, manufacturing, and more.
I think about research as an organizational capability, not a service function. The difference is structural. A service function runs studies when asked. An organizational capability means the team has built the processes, relationships, and communication patterns that ensure user evidence is a continuous input to product decisions.
When I join a team, the first thing I build is the pipeline: how does evidence get generated, synthesized, communicated, and acted on? Who are the decision-makers and what form does evidence need to take for them to use it? Where are the handoff points where insights currently die, and how do we rewire those?
I triangulate across three signal types:
- Behavioral data (analytics, funnels, session recordings) shows what people actually do.
- Attitudinal data (surveys, satisfaction scores, NPS) captures how they feel about it.
- Generative qualitative data (interviews, contextual inquiry, usability tests) explains why.
Any one signal can mislead. The three together are hard to argue with.
When the signals don't converge neatly, I make a call based on the salience, quality, and depth of the available evidence, and I communicate the confidence level to stakeholders. Organizations that wait for perfect data don't ship. Organizations that act on gut instinct ship the wrong thing. The space in between is where I operate.
The question at the center of my dissertation, “how do humans calibrate trust in automated systems,” is now the central design question of the decade. Every product team building AI features is confronting it: how do we help users understand what the AI is doing, when to trust its output, and when to override it?
I've worked both sides of this question, as a researcher of trust in automation and as a builder of AI products. At Evisort, I led the discovery and design sprint process for an AI-powered clause library where the core UX challenge was a mental model mismatch: users could see that the AI had done something, but they couldn't figure out what it had done or why. Three rounds of design sprints and usability testing resolved this by making the AI's reasoning legible in the interface.
At Vendr, I used AI differently: as a research operations tool. I integrated Google's NotebookLM into qualitative analysis workflows, cutting analysis time by 50%. Then I gave stakeholders direct access to the tool so they could query the research data themselves. That's not just efficiency. That's a redistribution of analytical capability across the organization.
Get in Touch
I'm currently exploring principal, staff, and director-level research roles at companies building complex products. I'm open to longer-term strategic consulting engagements as well. Reach out if your team is working on wicked problems where rapid, valid research needs to be wired into how you make decisions.