I don't rush to answers. I start by watching and listening - finding patterns in how people actually think and behave. I step away, sketch things out, come back. When I share something, I've already pressure-tested it.
That's what teams get from working with me: not quick opinions, but clarity they can trust. The kind that helps you commit to a direction and stay confident when things get complicated.
I've done this in messy, high-stakes spaces - AI chatbots, healthcare systems, accessibility tools, education platforms. Places where a wrong decision costs you users, trust, or months of work.
I also teach UX Research at Pratt Institute, review papers for ACM TOCHI, and mentor researchers through ADPList. Teaching keeps me honest - when you have to explain something clearly, you find out fast what you actually understand.
I believe good decisions come from slowing down, asking harder questions, and not letting go until it's right.
Currently curious about:
How people argue with AI - and what that reveals about trust. Also: why some research lands and some gets ignored, even when the findings are solid.
I don't have a rigid framework. But there's a rhythm to how I work:
Before forming opinions, I observe. How do people actually behave? Where do they hesitate? What are they not saying? The patterns usually show up here.
I write, sketch, cluster. I talk to people in similar roles. I look for what keeps coming up - and what contradicts it. Clarity comes from getting it out of my head.
I don't force insight. Sometimes I need to leave it alone, then return with fresh eyes. The answer often shows up in the space between sessions.
When I think I've found something, I pressure-test it. Show it to others. Poke holes. I'd rather be wrong in a sketch than wrong in production.
I don't hand off and disappear. If I care about something, I keep checking in - refining, questioning, making sure it actually holds up in the real world.