Principal Product Designer, AI & Decision Systems
Human-in-the-loop • Interpretability • Regulated Systems
I design AI systems where complex outputs become accountable decisions • From model output to reviewable, auditable action
Carbon & Energy Intelligence Platform
Best Carbon Reporting Software — 2024 ESG Investing Carbon Awards
AI Interview & Communication Feedback System
Six issued U.S. patents in multimodal AI feedback and interpretability.
Decision authority models • AI output explanation and confidence • Rationale capture and audit workflows • Human override and escalation systems • Autonomy aligned to risk
Who is John Deyto?
I didn’t start in AI.
I started with how people see.

Before product design, before systems, I spent years in photography trying to understand perception. Not just how to capture an image, but how an image shapes meaning. A photograph can feel truthful while being constructed. Framing changes interpretation. Light can reveal or obscure. What appears objective is often the result of decisions made by the person behind the lens.

That idea never left me.

I moved into brand and design because I became interested in how meaning is created at scale. In branding, the question wasn't just what something looked like, but what it stood for, how it was perceived, and how it shaped behavior over time. I worked on campaigns, identity systems, and early digital experiences where design wasn't decoration; it was influence.

From there, I moved into digital product design as platforms began to define how people communicate, express themselves, and participate in systems.

At companies like Yahoo and LinkedIn, I worked on systems where identity became something people constructed. Profiles, interactions, and content weren't just features; they were expressions of who someone is within a system. At GaiaOnline and other platforms, this extended into environments where identity was fluid, participatory, and social by design.

I've always been drawn to creation.

I founded and designed a short-form video application that allowed people to record, edit, and publish video in real time. The goal was to remove friction between intention and output, to let people express themselves without delay. The product reached around 200,000 users within three months.

That experience clarified something for me: systems that allow people to create are fundamentally different from systems that ask them to consume.
Creation is where identity, agency, and participation become visible.

Over time, my work moved into systems where interpretation leads directly to action.

At Care.com, I worked on a trust-sensitive marketplace where people make real decisions about care, safety, and responsibility. Design wasn't about engagement; it was about clarity between people.

At Hyperloop, I worked on passenger experience systems for infrastructure that didn't yet exist in the real world, translating complex, invisible systems into something people could understand and trust.

At Korn Ferry, I designed AI systems that analyzed how people communicate and translated those signals into feedback used in hiring and leadership decisions. This was where interpretation became explicit. Users needed to understand not just what the system said, but what it meant and whether to trust it.

At QuinTrace, I worked on systems that enabled real-time decisions based on energy and carbon data in regulated environments. Accuracy, traceability, and accountability weren't abstract concepts. They determined outcomes.

Today, my work focuses on AI systems, autonomy, and decision-making.

As systems become more capable, they also become less transparent. Outputs appear complete, but the reasoning behind them is often hidden. The distance between output and action is shrinking.

This makes design more critical, not less. The problem is no longer just usability. It is interpretation, trust, and responsibility.

Across everything I've done, from photography to AI systems, the underlying question has remained the same: How do people decide what is true enough to act on?

How I Think
I believe systems should make uncertainty visible, not hide it.
Users should be able to interpret and question outputs, not just accept them.
Automation should support human judgment, not replace it.
Design should clarify responsibility, especially when outcomes matter.

The goal is not to make decisions for people.
It is to help them understand what they are deciding.

Teaching
Alongside my work, I've taught design and photography, including at ArtCenter College of Design and the California Institute of Technology.

Teaching is a continuation of the same inquiry. It's about helping others see what is often invisible in systems and understand how design decisions shape perception, behavior, and outcomes.

Final thought
I've worked across photography, brand, product, and AI.

Not because I was changing directions,
but because the medium kept changing.

The question didn't.