JOHN FRANCIS DEYTO

Principal Product Designer, AI & Decision Systems
Human-in-the-loop • Interpretability • Regulated Systems

Designing Trust

I design AI systems where complex outputs become accountable decisions • From model output to reviewable, auditable action

QuinTrace

Carbon & Energy Intelligence Platform
Best Carbon Reporting Software — 2024 ESG Investing Carbon Awards

CONTEXT
Energy and carbon data drive compliance and financial outcomes. Operators are responsible for how energy is used, stored, and verified over time.
THE PROBLEM
Operators needed to make hourly decisions about energy usage, storage, and certification based on carbon intensity, but data was fragmented, not time-aligned, and difficult to verify.
RESEARCH & INSIGHT
Operators were working with aggregated and time-lagged data, which made it difficult to align decisions with real-time conditions. What they needed was not more data, but confidence in when and how to act.
WHAT I DESIGNED
• Systems aligning generation, consumption, and carbon data in time
• Workflows for shifting load, dispatching storage, and validating energy behavior
• Interfaces enabling conversion of verified activity into RECs and offsets
• Decision environments where users could interpret and validate system outputs before acting
OUTCOME
The system enabled real-time, decision-ready visibility into energy and carbon data. Operators could validate outputs before acting, supporting both carbon goals and economic performance.
RESULT
Best Carbon Reporting Software — 2024 ESG Investing Carbon Awards
Enabled high-granularity, hourly (24/7) energy tracking aligned with EnergyTag V2 standards, contributing to improved Scope 2 reporting accuracy and verifiable carbon accounting across renewable and storage assets.

Korn Ferry

AI Interview & Communication Feedback System
Six issued U.S. patents in multimodal AI feedback and interpretability.

CONTEXT
Korn Ferry developed an AI-powered system that analyzes how people communicate on video, evaluating emotional signals, language, and delivery to help users improve performance in interviews and high-stakes communication.
THE PROBLEM
Users were largely unaware of how they presented themselves. They could not see how their tone, word choice, or emotional signals affected how they were perceived.
RESEARCH & INSIGHT
The issue was not lack of effort, but lack of visibility. Users needed a way to see themselves as others would, and to understand how specific behaviors impacted perception.
WHAT I DESIGNED
• Interfaces that surfaced emotional signals, language patterns, and delivery in a structured, interpretable way
• Scoring systems that translated behavior into actionable feedback
• Playback and editing experiences that allowed users to see improved versions of their communication
• Iterative workflows where users could practice, adjust emotional tone (confident, calm, assertive), and improve across multiple takes
OUTCOME
Users became aware of how they were presenting themselves and could actively improve. The system created a feedback loop where users could observe, adjust, and refine their communication before real-world interactions.
RESULT
Six issued U.S. patents in multimodal AI feedback and interpretability.
Platform reached profitability within six months.

My Focus

Decision authority models • AI output explanation and confidence • Rationale capture and audit workflows • Human override and escalation systems • Autonomy aligned to risk

Who is John Deyto?
I didn’t start in AI.
I started with how people see.
Before product design, before systems, I spent years in photography trying to understand perception. Not just how to capture an image, but how an image shapes meaning. A photograph can feel truthful while being constructed. Framing changes interpretation. Light can reveal or obscure. What appears objective is often the result of decisions made by the person behind the lens.

That idea never left me.

I moved into brand and design because I became interested in how meaning is created at scale. In branding, the question wasn’t just what something looked like, but what it stood for, how it was perceived, and how it shaped behavior over time. I worked on campaigns, identity systems, and early digital experiences where design wasn’t decoration, it was influence.

From there, I moved into digital product design as platforms began to define how people communicate, express themselves, and participate in systems.

At companies like Yahoo and LinkedIn, I worked on systems where identity became something people constructed. Profiles, interactions, and content weren’t just features, they were expressions of who someone is within a system. At GaiaOnline and other platforms, this extended into environments where identity was fluid, participatory, and social by design.

I’ve always been drawn to creation.

I founded and designed a short-form video application that allowed people to record, edit, and publish video in real time. The goal was to remove friction between intention and output, to let people express themselves without delay. The product reached around 200,000 users within three months.

That experience clarified something for me: systems that allow people to create are fundamentally different from systems that ask them to consume.
Creation is where identity, agency, and participation become visible.

Over time, my work moved into systems where interpretation leads directly to action.

At Care.com, I worked on a trust-sensitive marketplace where people make real decisions about care, safety, and responsibility. Design wasn’t about engagement, it was about clarity between people.

At Hyperloop, I worked on passenger experience systems for infrastructure that didn’t yet exist in the real world, translating complex, invisible systems into something people could understand and trust.

At Korn Ferry, I designed AI systems that analyzed how people communicate and translated those signals into feedback used in hiring and leadership decisions. This was where interpretation became explicit. Users needed to understand not just what the system said, but what it meant and whether to trust it.

At QuinTrace, I worked on systems that enabled real-time decisions based on energy and carbon data in regulated environments. Accuracy, traceability, and accountability weren’t abstract concepts. They determined outcomes.

Today, my work focuses on AI systems, autonomy, and decision-making.

As systems become more capable, they also become less transparent. Outputs appear complete, but the reasoning behind them is often hidden. The distance between output and action is shrinking.

This makes design more critical, not less.

The problem is no longer just usability. It is interpretation, trust, and responsibility.

Across everything I’ve done, from photography to AI systems, the underlying question has remained the same:

How do people decide what is true enough to act on?

How I Think
I believe systems should make uncertainty visible, not hide it.
Users should be able to interpret and question outputs, not just accept them.
Automation should support human judgment, not replace it.
Design should clarify responsibility, especially when outcomes matter.
The goal is not to make decisions for people.
It is to help them understand what they are deciding.
Teaching
Alongside my work, I’ve taught design and photography, including at ArtCenter College of Design and the California Institute of Technology.
Teaching is a continuation of the same inquiry. It’s about helping others see what is often invisible in systems and understand how design decisions shape perception, behavior, and outcomes.

Final Thought
I’ve worked across photography, brand, product, and AI.
Not because I was changing directions,
but because the medium kept changing.
The question didn’t.