Epistemic Transparency: Facts, Assumptions & Projections
In the age of generative AI, hallucinations and hidden heuristics erode trust. To restore confidence, our systems adopt Epistemic Transparency, a discipline that labels every meaningful claim as a Fact (F), Assumption (A) or Projection (P). This simple taxonomy makes it clear what is known, what is inferred and what is speculated—without exposing proprietary models. It’s a way to explain decisions without revealing the secret sauce.
The roots of Epistemic Transparency lie in global AI governance. The AI auditability principles adopted by Japan and echoed in the EU AI Act emphasize transparency, fairness and accountability. The EU's risk-tiered framework goes further: for high-risk systems, it requires developers to document their data, assumptions and potential impacts. We take these expectations seriously. Instead of performing hand-wavy risk assessments, we attach F/A/P labels to each step in our reasoning. This allows regulators and stakeholders to audit decisions efficiently, ensuring compliance without an invasive code review.
Here’s how it works. A Fact is verifiable: the user’s location, a legal statute, a court deadline. An Assumption fills a gap: perhaps the user’s employment history or the assumption that a witness remains willing to cooperate. A Projection is a plausible future outcome, ranked with qualitative confidence. By explicitly labeling information, we avoid conflating guesswork with evidence. This helps users understand the basis of their options and identify which parts need verification. When the system suggests a course of action, it indicates which elements are Facts, which are Assumptions and which are Projections, inviting the user to correct or confirm them.
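The taxonomy above can be sketched as a simple data model. This is a minimal illustration, not our production schema: the `Claim` and `EpistemicLabel` names, the qualitative confidence field, and the example claims are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class EpistemicLabel(Enum):
    FACT = "F"          # verifiable: official records, statutes, deadlines
    ASSUMPTION = "A"    # fills a gap; should be confirmed by the user
    PROJECTION = "P"    # plausible future outcome, qualitative confidence

@dataclass
class Claim:
    text: str
    label: EpistemicLabel
    confidence: str = "high"        # qualitative: "high" / "medium" / "low"
    source: Optional[str] = None    # provenance for Facts; None for A / P

claims = [
    Claim("Filing deadline is 2025-03-01", EpistemicLabel.FACT,
          source="court docket"),
    Claim("Witness remains willing to cooperate",
          EpistemicLabel.ASSUMPTION, "medium"),
    Claim("Settlement likely within 90 days",
          EpistemicLabel.PROJECTION, "low"),
]

# Anything that is not a verified Fact is surfaced for user review.
needs_verification = [c.text for c in claims
                      if c.label is not EpistemicLabel.FACT]
```

Separating the label from the confidence field keeps the two concerns distinct: the label says *what kind* of claim this is, while confidence says how strongly the system holds it.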
The F/A/P framework also enhances the MMX™ engine. MMX™ measures Motivation, Identity, Context and Time; Epistemic Transparency describes the quality of inputs feeding that measurement. Together they reveal when a user’s vector drifts from ethical norms: is the divergence due to missing data, incorrect assumptions or speculative leaps? The Ethics Pulse™ then weighs options according to fairness and legality, ensuring the system never prefers a harmful projection over a safer, slower path.
Transparency doesn’t slow us down. As global regulatory landscapes tighten—with AI legislation surging more than ninefold since 2016—businesses that embrace F/A/P labeling gain a competitive edge. They can demonstrate compliance, reduce bias and avoid the hidden costs of lawsuits or product recalls. Most importantly, they build trust with the people they serve. When users know why an AI suggested a particular legal remedy or job‑application strategy, they’re more likely to follow through and achieve a fair outcome.
Putting F/A/P Into Practice
To implement Epistemic Transparency, you don’t need to overhaul your entire stack. Start by identifying the key claims in your application flow and tag them. Use F for data points pulled from verified databases or official documents. Use A when inferring from context (for example, presuming a user’s preferred language based on location). Use P when describing potential future scenarios. The system should expose these labels to the user whenever it proposes an action. Over time, you can integrate these labels into data schemas, logs and user interfaces, making them an integral part of your AI’s audit trail.
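One way the labels might flow into logs and an audit trail is as structured JSON entries. This is a hedged sketch only: the `audit_entry` helper, its field names, and the sample claims are illustrative assumptions, not an API from the source.

```python
import json
from datetime import datetime, timezone

def audit_entry(action: str, claims: list) -> str:
    """Serialize a proposed action and its F/A/P-labeled claims
    as one JSON log line, so labels survive into the audit trail."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        # each claim: {"text": ..., "label": "F" | "A" | "P"}
        "claims": claims,
    }
    return json.dumps(entry)

line = audit_entry(
    "suggest_filing_extension",
    [
        {"text": "Statute allows one 30-day extension", "label": "F"},
        {"text": "User prefers English (inferred from location)", "label": "A"},
        {"text": "Court likely grants extension", "label": "P"},
    ],
)
record = json.loads(line)
```

Because each entry is self-describing JSON, downstream tooling (dashboards, compliance reviews, user-facing explanations) can filter by label without parsing free text.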
Call to Action
Epistemic Transparency is not just a technical tweak—it’s a cultural shift. It invites collaboration between developers, product managers, lawyers and users. It aligns with the eight auditability principles—particularly transparency, fairness, accountability and international cooperation. If you’re curious about embedding F/A/P into your workflows or building an audit‑ready system, read our guide to the Eight Core Principles. Or contact us directly to explore how the MMX™ engine and Ethics Pulse™ can work with Epistemic Transparency to make your AI both powerful and principled.
UPL Notice: This publication is educational in nature and does not constitute legal advice. Consult a licensed attorney for advice about specific situations.