Litigation Engineering FAQ
Common questions about building software where outputs become evidence. For the full framework, see the Litigation Engineering overview or the deep-dive blog post.
What is litigation engineering?
Litigation engineering is the discipline of building software systems where the outputs become evidence. It applies engineering rigor to domains where errors have seven-figure consequences, where opposing counsel will scrutinize every decision your system made, and where "it usually works" isn't good enough. It combines chain of custody, auditability, reproducibility, and human-in-the-loop verification into a unified engineering practice.
How is litigation engineering different from regular software engineering?
Most software operates under a "move fast and break things" philosophy where the cost of failure is a bad review or a lost click. Litigation engineering operates in adversarial environments where a wrong output can cost millions of dollars, where judges and regulators demand audit trails, and where opposing counsel will scrutinize every decision. In litigation engineering, "break things" means malpractice.
What is chain of custody in software systems?
Chain of custody in software means every transformation your system performs is a logged, timestamped, immutable event. Document ingestion, text extraction, AI processing, human review — each step is recorded so the provenance of any output can be traced back to its source. This is essential when outputs become evidence in legal proceedings.
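As a concrete sketch, one way to implement this is an append-only, hash-chained event log: each event carries the hash of its predecessor, so altering any earlier record breaks verification. The `CustodyLog` class below is illustrative only (its name and fields are assumptions, not a reference to any particular product), and a production system would also persist events to write-once storage.

```python
import hashlib
import json
from datetime import datetime, timezone

class CustodyLog:
    """Append-only event log. Each event is timestamped and hash-chained
    to its predecessor, so any later alteration breaks verification."""

    def __init__(self):
        self._events = []

    def record(self, step: str, payload: dict) -> str:
        """Log one pipeline transformation, e.g. "text_extraction"."""
        prev_hash = self._events[-1]["hash"] if self._events else "GENESIS"
        event = {
            "step": step,
            "payload": payload,  # inputs/outputs of this step; JSON-serializable
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        event["hash"] = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()
        ).hexdigest()
        self._events.append(event)
        return event["hash"]

    def verify(self) -> bool:
        """Recompute every hash; False means some event was altered."""
        prev = "GENESIS"
        for e in self._events:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

# Hypothetical usage: one event per transformation, verifiable end to end.
log = CustodyLog()
log.record("ingestion", {"doc_id": "A-123"})
log.record("text_extraction", {"doc_id": "A-123", "chars": 48210})
assert log.verify()
```

The hash chain is what makes the log tamper-evident rather than merely timestamped: you can prove not just when each step ran, but that no step was edited after the fact.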
How do you make AI outputs reproducible for legal proceedings?
Reproducibility requires pinning model versions, logging prompt templates, and storing the exact inputs that produced each output. Given the same inputs and the same model version, your system should produce the same result. If it can't, you need to understand and explain that variance — potentially under oath. "We ran it through GPT" is not an audit trail.
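A minimal sketch of what that record can look like, with assumed names (`build_run_manifest` and its fields are illustrative, not from any specific library): store a manifest like this alongside every output, so the exact prompt, inputs, and pinned model version can be produced on demand.

```python
import hashlib
import json

def build_run_manifest(model_id: str, model_version: str,
                       prompt_template: str, inputs: dict,
                       temperature: float = 0.0) -> dict:
    """Capture everything needed to re-run, or explain, one inference:
    the pinned model snapshot, the exact prompt, and the exact inputs."""
    return {
        "model_id": model_id,            # a concrete model, never an alias like "latest"
        "model_version": model_version,  # pinned snapshot identifier
        "temperature": temperature,      # 0.0 to minimize sampling variance
        "prompt_template": prompt_template,
        "prompt_sha256": hashlib.sha256(prompt_template.encode()).hexdigest(),
        "inputs_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
    }
```

Note that even at temperature 0, some hosted models are not bit-exact across runs; the manifest is what lets you characterize that variance in advance rather than discover it under cross-examination.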
What does human-in-the-loop mean in legal tech?
Human-in-the-loop means AI can surface, sort, and summarize — but humans decide, verify, and certify. This boundary must be enforced in architecture, not just policy. If your system allows an AI output to become part of a legal filing without human review, you're building a liability. Attorney review gates should exist at every point where AI output could become evidence.
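Here is one way the boundary can be enforced in architecture rather than policy: the export path simply cannot produce filing-ready text without a recorded attorney approval. The names below (`AttorneyCertification`, `export_for_filing`) are hypothetical, a sketch of the pattern rather than any specific system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AttorneyCertification:
    reviewer_id: str       # the attorney who verified the output
    reviewed_at: datetime
    approved: bool

class UncertifiedOutputError(Exception):
    """AI output reached a filing path without attorney sign-off."""

def export_for_filing(ai_output: str,
                      cert: Optional[AttorneyCertification]) -> str:
    """The review gate lives in the code path itself, not in a policy
    document: there is no way to reach a filing without a recorded,
    affirmative attorney approval."""
    if cert is None or not cert.approved:
        raise UncertifiedOutputError("attorney review required before filing")
    return ai_output

# Hypothetical usage: without the certification, the call raises.
draft = "AI-generated summary of deposition exhibits..."
cert = AttorneyCertification("atty-042", datetime.now(timezone.utc), approved=True)
filing_text = export_for_filing(draft, cert)
```

The design point is that the certification is itself a logged artifact: who reviewed, when, and whether they approved becomes part of the same provenance chain as the AI output it gates.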
What industries need litigation engineering?
Any domain where software outputs face adversarial scrutiny: legal tech and e-discovery pipelines, AI systems whose outputs become evidence, regulatory compliance and financial reporting, healthcare and life sciences data systems, and insurance claims processing and fraud detection. The common thread is that someone with a law degree and a motive will examine your system's work product.
How do you build auditable AI pipelines?
Auditable AI pipelines require structured logging at every step, schema validation on all outputs, confidence scoring on extractions, pinned model versions, stored prompt templates, and complete provenance chains from source document to final output. Every decision — both automated and human — is logged so that anyone asking "where did this number come from?" can get the answer in one query.
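A minimal sketch of one such pipeline step, with assumed names and an assumed confidence threshold (a production system would likely use a schema library such as pydantic or jsonschema rather than hand-rolled checks): every extraction is schema-validated, confidence-scored, tied back to a pinned model version and prompt hash, and emitted as one structured JSON log line that can be queried later.

```python
import json
import logging
from dataclasses import dataclass, asdict

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

CONFIDENCE_FLOOR = 0.85  # assumed threshold; below it, route to human review

@dataclass
class Extraction:
    source_doc: str    # provenance: the document this value came from
    field: str
    value: str
    confidence: float  # model-reported or heuristic score in [0, 1]

REQUIRED_FIELDS = {"source_doc", "field", "value", "confidence"}

def validate_extraction(raw: dict, model_version: str,
                        prompt_sha256: str) -> Extraction:
    """Schema-check a raw model output, then emit one structured,
    queryable log line tying the value back to its source."""
    missing = REQUIRED_FIELDS - raw.keys()
    if missing:
        raise ValueError(f"extraction missing required fields: {missing}")
    ext = Extraction(**{k: raw[k] for k in REQUIRED_FIELDS})
    log.info(json.dumps({
        "event": "extraction_validated",
        "model_version": model_version,  # pinned version, for reproducibility
        "prompt_sha256": prompt_sha256,  # links back to the stored template
        "needs_human_review": ext.confidence < CONFIDENCE_FLOOR,
        **asdict(ext),
    }))
    return ext
```

Because each log line carries the source document, model version, and prompt hash, the question "where did this number come from?" reduces to a single query over the log stream.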