Use Case

Your AI recommended that policy.
Can you prove why?

AI-native insurance platforms make thousands of recommendation, quote, and carrier-match decisions every day. Every one of those decisions affects a real person buying real coverage. When a customer challenges a recommendation, when a regulator asks how your model works, when a carrier questions why their product was or was not shown, you need a record.

20+
US states that have adopted the NAIC model bulletin on AI
$1.3T
US insurance market adopting AI at scale
Millions
Of AI recommendations made per platform per year
0
Sealed records your platform has right now
The Decision Surface

Every recommendation your platform makes is a decision. None of them have a record.

AI-native insurance platforms built their edge on better models. That same edge is now a liability if you cannot explain and document what those models decided and why.

🎯
Policy Recommendations
Core AI output
Your model matches a customer to a carrier and a coverage level. That recommendation is a decision. When the customer buys it and something goes wrong, who authorized the match?
No sealed record of the model version or criteria applied.
💰
Quote Generation
Thousands per day
Your pricing model generates a quote. The customer buys it. The carrier underwrites it. When the carrier disputes the risk assessment, what record do you have of how the quote was produced?
Quote generation has no independently verifiable audit trail.
๐Ÿข
Carrier Selection
Model-driven ranking
Your algorithm ranks carriers for each customer. Some carriers are shown. Some are not. Regulators are starting to ask whether those ranking decisions introduce bias or favor commercial relationships over customer outcomes.
Carrier ranking decisions are not documented at the point of decision.
⚠️
Risk Scoring
Per applicant
Your model assigns a risk score that influences what products a customer sees and at what price. When a customer is shown fewer options than others, that scoring decision needs to hold up under scrutiny.
Risk scoring decisions are subject to unfair-discrimination and disparate-impact review under state insurance law.
🔄
Renewal and Lapse Decisions
Automated at scale
Your platform influences whether a customer renews, switches, or lapses. Those are consequential decisions affecting real coverage. Each one should have a record.
Renewal influence decisions have no documented decision trail.
👤
Human Review Override
Edge cases and appeals
A customer escalates. A human reviews the AI recommendation and overrides it. That human decision needs to be on the record just as much as the original model output.
Human overrides are not captured in the same record as the original AI decision.
The Problem

You built a better model. Now you need to be able to explain every decision it makes.

AI-native platforms have a documentation problem that traditional insurers do not. A traditional insurer can point to an underwriter. You point to a model. That is a harder answer to give a regulator, a carrier, or a customer who feels they were poorly served.

State regulators are moving fast on AI transparency in insurance. The NAIC model bulletin on AI has already been adopted in multiple states. The question is not whether documentation will be required. The question is whether you will have it when they ask.

Customer Complaint
Why did your platform recommend this policy and not a cheaper one?
Your model ranked it highest for their profile. But you cannot show the customer or the regulator a sealed record of the criteria applied at the moment of the recommendation.
Carrier Dispute
Why is your platform recommending our competitor over us for this customer segment?
Your ranking algorithm made that call. Without a sealed record, you cannot prove the decision was model-driven and not commercially influenced.
Regulatory Audit
Show us every AI recommendation made to customers in this state this year.
You can pull logs. You cannot produce a sealed artifact that proves those logs were not modified and that the model version documented is the one that actually ran.
Discrimination Review
Can you prove your risk scoring model does not produce disparate outcomes by zip code?
You have model documentation. You do not have a decision-level record that proves every individual scoring decision matched the documented model behavior.
How ProjectLedger Helps

Every recommendation your model makes gets recorded the moment it happens.

Hook ProjectLedger into your recommendation engine, your quote API, your risk scoring pipeline. One promote() call per decision. The record is written at the moment the decision is made, signed immediately, and cannot be altered after the fact.
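
A minimal sketch of that hook in TypeScript. promote(), the agent actor, and the agent_autonomous authority mode are taken from this page; the package name, client shape, and field names are illustrative assumptions, not the shipped API.

// Hypothetical client; promote() is the call named above, everything else is assumed.
import { ProjectLedger } from "@projectledger/sdk";

const ledger = new ProjectLedger({ context: "ai-insurance-platform-q1-2026" });

interface Recommendation {
  modelVersion: string;
  topCarrier: string;
  segment: string;
}

async function recordRecommendation(customerId: string, rec: Recommendation) {
  // One promote() call per decision, written the moment the decision is made.
  await ledger.promote({
    actor: "agent",
    authority: "agent_autonomous",
    decision: "policy_recommendation",
    subject: customerId,
    modelVersion: rec.modelVersion,
    carrierRanked: rec.topCarrier,
    customerSegment: rec.segment,
  });
}

The call sits in the decision path rather than a nightly batch job: the record is signed at write time, which is what makes it tamper-evident instead of merely logged.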

When a regulator, a carrier, or a customer asks a hard question about a specific recommendation on a specific date, you open the artifact. The model version, the authority mode, the actor, the timestamp. All of it. Permanently on the record.

Recommendation Engine
agent_autonomous
Your model calls promote() on every recommendation. Actor: agent. Records model version, customer segment, carrier ranked, and the timestamp. Fully automated, fully on the record.
Risk Scoring Pipeline
agent_autonomous
Your scoring service fires promote() per applicant. The score, the model version, and the authority mode are sealed at the moment of evaluation. Independently verifiable.
Human Override Queue
human_in_the_loop
Agent recommends. A human reviewer approves or overrides. Both actions recorded in the same artifact. Proves human oversight happened for the decisions that required it. Sketched in code below these examples.
Carrier Ranking Decision
agent_autonomous
Every carrier ranking call is recorded with the ranking criteria version applied. When a carrier questions the decision, the sealed record answers without ambiguity.
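
Continuing the sketch above, the human-in-the-loop queue might look like the following. Only the human_in_the_loop mode and the requirement that both actions share one record come from this page; the method and field names remain assumptions.

// Agent recommends, human rules; both entries share the same case so the
// override and the original recommendation land in one decision trail.
async function reviewEscalation(caseId: string, rec: Recommendation, reviewer: string) {
  await ledger.promote({
    actor: "agent",
    authority: "human_in_the_loop",
    decision: "policy_recommendation",
    case: caseId,
    modelVersion: rec.modelVersion,
  });

  await ledger.promote({
    actor: "human",
    authority: "human_in_the_loop",
    decision: "override_review",
    case: caseId,
    reviewedBy: reviewer,
  });
}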
Regulatory Pressure

State regulators are already asking. The NAIC bulletin is just the beginning.

The National Association of Insurance Commissioners adopted its model bulletin on AI in 2023. Multiple states have already incorporated it. The direction is clear: AI decisions in insurance will require documentation. The platforms that have a sealed record when the audit arrives will be in a fundamentally different position than those that do not.

NAIC 2023
Model bulletin requires insurers to document AI decision-making processes and maintain records of algorithmic decisions.
Colorado
SB 21-169 requires documentation and testing of AI systems for unfair discrimination in insurance.
California
CDI guidance requires insurers to be able to explain individual AI-driven decisions to consumers on request.
EU AI Act
High-risk AI systems in insurance require full decision traceability and human oversight documentation.
The Deliverable

A sealed artifact your compliance team can hand to any regulator.

At the end of the evaluation period you receive Artifact 5, a KMS asymmetric-signed bundle containing every decision recorded during the window. The private signing key is non-exportable and remains inside Google Cloud KMS. Independently verifiable by anyone with the public key. No access to your systems required.
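
As a sketch of what "independently verifiable" means in practice: a standard offline signature check against the exported KMS public key. The file names and bundle layout here are assumptions; the verification itself is ordinary asymmetric cryptography and needs nothing from your platform.

// Offline verification with nothing but the public key and the bundle.
import { createPublicKey, verify } from "node:crypto";
import { readFileSync } from "node:fs";

const payload = readFileSync("artifact5.json");   // the sealed bundle (assumed name)
const signature = readFileSync("artifact5.sig");  // detached signature (assumed name)
const publicKey = createPublicKey(readFileSync("kms-public.pem", "utf8"));

// Works for RSA or ECDSA KMS signing keys over SHA-256.
const ok = verify("sha256", payload, publicKey, signature);
console.log(ok ? "VERIFIED" : "TAMPERED OR WRONG KEY");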

✓ Documents model version at the point of every decision
✓ Proves records were not modified after the fact
✓ Covers recommendations, scoring, and human overrides
✓ Verifiable without access to your platform
✓ No schema changes to your existing infrastructure
Artifact 5 / Insurance Platform ✓ VERIFIED
"context":       "ai-insurance-platform-q1-2026",
"authorityMap": {
  "human_led":        892,
  "human_in_the_loop": 4103,
  "agent_autonomous":  284719
},
"actors": {
  "human":   4995,
  "agent":   284719,
  "service": 0
},
"integrityDashboard": {
  "totalEntries": 289714,
  "verified":      289714  โœ“
},
"signedBy": {
  "provider":   "gcp-kms",
  "exportable": false  โœ“
}
Get Started

Start with your recommendation engine. See what a real record looks like.

A 30-minute call to scope the engagement. Thirty days later you have a sealed artifact you can hand to any regulator, carrier, or customer who asks. $15K to $60K, evaluation only.