Preppr Brings AI Accountability to Emergency Management with AgentSystems Notary

Preppr supports emergency management, business continuity, healthcare, public sector, and critical infrastructure teams in designing and delivering tabletop exercises and crisis simulations. One of our products, Preppr Collaborate, uses autonomous AI agents to gather and synthesize insights directly from community and organizational stakeholders on behalf of a Preppr customer.

Across the platform, we intentionally build in friction — clear human checkpoints within powerful AI workflows. The user remains in control. That approach slows automation, but it strengthens judgment, oversight, and accountability.
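To illustrate what such a checkpoint can look like in practice, here is a hypothetical sketch (not Preppr's actual implementation): an AI-proposed action executes only after a human explicitly approves it.

```python
# Hypothetical sketch of a human checkpoint: the AI proposes an action,
# but nothing executes until a person explicitly approves it.
def run_with_checkpoint(proposed_action: str, approver) -> str:
    decision = approver(proposed_action)  # e.g. prompt a reviewer in a UI
    if decision != "approve":
        return "skipped"
    return f"executed: {proposed_action}"

# An approver that rejects the action keeps the human in control.
result = run_with_checkpoint("email all stakeholders", lambda action: "reject")
print(result)  # skipped
```

The friction is the `approver` call itself: the workflow cannot proceed past it without a human decision.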

Preppr Collaborate is different.

Its purpose is to enable stakeholder engagement at scale and at speed. Adding human brakes into every interaction would undermine the product's core value. As a result, it is one of the few areas in Preppr where autonomy operates without real-time customer supervision.

That design choice creates an inevitable question:

If the system is acting autonomously and interacting with stakeholders directly, can we prove what it actually did?

We log every AI interaction. But those logs reside within our own infrastructure. There is no independent mechanism demonstrating that they have not been altered.

It is only a matter of time before a customer, auditor, or regulator asks us to prove the integrity of autonomous AI activity. We chose not to wait for that moment.

What We Did

We integrated AgentSystems Notary into Collaborate. The idea is simple: we keep all of our data private, but every time our AI acts, a unique digital fingerprint of that interaction is sent to independent storage that neither we nor AgentSystems can modify or delete.

Think of it like a notary stamp. We retain full custody of our sensitive data, but we can’t alter it without detection.
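A digital fingerprint here is a cryptographic hash. The sketch below assumes SHA-256 for illustration (Notary's exact algorithm is its own implementation detail): changing even one character of a log entry produces a completely different fingerprint, which is what makes tampering detectable.

```python
import hashlib

def fingerprint(log_entry: str) -> str:
    # A digital fingerprint: a fixed-length hash that changes
    # completely if the input changes at all.
    return hashlib.sha256(log_entry.encode("utf-8")).hexdigest()

original = fingerprint('{"agent": "collaborate", "action": "sent survey"}')
tampered = fingerprint('{"agent": "collaborate", "action": "sent surveys"}')

print(original == tampered)  # False: one added character changes the hash
```

Crucially, the fingerprint reveals nothing about the log entry's contents, which is why it can safely live outside our infrastructure.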

If a customer or auditor ever needs to verify the integrity of our AI logs:

  1. We export our logs
  2. The verifier generates fingerprints from those logs
  3. They compare against the independently stored fingerprints
  4. Match = the logs are exactly as they were. Mismatch = something changed.

No sensitive data ever leaves our infrastructure. The independent record is just fingerprints — meaningless on their own, but enough to prove our logs haven't been tampered with.
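The four-step verification above can be sketched as a simple comparison loop. This is hypothetical illustration code, assuming the fingerprints are SHA-256 hashes of the raw log entries:

```python
import hashlib

def verify_logs(exported_logs: list[str], stored_fingerprints: list[str]) -> bool:
    # Step 2: regenerate fingerprints from the exported logs.
    regenerated = [
        hashlib.sha256(entry.encode("utf-8")).hexdigest()
        for entry in exported_logs
    ]
    # Steps 3-4: compare against the independently stored fingerprints.
    return regenerated == stored_fingerprints

logs = ['{"action": "asked question 1"}', '{"action": "recorded response"}']
stored = [hashlib.sha256(e.encode("utf-8")).hexdigest() for e in logs]

print(verify_logs(logs, stored))                          # True: logs unchanged
print(verify_logs(["altered entry"] + logs[1:], stored))  # False: tampering detected
```

The verifier never needs access to our systems; exported logs plus the independently stored fingerprints are enough.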

How Notary Made This Easy

Notary ships as a plugin for LangChain, the AI framework that powers Collaborate's agents (see LangChain integration docs). Adding it was straightforward — we dropped it into our existing setup without changing any of our AI logic.

Here's what an integration actually looks like:

# Standard library and LangChain imports
import os

from langchain_anthropic import ChatAnthropic

# Import AgentSystems Notary modules
from agentsystems_notary import (
    LangChainNotary,
    RawPayloadStorage,
    CustodiedHashStorage,
    AwsS3StorageConfig,
)

# Specify where full audit payloads are stored (your S3 bucket)
raw_payload_storage = RawPayloadStorage(
    storage=AwsS3StorageConfig(
        bucket_name=os.environ["ORG_AWS_S3_BUCKET_NAME"],
        aws_access_key_id=os.environ["ORG_AWS_S3_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["ORG_AWS_S3_SECRET_ACCESS_KEY"],
        aws_region=os.environ["ORG_AWS_S3_REGION"],
    ),
)

# Specify where verification hashes are stored
hash_storage = [
    CustodiedHashStorage(
        api_key=os.environ["AGENTSYSTEMS_NOTARY_API_KEY"],
        slug="customer-123",
    ),
]

# Initialize notary
notary = LangChainNotary(
    raw_payload_storage=raw_payload_storage,
    hash_storage=hash_storage,
    debug=True,
)

# Add to any LangChain model
model = ChatAnthropic(
    model="claude-sonnet-4-5-20250929",
    api_key=os.environ["ANTHROPIC_API_KEY"],
    callbacks=[notary],
)

That's it. No changes to agent code. No changes to AI workflows. No new infrastructure to manage.

  • Lines of code added: 25
  • Changes to existing AI logic: zero

Why This Matters for Our Customers

Preppr's customers operate in environments where trust is foundational. Emergency managers, public health leaders, and critical infrastructure operators answer to their communities — and increasingly to regulators who want clarity on how AI is being used in operational workflows.

Explainability is necessary. But in products like Collaborate, where AI operates autonomously, explanation alone is not sufficient. The higher standard is auditability — the ability to independently verify what the system did, when it did it, and that the record has not been altered.

When we enter a procurement discussion, we can say:

"Every autonomous AI interaction in Collaborate is independently fingerprinted by a neutral third party."

This changes the conversation. We are not asking buyers to rely on internal logs or assurances. We have architected the system so that integrity can be verified outside of our control.

If you are evaluating AI vendors — or building AI systems internally — this is the level of accountability you should expect.