10+ Years of Proven FedRAMP &
Cloud Security Success

We are a team of FedRAMP and mission-critical SaaS security experts with 10+ consecutive years of successfully running FedRAMP clouds. Our backgrounds span Dell, Palantir, Virtustream, Oracle, BCG, Avanade, and Palo Alto Networks, bringing deep expertise in building and securing mission-critical cloud solutions for government and enterprise.

Government SaaS by the Numbers

$100Bn

US Government annual spend on software.

$20Bn

US Government annual spend on SaaS.

50% YoY

Year-over-year growth of US Government spend on SaaS.

325

The number of NIST 800-53 controls that must be satisfied for FedRAMP Moderate.

400

The total number of FedRAMP-authorized applications as of March 2025.

10,000+

The number of applications in AWS Marketplace.

The KNOX mission:

Unlock access to cutting edge software for the Government.

[Graphic: app counts across commercial stores (10,000+, 7,000+, 7,000+, and 3,000+ apps) alongside the future FedRAMP Marketplace, with the help of Knox]
Get in touch

Knox Blog

The FedStart Kubernetes infrastructure – which runs on top of AWS GovCloud and Azure Government – manages FIPS-validated encryption, logging, authentication, vulnerability scanning, and more (so that you don’t have to).

See all blogs

Part 3: Toward Continuous Compliance: Open Telemetry, Control Coverage, and the Role of the 3PAO

government
November 12, 2025


By Casey Jones, Chief Architect of Knox Systems

In Part 1, we proposed the concept of a Security Ledger: a cryptographically verifiable system of record for compliance that updates continuously based on real-time evidence. In Part 2, we detailed how risk-adjusted confidence scores can be calculated using Bayes’ Theorem and recorded immutably in LedgerDB.

In this third and final part of the series, we focus on the next frontier: standardizing telemetry coverage across controls, open-sourcing the control-to-evidence map, and redefining the role of the 3PAO to ensure integrity in a continuous compliance world.

Building the Open Compliance Telemetry Layer

For the Security Ledger to be trustworthy, it must be fed comprehensive, observable evidence across the full FedRAMP boundary. That means creating a control-to-telemetry map that:

  • Defines what evidence types are relevant for each FedRAMP control
  • Maps those to Prometheus-compatible metrics
  • Defines evidence freshness, decay windows, and severity
  • Supports automated generation of control coverage reports
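Concretely, one entry in such a map might look like the following sketch (the metric names, fields, and thresholds here are illustrative, not a published schema):

```json
{
  "SI-2": {
    "description": "Flaw Remediation",
    "metrics": ["vuln_scan_last_success_timestamp", "unpatched_high_cvss_total"],
    "freshness_window_hours": 24,
    "decay_after_hours": 72,
    "severity": "high"
  }
}
```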

At Knox, we’re working to open-source this telemetry model so that:

  • Every stakeholder (CSPs, 3PAOs, agencies) understands the required observability footprint
  • No one is guessing what counts as evidence
  • The community can contribute new detectors and mappings

Just like OWASP standardized threat awareness, we need a COTM: a Common Observability for Trust Model.

Coverage Is the Control: Incomplete Telemetry ≠ Compliance

In the current FedRAMP model, it's possible to "pass" controls without actually observing the whole system. But in a ledger-based model, telemetry gaps are violations.

Examples of common pitfalls:

  • Only scanning certain subnets or environments (e.g., “we forgot our staging VPN”)
  • Disabling or misconfiguring logging for noisy subsystems
  • Letting vulnerability scan coverage drop below 100% of the boundary
  • Using static evidence from prior scans without freshness guarantees
  • Allowing Prometheus exporters to fail silently without alerting

In a real-time, risk-scored model, all of these create confidence decay—and should result in lowered scores or even automated POA&M creation.
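The last of these pitfalls is detectable in Prometheus itself: a standard alerting rule on the built-in `up` metric turns a silently failing exporter into an explicit signal. A minimal sketch (the group name and labels are illustrative):

```yaml
groups:
  - name: telemetry-coverage
    rules:
      - alert: ExporterDown
        expr: up == 0          # Prometheus sets up=0 when a scrape target fails
        for: 5m
        labels:
          severity: high
        annotations:
          summary: "Exporter {{ $labels.job }} on {{ $labels.instance }} has stopped reporting"
```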

The New Role of the 3PAO: Continuous Verifier of Scope, Integrity, and Fair Play

In a world where compliance is driven by real-time evidence, the Third Party Assessment Organization (3PAO) becomes more critical—not less.

But their role shifts from "point-in-time validator" to continuous integrity checker.

Here’s what the 3PAO’s job looks like in a Knox-style system:

1. Boundary Enforcement

  • Validate that all components within the FedRAMP boundary are included in telemetry coverage
  • Detect "convenient omissions" (e.g., shadow servers, unmonitored edge cases)

2. Signal Integrity

  • Confirm that metrics flowing into the Security Ledger are accurate, unmodified, and traceable
  • Review sampling intervals, evidence freshness, and exporter health
  • Perform forensic verification of selected evidence streams

3. Anti-Fraud Auditing

  • Detect signs of foul play or negligence, such as:
    • Turning off scanning before high-risk deploys
    • Creating “burner” environments that avoid monitoring
    • Suppressing alert signals or log forwarders
    • Replaying old data to simulate real-time telemetry

4. Ledger Auditing

  • Verify the cryptographic chain of trust in the ledger system (e.g., via Amazon Aurora PostgreSQL or a blockchain)
  • Ensure control scores are only adjusted by valid evidence with assigned LLRs
  • Validate that manual overrides are documented and signed
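As a sketch of what ledger auditing involves, assume each revision stores the hash of its predecessor; verifying the chain then reduces to recomputing those hashes (illustrative code, not the production verifier):

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Hash of a record's canonical (sorted-key) JSON form."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify_chain(records: list[dict]) -> bool:
    """Each record must carry the hash of its predecessor; any edit breaks the chain."""
    return all(
        curr["prev_hash"] == record_hash(prev)
        for prev, curr in zip(records, records[1:])
    )
```

Because every record is hashed over its contents plus its predecessor's hash, editing any historical score silently invalidates every record after it, which is exactly what the 3PAO checks for.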

In this model, the 3PAO becomes the trust anchor of the continuous compliance pipeline.

They’re not just checking boxes—they’re inspecting the wiring.

Transparency Through Community

All of this only works if the model is open:

  • The LLRs for each control must be public
  • The control-to-metrics map must be versioned and community-governed
  • The Security Ledger’s core schema must be inspectable and verifiable

Just as large language models opened their weights to gain credibility, compliance models must open their logic. Closed-source compliance logic is a liability.

The Future of FedRAMP Is Verifiable, Transparent, and Alive

We’re not just building for ATOs—we’re building for continuous trust.

FedRAMP’s future lies in:

  • Real-time metrics
  • Probabilistic control scoring
  • Immutable audit trails
  • Open-source control logic
  • 3PAOs as continuous validators, not just periodic checkers

At Knox, we’re committed to that shift—because trust shouldn’t expire every 12 months.

Frequently Asked Questions

1. What is the purpose of open telemetry in continuous FedRAMP compliance?
Open telemetry ensures every system component is continuously monitored through streaming or real-time metrics, removing blind spots and enabling transparent, evidence-based compliance tracking.

2. How does AI improve control coverage across the FedRAMP boundary?
AI analyzes telemetry data, identifies coverage gaps, and recalculates confidence scores automatically when evidence decays or monitoring fails.

3. Why is incomplete telemetry considered a compliance risk?
Missing or outdated telemetry reduces visibility into system integrity, lowers confidence scores, and indicates that certain controls may not be fully effective.

4. How is the role of the 3PAO evolving in AI-driven compliance systems?
3PAOs are shifting from one-time assessors to ongoing integrity verifiers who monitor evidence streams, validate ledger accuracy, and detect fraudulent or incomplete data.

5. Why must continuous compliance models be open-source and transparent?
Transparency builds trust: open-sourcing the control dictionaries, telemetry mappings, and ledger schemas ensures that compliance logic is verifiable and auditable.

Part 2: Toward Continuous Compliance: Quantifying Risk with Bayes and Capturing Evidence in a Security Ledger

government
November 12, 2025


By Chris Johnson, CTO of Knox Systems

In Part 1, we introduced the Security Ledger—a real-time, tamper-proof system that reframes FedRAMP compliance as a probabilistic, continuously updated measure, not a static report. Now, in Part 2, we go under the hood.

We'll show how Bayesian inference, log-likelihood ratios (LLRs), and ledger-based transparency work together to produce a living risk engine—one that is inspectable, auditable, and mathematically defensible.

And yes, we brought code and real data.

From Binary to Bayesian: Probabilistic Assurance of Control Effectiveness

FedRAMP controls aren’t simply "on" or "off." Their effectiveness shifts with context, evidence, and time. So we treat each control as a probabilistic hypothesis:

P(Control is Effective | Evidence)

This lets us reason continuously over real-world telemetry: IAM logs, patch scans, drift reports, vulnerability findings, and more. The system updates confidence scores in real time—no waiting for annual audits.

Step 1: Assigning Prior Probabilities

Every control begins with a prior belief—a starting point for how likely it is to be effective. These priors are informed by:

  • Control category (e.g. access control vs. incident response)

  • Historical failure rates

  • Threat modeling and exploit severity

  • Complexity and likelihood of drift

Example:

{
  "AC-2": { "prior": 0.90 },
  "SC-12": { "prior": 0.75 },
  "SI-2": { "prior": 0.60 }
}

These priors are tunable and evolve with new deployments and observed outcomes.

Step 2: Defining Evidence and LLRs

We define discrete evidence events—findings that either increase or decrease confidence in a control. Each is assigned a log-likelihood ratio (LLR):

log(posterior odds) = log(prior odds) + Σ LLRs

This additive update makes real-time scoring efficient and interpretable.

Example for SI-2 (Flaw Remediation):

"SI-2": {
  "evidence": [
    { "name": "high_cvss_unpatched", "llr": -2.5 },
    { "name": "monthly_patching_completed", "llr": 1.0 },
    { "name": "vuln_scanner_stale", "llr": -1.0 }
  ]
}

LLRs are computed based on empirical data and mapped to actual telemetry triggers.
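That computation can be made concrete: an evidence event's LLR is the log of how much more (or less) likely it is under an effective control than under an ineffective one. A sketch with invented empirical rates (natural log assumed):

```python
import math

def llr(p_given_effective: float, p_given_ineffective: float) -> float:
    """Log-likelihood ratio of one evidence event (natural log)."""
    return math.log(p_given_effective / p_given_ineffective)

# Invented rates: a completed monthly patch run is observed in 80% of
# effective SI-2 implementations but only 30% of ineffective ones.
print(round(llr(0.80, 0.30), 2))  # 0.98, close to the +1.0 used above
```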

Real-World Example: AC-2 (Account Management)

From our working model:

  • Risk Scenario: A former employee's account is still active and exploited
  • P(A): 0.3 (probability of compromise if ineffective)
  • Evidence LLRs:
    • Account review overdue: -1.2
    • No MFA for privileged accounts: -1.5
    • Active Directory logs confirm removal: +1.0

This model is applied to all 323 FedRAMP Moderate controls using structured data and open analysis:
🔗 GitHub Repo: Knox-Gov/nist_bayes_risk_auto
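These numbers can be checked directly. Assuming natural-log LLRs and the AC-2 prior of 0.90 from the dictionary above, the posterior lands near 0.62 (a sketch, not the repo's code):

```python
import math

prior = 0.90                # AC-2 prior from the control dictionary
llrs = [-1.2, -1.5, +1.0]   # overdue review, no MFA, AD removal confirmed

log_odds = math.log(prior / (1 - prior)) + sum(llrs)
posterior = 1 / (1 + math.exp(-log_odds))

print(round(posterior, 3))  # 0.622: confidence drops from 90% to ~62%
```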

Prioritizing What Matters: The High-Risk Controls

Using this model, we ranked all FedRAMP Moderate controls by severity and potential impact.

The Top 11 High-Risk Controls stood out due to:

  • High exploitation risk
  • Poor observability without targeted telemetry
  • Broad system impact if compromised

These controls form the foundation of our telemetry blueprint—what every system should continuously monitor and score.

Step 3: Continuous Confidence Calculation

Every time Prometheus scrapes a new metric:

  1. Convert prior to log-odds
  2. Add up matching LLRs
  3. Convert back to a probability using the logistic function:

P = 1 / (1 + e^(-log odds))

This produces a dynamic confidence score for each control, updated in real time as evidence changes.
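The three steps can be sketched as one small function (a simplified illustration of the update, not the production scoring engine):

```python
import math

def update_confidence(prior: float, matched_llrs: list[float]) -> float:
    """Fold matched evidence LLRs into a prior; return the posterior probability."""
    log_odds = math.log(prior / (1 - prior))  # 1. convert prior to log-odds
    log_odds += sum(matched_llrs)             # 2. add up matching LLRs
    return 1 / (1 + math.exp(-log_odds))      # 3. back to probability (logistic)

# SI-2 with a stale scanner (-1.0) and an unpatched high-CVSS finding (-2.5)
score = update_confidence(0.60, [-1.0, -2.5])  # confidence collapses to ~4%
```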

Step 4: Writing to the Security Ledger (Amazon Aurora PostgreSQL)

Every update—control ID, evidence, LLRs, and confidence score—is appended as a new, immutable revision to Amazon Aurora PostgreSQL, our Security Ledger backend.

Each record includes:

  • Control ID
  • Timestamps
  • Prior and posterior probabilities
  • Evidence names + timestamps
  • LLR sum
  • Operator ID (if manually overridden)

This creates a cryptographically verifiable audit trail. Auditors and agencies can trace any score, see what changed, and confirm whether evidence was valid and in-scope.

Why This Must Be Open

If machines are going to tell us when a control is “healthy,” then the logic behind it must be transparent.

That’s why we’re open-sourcing:

  • The LLR control dictionary
  • Control-to-evidence mappings
  • Assumptions and source data

Just like LLMs disclose model weights and benchmarks, compliance logic must be explainable, auditable, and improvable by the community.

Compliance is too important to be a black box.

Recap: What We’ve Built

  • Bayesian engine for dynamic scoring
  • Prior and evidence probabilities for every FedRAMP Moderate control
  • Identification of top 11 high-risk controls
  • Immutable compliance ledger in Amazon Aurora PostgreSQL
  • Prometheus telemetry mapping in progress
  • GitHub: Open LLR control spec

Frequently Asked Questions

1. How does Bayesian inference improve FedRAMP compliance monitoring?
Bayesian inference continuously updates each control’s confidence level based on real-time evidence, allowing compliance teams to quantify risk dynamically rather than rely on periodic assessments.

2. What role does AI play in continuous compliance for SaaS vendors?
AI automates evidence collection, calculates log-likelihood ratios (LLRs) or similar statistical indicators, and updates control probabilities in real time—transforming compliance from static documentation into a living risk model.

3. How does Knox use telemetry tools like Prometheus for compliance tracking?
Knox leverages Prometheus to scrape and store live metrics tied to FedRAMP controls, enabling continuous monitoring and automated confidence score updates within its Security Ledger.

4. Why is transparency important in AI-driven compliance systems?
Open-source models and transparent control dictionaries and explainability maps ensure that the AI logic behind compliance scoring remains auditable, explainable, and trustworthy for agencies and auditors.

5. How does the Security Ledger ensure auditability in real time?
Every compliance update is immutably logged in a managed PostgreSQL-compatible database (such as Amazon Aurora) with timestamps, evidence data, and probability revisions—creating a cryptographically verifiable audit trail.

Coming in Part 3:

We’ll go deeper into instrumentation—mapping every FedRAMP Moderate control to Prometheus-compatible metrics and redefining the role of the 3PAO as a real-time verifier of system integrity.

The future of trust is continuous, explainable, and open. Let’s build it together.

Part 1: FedRAMP Needs a Security Ledger—Not Just a Checklist

government
November 12, 2025


By Irina Denisenko, CEO of Knox Systems

FedRAMP has long set the benchmark for cloud security compliance in the public sector. But its current structure—based on periodic assessments and voluminous documentation—struggles to reflect real-time risk and operational truth. What’s missing is not just a better checklist. What’s missing is a Security Ledger.

Just as blockchain introduced the concept of an immutable ledger to prove ownership in crypto, a Security Ledger would establish a tamper-proof, transparent record of an organization’s control posture: Are you compliant or not—and with what level of confidence?

But unlike public blockchains, this ledger isn’t visible to the world. Access is strictly limited to the parties who need to validate the system's security:

  • The Cloud Service Provider (CSP)
  • The consuming Agency(ies)
  • The authorized Third-Party Assessors (3PAOs)

No one else. This is a permissioned ledger, designed for shared trust between verified participants, not public exposure.

But security controls aren't binary. In practice, compliance lives on a spectrum. Some controls are fully satisfied, others only partially. Evidence decays. Systems drift. Risk must be constantly re-evaluated. That’s where Bayesian reasoning comes in. By applying Bayes' Theorem to control assessment—drawing from the excellent work by Stephen Shaffer—we can quantify our belief in the effectiveness of each control and update it continuously based on new observations.
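As a toy illustration of that update, with invented numbers: suppose a control is believed 90% effective, and a particular audit finding is three times as likely to appear when the control is broken as when it works. One application of Bayes' Theorem drops the belief to 75%:

```python
p_effective = 0.90      # prior belief the control is effective
p_e_given_eff = 0.20    # chance of seeing this finding if the control works
p_e_given_ineff = 0.60  # chance of seeing it if the control is broken

posterior = (p_e_given_eff * p_effective) / (
    p_e_given_eff * p_effective + p_e_given_ineff * (1 - p_effective)
)
print(posterior)  # ≈ 0.75: one piece of evidence, a measurable drop in confidence
```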

So how do we build this?

The answer lies in Prometheus—the open-source monitoring system that already powers observability at scale across the cloud. Prometheus is built for high-volume, time-series data and excels at continuously scraping, storing, and querying metrics. It's an ideal foundation for a risk-adjusted compliance telemetry layer.

Imagine a system where every FedRAMP control has a corresponding set of observable metrics—scraped, labeled, and stored over time using Prometheus. These metrics feed into a Bayesian model that computes dynamic confidence scores for each control. When paired with a cryptographically verifiable ledger system, this becomes a living, breathing compliance profile: a Security Ledger that is transparent, provable, and grounded in operational reality.

At Knox, we’re building toward this future—one where compliance is not a static report, but a living signal. Powered by open standards like Prometheus and informed by probabilistic models, this is how we transform trust: from paperwork to math.

Frequently Asked Questions

1. What is a Security Ledger in the context of FedRAMP compliance?
A Security Ledger is a permissioned, tamper-resistant record of an organization’s control posture, providing real-time visibility into compliance confidence rather than relying on static documentation.

2. How does AI enhance a Security Ledger for continuous compliance?
AI models use Bayesian reasoning to analyze evolving data from systems like Prometheus, updating confidence levels for each control as new security evidence emerges.

3. Why is real-time telemetry better than checklist-based compliance?
Continuous telemetry powered by AI and observability tools captures live control data, giving agencies a dynamic picture of security health instead of outdated audit snapshots.

4. How can Bayesian inference improve FedRAMP control assessment?
By applying Bayes’ Theorem, AI can continuously quantify the likelihood that a control is still operating as intended, creating a measurable, evolving trust signal for assessors and agencies.

5. What technologies power Knox’s vision for a Security Ledger?
Knox leverages open-source systems like Prometheus for time-series monitoring, Bayesian models for risk adjustment, and cryptographically verifiable storage for auditable compliance.

Stay tuned for Part 2, where our CTO will deep-dive into how Knox envisions the mechanics behind risk-adjusting control confidence using Bayesian inference—and how we ensure the immutability and auditability of that data using Amazon Aurora PostgreSQL. We’ll walk through how likelihood ratios are assigned, how evidence is evaluated in real time, and why open-sourcing the control model is essential to building trust in the next era of FedRAMP.