10+ Years of Proven FedRAMP &
Cloud Security Success
We are a team of FedRAMP and mission-critical SaaS security experts with 10+ consecutive years of successfully running FedRAMP clouds. Our backgrounds span Dell, Palantir, Virtustream, Oracle, BCG, Avanade, and Palo Alto Networks, bringing deep expertise in building and securing mission-critical cloud solutions for government and enterprise.
Government SaaS by the Numbers
US Government annual spend on software.
US Government annual spend on SaaS.
50% year-over-year growth of US Government spend on SaaS.
The number of NIST 800-53 controls that must be satisfied for FedRAMP Moderate.
The total number of FedRAMP Authorized applications as of March 2025.
The number of applications in AWS Marketplace.
Unlock access to cutting-edge software for the Government.
Knox Blog
The FedStart Kubernetes infrastructure – which runs on top of AWS GovCloud and Azure Government – manages FIPS-validated encryption, logging, authentication, vulnerability scanning, and more (so that you don’t have to).
Chad Tetreault Joins Knox Systems Federal Advisory Board
Knox Systems today announced that Chad Tetreault, Zscaler Public Sector CTO and former Deputy Chief Technology Officer and Deputy Chief Artificial Intelligence Officer at the Department of Homeland Security (DHS), has joined the company’s Federal Advisory Board.
A proven technology leader and AI strategist, Tetreault has spent his career bridging the gap between emerging technology and mission impact. At DHS, he led the design and deployment of proprietary AI solutions that modernized Immigration services, streamlined data operations, and advanced the department’s role as a leader in responsible AI innovation. His appointment strengthens Knox’s mission to help agencies accelerate cloud and AI adoption with the compliance, speed, and resilience required of federal systems.

At Zscaler, Tetreault leads public sector AI strategy and governance, helping highly regulated environments adopt and defend next-generation AI capabilities. He also serves on the MIT Gen AI Global leadership team as Deputy Co-Lead of the Agent Dev Department, focused on democratizing AI innovation worldwide.
Tetreault continues to define unified technology strategies, integrate data and engineering across mission systems, and mentor high-performing technical teams that deliver. His work consistently emphasizes innovation with accountability, ensuring AI and analytics can drive measurable outcomes without compromising compliance or security.

The Knox Federal Advisory Board brings together senior leaders from defense, civilian, and technology sectors to advise on emerging policy, compliance, and modernization strategies aligned with FedRAMP, NIST, and DoW frameworks. Tetreault’s appointment follows recent additions including David Epperson, former Deputy CIO of the Executive Office of the President and former Deputy CIO and CISO of the Cybersecurity and Infrastructure Security Agency, and Carrie Lee, Deputy CIO of the Department of Veterans Affairs, expanding Knox’s leadership bench across AI, cybersecurity, and federal IT transformation.
About Knox
Knox helps SaaS companies achieve FedRAMP in 90 days or less, at 90% of the traditional cost.
We run the largest FedRAMP Authorized managed cloud platform in the world, bringing a decade-long track record of secure and compliant operations.
Trusted by Adobe since 2014, Knox streamlines the path to FedRAMP authorization, enabling vendors to achieve FedRAMP in just 90 days across AWS, Azure, and GCP.
Part 3: Toward Continuous Compliance: Open Telemetry, Control Coverage, and the Role of the 3PAO
By Casey Jones, Chief Architect of Knox Systems
In Part 1, we proposed the concept of a Security Ledger: a cryptographically verifiable system of record for compliance that updates continuously based on real-time evidence. In Part 2, we detailed how risk-adjusted confidence scores can be calculated using Bayes’ Theorem and recorded immutably in LedgerDB.
In this third and final part of the series, we focus on the next frontier: standardizing telemetry coverage across controls, open-sourcing the control-to-evidence map, and redefining the role of the 3PAO to ensure integrity in a continuous compliance world.
Building the Open Compliance Telemetry Layer
In order for the Security Ledger to be trustworthy, it must be fed with comprehensive, observable evidence across the full FedRAMP boundary. That means creating a control-to-telemetry map that:
- Defines what evidence types are relevant for each FedRAMP control
- Maps those to Prometheus-compatible metrics
- Defines evidence freshness, decay windows, and severity
- Supports automated generation of control coverage reports
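Concretely, such a map can be sketched as a small data structure plus a coverage check. This is an illustrative assumption of how the open-sourced model might look: the metric names, freshness windows, and severities below are hypothetical, not Knox's published mappings.

```python
# Hypothetical control-to-telemetry map. Control IDs are real NIST 800-53
# identifiers; metric names, freshness windows, and severities are
# illustrative assumptions for the sketch.
CONTROL_TELEMETRY_MAP = {
    "SI-2": {  # Flaw Remediation
        "metrics": ["vuln_scan_last_success_timestamp", "unpatched_high_cvss_total"],
        "freshness_seconds": 7 * 24 * 3600,   # evidence older than 7 days decays
        "severity": "high",
    },
    "AC-2": {  # Account Management
        "metrics": ["account_review_last_completed_timestamp",
                    "privileged_accounts_without_mfa"],
        "freshness_seconds": 30 * 24 * 3600,
        "severity": "high",
    },
}

def coverage_report(observed_metrics: set[str]) -> dict[str, list[str]]:
    """Return, per control, the mapped metrics that are NOT being observed."""
    return {
        control_id: [m for m in spec["metrics"] if m not in observed_metrics]
        for control_id, spec in CONTROL_TELEMETRY_MAP.items()
    }
```

Given the set of metric names actually being scraped, the report surfaces exactly which controls have telemetry gaps — the "no one is guessing" property the open model is meant to provide.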
At Knox, we’re working to open-source this telemetry model so that:
- Every stakeholder (CSPs, 3PAOs, agencies) understands the required observability footprint
- No one is guessing what counts as evidence
- The community can contribute new detectors and mappings
Just like OWASP standardized threat awareness, we need a COTM — Common Observability for Trust Model.
Coverage Is the Control: Incomplete Telemetry ≠ Compliance
In the current FedRAMP model, it's possible to "pass" controls without actually observing the whole system. But in a ledger-based model, telemetry gaps are violations.
Examples of common pitfalls:
- Only scanning certain subnets or environments (e.g., “we forgot our staging VPN”)
- Disabling or misconfiguring logging for noisy subsystems
- Letting vulnerability scan coverage drop below 100% of the boundary
- Using static evidence from prior scans without freshness guarantees
- Allowing Prometheus exporters to fail silently without alerting
In a real-time, risk-scored model, all of these create confidence decay—and should result in lowered scores or even automated POA&M creation.
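One illustrative way to model that decay (an assumption for the sketch, not a statement of Knox's production scoring) is to down-weight an evidence event's LLR exponentially once it ages past its freshness window:

```python
import math

def decayed_llr(llr: float, age_seconds: float, freshness_seconds: float) -> float:
    """Down-weight an evidence LLR once it ages past its freshness window.

    Fresh evidence contributes its full LLR; after the window, its weight
    halves for every additional window elapsed. The half-life schedule is an
    assumption. Note this pulls stale evidence toward neutral (LLR 0); a real
    policy might instead keep negative findings in force until remediated.
    """
    if age_seconds <= freshness_seconds:
        return llr
    windows_stale = (age_seconds - freshness_seconds) / freshness_seconds
    return llr * math.pow(0.5, windows_stale)
```

Under this scheme, a positive scan result that is two windows stale contributes only a quarter of its original confidence boost — which is exactly the "confidence decay" a silent exporter failure should trigger.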
The New Role of the 3PAO: Continuous Verifier of Scope, Integrity, and Fair Play
In a world where compliance is driven by real-time evidence, the Third Party Assessment Organization (3PAO) becomes more critical—not less.
But their role shifts from "point-in-time validator" to continuous integrity checker.
Here’s what the 3PAO’s job looks like in a Knox-style system:
1. Boundary Enforcement
- Validate that all components within the FedRAMP boundary are included in telemetry coverage
- Detect "convenient omissions" (e.g., shadow servers, unmonitored edge cases)
2. Signal Integrity
- Confirm that metrics flowing into the Security Ledger are accurate, unmodified, and traceable
- Review sampling intervals, evidence freshness, and exporter health
- Perform forensic verification of selected evidence streams
3. Anti-Fraud Auditing
- Detect signs of foul play or negligence, such as:
  - Turning off scanning before high-risk deploys
  - Creating “burner” environments that avoid monitoring
  - Suppressing alert signals or log forwarders
  - Replaying old data to simulate real-time telemetry
4. Ledger Auditing
- Verify the cryptographic chain of trust in the ledger system (e.g., via Amazon Aurora PostgreSQL or blockchain)
- Ensure control scores are only adjusted by valid evidence with assigned LLRs
- Validate that manual overrides are documented and signed
In this model, the 3PAO becomes the trust anchor of the continuous compliance pipeline.
They’re not just checking boxes—they’re inspecting the wiring.
Transparency Through Community
All of this only works if the model is open:
- The LLRs for each control must be public
- The control-to-metrics map must be versioned and community-governed
- The Security Ledger’s core schema must be inspectable and verifiable
Just as large language models opened their weights to gain credibility, compliance models must open their logic. Closed-source compliance logic is a liability.
The Future of FedRAMP Is Verifiable, Transparent, and Alive
We’re not just building for ATOs—we’re building for continuous trust.
FedRAMP’s future lies in:
- Real-time metrics
- Probabilistic control scoring
- Immutable audit trails
- Open-source control logic
- 3PAOs as continuous validators, not just periodic checkers
At Knox, we’re committed to that shift—because trust shouldn’t expire every 12 months.
Frequently Asked Questions
1. What is the purpose of open telemetry in continuous FedRAMP compliance?
Open telemetry ensures every system component is continuously monitored through streaming or real-time metrics, removing blind spots and enabling transparent, evidence-based compliance tracking.
2. How does AI improve control coverage across the FedRAMP boundary?
AI analyzes telemetry data, identifies coverage gaps, and recalculates confidence scores automatically when evidence decays or monitoring fails.
3. Why is incomplete telemetry considered a compliance risk?
Missing or outdated telemetry reduces visibility into system integrity, lowers confidence scores, and indicates that certain controls may not be fully effective.
4. How is the role of the 3PAO evolving in AI-driven compliance systems?
3PAOs are shifting from one-time assessors to ongoing integrity verifiers who monitor evidence streams, validate ledger accuracy, and detect fraudulent or incomplete data.
5. Why must continuous compliance models be open-source and transparent?
Transparency builds trust: open-sourcing model dictionaries, explainability maps, telemetry mappings, and ledger schemas ensures that compliance logic is verifiable and auditable.
Part 2: Toward Continuous Compliance: Quantifying Risk with Bayes and Capturing Evidence in a Security Ledger
By Chris Johnson, CTO of Knox Systems
In Part 1, we introduced the Security Ledger—a real-time, tamper-proof system that reframes FedRAMP compliance as a probabilistic, continuously updated measure, not a static report. Now, in Part 2, we go under the hood.
We'll show how Bayesian inference, log-likelihood ratios (LLRs), and ledger-based transparency work together to produce a living risk engine—one that is inspectable, auditable, and mathematically defensible.
And yes, we brought code and real data.
From Binary to Bayesian: Probabilistic Assurance of Control Effectiveness
FedRAMP controls aren’t simply "on" or "off." Their effectiveness shifts with context, evidence, and time. So we treat each control as a probabilistic hypothesis:
P(Control is Effective | Evidence)
This lets us reason continuously over real-world telemetry: IAM logs, patch scans, drift reports, vulnerability findings, and more. The system updates confidence scores in real time—no waiting for annual audits.
Step 1: Assigning Prior Probabilities
Every control begins with a prior belief—a starting point for how likely it is to be effective. These priors are informed by:
- Control category (e.g. access control vs. incident response)
- Historical failure rates
- Threat modeling and exploit severity
- Complexity and likelihood of drift
Example:
{
  "AC-2": { "prior": 0.90 },
  "SC-12": { "prior": 0.75 },
  "SI-2": { "prior": 0.60 }
}
These priors are tunable and evolve with new deployments and observed outcomes.
Step 2: Defining Evidence and LLRs
We define discrete evidence events—findings that either increase or decrease confidence in a control. Each is assigned a log-likelihood ratio (LLR):
log(posterior odds) = log(prior odds) + Σ LLRs
This additive update makes real-time scoring efficient and interpretable.
Example for SI-2 (Flaw Remediation):
"SI-2": {
  "evidence": [
    { "name": "high_cvss_unpatched", "llr": -2.5 },
    { "name": "monthly_patching_completed", "llr": 1.0 },
    { "name": "vuln_scanner_stale", "llr": -1.0 }
  ]
}
LLRs are computed based on empirical data and mapped to actual telemetry triggers.
Real-World Example: AC-2 (Account Management)
From our working model:
- Risk Scenario: A former employee's account is still active and exploited
- P(A): 0.3 (probability of compromise if ineffective)
- Evidence LLRs:
  - Account review overdue: -1.2
  - No MFA for privileged accounts: -1.5
  - Active Directory logs confirm removal: +1.0
This model is applied to all 323 FedRAMP Moderate controls using structured data and open analysis:
🔗 GitHub Repo: Knox-Gov/nist_bayes_risk_auto
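Working through the AC-2 numbers above with the additive log-odds update from Step 2 (natural logarithms assumed, using the 0.90 prior from Step 1) gives a concrete posterior:

```python
import math

# AC-2 prior from Step 1 and the three evidence LLRs listed above:
# overdue account review, no MFA on privileged accounts, AD removal confirmed.
prior = 0.90
llrs = [-1.2, -1.5, +1.0]

log_odds = math.log(prior / (1 - prior)) + sum(llrs)   # ln(9) - 1.7 ≈ 0.497
posterior = 1 / (1 + math.exp(-log_odds))              # ≈ 0.62
```

Even a strong 0.90 prior drops to roughly 0.62 once the two negative findings outweigh the single positive one — a meaningful, quantified loss of confidence rather than a binary pass/fail flip.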
Prioritizing What Matters: The High-Risk Controls
Using this model, we ranked all FedRAMP Moderate controls by severity and potential impact.
The Top 11 High-Risk Controls stood out due to:
- High exploitation risk
- Poor observability without targeted telemetry
- Broad system impact if compromised

These controls form the foundation of our telemetry blueprint—what every system should continuously monitor and score.
Step 3: Continuous Confidence Calculation
Every time Prometheus scrapes a new metric:
- Convert prior to log-odds
- Add up matching LLRs
- Convert back to a probability using the logistic function:
P = 1 / (1 + e^(-log odds))
This produces a dynamic confidence score for each control, updated in real time as evidence changes.
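The steps above reduce to a few lines. This is a minimal sketch of the update loop (Prometheus scraping omitted; the function simply applies the log-odds arithmetic described above):

```python
import math

def confidence(prior: float, active_evidence_llrs: list[float]) -> float:
    """Posterior P(control is effective | evidence) via additive log-odds update."""
    log_odds = math.log(prior / (1 - prior)) + sum(active_evidence_llrs)
    return 1 / (1 + math.exp(-log_odds))  # logistic function

# SI-2 example using the 0.60 prior and LLRs defined earlier: an unpatched
# high-CVSS finding (-2.5) outweighs a completed monthly patch cycle (+1.0).
si2_score = confidence(0.60, [-2.5, +1.0])
```

With no active evidence the score simply equals the prior, so a control with nothing observed neither gains nor loses confidence — gaps are handled by the telemetry-coverage checks, not by this formula.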
Step 4: Writing to the Security Ledger (Amazon Aurora PostgreSQL)
Every update—control ID, evidence, LLRs, and confidence score—is appended as a new, immutable revision to Amazon Aurora PostgreSQL, our Security Ledger backend.
Each record includes:
- Control ID
- Timestamps
- Prior and posterior probabilities
- Evidence names + timestamps
- LLR sum
- Operator ID (if manually overridden)
This creates a cryptographically verifiable audit trail. Auditors and agencies can trace any score, see what changed, and confirm whether evidence was valid and in-scope.
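As a toy illustration of the chain-of-trust an auditor would verify (not the actual Aurora-backed implementation), each appended record can carry a hash of its predecessor, so altering any earlier revision breaks every hash that follows:

```python
import hashlib
import json

def append_record(ledger: list, record: dict) -> None:
    """Append a record whose hash covers its content plus the previous hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {**record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(body)

def verify_chain(ledger: list) -> bool:
    """Recompute every hash; any tampered or reordered record breaks the chain."""
    prev_hash = "0" * 64
    for rec in ledger:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body.get("prev_hash") != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev_hash = rec["hash"]
    return True
```

This is the property the 3PAO's ledger audit exercises: retroactively "improving" a confidence score is detectable because the recomputed hashes no longer match.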
Why This Must Be Open
If machines are going to tell us when a control is “healthy,” then the logic behind it must be transparent.
That’s why we’re open-sourcing:
- The LLR control dictionary
- Control-to-evidence mappings
- Assumptions and source data
Just like LLMs disclose model weights and benchmarks, compliance logic must be explainable, auditable, and improvable by the community.
Compliance is too important to be a black box.
Recap: What We’ve Built
- Bayesian engine for dynamic scoring
- Prior and evidence probabilities for every FedRAMP Moderate control
- Identification of top 11 high-risk controls
- Immutable compliance ledger in Amazon Aurora PostgreSQL
- Prometheus telemetry mapping in progress
- GitHub: Open LLR control spec
Frequently Asked Questions
1. How does Bayesian inference improve FedRAMP compliance monitoring?
Bayesian inference continuously updates each control’s confidence level based on real-time evidence, allowing compliance teams to quantify risk dynamically rather than rely on periodic assessments.
2. What role does AI play in continuous compliance for SaaS vendors?
AI automates evidence collection, calculates log-likelihood ratios (LLRs) or similar statistical indicators, and updates control probabilities in real time—transforming compliance from static documentation into a living risk model.
3. How does Knox use telemetry tools like Prometheus for compliance tracking?
Knox leverages Prometheus to scrape and store live metrics tied to FedRAMP controls, enabling continuous monitoring and automated confidence score updates within its Security Ledger.
4. Why is transparency important in AI-driven compliance systems?
Open-source models and transparent model reference dictionaries or explainability maps ensure that AI logic behind compliance scoring remains auditable, explainable, and trustworthy for agencies and auditors.
5. How does the Security Ledger ensure auditability in real time?
Every compliance update is immutably logged in a managed PostgreSQL-compatible database (such as Amazon Aurora) with timestamps, evidence data, and probability revisions—creating a cryptographically verifiable audit trail.
Coming in Part 3:
We’ll go deeper into instrumentation—mapping every FedRAMP Moderate control to Prometheus-compatible metrics and redefining the role of the 3PAO as a real-time verifier of system integrity.
The future of trust is continuous, explainable, and open. Let’s build it together.
