10+ Years of Proven FedRAMP &
Cloud Security Success
Founding Team






Founding Advisors



We are a team of FedRAMP and mission-critical SaaS security experts with 10+ consecutive years of successfully running FedRAMP clouds. Our backgrounds span Dell, Palantir, Virtustream, Oracle, BCG, Avanade, and Palo Alto Networks, bringing deep expertise in building and securing mission-critical cloud solutions for government and enterprise.
Government SaaS by the Numbers
US Government annual spend on software.
US Government annual spend on SaaS.
50% year-over-year growth of US Government spend on SaaS.
The number of NIST 800-53 controls that must be satisfied for FedRAMP Moderate.
The total number of FedRAMP-authorized applications as of March 2025.
The number of applications in AWS Marketplace.
Unlock access to cutting-edge software for the Government.
















Knox Blog
The FedStart Kubernetes infrastructure – which runs on top of AWS GovCloud and Azure Government – manages FIPS-validated encryption, logging, authentication, vulnerability scanning, and more (so that you don’t have to).
Knox Raises Seed Round to Accelerate AI and SaaS Adoption by the Federal Government and Department of Defense
We’re proud to announce our $6.5 million seed round. TechCrunch covered the news this morning:
Knox, named after a giant gold storage fort in Kentucky, essentially provides a compliance management platform via a managed cloud that customers can connect their codebase to. The company's software runs a continuous series of tests and audits to identify where the customer's infrastructure, code and security controls are falling short of FedRAMP standards, and either remediates those issues itself or flags them to the customer. It also offers some non-software tools to track and verify policies like personnel training and vendor management.
We’re solving one of the most urgent problems in GovTech: how to safely accelerate the adoption of AI and cloud software at scale.
The investment, led by Felicis with participation from Ridgeline and FirsthandVC, will help us unlock thousands of secure, AI-powered SaaS apps for government and DoD use.
We’re on a mission to bring the best technology innovation to our government. Technologies such as AI can drive transformational growth and productivity gains, which is critical if the United States is to remain the global leader.
Our AI-powered, turnkey platform offers a faster, more agile path to FedRAMP authorization by automating manual processes while also contextualizing decades of operational know-how into digital expert agents.
We’re working closely with the U.S. government to pioneer a secure AI infrastructure model that enables access to SaaS applications without sacrificing control or security. Knox supports all three major hyperscalers and is trusted by more than 15 federal and defense agencies, including the Department of Homeland Security, the Treasury Department, and the Marines.
Thanks again to Viviana Faga and Nancy Wang at Felicis, Ben Walker at Ridgeline and Simon Chan at FirsthandVC for their support as we build.
Part 3: Toward Continuous Compliance: Open Telemetry, Control Coverage, and the Role of the 3PAO
By Casey Jones, Chief Architect of Knox Systems
In Part 1, we proposed the concept of a Security Ledger: a cryptographically verifiable system of record for compliance that updates continuously based on real-time evidence. In Part 2, we detailed how risk-adjusted confidence scores can be calculated using Bayes’ Theorem and recorded immutably in LedgerDB.
In this third and final part of the series, we focus on the next frontier: standardizing telemetry coverage across controls, open-sourcing the control-to-evidence map, and redefining the role of the 3PAO to ensure integrity in a continuous compliance world.
Building the Open Compliance Telemetry Layer
In order for the Security Ledger to be trustworthy, it must be fed with comprehensive, observable evidence across the full FedRAMP boundary. That means creating a control-to-telemetry map (sketched below) that:
- Defines what evidence types are relevant for each FedRAMP control
- Maps those to Prometheus-compatible metrics
- Defines evidence freshness, decay windows, and severity
- Supports automated generation of control coverage reports
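As a rough illustration of what one entry in such a map might look like (the field names here are ours for illustration, not a published schema):

{
  "AC-2": {
    "evidence": [
      {
        "name": "account_review_overdue",
        "metric": "knox_account_review_age_days",
        "freshness_window_hours": 24,
        "decay_window_days": 30,
        "severity": "high"
      }
    ]
  }
}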
At Knox, we’re working to open-source this telemetry model so that:
- Every stakeholder (CSPs, 3PAOs, agencies) understands the required observability footprint
- No one is guessing what counts as evidence
- The community can contribute new detectors and mappings
Just like OWASP standardized threat awareness, we need a COTM — Common Observability for Trust Model.
Coverage Is the Control: Incomplete Telemetry ≠ Compliance
In the current FedRAMP model, it's possible to "pass" controls without actually observing the whole system. But in a ledger-based model, telemetry gaps are violations.
Examples of common pitfalls:
- Only scanning certain subnets or environments (e.g., “we forgot our staging VPN”)
- Disabling or misconfiguring logging for noisy subsystems
- Letting vulnerability scan coverage drop below 100% of the boundary
- Using static evidence from prior scans without freshness guarantees
- Allowing Prometheus exporters to fail silently without alerting
In a real-time, risk-scored model, all of these create confidence decay—and should result in lowered scores or even automated POA&M creation.
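One way that decay could work is sketched below in Python (the penalty value and function name are illustrative, not Knox's published logic): evidence that has outlived its decay window stops counting in a control's favor and instead contributes a small negative LLR, so a telemetry gap is never neutral.

STALENESS_LLR = -0.5  # illustrative penalty for stale or missing evidence

def effective_llr(llr: float, age_seconds: float, decay_window_seconds: float) -> float:
    # Evidence past its decay window no longer supports the control;
    # the gap itself lowers confidence rather than being ignored.
    if age_seconds > decay_window_seconds:
        return STALENESS_LLR
    return llr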
The New Role of the 3PAO: Continuous Verifier of Scope, Integrity, and Fair Play
In a world where compliance is driven by real-time evidence, the Third Party Assessment Organization (3PAO) becomes more critical—not less.
But their role shifts from "point-in-time validator" to continuous integrity checker.
Here’s what the 3PAO’s job looks like in a Knox-style system:
1. Boundary Enforcement
- Validate that all components within the FedRAMP boundary are included in telemetry coverage
- Detect "convenient omissions" (e.g., shadow servers, unmonitored edge cases)
2. Signal Integrity
- Confirm that metrics flowing into the Security Ledger are accurate, unmodified, and traceable
- Review sampling intervals, evidence freshness, and exporter health
- Perform forensic verification of selected evidence streams
3. Anti-Fraud Auditing
- Detect signs of foul play or negligence, such as:
  - Turning off scanning before high-risk deploys
  - Creating “burner” environments that avoid monitoring
  - Suppressing alert signals or log forwarders
  - Replaying old data to simulate real-time telemetry
4. Ledger Auditing
- Verify the cryptographic chain of trust in the ledger system (e.g., via Amazon Aurora PostgreSQL or a blockchain); a generic verification sketch follows this list
- Ensure control scores are only adjusted by valid evidence with assigned LLRs
- Validate that manual overrides are documented and signed
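As a generic illustration of what that chain-of-trust check can look like (a minimal hash-chain sketch in Python, not Knox's actual ledger format; all names are hypothetical):

import hashlib
import json

def record_digest(record: dict, prev_digest: str) -> str:
    # Canonicalize the record and chain it to the previous entry's digest.
    payload = json.dumps(record, sort_keys=True) + prev_digest
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(records: list[dict]) -> bool:
    prev = "0" * 64  # genesis digest
    for rec in records:
        body = {k: v for k, v in rec.items() if k != "digest"}
        if record_digest(body, prev) != rec["digest"]:
            return False  # tampering, deletion, or reordering detected
        prev = rec["digest"]
    return True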
In this model, the 3PAO becomes the trust anchor of the continuous compliance pipeline.
They’re not just checking boxes—they’re inspecting the wiring.
Transparency Through Community
All of this only works if the model is open:
- The LLRs for each control must be public
- The control-to-metrics map must be versioned and community-governed
- The Security Ledger’s core schema must be inspectable and verifiable
Just as language models earned credibility by opening their weights, compliance models must open their logic. Closed-source compliance logic is a liability.
The Future of FedRAMP Is Verifiable, Transparent, and Alive
We’re not just building for ATOs—we’re building for continuous trust.
FedRAMP’s future lies in:
- Real-time metrics
- Probabilistic control scoring
- Immutable audit trails
- Open-source control logic
- 3PAOs as continuous validators, not just periodic checkers
At Knox, we’re committed to that shift—because trust shouldn’t expire every 12 months.
Part 2: Toward Continuous Compliance: Quantifying Risk with Bayes and Capturing Evidence in a Security Ledger
By Chris Johnson, CTO of Knox Systems
In Part 1, we introduced the Security Ledger—a real-time, tamper-proof system that reframes FedRAMP compliance as a probabilistic, continuously updated measure, not a static report. Now, in Part 2, we go under the hood.
We'll show how Bayesian inference, log-likelihood ratios (LLRs), and ledger-based transparency work together to produce a living risk engine—one that is inspectable, auditable, and mathematically defensible.
And yes, we brought code and real data.
From Binary to Bayesian: Probabilistic Assurance of Control Effectiveness
FedRAMP controls aren’t simply "on" or "off." Their effectiveness shifts with context, evidence, and time. So we treat each control as a probabilistic hypothesis:
P(Control is Effective | Evidence)
This lets us reason continuously over real-world telemetry: IAM logs, patch scans, drift reports, vulnerability findings, and more. The system updates confidence scores in real time—no waiting for annual audits.
Step 1: Assigning Prior Probabilities
Every control begins with a prior belief—a starting point for how likely it is to be effective. These priors are informed by:
- Control category (e.g., access control vs. incident response)
- Historical failure rates
- Threat modeling and exploit severity
- Complexity and likelihood of drift
Example:
{
  "AC-2": { "prior": 0.90 },
  "SC-12": { "prior": 0.75 },
  "SI-2": { "prior": 0.60 }
}
These priors are tunable and evolve with new deployments and observed outcomes.
Step 2: Defining Evidence and LLRs
We define discrete evidence events—findings that either increase or decrease confidence in a control. Each is assigned a log-likelihood ratio (LLR):
log(posterior odds) = log(prior odds) + Σ LLRs
This additive update makes real-time scoring efficient and interpretable.
Example for SI-2 (Flaw Remediation):
"SI-2": {
"evidence": [
{ "name": "high_cvss_unpatched", "llr": -2.5 },
{ "name": "monthly_patching_completed", "llr": 1.0 },
{ "name": "vuln_scanner_stale", "llr": -1.0 }
]
}
LLRs are computed based on empirical data and mapped to actual telemetry triggers.
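Concretely, each LLR is the (natural) log of how much more or less likely the evidence is when the control is effective:

LLR = ln( P(evidence | control effective) / P(evidence | control ineffective) )

For example, with purely illustrative rates: if an unpatched high-CVSS finding shows up in about 4% of systems where SI-2 is effective but 49% of systems where it is not, then LLR = ln(0.04 / 0.49) ≈ -2.5, matching the value above.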
Real-World Example: AC-2 (Account Management)
From our working model:
- Risk Scenario: A former employee's account is still active and exploited
- P(A): 0.3 (probability of compromise if ineffective)
- Evidence LLRs (worked through below):
  - Account review overdue: -1.2
  - No MFA for privileged accounts: -1.5
  - Active Directory logs confirm removal: +1.0
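Working the arithmetic through (assuming natural logs, the 0.90 prior for AC-2 from Step 1, and all three evidence events firing at once):

log(prior odds) = ln(0.90 / 0.10) ≈ 2.20
Σ LLRs = (-1.2) + (-1.5) + 1.0 = -1.7
log(posterior odds) ≈ 2.20 - 1.7 = 0.50
P = 1 / (1 + e^(-0.50)) ≈ 0.62

So confidence in AC-2 drops from 0.90 to roughly 0.62, enough to surface the control for attention.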
This model is applied to all 323 FedRAMP Moderate controls using structured data and open analysis:
🔗 GitHub Repo: Knox-Gov/nist_bayes_risk_auto
Prioritizing What Matters: The High-Risk Controls
Using this model, we ranked all FedRAMP Moderate controls by severity and potential impact.
The Top 11 High-Risk Controls stood out due to:
- High exploitation risk
- Poor observability without targeted telemetry
- Broad system impact if compromised

These controls form the foundation of our telemetry blueprint—what every system should continuously monitor and score.
Step 3: Continuous Confidence Calculation
Every time Prometheus scrapes a new metric:
- Convert prior to log-odds
- Add up matching LLRs
- Convert back to a probability using the logistic function:
P = 1 / (1 + e^(-log odds))
This produces a dynamic confidence score for each control, updated in real time as evidence changes.
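As a minimal sketch of that loop in Python (assuming natural logs; the function name is ours, not Knox's API):

import math

def update_confidence(prior: float, llrs: list[float]) -> float:
    # Step 1: convert the prior probability to log-odds.
    log_odds = math.log(prior / (1.0 - prior))
    # Step 2: add the LLRs of all currently matching evidence.
    log_odds += sum(llrs)
    # Step 3: convert back to a probability via the logistic function.
    return 1.0 / (1.0 + math.exp(-log_odds))

# The AC-2 example above: 0.90 prior, evidence LLRs -1.2, -1.5, +1.0
print(round(update_confidence(0.90, [-1.2, -1.5, 1.0]), 2))  # ≈ 0.62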
Step 4: Writing to the Security Ledger (Amazon Aurora PostgreSQL)
Every update—control ID, evidence, LLRs, and confidence score—is appended as a new, immutable revision to Amazon Aurora PostgreSQL, our Security Ledger backend.
Each record includes:
- Control ID
- Timestamps
- Prior and posterior probabilities
- Evidence names + timestamps
- LLR sum
- Operator ID (if manually overridden)
This creates a cryptographically verifiable audit trail. Auditors and agencies can trace any score, see what changed, and confirm whether evidence was valid and in-scope.
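For illustration, a single ledger revision might look like the following (field names are ours, not the production schema):

{
  "control_id": "AC-2",
  "recorded_at": "2025-03-14T09:30:00Z",
  "prior": 0.90,
  "posterior": 0.62,
  "llr_sum": -1.7,
  "evidence": [
    { "name": "account_review_overdue", "observed_at": "2025-03-14T09:29:41Z" },
    { "name": "no_mfa_privileged_accounts", "observed_at": "2025-03-14T09:29:41Z" },
    { "name": "ad_logs_confirm_removal", "observed_at": "2025-03-14T09:29:41Z" }
  ],
  "operator_id": null
}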
Why This Must Be Open
If machines are going to tell us when a control is “healthy,” then the logic behind it must be transparent.
That’s why we’re open-sourcing:
- The LLR control dictionary
- Control-to-evidence mappings
- Assumptions and source data
Just as open LLMs disclose model weights and benchmarks, compliance logic must be explainable, auditable, and improvable by the community.
Compliance is too important to be a black box.
Recap: What We’ve Built
- Bayesian engine for dynamic scoring
- Prior and evidence probabilities for every FedRAMP Moderate control
- Identification of top 11 high-risk controls
- Immutable compliance ledger in Amazon Aurora PostgreSQL
- Prometheus telemetry mapping in progress
- GitHub: Open LLR control spec
Coming in Part 3:
We’ll go deeper into instrumentation—mapping every FedRAMP Moderate control to Prometheus-compatible metrics and redefining the role of the 3PAO as a real-time verifier of system integrity.
The future of trust is continuous, explainable, and open. Let’s build it together.