Civic Style · Tech Guide

Security & Compliance

A comprehensive reference to federal cybersecurity frameworks, NIST standards, compliance processes, and security operations for government and enterprise IT.

Section 01

Compliance at a Glance

A quick-reference cheat sheet of every major compliance framework you will encounter in federal, DoD, and enterprise security. Start here to understand the landscape before diving into individual standards.

Major Frameworks Comparison

The table below maps the most commonly encountered federal cybersecurity frameworks. Each serves a distinct purpose, but they are designed to interlock and reference one another.

Framework Governing Body Scope Key Artifact Mandatory For
NIST CSF 2.0 NIST Voluntary risk-management framework; 6 functions, 22 categories, 106 subcategories covering the full cybersecurity lifecycle Organizational Profile (Current & Target) Voluntary (all sectors); referenced by EO 13800; widely adopted in federal & critical infrastructure
NIST SP 800-53 Rev 5 NIST / Joint Task Force Catalog of 1,189+ security & privacy controls across 20 families; technology-neutral, applicable to any system type System Security Plan (SSP) All federal information systems (per FISMA); DoD; contractors handling CUI
RMF (SP 800-37 Rev 2) NIST / Joint Task Force 7-step lifecycle process for selecting, implementing, assessing, and monitoring security controls Authorization Package (SSP, SAR, POA&M) All federal agencies; DoD (via DoDI 8510.01)
FedRAMP GSA / FedRAMP PMO Standardized approach for security assessment, authorization, and continuous monitoring of cloud services used by federal agencies FedRAMP Authorization Package (SSP, SAP, SAR, POA&M) All cloud service providers (CSPs) serving federal agencies
FISMA Congress / OMB / DHS CISA Federal law requiring agencies to develop, document, and implement information security programs; mandates annual assessments Agency FISMA Report (annual metrics to OMB/Congress) All federal agencies and their contractors
STIGs DISA (Defense Information Systems Agency) Configuration standards for DoD IT systems; prescriptive technical settings for OS, applications, network devices STIG Checklist (.ckl) / SCAP Benchmark (.xml) All DoD information systems and networks
CIS Benchmarks Center for Internet Security (CIS) Consensus-based secure configuration guides for 100+ technologies; scored Level 1 (essential) and Level 2 (defense in depth) CIS Benchmark Document / CIS-CAT Assessment Report Voluntary; widely adopted across federal, state, and private sector; referenced by many compliance regimes
CMMC 2.0 DoD (OUSD(A&S)) Cybersecurity maturity model for the Defense Industrial Base (DIB); 3 levels mapping to NIST 800-171 / 800-172 controls Self-Assessment (Level 1) or C3PAO Assessment Report (Levels 2-3) All DoD contractors handling FCI/CUI (phased rollout via DFARS clauses)
Key Insight
If you work in federal or DoD IT, you will encounter most of these frameworks simultaneously. They are designed to interlock: FISMA mandates the use of NIST standards, RMF is the process for applying 800-53 controls, FedRAMP extends RMF to cloud, STIGs provide implementation-level detail for specific products, and CMMC ensures contractors meet the same bar. Understanding how they layer is more valuable than memorizing any single one.

Key Acronyms Glossary

Compliance documentation is dense with acronyms. Master these first; they appear in every authorization package, audit report, and policy document you will read.

Acronym Expansion What It Means
ATO Authorization to Operate Formal declaration by an Authorizing Official that a system may operate, accepting residual risk. Typically valid for 3 years with continuous monitoring.
POA&M Plan of Action & Milestones A living document tracking known vulnerabilities and weaknesses, with planned remediation actions, responsible parties, and scheduled completion dates.
SSP System Security Plan The primary authorization artifact. Describes the system boundary, architecture, data flows, and how each applicable control is implemented.
SAR Security Assessment Report Results of formal security testing. Documents control assessment findings, risk ratings, and recommendations. Produced by assessors (SCA or 3PAO).
SAP Security Assessment Plan Describes the scope, methodology, and schedule for an upcoming security assessment. Defines test procedures for each control.
ISSO Information System Security Officer The person responsible for day-to-day security operations and continuous monitoring of a specific system. Reports to the ISSM.
ISSM Information System Security Manager Oversees the security posture of multiple systems or an entire program. Manages ISSOs and coordinates with the AO.
AO Authorizing Official Senior executive with the authority to accept risk and grant ATO. Personally accountable for authorization decisions.
SCA Security Control Assessor Independent evaluator who tests controls and produces the SAR. Must be organizationally independent of the system team.
ConMon Continuous Monitoring Ongoing assessment of security controls, vulnerability scanning, and risk reporting after ATO is granted. Feeds POA&M updates.
SCAP Security Content Automation Protocol Suite of specifications (XCCDF, OVAL, CVE, CPE, CVSS, CCE) enabling automated configuration checking and vulnerability assessment.
OVAL Open Vulnerability & Assessment Language XML-based language for describing system configuration checks and vulnerabilities. Used by SCAP scanners for automated compliance testing.
STIG Security Technical Implementation Guide DISA-published configuration standards for specific technologies. Contains findings (rules) rated CAT I/II/III by severity.
3PAO Third Party Assessment Organization Accredited independent assessor that evaluates cloud systems for FedRAMP authorization. Must be A2LA-accredited.
JAB Joint Authorization Board Cross-agency board (DoD, DHS, GSA) that granted provisional ATOs (P-ATOs) for FedRAMP. Transitioned to FedRAMP Board under FedRAMP Authorization Act (2022).
CUI Controlled Unclassified Information Government-created or -owned information requiring safeguarding per law, regulation, or policy, but not classified. Governed by 32 CFR Part 2002.
FCI Federal Contract Information Information provided by or generated for the government under contract, not intended for public release. Lower sensitivity than CUI.
DIB Defense Industrial Base The network of 300,000+ companies that develop, produce, and sustain military systems. CMMC 2.0 compliance is required for DIB contractors.
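The SCAP and OVAL entries above describe automated configuration checking. As a minimal sketch of what consuming scanner output looks like, the snippet below tallies rule results from an XCCDF (SCAP) result document. The XML fragment and rule IDs are hand-made illustrations, not real scanner output; only the XCCDF 1.2 namespace URI is real.

```python
# Sketch: tallying rule results from an XCCDF (SCAP) result file.
# The XML below is a hand-made illustrative fragment, not real scanner output.
import xml.etree.ElementTree as ET

NS = {"x": "http://checklists.nist.gov/xccdf/1.2"}  # XCCDF 1.2 namespace

sample = """<TestResult xmlns="http://checklists.nist.gov/xccdf/1.2">
  <rule-result idref="xccdf_rule_account_lockout"><result>pass</result></rule-result>
  <rule-result idref="xccdf_rule_password_length"><result>fail</result></rule-result>
  <rule-result idref="xccdf_rule_audit_enabled"><result>pass</result></rule-result>
</TestResult>"""

def summarize(xccdf_xml: str) -> dict:
    """Return {result: count} across all rule-result elements."""
    root = ET.fromstring(xccdf_xml)
    counts: dict = {}
    for rr in root.findall("x:rule-result", NS):
        outcome = rr.find("x:result", NS).text
        counts[outcome] = counts.get(outcome, 0) + 1
    return counts

print(summarize(sample))  # {'pass': 2, 'fail': 1}
```

Real result files carry far more metadata (severity, CCE references, fix text), but the pass/fail tally is what typically feeds a POA&M.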

How the Frameworks Interlock

These frameworks are not independent silos. They form a layered compliance stack where each standard builds on or references the others.

1. FISMA (Law): Federal mandate requiring agencies to secure information systems
2. NIST SP 800-53: Control catalog selected per FIPS 199 categorization
3. RMF (800-37): Process to implement, assess, and authorize controls
4. STIGs / CIS: Technical implementation detail for specific products
5. ATO / ConMon: Authorization decision followed by continuous monitoring
Mapping Tip
NIST CSF 2.0 provides the strategic view (what outcomes to achieve), 800-53 provides the control detail (what to implement), RMF provides the process (how to authorize), and STIGs/CIS provide the configuration specifics (how to harden). FedRAMP layers cloud-specific requirements on top of the same 800-53 baseline.
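The layered stack above can be expressed as an ordered structure, which makes the "which layer derives from which" question mechanical. The layer names and one-line summaries come from this section; the code structure itself is purely illustrative.

```python
# Sketch: the compliance stack from this section, as an ordered mapping.
# Layer names and summaries come from the text; the structure is illustrative.
STACK = [
    ("FISMA",        "law: agencies must secure information systems"),
    ("NIST 800-53",  "control catalog selected per FIPS 199"),
    ("RMF 800-37",   "process to implement, assess, authorize"),
    ("STIGs / CIS",  "product-level configuration detail"),
    ("ATO / ConMon", "authorization decision, then monitoring"),
]

def layer_above(name: str):
    """Which layer a given layer derives its authority from (None at the top)."""
    names = [n for n, _ in STACK]
    i = names.index(name)
    return names[i - 1] if i > 0 else None

print(layer_above("STIGs / CIS"))  # RMF 800-37
```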
Section 02

NIST Cybersecurity Framework (CSF) 2.0

Released February 2024, CSF 2.0 is a major update to NIST's flagship voluntary risk-management framework. It introduces a new Govern function, restructures categories, and expands applicability beyond critical infrastructure to all organizations.

The 6 Core Functions

CSF 2.0 organizes cybersecurity activities into six high-level functions. The new Govern function sits at the center, informing and connecting the other five. Together they describe the full lifecycle of managing cybersecurity risk.

GV Govern
Establish and monitor the organization's cybersecurity risk management strategy, expectations, and policy. The Govern function is cross-cutting, providing context for how the organization manages risk across all other functions.
New in 2.0 · 6 Categories · ~32 Subcategories
ID Identify
Develop the organizational understanding needed to manage cybersecurity risk to systems, assets, data, and capabilities. Includes asset management, risk assessment, and improvement activities.
Core · 3 Categories · Asset Management, Risk Assessment
PR Protect
Develop and implement the appropriate safeguards to ensure delivery of critical services. Covers identity management, access control, awareness training, data security, and platform security.
Core · 5 Categories · Access Control, Data Security
DE Detect
Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event in a timely manner. Includes continuous monitoring and adverse event analysis.
Core · 2 Categories · Continuous Monitoring, Analysis
RS Respond
Develop and implement the appropriate activities to take action regarding a detected cybersecurity incident. Covers incident management, analysis, reporting, and mitigation.
Core · 4 Categories · Incident Management, Reporting
RC Recover
Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity event.
Core · 2 Categories · Recovery Planning, Communication

Implementation Tiers

Tiers describe the degree to which an organization's cybersecurity risk management practices exhibit the characteristics defined in the Framework. Tiers are not maturity levels; they help organizations determine the appropriate rigor for their risk environment.

Tier Name Risk Management Process Integration External Participation
Tier 1 Partial Ad hoc; risk management is not formalized. Prioritization is reactive and not based on organizational objectives or threat landscape. Limited awareness of cybersecurity risk at the organizational level. Processes are performed irregularly on a case-by-case basis. Organization does not understand its role in the larger ecosystem. No collaboration or information sharing with external entities.
Tier 2 Risk Informed Risk management practices are approved by management but may not be established as organization-wide policy. Prioritization is directly informed by risk assessment. Awareness of cybersecurity risk exists at the organizational level, but an organization-wide approach has not been established. Organization understands its role in the ecosystem with respect to dependencies and dependents. Receives information but may not consistently act on it.
Tier 3 Repeatable Risk management practices are formally approved and expressed as policy. Practices are regularly updated based on changes in requirements and threat/technology landscape. Organization-wide approach to managing cybersecurity risk. Risk-informed policies, processes, and procedures are defined, implemented, and reviewed. Organization understands its dependencies and dependents. Collaborates with and receives information from partners, and acts on that information.
Tier 4 Adaptive Adapts based on lessons learned and predictive indicators derived from previous and current activities. Continuous improvement incorporating advanced technologies and practices. Organizational approach to managing risk that uses risk-informed policies, processes, and procedures to address potential cybersecurity events. Relationship between risk and objectives is clearly understood. Organization manages risk and actively shares information with partners to ensure accurate, current information is distributed to improve cybersecurity before events occur.

Framework Profiles

A Profile represents the alignment of the Functions, Categories, and Subcategories with the business requirements, risk tolerance, and resources of the organization. CSF 2.0 introduces clearer guidance on creating and using profiles for gap analysis.

Current Profile
Documents the cybersecurity outcomes the organization is currently achieving. Created through assessment of existing practices against the CSF subcategories. Serves as the baseline for improvement planning.
Target Profile
Documents the desired cybersecurity outcomes based on business needs, risk tolerance, and available resources. The gap between Current and Target profiles drives a prioritized action plan.
Community Profile
New in 2.0 A baseline profile created for a specific sector, subsector, or community. Organizations can use community profiles as starting points for their own Target Profiles.
Practical Usage
The Profile gap analysis is where CSF delivers the most value. Compare your Current Profile against your Target Profile subcategory by subcategory, score the gaps, and feed them into your risk register. This directly produces a prioritized, budget-linked remediation roadmap.
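The gap analysis described above reduces to a per-subcategory comparison. The sketch below scores gaps between a Current and Target Profile; the subcategory IDs are real CSF 2.0 identifiers, but the 0-4 achievement scores are invented illustrative data (CSF itself does not prescribe a scoring scale).

```python
# Sketch of the Current-vs-Target Profile gap analysis described above.
# Subcategory IDs are real CSF 2.0 identifiers; the 0-4 scores are
# invented illustrative data (0 = not achieved, 4 = fully achieved).
current = {"GV.OC-01": 1, "PR.AA-05": 3, "DE.CM-01": 2}
target  = {"GV.OC-01": 4, "PR.AA-05": 3, "DE.CM-01": 4}

def gap_report(current: dict, target: dict) -> list:
    """Return (subcategory, gap) pairs with a positive gap, largest first."""
    gaps = [(sub, target[sub] - current.get(sub, 0)) for sub in target]
    return sorted((g for g in gaps if g[1] > 0), key=lambda g: -g[1])

print(gap_report(current, target))  # [('GV.OC-01', 3), ('DE.CM-01', 2)]
```

The sorted output is exactly the prioritized remediation list that feeds the risk register.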

Key Differences from CSF 1.1

CSF 2.0 is not a minor revision. It represents a structural rethinking of the framework, expanding both its scope and its practical utility.

Aspect CSF 1.1 (April 2018) CSF 2.0 (February 2024)
Core Structure 5 Functions / 23 Categories / 108 Subcategories 6 Functions / 22 Categories / 106 Subcategories
Govern Function Did not exist; governance was implicit across other functions New dedicated function (GV) with 6 categories representing ~30% of all subcategories
Audience Primarily critical infrastructure sectors Explicitly broadened to all organizations regardless of size, sector, or maturity
Supply Chain Limited supply chain references (SC subcategories under ID) Supply chain risk management (C-SCRM) elevated and integrated across Govern and other functions
Profiles Current and Target Profiles Current, Target, and new Community Profiles; enhanced creation guidance
Implementation Examples Not included in core document NIST publishes companion implementation examples for each subcategory
Title "Framework for Improving Critical Infrastructure Cybersecurity" "The NIST Cybersecurity Framework (CSF) 2.0" (dropped critical infrastructure focus from title)
Migration Note
If you are currently using CSF 1.1, NIST provides a mapping reference between 1.1 subcategories and 2.0 subcategories. The majority of existing 1.1 subcategories have direct equivalents; the main work involves adopting the new Govern function categories and re-mapping any organizational profiles.

CSF 2.0 Structure Summary

# CSF 2.0 Hierarchy
CSF Core
  Function (6 total)
    Category (22 total, e.g., GV.OC - Organizational Context)
      Subcategory (106 total, e.g., GV.OC-01)
        Implementation Examples (companion resource)
        Informative References (mappings to 800-53, ISO 27001, CIS, etc.)

Function Breakdown:
  GV  Govern    — 6 categories  (OC, RM, SC, RR, PO, OV)
  ID  Identify   — 3 categories  (AM, RA, IM)
  PR  Protect    — 5 categories  (AA, AT, DS, PS, IR)
  DE  Detect     — 2 categories  (CM, AE)
  RS  Respond    — 4 categories  (MA, AN, MI, CO)
  RC  Recover    — 2 categories  (RP, CO)

# The Govern function sits at the center of the wheel diagram,
# informing and being informed by all other five functions.
Section 03

NIST SP 800-53 Rev. 5 Control Families

The definitive catalog of security and privacy controls for federal information systems. Rev 5 (September 2020, updated December 2020) introduced the new Supply Chain Risk Management (SR) family and integrated privacy controls directly into the catalog.

All 20 Control Families

SP 800-53 Rev 5 organizes 1,189+ controls into 20 families. Each family addresses a broad area of security or privacy concern. Controls within a family are numbered sequentially (e.g., AC-1, AC-2, ... AC-25).

ID Family Name Description Example Controls
AC Access Control Policies and mechanisms for granting, managing, and revoking system access. Covers least privilege, separation of duties, session management, and remote access. AC-2 Account Management, AC-3 Access Enforcement, AC-6 Least Privilege, AC-17 Remote Access
AT Awareness & Training Security and privacy awareness training for personnel. Ensures users understand their responsibilities and can recognize threats. AT-2 Literacy Training, AT-3 Role-Based Training, AT-4 Training Records
AU Audit & Accountability Generating, reviewing, protecting, and retaining audit records. Ensures actions are traceable to individuals and anomalies are detected. AU-2 Event Logging, AU-3 Content of Audit Records, AU-6 Audit Record Review, AU-12 Audit Record Generation
CA Assessment, Authorization & Monitoring Security assessment procedures, authorization processes, and continuous monitoring. This family drives the RMF lifecycle. CA-2 Control Assessments, CA-5 POA&M, CA-6 Authorization, CA-7 Continuous Monitoring
CM Configuration Management Establishing and maintaining baselines, controlling changes, and inventorying components. The foundation for hardening and drift detection. CM-2 Baseline Configuration, CM-6 Configuration Settings, CM-7 Least Functionality, CM-8 System Component Inventory
CP Contingency Planning Ensuring continuity of operations during disruptions. Covers backup, recovery, and alternate site planning. CP-2 Contingency Plan, CP-4 Contingency Plan Testing, CP-9 System Backup, CP-10 System Recovery
IA Identification & Authentication Verifying the identities of users, devices, and services before granting access. Covers MFA, credential management, and cryptographic authentication. IA-2 Identification & Authentication (Org Users), IA-5 Authenticator Management, IA-8 Identification (Non-Org Users), IA-12 Identity Proofing
IR Incident Response Preparing for, detecting, analyzing, containing, eradicating, and recovering from security incidents. Includes incident reporting requirements. IR-1 Policy & Procedures, IR-4 Incident Handling, IR-5 Incident Monitoring, IR-6 Incident Reporting
MA Maintenance Performing timely and controlled maintenance on systems and components. Covers local, remote, and nonlocal maintenance activities. MA-2 Controlled Maintenance, MA-4 Nonlocal Maintenance, MA-5 Maintenance Personnel
MP Media Protection Protecting, sanitizing, and controlling physical and digital media (USB drives, hard drives, backup tapes, paper records). MP-2 Media Access, MP-4 Media Storage, MP-6 Media Sanitization
PE Physical & Environmental Protection Physical access controls, monitoring, and environmental protections (fire, water, temperature, power) for facilities housing systems. PE-2 Physical Access Authorizations, PE-3 Physical Access Control, PE-6 Monitoring, PE-13 Fire Protection
PL Planning Developing, maintaining, and disseminating security and privacy plans. Covers rules of behavior and system architecture descriptions. PL-2 System Security Plan, PL-4 Rules of Behavior, PL-8 Security Architecture
PM Program Management Organization-wide information security program management. Unlike other families, PM controls apply at the organizational level, not per system. PM-1 InfoSec Program Plan, PM-9 Risk Management Strategy, PM-11 Mission/Business Process Definition
PS Personnel Security Screening, termination, transfer, and sanctions for personnel with system access. Ensures trustworthiness of individuals. PS-2 Position Risk Designation, PS-3 Personnel Screening, PS-4 Personnel Termination, PS-5 Personnel Transfer
PT PII Processing & Transparency Privacy-specific controls for personally identifiable information handling, consent, and data minimization. New in Rev 5 PT-2 Authority to Process PII, PT-3 PII Processing Purposes, PT-4 Consent, PT-5 Privacy Notice
RA Risk Assessment Identifying, evaluating, and prioritizing risks. Covers vulnerability scanning, threat intelligence, and security categorization (FIPS 199). RA-3 Risk Assessment, RA-5 Vulnerability Monitoring & Scanning, RA-7 Risk Response
SA System & Services Acquisition Integrating security into the system development lifecycle and acquisitions. Covers secure coding, supply chain protections, and developer security testing. SA-3 System Development Lifecycle, SA-4 Acquisition Process, SA-8 Security Engineering Principles, SA-11 Developer Testing
SC System & Communications Protection Protecting information in transit and at rest. Covers boundary defense, cryptography, denial of service protection, and network segmentation. SC-7 Boundary Protection, SC-8 Transmission Confidentiality, SC-12 Cryptographic Key Management, SC-28 Protection of Information at Rest
SI System & Information Integrity Detecting, reporting, and correcting flaws. Covers patching, malicious code protection, monitoring, and software integrity verification. SI-2 Flaw Remediation, SI-3 Malicious Code Protection, SI-4 System Monitoring, SI-7 Software & Information Integrity
SR Supply Chain Risk Management Managing risks associated with the supply chain for systems, components, and services. New in Rev 5 SR-1 Policy & Procedures, SR-2 Supply Chain Risk Management Plan, SR-3 Supply Chain Controls, SR-11 Component Authenticity

Control Structure & Hierarchy

Every control in 800-53 follows a consistent structure. Understanding this hierarchy is essential for reading, implementing, and assessing controls.

# Control Numbering Convention
Family  Control  Enhancement

# Example: Access Control family, Account Management control
AC          # Family: Access Control
AC-2        # Control: Account Management
AC-2(1)     # Enhancement 1: Automated System Account Management
AC-2(2)     # Enhancement 2: Automated Temporary & Emergency Account Management
AC-2(3)     # Enhancement 3: Disable Accounts
AC-2(4)     # Enhancement 4: Automated Audit Actions
AC-2(5)     # Enhancement 5: Inactivity Logout

# Control Components
Control Section         Description
────────────────────    ─────────────────────────────────────
Control Statement       The requirement itself (what must be done)
Discussion              Context, rationale, and implementation guidance
Related Controls        Cross-references to other controls
References              Source publications (FIPS, SPs, etc.)
Control Enhancements    Additional requirements that extend the base
# Example Control: AC-2 Account Management
AC-2  ACCOUNT MANAGEMENT

Control:
  a. Define and document the types of accounts allowed and
     specifically prohibited for use within the system;
  b. Assign account managers;
  c. Require [Assignment: organization-defined prerequisites and
     criteria] for group and role membership;
  d. Specify:
     1. Authorized users of the system;
     2. Group and role membership; and
     3. Access authorizations (i.e., privileges) and
        [Assignment: organization-defined attributes] for each account;
  e. Require approvals by [Assignment: organization-defined personnel
     or roles] for requests to create accounts;
  f. Create, enable, modify, disable, and remove accounts in
     accordance with [Assignment: organization-defined policy,
     procedures, prerequisites, and criteria];
  g. Monitor the use of accounts;
  h. Notify account managers and [Assignment: organization-defined
     personnel or roles] within:
     1. [Assignment: organization-defined time period] when accounts
        are no longer required;
     2. [Assignment: organization-defined time period] when users
        are terminated or transferred; and
     3. [Assignment: organization-defined time period] when system
        usage or need-to-know changes for an individual;
  i. Authorize access to the system based on:
     1. A valid access authorization;
     2. Intended system usage; and
     3. [Assignment: organization-defined attributes];
  j. Review accounts for compliance with account management
     requirements [Assignment: organization-defined frequency];

# [Assignment: ...] = Organization fills in the specific value
# This parameterization makes controls adaptable to any environment
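The numbering convention and the [Assignment: ...] parameterization shown above are both regular enough to handle mechanically. A minimal sketch (the regexes and the parameter value are illustrative, not from any NIST tooling):

```python
# Sketch: parsing 800-53 control identifiers and filling [Assignment: ...]
# parameters. The regexes and the sample value are illustrative.
import re

CTRL = re.compile(r"^([A-Z]{2})-(\d+)(?:\((\d+)\))?$")

def parse_control(cid: str) -> dict:
    """Split 'AC-2(3)' into family, control number, and optional enhancement."""
    family, number, enh = CTRL.match(cid).groups()
    return {"family": family, "control": int(number),
            "enhancement": int(enh) if enh else None}

ASSIGN = re.compile(r"\[Assignment: ([^\]]+)\]")

def fill(statement: str, values: dict) -> str:
    """Substitute organization-defined parameter values into a control statement."""
    return ASSIGN.sub(lambda m: values.get(m.group(1), m.group(0)), statement)

print(parse_control("AC-2(3)"))  # {'family': 'AC', 'control': 2, 'enhancement': 3}
stmt = "Review accounts [Assignment: organization-defined frequency]."
print(fill(stmt, {"organization-defined frequency": "at least annually"}))
# Review accounts at least annually.
```

Unfilled parameters pass through unchanged, which is handy for flagging values an organization has not yet defined.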

Control Baselines

NIST SP 800-53B defines four baselines that pre-select controls based on system impact level (as determined by FIPS 199). Organizations start with a baseline and then tailor it by adding or removing controls based on risk assessment.

Baseline Total Controls Families Used Typical Systems Notes
Low 149 controls All 20 families Public-facing websites, non-sensitive internal tools, development/test environments Minimum security baseline. Focus on fundamental hygiene controls.
Moderate 287 controls All 20 families Most federal systems, email, financial, HR, CUI processing systems Most commonly used baseline. Approximately 80% of federal systems are categorized as Moderate.
High 370 controls All 20 families National security systems, law enforcement, critical infrastructure, classified processing Most stringent. Adds defense-in-depth controls, advanced monitoring, and redundancy requirements.
Privacy 96 controls 16 families Any system processing PII (selected based on PIA, not impact level) Applied in addition to the security baseline. Includes PT family and privacy-relevant controls from other families.
Baseline Tailoring
Baselines are starting points, not final answers. Organizations tailor them through: (1) Scoping — removing controls that don't apply (e.g., PE controls for cloud-only systems), (2) Compensating controls — substituting equivalent protections, (3) Organization-defined parameters — filling in the [Assignment] and [Selection] values, and (4) Supplementation — adding controls or enhancements beyond the baseline based on risk assessment or overlays.
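Two of the tailoring activities above, scoping (1) and supplementation (4), are naturally expressed as set operations. In this sketch the control IDs are real, but which controls land in each set is an invented example (a real Moderate baseline has 287 controls, not five):

```python
# Sketch of tailoring steps (1) scoping and (4) supplementation as set
# operations. Control IDs are real; the tiny baseline is illustrative.
moderate_baseline = {"AC-2", "AC-17", "PE-3", "SI-2", "SC-7"}

scoped_out   = {"PE-3"}    # (1) scoping: cloud-only system, no facility controls
supplemental = {"SR-11"}   # (4) supplementation: overlay adds component authenticity

def tailor(baseline: set, scoped_out: set, supplemental: set) -> set:
    """Remove scoped-out controls, then add supplemental ones."""
    return (baseline - scoped_out) | supplemental

print(sorted(tailor(moderate_baseline, scoped_out, supplemental)))
# ['AC-17', 'AC-2', 'SC-7', 'SI-2', 'SR-11']
```

Every removal and addition still needs a documented rationale in the SSP; the set math only captures the end state.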

Common, System-Specific & Hybrid Controls

Not every system implements every control from scratch. Controls are designated based on where they are implemented and who is responsible for them.

Common Controls
Implemented once at the organizational or facility level, then inherited by multiple systems. Examples: PE-3 Physical Access Control (facility badge readers), AT-2 Security Awareness Training (org-wide program), PS-3 Personnel Screening (HR background checks). Managed by a common control provider.
System-Specific Controls
Implemented entirely within a specific information system and under the direct control of the system owner. Examples: AC-2 Account Management (system-level accounts), AU-6 Audit Review (system-specific log review), SI-2 Flaw Remediation (system patching). Documented in the system SSP.
Hybrid Controls
Implemented partly by the organization (common) and partly by the system (system-specific). Example: IR-4 Incident Handling — the org provides the incident response team and procedures (common), while the system provides specific detection and alerting capabilities (system-specific). Both parts must be documented.

The distinction matters because common controls reduce duplication (one assessment covers many systems), but they also create shared risk: if a common control fails, every inheriting system is affected.
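The common / system-specific / hybrid designations above amount to a responsibility lookup per control. A minimal sketch, using the example assignments from this section (which sets a given control falls into is organization-specific):

```python
# Sketch: resolving who is responsible for each control, per the
# common / system-specific / hybrid designations above. The example
# assignments mirror this section's text but are illustrative.
common = {"PE-3", "AT-2", "PS-3"}            # inherited from the common control provider
system_specific = {"AC-2", "AU-6", "SI-2"}   # implemented by the system owner
hybrid = {"IR-4"}                            # split between org and system

def responsibility(control: str) -> str:
    if control in hybrid:
        return "hybrid"
    if control in common:
        return "inherited"
    if control in system_specific:
        return "system-specific"
    return "unassigned"

print(responsibility("IR-4"))  # hybrid
print(responsibility("PE-3"))  # inherited
```

"Unassigned" results are worth surfacing loudly: a baseline control with no designated implementer is a gap, not an inheritance.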

Overlays

Overlays are pre-built tailoring packages that address specific communities of interest, technologies, or operational environments. They modify the baseline by adding, removing, or refining controls and parameters.

Overlay Type Purpose Examples
Community Addresses requirements specific to a sector or mission area DoD SRG (Cloud), Intelligence Community (ICD 503), Space Systems
Technology Addresses risks unique to a specific technology platform Cloud computing, industrial control systems (ICS), mobile devices, IoT
Functional Addresses specific operational or legal requirements Privacy (PII processing), insider threat, cross-domain solutions
Rev 5 Updates
SP 800-53 Rev 5 integrates privacy controls directly into the catalog (the new PT family and privacy-related enhancements across other families), introduces the Supply Chain Risk Management family (SR), and removes the "Federal" qualifier from the title—signaling its applicability to any organization, not just government agencies.
Section 04

Risk Management Framework (RMF)

NIST SP 800-37 Rev. 2 defines the Risk Management Framework—a structured, repeatable process for integrating security, privacy, and cyber supply chain risk management into the system development lifecycle. RMF is the mandatory authorization process for all federal information systems and DoD systems under DoDI 8510.01.

The 7 RMF Steps

RMF follows a disciplined, seven-step lifecycle. Each step has defined tasks, responsible roles, and expected outputs. The process is iterative—monitoring findings feed back into earlier steps, creating a continuous authorization loop rather than a one-time event.

1. Prepare: Establish context, priorities, and resources
2. Categorize: Classify system by impact level (FIPS 199)
3. Select: Choose and tailor control baseline
4. Implement: Deploy controls, document in SSP
5. Assess: Test control effectiveness, produce SAR
6. Authorize: AO reviews package, makes risk decision
7. Monitor: Ongoing assessment, continuous authorization
Step Name Key Activities Primary Responsibility
1 Prepare Establish organization-level and system-level preparation activities. Define risk management roles, identify stakeholders, perform organization-wide risk assessment, establish risk tolerance, prioritize systems for authorization, develop a monitoring strategy, and identify common controls available for inheritance. Senior leadership, CISO, Risk Executive, ISO
2 Categorize Categorize the system and the information processed, stored, and transmitted by the system using FIPS 199 criteria. Determine impact levels (Low, Moderate, High) for confidentiality, integrity, and availability. Apply the high-water mark principle to arrive at the overall system categorization. Document results and register the system. ISO, ISSO, Information Owner
3 Select Select the initial control baseline from SP 800-53B based on system categorization. Tailor the baseline by applying scoping guidance, selecting compensating controls, assigning organization-defined parameter values, and supplementing with additional controls from overlays or risk assessment findings. ISO, ISSO, Security Architect
4 Implement Implement the selected controls within the system and its operating environment. Document the control implementations in the System Security Plan (SSP), describing how each control is satisfied, where it is implemented, and who is responsible. Identify which controls are inherited vs. system-specific vs. hybrid. ISO, System Developer, ISSO
5 Assess Develop a Security Assessment Plan (SAP) defining the scope, methodology, and schedule. Execute assessment procedures to determine control effectiveness. An independent Security Control Assessor (SCA) or 3PAO examines, interviews, and tests controls. Produce the Security Assessment Report (SAR) documenting findings and risk ratings. Open POA&M items for any identified deficiencies. SCA (or 3PAO), ISSO, ISO
6 Authorize The Authorizing Official (AO) reviews the complete authorization package: SSP, SAR, and POA&M. The AO evaluates residual risk against organizational risk tolerance. Based on this review, the AO issues one of the following: Authorization to Operate (ATO), ATO with Conditions, Denial of Authorization (DATO), or (in DoD) an Interim Authorization to Test (IATT). The authorization decision document formally records the decision, any conditions, and the authorization termination date. AO, AO Designated Representative
7 Monitor Continuously monitor the system and its environment for changes that affect security posture. Perform ongoing assessments of a subset of controls per the monitoring strategy. Conduct vulnerability scanning, analyze results, update the SSP and POA&M as needed. Report security status to the AO. Support continuous authorization decisions rather than waiting for periodic reauthorization. ISSO, ISSM, AO
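The high-water mark rule from Step 2 (Categorize) is simple enough to state in a few lines. A sketch of FIPS 199 categorization; the example impact values are illustrative, the rule itself is from FIPS 199/200:

```python
# Sketch of RMF Step 2: FIPS 199 categorization via the high-water mark.
# The rule is from FIPS 199/200; the example impact values are illustrative.
LEVELS = ["Low", "Moderate", "High"]

def high_water_mark(*impacts: str) -> str:
    """Overall system level = highest of the C/I/A impact levels."""
    return max(impacts, key=LEVELS.index)

# Confidentiality Moderate, integrity Moderate, availability Low
print(high_water_mark("Moderate", "Moderate", "Low"))  # Moderate

# A single High rating drives the whole system to High
print(high_water_mark("Low", "High", "Low"))  # High
```

That High result is why boundary decisions matter: pulling one High-impact information type into a system boundary raises the baseline (and cost) for everything inside it.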

Key RMF Roles

RMF defines clear roles with distinct responsibilities. Separation of duties is critical—the person who builds the system should not be the same person who assesses it, and the person who assesses it should not be the same person who authorizes it.

Role Full Title Responsibilities Key Decisions
AO Authorizing Official Senior executive with the authority and accountability to accept risk. Reviews the authorization package and makes the final determination on whether a system may operate. Personally accountable for the authorization decision. Grants ATO, ATO with Conditions, DATO, or IATO. Defines acceptable risk threshold.
ISSO Information System Security Officer Responsible for the day-to-day security operations of a specific system. Maintains the SSP, manages POA&M items, coordinates vulnerability scanning, monitors security alerts, and ensures continuous monitoring activities are executed on schedule. Escalates security issues to ISSM/AO. Manages remediation timelines.
ISSM Information System Security Manager Oversees the security program across multiple systems or an entire organization. Manages ISSOs, ensures consistent policy implementation, coordinates cross-system risk activities, and serves as the primary security liaison to the AO. Approves security configurations. Coordinates assessment schedules across systems.
SCA Security Control Assessor Independent evaluator who tests the effectiveness of security controls. Must be organizationally independent from the system development and operations teams. Develops the SAP, executes assessment procedures, and produces the SAR. Determines control effectiveness (Satisfied, Other Than Satisfied, Not Applicable). Rates risk for findings.
ISO Information System Owner The business owner of the system. Responsible for the system throughout its lifecycle, including procurement, development, operation, maintenance, and disposal. Ensures the system is properly categorized and security requirements are funded and implemented. Defines system mission and business requirements. Approves system boundary.
CCP Common Control Provider Develops, implements, assesses, and monitors common controls that are inherited by multiple information systems (e.g., physical security, enterprise identity management, awareness training). Documents control implementations for inheriting systems to reference. Defines which controls are available for inheritance. Maintains common control assessment status.

Artifacts at Each Step

Each RMF step produces or updates specific documents. These artifacts form the authorization package and the ongoing evidence trail for continuous monitoring. Understanding which documents are created, updated, or reviewed at each step is essential for managing the process.

RMF Step Created / Initiated Updated / Refined Reviewed / Approved
1. Prepare Risk Management Strategy, Organization-Level Risk Assessment, Common Control Catalog, Monitoring Strategy System Inventory, Enterprise Architecture documentation
2. Categorize Security Categorization (FIPS 199 worksheet), System Registration System boundary definition, Data flow diagrams AO or AO designee reviews categorization
3. Select Control Selection Worksheet, Tailoring documentation SSP (initial draft with selected controls and rationale) AO approves tailored baseline
4. Implement SSP (control implementation descriptions), Architecture diagrams, Network diagrams ISSO validates implementation matches SSP
5. Assess SAP (Security Assessment Plan), SAR (Security Assessment Report), POA&M (initial) SSP (corrections based on assessment findings) SCA/3PAO validates findings with system team
6. Authorize RAR (Risk Acceptance Report), Authorization Decision Letter (ATO memo) POA&M (AO may add conditions) AO reviews full package: SSP + SAR + POA&M
7. Monitor Continuous Monitoring Reports, Ongoing Assessment Results SSP (keep current), POA&M (add/close items), SAR (annual update), Vulnerability scan reports AO reviews status per monitoring strategy

FIPS 199 System Categorization

FIPS 199 is the starting point for the entire RMF process. It defines how to categorize federal information systems based on the potential impact of a security breach across three security objectives. The categorization result determines which control baseline from SP 800-53B applies to the system.

For each information type processed by the system, determine the potential impact (Low, Moderate, or High) if there were a loss of confidentiality, integrity, or availability. NIST SP 800-60 provides default impact levels for common federal information types.

# FIPS 199 Categorization Format
System: [System Name]
Confidentiality: [Low | Moderate | High]
Integrity:       [Low | Moderate | High]
Availability:    [Low | Moderate | High]
Overall Impact:  HIGH-WATER MARK  [Result]

# Example: HR Management System
System: Enterprise HR Portal
Confidentiality: High     # PII, SSNs, salary data
Integrity:       Moderate # Data accuracy matters but not life-safety
Availability:    Low      # Temporary outage is tolerable
Overall Impact:  HIGH-WATER MARK  High

# Example: Public Website
System: Agency Public Information Portal
Confidentiality: Low      # All content is public
Integrity:       Moderate # Defacement would damage trust
Availability:    Low      # Brief outage is acceptable
Overall Impact:  HIGH-WATER MARK  Moderate

# The high-water mark is simply the HIGHEST individual rating.
# If any objective is High, the overall categorization is High.
High-Water Mark Principle
The overall system categorization equals the highest impact level across all three security objectives. A system that is Low/Low/Moderate becomes Moderate overall. A system that is Low/Low/High becomes High overall. This principle ensures the most sensitive aspect of the system drives the security control rigor, because a chain is only as strong as its weakest link.
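The high-water mark rule is mechanical enough to express directly. A minimal sketch in Python — the system name and information types are illustrative, and real categorizations also consult SP 800-60 for provisional impact levels:

```python
# FIPS 199 high-water mark: the overall categorization is the highest
# impact level across all three objectives and all information types.
IMPACT_ORDER = {"Low": 0, "Moderate": 1, "High": 2}

def high_water_mark(info_types):
    """info_types: dict of name -> (confidentiality, integrity, availability)."""
    # Per-objective maximum across every information type on the system
    per_objective = [
        max((levels[i] for levels in info_types.values()),
            key=IMPACT_ORDER.get)
        for i in range(3)
    ]
    # Overall system categorization = highest single rating
    overall = max(per_objective, key=IMPACT_ORDER.get)
    return per_objective, overall

# Hypothetical HR system processing two information types
hr_system = {
    "personnel_records": ("High", "Moderate", "Low"),
    "public_org_chart":  ("Low", "Low", "Low"),
}
objectives, category = high_water_mark(hr_system)
print(objectives, category)  # ['High', 'Moderate', 'Low'] High
```

Note that a single High rating on a single information type is enough to pull the whole system to High — which is why boundary decisions about what data a system processes matter so much.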

Authorization Decision Types

After reviewing the authorization package, the AO issues one of the following decisions. Each carries different implications for system operations and ongoing requirements.

ATO — Authorization to Operate
Full authorization to operate. The AO accepts the residual risk as documented in the SAR and POA&M. Typically valid for 3 years (or ongoing with continuous monitoring under Ongoing Authorization). The system may operate in production with all approved functionality.
ATO with Conditions
Authorization is granted with specific constraints or requirements. The AO may restrict operations to certain environments, limit user populations, require expedited remediation of specific POA&M items, or mandate enhanced monitoring. Conditions must be met within specified timeframes.
DATO — Denial of Authorization
The system is not authorized to operate. Residual risk exceeds organizational tolerance. The system must be taken offline or returned to development for remediation. A DATO is serious and typically reflects critical vulnerabilities or fundamental security architecture issues.
IATO — Interim ATO
Temporary authorization for a limited duration (typically 90-180 days) while known deficiencies are being remediated. Used when mission need requires continued operation but significant findings exist. Requires a remediation plan with firm deadlines. Cannot be renewed indefinitely.
Section 05

FedRAMP & Cloud Authorization

The Federal Risk and Authorization Management Program (FedRAMP) provides a standardized approach for security assessment, authorization, and continuous monitoring of cloud products and services used by federal agencies. FedRAMP was codified into law by the FedRAMP Authorization Act (December 2022) as part of the National Defense Authorization Act.


FedRAMP Authorization Levels

FedRAMP defines impact levels that align with FIPS 199 categorization. Each level requires a specific set of controls from NIST SP 800-53, with FedRAMP-specific parameters and additional requirements layered on top. The control count includes base controls plus FedRAMP-specific enhancements.

Level Controls Data Sensitivity Typical Use Cases Assessment Rigor
Li-SaaS ~36 Low impact; minimal federal data exposure. Based on FedRAMP Tailored baseline for low-risk SaaS applications. Collaboration tools, project trackers, development utilities that do not process PII or sensitive data Streamlined assessment; self-attestation for some controls with 3PAO spot checks
Low 156 Low impact per FIPS 199; data whose loss would have limited adverse effect Public-facing websites, non-sensitive internal tools, document sharing for non-CUI content Full 3PAO assessment of all controls; annual reassessment
Moderate 325 Moderate impact; covers CUI, PII, financial data, and most federal operational data Email, CRM, HR systems, financial management, case management—approximately 80% of FedRAMP authorizations are at this level Comprehensive 3PAO assessment; penetration testing; annual reassessment with continuous monitoring
High 421 High impact; data whose loss could cause severe or catastrophic adverse effects including threat to life Law enforcement systems, emergency services, healthcare (life-safety), financial systems processing large transactions, critical infrastructure control systems Most rigorous 3PAO assessment; advanced penetration testing; enhanced continuous monitoring; stricter remediation timelines

Authorization Paths

FedRAMP provides authorization paths for cloud service providers (CSPs) to obtain a government-wide authorization that can be reused by multiple agencies, avoiding redundant assessments.

Agency Authorization
The CSP partners with a specific federal agency sponsor who acts as the initial authorizing authority. The agency AO reviews the 3PAO assessment and grants an ATO. Once authorized, other agencies can reuse the authorization package, issuing their own ATOs leveraging the existing assessment work. This is the primary authorization path under FedRAMP's current model.
Legacy JAB P-ATO (Discontinued)
The Joint Authorization Board (DoD, DHS, GSA) previously issued Provisional ATOs (P-ATOs) for cloud services. This path was discontinued in 2024 as FedRAMP transitioned to the FedRAMP Board structure under the FedRAMP Authorization Act. Existing JAB P-ATOs remain valid; new authorizations follow the agency path or the emerging FedRAMP 20x process.

Under the reuse model, once a CSP achieves FedRAMP authorization through any agency, the authorization package is listed in the FedRAMP Marketplace. Other agencies review the existing package, assess any agency-specific requirements, and issue their own ATO—significantly reducing time and cost compared to a full independent assessment.

FedRAMP 20x Modernization (2025–2026)

FedRAMP 20x is the most significant transformation of the program since its creation. Announced in early 2025, it shifts the authorization model from lengthy paper-based assessments toward automation-first, continuous assurance, dramatically reducing time-to-authorization.

Phase Timeline Scope Key Changes
Phase 1 Early 2025 Low baseline pilots Automated evidence collection, machine-readable security packages, reduced manual documentation. Initial pilots with select CSPs to validate the new approach.
Phase 2 Nov 2025 – Mar 2026 Moderate baseline pilots Expansion to Moderate (the most common level). Refinement of Key Security Indicators (KSIs). Integration with OSCAL-based tooling for automated package validation.
Phase 3 Q3–Q4 2026 (planned) Wide-scale adoption Full rollout across all impact levels. Legacy assessment process sunsets. Continuous authorization becomes the default operating model.

Key Security Indicators (KSIs) are the cornerstone of FedRAMP 20x. Instead of assessing hundreds of individual controls through manual review, KSIs define a focused set of measurable security outcomes that can be verified through automated telemetry. The goal: authorization in approximately 3 months versus the 18+ months typical of the traditional process.
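FedRAMP has not finalized the KSI set, so the indicator below is purely illustrative — but the pattern it shows (a measurable outcome, a threshold, a value derived from automated telemetry) is what the 20x model describes. The KSI identifier, threshold, and telemetry source are all hypothetical:

```python
# Illustrative Key Security Indicator check. The KSI name, threshold,
# and data source here are hypothetical, not official FedRAMP 20x values.
from dataclasses import dataclass

@dataclass
class KSI:
    name: str
    threshold: float   # minimum acceptable value (0.0 - 1.0)
    description: str

def evaluate(ksi: KSI, measured: float) -> dict:
    """Compare a telemetry-derived measurement against the KSI threshold."""
    return {
        "ksi": ksi.name,
        "measured": measured,
        "threshold": ksi.threshold,
        "satisfied": measured >= ksi.threshold,
    }

mfa_coverage = KSI(
    name="KSI-MFA-01",  # hypothetical identifier
    threshold=0.99,
    description="Fraction of user accounts with phishing-resistant MFA",
)

# In practice 'measured' would come from automated telemetry
# (an IdP API, a CDM dashboard feed, etc.), not a hand-entered number.
result = evaluate(mfa_coverage, measured=0.997)
print(result["satisfied"])  # True
```

The design point is that each check is machine-verifiable on a continuous basis, replacing a narrative control writeup reviewed once per assessment cycle.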

FedRAMP 20x Impact
FedRAMP 20x represents the biggest transformation in federal cloud security since FedRAMP's creation. Automation and continuous monitoring replace lengthy paper-based assessments. CSPs should begin investing in machine-readable security documentation (OSCAL) and automated evidence generation now, as these capabilities will be prerequisites for the new process.

3PAO Assessments

Third Party Assessment Organizations (3PAOs) are independent, accredited entities that perform the security assessments required for FedRAMP authorization. They play a critical role as the objective evaluators of a CSP's security posture.

  • Accreditation — 3PAOs must be accredited by the American Association for Laboratory Accreditation (A2LA) to ISO/IEC 17020, demonstrating competence and independence.
  • Initial Assessment — A full assessment of all applicable controls prior to the initial authorization decision. Includes documentation review, personnel interviews, and technical testing (including penetration testing).
  • Annual Assessment — Every year post-authorization, the 3PAO performs a reassessment of a subset of controls (typically one-third of the full control set, rotating annually to achieve full coverage over three years).
  • Significant Change Assessment — When a CSP makes a significant change to their environment (e.g., new data center, major architecture change, new interconnections), a focused 3PAO assessment of affected controls is required.
  • Independence — The 3PAO must be independent of the CSP. They cannot provide consulting services to the same CSP they assess. This separation ensures objectivity.
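The one-third annual rotation described above can be scheduled mechanically. A sketch, assuming controls are simply split round-robin — real 3PAO assessment plans also weight selection by control risk and volatility:

```python
# Partition a control set into three annual assessment groups so the
# full set is covered over a three-year cycle (simple round-robin split;
# real assessment plans also consider control risk and volatility).
def rotation_groups(controls, years=3):
    groups = [[] for _ in range(years)]
    for i, control in enumerate(sorted(controls)):
        groups[i % years].append(control)
    return groups

controls = ["AC-2", "AC-17", "AU-6", "CM-6", "IA-2", "IR-4", "RA-5", "SC-7", "SI-2"]
year1, year2, year3 = rotation_groups(controls)

# Every control appears in exactly one annual group
assert sorted(year1 + year2 + year3) == sorted(controls)
```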

POA&M Management

The Plan of Action and Milestones (POA&M) is a living document that tracks known security weaknesses and the planned remediation actions. It is one of the three core authorization artifacts (along with the SSP and SAR) and is central to continuous monitoring.

POA&M Element Description FedRAMP Requirement
Weakness ID Unique identifier for each finding (e.g., V-0001, POA&M-2025-042) Must be traceable to the SAR finding or vulnerability scan result
Control Mapping The specific 800-53 control(s) affected by the weakness Required for all findings; maps to the SSP control implementation
Risk Rating Severity: Critical, High, Moderate, Low (based on CVSS or qualitative assessment) Critical/High: 30-day remediation. Moderate: 90 days. Low: 180 days.
Remediation Plan Detailed description of corrective actions to address the weakness Must include specific technical steps, not just "will fix"
Milestones Scheduled dates for completion of each remediation step Must demonstrate progress; cannot exceed maximum remediation timelines
Status Open, In Progress, Completed, Risk Accepted, False Positive Monthly status updates required; AO must approve risk acceptance
Responsible Party The individual or team accountable for remediating the finding Named individual required; organizational role alone is insufficient

FedRAMP enforces strict remediation timelines. Overdue POA&M items trigger escalation procedures and may jeopardize the authorization. The AO can accept risk for specific findings (documented as "Risk Accepted" with rationale), but this is intended for exceptions, not routine practice.
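The remediation timelines above translate directly into due-date arithmetic that most GRC platforms automate. A minimal sketch — the 30/90/180-day windows come from the table, while the finding fields are illustrative:

```python
from datetime import date, timedelta

# FedRAMP maximum remediation windows by risk rating, in days
# (Critical/High: 30, Moderate: 90, Low: 180 — per the table above).
REMEDIATION_DAYS = {"Critical": 30, "High": 30, "Moderate": 90, "Low": 180}

def poam_due_date(identified: date, risk_rating: str) -> date:
    return identified + timedelta(days=REMEDIATION_DAYS[risk_rating])

def is_overdue(identified: date, risk_rating: str, today: date) -> bool:
    return today > poam_due_date(identified, risk_rating)

# Hypothetical POA&M entry
finding = {"id": "POA&M-2025-042", "risk": "High", "identified": date(2025, 1, 10)}
due = poam_due_date(finding["identified"], finding["risk"])
print(due)  # 2025-02-09
print(is_overdue(finding["identified"], finding["risk"], date(2025, 3, 1)))  # True
```

An overdue flag like this is typically what triggers the escalation procedures mentioned above.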

Authorization Boundary

The authorization boundary defines the scope of the FedRAMP assessment and authorization. Everything inside the boundary is directly assessed; everything outside is documented as an external connection or leveraged authorization. Getting the boundary right is one of the most critical early decisions in the FedRAMP process.

Inside the Boundary
All components that the CSP directly controls and that process, store, or transmit federal data: compute instances, databases, storage, networking infrastructure, management/administration tools, security appliances, monitoring systems, and code repositories for deployed applications. Also includes personnel with administrative access.
External Services
Third-party services that the CSP depends on but does not control. These must be documented as interconnections with data flow descriptions. If an external service is itself FedRAMP authorized, the CSP can leverage that authorization. If not, the risk must be assessed and accepted by the AO.
Data Flow Diagrams
The SSP must include detailed data flow diagrams showing how federal data enters, moves through, and exits the boundary. Diagrams must identify protocols, encryption points, authentication mechanisms, and all external connections. These diagrams are heavily scrutinized during 3PAO assessment.

OSCAL — Open Security Controls Assessment Language

OSCAL is a NIST-developed suite of formats (XML, JSON, YAML) for expressing security control information in a machine-readable way. FedRAMP has mandated OSCAL-formatted authorization packages, with full enforcement planned by September 2026.

# OSCAL Layer Model
Layer 1: Catalog
  Machine-readable representation of SP 800-53 controls
  catalog.json / catalog.xml

Layer 2: Profile
  Baseline selection and tailoring (e.g., FedRAMP Moderate)
  profile.json / profile.xml

Layer 3: Component Definition
  Reusable security capabilities of a product or service
  component-definition.json

Layer 4: System Security Plan (SSP)
  Machine-readable SSP documenting control implementations
  ssp.json / ssp.xml

Layer 5: Assessment Plan (SAP)
  Assessment scope, methodology, schedule
  assessment-plan.json

Layer 6: Assessment Results (SAR)
  Findings from control assessments
  assessment-results.json

Layer 7: POA&M
  Machine-readable plan of action and milestones
  poam.json / poam.xml

# Benefits of OSCAL:
# - Automated validation of authorization packages
# - Machine-to-machine sharing of security posture data
# - Reduced manual effort in package creation and review
# - Foundation for continuous authorization automation
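Because OSCAL packages are plain JSON (or XML/YAML), basic tooling is straightforward. The sketch below walks a minimal SSP fragment and lists its implemented control IDs. The field names follow the OSCAL SSP model, but this fragment omits many elements a valid package requires (uuids, metadata, system-characteristics, and more):

```python
import json

# Minimal OSCAL SSP fragment: real packages require uuids, metadata,
# system-characteristics, etc.; this keeps only the fields we read.
ssp_json = """
{
  "system-security-plan": {
    "control-implementation": {
      "implemented-requirements": [
        {"control-id": "ac-2", "remarks": "Managed via enterprise IdP"},
        {"control-id": "au-6", "remarks": "SIEM correlation rules"},
        {"control-id": "sc-7", "remarks": "Boundary firewall + WAF"}
      ]
    }
  }
}
"""

def implemented_controls(ssp_text: str) -> list[str]:
    """Extract the control IDs documented in an OSCAL SSP document."""
    ssp = json.loads(ssp_text)
    reqs = ssp["system-security-plan"]["control-implementation"]["implemented-requirements"]
    return [r["control-id"] for r in reqs]

print(implemented_controls(ssp_json))  # ['ac-2', 'au-6', 'sc-7']
```

Automated package validation works on the same principle at scale: parse the machine-readable artifact, then check completeness and consistency programmatically instead of by manual document review.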

FedRAMP Security Inbox Requirement

Effective January 2026, FedRAMP requires all authorized CSPs to maintain a dedicated security inbox for receiving vulnerability reports, security inquiries, and incident notifications from federal agencies and the FedRAMP PMO. This requirement formalizes a communication channel that was previously handled inconsistently across providers.

  • Format — A monitored email address published in the CSP's SSP and FedRAMP Marketplace listing (e.g., fedramp-security@provider.com)
  • Response SLA — Acknowledgment within 24 hours for all security-related communications; substantive response within 72 hours for vulnerability reports
  • Staffing — Must be monitored by personnel with authority to initiate incident response and remediation activities
  • Enforcement — Non-compliance may result in Significant Change Request triggers and POA&M entries during ConMon reviews
Compliance Timeline
The security inbox requirement was announced in late 2025 with an effective date of January 2026. CSPs with existing FedRAMP authorizations had a 90-day grace period to establish and document their inbox. New authorization packages submitted after January 2026 must include the security inbox from the outset.
Section 06

FISMA & Federal Compliance

The Federal Information Security Modernization Act (FISMA) is the foundational law governing information security across all federal agencies. Originally enacted as the Federal Information Security Management Act of 2002 and modernized in 2014, FISMA mandates that every agency develop, document, and implement an enterprise-wide information security program.

FISMA Overview

FISMA establishes the legal foundation for federal cybersecurity. It does not prescribe specific technical controls—instead, it delegates that responsibility to NIST, mandates agency compliance programs, and establishes reporting requirements to OMB and Congress.

Aspect Detail
Original Enactment Federal Information Security Management Act of 2002 (Title III of the E-Government Act, P.L. 107-347). Established the framework for protecting government information and operations.
2014 Modernization Federal Information Security Modernization Act (P.L. 113-283). Shifted emphasis from paperwork compliance to real security outcomes. Elevated DHS/CISA's operational role. Required continuous monitoring and ongoing authorization. Codified OMB's authority over agency cybersecurity.
Agency Requirements Each agency must: (1) designate a senior agency information security officer (SAISO/CISO), (2) develop and maintain an information security program, (3) conduct periodic risk assessments, (4) implement NIST standards, (5) provide security awareness training, and (6) report annually to OMB.
Oversight Structure OMB sets policy and metrics. CISA provides operational guidance and incident response support. NIST develops standards and guidelines. Agency IGs assess compliance. GAO conducts government-wide audits.
Applicability All federal agencies, their contractors, and any organization operating or using an information system on behalf of a federal agency. Extends to state agencies administering federal programs.
Relationship to NIST FISMA directs NIST to develop standards (FIPS) and guidelines (Special Publications) for federal information security. Agencies are required to comply with FIPS and must follow SPs unless a waiver is granted.

FIPS 199 System Categorization — Detailed

FIPS 199 (Standards for Security Categorization of Federal Information and Information Systems) is the mandatory first step under FISMA. Every federal information system must be categorized before controls can be selected. The categorization determines the security baseline and drives the rigor of every subsequent step in the RMF process.

The three security objectives are defined in FIPS 199 with specific impact definitions:

Impact Level Confidentiality Impact Integrity Impact Availability Impact
Low Unauthorized disclosure would have a limited adverse effect on organizational operations, assets, or individuals. Examples: public data, non-sensitive internal memos. Unauthorized modification or destruction would have a limited adverse effect. Data inaccuracy causes minor inconvenience but no significant harm. Disruption of access would have a limited adverse effect. Brief outages are tolerable; no mission impact beyond inconvenience.
Moderate Unauthorized disclosure would have a serious adverse effect on operations, assets, or individuals. Examples: PII, CUI, procurement-sensitive data, law enforcement data. Unauthorized modification would have a serious adverse effect. Incorrect data could cause significant financial loss, degraded mission capability, or harm to individuals. Disruption would have a serious adverse effect. Extended outage would significantly reduce mission effectiveness or cause notable financial loss.
High Unauthorized disclosure would have a severe or catastrophic adverse effect. Examples: classified material, intelligence sources, national security data, data whose exposure could cause loss of life. Unauthorized modification would have a severe or catastrophic adverse effect. Corruption could cause loss of life, large-scale financial catastrophe, or mission failure. Disruption would have a severe or catastrophic adverse effect. Loss of availability could directly threaten life safety, national security, or critical infrastructure.
# FIPS 199 Formal Notation
SC information type = {
    (confidentiality, [LOW | MODERATE | HIGH]),
    (integrity,        [LOW | MODERATE | HIGH]),
    (availability,     [LOW | MODERATE | HIGH])
}

# Example: Financial Management Information
SC financial data = {
    (confidentiality, MODERATE),
    (integrity,        MODERATE),
    (availability,     LOW)
}

# Example: Law Enforcement Investigation Records
SC investigation records = {
    (confidentiality, HIGH),
    (integrity,        MODERATE),
    (availability,     LOW)
}

# System categorization = HIGH-WATER MARK across all info types
# If ANY information type on the system has a High rating
# for ANY objective, the system is categorized as High.

FIPS 200 — Minimum Security Requirements

FIPS 200 (Minimum Security Requirements for Federal Information and Information Systems) defines 17 security-related areas that form the minimum standard for all federal systems. These areas map directly to the SP 800-53 control families and ensure baseline coverage regardless of system categorization.

# Security Area 800-53 Family Requirement Summary
1 Access Control AC Limit access to authorized users, processes, and devices; enforce approved authorizations
2 Awareness & Training AT Ensure personnel are adequately trained to carry out security responsibilities
3 Audit & Accountability AU Create, protect, and retain audit records; ensure individual accountability
4 Certification, Accreditation & Security Assessment CA Periodically assess controls, authorize systems, and monitor continuously
5 Configuration Management CM Establish and enforce configuration baselines; control changes to systems
6 Contingency Planning CP Establish plans for emergency response, backup, and recovery operations
7 Identification & Authentication IA Identify and authenticate users, processes, and devices before granting access
8 Incident Response IR Establish operational incident handling capabilities
9 Maintenance MA Perform timely maintenance; control maintenance tools and remote access
10 Media Protection MP Protect, sanitize, and control physical and digital media
11 Physical & Environmental Protection PE Limit physical access; protect the physical plant and support infrastructure
12 Planning PL Develop, document, and maintain security plans describing controls
13 Personnel Security PS Ensure trustworthiness of individuals; protect information during transfers/terminations
14 Risk Assessment RA Periodically assess risk; scan for vulnerabilities and remediate findings
15 System & Services Acquisition SA Integrate security into the SDLC; manage supply chain risks in acquisitions
16 System & Communications Protection SC Monitor, control, and protect communications; implement architectural protections
17 System & Information Integrity SI Identify, report, and correct system flaws in a timely manner; protect against malicious code

Annual Reporting Requirements

FISMA mandates an annual reporting cycle in which agencies submit detailed metrics on their information security posture to OMB and Congress. These reports drive agency cybersecurity grades and influence budget decisions.

Reporting Element Frequency Responsible Party Description
CIO FISMA Metrics Annual (with quarterly data feeds) Agency CIO / CISO Standardized metrics covering identify, protect, detect, respond, and recover capabilities. Includes data on system inventory, authorization status, vulnerability management, and incident response.
IG FISMA Evaluation Annual Agency Inspector General Independent assessment of the agency's information security program effectiveness. Uses the IG FISMA Reporting Metrics, evaluating maturity across five function areas. Determines the agency's overall maturity level.
CyberScope Submission Monthly & Annual Agency ISSO / ISSM Automated data feeds into the CyberScope platform (managed by DHS CISA). Captures machine-readable security data including CDM dashboard feeds, vulnerability scan results, and system authorization status.
SAOP Privacy Report Annual Senior Agency Official for Privacy Reports on the agency's privacy program, PII breach statistics, Privacy Impact Assessments, and System of Records Notices.
Congressional Report Annual OMB (aggregated) OMB compiles agency data into a government-wide report to Congress on the state of federal cybersecurity, typically including agency scorecards and cross-government maturity trends.

FISMA maturity is assessed on a five-level scale: Level 1 (Ad Hoc), Level 2 (Defined), Level 3 (Consistently Implemented), Level 4 (Managed and Measurable), and Level 5 (Optimized). Most agencies target Level 4 (Managed and Measurable) as the minimum acceptable posture.

Continuous Monitoring (ConMon) per NIST SP 800-137

NIST SP 800-137 (Information Security Continuous Monitoring for Federal Information Systems and Organizations) defines the ISCM program that transforms FISMA compliance from a periodic "snapshot" into an ongoing assurance posture. ConMon is the operational engine that keeps authorization relevant between formal reassessments.

ISCM (Information Security Continuous Monitoring) Strategy Components:

Define
Define the ISCM strategy including monitoring objectives, security metrics, monitoring frequencies, and assessment methodologies. The strategy must align with organizational risk tolerance and the system's FIPS 199 categorization.
Establish
Establish an ISCM program with policies, procedures, and governance. Define roles and responsibilities. Select monitoring tools and technologies. Establish baseline configurations and expected security posture indicators.
Implement
Implement the program: deploy monitoring tools, configure automated scanning, establish data feeds to security dashboards, integrate with the organization's SIEM/SOC, and begin collecting security metrics per the defined schedule.
Analyze & Report
Analyze collected data to determine security effectiveness. Correlate findings across sources. Produce actionable reports for system owners and authorizing officials. Feed findings into POA&M and risk register updates.
Respond
Respond to findings through remediation, risk acceptance, or risk transfer. Update the SSP and POA&M. Escalate significant changes to the AO for reauthorization decisions. Adjust the monitoring strategy based on lessons learned.
Review & Update
Periodically review the ISCM strategy and program for effectiveness. Update monitoring frequencies, tools, and metrics based on changes in the threat landscape, organizational priorities, and lessons learned from incidents.

Monitoring Frequency Requirements by Control Type:

Not all controls need to be monitored at the same frequency. The monitoring strategy should define assessment schedules based on control volatility, risk, and operational impact.

Control Category Example Controls Recommended Frequency Rationale
Technical (Volatile) AC-2 Account Management, CM-6 Configuration Settings, SI-2 Flaw Remediation Monthly or more frequent These controls change frequently as users join/leave, patches are applied, and configurations drift. Automated monitoring is essential.
Technical (Stable) SC-7 Boundary Protection, SC-12 Cryptographic Key Establishment & Management, IA-2 Identification & Authentication Quarterly Architecture-level controls change less often but still require regular validation that the infrastructure has not been modified or degraded.
Operational IR-4 Incident Handling, CP-4 Contingency Plan Testing, AT-2 Training Annually (with event-driven reassessment) Process-oriented controls require periodic exercise or testing but do not change on a day-to-day basis.
Management PL-2 Security Plans, RA-3 Risk Assessment, PM-9 Risk Management Strategy Annually or upon significant change Policy and planning documents are reviewed and updated on an annual cycle or when the system undergoes a significant change.
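The frequencies in the table can drive a simple assessment scheduler. A sketch — the category-to-interval mapping mirrors the table, with monthly/quarterly/annual approximated as 30/90/365 days, and the control inventory is illustrative:

```python
from datetime import date, timedelta

# Assessment intervals by control category, mirroring the table above
# (monthly/quarterly/annual approximated as 30/90/365 days).
INTERVAL_DAYS = {
    "technical-volatile": 30,
    "technical-stable": 90,
    "operational": 365,
    "management": 365,
}

def next_assessment(last_assessed: date, category: str) -> date:
    return last_assessed + timedelta(days=INTERVAL_DAYS[category])

def due_controls(controls, today: date):
    """controls: list of (control_id, category, last_assessed) tuples."""
    return [cid for cid, cat, last in controls
            if next_assessment(last, cat) <= today]

# Hypothetical monitoring inventory
inventory = [
    ("AC-2", "technical-volatile", date(2025, 5, 1)),
    ("SC-7", "technical-stable",   date(2025, 5, 1)),
    ("IR-4", "operational",        date(2025, 5, 1)),
]
print(due_controls(inventory, date(2025, 6, 15)))  # ['AC-2']
```

In a real ISCM program this schedule would live in the GRC platform, with event-driven reassessment (e.g., after a significant change) layered on top of the calendar.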

ConMon Scanning Cadence

Continuous monitoring relies on a disciplined scanning cadence. The following table defines the minimum frequencies for common monitoring activities, aligning with FedRAMP ConMon requirements and NIST SP 800-137 guidance.

Activity Minimum Frequency Recommended Frequency Tools / Methods Output
Vulnerability Scanning Monthly Weekly Nessus, Qualys, Rapid7, OpenVAS Scan report with CVSS-scored findings; feeds POA&M
Configuration Compliance Monthly Weekly SCAP/STIG scanners, CIS-CAT, Chef InSpec, Ansible compliance Compliance score per benchmark; drift detection report
Inventory Updates Continuous / On-change Continuous CDM tools, ServiceNow CMDB, AWS Config, Azure Resource Graph Up-to-date asset inventory; unauthorized device detection
POA&M Review Monthly Monthly GRC platform (Xacta, eMASS, CSAM, RegScale) Updated POA&M with status changes, milestone progress
Penetration Testing Annual Annual + after major changes 3PAO or internal red team; Burp Suite, Metasploit, Cobalt Strike Penetration test report with exploitable findings
Full Control Assessment Annual (one-third rotation) Annual (one-third rotation) SCA/3PAO interviews, documentation review, technical testing Updated SAR; new/closed POA&M items
Log Review & Analysis Daily Continuous (real-time) SIEM (Splunk, Elastic, Sentinel), SOC analysts Incident alerts, anomaly reports, audit trail verification

Dashboard Metrics and KPIs: Effective ConMon programs surface key performance indicators to authorizing officials and system owners through security dashboards. Common metrics include: percentage of systems with current ATO, average time to remediate critical vulnerabilities, POA&M aging analysis, patch currency rates, configuration compliance scores, and incident response times. These metrics enable data-driven risk decisions rather than relying on periodic assessment snapshots.
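Several of these metrics can be computed directly from POA&M exports. The sketch below assumes a hypothetical poam.csv with columns id,severity,opened,closed (ISO 8601 dates) and GNU date syntax; the file layout is illustrative, not a standard export format.

```shell
# Mean days-to-remediate for closed critical findings, from a hypothetical
# poam.csv with columns: id,severity,opened,closed (ISO 8601 dates).
# Uses GNU date; the column layout is illustrative, not a standard.
mean_ttr_critical() {
  total=0; n=0
  while IFS=, read -r id severity opened closed; do
    [ "$severity" = "critical" ] && [ -n "$closed" ] || continue
    days=$(( ( $(date -ud "$closed" +%s) - $(date -ud "$opened" +%s) ) / 86400 ))
    total=$((total + days)); n=$((n + 1))
  done < "$1"
  [ "$n" -gt 0 ] && echo $((total / n))
}
# Usage: mean_ttr_critical poam.csv
```

Feeding a figure like this into a dashboard replaces a periodic assessment snapshot with a trend the AO can watch month over month.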

Key Principle
FISMA compliance is not a one-time event. Continuous monitoring transforms the traditional "snapshot" ATO into an ongoing authorization posture. The goal is to maintain a real-time understanding of risk so that authorizing officials can make informed decisions at any point in time, not just during the three-year reauthorization cycle.
Section 07

STIGs & CIS Benchmarks

Security Technical Implementation Guides (STIGs) and Center for Internet Security (CIS) Benchmarks are the two dominant configuration-hardening standards in federal and enterprise IT. Together they define exactly how to lock down operating systems, applications, network devices, and databases to reduce attack surface.

DISA STIGs Overview

Security Technical Implementation Guides are published by the Defense Information Systems Agency (DISA) and define configuration standards for all systems operating on DoD networks. STIGs are the authoritative source for how specific products must be hardened, and STIG compliance is a prerequisite for ATO in DoD environments.

Each STIG is structured as a collection of rules (also called findings or checks). Every rule has a unique identifier (e.g., V-254239), a description of the required configuration, a check procedure for auditing compliance, and a fix procedure for remediation. Rules are grouped into categories by technical area such as authentication, logging, network configuration, or file permissions.

Severity Categories: Each STIG rule is assigned a severity category (CAT) that reflects the potential impact of non-compliance.

Severity Level Description Impact of Non-Compliance
CAT I High Any vulnerability that will directly and immediately result in loss of confidentiality, availability, or integrity Immediate risk to the system or network. Open CAT I findings can prevent ATO or trigger DATO. Must be remediated before authorization.
CAT II Medium Any vulnerability that has the potential to result in loss of confidentiality, availability, or integrity Potential risk that could be exploited. Must be tracked in POA&M with defined remediation timelines. Excessive CAT IIs raise overall risk.
CAT III Low Any vulnerability that degrades measures to protect against loss of confidentiality, availability, or integrity Degrades overall security posture. Should be remediated as part of continuous improvement. Tracked in POA&M but lower priority.

Finding States: When evaluating a system against a STIG, each rule is assigned one of four states:

  • Open — The system does not meet the STIG requirement. A finding exists and must be remediated or documented in the POA&M.
  • Not a Finding — The system meets the STIG requirement. The check procedure confirms compliance.
  • Not Applicable — The STIG rule does not apply to this system (e.g., a rule about IIS on a Linux system). Requires documented justification.
  • Not Reviewed — The rule has not yet been evaluated. No compliance determination has been made.
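These states appear verbatim inside STIG Viewer checklist (.ckl) files, which are XML. A quick shell tally is often useful; the STATUS strings below (Open, NotAFinding, Not_Applicable, Not_Reviewed) match typical STIG Viewer exports, but verify them against your own checklists.

```shell
# Tally finding states across a STIG Viewer checklist (.ckl is XML).
# The <STATUS> values (Open, NotAFinding, Not_Applicable, Not_Reviewed)
# reflect typical STIG Viewer exports; confirm against your checklists.
ckl_summary() {
  grep -o '<STATUS>[^<]*</STATUS>' "$1" \
    | sed 's/<[^>]*>//g' \
    | sort | uniq -c | sort -rn
}
# Usage: ckl_summary system.ckl
```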

Common STIGs

DISA publishes hundreds of STIGs covering virtually every product deployed in DoD environments. The following table lists the most commonly encountered STIGs with approximate rule counts and update cadences. Rule counts change with each release as new findings are added and obsolete ones are retired.

STIG Name Version (Approx.) Rule Count (Approx.) Update Frequency
Windows Server 2022 V1R5+ ~280 Quarterly
Windows 11 V1R5+ ~260 Quarterly
RHEL 8 V1R13+ ~350 Quarterly
RHEL 9 V1R2+ ~300 Quarterly
Ubuntu 22.04 V1R1+ ~230 Quarterly
Cisco IOS XE V2R2+ ~180 Quarterly
Palo Alto Firewall V3R1+ ~120 Quarterly
Oracle Database 19c V1R2+ ~200 Quarterly
PostgreSQL 15 V1R1+ ~130 Semi-annually
Apache Server 2.4 V2R6+ ~100 Quarterly
Microsoft IIS 10 V2R10+ ~95 Quarterly
VMware vSphere 8.0 ESXi V1R2+ ~160 Quarterly

STIG Viewer & Workflow

STIG Viewer is DISA's official desktop application for browsing STIGs, creating checklists, and documenting compliance findings. It is the primary tool used by ISSOs, system administrators, and assessors to manage STIG compliance on a per-system basis.

STIG Viewer Workflow:

  1. Import STIGs — Download the latest STIG Library from public.cyber.mil and import the relevant STIG XML files into STIG Viewer.
  2. Create a Checklist — Generate a new checklist (.ckl file) for the target system. A checklist maps a specific STIG to a specific host.
  3. Evaluate Rules — For each rule, run the check procedure against the target system and record the finding state (Open, Not a Finding, Not Applicable, or Not Reviewed).
  4. Document Findings — For each Open finding, document the finding detail, severity override (if applicable), and any comments or compensating controls.
  5. Import SCAP Results — Optionally import automated SCAP scan results (from SCC or OpenSCAP) to pre-populate finding states, reducing manual effort.
  6. Export & Submit — Export the completed checklist as a .ckl file for inclusion in the authorization package. Multiple checklists are aggregated for the full system.

STIG Viewer supports bulk operations for large environments: you can create checklist templates, import/export multiple checklists, and merge SCAP results across many hosts. For automation at scale, many organizations use tools like Ansible, Chef InSpec, or Puppet to evaluate and remediate STIG findings programmatically.

CIS Controls v8

The CIS Critical Security Controls (version 8, released May 2021) are a prioritized set of 18 cybersecurity best practices developed by the Center for Internet Security. Unlike the NIST 800-53 catalog (which is comprehensive and technology-neutral), CIS Controls are prescriptive, actionable, and ordered by defensive priority. They answer the question: "What should we do first?"

# Control Name Description
1 Inventory & Control of Enterprise Assets Actively manage all enterprise assets connected to the infrastructure to accurately know what needs to be monitored and protected
2 Inventory & Control of Software Assets Actively manage all software on the network so only authorized software is installed and can execute
3 Data Protection Develop processes and technical controls to identify, classify, securely handle, retain, and dispose of data
4 Secure Configuration of Enterprise Assets & Software Establish and maintain secure configurations for enterprise assets and software using configuration management processes
5 Account Management Use processes and tools to assign and manage authorization to credentials for user accounts and service accounts
6 Access Control Management Use processes and tools to create, assign, manage, and revoke access credentials and privileges for user, administrator, and service accounts
7 Continuous Vulnerability Management Develop a plan to continuously assess and track vulnerabilities on all enterprise assets to remediate and minimize the window of opportunity for attackers
8 Audit Log Management Collect, alert, review, and retain audit logs of events that could help detect, understand, or recover from an attack
9 Email & Web Browser Protections Improve protections and detections of threats from email and web vectors, as these are the most common attack entry points
10 Malware Defenses Prevent or control the installation, spread, and execution of malicious applications, code, or scripts on enterprise assets
11 Data Recovery Establish and maintain data recovery practices sufficient to restore in-scope enterprise assets to a pre-incident and trusted state
12 Network Infrastructure Management Establish and maintain the secure configuration of network devices and manage network infrastructure using a secure network architecture
13 Network Monitoring & Defense Operate processes and tooling to establish and maintain comprehensive network monitoring and defense against security threats
14 Security Awareness & Skills Training Establish and maintain a security awareness program to influence behavior among the workforce to be security conscious and properly skilled
15 Service Provider Management Develop a process to evaluate service providers who hold sensitive data or are responsible for critical enterprise platforms or processes
16 Application Software Security Manage the security lifecycle of in-house developed, hosted, or acquired software to prevent, detect, and remediate security weaknesses
17 Incident Response Management Establish a program to develop and maintain an incident response capability to prepare, detect, and quickly respond to attacks
18 Penetration Testing Test the effectiveness and resiliency of enterprise assets through identifying and exploiting weaknesses in controls and simulating attacker objectives

Implementation Groups (IGs): CIS Controls are organized into three Implementation Groups that represent increasing levels of cybersecurity maturity and resource investment. Every organization should start with IG1.

Implementation Group Profile Safeguards Cumulative Total Description
IG1 Essential Cyber Hygiene 56 56 Minimum standard for all organizations. Addresses the most common attacks with basic safeguards. Small to medium enterprises with limited cybersecurity expertise.
IG2 Enterprise-Level +74 130 For organizations with moderate risk profiles that store and process sensitive data. Requires dedicated cybersecurity staff and technology. Addresses more sophisticated attacks.
IG3 Advanced +23 153 For organizations that handle highly sensitive data or face advanced persistent threats. Requires mature security programs with specialized expertise. Addresses sophisticated and targeted attacks.

CIS Benchmarks vs. STIGs

CIS Benchmarks are consensus-based secure configuration guides for specific technologies (operating systems, databases, cloud platforms, etc.). While they serve a similar purpose to STIGs, there are important differences in authority, audience, and implementation detail.

Dimension STIGs (DISA) CIS Benchmarks
Authority Mandated by DoD policy (DoDI 8500.01). Required for all DoD information systems. Voluntary, consensus-based. Developed by community of security practitioners and vendors.
Primary Audience DoD agencies, contractors, and systems operating on DoD networks All organizations (federal, state/local, private sector, international)
Severity Levels CAT I (High), CAT II (Medium), CAT III (Low) Level 1 (essential, minimal impact), Level 2 (defense-in-depth, may reduce functionality)
Assessment Tools SCAP Compliance Checker (SCC), STIG Viewer, OpenSCAP CIS-CAT Pro, CIS-CAT Lite, Chef InSpec, custom scripts
Update Cadence Quarterly (aligned with vendor patch cycles) Variable (typically updated within weeks of major product releases)
Scoring Pass/fail per rule; overall compliance percentage Scored (Level 1 and Level 2) and Not Scored (best practice recommendations)
Format XML (XCCDF/SCAP), viewable in STIG Viewer PDF document, XCCDF/OVAL for automated assessment
Cost Free (publicly available on public.cyber.mil) Free (PDF); CIS-CAT Pro requires CIS SecureSuite membership

In practice, many organizations use both: STIGs for DoD compliance requirements and CIS Benchmarks for technologies not covered by STIGs or for non-DoD systems. The two standards overlap significantly for common platforms, though specific settings may differ.

CIS Hardened Images

CIS provides pre-hardened virtual machine images that are pre-configured to meet CIS Benchmark recommendations out of the box. These images are available on all major cloud marketplaces and are updated monthly to incorporate the latest benchmark revisions and security patches.

AWS Marketplace
CIS Hardened Images available for Amazon Linux 2, RHEL, Ubuntu, Windows Server, and more. Available in both commercial and GovCloud regions. Billed hourly on top of standard EC2 costs.
Azure Marketplace
CIS images for Windows Server, Ubuntu, RHEL, and CentOS. Available in Azure Commercial and Azure Government. Deployable via ARM templates or Azure CLI.
GCP Marketplace
CIS images for Debian, Ubuntu, RHEL, CentOS, and Windows Server. Can be deployed via Deployment Manager or gcloud CLI. Available in standard and Assured Workloads environments.
Monthly Updates
All CIS Hardened Images are rebuilt monthly to incorporate the latest OS patches and benchmark revisions. Organizations should implement a golden image pipeline to track and deploy updates.

Using CIS Hardened Images as base images significantly accelerates compliance. Instead of hardening a vanilla OS post-deployment, you start from a compliant baseline and only need to manage application-specific configurations and any deviations required by your environment.

SCAP & OVAL

The Security Content Automation Protocol (SCAP) is a suite of specifications maintained by NIST that enables automated configuration checking, vulnerability assessment, and compliance verification. SCAP is the technical backbone that makes large-scale STIG and CIS compliance feasible.

SCAP Component Specifications:

  • XCCDF (Extensible Configuration Checklist Description Format) — XML language for writing security checklists. Defines the rules, profiles, and remediation scripts. STIGs and CIS Benchmarks are published in XCCDF format.
  • OVAL (Open Vulnerability and Assessment Language) — XML language for describing system state checks. Each XCCDF rule references OVAL definitions that describe exactly how to test for compliance.
  • CVE (Common Vulnerabilities and Exposures) — Standard identifiers for publicly known vulnerabilities.
  • CPE (Common Platform Enumeration) — Standard naming scheme for IT products and platforms.
  • CVSS (Common Vulnerability Scoring System) — Standard for scoring vulnerability severity.
  • CCE (Common Configuration Enumeration) — Standard identifiers for system configuration issues.
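To make these naming schemes concrete, here is each of the two enumeration formats applied to a real, well-known example (CVE-2021-44228 is the Log4j "Log4Shell" vulnerability; the CPE string names Red Hat Enterprise Linux 9.0):

```text
CVE-2021-44228                                       # CVE identifier (Log4Shell)
cpe:2.3:o:redhat:enterprise_linux:9.0:*:*:*:*:*:*:*  # CPE 2.3 formatted string
```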

OpenSCAP is the most widely used open-source implementation of the SCAP standard. It provides command-line tools and libraries for scanning systems against XCCDF/OVAL content.

# Scan system against a STIG profile
oscap xccdf eval \
  --profile stig \
  --results results.xml \
  --report report.html \
  ssg-rhel9-ds.xml

# Generate a fix script from STIG profile
oscap xccdf generate fix \
  --profile stig \
  --output remediate.sh \
  ssg-rhel9-ds.xml

# List available profiles in a data stream
oscap info ssg-rhel9-ds.xml

# Validate SCAP content
oscap xccdf validate ssg-rhel9-ds.xml

SCAP Compliance Checker (SCC) is DoD's official scanning tool, developed by the Naval Information Warfare Center (NIWC) Atlantic and distributed to DoD users through the DISA Cyber Exchange. SCC provides a GUI and CLI for scanning systems against STIGs using SCAP content. It can scan local and remote hosts, supports Windows, Linux, Solaris, and network devices, and produces results in both .ckl (STIG Viewer) and XCCDF formats. SCC is the standard tool used during RMF assessments.

DoD Mandate
In DoD environments, STIG compliance is mandatory. Open CAT I findings can prevent system authorization. All systems must be scanned using SCC or an equivalent SCAP tool prior to assessment, and open findings must be documented in the POA&M with defined remediation timelines. Authorizing Officials may issue a Denial of Authorization to Operate (DATO) for systems with unresolved CAT I findings.
Section 08

The ATO Process

Authorization to Operate (ATO) is the formal declaration by an Authorizing Official that a system may operate in a production environment, accepting the residual risk to organizational operations, assets, and individuals. The ATO process is the culmination of the Risk Management Framework and represents the single most important milestone in a system's compliance lifecycle.

ATO Overview

An Authorization to Operate (ATO) is not merely a checkbox or a rubber stamp. It is a risk-based decision made by a senior executive (the Authorizing Official) who is personally accountable for the consequences of that decision. The ATO process produces a comprehensive authorization package that documents the system's security posture, the risks it carries, and the compensating controls and monitoring strategies in place to manage those risks.

Why ATO matters: Without an ATO, a federal information system cannot process, store, or transmit government data in a production environment. Operating without authorization is a federal policy violation that exposes the agency to regulatory action, audit findings, and unaccepted risk. Every system in the federal inventory — from a minor internal web application to a multi-billion-dollar weapons platform — requires an authorization decision.

Typical Timelines:

Approach Timeline Description
Traditional RMF 6–18 months Full documentation-heavy approach. Suitable for complex systems with large authorization boundaries, significant interconnections, and high-impact categorizations. Common in DoD and IC environments.
Streamlined / Agile RMF 3–6 months Accelerated approach using automated evidence collection, pre-approved components, and inherited controls from cloud providers. Common for FedRAMP-authorized platforms and DevSecOps pipelines.
FedRAMP (Initial) 6–12 months Cloud service authorization through FedRAMP. Includes 3PAO assessment, agency review, and FedRAMP PMO review. Reuse by subsequent agencies is significantly faster.
Continuous ATO (cATO) Ongoing Replaces periodic reauthorization with continuous monitoring and real-time evidence. Initial setup requires significant investment in automation, but eliminates the 3-year cycle.

Authorization Package Components

The authorization package is the collection of documents submitted to the Authorizing Official for review. Together, they present a complete picture of the system, its security posture, its residual risks, and the plan for ongoing monitoring.

SSP System Security Plan
The primary authorization document. Describes the system boundary, architecture, data flows, user roles, interconnections, and how each applicable security control is implemented. For a FIPS 199 Moderate system, the SSP typically ranges from 300 to 800 pages.
Core Artifact · Updated continuously
SAR Security Assessment Report
Documents the results of independent security testing conducted by the Security Control Assessor (SCA) or Third Party Assessment Organization (3PAO). Includes findings, risk ratings, and recommendations for each control tested.
Core Artifact · Produced by independent assessor
POA&M Plan of Action & Milestones
A living document that tracks all known security weaknesses and vulnerabilities, the planned remediation actions, responsible parties, estimated completion dates, and current status. Updated monthly as part of continuous monitoring.
Living Document · Updated monthly
RAR Risk Assessment Report
Formal risk characterization that identifies threats, vulnerabilities, likelihood, and impact for the system. Provides the risk context that informs the AO's authorization decision. Follows NIST SP 800-30 methodology.
Supporting · Per NIST SP 800-30
ConMon Continuous Monitoring Plan
Defines the ongoing assessment strategy after ATO is granted: control assessment schedule, vulnerability scanning cadence, POA&M review frequency, annual assessment scope, and escalation procedures for significant changes or findings.
Supporting · Per NIST SP 800-137

Authorization Decision Types

After reviewing the authorization package, the Authorizing Official issues one of the following decisions. The decision reflects the AO's assessment of residual risk relative to the organization's risk tolerance.

Decision Abbreviation Duration Description
Authorization to Operate ATO Typically 3 years Full authorization. The AO accepts the residual risk and authorizes the system to operate in production. Subject to continuous monitoring requirements and annual reassessment of a subset of controls.
ATO with Conditions ATO-C Up to 3 years (with constraints) Authorized with specific constraints or timelines. Common conditions include: "remediate all CAT I findings within 30 days," "implement MFA by Q2," or "restrict external access until boundary protections are verified."
Denial of Authorization to Operate DATO N/A — system cannot operate The system is not authorized and must not process government data. Issued when residual risk is unacceptable. The system must be disconnected or remediated before seeking reauthorization.
Interim Authorization to Operate IATO 90–180 days Temporary authorization for a limited period. Used when the system has known deficiencies that can be remediated within the interim period. May not be renewed indefinitely; typically limited to one or two extensions. A legacy DIACAP-era term; under RMF, most organizations now issue an ATO with conditions instead.

SSP Deep Dive

The System Security Plan (SSP) is the most important document in the authorization package. It is both the technical blueprint and the compliance narrative for the system. Assessors, auditors, and the Authorizing Official all use the SSP as their primary reference. A well-written SSP significantly accelerates the authorization process; a poorly written one guarantees delays.

Key Sections of a System Security Plan:

System Identification
System name, unique identifier, version, operational status, and FIPS 199 security categorization (confidentiality, integrity, availability impact levels). Defines the system's identity in the agency's inventory.
Authorization Boundary
Precisely defines what is "in scope" for authorization: hardware, software, network segments, cloud resources, and personnel. Everything inside the boundary is the system owner's responsibility. Boundary diagrams are required.
Architecture Diagrams
Network topology diagrams, logical architecture diagrams, and deployment diagrams showing how components interconnect. Must show security controls at each layer (boundary devices, segmentation, encryption points).
Data Flows
Internal data flows (between system components) and external data flows (to/from other systems, users, and the internet). Must identify data types, protocols, encryption in transit, and any sensitive data paths.
Control Implementation Statements
For each applicable control from the 800-53 baseline, a detailed narrative describing how the control is implemented, who is responsible, and what evidence exists. This is the bulk of the SSP by page count.
User Roles & Responsibilities
Defines user categories (privileged administrators, general users, auditors), access privileges, authentication requirements, and the separation of duties model. Maps roles to access control policies.
Interconnection Agreements
Interconnection Security Agreements (ISAs) and Memoranda of Understanding (MOUs) for all external system connections. Documents the security responsibilities of each party, data exchanged, and protections in place.
Ports, Protocols & Services
A complete inventory of network ports, protocols, and services used by the system. Must document inbound and outbound traffic flows, justification for each service, and any prohibited protocols.

Continuous ATO (cATO)

Continuous Authorization to Operate (cATO) is an emerging approach that replaces the traditional three-year reauthorization cycle with an ongoing authorization posture based on real-time risk visibility. Instead of producing a massive paper-based authorization package every three years, cATO relies on continuous automated evidence collection, real-time security dashboards, and ongoing risk assessment.

Core Requirements for cATO:

  • Automated Scanning — Continuous vulnerability scanning, configuration compliance checks, and asset inventory updates running on automated schedules (daily or more frequent).
  • Real-Time Dashboards — Security posture dashboards accessible to the AO, ISSM, and ISSO that provide current risk metrics, vulnerability trends, compliance scores, and anomaly indicators.
  • Continuous Evidence Collection — Automated collection and storage of compliance evidence (scan results, configuration snapshots, audit logs) that replaces manual evidence gathering.
  • DevSecOps Pipeline Integration — Security testing integrated into CI/CD pipelines so that every code change and infrastructure modification is automatically assessed against security requirements.
  • Defined Thresholds & Triggers — Pre-agreed risk thresholds that, when exceeded, trigger escalation to the AO for review. Replaces the subjective "significant change" determination.
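A threshold trigger can be sketched in a few lines of shell. This assumes a hypothetical findings.csv export with columns id,severity,status; the column layout and the escalation limit are illustrative, not from any standard.

```shell
# Sketch of a cATO-style threshold trigger: flag for AO escalation when
# open critical findings exceed a pre-agreed limit. Assumes a hypothetical
# findings.csv with columns: id,severity,status.
check_threshold() {
  file=$1; limit=$2
  open=$(grep -c ',critical,open$' "$file")
  if [ "$open" -gt "$limit" ]; then
    echo "ESCALATE: $open open criticals (threshold $limit)"
  else
    echo "OK: $open open criticals"
  fi
}
# Usage: check_threshold findings.csv 2
```

In a real pipeline this check would run on every scan cycle and raise the alert through the ConMon dashboard rather than stdout.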

DoD Adoption: The Department of Defense has embraced cATO through the DoD DevSecOps Reference Design and the Platform One initiative. DoD CIO guidance now explicitly supports continuous authorization for systems that can demonstrate ongoing risk visibility. Several DoD programs (including Platform One's Big Bang deployment and Iron Bank container registry) operate under cATO.

The Future of Authorization
cATO is the future — it replaces the "3-year cycle" with ongoing authorization through continuous monitoring and automated evidence collection. Organizations investing in DevSecOps, infrastructure as code, and automated compliance tooling are positioning themselves for cATO adoption. The key insight is that authorization becomes a continuous process rather than a periodic event.

Common ATO Pitfalls

The following are the most frequently encountered issues that delay or derail the ATO process. Each represents a failure mode that experienced compliance teams learn to anticipate and prevent.

Incomplete Authorization Boundary
The boundary diagram does not include all components (e.g., missing a database server, an external API connection, or a management network). Assessors will flag every undocumented component. Remediation requires rewriting portions of the SSP and re-scoping the assessment.
Undocumented Interconnections
External connections to other systems lack Interconnection Security Agreements (ISAs) or Memoranda of Understanding (MOUs). Every data flow crossing the boundary must be documented and authorized. Missing ISAs can halt the assessment.
Open CAT I Findings
Critical STIG or vulnerability scan findings remain unremediated. Most AOs will not grant ATO with open CAT I findings. These must be remediated or have an approved operational risk acceptance with documented compensating controls.
Missing POA&M Items
Known vulnerabilities or non-compliant controls are not tracked in the POA&M. Every open finding must have a POA&M entry with a responsible party, estimated completion date, and remediation plan. An incomplete POA&M suggests poor risk management.
Insufficient Control Evidence
Control implementation statements lack supporting evidence. Assessors need screenshots, configuration exports, policy documents, or scan results to validate each control. "We do this" is not sufficient without proof.
Stale Documentation
The SSP, boundary diagrams, or data flow diagrams do not reflect the current system. If the system has changed since the SSP was last updated, the assessment is based on inaccurate information. Maintain the SSP as a living document.
Section 09

Vulnerability Management & Scanning

Enterprise vulnerability management is a continuous process of discovering, assessing, prioritizing, and remediating security weaknesses across the entire authorization boundary. In government IT, vulnerability management is not optional — it is mandated by FISMA, FedRAMP, and binding operational directives from CISA.

Vulnerability Lifecycle

Effective vulnerability management follows a structured lifecycle. Each phase builds on the previous one, and skipping phases leads to gaps in coverage, wasted remediation effort, or unaddressed risk.

1
Discovery
Scan all assets in the authorization boundary using authenticated and unauthenticated scans. Maintain a complete asset inventory.
2
Assessment
Score findings using CVSS. Add risk context: asset criticality, exposure, exploitability, and data sensitivity.
3
Prioritization
Triage findings using risk-based criteria. CISA KEV status, active exploitation, and internet-facing exposure increase priority.
4
Remediation
Apply patches, update configurations, or implement compensating controls. Document actions in the POA&M.
5
Verification
Re-scan to confirm remediation was effective. Validate that patches did not introduce regressions or new vulnerabilities.
6
Reporting
Generate metrics and compliance evidence. Feed results into ConMon dashboards, POA&M updates, and management briefings.
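The prioritization phase can be sketched as a simple scoring rule that layers exploitation context on top of CVSS. The weights below (KEV +3, internet-facing +2, capped at 10) are illustrative only, not drawn from any standard.

```shell
# Illustrative risk-based triage score: CVSS base score plus bumps for
# KEV catalog membership and internet exposure, capped at 10.
# The weights are an assumption for demonstration, not a standard.
priority_score() {
  cvss=$1; kev=$2; facing=$3   # kev/facing: yes|no
  awk -v s="$cvss" -v k="$kev" -v f="$facing" 'BEGIN {
    if (k == "yes") s += 3
    if (f == "yes") s += 2
    if (s > 10) s = 10
    printf "%.1f\n", s
  }'
}
priority_score 6.5 yes yes   # a Medium CVSS finding rises to 10.0 when it
                             # is in the KEV catalog and internet-facing
```

The point of the exercise: a KEV-listed Medium on an internet-facing host should outrank a theoretical Critical on an isolated internal system.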

Scanning Tools

The following table compares the most widely deployed vulnerability scanning platforms in federal and enterprise environments. Tool selection depends on deployment model, scale, and compliance requirements.

Tool Deployment Best For Key Feature
Nessus Professional On-premises (single scanner) Small to mid-size environments, pen testers, individual teams Plugin-based architecture with 200,000+ checks. Highly configurable scan policies. Industry-standard vulnerability detection engine.
Tenable.sc (Security Center) On-premises (enterprise platform) Large enterprises and federal agencies requiring on-premises data residency Centralized management of multiple Nessus scanners. Advanced dashboards, compliance reporting, and role-based access. Supports air-gapped networks.
Tenable.io (Vulnerability Management) Cloud-hosted (SaaS) Organizations with hybrid/cloud infrastructure seeking SaaS deployment Cloud-native platform with container scanning, web application scanning, and cloud posture management. Continuous asset discovery.
ACAS On-premises (DoD deployment) DoD networks and systems requiring DISA-mandated scanning DoD's official Tenable deployment. ACAS (Assured Compliance Assessment Solution) includes Tenable.sc, Nessus scanners, and Nessus Network Monitor. Mandated for all DoD vulnerability scanning.

CVSS Scoring: v3.1 vs. v4.0

The Common Vulnerability Scoring System (CVSS) is the industry standard for rating vulnerability severity. CVSS v4.0 was released in November 2023 and introduces significant changes over v3.1. Both versions remain in active use during the transition period.

Metric Group Comparison:

Metric Group CVSS v3.1 CVSS v4.0
Base Metrics Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, Confidentiality/Integrity/Availability Impact Attack Vector, Attack Complexity, Attack Requirements (new), Privileges Required, User Interaction, Vulnerable System C/I/A, Subsequent System C/I/A
Temporal / Threat Exploit Code Maturity, Remediation Level, Report Confidence Renamed to Threat Metrics: Exploit Maturity (simplified)
Environmental Modified Base Metrics, Confidentiality/Integrity/Availability Requirement Modified Base Metrics, Confidentiality/Integrity/Availability Requirement (retained)
Supplemental Not present New group: Safety, Automatable, Recovery, Value Density, Vulnerability Response Effort, Provider Urgency

Key Changes in CVSS v4.0:

  • Scope metric retired — The confusing "Scope" metric from v3.1 is replaced by separate impact metrics for the Vulnerable System and Subsequent Systems, providing clearer modeling of blast radius.
  • Attack Requirements added — A new metric that captures prerequisites beyond privileges and user interaction (e.g., specific network conditions, race conditions, or deployment configurations required for exploitation).
  • Dual impact metrics — Separate C/I/A impact ratings for the vulnerable system and any subsequent systems affected, replacing the binary Scope model.
  • Supplemental metrics — New optional metrics (Safety, Automatable, Recovery, etc.) that provide additional context without affecting the numeric score.
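The structural changes are easiest to see by comparing vector strings for the same vulnerability (network-exploitable, low complexity, no privileges or user interaction required, full impact, no spread to other systems):

```text
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H                     (9.8 Critical)
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N
```

In the v4.0 vector, AT:N is the new Attack Requirements metric, VC/VI/VA rate impact on the vulnerable system, and SC/SI/SA replace the v3.1 Scope metric with explicit subsequent-system impact ratings.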

Severity Scale: The numeric scoring ranges remain the same across both versions.

Severity Score Range Remediation Priority
None 0.0 No action required
Low 0.1 – 3.9 Remediate within standard maintenance cycles (180 days)
Medium 4.0 – 6.9 Remediate within 90 days; prioritize internet-facing systems
High 7.0 – 8.9 Remediate within 30 days; escalate if actively exploited
Critical 9.0 – 10.0 Remediate within 15 days; immediate action for internet-facing or actively exploited

BOD 22-01: Known Exploited Vulnerabilities

Binding Operational Directive 22-01 (Reducing the Significant Risk of Known Exploited Vulnerabilities) is issued by CISA and applies to all Federal Civilian Executive Branch (FCEB) agencies. It establishes a mandatory requirement to remediate vulnerabilities that are actively being exploited in the wild.

The CISA KEV Catalog: CISA maintains the Known Exploited Vulnerabilities (KEV) catalog — a continuously updated list of vulnerabilities with confirmed active exploitation. When a vulnerability is added to the KEV catalog, federal agencies must remediate it within the specified timeline. The catalog is publicly available and is widely used beyond the federal government as a prioritization input.

Remediation Timelines under BOD 22-01:

Vulnerability Age Remediation Timeline Notes
CVE ID assigned in 2021 or later 2 weeks (14 calendar days) Default timeline for newly cataloged vulnerabilities. Each KEV entry carries its own due date, which governs; CISA may set shorter deadlines for especially dangerous vulnerabilities.
CVE ID assigned before 2021 6 months Applied to the backlog of older vulnerabilities included when the catalog launched in November 2021. As with newer entries, the per-entry due date in the catalog is authoritative.

Although BOD 22-01 is legally binding only for FCEB agencies, CISA strongly recommends that all organizations (state, local, tribal, territorial, and private sector) use the KEV catalog to prioritize patching. Many private-sector organizations have adopted the KEV catalog as a supplement to CVSS-based prioritization, as it reflects real-world exploitation rather than theoretical severity alone.
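Because the KEV catalog is published as a machine-readable JSON feed, per-entry due dates can be checked programmatically. A sketch using fields as published in the feed (cveID, dateAdded, dueDate) against inline sample data rather than a live download — the CVE IDs and dates below are illustrative, not real catalog entries:

```python
import json
from datetime import date

# Sample entries shaped like the KEV feed's "vulnerabilities" array
kev_feed = json.loads("""
{"vulnerabilities": [
  {"cveID": "CVE-0000-0001", "dateAdded": "2026-01-05", "dueDate": "2026-01-19"},
  {"cveID": "CVE-0000-0002", "dateAdded": "2025-11-01", "dueDate": "2025-11-15"}
]}
""")

def overdue(entries, today: date):
    """Return CVE IDs whose catalog due date has passed."""
    return [e["cveID"] for e in entries
            if date.fromisoformat(e["dueDate"]) < today]

print(overdue(kev_feed["vulnerabilities"], date(2026, 2, 12)))
# → ['CVE-0000-0001', 'CVE-0000-0002']
```

In practice the feed would be fetched from CISA's published URL on a schedule and the overdue list fed into the ticketing or POA&M workflow.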

Remediation SLAs

Federal agencies and their contractors are expected to meet defined remediation timelines based on vulnerability severity. The following SLAs represent common government standards, though specific agency policies may be more aggressive.

Severity CVSS Range Remediation SLA Escalation
Critical 9.0 – 10.0 15 days Immediate notification to ISSO/ISSM. AO briefed within 72 hours if remediation is delayed. POA&M entry required with compensating controls.
High 7.0 – 8.9 30 days ISSO tracks remediation progress weekly. Escalate to ISSM if remediation exceeds SLA. Document in POA&M with milestone dates.
Medium 4.0 – 6.9 90 days Tracked in POA&M. Included in monthly vulnerability review. Prioritize based on asset criticality and exposure.
Low 0.1 – 3.9 180 days (or accept risk) Remediate during standard maintenance windows. May be accepted as residual risk with documented justification and AO approval.

Scanning Best Practices

The quality and coverage of vulnerability scanning directly impact the accuracy of your risk picture. Poor scanning practices create a false sense of security by missing vulnerabilities that remain visible to attackers.

Authenticated vs. Unauthenticated Scans:

Scan Type How It Works Vulnerability Detection Use Case
Unauthenticated Scanner probes the target over the network without credentials, testing only externally visible services and ports Detects only network-visible vulnerabilities. Misses local privilege escalation, missing patches, misconfigurations, and installed software issues. External attack surface assessment, perimeter scanning, quick reconnaissance
Authenticated Scanner logs into the target using provided credentials (SSH, WinRM, SNMP) and examines the system from the inside Detects 3-5x more vulnerabilities than unauthenticated scans. Identifies missing patches, insecure configurations, local vulnerabilities, installed software with known CVEs, and file permission issues. Compliance scanning, internal vulnerability management, STIG/CIS compliance checks

Scan Frequency and Coverage:

  • Minimum frequency: Weekly vulnerability scans for all systems in the authorization boundary. Continuous scanning is recommended for high-value assets and internet-facing systems.
  • Scan coverage: 100% of assets within the authorization boundary must be scanned. Assets that cannot be scanned (e.g., fragile legacy systems) must be documented with compensating controls.
  • New assets: Any new system or device added to the boundary must be scanned within 24 hours of deployment.
  • Post-patch verification: Re-scan within 72 hours of applying patches to verify remediation and detect regressions.

Example Nessus Scan Automation:

# Update plugins (required for latest vulnerability checks)
nessuscli update --plugins-only

# Scan creation, launch, and export are performed through the Nessus
# REST API (default port 8834) or the web UI rather than nessuscli.
# Credentialed scan settings (SSH/WinRM) are configured in the scan policy.
# Illustrative calls — scan ID 42 and the API-key variables are placeholders:

# Launch an existing scan
curl -s -X POST "https://nessus.example.gov:8834/scans/42/launch" \
  -H "X-ApiKeys: accessKey=${ACCESS_KEY}; secretKey=${SECRET_KEY}"

# Request an export of completed results in .nessus format
curl -s -X POST "https://nessus.example.gov:8834/scans/42/export" \
  -H "X-ApiKeys: accessKey=${ACCESS_KEY}; secretKey=${SECRET_KEY}" \
  -d "format=nessus"
Federal Mandate
BOD 22-01 is not optional for federal agencies. Known Exploited Vulnerabilities in the CISA KEV catalog must be remediated within the specified timelines — no exceptions. Failure to comply results in CISA reporting to OMB and Congressional oversight. The KEV catalog currently tracks over 1,100 vulnerabilities with confirmed active exploitation, and new entries are added multiple times per week.
Section 10

Security Logging & SIEM

Security Information and Event Management (SIEM) is the backbone of compliance evidence and threat detection. Without centralized, tamper-resistant logging, organizations cannot demonstrate control effectiveness, detect adversary activity, or satisfy audit requirements. This section covers log sources, SIEM platforms, correlation strategies, and the NIST AU control family that governs audit and accountability.

Log Sources by Category

A comprehensive logging strategy must capture events from every layer of the technology stack. The following table organizes the most critical log sources by category, along with the key events each produces and the standard formats used.

Category Source Key Events Format
Operating System Windows Event Logs Logon/logoff (4624/4634), account lockout (4740), privilege use (4672/4673), policy change (4719), object access (4663), process creation (4688) EVTX (XML-based binary)
Linux syslog / journald Authentication (auth.log), sudo usage, service start/stop, kernel messages, cron execution, package management events Syslog (RFC 5424), Journal binary
Linux auditd System call auditing, file access (SYSCALL), user commands (USER_CMD), SELinux denials (AVC), authentication events (USER_AUTH/USER_LOGIN) Audit log (key=value pairs)
Network Firewall logs Allow/deny decisions, connection state tracking, NAT translations, rule hit counts, threat intelligence matches Syslog, CEF, LEEF (vendor-specific)
IDS/IPS alerts Signature matches, anomaly detections, protocol violations, exploit attempts, malware indicators, policy violations Syslog, CEF, IDMEF, Unified2
DNS query logs Query/response records, NXDOMAIN responses, DNS tunneling indicators, domain generation algorithm (DGA) patterns, sinkhole hits Syslog, dnstap, PCAP
NetFlow / IPFIX Traffic flow metadata (src/dst IP, port, protocol, bytes, packets, duration), bandwidth utilization, connection patterns NetFlow v5/v9, IPFIX, sFlow
Application Web server access/error logs HTTP requests (method, URI, status, user-agent), error conditions (5xx), authentication failures, WAF blocks, SSL/TLS handshake errors CLF, Combined Log Format, JSON
Database audit logs Query execution, schema changes (DDL), data modifications (DML), privilege grants/revokes, login attempts, stored procedure execution Vendor-specific (pgaudit, SQL Server Audit, Oracle Audit Vault)
Custom application logs Business logic events, user actions, API calls, error/exception traces, session management, data access patterns JSON structured, Syslog, custom
Cloud AWS CloudTrail API calls across all AWS services, console sign-in events, S3 data events, Lambda invocations, IAM policy changes, KMS key usage JSON (CloudTrail event format)
Azure Activity Log Subscription-level operations, resource creation/modification/deletion, RBAC changes, service health events, Entra ID sign-in and audit logs JSON (Azure Diagnostic format)
GCP Audit Logs Admin activity, data access, system events, policy denied events, IAM policy changes, resource metadata changes JSON (Cloud Logging format)
Identity Authentication logs Successful/failed logins, MFA challenges, password resets, token issuance, SSO assertions, session creation/termination Syslog, JSON, SAML assertions
Directory service events User/group creation and modification, OU changes, GPO modifications, replication events, LDAP bind attempts, Kerberos ticket operations Windows Event Log, Azure AD logs
Endpoint EDR telemetry Process creation trees, file modifications, registry changes, network connections, loaded modules, script execution, injection attempts Vendor-specific (CrowdStrike JSON, Defender ATP, Carbon Black)
Antivirus / antimalware events Malware detection, quarantine actions, scan results, signature update status, real-time protection events, behavioral detections Syslog, CEF, Windows Event Log

SIEM Platform Comparison

The SIEM market includes both commercial and open-source platforms. Selection depends on organizational size, budget, existing infrastructure, cloud strategy, and query complexity requirements. The following comparison covers the platforms most commonly encountered in federal and enterprise environments.

Platform Deployment Query Language Strength Best For
Splunk Enterprise Security On-prem, cloud (Splunk Cloud), hybrid SPL (Search Processing Language) Market leader with deepest ecosystem. Extensive app marketplace, mature correlation engine, robust dashboarding. Handles massive data volumes with distributed architecture. Large enterprises and federal agencies with complex environments, high ingest volumes, and need for extensive third-party integrations
Elastic SIEM (ELK Stack) Self-managed, Elastic Cloud, hybrid KQL (Kibana Query Language), EQL (Event Query Language), ES|QL Open-source core (Elasticsearch, Logstash, Kibana). Flexible data model, strong full-text search, detection rules aligned with MITRE ATT&CK. No per-GB licensing for self-managed. Organizations seeking cost control, customization, and open-source flexibility. Strong in cloud-native and container environments.
Microsoft Sentinel Cloud-native (Azure only) KQL (Kusto Query Language) Native integration with Microsoft 365, Entra ID, Defender suite, and Azure services. Consumption-based pricing (pay per GB ingested). Built-in SOAR via Logic Apps. AI-powered Fusion detection. Microsoft-centric organizations, Azure-first cloud strategies, agencies using M365 GCC/GCC High
Google Chronicle (SecOps) Cloud-native (Google Cloud) YARA-L 2.0 (detection), UDM Search Google-scale data lake with fixed-cost pricing regardless of ingest volume. 12-month hot retention by default. Unified Data Model (UDM) normalizes all sources. Strong threat intelligence integration via Mandiant/VirusTotal. Organizations with massive log volumes seeking predictable costs, Google Cloud shops, and teams wanting integrated threat intelligence
IBM QRadar On-prem, SaaS (QRadar on Cloud), hybrid AQL (Ariel Query Language) Legacy enterprise SIEM with strong compliance reporting. Mature offense management and flow analysis. Extensive protocol support for OT/ICS environments. Recently transitioning to QRadar Suite on Cloud Pak for Security. Established enterprises with existing IBM investments, OT/ICS environments, organizations needing strong out-of-box compliance reporting

Log Formats

Standardized log formats enable interoperability between disparate systems and SIEM platforms. The three most common formats in enterprise and federal environments are Syslog, CEF, and JSON structured logging.

Format Standard Characteristics When to Use
Syslog RFC 3164 (BSD), RFC 5424 (IETF) Text-based, priority/facility/severity fields, timestamp, hostname, message. RFC 5424 adds structured data elements, UTF-8 support, and millisecond timestamps. Transported via UDP (514) or TCP/TLS (6514 per RFC 5425). Network devices, Linux/Unix systems, legacy infrastructure. Universal support across virtually all platforms. Use RFC 5424 for new deployments.
CEF Common Event Format (ArcSight/OpenText) Pipe-delimited header with key=value extension pairs. Fixed header fields: Version, Device Vendor, Device Product, Device Version, Signature ID, Name, Severity. Extensions provide flexible additional context. Security devices (firewalls, IDS/IPS, WAF), SIEM integrations. Well-suited for security events with severity context. Widely supported by commercial security products.
JSON Structured No single standard; ECS (Elastic Common Schema), OCSF (Open Cybersecurity Schema Framework) emerging Self-describing key-value format. Supports nested objects, arrays, and typed values. Human-readable and machine-parseable. Schema flexibility allows rich contextual data without fixed field limits. Cloud-native applications, microservices, API-driven systems, modern SIEM ingestion. Preferred for new application development. Enables powerful structured queries without regex parsing overhead.
# Syslog RFC 5424 example
<134>1 2026-02-12T09:15:32.123Z webserver01 nginx 12345 - -
  10.0.1.50 - admin [12/Feb/2026:09:15:32 +0000] "POST /api/login HTTP/1.1" 401 287

# CEF example
CEF:0|Palo Alto|PAN-OS|11.1|THREAT|url-filtering|7|
  src=10.0.1.50 dst=203.0.113.42 dpt=443 act=block-url
  cs1=malware cs1Label=Category msg=Blocked malicious URL

# JSON structured example
{
  "timestamp": "2026-02-12T09:15:32.123Z",
  "level": "WARN",
  "event": "authentication_failure",
  "source_ip": "10.0.1.50",
  "user": "admin",
  "reason": "invalid_password",
  "attempt_count": 4
}
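For applications emitting JSON structured logs like the example above, the pattern is straightforward to implement with only the standard library. A minimal formatter sketch — the field names mirror the example; adapt them to your schema (ECS, OCSF, or custom):

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line with an ISO 8601 timestamp."""
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "event": record.getMessage(),
        }
        # Merge structured context attached via the `extra` argument
        entry.update(getattr(record, "context", {}))
        return json.dumps(entry)

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

logger.warning("authentication_failure",
               extra={"context": {"source_ip": "10.0.1.50", "user": "admin"}})
```

Emitting one JSON object per line lets the SIEM ingest the stream without regex parsing, which is the main operational advantage noted in the table.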

NIST AU (Audit and Accountability) Control Family

The AU family in NIST SP 800-53 Rev. 5 defines the requirements for event logging, audit record content, review processes, and protection of audit information. These controls form the compliance foundation for all logging and SIEM activities. The following table highlights the most critical AU controls with implementation guidance.

Control Name Requirement Implementation Example
AU-2 Event Logging Identify the types of events the system must log to support the audit function. Coordinate the event logging list with other organizational entities. Review and update the list of auditable events on an organization-defined frequency. Define auditable events: authentication success/failure, privilege escalation, object access, policy changes, system start/stop. Review list annually and after significant changes. Document in SSP Appendix.
AU-3 Content of Audit Records Ensure audit records contain sufficient information to establish what type of event occurred, when it occurred, where it occurred, the source of the event, the outcome, and the identity of individuals or subjects associated with the event. Every log entry must include: timestamp (ISO 8601), event type/ID, source (hostname/IP), user identity, outcome (success/fail), and affected resource. Enrich with session ID and correlation ID where applicable.
AU-6 Audit Record Review, Analysis, and Reporting Review and analyze audit records on an organization-defined frequency for indications of inappropriate or unusual activity. Report findings to designated organizational officials. Integrate audit review with incident response processes. Configure SIEM dashboards for daily analyst review. Set automated correlation rules for high-priority events. Generate weekly summary reports for ISSO. Escalate anomalies within 1 hour. Document review in ConMon artifacts.
AU-9 Protection of Audit Information Protect audit information and audit logging tools from unauthorized access, modification, and deletion. Alert personnel in the event of an audit logging process failure. Store logs in append-only storage (WORM). Restrict SIEM admin access to dedicated accounts with MFA. Ship logs to a separate security boundary in real-time. Hash log files for integrity verification. Alert on log forwarding failure within 5 minutes.
AU-12 Audit Record Generation Provide audit record generation capability for the event types defined in AU-2 at all system components. Allow designated organizational personnel to select which events are auditable by specific components. Generate audit records with the content defined in AU-3. Enable auditd on all Linux hosts. Configure Windows Advanced Audit Policy via GPO. Enable CloudTrail in all AWS regions. Validate log generation via automated compliance scans (OpenSCAP, InSpec). Test annually during assessment.
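The AU-2/AU-12 guidance above (define auditable events, enable auditd) is ultimately expressed as an audit rules file. A sketch with illustrative watch paths and keys — the specific rules must be tailored to the auditable-event list documented in your SSP:

```shell
# Write an illustrative auditd rules file. In production, place the file
# under /etc/audit/rules.d/ and load it with `augenrules --load`.
cat > /tmp/au12-demo.rules <<'EOF'
# AU-2: watch identity files for writes/attribute changes (key: identity)
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
# AU-2: record privileged command execution via execve (key: priv-exec)
-a always,exit -F arch=b64 -S execve -F euid=0 -k priv-exec
# AU-2: watch sudoers for privilege-policy changes (key: privilege)
-w /etc/sudoers -p wa -k privilege
EOF

cat /tmp/au12-demo.rules
```

The `-k` keys tie generated records back to defined event categories, which makes AU-6 review queries (`ausearch -k identity`) and assessor evidence pulls straightforward.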

Correlation Rules — Common Use Cases

Correlation rules are the intelligence layer of a SIEM. They combine events from multiple sources to detect patterns that individual log entries cannot reveal. The following are foundational correlation rules that every federal SIEM deployment should implement.

Use Case Detection Logic Log Sources Required Response Action
Brute Force Detection More than N failed authentication attempts from the same source IP or targeting the same account within a defined time window (e.g., >5 failures in 10 minutes) Authentication logs, Windows Event 4625, Linux auth.log, VPN logs, cloud identity provider logs Alert SOC, temporarily block source IP at firewall, force account lockout per AC-7 policy, create incident ticket
Impossible Travel Successful logins from geographically distant locations within a timeframe that makes physical travel impossible (e.g., New York and London within 30 minutes) Authentication logs with GeoIP enrichment, VPN session logs, cloud sign-in logs (Azure AD, Okta) Flag as high-priority alert, require re-authentication with MFA step-up, notify user, investigate for credential compromise
Privilege Escalation Account receives elevated privileges (group membership change, role assignment, sudo grant) followed by unusual administrative actions within a short time window Directory service logs (AD Event 4728/4732/4756), IAM audit logs, sudo/su logs, cloud IAM change logs Alert ISSO immediately, verify change was authorized via change management ticket, roll back if unauthorized, trigger IR workflow
Lateral Movement Single account authenticating to an unusual number of systems within a short period, or sequential authentication across systems following a pattern inconsistent with normal behavior Windows Event 4624 (Type 3/10), SSH auth logs, RDP session logs, network flow data, EDR telemetry Isolate affected endpoints, disable compromised account, capture forensic images, escalate to IR team, preserve network flow data
Data Exfiltration Indicators Abnormal outbound data volume from a host or user, large file transfers to external destinations, unusual DNS query volumes (potential DNS tunneling), or connections to known file-sharing services NetFlow/IPFIX, proxy/web gateway logs, DLP alerts, DNS query logs, firewall logs, cloud storage access logs Block suspicious egress traffic, quarantine endpoint, preserve evidence, notify data owner, assess potential data breach per IR plan and breach notification requirements
After-Hours Access Successful authentication or sensitive resource access outside of defined business hours or from locations inconsistent with the user’s assigned duty station, particularly to high-value assets Authentication logs, badge/physical access logs, VPN session logs, database audit logs, file server access logs Alert on-call analyst, verify with user via out-of-band communication, document justification, review for patterns indicating insider threat
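The brute-force rule in the first row reduces to a sliding-window count, which is how most SIEM correlation engines evaluate it internally. A simplified Python sketch of that logic, using the threshold and window from the example (>5 failures in 10 minutes); the event shape and class name are hypothetical:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 600   # 10-minute window
THRESHOLD = 5          # alert on more than 5 failures

class BruteForceDetector:
    """Track failed-auth timestamps per source IP in a sliding window."""
    def __init__(self):
        self.failures = defaultdict(deque)

    def record_failure(self, source_ip: str, ts: float) -> bool:
        """Record one failed login; return True when the rule fires."""
        window = self.failures[source_ip]
        window.append(ts)
        # Evict events that have aged out of the window
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > THRESHOLD

detector = BruteForceDetector()
# Seven failures, 10 seconds apart — the 6th and 7th exceed the threshold
alerts = [detector.record_failure("10.0.1.50", t) for t in range(0, 70, 10)]
print(alerts)  # → [False, False, False, False, False, True, True]
```

Production rules add the per-account dimension, GeoIP enrichment, and suppression logic, but the eviction-then-count structure is the same.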

Log Retention Requirements by Framework

Retention requirements vary significantly by compliance framework. Organizations subject to multiple frameworks must meet the most stringent applicable requirement. Retention policies must address both the total retention period and the accessibility window (how quickly logs must be retrievable for analysis).

Framework Minimum Retention Accessibility Requirement Notes
NIST 800-53 Organization-defined Organization-defined AU-11 requires retention sufficient to support after-the-fact investigation. Organizations define the period in the SSP based on risk assessment and legal requirements. Typical federal implementations specify 1–3 years.
FedRAMP Minimum 1 year total 90 days readily accessible (online/hot storage) Remaining 9 months may be in cold/archive storage but must be retrievable within a reasonable timeframe. FedRAMP 20x may adjust these requirements. ConMon reporting requires access to at least 90 days of searchable data.
FISMA Varies by agency; typically 3 years Agency-defined NARA (National Archives and Records Administration) general records schedules govern federal record retention. Many agencies set 3-year retention per GRS 3.2, with sensitive systems requiring longer periods. Check agency-specific policy.
DoD (DISA) Minimum 1 year online, 5 years total 1 year online (searchable) Per DISA STIG requirements and DoD 8140/8570 guidance. Classified systems may require indefinite retention. Logs supporting active investigations must be preserved regardless of retention schedule until the investigation concludes.
Compliance Evidence
Audit logs are your compliance evidence. Without centralized, tamper-resistant logging, you cannot demonstrate control effectiveness during assessments. Assessors will verify that logs are being generated (AU-12), contain the required content (AU-3), are reviewed regularly (AU-6), and are protected from unauthorized modification (AU-9). A SIEM that is deployed but not actively monitored is a finding, not a control. Ensure your logging pipeline is tested end-to-end: from event generation to SIEM ingestion to analyst dashboard to alert notification.
Section 11

Compliance Meets DevSecOps

Traditional compliance processes—manual documentation, periodic audits, snapshot-in-time assessments—cannot keep pace with modern continuous delivery pipelines that deploy dozens of changes per day. DevSecOps integrates security and compliance requirements directly into the development lifecycle, shifting from reactive auditing to proactive, automated enforcement. Compliance-as-Code turns security controls into executable tests that run with every build.

Why Compliance Must Shift Left

In a traditional model, compliance is a gate at the end of the development lifecycle. Security assessments happen months after code is written, findings require expensive rework, and authorization delays block deployment. DevSecOps inverts this model by embedding compliance checks throughout the pipeline, catching issues when they are cheapest to fix and generating evidence continuously rather than in periodic bursts.

Aspect Traditional Compliance DevSecOps Compliance
Assessment Timing Point-in-time (annual or triennial) Continuous (every commit, every build)
Evidence Generation Manual screenshots, spreadsheets, interviews Automated test results, pipeline logs, compliance dashboards
Finding Discovery Months after deployment Seconds after code change
Remediation Cost High (rework deployed systems) Low (fix before merge)
Documentation Static Word documents, manual updates Machine-readable OSCAL, auto-generated from code
Authorization Speed 6–18 months for initial ATO Continuous authorization with automated evidence

Compliance-as-Code Tools

These tools encode security requirements as executable code, enabling automated compliance checking, continuous hardening validation, and machine-readable policy enforcement.

OpenSCAP
Open-source implementation of the SCAP (Security Content Automation Protocol) standard. Evaluates systems against STIG and CIS profiles using XCCDF benchmarks and OVAL definitions. Supports RHEL, Ubuntu, Windows, and more. Produces machine-readable ARF (Asset Reporting Format) results that can be ingested by compliance dashboards.
Chef InSpec
Infrastructure testing framework with a human-readable DSL for writing compliance profiles. Tests can verify configurations, file permissions, services, packages, and arbitrary system state. MITRE SAF (Security Automation Framework) publishes InSpec profiles for DISA STIGs covering dozens of platforms. Results map directly to NIST 800-53 controls.
Ansible STIG Roles
Automated hardening playbooks that apply STIG and CIS configurations to target systems. The ansible-lockdown project (MindPoint Group) provides maintained roles for RHEL 7/8/9, Ubuntu, Windows Server, and more. Roles are idempotent and can be run repeatedly to enforce drift correction. Each task maps to a specific STIG finding ID.
OSCAL
NIST’s Open Security Controls Assessment Language. A set of machine-readable formats (JSON, XML, YAML) for representing catalogs, profiles, system security plans, assessment plans, assessment results, and POA&Ms. Enables automated processing of compliance documentation and forms the foundation for FedRAMP’s digital authorization initiative.
# Chef InSpec — test SSH configuration against STIG requirements
describe sshd_config do
  its('Protocol') { should cmp 2 }
  its('PermitRootLogin') { should eq 'no' }
  its('MaxAuthTries') { should cmp <= 4 }
  its('ClientAliveInterval') { should cmp <= 600 }
  its('ClientAliveCountMax') { should cmp 0 }
  its('PermitEmptyPasswords') { should eq 'no' }
end

# Run InSpec against a remote target
inspec exec rhel9-stig-baseline \
  -t ssh://admin@10.0.1.50 \
  --reporter cli json:results.json
# Ansible STIG role — automated hardening
- hosts: servers
  become: true
  roles:
    - role: ansible-lockdown.rhel9-stig
      rhel9stig_cat1_patch: true
      rhel9stig_cat2_patch: true
      rhel9stig_cat3_patch: true
      rhel9stig_gui: false

# OpenSCAP — scan against DISA STIG profile
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_stig \
  --results stig-results.xml \
  --report stig-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml

Pipeline Security Gates

A mature DevSecOps pipeline enforces security and compliance checks at every stage. Each gate acts as a quality checkpoint—failures block progression to the next stage, ensuring that non-compliant code never reaches production.

Stage Gate Tool Examples Compliance Mapping
Pre-Commit Secrets scanning — detect hardcoded credentials, API keys, tokens, and private keys before they enter version control git-secrets, detect-secrets, truffleHog, Gitleaks IA-5 (Authenticator Management), SC-28 (Protection of Information at Rest), SA-11 (Developer Testing)
Build Static Application Security Testing (SAST) — analyze source code for vulnerabilities without executing the application SonarQube, Semgrep, Checkmarx, Fortify, CodeQL SA-11 (Developer Testing), SI-10 (Information Input Validation), SA-15 (Development Process)
Build Dependency scanning — identify known vulnerabilities (CVEs) in third-party libraries and open-source components Dependabot, Snyk, OWASP Dependency-Check, Grype SA-12 (Supply Chain Protection), RA-5 (Vulnerability Monitoring), CM-8 (System Component Inventory)
Build Container image scanning — inspect container images for OS-level and application-level vulnerabilities, misconfigurations, and compliance violations Trivy, Anchore, Prisma Cloud (Twistlock), Clair CM-6 (Configuration Settings), SI-2 (Flaw Remediation), RA-5 (Vulnerability Monitoring)
Deploy STIG/CIS compliance scan — validate that infrastructure configurations meet hardening baselines before deployment to production OpenSCAP, Chef InSpec, Ansible STIG roles, CIS-CAT Pro CM-6 (Configuration Settings), CM-2 (Baseline Configuration), SA-10 (Developer Configuration Management)
Deploy Authorization boundary validation — verify that deployments stay within the approved system boundary and do not introduce unauthorized external connections Policy-as-Code (OPA/Rego), Terraform Sentinel, AWS Config Rules, Azure Policy CA-3 (Information Exchange), SC-7 (Boundary Protection), CM-3 (Configuration Change Control)
Runtime Dynamic Application Security Testing (DAST) — test running applications for vulnerabilities by simulating real-world attacks OWASP ZAP, Burp Suite Enterprise, Nuclei, Nikto RA-5 (Vulnerability Monitoring), SA-11 (Developer Testing), SI-6 (Security & Privacy Function Verification)
Runtime Continuous monitoring & log forwarding — ongoing security posture assessment and centralized log collection SIEM integration, Prometheus/Grafana, CloudWatch, Datadog, Wazuh CA-7 (Continuous Monitoring), AU-2/AU-6 (Audit Events and Review), SI-4 (System Monitoring)
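Stitched together, the build-stage gates above might look like the following GitLab CI sketch. This is illustrative only — job names, container images, and thresholds are assumptions, not a reference pipeline, and `$IMAGE_TAG` is a placeholder variable:

```yaml
stages: [build, scan, deploy]

sast:
  stage: scan
  image: semgrep/semgrep            # illustrative image choice
  script:
    - semgrep scan --config auto --error .   # non-zero exit blocks the pipeline

dependency_scan:
  stage: scan
  image: anchore/grype
  script:
    - grype dir:. --fail-on high    # fail the gate on High/Critical CVEs

container_scan:
  stage: scan
  image: aquasec/trivy
  script:
    - trivy image --severity CRITICAL,HIGH --exit-code 1 "$IMAGE_TAG"
```

Because each scanner returns a non-zero exit code on policy violation, the CI system enforces the gate with no additional glue logic.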

Automated Evidence Collection

The goal of compliance-as-code is not just to automate checks but to automatically generate the evidence artifacts that assessors need. Every pipeline run, every compliance scan, and every monitoring event becomes a timestamped, immutable record of control effectiveness.

  • Continuous compliance dashboards: Aggregate results from all security gates into a single view. Track compliance posture over time with trend lines showing improvement or regression. Map dashboard panels to specific NIST 800-53 control families (e.g., RA-5 compliance score, CM-6 drift percentage).
  • Automated control testing → compliance artifacts: Every InSpec or OpenSCAP scan produces machine-readable results (JSON, ARF/XML) that directly map to control assessment findings. These outputs replace manual evidence spreadsheets and can be versioned in Git alongside the code they validate.
  • OSCAL integration for machine-readable SSPs: Convert traditional Word-document SSPs into OSCAL-formatted JSON/XML. Control implementations reference automated test results, creating a living SSP that updates as the system evolves. Tools like the NIST OSCAL Reference Toolchain and Trestle (by IBM) enable this workflow.
  • Pipeline-generated POA&M entries: When a security gate fails, the pipeline automatically creates or updates a POA&M entry with the finding details, affected components, severity, and a remediation deadline based on organizational policy. This eliminates the lag between finding discovery and documentation.
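The last bullet — turning gate failures into POA&M entries — is mostly data plumbing. A sketch using a hypothetical finding shape and the severity-based SLA days from the remediation table earlier in this guide (all field names here are illustrative, not an OSCAL or agency schema):

```python
import csv
from datetime import date, timedelta

# SLA days by severity, mirroring the remediation SLA table in Section 09
SLA_DAYS = {"critical": 15, "high": 30, "medium": 90, "low": 180}

def poam_rows(findings, discovered: date):
    """Convert raw gate findings into POA&M-style rows with deadlines."""
    for f in findings:
        deadline = discovered + timedelta(days=SLA_DAYS[f["severity"]])
        yield {
            "finding_id": f["id"],
            "component": f["component"],
            "severity": f["severity"],
            "scheduled_completion": deadline.isoformat(),
        }

findings = [{"id": "TRIVY-001", "component": "app:v2.1.0", "severity": "high"}]
with open("/tmp/poam.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=[
        "finding_id", "component", "severity", "scheduled_completion"])
    writer.writeheader()
    writer.writerows(poam_rows(findings, date(2026, 2, 12)))
```

In a real pipeline the same rows would be pushed to the GRC tool or emitted as an OSCAL POA&M document rather than a CSV, but the severity-to-deadline mapping is the core of the automation.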
OSCAL Deadline
The September 2026 OSCAL deadline means compliance-as-code is no longer optional for FedRAMP-authorized services. FedRAMP 20x requires machine-readable OSCAL-formatted SSPs, SAPs, SARs, and POA&Ms. Organizations that have not begun their OSCAL transition should start immediately—the conversion from traditional documents to OSCAL is non-trivial and requires tooling investment, staff training, and process redesign.

Container Security for Compliance

Containers introduce unique compliance challenges. Traditional STIG and CIS benchmarks assume persistent, mutable infrastructure. Container environments require adapted approaches that address image provenance, runtime isolation, orchestrator security, and software supply chain integrity.

Domain Requirement Tools & Standards Compliance Mapping
Base Image Hardening Use minimal, hardened base images. Remove unnecessary packages, shells, and utilities. Apply CIS Docker Benchmark and DISA Container STIG requirements. Pin image versions with digest hashes. CIS Docker Benchmark, DISA Container Platform STIG, Chainguard Images, Google Distroless, Iron Bank (DoD hardened registry) CM-6 (Configuration Settings), CM-7 (Least Functionality), SC-2 (Separation of System and User Functionality)
Image Scanning Scan all container images for known vulnerabilities before deployment. Block images with Critical/High CVEs from reaching production. Re-scan running images on a regular cadence to detect newly disclosed vulnerabilities. Trivy, Anchore Enterprise, Prisma Cloud (Twistlock), Clair, Snyk Container, AWS ECR scanning, Azure Defender for Containers RA-5 (Vulnerability Monitoring), SI-2 (Flaw Remediation), SA-11 (Developer Testing)
Runtime Protection Enforce read-only root filesystems, drop all unnecessary Linux capabilities, run as non-root, apply seccomp and AppArmor/SELinux profiles, restrict network policies, monitor for anomalous behavior. Falco, Sysdig Secure, Aqua Security, PodSecurity Admission (Kubernetes), RuntimeClass, seccomp profiles AC-6 (Least Privilege), SC-7 (Boundary Protection), SI-4 (System Monitoring), SC-39 (Process Isolation)
Supply Chain (SBOM) Generate Software Bill of Materials (SBOM) for every container image. Sign images with cryptographic signatures. Verify signatures before deployment. Track component provenance through the build pipeline. Syft (SBOM generation), Cosign/Sigstore (image signing), SPDX and CycloneDX formats, in-toto (supply chain attestations), SLSA framework SR-3 (Supply Chain Controls and Processes), CM-8 (System Component Inventory), SR-4 (Provenance), SA-10 (Developer Configuration Management)
# Trivy — scan container image for vulnerabilities and misconfigurations
trivy image \
  --severity CRITICAL,HIGH \
  --exit-code 1 \
  --format json \
  --output scan-results.json \
  registry.example.gov/app:v2.1.0

# Syft — generate SBOM in CycloneDX format
syft registry.example.gov/app:v2.1.0 \
  -o cyclonedx-json=sbom.cdx.json

# Cosign — sign and verify container images
cosign sign --key cosign.key registry.example.gov/app:v2.1.0
cosign verify --key cosign.pub registry.example.gov/app:v2.1.0
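The Trivy command above writes its findings to scan-results.json; a pipeline gate can then parse that file and fail the build when findings at or above a threshold are present. The Results/Vulnerabilities/Severity keys below reflect Trivy's JSON report layout in recent releases, but treat the exact schema as an assumption and check it against your Trivy version.

```python
import json

BLOCKING = {"CRITICAL", "HIGH"}

def gate(report: dict, blocking=BLOCKING) -> list:
    """Return blocking findings from a Trivy JSON report.

    Assumes the shape {"Results": [{"Vulnerabilities": [...]}]},
    which matches recent Trivy releases; verify against your version.
    """
    findings = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in blocking:
                findings.append((vuln.get("VulnerabilityID"), vuln.get("Severity")))
    return findings

# Synthetic report standing in for scan-results.json
report = {
    "Results": [
        {"Vulnerabilities": [
            {"VulnerabilityID": "CVE-2024-0001", "Severity": "HIGH"},
            {"VulnerabilityID": "CVE-2024-0002", "Severity": "LOW"},
        ]}
    ]
}
blocked = gate(report)
print(blocked)  # [('CVE-2024-0001', 'HIGH')]
```

In practice the same result can be had with Trivy's own --exit-code flag, as shown above; a parser like this is useful when you also need to feed the findings into POA&M tracking or ticketing.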
Section 12

Frameworks Crosswalk & Resources

No framework exists in isolation. Organizations subject to multiple compliance regimes need to understand how controls map across standards, where reciprocity exists, and where to find authoritative resources. This section provides a comprehensive crosswalk matrix, CMMC overview, framework reciprocity guidance, certification paths, and a master glossary of every acronym used throughout this guide.

Control Mapping Matrix

The following matrix maps key security domains across the major compliance frameworks. Use this crosswalk to identify equivalent controls when demonstrating compliance with multiple standards simultaneously. Satisfying the most stringent requirement in each row typically covers all others.

Security Domain NIST 800-53 CSF 2.0 Function CIS Control ISO 27001 Annex A CMMC Level
Access Control AC-1 through AC-25 PR.AA (Identity Management, Authentication, and Access Control) CIS 5 (Account Management), CIS 6 (Access Control Management) A.5.15–A.5.18 (Access Control), A.8.2–A.8.5 (Authentication) L1: AC.L1-3.1.1 (Authorized Access); L2: All 800-171 AC controls
Audit & Accountability AU-1 through AU-16 DE.CM (Continuous Monitoring), DE.AE (Adverse Event Analysis) CIS 8 (Audit Log Management) A.8.15 (Logging), A.8.16 (Monitoring Activities) L2: AU.L2-3.3.1 through AU.L2-3.3.9
Configuration Management CM-1 through CM-14 PR.PS (Platform Security) CIS 4 (Secure Configuration), CIS 2 (Software Asset Inventory) A.8.9 (Configuration Management), A.8.19 (Software Installation) L2: CM.L2-3.4.1 through CM.L2-3.4.9
Incident Response IR-1 through IR-10 RS (Respond) — RS.MA (Incident Management), RS.AN (Incident Analysis) CIS 17 (Incident Response Management) A.5.24–A.5.28 (Information Security Incident Management) L2: IR.L2-3.6.1 through IR.L2-3.6.3
Risk Assessment RA-1 through RA-10 GV.RM (Risk Management Strategy), ID.RA (Risk Assessment) CIS 7 (Continuous Vulnerability Management) A.5.7 (Threat Intelligence), A.8.8 (Vulnerability Management) L2: RA.L2-3.11.1 through RA.L2-3.11.3
System & Communications Protection SC-1 through SC-51 PR.DS (Data Security), PR.PS (Platform Security) CIS 3 (Data Protection), CIS 12 (Network Infrastructure Management) A.5.14 (Information Transfer), A.8.20–A.8.24 (Network Security) L2: SC.L2-3.13.1 through SC.L2-3.13.16
System & Information Integrity SI-1 through SI-23 DE.CM (Continuous Monitoring), PR.PS (Platform Security) CIS 7 (Continuous Vulnerability Management), CIS 10 (Malware Defenses) A.8.7 (Malware Protection), A.8.8 (Vulnerability Management) L2: SI.L2-3.14.1 through SI.L2-3.14.7
Personnel Security PS-1 through PS-9 GV.RR (Roles, Responsibilities, Authorities) CIS 14 (Security Awareness Training) A.6.1–A.6.8 (People Controls) L2: PS.L2-3.9.1, PS.L2-3.9.2
Physical & Environmental Protection PE-1 through PE-23 PR.AA (Access Control — physical context) CIS 1 (Enterprise Asset Inventory — physical) A.7.1–A.7.14 (Physical Controls) L2: PE.L2-3.10.1 through PE.L2-3.10.6
Security Assessment & Authorization CA-1 through CA-9 GV.OC (Organizational Context), ID.IM (Improvement) CIS 18 (Penetration Testing) A.5.35–A.5.36 (Compliance Review), A.5.29 (Business Continuity) L2: CA.L2-3.12.1 through CA.L2-3.12.4
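A crosswalk like the matrix above lends itself to a simple lookup table, so that evidence tagged with one framework's identifiers can be surfaced under another's. The sketch below transcribes two rows from the matrix; a real mapping would cover all rows and likely be driven from a maintained data file rather than hard-coded constants.

```python
# Crosswalk entries transcribed from two rows of the matrix above.
CROSSWALK = {
    "Access Control": {
        "800-53": "AC-1 through AC-25",
        "CSF 2.0": "PR.AA",
        "CIS": ["CIS 5", "CIS 6"],
        "ISO 27001": "A.5.15-A.5.18, A.8.2-A.8.5",
    },
    "Incident Response": {
        "800-53": "IR-1 through IR-10",
        "CSF 2.0": "RS.MA, RS.AN",
        "CIS": ["CIS 17"],
        "ISO 27001": "A.5.24-A.5.28",
    },
}

def equivalents(domain: str) -> dict:
    """Return the framework mappings for a security domain (KeyError if unknown)."""
    return CROSSWALK[domain]

print(equivalents("Incident Response")["CSF 2.0"])  # RS.MA, RS.AN
```

Tagging assessment evidence with a domain key like this lets one artifact satisfy auditors working from 800-53, CSF, CIS, or ISO 27001 checklists.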

CMMC 2.0 Overview

The Cybersecurity Maturity Model Certification (CMMC) 2.0 is the DoD’s mechanism for verifying that defense contractors adequately protect Federal Contract Information (FCI) and Controlled Unclassified Information (CUI). CMMC streamlines the original 5-level model down to 3 levels and aligns directly with existing NIST standards.

Level Name Practices Assessment Type Standard Alignment Who Needs It
Level 1 Foundational 15 practices Annual self-assessment; results affirmed by senior company official and posted to SPRS (Supplier Performance Risk System) Subset of NIST 800-171 (basic safeguarding per FAR 52.204-21) All DoD contractors handling FCI. Estimated 140,000+ companies.
Level 2 Advanced 110 practices Third-party assessment by a C3PAO (CMMC Third-Party Assessment Organization) for critical programs; self-assessment allowed for select non-critical programs Full NIST SP 800-171 Rev. 2 (110 security requirements across 14 families) Contractors handling CUI on prioritized programs. Estimated 40,000+ companies.
Level 3 Expert 134 practices (110 from 800-171 + 24 selected 800-172 enhanced requirements) Government-led assessment by DIBCAC (Defense Industrial Base Cybersecurity Assessment Center) NIST SP 800-171 + SP 800-172 (Enhanced Security Requirements) Contractors on the highest-priority programs handling the most sensitive CUI. Small subset of DIB.
CMMC Rollout Timeline
CMMC 2.0 final rule (32 CFR Part 170) was published December 2024. Phase 1 began in 2025 with self-assessment requirements appearing in new DoD contracts. Phase 2 introduces C3PAO assessment requirements for Level 2 in applicable contracts. Phase 3 (2026–2027) extends the requirement to all applicable contracts and option periods. Contractors should begin preparation now—achieving Level 2 readiness typically requires 12–18 months of effort.

Framework Reciprocity

Reciprocity reduces duplicative effort by allowing compliance evidence from one framework to satisfy requirements in another. Understanding these reciprocity relationships is critical for organizations operating across multiple compliance regimes.

Reciprocity Relationship How It Works Practical Benefit
FedRAMP → GovRAMP (formerly StateRAMP) GovRAMP (rebranded from StateRAMP in 2024) accepts FedRAMP authorization as meeting its requirements. A FedRAMP-authorized CSP can achieve GovRAMP verification through a streamlined review rather than a full reassessment. CSPs with FedRAMP authorization can serve state and local government customers without duplicating the full assessment process. Saves 3–6 months and significant assessment costs.
FedRAMP “Presumption of Adequacy” The FedRAMP Authorization Act (2022) established that a FedRAMP authorization is presumed adequate for use across all federal agencies. Agencies may not impose additional requirements beyond FedRAMP baselines without a risk-based justification documented and approved by the agency CIO. CSPs authorized once through FedRAMP can be reused by any federal agency without undergoing a separate agency-level assessment. Dramatically reduces time-to-ATO for agency customers.
CMMC ↔ NIST 800-171 CMMC Level 2 maps directly to the 110 security requirements in NIST SP 800-171 Rev. 2. Organizations that have fully implemented 800-171 (with a perfect SPRS score of 110) have substantially met CMMC Level 2 requirements, though the assessment methodology differs. Organizations already compliant with 800-171 have a clear path to CMMC Level 2. The gap is primarily in assessment readiness (documented evidence, artifact maturity) rather than control implementation.
SOC 2 ↔ Federal Frameworks SOC 2 Trust Services Criteria overlap significantly with NIST 800-53 controls. While SOC 2 is not a substitute for FedRAMP or FISMA compliance, organizations with mature SOC 2 programs have already implemented many of the technical and operational controls required by federal frameworks. Mapping SOC 2 evidence to NIST 800-53 controls can accelerate FedRAMP preparation by 30–40%. Many SOC 2 audit artifacts (access reviews, change management records, incident response evidence) directly satisfy federal assessment requirements with minimal adaptation.
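The "perfect SPRS score of 110" mentioned above comes from the DoD Assessment Methodology for NIST SP 800-171: scoring starts at 110 and subtracts a weighted deduction (5, 3, or 1 points, defined per requirement) for each unimplemented requirement, so scores range from -203 to 110. The sketch below assumes the caller supplies the correct methodology weight for each unmet requirement.

```python
MAX_SCORE = 110  # full implementation of all 110 NIST SP 800-171 requirements

def sprs_score(unmet: dict) -> int:
    """Compute an SPRS-style score per the DoD Assessment Methodology.

    `unmet` maps requirement IDs to their deduction weight (5, 3, or 1).
    Weights are assigned per requirement by the methodology; this sketch
    trusts the caller to supply the right weight for each gap.
    """
    for req, weight in unmet.items():
        if weight not in (5, 3, 1):
            raise ValueError(f"invalid weight for {req}: {weight}")
    return MAX_SCORE - sum(unmet.values())

# Example: two 5-point gaps and one 1-point gap
print(sprs_score({"3.1.1": 5, "3.13.11": 5, "3.4.9": 1}))  # 99
```

Because high-weight gaps cost 5 points each, a handful of unimplemented foundational requirements can drag a score well below 100, which is why assessment readiness matters as much as control implementation on the path to CMMC Level 2.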

Key Compliance Resources

The following authoritative sources should be bookmarked by every compliance practitioner. These are the primary references for current standards, vulnerability data, hardening guides, and assessment tools.

NIST CSRC
csrc.nist.gov — The NIST Computer Security Resource Center is the authoritative source for all NIST cybersecurity publications: SP 800-53, 800-171, CSF 2.0, FIPS standards, and OSCAL. Includes the SP 800-53 control catalog browser with keyword search, baseline filtering, and control-to-framework mapping.
DISA STIG Library
public.cyber.mil — The official DISA repository for all Security Technical Implementation Guides. Includes STIG viewers, SCAP benchmarks, SRGs (Security Requirements Guides), and the STIG compilation library. Updated quarterly with new platform coverage and revised findings.
CISA KEV Catalog
cisa.gov/known-exploited-vulnerabilities-catalog — The Known Exploited Vulnerabilities catalog maintained by CISA. Lists vulnerabilities with confirmed active exploitation and required remediation deadlines (per BOD 22-01). Essential for vulnerability prioritization and compliance with federal patch management requirements.
FedRAMP Marketplace
marketplace.fedramp.gov — The official registry of FedRAMP-authorized cloud service offerings, including authorization status, impact level, sponsoring agency, and 3PAO details. Essential for agencies evaluating cloud services and CSPs tracking their authorization status.
CIS Benchmarks
cisecurity.org/cis-benchmarks — Free PDF downloads of consensus-based secure configuration guides for 100+ technologies. Includes Level 1 (essential) and Level 2 (defense-in-depth) recommendations. CIS-CAT Pro provides automated assessment against these benchmarks.
OSCAL GitHub
github.com/usnistgov/OSCAL — The official NIST repository for the Open Security Controls Assessment Language. Includes schema definitions (JSON, XML), reference catalogs, example profiles, conversion tools, and the OSCAL specification documentation. The foundation for FedRAMP’s machine-readable compliance initiative.

Certification Paths for Compliance Professionals

Professional certifications validate compliance expertise and are often required or preferred for federal security roles. The following certifications are most relevant to the frameworks and processes covered in this guide.

Certification Issuing Body Focus Area Relevance to Compliance Roles
CompTIA Security+ CompTIA Foundational security concepts, threats, risk management, cryptography, identity management, network security Baseline certification for DoD 8140 (IA Level I). Required for many entry-level federal security positions. Covers core compliance concepts at an introductory level.
CompTIA CASP+ CompTIA Advanced security architecture, operations, governance, risk, and compliance in enterprise environments DoD 8140 approved for advanced roles. Covers enterprise risk management, security architecture, and compliance integration at a technical practitioner level.
(ISC)² CISSP (ISC)² Eight domains: Security and Risk Management, Asset Security, Security Architecture, Communications & Network Security, IAM, Security Assessment, Security Operations, Software Development Security Gold standard for senior security roles. Required or preferred for ISSO, ISSM, and AO-designated representative positions. Demonstrates breadth across all compliance domains.
(ISC)² CAP (ISC)² Certified Authorization Professional — focused specifically on the RMF lifecycle, authorization processes, and continuous monitoring Purpose-built for federal compliance professionals. Directly tests knowledge of RMF steps, NIST publications, authorization packages, and ConMon. Highly valued in federal contractor and agency roles.
ISACA CISM ISACA Information Security Governance, Risk Management, Program Development, Incident Management Management-focused certification. Relevant for ISSM roles and security program leadership. Emphasizes governance, risk alignment with business objectives, and program maturity.
ISACA CRISC ISACA IT Risk Identification, Risk Assessment, Risk Response and Reporting, Information Technology and Security Specialized in IT risk management. Directly applicable to RMF risk assessment steps, organizational risk tolerance decisions, and risk-based control selection. Valuable for Risk Executive and AO advisory roles.

Key Acronyms — Master Glossary

A comprehensive alphabetical glossary of all acronyms used throughout this guide. This expands on the Section 01 quick reference and includes terms introduced in later sections.

Acronym | Expansion | Context
3PAO | Third Party Assessment Organization | Accredited independent assessor for FedRAMP cloud authorizations
AO | Authorizing Official | Senior executive who accepts risk and grants authorization to operate
AQL | Ariel Query Language | IBM QRadar's native query language for log search and correlation
ARF | Asset Reporting Format | SCAP output format for automated compliance scan results
ATO | Authorization to Operate | Formal declaration that a system may operate, accepting residual risk
BOD | Binding Operational Directive | CISA-issued mandatory directives for federal agencies (e.g., BOD 22-01)
C3PAO | CMMC Third-Party Assessment Organization | Accredited assessor for CMMC Level 2 certification
CAP | Certified Authorization Professional | (ISC)² certification focused on RMF and authorization
CASP+ | CompTIA Advanced Security Practitioner | Advanced security certification for technical practitioners
CCE | Common Configuration Enumeration | Unique identifiers for system configuration settings (part of SCAP)
CEF | Common Event Format | Pipe-delimited log format with key-value extensions for security events
CIS | Center for Internet Security | Organization publishing consensus-based secure configuration benchmarks
CISA | Cybersecurity and Infrastructure Security Agency | Federal agency under DHS responsible for cybersecurity coordination
CISM | Certified Information Security Manager | ISACA management-focused security certification
CISSP | Certified Information Systems Security Professional | (ISC)² gold-standard certification for senior security roles
CMMC | Cybersecurity Maturity Model Certification | DoD contractor cybersecurity verification program (3 levels)
ConMon | Continuous Monitoring | Ongoing security assessment after authorization is granted
CPE | Common Platform Enumeration | Structured naming scheme for IT systems, platforms, and packages
CRISC | Certified in Risk and Information Systems Control | ISACA certification for IT risk management professionals
CSF | Cybersecurity Framework | NIST voluntary risk-management framework (6 functions in v2.0)
CSP | Cloud Service Provider | Organization providing cloud computing services to federal agencies
CUI | Controlled Unclassified Information | Government information requiring safeguarding but not classified
CVE | Common Vulnerabilities and Exposures | Unique identifiers for publicly known cybersecurity vulnerabilities
CVSS | Common Vulnerability Scoring System | Standardized severity rating for vulnerabilities (0.0–10.0)
DAST | Dynamic Application Security Testing | Testing running applications for vulnerabilities via simulated attacks
DATO | Denial of Authorization to Operate | AO decision that a system may not operate due to unacceptable risk
DIB | Defense Industrial Base | Network of 300,000+ companies supporting military systems
DIBCAC | Defense Industrial Base Cybersecurity Assessment Center | Government entity conducting CMMC Level 3 assessments
DISA | Defense Information Systems Agency | DoD agency responsible for IT infrastructure and STIG publication
EDR | Endpoint Detection and Response | Security technology monitoring endpoints for threat activity
EQL | Event Query Language | Elastic's correlation-focused query language for sequence detection
FCI | Federal Contract Information | Information generated for the government under contract, not for public release
FedRAMP | Federal Risk and Authorization Management Program | Standardized cloud security assessment and authorization program
FIPS | Federal Information Processing Standards | NIST standards mandatory for federal systems (e.g., FIPS 199, FIPS 200)
FISMA | Federal Information Security Modernization Act | Federal law requiring agencies to implement information security programs
GovRAMP | Government Risk and Authorization Management Program | Formerly StateRAMP; compliance framework for state/local government cloud
GRS | General Records Schedule | NARA records retention schedules for federal agencies
IATO | Interim Authorization to Operate | Temporary authorization (90–180 days) while deficiencies are remediated
IDS/IPS | Intrusion Detection/Prevention System | Network security devices that detect and/or block malicious traffic
IPFIX | IP Flow Information Export | IETF standard for network flow data export (successor to NetFlow)
ISO | Information System Owner | Business owner responsible for a system throughout its lifecycle (distinct from the ISO of ISO 27001, the International Organization for Standardization)
ISSM | Information System Security Manager | Oversees security across multiple systems or an organization
ISSO | Information System Security Officer | Day-to-day security operations for a specific system
KEV | Known Exploited Vulnerabilities | CISA catalog of vulnerabilities with confirmed active exploitation
KQL | Kusto Query Language | Query language used by Microsoft Sentinel and Azure Data Explorer
NARA | National Archives and Records Administration | Federal agency governing records management and retention schedules
NIST | National Institute of Standards and Technology | Federal agency publishing cybersecurity standards and frameworks
OCSF | Open Cybersecurity Schema Framework | Emerging standard for normalized security event data
OSCAL | Open Security Controls Assessment Language | NIST machine-readable format for compliance documentation
OVAL | Open Vulnerability and Assessment Language | XML-based language for automated configuration and vulnerability checks
POA&M | Plan of Action and Milestones | Document tracking known vulnerabilities with remediation plans
RMF | Risk Management Framework | NIST 7-step lifecycle process for security authorization
SAP | Security Assessment Plan | Describes scope, methodology, and schedule for security assessment
SAR | Security Assessment Report | Results of formal security testing with findings and risk ratings
SAST | Static Application Security Testing | Source code analysis for vulnerabilities without execution
SBOM | Software Bill of Materials | Inventory of components in a software artifact for supply chain transparency
SCA | Security Control Assessor | Independent evaluator who tests control effectiveness
SCAP | Security Content Automation Protocol | Suite of specs enabling automated compliance checking (XCCDF, OVAL, etc.)
SIEM | Security Information and Event Management | Platform for centralized log collection, correlation, and alerting
SLSA | Supply-chain Levels for Software Artifacts | Framework for software supply chain integrity assurance
SOAR | Security Orchestration, Automation, and Response | Platform for automating incident response and security workflows
SOC 2 | System and Organization Controls 2 | AICPA audit standard for service organization security controls
SPL | Search Processing Language | Splunk's native query language for log search and analysis
SPRS | Supplier Performance Risk System | DoD portal where contractors post NIST 800-171 self-assessment scores
SRG | Security Requirements Guide | DISA technology-neutral guidance that STIGs are derived from
SSP | System Security Plan | Primary authorization document describing system and control implementations
STIG | Security Technical Implementation Guide | DISA configuration standards with findings rated CAT I/II/III
UDM | Unified Data Model | Google Chronicle's normalized schema for security event data
WORM | Write Once Read Many | Storage that prevents modification after writing; used for log integrity
XCCDF | Extensible Configuration Checklist Description Format | XML format for representing security checklists and benchmarks (part of SCAP)
YARA-L | YARA Language (Chronicle variant) | Detection rule language for Google Chronicle/SecOps SIEM