
FedRAMP and Microservices Environments

All about FedRAMP and microservices environments in 2022
The Federal Risk and Authorization Management Program (FedRAMP) was established in December 2011 by the Office of Management and Budget (OMB), which reports directly to the US President and is responsible for enforcing Presidential decisions, policies, and actions across multiple sectors (e.g., healthcare, energy, national security). FedRAMP provides US executive departments and agencies with a cost-effective approach to adopting and using cloud services while prioritizing and mitigating risks based on the potential impact of a compromise.
FedRAMP Regulation – A Snapshot
The FedRAMP program was developed over a 24-month period leading up to December 2011 by the OMB in association with various government agencies. These include the National Institute of Standards and Technology (NIST), the General Services Administration (GSA), the Department of Defense (DOD), the Department of Homeland Security (DHS), the United States Chief Information Officers Council (CIO Council), and the Information Security and Identity Management Committee (ISIMC), as well as state and local governments, NGOs, the private sector, and academia.

Essentially, FedRAMP is a compliance framework to safeguard information created, processed, maintained, distributed, or disclosed by the federal government. So, cloud service providers (CSPs) working with federal agencies need to be FedRAMP-certified, which means their cloud products and services meet one of three security impact levels: low, moderate, or high (the high baseline was released by FedRAMP in June 2016). Three security objectives form the basis of these impact levels:

  • Keeping sensitive personal/proprietary information private (confidentiality)
  • Protecting information from improper alteration or erasure (integrity)
  • Ensuring reliable and timely access to information (availability)
* This is applicable to commercial and non-commercial cloud services provided by information systems that support the operations and assets of federal departments and agencies.

** Private clouds operated solely for the benefit of a federal department or agency, implemented within its facility, and not providing cloud services to external agencies, including subordinate units within the agency, are exempted from FedRAMP provisions.
FedRAMP Impact Levels for CSPs
The Federal Information Security Modernization Act (FISMA) sets forth the security obligations and requirements for federal IT systems. Essentially, federal agencies must protect data against access by unauthorized persons (confidentiality) and from unauthorized modification, destruction, or corruption (integrity). At the same time, federal systems need to ensure authorized users have timely and reliable access to data (availability). Any potential loss of data confidentiality, data integrity, or data availability is bound to have an impact on the organization, but the intensity of such impact ("impact level") will vary depending on how sensitive the data in the system is. Therefore, in order to be FedRAMP-certified, CSPs must follow the impact level (low, moderate, or high) that applies to them, which is determined by the sensitivity of the data they handle.

Publicly available data carries a low security impact level, while data whose leakage might seriously affect public organizations is deemed to be at the high security impact level. Personally identifiable information (PII) typically falls at the moderate impact level. The higher the security impact level, the more controls (e.g., user authentication) the CSP is expected to have in place to ensure the data is adequately secured from compromise.
Only CSPs with "FedRAMP Authority to Operate" (ATO) authorizations are allowed to offer cloud services and products to federal agencies, and these are issued by an authorizing official (AO) at each federal agency. Alternatively, CSPs can obtain a FedRAMP Provisional Authorization to Operate (P-ATO) from the Joint Authorization Board (JAB), comprising security experts from the DoD, DHS, and GSA. Authorizations are issued based on the CSP's security assessment reports and continuous security monitoring plans, among other documentation. Under FedRAMP, security controls put in place by CSPs are subject to assessment by independent third parties.
Benefits of FedRAMP Compliance
Typically, FedRAMP accreditation costs anywhere between $250,000 and $750,000, depending on the nature and scope of services. Implementing some of the security control types and incident reporting processes is going to be a tall order for CSPs/federal agencies without the right tools, services, and trained personnel. Sample this: Within an hour of an information security incident being identified by its top-level security incident response team, a CSP must report the incident, be it suspected or confirmed, to the following stakeholders.
  • Impacted customers
  • US-CERT, if the incident is the result of an attack vector
  • AOs and incident response teams
  • JAB reviewers and the FedRAMP program management office (PMO) at the GSA
Despite these hassles, it pays to be FedRAMP compliant. Significantly, federal agencies purchased cloud services to the tune of $4 billion in 2018, and this spend is projected to touch $9+ billion by 2024. Barring some on-premises private clouds, all federal agency cloud deployments and service models must meet FedRAMP requirements at the appropriate risk impact level.
Penalties for Non-Compliance
Legacy on-premises computing resources, end-of-life operating systems, as well as glitches, gaps, and errors in cloud environments that are out of sync with FedRAMP security mandates could result in massive fines and reputation loss for the CSP. CSPs and federal agencies share important security responsibilities to ensure protection for data within a cloud setup. Moreover, weaknesses and flaws in code and applications could potentially be exploited by hackers to steal sensitive data from federal networks.
FedRAMP Compliance for Kubernetes
Kubernetes comes with certain built-for-the-cloud security features such as role-based access control and admission controllers that intercept requests to the API server. However, enabling more fine-grained access control, encryption, and auditability on top of Kubernetes's native security attributes is important to ensure deployments remain hardened and in sync with FedRAMP mandates. Here are some key "hardening" measures for Kubernetes to help minimize risks and secure clusters even further.

Access Control

Role-based access control (RBAC) helps regulate which users or groups across various teams or projects have access to Kubernetes resources (pods, services, replication controllers) and at what level. RBAC permissions are purely additive; there is no rule type for explicitly denying access. Permissions are granted to users based on their need to perform actions on resources in a cluster, as defined by their roles in the organization. Kubernetes also uses namespaces to logically isolate groups of resources within a cluster and to limit user access to specific namespaces, the applications that run in them, and the data they hold. This helps strengthen data security and compliance by restricting cluster-wide access.
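As an illustration, a minimal Role and RoleBinding might grant read-only access to pods within a single namespace. The namespace and user name below are hypothetical placeholders:

```yaml
# Role: read-only access to pods, scoped to the "team-a" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# RoleBinding: grants the Role above to a specific user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane               # hypothetical user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because permissions are additive, a user bound only to this Role can list and read pods in `team-a` but cannot touch any other resource or namespace.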

Access to pods with a secure shell (SSH) server installed can be limited to users whose identity has been established via their public/private key pairs. This restricts access to containers from outside the cluster to acceptable traffic arriving over the SSH protocol.

Identification and Authentication

Authorization, as discussed above, is only half the game. The other half is authentication. Without proper authentication, an attacker can impersonate an identity you trust and access sensitive data or inject malware into systems. Kubernetes supports bootstrap tokens to establish two-way trust between the API server and worker nodes. It also supports authenticating proxies as well as client certificates for authenticating kubelets (the primary node agents) to the API server.

Kubernetes can authenticate users against locally managed credentials (for example, static token files). In addition, it can leverage identity providers (IdPs) to protect data more securely against threats by limiting unauthenticated "calls" to APIs. Kubernetes works with several IdPs (e.g., Okta, Google OpenID Connect, and Azure Active Directory), which facilitate single sign-on as well as authenticated access to web/API services. Even so, Kubernetes does not manage user identities natively; it delegates that responsibility to the external IdP.

There are several in-market identity services (such as Dex) that can serve as a bridge between Kubernetes and various IdPs: such a service speaks OpenID Connect to Kubernetes while federating to upstream providers over protocols such as SAML or LDAP. In this scheme of things, service providers send users to IdPs, which store and verify their login credentials, before redirecting them back to the service providers. Besides limiting unauthenticated access to APIs that deal with sensitive data, this identity layer saves service providers the time and effort involved in managing passwords internally.
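For instance, pointing the Kubernetes API server at an OIDC issuer such as Dex typically involves a handful of flags along these lines; the issuer URL and client ID below are hypothetical placeholders, and the exact values depend on your IdP configuration:

```shell
kube-apiserver \
  --oidc-issuer-url=https://dex.example.com/dex \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups
```

With this in place, tokens issued by the IdP are validated by the API server, and the `email` and `groups` claims map onto the users and groups referenced in RBAC bindings.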

Login Attempts

Another tactic to hinder potential threats is to configure the web administration interface such that a user is locked out of an account/node, say, for "x" minutes, after she/he exceeds a preset number of consecutive failed login attempts. The lockout is lifted after the time period has elapsed, and adding this delay helps to slow down malicious attacks. This applies where users are supported from a local database. When it comes to single sign-on accounts, it's left to the IdPs, after careful consideration, to settle on the number of failed login attempts and lockout time period.
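The lockout policy described above can be sketched in a few lines; the thresholds below (3 attempts, 15 minutes) are illustrative defaults, not FedRAMP-mandated values:

```python
import time


class LoginLockout:
    """Locks an account after too many consecutive failed logins."""

    def __init__(self, max_attempts=3, lockout_seconds=900):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self.failures = {}        # user -> consecutive failure count
        self.locked_until = {}    # user -> timestamp when lockout lifts

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(user, 0) > now

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_attempts:
            # Lock the account and reset the counter for the next window
            self.locked_until[user] = now + self.lockout_seconds
            self.failures[user] = 0

    def record_success(self, user):
        # A successful login clears the consecutive-failure count
        self.failures.pop(user, None)
```

Because the lockout simply expires after the configured interval, no administrator action is needed to restore access, while the forced delay still slows down brute-force attempts.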

Session Control

Where a user is logged in to multiple nodes within a cluster, there is an elevated risk of login credentials being misused and user account passwords being illegitimately reset. Limiting the total number of concurrent connections (via SSH) to nodes in a cluster by a single client (user) helps stave off this threat. Of course, multiple terminal sessions multiplexed onto a single connection (e.g., with tmux) count as a single session. User attempts to exceed the maximum number of session channels are promptly logged to a database of access events and used for audit purposes.

The load balancing solution can be configured to set the "session sticky time" between a user and server to a maximum of three hours. On top of that, the information system can be prepped to terminate a user session when a predefined condition is satisfied or in the event of any unauthorized attempt to access restricted data. Plus, an idle time, in minutes, can be specified so the server terminates a session as soon as the specified period of inactivity expires, thus releasing applications from inactive connections. Users should also be able to exit a remote session at will.
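The idle-timeout behavior can be sketched as follows; the 15-minute timeout is an illustrative value, and a real implementation would hang off the server's connection-handling loop:

```python
import time


class Session:
    """Tracks user activity and expires after a period of inactivity."""

    def __init__(self, idle_timeout_minutes=15):
        self.idle_timeout = idle_timeout_minutes * 60  # seconds
        self.last_activity = time.time()
        self.active = True

    def touch(self, now=None):
        # Record user activity, resetting the inactivity clock
        self.last_activity = time.time() if now is None else now

    def is_expired(self, now=None):
        now = time.time() if now is None else now
        if self.active and now - self.last_activity > self.idle_timeout:
            self.active = False  # terminate: frees the connection for reuse
        return not self.active
```

Each request from the user calls `touch()`; a periodic sweep calling `is_expired()` then reaps sessions that have gone quiet, releasing applications from inactive connections as described above.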


Kubernetes uses certificates issued by trusted certificate authorities (CAs) to establish trust. These include node CAs and user CAs to authenticate nodes and users respectively. The certificates they issue carry a "time to live" after which they expire. When its client certificate nears expiration, the kubelet can automatically request a new one.

Remote Access

Remote access to an API server from anywhere on the Internet could potentially leave data vulnerable to theft, misuse, unauthorized changes, and corruption. To address this risk, it is important that all remote connections are always routed through an authentication gateway and allowed access only on a need basis. Such proxies serve as a single point for managing remote access to the server and use the secure SSH or HTTP/TLS protocol to authenticate and encrypt data moving between users and the server. The SSH or TLS sessions can be further protected with X.509 certificates. These make use of a public key infrastructure framework to uniquely identify and authenticate users to a server or remote device, thus securing the communication between them. Importantly, the SSH and TLS certificates are deleted from the hard disk at logout. Likewise, cookies are deleted from the browser to avoid the risk of them being hijacked by threat actors in order to access browsing sessions.

Trusted Clusters

An open-source proxy server (e.g., Teleport), which understands the SSH and TLS protocols, can be utilized to establish trusted connections for users across Kubernetes clusters to access services located behind firewalls. This obviates the need for open static ports in the firewall for TCP access. Further, this will enable authenticated users to process, store, and transfer information from outside a cluster. The nature and extent of such access is based on the actions the user is required to perform on Kubernetes resources as per her/his role in the organization.

Audit and Accountability

Kubernetes also maintains a chronological record of activities by users, by applications that use the API server, and by the server itself, stored in human-readable format in non-volatile backend storage. These audit logs serve as sequential records of the various changes that occur within a cluster and capture details around events such as:

  • What happened and when?
  • Who (individual or group) triggered it?
  • From where was it triggered?
  • What was the outcome?
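On Kubernetes specifically, what gets recorded for each of these questions is driven by an audit policy. A minimal, illustrative policy might keep secrets at the metadata level (to avoid logging sensitive payloads) while capturing full request/response bodies for changes to other resources:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Never log request/response bodies for secrets; metadata is enough
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Log full request and response bodies for changes to everything else
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
# Log metadata (who, what, when, outcome) for all other requests
- level: Metadata
```

Rules are evaluated in order, so the secrets rule must come first; otherwise the broader rules below would capture secret payloads in the audit trail.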

By making use of an open-source proxy server, it should be possible to capture additional audit records such as unsuccessful login attempts, file transfers, file system changes, network activity during sessions, and commands executed on the SSH server. Additionally, customers should consider running their Kubernetes clusters in FedRAMP-compliant clouds to ensure critical government data is well protected against security breaches. Typically, the FedRAMP compliance of the cloud does not involve any increase in cloud service costs for customers.

That was our quick take on how to add some more guardrails to the Kubernetes environment. Stay in the loop. We will be back soon with more security updates. Team