Red Hat OpenShift Container Platform STIG V1R1
OpenShift must use TLS 1.2 or greater for secure container image transport from trusted sources.
STIG ID:
CNTR-OS-000010 |
SRG: SRG-APP-000014-CTR-000035 |
Severity: medium |
CCI: CCI-000068 |
Vulnerability Id: V-257505 |
Vulnerability Discussion
The authenticity and integrity of the container image during the container image lifecycle are part of the overall security posture of the container platform. The lifecycle begins with the creation of the container image, the pull of a base image from a trusted source to build child container images, and the instantiation of the new image as a running service.
If an insecure protocol is used during transmission of container images at any step of the lifecycle, a bad actor may inject nefarious code into the container image. The container image, when instantiated, then becomes a security risk to the container platform, the host server, and other containers within the container platform. To thwart the injection of code during transmission, a secure protocol (TLS 1.2 or newer) must be used. Further guidance on secure transport protocols can be found in NIST SP 800-52.
Check
Verify that no insecure registries are configured by executing the following:
oc get image.config.openshift.io/cluster -ojsonpath='{.spec.allowedRegistriesForImport}' | jq -r '.[] | select(.insecure == true)'
If the above query finds any registries, this is a finding. Empty output is not a finding.
Verify that no insecure registries are configured by executing the following:
oc get image.config.openshift.io/cluster -ojsonpath='{.spec.registrySources.insecureRegistries}'
If the above query returns anything, then this is a finding. Empty output is not a finding.
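Note: As an optional convenience, both queries can be combined into a single pass/fail check. The following sketch assumes oc is logged in with cluster-admin rights and that jq is installed; it is not part of the official check procedure:
insecure_import=$(oc get image.config.openshift.io/cluster -ojsonpath='{.spec.allowedRegistriesForImport}' | jq -r '.[]? | select(.insecure == true) | .domainName')
insecure_sources=$(oc get image.config.openshift.io/cluster -ojsonpath='{.spec.registrySources.insecureRegistries}')
if [ -n "$insecure_import" ] || [ -n "$insecure_sources" ]; then
  echo "FINDING: insecure registries configured: $insecure_import $insecure_sources"
else
  echo "OK: no insecure registries configured."
fi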
Fix
Remove insecure registries from the cluster's image registry configuration by executing the following:
oc edit image.config.openshift.io/cluster
Edit or remove any registries where insecure is set to true or are listed under insecureRegistries.
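Note: For illustration only, a compliant image configuration leaves insecureRegistries unset (or empty) and lists only TLS-protected registries. The registry name below is a placeholder, not a required value, and be aware that setting allowedRegistries restricts image pulls to the listed registries only:
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    allowedRegistries:
    - registry.example.com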
Refer to https://docs.openshift.com/container-platform/4.8/openshift_images/image-configuration.html for more details on configuring registries in OpenShift.
OpenShift must use TLS 1.2 or greater for secure communication.
STIG ID:
CNTR-OS-000020 |
SRG: SRG-APP-000014-CTR-000040 |
Severity: medium |
CCI: CCI-000068,CCI-001453 |
Vulnerability Id: V-257506 |
Vulnerability Discussion
The authenticity and integrity of the container platform and communication between nodes and components must be secure. If an insecure protocol is used during transmission of data, the data can be intercepted and manipulated. The manipulation of data can be used to inject status changes of the container platform, causing the execution of containers or reporting an incorrect healthcheck. To thwart the manipulation of the data during transmission, a secure protocol (TLS 1.2 or newer) must be used. Further guidance on secure transport protocols can be found in NIST SP 800-52.
Satisfies: SRG-APP-000014-CTR-000040, SRG-APP-000560-CTR-001340
Check
Verify the TLS Security Profile is not set to a profile that permits TLS versions older than 1.2.
View the TLS security profile for the ingress controllers by executing the following:
oc get --all-namespaces ingresscontrollers.operator.openshift.io -ocustom-columns="NAME":.metadata.name,"NAMESPACE":.metadata.namespace,"TLS PROFILE":.spec.tlsSecurityProfile
View the TLS security profile for the control plane by executing the following:
oc get APIServer cluster -ocustom-columns="TLS PROFILE":.spec.tlsSecurityProfile
View the TLS profile for the Kubelet by executing the following:
oc get kubeletconfigs -ocustom-columns="NAME":.metadata.name,"TLS PROFILE":.spec.tlsSecurityProfile
If any of the above returns a TLS profile of "Old", this is a finding.
If any of the above returns a TLS profile of "Custom" and the minTLSVersion is not set to "VersionTLS12" or greater, this is a finding.
If any of the above returns an empty ("") TLS profile, this is not a finding, as the TLS profile defaults to "Intermediate".
If the kubelet TLS profile check does not return any kubeletconfigs, this is not a finding as the default OCP installation uses defaults only.
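Note: The three checks above can optionally be scripted into a single report. This is a minimal sketch, assuming jq is installed and oc is logged in with cluster-admin rights; kubeletconfigs, if any exist, can be iterated the same way:
check_profile() {
  type=$(echo "$2" | jq -r '.type // empty')
  minver=$(echo "$2" | jq -r '.custom.minTLSVersion // empty')
  if [ "$type" = "Old" ]; then
    echo "FINDING: $1 uses the Old TLS profile"
  elif [ "$type" = "Custom" ] && [ "$minver" != "VersionTLS12" ] && [ "$minver" != "VersionTLS13" ]; then
    echo "FINDING: $1 uses a Custom profile with minTLSVersion=$minver"
  fi
}
check_profile "APIServer/cluster" "$(oc get apiserver.config.openshift.io/cluster -ojsonpath='{.spec.tlsSecurityProfile}')"
for ic in $(oc get ingresscontrollers.operator.openshift.io --all-namespaces -ojsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'); do
  check_profile "IngressController $ic" "$(oc get ingresscontrollers.operator.openshift.io "${ic#*/}" -n "${ic%%/*}" -ojsonpath='{.spec.tlsSecurityProfile}')"
done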
Fix
Edit each resource and set the TLS Security Profile to Intermediate by executing the following:
oc edit ingresscontroller <name> -n <namespace>
Add the following to the file:
apiVersion: operator.openshift.io/v1
kind: IngressController
...
spec:
  tlsSecurityProfile:
    intermediate: {}
    type: Intermediate
Edit API Server by executing the following:
oc edit APIServer
Add the following to the file:
apiVersion: config.openshift.io/v1
kind: APIServer
...
spec:
  tlsSecurityProfile:
    intermediate: {}
    type: Intermediate
Edit Kubelet by executing the following:
oc edit KubeletConfig
Set to the following:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
...
spec:
  tlsSecurityProfile:
    intermediate: {}
    type: Intermediate
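Note: If the organization requires pinning exact TLS parameters rather than using the Intermediate profile, a Custom profile still satisfies this requirement as long as minTLSVersion is VersionTLS12 or higher. The following is an illustrative sketch only; the cipher list is an example, not a mandated set:
spec:
  tlsSecurityProfile:
    type: Custom
    custom:
      minTLSVersion: VersionTLS12
      ciphers:
      - ECDHE-ECDSA-AES128-GCM-SHA256
      - ECDHE-RSA-AES128-GCM-SHA256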
OpenShift must use a centralized user management solution to support account management functions.
STIG ID:
CNTR-OS-000030 |
SRG: SRG-APP-000023-CTR-000055 |
Severity: medium |
CCI: CCI-000015 |
Vulnerability Id: V-257507 |
Vulnerability Discussion
OpenShift supports several different types of identity providers. To add users and grant access to OpenShift, an identity provider must be configured. Some of the identity provider types, such as HTPasswd, only provide simple user management and are not intended for production. Other types are public services, like GitHub. These provider types are not appropriate because they are managed by public service providers and therefore cannot enforce the organization's account management requirements.
Use either the LDAP or the OpenIDConnect identity provider type to configure OpenShift to use the organization's centrally managed IdP, which is able to enforce the organization's policies regarding user identity management.
Check
Verify the authentication operator is configured to use either an LDAP or an OpenIDConnect provider by executing the following:
oc get oauth cluster -o jsonpath="{.spec.identityProviders[*].type}{'\n'}"
If the output lists any other type besides LDAP or OpenID, this is a finding.
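Note: As an optional helper, the following sketch prints only the non-compliant provider types (assumes jq is installed):
oc get oauth cluster -ojson | jq -r '.spec.identityProviders[]? | select(.type != "LDAP" and .type != "OpenID") | "\(.name): \(.type)"'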
Fix
Configure OpenShift to use an appropriate identity provider. Do not use HTPasswd. Use LDAP (AD), OpenIDConnect, or another approved identity provider.
To configure LDAP provider:
1. Create Secret for BIND DN password by executing the following:
oc create secret generic ldap-secret --from-literal=bindPassword=<password> -n openshift-config
2. Create config map for LDAP Trust CA by executing the following:
oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
3. Create LDAP Auth Config Resource YAML:
Using the preferred text editor, create a file named ldapidp.yaml using the example content (replacing config values as appropriate):
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: ldapidp
    mappingMethod: claim
    type: LDAP
    ldap:
      attributes:
        id:
        - dn
        email:
        - mail
        name:
        - cn
        preferredUsername:
        - uid
      bindDN: "<bindDN>"
      bindPassword:
        name: ldap-secret
      ca:
        name: ca-config-map
      insecure: false
      url: "<ldap_url>"
4. Apply LDAP config to cluster by executing the following:
oc apply -f ldapidp.yaml
Note: For more information on configuring an LDAP provider, refer to https://docs.openshift.com/container-platform/4.8/authentication/identity_providers/configuring-ldap-identity-provider.html.
To configure OpenID provider:
1. Create Secret for Client Secret by executing the following:
oc create secret generic idp-secret --from-literal=clientSecret=<client_secret> -n openshift-config
2. Create config map for OpenID Trust CA by executing the following:
oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
3. Create OpenID Auth Config Resource YAML.
Using the preferred text editor, create a file named oidcidp.yaml using the example content (replacing config values as appropriate).
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: <client_id>
      clientSecret:
        name: idp-secret
      claims:
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
      ca:
        name: ca-config-map
      issuer: <issuer_url>
4. Apply OpenID config to cluster by executing the following:
oc apply -f oidcidp.yaml
Note: For more information on configuring an OpenID provider, refer to https://docs.openshift.com/container-platform/4.8/authentication/identity_providers/configuring-oidc-identity-provider.html.
The kubeadmin account must be disabled.
STIG ID:
CNTR-OS-000040 |
SRG: SRG-APP-000023-CTR-000055 |
Severity: medium |
CCI: CCI-000015 |
Vulnerability Id: V-257508 |
Vulnerability Discussion
Using a centralized user management solution for account management functions enhances security, simplifies administration, improves user experience, facilitates compliance, and provides scalability and integration capabilities. It is a foundational element of effective identity and access management practices.
OpenShift supports several different types of identity providers. To add users and grant access to OpenShift, an identity provider needs to be configured. Some of the identity provider types, such as HTPasswd, only provide simple user management and are not intended for production. Other types are public services, like GitHub. These provider types may not be appropriate as they are managed by public service providers and therefore are unable to enforce the organization's account management requirements.
After a new install, the default authentication uses kubeadmin as the default cluster-admin account. This default account must be disabled and another user account must be given cluster-admin rights.
Check
Verify the kubeadmin account is disabled by executing the following:
oc get secrets kubeadmin -n kube-system
If the command returns an error, the secret was not found, and this is not a finding.
(Example output:
Error from server (NotFound): secrets "kubeadmin" not found)
If the command returns a listing that includes the kubeadmin secret, its type, the data count, and age, this is a finding.
(Example output for a finding:
NAME TYPE DATA AGE
kubeadmin Opaque 1 6h3m)
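Note: The same check can be scripted as a non-interactive pass/fail test, for example:
if oc get secret kubeadmin -n kube-system >/dev/null 2>&1; then
  echo "FINDING: kubeadmin secret still exists"
else
  echo "OK: kubeadmin secret not found"
fi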
Fix
If an alternative IDP is already configured and an administrative user exists with the role of cluster-admin, disable the kubeadmin account by running the following command as a cluster administrator:
oc delete secrets kubeadmin -n kube-system
OpenShift must automatically audit account creation.
STIG ID:
CNTR-OS-000050 |
SRG: SRG-APP-000026-CTR-000070 |
Severity: medium |
CCI: CCI-000018 |
Vulnerability Id: V-257509 |
Vulnerability Discussion
Once an attacker establishes access to a system, the attacker often attempts to create a persistent method of reestablishing access. One way to accomplish this is for the attacker to create a new account. Auditing account creation is one method for mitigating this risk.
A comprehensive account management process will ensure that an audit trail exists that documents the creation of application user accounts and, as required, notifies administrators and/or application owners. Such a process greatly reduces the risk that accounts will be surreptitiously created and provides logging that can be used for forensic purposes.
To address access requirements, many application developers choose to integrate their applications with enterprise-level authentication/access/auditing mechanisms that meet or exceed access control policy requirements. Such integration allows the application developer to off-load those access control functions and focus on core application features and functionality.
Check
Verify Red Hat Enterprise Linux CoreOS (RHCOS) generates audit records for all account creations, modifications, disabling, and termination events that affect "/etc/shadow".
Logging on as administrator, check the auditing rules in "/etc/audit/audit.rules" by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME: "; grep /etc/shadow /etc/audit/audit.rules /etc/audit/rules.d/*'; done
(Example output:
-w /etc/shadow -p wa -k identity)
If the command does not return a line, or the line is commented out, this is a finding.
Fix
Apply the machine config using the following command:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 75-account-modifications-rules-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,-w%20/etc/sudoers.d/%20-p%20wa%20-k%20actions%0A-w%20/etc/sudoers%20-p%20wa%20-k%20actions%0A
        mode: 0644
        path: /etc/audit/rules.d/75-audit-sysadmin-actions.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/group%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_group_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/gshadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_gshadow_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/security/opasswd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_security_opasswd_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/passwd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_passwd_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/shadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_shadow_usergroup_modification.rules
        overwrite: true
" | oc apply -f -
done
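Note: Applying a MachineConfig causes the Machine Config Operator to roll the change out node by node, which may reboot nodes. Progress can be observed with the following command; each pool reports UPDATED as True once all of its nodes are running the new configuration:
oc get machineconfigpool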
OpenShift must automatically audit account modification.
STIG ID:
CNTR-OS-000060 |
SRG: SRG-APP-000027-CTR-000075 |
Severity: medium |
CCI: CCI-001403 |
Vulnerability Id: V-257510 |
Vulnerability Discussion
Once an attacker establishes access to a system, the attacker often attempts to create a persistent method of reestablishing access. One way to accomplish this is for the attacker to modify an existing account. Auditing of account modifications is one method for mitigating this risk.
A comprehensive account management process will ensure that an audit trail exists that documents the creation of application user accounts and, as required, notifies administrators and/or application owners. Such a process greatly reduces the risk that accounts will be surreptitiously modified and provides logging that can be used for forensic purposes.
To address access requirements, many application developers choose to integrate their applications with enterprise-level authentication/access/auditing mechanisms that meet or exceed access control policy requirements. Such integration allows the application developer to offload those access control functions and focus on core application features and functionality.
Check
Verify that, for each of the files that contain account information, the system is configured to emit an audit event in case of a write, by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; for f in /etc/passwd /etc/group /etc/gshadow /etc/security/opasswd /etc/shadow /etc/sudoers /etc/sudoers.d/; do grep -q "\-w $f \-p wa \-k" /etc/audit/audit.rules || echo "rule for $f not found"; done' 2>/dev/null; done
If for any of the files a line saying "rule for $filename not found" is printed, this is a finding.
Fix
Apply the machine config using the following command:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 75-account-modifications-rules-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,-w%20/etc/sudoers.d/%20-p%20wa%20-k%20actions%0A-w%20/etc/sudoers%20-p%20wa%20-k%20actions%0A
        mode: 0644
        path: /etc/audit/rules.d/75-audit-sysadmin-actions.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/group%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_group_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/gshadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_gshadow_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/security/opasswd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_security_opasswd_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/passwd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_passwd_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/shadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_shadow_usergroup_modification.rules
        overwrite: true
" | oc apply -f -
done
OpenShift must generate audit rules to capture account related actions.
STIG ID:
CNTR-OS-000070 |
SRG: SRG-APP-000028-CTR-000080 |
Severity: medium |
CCI: CCI-000172,CCI-001404,CCI-001683,CCI-001684,CCI-001685,CCI-001686,CCI-002130,CCI-002132 |
Vulnerability Id: V-257511 |
Vulnerability Discussion
Account management actions, such as creation, modification, disabling, removal, and enabling are important changes within the system.
When management actions are modified, user accessibility is affected. Once an attacker establishes access to an application, the attacker often attempts to disable authorized accounts to disrupt services or prevent the implementation of countermeasures. In the event of a security incident or policy violation, having detailed audit logs for account creation, modification, disabling, removal, and enabling actions is crucial for incident response and forensic investigations. These logs provide a trail of activities that can be analyzed to determine the cause, impact, and scope of the incident, aiding in the investigation and remediation process.
Satisfies: SRG-APP-000028-CTR-000080, SRG-APP-000291-CTR-000675, SRG-APP-000292-CTR-000680, SRG-APP-000293-CTR-000685, SRG-APP-000294-CTR-000690, SRG-APP-000319-CTR-000745, SRG-APP-000320-CTR-000750, SRG-APP-000509-CTR-001305
Check
Verify the audit rules capture account creation, modification, disabling, removal, and enabling actions by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e user-modify -e group-modify -e audit_rules_usergroup_modification /etc/audit/rules.d/* /etc/audit/audit.rules' 2>/dev/null; done
Confirm the following rules exist on each node:
-w /etc/group -p wa -k audit_rules_usergroup_modification
-w /etc/gshadow -p wa -k audit_rules_usergroup_modification
-w /etc/security/opasswd -p wa -k audit_rules_usergroup_modification
-w /etc/passwd -p wa -k audit_rules_usergroup_modification
-w /etc/shadow -p wa -k audit_rules_usergroup_modification
If the above rules are not listed on each node, this is a finding.
Fix
Apply the machine config using the following command:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 75-account-modifications-rules-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,-w%20/etc/sudoers.d/%20-p%20wa%20-k%20actions%0A-w%20/etc/sudoers%20-p%20wa%20-k%20actions%0A
        mode: 0644
        path: /etc/audit/rules.d/75-audit-sysadmin-actions.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/group%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_group_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/gshadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_gshadow_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/security/opasswd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_security_opasswd_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/passwd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_passwd_usergroup_modification.rules
        overwrite: true
      - contents:
          source: data:,-w%20/etc/shadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
        mode: 0644
        path: /etc/audit/rules.d/30-etc_shadow_usergroup_modification.rules
        overwrite: true
" | oc apply -f -
done
OpenShift must automatically audit account removal actions.
STIG ID:
CNTR-OS-000080 |
SRG: SRG-APP-000029-CTR-000085 |
Severity: medium |
CCI: CCI-001405 |
Vulnerability Id: V-257512 |
Vulnerability Discussion
When application accounts are removed, user accessibility is affected. Once an attacker establishes access to an application, the attacker often attempts to remove authorized accounts to disrupt services or prevent the implementation of countermeasures. Auditing account removal actions provides logging that can be used for forensic purposes.
To address access requirements, many application developers choose to integrate their applications with enterprise-level authentication/access/audit mechanisms meeting or exceeding access control policy requirements. Such integration allows the application developer to off-load those access control functions and focus on core application features and functionality.
Check
Verify the audit rules capture the execution of setuid and setgid binaries by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e key=privileged /etc/audit/audit.rules || echo "not found"' 2>/dev/null; done
If "not found" is printed, this is a finding.
Confirm the following rules exist on each node:
-a always,exit -S all -F path=/usr/libexec/dbus-1/dbus-daemon-launch-helper -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/sbin/grub2-set-bootflag -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/libexec/openssh/ssh-keysign -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/chage -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/fusermount3 -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/fusermount -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/gpasswd -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/mount -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/newgrp -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/passwd -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/pkexec -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/sudo -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/su -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/umount -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/bin/write -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/libexec/sssd/krb5_child -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/libexec/sssd/ldap_child -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/libexec/sssd/proxy_child -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/libexec/sssd/selinux_child -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/libexec/utempter -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/lib/polkit-1/polkit-agent-helper-1 -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/sbin/mount.nfs -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/sbin/pam_timestamp_check -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -S all -F path=/usr/sbin/unix_chkpwd -F auid>=1000 -F auid!=unset -F key=privileged
To find all setuid binaries on the system, execute the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; find / -xdev -type f -perm -4000 -o -type f -perm -2000 2>/dev/null'; done
If any setuid binary does not have a corresponding audit rule, this is a finding.
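Note: Cross-referencing the setuid/setgid binaries against the audit rules can optionally be scripted. The following is a minimal sketch, run with cluster-admin rights, that reports any privileged binary with no matching path rule:
for node in $(oc get node -oname); do
  oc debug $node -- chroot /host /bin/bash -c 'echo "== $HOSTNAME =="; find / -xdev \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null | while read -r bin; do grep -q -- "-F path=$bin " /etc/audit/audit.rules /etc/audit/rules.d/* 2>/dev/null || echo "no audit rule for $bin"; done' 2>/dev/null
done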
Fix
Apply the machine config by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 75-setuid-setgid-binaries-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/chage%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_chage_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/fusermount%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_fusermount_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/fusermount3%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_fusermount3_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/gpasswd%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_gpasswd_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/mount%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_mount_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/newgrp%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_newgrp_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/passwd%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_passwd_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/pkexec%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_pkexec_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/su%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_su_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/sudo%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_sudo_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/umount%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_bin_umount_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/lib/polkit-1/polkit-agent-helper-1%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_lib_polkit_helper_execution.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/libexec/dbus-1/dbus-daemon-launch-helper%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-dbus_daemon_launch_helper.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/libexec/sssd/krb5_child%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_libexec_sssd_krb5_child.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/libexec/sssd/ldap_child%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_libexec_sssd_ldap_child.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/libexec/sssd/proxy_child%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_libexec_sssd_proxy_child.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/libexec/sssd/selinux_child%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_libexec_sssd_selinux_child.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/grub2-set-bootflag%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-grub2_set_bootflag.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/mount.nfs%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_sbin_mount_nfs.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/pam_timestamp_check%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_sbin_pam_timestamp_check.rules
        overwrite: true
      - contents:
          source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/unix_chkpwd%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
        mode: 0644
        path: /etc/audit/rules.d/75-usr_sbin_unix_chkpwd.rules
        overwrite: true
" | oc apply -f -
done
OpenShift RBAC access controls must be enforced.
STIG ID:
CNTR-OS-000090 |
SRG: SRG-APP-000033-CTR-000090 |
Severity: high |
CCI: CCI-000213,CCI-000764,CCI-000770,CCI-001499,CCI-001774,CCI-001812,CCI-001813,CCI-002235 |
Vulnerability Id: V-257513 |
Vulnerability Discussion
Controlling and limiting users' access to system services and resources is key to securing the platform and limiting the intentional or unintentional compromising of the system and its services. OpenShift provides a robust RBAC policy system that allows authorization policies to be as detailed as needed. Additionally, there are two layers of RBAC policies. The first is cluster RBAC policies, with which administrators control who has what access to cluster-level services. The other is local RBAC policies, which allow project developers/administrators to control what level of access users have to a given project or namespace.
OpenShift provides a set of default roles out of the box, and additional roles may be added as needed. Each role has a set of rules controlling what access that role may have, and users and/or groups may be bound to one or more roles. The cluster-admin cluster level RBAC role has complete super admin privileges and it is a required role for select cluster administrators to have.
The OpenShift Container Platform includes a built-in image registry. The primary purpose is to allow users to create, import, and generally manage images running in the cluster. This registry is integrated with the authentication and authorization (RBAC) services on the cluster.
Restricting access permissions and providing access only to the necessary components and resources within the OpenShift environment reduces the potential impact of security breaches and unauthorized activities.
Satisfies: SRG-APP-000033-CTR-000090, SRG-APP-000033-CTR-000095, SRG-APP-000033-CTR-000100, SRG-APP-000133-CTR-000290, SRG-APP-000133-CTR-000295, SRG-APP-000133-CTR-000300, SRG-APP-000133-CTR-000305, SRG-APP-000133-CTR-000310, SRG-APP-000148-CTR-000350, SRG-APP-000153-CTR-000375, SRG-APP-000340-CTR-000770, SRG-APP-000378-CTR-000880, SRG-APP-000378-CTR-000885, SRG-APP-000378-CTR-000890, SRG-APP-000380-CTR-000900, SRG-APP-000386-CTR-000920
Check
The administrator must verify that OpenShift is configured with the necessary RBAC access controls.
Review the RBAC configuration.
As the cluster-admin, view the cluster roles and their associated rule sets by executing the following:
oc describe clusterrole.rbac
Now, view the current set of cluster role bindings, which shows the users and groups that are bound to various roles by executing the following:
oc describe clusterrolebinding.rbac
Local roles and bindings can be determined by executing the following:
oc describe rolebinding.rbac
If these results show users with privileged access that do not require that access, this is a finding.
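Note: To focus the review on the most privileged role, the following sketch lists every subject bound to cluster-admin (assumes jq is installed):
oc get clusterrolebindings -ojson | jq -r '.items[] | select(.roleRef.name == "cluster-admin") | .subjects[]? | "\(.kind): \(.name)"'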
Fix
If users or groups exist that are bound to roles they must not have, modify the user or group permissions using the following cluster and local role binding commands:
Remove a user from a Cluster RBAC role by executing the following:
oc adm policy remove-cluster-role-from-user <role> <username>
Remove a group from a Cluster RBAC role by executing the following:
oc adm policy remove-cluster-role-from-group <role> <groupname>
Remove a user from a Local RBAC role by executing the following:
oc adm policy remove-role-from-user <role> <username> -n <namespace>
Remove a group from a Local RBAC role by executing the following:
oc adm policy remove-role-from-group <role> <groupname> -n <namespace>
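For example, to remove a hypothetical user jdoe from the cluster-admin role and a hypothetical group dev-group from the local view role in project my-project (names are placeholders):
oc adm policy remove-cluster-role-from-user cluster-admin jdoe
oc adm policy remove-role-from-group view dev-group -n my-project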
Note: For additional information, refer to https://docs.openshift.com/container-platform/4.8/authentication/using-rbac.html.
OpenShift must enforce network policy on the namespace for controlling the flow of information within the container platform based on organization-defined information flow control policies.
STIG ID:
CNTR-OS-000100 |
SRG: SRG-APP-000038-CTR-000105 |
Severity: medium |
CCI: CCI-001368 |
Vulnerability Id: V-257514 |
Vulnerability Discussion
OpenShift provides several layers of protection to control the flow of information between the container platform components and user services. Each user project is given a separate namespace and OpenShift enforces RBAC policies controlling which projects and services users can access.
OpenShift forces the use of namespaces. Service accounts are a namespace resource as well, so they are segregated. RBAC policies apply to service accounts. In addition, Network Policies are used to control the flow of requests between containers hosted on the container platform.
It is important to define a default Network Policy on the namespace that will be applied automatically to new projects to prevent unintended requests. These policies can be updated by the project's administrator (with the appropriate RBAC permissions) to apply a policy that is appropriate to the service(s) within the project namespace.
Check
Verify that each user namespace has a Network Policy by executing the following:
for ns in $(oc get namespaces -ojson | jq -r '.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default") | .metadata.name '); do oc get networkpolicy -n$ns; done
If the above returns any lines saying "No resources found in <namespace> namespace.", this is a finding. Empty output is not a finding.
Fix
Add a Network Policy to an existing project namespace by performing the following steps:
1. Create <filename>.yaml and insert the desired Network Policy content. The following is an example Network Policy definition:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-same-namespace
  namespace: <namespace>
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
2. Apply the Network Policy definition to the project namespace by executing the following:
oc apply -f <filename>.yaml -n <namespace>
Details regarding the configuration of resource Network Policy can be reviewed at https://docs.openshift.com/container-platform/4.12/networking/network_policy/about-network-policy.html.
OpenShift must enforce approved authorizations for controlling the flow of information within the container platform based on organization-defined information flow control policies.
STIG ID:
CNTR-OS-000110 |
SRG: SRG-APP-000039-CTR-000110 |
Severity: medium |
CCI: CCI-001414 |
Vulnerability Id: V-257515 |
Vulnerability Discussion
OpenShift provides several layers of protection to control the flow of information between the container platform components and user services. Each user project is given a separate namespace and OpenShift enforces RBAC policies controlling which projects and services users can access. In addition, Network Policies are used to control the flow of requests to and from externally integrated services to services hosted on the container platform.
It is important to define a default Network Policy that will be applied automatically to new projects to prevent unintended requests. These policies can be updated by the project's administrator (with the appropriate RBAC permissions) to apply a policy that is appropriate to the service(s) within the project namespace.
Check
Check for Network Policy. Verify a default project template is defined by executing the following:
oc get project.config.openshift.io/cluster -o jsonpath="{.spec.projectRequestTemplate.name}"
If no project request template is in use by the project config, this is a finding.
Verify the project request template creates a Network Policy by executing the following:
oc get templates/<template_name> -n openshift-config -o jsonpath="{.objects[?(.kind=='NetworkPolicy')]}{'\n'}"
Replace <template_name> with the name of the project request template returned from the earlier query. If the project template is not defined, or there are no Network Policy definitions in it, this is a finding.
Fix
Configure a default network policy as necessary to protect the flow of information by performing the following steps:
1. Create a bootstrap project template (if not already created) by executing the following:
oc adm create-bootstrap-project-template -o yaml > template.yaml
2. Edit the template and add Network Policy object definitions before the parameters section. For example, the following section defines two policies: one to allow requests from the same namespace and one to allow requests from the OpenShift ingress routing service.
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-same-namespace
  spec:
    podSelector: {}
    ingress:
    - from:
      - podSelector: {}
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-from-openshift-ingress
  spec:
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            network.openshift.io/policy-group: ingress
    podSelector: {}
    policyTypes:
    - Ingress
parameters:
3. Apply the project template to the cluster by executing the following:
oc create -f template.yaml -n openshift-config
4. Set the default cluster project request template by executing the following:
oc patch project.config.openshift.io/cluster --type=merge -p '{"spec":{"projectRequestTemplate":{"name": "<template_name>"}}}'
For additional information regarding network policies, refer to https://docs.openshift.com/container-platform/4.8/networking/network_policy/about-network-policy.html.
OpenShift must display the Standard Mandatory DOD Notice and Consent Banner before granting access to platform components.
STIG ID:
CNTR-OS-000130 |
SRG: SRG-APP-000068-CTR-000120 |
Severity: low |
CCI: CCI-000048 |
Vulnerability Id: V-257516 |
Vulnerability Discussion
OpenShift has countless components where different access levels are needed. To control access, the user must first log into the component and then be presented with a DOD-approved use notification banner before granting access to the component. This guarantees privacy and security notification verbiage used is consistent with applicable federal laws, Executive Orders, directives, policies, regulations, standards, and guidance.
Check
To verify the OpenShift CLI tool is configured to display the DOD Notice and Consent Banner, do either of the following steps:
Log in to OpenShift using the oc CLI tool.
oc login -u <username>
Enter the password when prompted.
If the DOD Notice and Consent Banner is not displayed, this is a finding.
Or
Verify that motd config map exists and contains the DOD Notice and Consent Banner by executing the following:
oc describe configmap/motd -n openshift
If the configmap does not exist, or it does not contain the DOD Notice and Consent Banner text in the message data attribute, this is a finding.
Fix
Create a configmap that displays the DOD Notice and Consent Banner when logging in using the OpenShift CLI tool by executing the following:
echo 'apiVersion: v1
kind: ConfigMap
metadata:
  name: motd
  namespace: openshift
data:
  message: "You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only. By using this IS (which includes any device attached to this IS), you consent to the following conditions:\n\n-The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.\n\n-At any time, the USG may inspect and seize data stored on this IS.\n\n-Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.\n\n-This IS includes security measures (e.g., authentication and access controls) to protect USG interests--not for your personal benefit or privacy.\n\n-Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details."' | oc apply -f -
OpenShift must generate audit records for all DOD-defined auditable events within all components in the platform.
STIG ID:
CNTR-OS-000150 |
SRG: SRG-APP-000089-CTR-000150 |
Severity: medium |
CCI: CCI-000135,CCI-000169,CCI-000171,CCI-000172,CCI-000366 |
Vulnerability Id: V-257517 |
Vulnerability Discussion
The OpenShift Platform supports three audit levels: Default, WriteRequestBodies, and AllRequestBodies. The identities of the users are logged for all three audit levels. The WriteRequestBodies level will log the metadata and the request body for any create, update, or patch request. The AllRequestBodies level will log the metadata and the request body for all read and write requests. As this generates a significant number of logs, this level is only to be used as needed. To capture sufficient data to investigate an issue, it is required to set the audit level to WriteRequestBodies.
For more detailed documentation on what is being logged, refer to https://docs.openshift.com/container-platform/4.8/security/audit-log-view.html.
Satisfies: SRG-APP-000089-CTR-000150, SRG-APP-000090-CTR-000155, SRG-APP-000101-CTR-000205, SRG-APP-000510-CTR-001310, SRG-APP-000516-CTR-000790
Check
To determine the level at which the OpenShift audit policy logging verbosity is configured, as a cluster administrator, execute the following command:
oc get apiserver.config.openshift.io/cluster -ojsonpath='{.spec.audit.profile}'
If the output from the options does not return WriteRequestBodies or AllRequestBodies, this is a finding.
Fix
As the cluster administrator, update the APIServer.config.openshift.io/cluster object to set the profile to the defined level of detail. For example, to configure the profile to WriteRequestBodies, which logs all write requests to any API server object in their entirety, execute the following:
oc patch apiserver.config.openshift.io/cluster --type=merge -p '{"spec": {"audit": {"profile": "WriteRequestBodies"}}}'
OpenShift must generate audit records when successful/unsuccessful attempts to access privileges occur.
STIG ID:
CNTR-OS-000160 |
SRG: SRG-APP-000091-CTR-000160 |
Severity: medium |
CCI: CCI-000172 |
Vulnerability Id: V-257518 |
Vulnerability Discussion
OpenShift and its components must generate audit records when successful/unsuccessful attempts to access or delete security objects, security levels, and privileges occur.
All the components must use the same standard so that the events can be tied together to understand what took place within the overall container platform. This makes it possible to establish and correlate the events relating to an incident, assist with its investigation, and identify those responsible.
Without audit record generation, access control levels can be accessed by unauthorized users for malicious intent without detection, creating vulnerabilities within the container platform.
Satisfies: SRG-APP-000091-CTR-000160, SRG-APP-000492-CTR-001220, SRG-APP-000493-CTR-001225, SRG-APP-000494-CTR-001230, SRG-APP-000500-CTR-001260, SRG-APP-000507-CTR-001295
Check
Verify OpenShift is configured to generate audit records when successful/unsuccessful attempts to access or delete security objects, security levels, and privileges occur by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e "key=perm_mod" -e "key=unsuccessful-create" -e "key=unsuccessful-modification" -e "key=unsuccessful-access" /etc/audit/audit.rules|| echo "not found"' 2>/dev/null; done
Confirm the following rules exist on each node:
-a always,exit -F arch=b32 -S fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S setxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S setxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S open -F a1&0x40 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S open -F a1&0x40 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S openat,open_by_handle_at -F a2&0x40 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S openat,open_by_handle_at -F a2&0x40 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S open -F a1&0x40 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S open -F a1&0x40 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S openat,open_by_handle_at -F a2&0x40 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S openat,open_by_handle_at -F a2&0x40 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S open -F a1&0x203 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S open -F a1&0x203 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S openat,open_by_handle_at -F a2&0x203 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S openat,open_by_handle_at -F a2&0x203 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S open -F a1&0x203 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S open -F a1&0x203 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S openat,open_by_handle_at -F a2&0x203 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S openat,open_by_handle_at -F a2&0x203 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-access
-a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-access
-a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-access
-a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-access
-a always,exit -F arch=b32 -S creat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S creat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S creat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S creat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S truncate,ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S truncate,ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S truncate,ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S truncate,ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
On each node, if the above rules are not listed, or the return is "not found", this is a finding.
Fix
Apply the machine config to generate audit records by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-access-privileges-rules-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20chmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20chmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-chmod_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20chown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20chown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-chown_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchmod_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchmodat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchmodat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchmodat_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchown_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchownat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchownat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchownat_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fremovexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fsetxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lchown_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lremovexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lsetxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-removexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20setxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20setxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-setxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,%23%23%20This%20content%20is%20a%20section%20of%20an%20Audit%20config%20snapshot%20recommended%20for%20Red%2520Hat%2520Enterprise%2520Linux%2520CoreOS%25204%20systems%20that%20target%20OSPP%20compliance.%0A%23%23%20The%20following%20content%20has%20been%20retreived%20on%202019-03-11%20from%3A%20https%3A//github.com/linux-audit/audit-userspace/blob/master/rules/30-ospp-v42.rules%0A%0A%23%23%20The%20purpose%20of%20these%20rules%20is%20to%20meet%20the%20requirements%20for%20Operating%0A%23%23%20System%20Protection%20Profile%20%28OSPP%29v4.2.%20These%20rules%20depends%20on%20having%0A%23%23%2010-base-config.rules%2C%2011-loginuid.rules%2C%20and%2043-module-load.rules%20installed.%0A%0A%23%23%20Unsuccessful%20file%20creation%20%28open%20with%20O_CREAT%29%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20openat%2Copen_by_handle_at%20-F%20a2%260100%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20openat%2Copen_by_handle_at%20-F%20a2%260100%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%20-F%20a1%260100%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%20-F%20a1%260100%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20openat%2Copen_by_handle_at%20-F%20a2%260100%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20openat%2Copen_by_handle_at%20-F%20a2%260100%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%20-F%20a1%260100%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%20-F%20a1%260100%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20creat%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20creat%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20creat%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20creat%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A%0A%23%23%20Unsuccessful%20file%20modifications%20%28open%20for%20write%20or%20truncate%29%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20openat%2Copen_by_handle_at%20-F%20a2%2601003%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20openat%2Copen_by_handle_at%20-F%20a2%2601003%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%20-F%20a1%2601003%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%20-F%20a1%2601003%20-F%20exit%3D-E
ACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20openat%2Copen_by_handle_at%20-F%20a2%2601003%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20openat%2Copen_by_handle_at%20-F%20a2%2601003%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%20-F%20a1%2601003%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%20-F%20a1%2601003%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20truncate%2Cftruncate%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20truncate%2Cftruncate%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20truncate%2Cftruncate%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20truncate%2Cftruncate%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A%0A%23%23%20Unsuccessful%20file%20access%20%28any%20other%20opens%29%20This%20has%20to%20go%20last.%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%2Ccreat%2Ctruncate%2Cftruncate%2Copenat%2Copen_by_handle_at%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-access%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%2Ccreat%2Ctruncate%2Cftruncate%2Copenat%2Copen_by_handle_at%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-access%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%2Ccreat%2Ctruncate%2Cftruncate%2Copenat%2Copen_by_handle_at%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-access%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%2Ccreat%2Ctruncate%2Cftruncate%2Copenat%2Copen_by_handle_at%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-access%0A
mode: 0644
path: /etc/audit/rules.d/30-ospp-v42-remediation.rules
overwrite: true
" | oc apply -f -
done
Red Hat Enterprise Linux CoreOS (RHCOS) must initiate session audits at system startup.
STIG ID:
CNTR-OS-000170 |
SRG: SRG-APP-000092-CTR-000165 |
Severity: high |
CCI: CCI-001464 |
Vulnerability Id: V-257519 |
Vulnerability Discussion
Initiating session audits at system startup allows for comprehensive monitoring of user activities and system events from the moment the system is powered on. Audit logs capture information about login attempts, commands executed, file access, and other system activities. By starting session audits at system startup, RHCOS ensures that all relevant events are recorded, providing a complete security monitoring solution. Some audit systems also maintain state information only available if auditing is enabled before a given process is created.
By initiating session audits at system startup, RHCOS enhances security monitoring, aids in timely incident detection and response, meets compliance requirements, facilitates forensic analysis, and promotes accountability and governance.
Check
Verify the RHCOS boot loader configuration has audit enabled, including backlog:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep audit /boot/loader/entries/*.conf || echo "not found"' 2>/dev/null; done
If "audit" is not set to "1" or returns "not found", this is a finding.
If "audit_backlog" is not set to 8192 or returns "not found", this is a finding.
Fix
Apply the machine config by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 05-kernelarg-audit-enabled-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
kernelArguments:
- audit=1
- audit_backlog_limit=8192
" | oc create -f -
done
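Note: Applying a MachineConfig triggers a rolling reboot of the nodes in the affected pools. As an illustrative way to watch the rollout, the machine config pool status can be monitored, for example:
oc get mcp
The fix is fully applied once the UPDATED column reports "True" and the UPDATING and DEGRADED columns report "False" for each pool.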
All audit records must identify what type of event has occurred within OpenShift.
STIG ID:
CNTR-OS-000180 |
SRG: SRG-APP-000095-CTR-000170 |
Severity: medium |
CCI: CCI-000130,CCI-000172,CCI-002884 |
Vulnerability Id: V-257520 |
Vulnerability Discussion
Within the container platform, audit data can be generated from any of the deployed container platform components. This audit data is important when there are issues such as security incidents that must be investigated. Identifying the type of event in audit records helps classify and categorize different activities or actions within OpenShift. This classification allows for easier analysis, reporting, and filtering of audit logs based on specific event types. It helps distinguish between user actions, system events, policy violations, or security incidents, providing a clearer understanding of the activities occurring within the platform.
Satisfies: SRG-APP-000095-CTR-000170, SRG-APP-000409-CTR-000990, SRG-APP-000508-CTR-001300, SRG-APP-000510-CTR-001310
Check
Verify the audit service is enabled and active by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; systemctl is-enabled auditd.service; systemctl is-active auditd.service' 2>/dev/null; done
If the auditd service is not "enabled" and "active", this is a finding.
Fix
Apply the machine config setting auditd to active and enabled by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 80-auditd-service-enable-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
systemd:
units:
- name: auditd.service
enabled: true
" | oc apply -f -
done
OpenShift audit records must have a date and time association with all events.
STIG ID:
CNTR-OS-000190 |
SRG: SRG-APP-000096-CTR-000175 |
Severity: medium |
CCI: CCI-000131 |
Vulnerability Id: V-257521 |
Vulnerability Discussion
Within the container platform, audit data can be generated from any of the deployed container platform components. This audit data is important when there are issues, such as security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to know when the event occurred. To establish the time of the event, the audit record must contain the date and time.
Check
1. Verify Red Hat Enterprise Linux CoreOS (RHCOS) Audit Daemon is configured to resolve audit information before writing to disk by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep "log_format" /etc/audit/auditd.conf' 2>/dev/null; done
If the "log_format" option is not "ENRICHED", or the line is missing or commented out, this is a finding.
2. Verify RHCOS takes the appropriate action when an audit processing failure occurs by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep disk_error_action /etc/audit/auditd.conf' 2>/dev/null; done
If the value of the "disk_error_action" option is not "SYSLOG", "SINGLE", or "HALT", or the line is missing or commented out, ask the system administrator to indicate how the system takes appropriate action when an audit process failure occurs. If there is no evidence of appropriate action, this is a finding.
3. Verify the SA and ISSO (at a minimum) are notified when the audit storage volume is full.
Check which action RHCOS takes when the audit storage volume is full by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep max_log_file_action /etc/audit/auditd.conf' 2>/dev/null; done
If the value of the "max_log_file_action" option is set to "ignore", "rotate", or "suspend", or the line is missing or commented out, ask the system administrator to indicate how the system takes appropriate action when an audit storage volume is full. If there is no evidence of appropriate action, this is a finding.
Fix
Apply the machine config by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 70-auditd-conf-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,%23%0A%23%20This%20file%20controls%20the%20configuration%20of%20the%20audit%20daemon%0A%23%0A%0Alocal_events%20%3D%20yes%0Awrite_logs%20%3D%20yes%0Alog_file%20%3D%20%2Fvar%2Flog%2Faudit%2Faudit.log%0Alog_group%20%3D%20root%0Alog_format%20%3D%20ENRICHED%0Aflush%20%3D%20incremental_async%0Afreq%20%3D%2050%0Amax_log_file%20%3D%206%0Anum_logs%20%3D%205%0Apriority_boost%20%3D%204%0Aname_format%20%3D%20hostname%0A%23%23name%20%3D%20mydomain%0Amax_log_file_action%20%3D%20syslog%0Aspace_left%20%3D%2025%25%0Aspace_left_action%20%3D%20syslog%0Averify_email%20%3D%20yes%0Aaction_mail_acct%20%3D%20root%0Aadmin_space_left%20%3D%2050%0Aadmin_space_left_action%20%3D%20syslog%0Adisk_full_action%20%3D%20HALT%0Adisk_error_action%20%3D%20HALT%0Ause_libwrap%20%3D%20yes%0A%23%23tcp_listen_port%20%3D%2060%0Atcp_listen_queue%20%3D%205%0Atcp_max_per_addr%20%3D%201%0A%23%23tcp_client_ports%20%3D%201024-65535%0Atcp_client_max_idle%20%3D%200%0Atransport%20%3D%20TCP%0Akrb5_principal%20%3D%20auditd%0A%23%23krb5_key_file%20%3D%20%2Fetc%2Faudit%2Faudit.key%0Adistribute_network%20%3D%20no%0Aq_depth%20%3D%20400%0Aoverflow_action%20%3D%20syslog%0Amax_restarts%20%3D%2010%0Aplugin_dir%20%3D%20%2Fetc%2Faudit%2Fplugins.d%0A
mode: 0640
path: /etc/audit/auditd.conf
overwrite: true
" | oc apply -f -
done
All audit records must generate the event results within OpenShift.
STIG ID:
CNTR-OS-000200 |
SRG: SRG-APP-000099-CTR-000190 |
Severity: medium |
CCI: CCI-000132,CCI-000133,CCI-000134,CCI-000140,CCI-001487,CCI-001496,CCI-001849 |
Vulnerability Id: V-257522 |
Vulnerability Discussion
Within the container platform, audit data can be generated from any of the deployed container platform components. Since the audit data may be part of a larger audit system, it is important for the audit data to also include the container platform name for traceability back to the container platform itself and not just the container platform components. This audit data is important when there are issues, such as security incidents, that must be investigated. To make the audit data worthwhile for the investigation of events, it is necessary to know the outcome of the event.
Protecting the integrity of the tools used for auditing purposes is a critical step to ensuring the integrity of audit data. Audit data includes all information (e.g., audit records, audit settings, and audit reports) needed to successfully audit information system activity.
Audit tools include, but are not limited to, vendor-provided and open source audit tools needed to successfully view and manipulate audit information system activity and records. Audit tools include custom queries and report generators.
It is common for attackers to replace the audit tools or inject code into the existing tools with the purpose of providing the capability to hide or erase system activity from the audit logs.
To address this risk, audit tools must be cryptographically signed in order to provide the capability to identify when the audit tools have been modified, manipulated, or replaced. An example is a checksum hash of the file or files.
Satisfies: SRG-APP-000099-CTR-000190, SRG-APP-000097-CTR-000180, SRG-APP-000098-CTR-000185, SRG-APP-000100-CTR-000195, SRG-APP-000100-CTR-000200, SRG-APP-000109-CTR-000215, SRG-APP-000290-CTR-000670, SRG-APP-000357-CTR-000800
Check
1. Verify Red Hat Enterprise Linux CoreOS (RHCOS) Audit Daemon is configured to resolve audit information before writing to disk, by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep "log_format" /etc/audit/auditd.conf' 2>/dev/null; done
If the "log_format" option is not "ENRICHED", or the line is missing or commented out, this is a finding.
2. Verify RHCOS takes the appropriate action when an audit processing failure occurs by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep disk_error_action /etc/audit/auditd.conf' 2>/dev/null; done
If the value of the "disk_error_action" option is not "SYSLOG", "SINGLE", or "HALT", or the line is missing, or commented out, ask the system administrator to indicate how the system takes appropriate action when an audit process failure occurs. If there is no evidence of appropriate action, this is a finding.
3. Verify the SA and ISSO (at a minimum) are notified when the audit storage volume is full.
Check which action RHCOS takes when the audit storage volume is full by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep max_log_file_action /etc/audit/auditd.conf' 2>/dev/null; done
If the value of the "max_log_file_action" option is set to "ignore", "rotate", or "suspend", or the line is missing or commented out, ask the system administrator to indicate how the system takes appropriate action when an audit storage volume is full. If there is no evidence of appropriate action, this is a finding.
Fix
Apply the machine config by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 70-auditd-conf-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,%23%0A%23%20This%20file%20controls%20the%20configuration%20of%20the%20audit%20daemon%0A%23%0A%0Alocal_events%20%3D%20yes%0Awrite_logs%20%3D%20yes%0Alog_file%20%3D%20%2Fvar%2Flog%2Faudit%2Faudit.log%0Alog_group%20%3D%20root%0Alog_format%20%3D%20ENRICHED%0Aflush%20%3D%20incremental_async%0Afreq%20%3D%2050%0Amax_log_file%20%3D%206%0Anum_logs%20%3D%205%0Apriority_boost%20%3D%204%0Aname_format%20%3D%20hostname%0A%23%23name%20%3D%20mydomain%0Amax_log_file_action%20%3D%20syslog%0Aspace_left%20%3D%2025%25%0Aspace_left_action%20%3D%20syslog%0Averify_email%20%3D%20yes%0Aaction_mail_acct%20%3D%20root%0Aadmin_space_left%20%3D%2050%0Aadmin_space_left_action%20%3D%20syslog%0Adisk_full_action%20%3D%20HALT%0Adisk_error_action%20%3D%20HALT%0Ause_libwrap%20%3D%20yes%0A%23%23tcp_listen_port%20%3D%2060%0Atcp_listen_queue%20%3D%205%0Atcp_max_per_addr%20%3D%201%0A%23%23tcp_client_ports%20%3D%201024-65535%0Atcp_client_max_idle%20%3D%200%0Atransport%20%3D%20TCP%0Akrb5_principal%20%3D%20auditd%0A%23%23krb5_key_file%20%3D%20%2Fetc%2Faudit%2Faudit.key%0Adistribute_network%20%3D%20no%0Aq_depth%20%3D%20400%0Aoverflow_action%20%3D%20syslog%0Amax_restarts%20%3D%2010%0Aplugin_dir%20%3D%20%2Fetc%2Faudit%2Fplugins.d%0A
mode: 0640
path: /etc/audit/auditd.conf
overwrite: true
" | oc apply -f -
done
OpenShift must take appropriate action upon an audit failure.
STIG ID:
CNTR-OS-000210 |
SRG: SRG-APP-000109-CTR-000215 |
Severity: medium |
CCI: CCI-000140 |
Vulnerability Id: V-257523 |
Vulnerability Discussion
It is critical that when the container platform is at risk of failing to process audit logs as required that it takes action to mitigate the failure. Audit processing failures include software/hardware errors, failures in the audit capturing mechanisms, and audit storage capacity being reached or exceeded. Responses to audit failure depend upon the nature of the failure mode.
Because the availability of the services provided by the container platform must be preserved, the approved actions in response to an audit failure are as follows:
(i) If the failure was caused by the lack of audit record storage capacity, the container platform must continue generating audit records if possible (automatically restarting the audit service if necessary), overwriting the oldest audit records in a first-in-first-out manner.
(ii) If audit records are sent to a centralized collection server and communication with this server is lost or the server fails, the container platform must queue audit records locally until communication is restored or until the audit records are retrieved manually. Upon restoration of the connection to the centralized collection server, action must be taken to synchronize the local audit data with the collection server.
Check
Verify there is a Prometheus rule to watch for audit events by executing the following:
oc get prometheusrule -o yaml --all-namespaces | grep apiserver_audit
Output:
sum by (apiserver,instance)(rate(apiserver_audit_error_total{apiserver=~".+-apiserver"}[5m])) / sum by (apiserver,instance) (rate(apiserver_audit_event_total{apiserver=~".+-apiserver"}[5m])) > 0
If the output above is not displayed, this is a finding.
Fix
Apply the following Prometheus rule by executing the following:
oc apply -f - << 'EOF'
---
# platform = multi_platform_ocp
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: audit-errors
namespace: openshift-kube-apiserver
spec:
groups:
- name: apiserver-audit
rules:
- alert: AuditLogError
annotations:
summary: |-
An API Server instance was unable to write audit logs. This could be
triggered by the node running out of space, or a malicious actor
tampering with the audit logs.
description: An API Server had an error writing to an audit log.
expr: |
sum by (apiserver,instance)(rate(apiserver_audit_error_total{apiserver=~".+-apiserver"}[5m])) / sum by (apiserver,instance) (rate(apiserver_audit_event_total{apiserver=~".+-apiserver"}[5m])) > 0
for: 1m
labels:
severity: warning
EOF
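Note: As an illustrative follow-up, creation of the rule can be confirmed by querying it directly (the name and namespace below match the manifest above):
oc get prometheusrule audit-errors -n openshift-kube-apiserver
If the object is returned, the alerting rule is in place.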
OpenShift components must provide the ability to send audit logs to a central enterprise repository for review and analysis.
STIG ID:
CNTR-OS-000220 |
SRG: SRG-APP-000111-CTR-000220 |
Severity: medium |
CCI: CCI-000154,CCI-001464,CCI-001851 |
Vulnerability Id: V-257524 |
Vulnerability Discussion
Sending audit logs to a central enterprise repository allows for centralized log management. Instead of scattered logs across multiple OpenShift components, having a centralized repository simplifies log storage, retention, and retrieval. It provides a single source of truth for audit logs, making it easier to manage and analyze log data.
Centralized audit logs are crucial for incident response and forensic investigations. When a security incident occurs, having audit logs in a central repository allows security teams to quickly access relevant log data for analysis. It facilitates incident reconstruction, root cause analysis, and the identification of the scope and impact of the incident. This is vital for effective incident response and minimizing the impact of security breaches.
Satisfies: SRG-APP-000111-CTR-000220, SRG-APP-000092-CTR-000165, SRG-APP-000358-CTR-000805
Check
Determine if cluster log forwarding is configured.
1. Verify the cluster-logging operator is installed by executing the following:
oc get subscription/cluster-logging -n openshift-logging
(Example Output:
NAME PACKAGE SOURCE CHANNEL
cluster-logging cluster-logging redhat-operators stable
)
If the cluster-logging operator is not present, this is a finding.
2. List the cluster log forwarders defined by executing the following:
oc get clusterlogforwarder -n openshift-logging
If there are no clusterlogforwarders defined, this is a finding.
3. For each cluster log forwarder listed above, view the configuration details by executing the following:
oc describe clusterlogforwarder/<clusterlogforwarder_name> -n openshift-logging
Review the details of the cluster log forwarder.
If the configuration is not set to forward logs to the organization's centralized logging service, this is a finding.
Fix
To configure log forwarding, the OpenShift Cluster Logging operator first must be installed, and then the Cluster Log Forwarder is configured to forward logs to a centralized log aggregation service.
To install the OpenShift Cluster Logging operator, execute the following command to apply the subscription manifests to the cluster:
oc apply -f - << 'EOF'
---
apiVersion: project.openshift.io/v1
kind: Project
metadata:
labels:
kubernetes.io/metadata.name: openshift-logging
openshift.io/cluster-monitoring: "true"
name: openshift-logging
spec: {}
...
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: openshift-logging
namespace: openshift-logging
spec:
targetNamespaces:
- openshift-logging
...
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
labels:
operators.coreos.com/cluster-logging.openshift-logging: ""
name: cluster-logging
namespace: openshift-logging
spec:
channel: stable
installPlanApproval: Automatic
name: cluster-logging
source: redhat-operators
sourceNamespace: openshift-marketplace
...
EOF
After the OpenShift Logging operator has finished installing, a ClusterLogForwarder instance can be created to forward cluster logs to a log aggregator. The example below forwards OpenShift audit, application, and infrastructure logs to a separately managed rsyslog server, using mTLS authentication over TCP for audit logs and traditional UDP for the other log types. Edit the appropriate values in the Secret resource below, change the "url" parameters in the "outputs" section of the "spec" as needed, and then apply the configuration by executing the following (Example):
oc apply -f - << 'EOF'
---
apiVersion: v1
kind: Secret
metadata:
name: rsyslog-tls-secret
namespace: openshift-logging
data:
tls.crt: <base64-encoded client certificate>
tls.key: <base64-encoded client private key>
ca-bundle.crt: <base64-encoded CA bundle>
...
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
outputs:
- name: rsyslog-audit
type: syslog
syslog:
facility: security
rfc: RFC5424
severity: Informational
appName: openshift
msgID: audit
procID: audit
url: 'tls://rsyslogserver.example.com:514'
secret:
name: rsyslog-tls-secret
- name: rsyslog-apps
type: syslog
syslog:
facility: user
rfc: RFC5424
severity: Informational
appName: openshift
msgID: apps
procID: apps
url: 'udp://rsyslogserver.example.com:514'
- name: rsyslog-infra
type: syslog
syslog:
facility: local0
rfc: RFC5424
severity: Informational
appName: openshift
msgID: infra
procID: infra
url: 'udp://rsyslogserver.example.com:514'
pipelines:
- name: audit-logs
inputRefs:
- audit
outputRefs:
- rsyslog-audit
- name: apps-logs
inputRefs:
- application
outputRefs:
- rsyslog-apps
- name: infrastructure-logs
inputRefs:
- infrastructure
outputRefs:
- rsyslog-infra
...
EOF
Note that many log forwarding destinations are supported, and the fix does not require that users forward audit logs to rsyslog over mTLS. To better understand how to configure the ClusterLogForwarder, consult the OpenShift Logging documentation:
https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-external.html
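Note: As an illustrative verification after applying the ClusterLogForwarder, confirm that the logging collector pods are running and that the forwarder instance exists (the name "instance" matches the example manifest above):
oc get pods -n openshift-logging
oc get clusterlogforwarder instance -n openshift-logging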
OpenShift must use internal system clocks to generate audit record time stamps.
STIG ID:
CNTR-OS-000230 |
SRG: SRG-APP-000116-CTR-000235 |
Severity: medium |
CCI: CCI-000159 |
Vulnerability Id: V-257525 |
Vulnerability Discussion
Knowing when a sequence of events for an incident occurred is crucial to understand what may have taken place. Without a common clock, the components generating audit events could be out of synchronization and would then present a picture of the event that is warped and corrupted. To give a clear picture, it is important that the container platform and its components use a common internal clock.
Check
Verify the chronyd service is enabled and active by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; systemctl is-enabled chronyd.service; systemctl is-active chronyd.service' 2>/dev/null; done
If the chronyd service is not "enabled" and "active", this is a finding.
Fix
Apply the machine config to use internal system clocks for audit records by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 80-chronyd-service-enable-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
systemd:
units:
- name: chronyd.service
enabled: true
" | oc apply -f -
done
The Red Hat Enterprise Linux CoreOS (RHCOS) chrony Daemon must use multiple NTP servers to generate audit record time stamps.
STIG ID:
CNTR-OS-000240 |
SRG: SRG-APP-000116-CTR-000235 |
Severity: medium |
CCI: CCI-000159 |
Vulnerability Id: V-257526 |
Vulnerability Discussion
Utilizing multiple NTP servers for the chrony daemon in RHCOS ensures accurate and reliable audit record timestamps. It improves time synchronization, mitigates time drift, provides redundancy, and enhances resilience against attacks. Knowing when a sequence of events for an incident occurred is crucial to understand what may have taken place. Without a common clock, the components generating audit events could be out of synchronization and would then present a picture of the event that is warped and corrupted. To give a clear picture, it is important that the container platform and its components use a common internal clock.
Check
Verify Red Hat Enterprise Linux CoreOS (RHCOS) chrony Daemon is configured to use multiple NTP servers by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep "server" /etc/chrony.d/*' 2>/dev/null; done
(Sample output: server <ntp_server_1> minpoll 4 maxpoll 10
server <ntp_server_2> minpoll 4 maxpoll 10)
If multiple NTP servers are not configured, this is a finding.
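Note: As a supplementary check of what chrony is actually using at runtime, the configured time sources can be listed on each node, for example:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo "$HOSTNAME"; chronyc -n sources' 2>/dev/null; done
Multiple reachable sources are expected in the output.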
Fix
Apply the machine config by executing the following, replacing the variables in the MachineConfig with organizationally-defined NTP servers.
for mcpool in $(oc get mcp -oname | sed ""s:.*/::"" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 90-chrony-ntp-servers-set-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,%23%20Allow%20for%20extra%20configuration%20files.%20This%20is%20useful%0A%23%20for%20admins%20specifying%20their%20own%20NTP%20servers%0Ainclude%20%2Fetc%2Fchrony.d%2F%2A.conf%0A%0A%23%20Set%20chronyd%20as%20client-only.%0Aport%200%0A%0A%23%20Disable%20chronyc%20from%20the%20network%0Acmdport%200%0A%0A%23%20Record%20the%20rate%20at%20which%20the%20system%20clock%20gains%2Flosses%20time.%0Adriftfile%20%2Fvar%2Flib%2Fchrony%2Fdrift%0A%0A%23%20Allow%20the%20system%20clock%20to%20be%20stepped%20in%20the%20first%20three%20updates%0A%23%20if%20its%20offset%20is%20larger%20than%201%20second.%0Amakestep%201.0%203%0A%0A%23%20Enable%20kernel%20synchronization%20of%20the%20real-time%20clock%20%28RTC%29.%0Artcsync%0A%0A%23%20Enable%20hardware%20timestamping%20on%20all%20interfaces%20that%20support%20it.%0A%23hwtimestamp%20%2A%0A%0A%23%20Increase%20the%20minimum%20number%20of%20selectable%20sources%20required%20to%20adjust%0A%23%20the%20system%20clock.%0A%23minsources%202%0A%0A%23%20Allow%20NTP%20client%20access%20from%20local%20network.%0A%23allow%20192.168.0.0%2F16%0A%0A%23%20Serve%20time%20even%20if%20not%20synchronized%20to%20a%20time%20source.%0A%23local%20stratum%2010%0A%0A%23%20Require%20authentication%20%28nts%20or%20key%20option%29%20for%20all%20NTP%20sources.%0A%23authselectmode%20require%0A%0A%23%20Specify%20file%20containing%20keys%20for%20NTP%20authentication.%0Akeyfile%20%2Fetc%2Fchrony.keys%0A%0A%23%20Insert%2Fdelete%20leap%20seconds%20by%20slewing%20instead%20of%20stepping.%0A%23leapsecmode%20slew%0A%0A%23%20Get%20TAI-UTC%20offset%20and%20leap%20seconds%20from%20the%20system%20tz%20database.%0Aleapsectz%20right%2FUTC%0A%0A%23%20Specify%20directory%20for%20log%20files.%0Alogdir%20%2Fvar%2Flog%2Fchrony%0A%0A%23%20Select%20which%20information%20is%20logged.%0A%23log%20measurements%20statistics%20tracking
mode: 420
overwrite: true
path: /etc/chrony.conf
- contents:
source: data:,
mode: 420
overwrite: true
path: /etc/chrony.d/.mco-keep
- contents:
source: data:,%23%0A%23%20This%20file%20controls%20the%20configuration%20of%20the%20ntp%20server%0A%23%200.pool.ntp.org%2C1.pool.ntp.org%2C2.pool.ntp.org%2C3.pool.ntp.org%20we%20have%20to%20put%20variable%20array%20name%20here%20for%20mutilines%20remediation%20%0A%0Aserver%20%20minpoll%204%20maxpoll%2010%0Aserver%20%20minpoll%204%20maxpoll%2010%0Aserver%20%20minpoll%204%20maxpoll%2010%0Aserver%20%20minpoll%204%20maxpoll%2010%0A
mode: 420
overwrite: true
path: /etc/chrony.d/ntp-server.conf
" | oc apply -f -
done
OpenShift must protect audit logs from any type of unauthorized access.
STIG ID:
CNTR-OS-000250 |
SRG: SRG-APP-000118-CTR-000240 |
Severity: medium |
CCI: CCI-000162 |
Vulnerability Id: V-257527 |
Vulnerability Discussion
If audit data were to become compromised, then competent forensic analysis and discovery of the true source of potentially malicious system activity is difficult, if not impossible, to achieve. In addition, access to audit records provides information an attacker could potentially use to their advantage.
To ensure the veracity of audit data, the information system and/or the application must protect audit information from all unauthorized access. This includes read, write, and copy access.
This requirement can be achieved through multiple methods, which will depend upon system architecture and design. Commonly employed methods for protecting audit information include least privilege permissions as well as restricting the location and number of log file repositories.
Additionally, applications with user interfaces to audit records must not allow for the unfettered manipulation of or access to those records via the application. If the application provides access to the audit data, the application becomes accountable for ensuring audit information is protected from unauthorized access.
Audit information includes all information (e.g., audit records, audit settings, and audit reports) needed to successfully audit information system activity.
Check
Verify the audit logs have a mode of "0600" by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; stat -c "%a %n" /var/log/audit/audit.log' 2>/dev/null; done
(Sample Output: 600 /var/log/audit/audit.log)
If the audit log has a mode more permissive than "0600", this is a finding.
Determine if the audit log is owned by "root" by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; ls -l /var/log/audit/audit.log' 2>/dev/null; done
(Sample Output: -rw------- 2 root root 23 Jun 11 11:56 /var/log/audit/audit.log)
If the audit log is not owned by "root", this is a finding.
Verify the audit log directory is group-owned by "root" to prevent unauthorized read access by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; ls -ld /var/log/audit' 2>/dev/null; done
(Sample Output: drwx------ 2 root root 23 Jun 11 11:56 /var/log/audit)
If the audit log directory is not group-owned by "root", this is a finding.
Verify the audit log directories have a mode of "0700" by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; stat -c "%a %n" /var/log/audit' 2>/dev/null; done
(Sample Output: 700 /var/log/audit)
If the audit log directory has a mode more permissive than "0700", this is a finding.
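Note: The four properties above can also be summarized in one supplementary command, for example:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; stat -c "%a %U %G %n" /var/log/audit /var/log/audit/audit.log' 2>/dev/null; done
Expected output per node is "700 root root /var/log/audit" and "600 root root /var/log/audit/audit.log".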
Fix
Correct permissions (audit logs have a mode of "0600") by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; chmod 600 /var/log/audit/audit.log' 2>/dev/null; done
Correct permissions (audit log is owned by "root") by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; chown root:root /var/log/audit/audit.log' 2>/dev/null; done
Correct permissions (audit log directory is group-owned by "root") by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; chown root:root /var/log/audit' 2>/dev/null; done
Correct permissions (audit log directories have a mode of "0700") by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; chmod 700 /var/log/audit' 2>/dev/null; done
OpenShift must protect system journal file from any type of unauthorized access by setting file permissions.
STIG ID:
CNTR-OS-000260 |
SRG: SRG-APP-000118-CTR-000240 |
Severity: medium |
CCI: CCI-000162 |
Vulnerability Id: V-257528 |
Vulnerability Discussion
It is a fundamental security practice to enforce the principle of least privilege, where only the necessary permissions are granted to authorized entities. OpenShift must protect the system journal file from any type of unauthorized access by setting file permissions.
The system journal file contains important log data that helps in troubleshooting and monitoring the system. Unauthorized access or tampering with the journal file can compromise the integrity of this data. By setting appropriate file permissions, OpenShift ensures that only authorized users or processes have access to the journal file, maintaining the integrity and reliability of system logs.
Check
Verify the system journal file has mode "0640" or less permissive by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; machine_id=$(systemd-machine-id-setup --print); stat -c "%a %n" /var/log/journal/$machine_id/system.journal' 2>/dev/null; done
If a value of "0640" or less permissive is not returned, this is a finding.
Fix
Correct journal file permissions by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; machine_id=$(systemd-machine-id-setup --print); chmod 640 /var/log/journal/$machine_id/system.journal' 2>/dev/null; done
OpenShift must protect system journal file from any type of unauthorized access by setting owner permissions.
STIG ID:
CNTR-OS-000270 |
SRG: SRG-APP-000118-CTR-000240 |
Severity: medium |
CCI: CCI-000162 |
Vulnerability Id: V-257529 |
Vulnerability Discussion
OpenShift follows the principle of least privilege, which aims to restrict access to resources based on user roles and responsibilities. This separation of privileges helps mitigate the risk of unauthorized modifications or unauthorized access by users or processes that do not need to interact with the file.
Protecting the system journal file from unauthorized access helps safeguard against potential security threats. The system journal file contains critical log data that is vital for system analysis, troubleshooting, and security auditing. Unauthorized users gaining access to the file may exploit vulnerabilities, tamper with logs, or extract sensitive information. By setting strict file owner permissions, OpenShift minimizes the risk of unauthorized individuals or processes accessing or modifying the journal file, reducing the likelihood of security breaches.
Check
Verify the "system journal" file is group-owned by systemd-journal and owned by root by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; machine_id=$(systemd-machine-id-setup --print); stat -c "%U %G" /var/log/journal/$machine_id/system.journal' 2>/dev/null; done
Example output:
ip-10-0-150-1 root systemd-journal
If "root" is not returned as the owner, this is a finding.
If "systemd-journald" is not returned as the group owner, this is a finding.
Fix
Correct journal file ownership by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; machine_id=$(systemd-machine-id-setup --print); chown root:systemd-journal /var/log/journal/$machine_id/system.journal' 2>/dev/null; done
OpenShift must protect log directory from any type of unauthorized access by setting file permissions.
STIG ID:
CNTR-OS-000280 |
SRG: SRG-APP-000118-CTR-000240 |
Severity: medium |
CCI: CCI-000162 |
Vulnerability Id: V-257530 |
Vulnerability Discussion
Log files contain sensitive information such as user credentials, system configurations, and potentially even security-related events. Unauthorized access to log files can expose this sensitive data to malicious actors. By protecting the log directory, OpenShift ensures that only authorized users or processes can access the log files, preserving the confidentiality of the information contained within them.
Check
Verify the "/var/log" directory has a mode of "0755" or less by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; stat -c "%a %n" /var/log' 2>/dev/null; done
If a value of "0755" or less permissive is not returned, this is a finding.
Fix
Correct log directory permissions by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; chmod 755 /var/log/' 2>/dev/null; done
OpenShift must protect log directory from any type of unauthorized access by setting owner permissions.
STIG ID:
CNTR-OS-000290 |
SRG: SRG-APP-000118-CTR-000240 |
Severity: medium |
CCI: CCI-000162 |
Vulnerability Id: V-257531 |
Vulnerability Discussion
OpenShift follows the principle of least privilege, which aims to restrict access to resources based on user roles and responsibilities. This separation of privileges helps mitigate the risk of unauthorized modifications or unauthorized access by users or processes that do not need to interact with the file.
Protecting the /var/log directory from unauthorized access helps safeguard against potential security threats. Unauthorized users gaining access to the file may exploit vulnerabilities, tamper with logs, or extract sensitive information. By setting strict file owner permissions, OpenShift minimizes the risk of unauthorized individuals or processes accessing or modifying the directory, reducing the likelihood of security breaches.
Check
Verify the "/var/log" directory is group-owned by root by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; stat -c "%G" /var/log' 2>/dev/null; done
If "root" is not returned as a result, this is a finding.
Fix
Correct log directory ownership by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; chown root:root /var/log/' 2>/dev/null; done
OpenShift must protect pod log files from any type of unauthorized access by setting owner permissions.
STIG ID:
CNTR-OS-000300 |
SRG: SRG-APP-000118-CTR-000240 |
Severity: medium |
CCI: CCI-000162 |
Vulnerability Id: V-257532 |
Vulnerability Discussion
Pod log files may contain sensitive information such as application data, user credentials, or system configurations. Unauthorized access to these log files can expose sensitive data to malicious actors. By setting owner permissions, OpenShift ensures that only authorized users or processes with the necessary privileges can access the pod log files, preserving the confidentiality of the logged information.
Check
Verify the permissions and ownership of files located under "/var/log/pods" that store the output of pods are set to protect from unauthorized access.
1. Verify the files are readable only by the owner by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; find /var/log/pods/ -type f \( -perm /022 -o -perm /044 \)' 2>/dev/null; done
If any files are returned, this is a finding.
2. Verify files are group-owned by root and owned by root by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; find /var/log/pods/ -type f \( \! -user 0 -o \! -group 0 \)' 2>/dev/null; done
(The command prints the node name followed by any files under "/var/log/pods" that are not owned and group-owned by "root"; a compliant node returns only its name.)
If any file paths are returned, the files are not owned and group-owned by "root", and this is a finding.
Fix
Change the permissions and ownership of files located under "/var/log/pods" to protect from unauthorized access.
1. Execute the following to set the output of pods readable only by the owner:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; find /var/log/pods/ -type f \( -perm /022 -o -perm /044 \) | xargs -r chmod 600' 2>/dev/null; done
2. Execute the following to set the group and group-ownership to root for files that store the output of pods:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; find /var/log/pods/ -type f \( \! -user 0 -o \! -group 0 \) | xargs -r chown root:root' 2>/dev/null; done
OpenShift must protect audit information from unauthorized modification.
STIG ID:
CNTR-OS-000310 |
SRG: SRG-APP-000119-CTR-000245 |
Severity: medium |
CCI: CCI-000163,CCI-000164 |
Vulnerability Id: V-257533 |
Vulnerability Discussion
If audit data were to become compromised, then competent forensic analysis and discovery of the true source of potentially malicious system activity is difficult if not impossible to achieve. In addition, access to audit records provides information an attacker could potentially use to his or her advantage.
To ensure the veracity of audit data, the information system and/or the application must protect audit information from all unauthorized access. This includes read, write, and copy access.
This requirement can be achieved through multiple methods, which will depend upon system architecture and design. Commonly employed methods for protecting audit information include least privilege permissions as well as restricting the location and number of log file repositories.
Additionally, applications with user interfaces to audit records must not allow for the unfettered manipulation of or access to those records via the application. If the application provides access to the audit data, the application becomes accountable for ensuring audit information is protected from unauthorized access.
Audit information includes all information (e.g., audit records, audit settings, and audit reports) needed to successfully audit information system activity.
Satisfies: SRG-APP-000119-CTR-000245, SRG-APP-000120-CTR-000250
Check
Verify the audit system prevents unauthorized changes by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep "^\-e\s2\s*$" /etc/audit/audit.rules /etc/audit/rules.d/* || echo "not found"' 2>/dev/null; done
If the check returns "not found", the audit system is not set to be immutable by adding the ""-e 2"" option to the ""/etc/audit/audit.rules"", this is a finding.
Fix
Apply the machine config to prevent unauthorized changes by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 90-immutable-rules-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,-e%202%0A
mode: 0600
path: /etc/audit/rules.d/90-immutable.rules
overwrite: true
" | oc apply -f -
done
OpenShift must prevent unauthorized changes to logon UIDs.
STIG ID:
CNTR-OS-000320 |
SRG: SRG-APP-000121-CTR-000255 |
Severity: medium |
CCI: CCI-001493 |
Vulnerability Id: V-257534 |
Vulnerability Discussion
Logon UIDs are used to uniquely identify and authenticate users within the system. By preventing unauthorized changes to logon UIDs, OpenShift ensures that user identities remain consistent and accurate. This helps maintain the integrity of user accounts and ensures that users can be properly authenticated and authorized for their respective resources and actions.
User accounts and associated logon UIDs are important for security monitoring, auditing, and accountability purposes. By preventing unauthorized changes to logon UIDs, OpenShift ensures that actions performed by users can be accurately traced and attributed to the correct user account. This helps with incident investigation, compliance requirements, and maintaining overall system security.
Check
Verify the audit system prevents unauthorized changes to logon UIDs by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -i immutable /etc/audit/audit.rules || echo "not found"' 2>/dev/null; done
If the login UIDs are not set to be immutable by adding the "--loginuid-immutable" option to the "/etc/audit/audit.rules", this is a finding.
Fix
Apply the machine config to prevent changes to logon UIDs by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 11-loginuid-rules-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,%23%23%20Make%20the%20loginuid%20immutable.%20This%20prevents%20tampering%20with%20the%20auid.%0A--loginuid-immutable%0A
mode: 0644
path: /etc/audit/rules.d/11-loginuid.rules
overwrite: true
" | oc apply -f -
done
OpenShift must protect audit tools from unauthorized access.
STIG ID:
CNTR-OS-000330 |
SRG: SRG-APP-000121-CTR-000255 |
Severity: medium |
CCI: CCI-001493,CCI-001494,CCI-001495 |
Vulnerability Id: V-257535 |
Vulnerability Discussion
Protecting audit data also includes identifying and protecting the tools used to view and manipulate log data. Therefore, protecting audit tools is necessary to prevent unauthorized operation on audit data.
Applications providing tools to interface with audit data will leverage user permissions and roles identifying the user accessing the tools and the corresponding rights the user enjoys in order to make access decisions regarding the access to audit tools.
Audit tools include, but are not limited to, vendor-provided and open source audit tools needed to successfully view and manipulate audit information system activity and records. Audit tools include custom queries and report generators.
Satisfies: SRG-APP-000121-CTR-000255, SRG-APP-000122-CTR-000260, SRG-APP-000123-CTR-000265
Check
List the users and groups who have permission to view the cluster logging configuration by executing the following two commands:
oc policy who-can view ClusterLogging -n openshift-logging
oc policy who-can view ClusterLogForwarder -n openshift-logging
Review the list of users and groups who have view access to the cluster logging resources. If any user or group listed must not have access to view the cluster logging resources, this is a finding.
Fix
Remove view permissions from any unauthorized user or group by performing one or more of the following commands.
Remove role from user by executing the following:
oc adm policy remove-role-from-user <role> <username> -n openshift-logging
Remove role from group by executing the following:
oc adm policy remove-role-from-group <role> <groupname> -n openshift-logging
Remove cluster role from user by executing the following:
oc adm policy remove-cluster-role-from-user <cluster_role> <username>
Remove cluster role from group by executing the following:
oc adm policy remove-cluster-role-from-group <cluster_role> <groupname>
Note: <role>/<cluster_role> is the role granting the user or group view permission on resources in the openshift-logging namespace.
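For example, if a hypothetical user "jane.doe" held the "view" role on the openshift-logging namespace and is not authorized to view logging resources, the grant could be removed with:
oc adm policy remove-role-from-user view jane.doe -n openshift-logging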
OpenShift must use FIPS-validated cryptographic mechanisms to protect the integrity of log information.
STIG ID:
CNTR-OS-000340 |
SRG: SRG-APP-000126-CTR-000275 |
Severity: medium |
CCI: CCI-001350 |
Vulnerability Id: V-257536 |
Vulnerability Discussion
To fully investigate an incident and to have trust in the audit data that is generated, it is important to put in place data protections. Without integrity protections, unauthorized changes may be made to the audit files and reliable forensic analysis and discovery of the source of malicious system activity may be degraded. Although digital signatures are one example of protecting integrity, this control is not intended to cause a new cryptographic hash to be generated every time a record is added to a log file.
Integrity protections can also be implemented by using cryptographic techniques for security function isolation and file system protections to protect against unauthorized changes.
Check
Verify the Cluster Log Forwarder is using an encrypted transport by executing the following:
oc get clusterlogforwarder -n openshift-logging
For each Cluster Log Forwarder, run the following command to display the configuration.
oc describe clusterlogforwarder -n openshift-logging
Review the configuration and determine if the transport is secure, such as tls:// or https://. If there are any transports configured that are not secured by TLS, this is a finding.
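Note: As a quick supplementary way to surface every configured transport at once, the url fields can be extracted from the forwarder configuration, for example:
oc get clusterlogforwarder -n openshift-logging -o yaml | grep "url:"
Any entry beginning with udp://, tcp://, or http:// is an unencrypted transport.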
Fix
Edit the Cluster Log Forwarder configuration to configure TLS on the transport by executing the following:
oc edit clusterlogforwarder -n openshift-logging
For any output->url value that is not using a secure transport, edit the url to use a secure (https:// or tls://) transport.
For detailed information regarding configuration of the Cluster Log Forwarder, refer to https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-external.html.
OpenShift must verify container images.
STIG ID:
CNTR-OS-000360 |
SRG: SRG-APP-000131-CTR-000285 |
Severity: medium |
CCI: CCI-001749 |
Vulnerability Id: V-257537 |
Vulnerability Discussion
The container platform must be capable of validating that container images are signed and that the digital signature is from a recognized source approved by the organization. Allowing any container image to be introduced into the registry and instantiated into a container can allow for services to be introduced that are not trusted and may contain malicious code, which introduces unwanted services. These unwanted services can cause harm and security risks to the hosting server, the container platform, other services running within the container platform, and the overall organization.
Check
Determine if a policy has been put in place by running the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; cat /etc/containers/policy.json' 2>/dev/null; done
If the policy is not set to "reject" by default, or the signature keys are not configure appropriately on the registries, this is a finding.
The following is an example of how this will look on a system using Red Hat's public registries:
{
"default": [{"type": "reject"}],
"transports": {
"docker": {
"registry.access.redhat.com": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
}
],
"registry.redhat.io": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
}
],
...
}
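Note: As a quick supplementary spot check of the default policy only, the "default" entry can be extracted on each node, for example:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep default /etc/containers/policy.json' 2>/dev/null; done
A compliant node shows the default entry set to the "reject" type.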
Fix
Configure the OpenShift Container policy to validate that image signatures are verified and enforced by executing the following:
Note: This can be configured manually or through the use of the MachineConfig Operator published by Red Hat.
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: $mcpool
name: 51-rh-registry-trust-$mcpool
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,LS0tCmRvY2tlcjoKICByZWdpc3RyeS5hY2Nlc3MucmVkaGF0LmNvbToKICAgIHNpZ3N0b3JlOiBodHRwczovL2FjY2Vzcy5yZWRoYXQuY29tL3dlYmFzc2V0cy9kb2NrZXIvY29udGVudC9zaWdzdG9yZQo=
verification: {}
filesystem: root
mode: 420
path: /etc/containers/registries.d/registry.access.redhat.com.yaml
- contents:
source: data:text/plain;charset=utf-8;base64,LS0tCmRvY2tlcjoKICByZWdpc3RyeS5yZWRoYXQuaW86CiAgICBzaWdzdG9yZTogaHR0cHM6Ly9yZWdpc3RyeS5yZWRoYXQuaW8vY29udGFpbmVycy9zaWdzdG9yZQo=
verification: {}
filesystem: root
mode: 420
path: /etc/containers/registries.d/registry.redhat.io.yaml
- contents:
source: data:text/plain;charset=utf-8;base64,ewogICJkZWZhdWx0IjogW3sidHlwZSI6ICJyZWplY3QifV0sCiAgInRyYW5zcG9ydHMiOiB7CiAgICAiZG9ja2VyIjogewogICAgICAicmVnaXN0cnkuYWNjZXNzLnJlZGhhdC5jb20iOiBbCiAgICAgICAgewogICAgICAgICAgInR5cGUiOiAic2lnbmVkQnkiLAogICAgICAgICAgImtleVR5cGUiOiAiR1BHS2V5cyIsCiAgICAgICAgICAia2V5UGF0aCI6ICIvZXRjL3BraS9ycG0tZ3BnL1JQTS1HUEctS0VZLXJlZGhhdC1yZWxlYXNlIgogICAgICAgIH0KICAgICAgXSwKICAgICAgInJlZ2lzdHJ5LnJlZGhhdC5pbyI6IFsKICAgICAgICB7CiAgICAgICAgICAidHlwZSI6ICJzaWduZWRCeSIsCiAgICAgICAgICAia2V5VHlwZSI6ICJHUEdLZXlzIiwKICAgICAgICAgICJrZXlQYXRoIjogIi9ldGMvcGtpL3JwbS1ncGcvUlBNLUdQRy1LRVktcmVkaGF0LXJlbGVhc2UiCiAgICAgICAgfQogICAgICBdLAogICAgICAiaW1hZ2UtcmVnaXN0cnkub3BlbnNoaWZ0LWltYWdlLXJlZ2lzdHJ5LnN2Yzo1MDAwIjogW3sidHlwZSI6ICJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIn1dLAogICAgICAicXVheS5pby9jb21wbGlhbmNlYXNjb2RlIjogW3sidHlwZSI6ICJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIn1dLAogICAgICAicXVheS5pby9jb21wbGlhbmNlLW9wZXJhdG9yIjogW3sidHlwZSI6ICJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIn1dLAogICAgICAicXVheS5pby9rZXljbG9hayI6IFt7InR5cGUiOiAiaW5zZWN1cmVBY2NlcHRBbnl0aGluZyJ9XSwKICAgICAgInF1YXkuaW8vb3BlbnNoaWZ0LXJlbGVhc2UtZGV2IjogW3sidHlwZSI6ICJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIn1dLAogICAgICAicmVnaXN0cnkuYnVpbGQwMi5jaS5vcGVuc2hpZnQub3JnIjogW3sidHlwZSI6ICJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIn1dLAogICAgICAicmVnaXN0cnkuYnVpbGQwMS5jaS5vcGVuc2hpZnQub3JnIjogW3sidHlwZSI6ICJpbnNlY3VyZUFjY2VwdEFueXRoaW5nIn1dCiAgICB9CiAgfQp9Cg==
verification: {}
filesystem: root
mode: 420
path: /etc/containers/policy.json
" | oc apply -f -
done
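Note: As an illustrative verification after the machine config rolls out, the container signature policy actually in effect on a node can be displayed with podman, for example:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo "$HOSTNAME"; podman image trust show' 2>/dev/null; done
The default scope is expected to be listed as "reject", with signed entries for the Red Hat registries.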
OpenShift must contain only container images for those capabilities being offered by the container platform.
STIG ID:
CNTR-OS-000380 |
SRG: SRG-APP-000141-CTR-000320 |
Severity: medium |
CCI: CCI-000381 |
Vulnerability Id: V-257538 |
Vulnerability Discussion
Allowing container images to reside within the container platform registry that are not essential to the capabilities being offered by the container platform becomes a potential security risk. By allowing these nonessential container images to exist, the possibility for accidental instantiation exists. The images may be unpatched, not supported, or offer nonapproved capabilities. Those images for customer services are considered essential capabilities.
Check
To review the container images within the container platform registry, execute the following:
oc get images
Review the container platform container images to validate that only container images necessary for the functionality of the information system are present. If unnecessary container images exist, this is a finding.
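Note: As a supplementary aid for this review, the registries that the stored images were pulled from can be summarized, for example:
oc get images -o jsonpath='{range .items[*]}{.dockerImageReference}{"\n"}{end}' | awk -F/ '{print $1}' | sort | uniq -c
Images originating from unexpected registries warrant closer review.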
Fix
Remove any images from the container registry that are not required for the functionality of the system by executing the following:
oc delete image <image_name>
OpenShift runtime must enforce ports, protocols, and services that adhere to the PPSM CAL.
STIG ID:
CNTR-OS-000390 |
SRG: SRG-APP-000142-CTR-000325 |
Severity: medium |
CCI: CCI-000382 |
Vulnerability Id: V-257539 |
Vulnerability Discussion
OpenShift Container Platform uses several IPV4 and IPV6 ports and protocols to facilitate cluster communication and coordination. Not all these ports are identified and approved by the PPSM CAL. Those ports, protocols, and services that fall outside the PPSM CAL must be blocked by the runtime or registered.
Instructions on the PPSM can be found in DOD Instruction 8551.01 Policy.
Check
Review the OpenShift documentation and configuration.
For additional information, refer to https://docs.openshift.com/container-platform/4.12/installing/installing_platform_agnostic/installing-platform-agnostic.html.
1. Interview the application administrator.
2. Identify the TCP/IP port numbers OpenShift is configured to use, and is actually using, with a combination of relevant OS commands and application configuration utilities.
3. Identify the network ports and protocols that are used by kube-apiserver by executing the following:
oc get configmap kube-apiserver-pod -n openshift-kube-apiserver -o "jsonpath={ .data['pod\.yaml'] }" | jq '..|.containerPort?' | grep -v "null"
oc get configmap kube-apiserver-pod -n openshift-kube-apiserver -o "jsonpath={ .data['pod\.yaml'] }" | jq '..|.hostPort?' | grep -v "null"
oc get services -A --show-labels | grep apiserver | awk '{print $6,$8}' | grep apiserver
4. Identify the network ports and protocols used by kube-scheduler by executing the following:
oc get configmap kube-scheduler-pod -n openshift-kube-scheduler -o "jsonpath={ .data['pod\.yaml'] }" | jq '..|.containerPort?' | grep -v "null"
oc get services -A --show-labels | grep scheduler | awk '{print $6,$8}' | grep scheduler
5. Identify the network ports and protocols used by kube-controller-manager by executing the following:
oc get configmap kube-controller-manager-pod -n openshift-kube-controller-manager -o "jsonpath={ .data['pod\.yaml'] }" | jq '..|.containerPort?' | grep -v "null"
oc get services -A --show-labels | grep kube-controller
6. Identify the network ports and protocols used by etcd by executing the following:
oc get configmap etcd-pod -n openshift-etcd -o "jsonpath={ .data['pod\.yaml'] }" | grep -Po '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:[0-9]+' | sort -u
Review the PPSM web page at: http://www.disa.mil/Network-Services/Enterprise-Connections/PPSM.
Review the PPSM Category Assurance List (CAL) directly at the following link: https://disa.deps.mil/ext/cop/iase/ppsm/Pages/cal.aspx.
Verify the ports used by OpenShift are approved by the PPSM CAL.
If the ports, protocols, and services have not been registered locally, this is a finding.
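The per-component queries in steps 3 through 5 can optionally be consolidated; the following is a hedged convenience sketch that reuses the same oc and jq invocations as the check to print the container ports from the three static-pod configmaps in one pass. Service ports and etcd endpoints still require the separate commands shown above.
for ns in openshift-kube-apiserver openshift-kube-scheduler openshift-kube-controller-manager; do
  cm="${ns#openshift-}-pod"
  echo "== ${cm} (${ns}) =="
  # Same jsonpath/jq pattern as the check: extract container ports from the static pod definition.
  oc get configmap "${cm}" -n "${ns}" -o "jsonpath={ .data['pod\.yaml'] }" | jq '..|.containerPort?' | grep -v "null" | sort -un
done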
Fix
Verify the accreditation documentation lists all interfaces and the ports, protocols, and services used.
Register OpenShift's ports, protocols, and services with PPSM.
OpenShift must disable root and terminate network connections.
STIG ID:
CNTR-OS-000400 |
SRG: SRG-APP-000148-CTR-000335 |
Severity: high |
CCI: CCI-000764,CCI-001133 |
Vulnerability Id: V-257540 |
Vulnerability Discussion
Direct login as the "root" user must be disabled to prevent unrestricted access and control over the entire system.
Terminating an idle session within a short time reduces the window of opportunity for unauthorized personnel to take control of a management session enabled on the console or console port that has been left unattended. In addition, quickly terminating an idle session will also free up resources committed by the managed network element.
Terminating network connections associated with communications sessions includes, for example, de-allocating associated TCP/IP address/port pairs at the operating system level, or de-allocating networking assignments at the application level if multiple application sessions are using a single operating system level network connection. This does not mean that the application terminates all sessions or network access; it only ends the inactive session and releases the resources associated with that session.
Satisfies: SRG-APP-000148-CTR-000335, SRG-APP-000190-CTR-000500
Check
Verify SSH is restricted from logging on as root and network connections are terminated.
Verify logging on directly as "root" using SSH is prevented by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -i PermitRootLogin /etc/ssh/sshd_config' 2>/dev/null; done
If the "PermitRootLogin" keyword is set to "yes", is missing, or is commented out, this is a finding.
Verify all network connections associated with SSH traffic are automatically terminated at the end of the session or after 10 minutes of inactivity.
Check the "ClientAliveCountMax" and ClientAliveInterval by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -i clientalive /etc/ssh/sshd_config' 2>/dev/null; done
If "ClientAliveCountMax" do not exist, is not set to a value of "0" in "/etc/ssh/sshd_config", or is commented out, this is a finding.
If "ClientAliveInterval" does not exist, or has a value of > 600 in "/etc/ssh/sshd_config", or is commented out, this is a finding.
Fix
Apply the machine config that disables root and terminates network connections by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 50-sshd-config-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,%0A%23%09$OpenBSD:%20sshd_config%2Cv%201.103%202018%2F04%2F09%2020:41:22%20tj%20Exp%20$%0A%0A%23%20This%20is%20the%20sshd%20server%20system-wide%20configuration%20file.%20%20See%0A%23%20sshd_config%285%29%20for%20more%20information.%0A%0A%23%20This%20sshd%20was%20compiled%20with%20PATH=%2Fusr%2Flocal%2Fbin:%2Fusr%2Fbin:%2Fusr%2Flocal%2Fsbin:%2Fusr%2Fsbin%0A%0A%23%20The%20strategy%20used%20for%20options%20in%20the%20default%20sshd_config%20shipped%20with%0A%23%20OpenSSH%20is%20to%20specify%20options%20with%20their%20default%20value%20where%0A%23%20possible%2C%20but%20leave%20them%20commented.%20%20Uncommented%20options%20override%20the%0A%23%20default%20value.%0A%0A%23%20If%20you%20want%20to%20change%20the%20port%20on%20a%20SELinux%20system%2C%20you%20have%20to%20tell%0A%23%20SELinux%20about%20this%20change.%0A%23%20semanage%20port%20-a%20-t%20ssh_port_t%20-p%20tcp%20%23PORTNUMBER%0A%23%0A%23Port%2022%0A%23AddressFamily%20any%0A%23ListenAddress%200.0.0.0%0A%23ListenAddress%20::%0A%0AHostKey%20%2Fetc%2Fssh%2Fssh_host_rsa_key%0AHostKey%20%2Fetc%2Fssh%2Fssh_host_ecdsa_key%0AHostKey%20%2Fetc%2Fssh%2Fssh_host_ed25519_key%0A%0A%23%20Ciphers%20and%20keying%0ARekeyLimit%20512M%201h%0A%0A%23%20System-wide%20Crypto%20policy:%0A%23%20This%20system%20is%20following%20system-wide%20crypto%20policy.%20The%20changes%20to%0A%23%20Ciphers%2C%20MACs%2C%20KexAlgoritms%20and%20GSSAPIKexAlgorithsm%20will%20not%20have%20any%0A%23%20effect%20here.%20They%20will%20be%20overridden%20by%20command-line%20options%20passed%20on%0A%23%20the%20server%20start%20up.%0A%23%20To%20opt%20out%2C%20uncomment%20a%20line%20with%20redefinition%20of%20%20CRYPTO_POLICY=%0A%23%20variable%20in%20%20%2Fetc%2Fsysconfig%2Fsshd%20%20to%20overwrite%20the%20policy.%0A%23%20For%20more%20information%2C%20see%20manual%20page%20for%20update-crypto-policies%288%29.%0A%0A%23%20Logging%0A%23SyslogFacility%20AUTH%0ASyslogFacility%20AUTHPRIV%0A%23LogLevel%20INFO%0A%0A%23%20Authentication:%0A%0A%23LoginGraceTime%202m%0APermitRootLogin%20no%0AStrictModes%20yes%0A%23MaxAuthTries%206%0A%23MaxSessions%2010%0A%0APubkeyAuthentication%20yes%0A%0A%23%20The%20default%20is%20to%20check%20both%20.ssh%2Fauthorized_keys%20and%20.ssh%2Fauthorized_keys2%0A%23%20but%20this%20is%20overridden%20so%20installations%20will%20only%20check%20.ssh%2Fauthorized_keys%0AAuthorizedKeysFile%09.ssh%2Fauthorized_keys%0A%0A%23AuthorizedPrincipalsFile%20none%0A%0A%23AuthorizedKeysCommand%20none%0A%23AuthorizedKeysCommandUser%20nobody%0A%0A%23%20For%20this%20to%20work%20you%20will%20also%20need%20host%20keys%20in%20%2Fetc%2Fssh%2Fssh_known_hosts%0AHostbasedAuthentication%20no%0A%23%20Change%20to%20yes%20if%20you%20don%27t%20trust%20~%2F.ssh%2Fknown_hosts%20for%0A%23%20HostbasedAuthentication%0AIgnoreUserKnownHosts%20yes%0A%23%20Don%27t%20read%20the%20user%27s%20~%2F.rhosts%20and%20~%2F.shosts%20files%0AIgnoreRhosts%20yes%0A%0A%23%20To%20disable%20tunneled%20clear%20text%20passwords%2C%20change%20to%20no%20here%21%0A%23PasswordAuthentication%20yes%0APermitEmptyPasswords%20no%0APasswordAuthentication%20no%0A%0A%23%20Change%20to%20no%20to%20disable%20s%2Fkey%20passwords%0A%23ChallengeResponseAuthentication%20yes%0AChallengeResponseAuthentication%20no%0A%0A%23%20Kerberos%20options%0AKerberosAuthentication%20no%0A%23KerberosOrLocalPasswd%20yes%0A%23KerberosTicketCleanup%20yes%0A%23KerberosGetAFSToken%20no%0A%23KerberosUseKuserok%20yes%0A%0A%23%20GSSAPI%20options%0AGSSAPIAuthentication%20no%0AGSSAPICleanupCredentials%20no%0A%23GSSAPIStrictAcceptorCheck%20yes%0A%23GSSAPIKeyExch
ange%20no%0A%23GSSAPIEnablek5users%20no%0A%0A%23%20Set%20this%20to%20%27yes%27%20to%20enable%20PAM%20authentication%2C%20account%20processing%2C%0A%23%20and%20session%20processing.%20If%20this%20is%20enabled%2C%20PAM%20authentication%20will%0A%23%20be%20allowed%20through%20the%20ChallengeResponseAuthentication%20and%0A%23%20PasswordAuthentication.%20%20Depending%20on%20your%20PAM%20configuration%2C%0A%23%20PAM%20authentication%20via%20ChallengeResponseAuthentication%20may%20bypass%0A%23%20the%20setting%20of%20%22PermitRootLogin%20without-password%22.%0A%23%20If%20you%20just%20want%20the%20PAM%20account%20and%20session%20checks%20to%20run%20without%0A%23%20PAM%20authentication%2C%20then%20enable%20this%20but%20set%20PasswordAuthentication%0A%23%20and%20ChallengeResponseAuthentication%20to%20%27no%27.%0A%23%20WARNING:%20%27UsePAM%20no%27%20is%20not%20supported%20in%20Fedora%20and%20may%20cause%20several%0A%23%20problems.%0AUsePAM%20yes%0A%0A%23AllowAgentForwarding%20yes%0A%23AllowTcpForwarding%20yes%0A%23GatewayPorts%20no%0AX11Forwarding%20yes%0A%23X11DisplayOffset%2010%0A%23X11UseLocalhost%20yes%0A%23PermitTTY%20yes%0A%0A%23%20It%20is%20recommended%20to%20use%20pam_motd%20in%20%2Fetc%2Fpam.d%2Fsshd%20instead%20of%20PrintMotd%2C%0A%23%20as%20it%20is%20more%20configurable%20and%20versatile%20than%20the%20built-in%20version.%0APrintMotd%20no%0A%0APrintLastLog%20yes%0A%23TCPKeepAlive%20yes%0APermitUserEnvironment%20no%0ACompression%20no%0AClientAliveInterval%20600%0AClientAliveCountMax%200%0A%23UseDNS%20no%0A%23PidFile%20%2Fvar%2Frun%2Fsshd.pid%0A%23MaxStartups%2010:30:100%0A%23PermitTunnel%20no%0A%23ChrootDirectory%20none%0A%23V+G67ersionAddendum%20none%0A%0A%23%20no%20default%20banner%20path%0ABanner%20%2Fetc%2Fissue%0A%0A%23%20Accept%20locale-related%20environment%20variables%0AAcceptEnv%20LANG%20LC_CTYPE%20LC_NUMERIC%20LC_TIME%20LC_COLLATE%20LC_MONETARY%20LC_MESSAGES%0AAcceptEnv%20LC_PAPER%20LC_NAME%20LC_ADDRESS%20LC_TELEPHONE%20LC_MEASUREMENT%0AAcceptEnv%20LC_IDENTIFICATION%20LC_ALL%20LANGUAGE%0AAcceptEnv%20XMODIFIERS%0A%0A%23%20override%20default%20of%20no%20subsystems%0ASubsystem%09sftp%09%2Fusr%2Flibexec%2Fopenssh%2Fsftp-server%0A%0A%23%20Example%20of%20overriding%20settings%20on%20a%20per-user%20basis%0A%23Match%20User%20anoncvs%0A%23%09X11Forwarding%20no%0A%23%09AllowTcpForwarding%20no%0A%23%09PermitTTY%20no%0A%23%09ForceCommand%20cvs%20server%0A%0AUsePrivilegeSeparation%20sandbox
mode: 0644
overwrite: true
path: /etc/ssh/sshd_config
" | oc apply -f -
done
OpenShift must use multifactor authentication for network access to accounts.
STIG ID:
CNTR-OS-000430 |
SRG: SRG-APP-000149-CTR-000355 |
Severity: medium |
CCI: CCI-000765,CCI-000766 |
Vulnerability Id: V-257541 |
Vulnerability Discussion
Without the use of multifactor authentication, the ease of access to privileged and nonprivileged functions is greatly increased.
Multifactor authentication requires using two or more factors to achieve authentication.
Factors include:
(i) something a user knows (e.g., password/PIN);
(ii) something a user has (e.g., cryptographic identification device, token); or
(iii) something a user is (e.g., biometric).
A privileged account is defined as an information system account with authorizations of a privileged user.
A nonprivileged account is any information system account with authorizations of a nonprivileged user.
Network access is defined as access to an information system by a user (or a process acting on behalf of a user) communicating through a network (e.g., local area network, wide area network, or the internet).
Satisfies: SRG-APP-000149-CTR-000355, SRG-APP-000150-CTR-000360
Check
Verify the authentication operator is configured to use either an LDAP or an OpenIDConnect provider by executing the following:
oc get oauth cluster -o jsonpath="{.spec.identityProviders[*].type}{'\n'}"
If the output lists any other type besides LDAP or OpenID, this is a finding.
Fix
Configure OpenShift to use an appropriate Identity Provider. Do not use HTPasswd. Use either LDAP(AD), OpenIDConnect, or an approved identity provider.
Steps to configure LDAP provider:
1. Create Secret for BIND DN password by executing the following:
oc create secret generic ldap-secret --from-literal=bindPassword=<bind_password> -n openshift-config
2. Create config map for LDAP Trust CA by executing the following:
oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
3. Create LDAP Auth Config Resource YAML:
Using the preferred text editor, create a file named ldapidp.yaml using the example content (replacing config values as appropriate):
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: ldapidp
    mappingMethod: claim
    type: LDAP
    ldap:
      attributes:
        id:
        - dn
        email:
        - mail
        name:
        - cn
        preferredUsername:
        - uid
      bindDN: <"bindDN">
      bindPassword:
        name: ldap-secret
      ca:
        name: ca-config-map
      insecure: false
      url:
4. Apply LDAP config to cluster by executing the following:
oc apply -f ldapidp.yaml
Note: For more information on configuring an LDAP provider, refer to https://docs.openshift.com/container-platform/4.8/authentication/identity_providers/configuring-ldap-identity-provider.html.
Steps to configure OpenID provider:
1. Create Secret for Client Secret by executing the following:
oc create secret generic idp-secret --from-literal=clientSecret=<client_secret> -n openshift-config
2. Create config map for OpenID Trust CA by executing the following:
oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
3. Create OpenID Auth Config Resource YAML.
Using your preferred text editor, create a file named oidcidp.yaml using the example content (replacing config values as appropriate).
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp
    mappingMethod: claim
    type: OpenID
    openID:
      clientID:
      clientSecret:
        name: idp-secret
      claims:
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
      ca:
        name: ca-config-map
      issuer:
4. Apply OpenID config to cluster by executing the following:
oc apply -f oidcidp.yaml
Note: For more information on configuring an OpenID provider, refer to https://docs.openshift.com/container-platform/4.8/authentication/identity_providers/configuring-oidc-identity-provider.html.
OpenShift must use FIPS-validated SHA-1 or higher hash function to provide replay-resistant authentication mechanisms for network access to privileged accounts.
STIG ID:
CNTR-OS-000440 |
SRG: SRG-APP-000156-CTR-000380 |
Severity: medium |
CCI: CCI-001941 |
Vulnerability Id: V-257542 |
Vulnerability Discussion
A replay attack may enable an unauthorized user to gain access to the application. Authentication sessions between the authenticator and the application validating the user credentials must not be vulnerable to a replay attack.
Anti-replay is a cryptographically based mechanism; thus, it must use FIPS-approved algorithms. An authentication process resists replay attacks if it is impractical to achieve a successful authentication by recording and replaying a previous authentication message. Note that the anti-replay service is implicit when data contains monotonically increasing sequence numbers and data integrity is assured. Use of DOD PKI is inherently compliant with this requirement for user and device access. Use of Transport Layer Security (TLS), including application protocols such as HTTPS and DNSSEC, that use TLS/SSL as the underlying security protocol is also compliant.
Configure the information system to use the hash message authentication code (HMAC) algorithm for authentication services to Kerberos, SSH, web management tool, and any other access method.
Check
Verify the authentication operator is configured to use a secure transport to an OpenIDConnect provider:
oc get oauth cluster -o jsonpath="{.spec.identityProviders[*]}{'\n'}"
If the transport is not secure (i.e., not HTTPS), this is a finding.
Fix
Configure OpenShift to use an OpenIDConnect Identity Provider. Note: This STIG was written for OIDC; do not use HTPasswd. Only use an approved identity provider.
Steps to configure OpenID provider:
1. Create Secret for Client Secret by executing the following:
oc create secret generic idp-secret --from-literal=clientSecret=<client_secret> -n openshift-config
2. Create config map for OpenID Trust CA by executing the following:
oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
3. Create OpenID Auth Config Resource YAML.
Using the preferred text editor, create a file named oidcidp.yaml using the example content (replacing config values as appropriate).
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: ...
      clientSecret:
        name: idp-secret
      claims:
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
      issuer: https://www.idp-issuer.com
4. Apply OpenID config to cluster by executing the following:
oc apply -f oidcidp.yaml
Note: For more information on configuring an OpenID provider, refer to https://docs.openshift.com/container-platform/4.8/authentication/identity_providers/configuring-oidc-identity-provider.html.
OpenShift must use FIPS validated LDAP or OpenIDConnect.
STIG ID:
CNTR-OS-000460 |
SRG: SRG-APP-000172-CTR-000440 |
Severity: high |
CCI: CCI-000016,CCI-000017,CCI-000044,CCI-000187,CCI-000192,CCI-000193,CCI-000194,CCI-000195,CCI-000196,CCI-000197,CCI-000198,CCI-000199,CCI-000200,CCI-000205,CCI-000767,CCI-000768,CCI-000795,CCI-001619,CCI-001942,CCI-001953,CCI-001991,CCI-002009,CCI-002041,CCI-002142,CCI-002145,CCI-002238 |
Vulnerability Id: V-257543 |
Vulnerability Discussion
Passwords need to be protected on entry, in transmission, during authentication, and when stored. If compromised at any of these security points, a nefarious user can use the password along with stolen user account information to gain access or to escalate privileges. The container platform may require account authentication during container platform tasks and before accessing container platform components (e.g., runtime, registry, and keystore).
During any user authentication, the container platform must use FIPS-validated SHA-2 or later protocol to protect the integrity of the password authentication process.
Satisfies: SRG-APP-000172-CTR-000440, SRG-APP-000024-CTR-000060, SRG-APP-000025-CTR-000065, SRG-APP-000065-CTR-000115, SRG-APP-000151-CTR-000365, SRG-APP-000152-CTR-000370, SRG-APP-000157-CTR-000385, SRG-APP-000163-CTR-000395, SRG-APP-000164-CTR-000400, SRG-APP-000165-CTR-000405, SRG-APP-000166-CTR-000410, SRG-APP-000167-CTR-000415, SRG-APP-000168-CTR-000420, SRG-APP-000169-CTR-000425, SRG-APP-000170-CTR-000430, SRG-APP-000171-CTR-000435, SRG-APP-000173-CTR-000445, SRG-APP-000174-CTR-000450, SRG-APP-000177-CTR-000465, SRG-APP-000317-CTR-000735, SRG-APP-000318-CTR-000740, SRG-APP-000345-CTR-000785, SRG-APP-000391-CTR-000935, SRG-APP-000397-CTR-000955, SRG-APP-000401-CTR-000965, SRG-APP-000402-CTR-000970
Check
Verify the authentication operator is configured to use either an LDAP or an OpenIDConnect provider by executing the following:
oc get oauth cluster -o jsonpath="{.spec.identityProviders[*].type}{'\n'}"
If the output lists any other type besides LDAP or OpenID, this is a finding.
Fix
Configure OpenShift to use an appropriate Identity Provider. Do not use HTPasswd. Use either LDAP(AD), OpenIDConnect or an approved identity provider.
To configure LDAP provider:
1. Create Secret for BIND DN password by executing the following:
oc create secret generic ldap-secret --from-literal=bindPassword=<bind_password> -n openshift-config
2. Create config map for LDAP Trust CA by executing the following:
oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
3. Create LDAP Auth Config Resource YAML:
Using the preferred text editor, create a file named ldapidp.yaml using the example content (replacing config values as appropriate).
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: ldapidp
    mappingMethod: claim
    type: LDAP
    ldap:
      attributes:
        id:
        - dn
        email:
        - mail
        name:
        - cn
        preferredUsername:
        - uid
      bindDN: <"bindDN">
      bindPassword:
        name: ldap-secret
      ca:
        name: ca-config-map
      insecure: false
      url:
4. Apply LDAP config to cluster by executing the following:
oc apply -f ldapidp.yaml
Note: For more information on configuring an LDAP provider, refer to https://docs.openshift.com/container-platform/4.8/authentication/identity_providers/configuring-ldap-identity-provider.html.
To configure OpenID provider:
1. Create Secret for Client Secret by executing the following:
oc create secret generic idp-secret --from-literal=clientSecret=<client_secret> -n openshift-config
2. Create config map for OpenID Trust CA by executing the following:
oc create configmap ca-config-map --from-file=ca.crt=/path/to/ca -n openshift-config
3. Create OpenID Auth Config Resource YAML.
Using your preferred text editor, create a file named oidcidp.yaml using the example content (replacing config values as appropriate).
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp
    mappingMethod: claim
    type: OpenID
    openID:
      clientID:
      clientSecret:
        name: idp-secret
      claims:
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
      ca:
        name: ca-config-map
      issuer:
4. Apply OpenID config to cluster by executing the following:
oc apply -f oidcidp.yaml
Note: For more information on configuring an OpenID provider, refer to https://docs.openshift.com/container-platform/4.8/authentication/identity_providers/configuring-oidc-identity-provider.html.
OpenShift must terminate all network connections associated with a communications session at the end of the session, or as follows: for in-band management sessions (privileged sessions), the session must be terminated after 10 minutes of inactivity.
STIG ID:
CNTR-OS-000490 |
SRG: SRG-APP-000190-CTR-000500 |
Severity: medium |
CCI: CCI-001133,CCI-002038 |
Vulnerability Id: V-257544 |
Vulnerability Discussion
In OpenShift, the "session token inactivity timeout" on OAuth clients is set to ensure security and protect against potential unauthorized access to user sessions. OAuth is an open standard for secure authorization and authentication between different services. By setting a session token inactivity timeout, OpenShift reduces the risk of unauthorized access to a user's session if they become inactive or leave their session unattended. It helps protect against potential session hijacking or session replay attacks.
OpenShift is designed to efficiently manage resources across the cluster. Active sessions consume resources such as memory and CPU. By setting timeouts, OpenShift can reclaim these resources if a session remains inactive for a certain duration. This helps optimize resource allocation and ensures that resources are available for other active sessions and workloads.
Starting with client version 4.8.36, OpenShift provides an automatic time-out for debug node sessions. By setting a time-out, OpenShift can manage the allocation of resources efficiently. It prevents the scenario where a debug session remains active indefinitely, potentially consuming excessive resources and impacting the performance of other applications running on the cluster.
Allowing debug sessions to run indefinitely could introduce security risks. If a session is left unattended or unauthorized access is gained to a debug session, it could potentially compromise the application or expose sensitive information. By enforcing time-outs, OpenShift reduces the window of opportunity for unauthorized access and helps maintain the security and stability of the platform.
Satisfies: SRG-APP-000190-CTR-000500, SRG-APP-000389-CTR-000925
Check
On each administrator's terminal, verify the oc client version supports the required idle timeout by executing the following:
oc version
If the client version is less than "4.8.36", this is a finding.
Determine if the session token inactivity timeout is set on the oauthclients by executing the following.
oc get oauthclients -ojsonpath='{range .items[*]}{.metadata.name}{"\t"}{.accessTokenInactivityTimeoutSeconds}{"\n"}{end}'
The output will list each oauth client name followed by a number. The number represents the timeout in seconds. If no number is displayed, or the timeout value is >600, this is a finding.
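As a hedged helper, the same query can be piped through standard awk to print only the clients that would be findings (timeout missing or greater than 600 seconds):
oc get oauthclients -ojsonpath='{range .items[*]}{.metadata.name}{"\t"}{.accessTokenInactivityTimeoutSeconds}{"\n"}{end}' | awk -F'\t' '$2 == "" || $2+0 > 600 {print $1, ($2 == "" ? "timeout not set" : $2)}'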
Fix
Download the latest version of the OC client, and remove/replace any older versions.
For each oauth client that does not have the idle timeout set, or the timeout is set to the wrong duration, run the following command to set the idle timeout value to 10 minutes.
oc patch oauthclient/<CLIENT_NAME> --type=merge -p '{"accessTokenInactivityTimeoutSeconds":600}'
where <CLIENT_NAME> is the name of the oauthclient identified in the check.
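For example, to apply the 10-minute timeout to the built-in "console" oauthclient (repeat for each client flagged by the check):
oc patch oauthclient/console --type=merge -p '{"accessTokenInactivityTimeoutSeconds":600}'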
OpenShift must separate user functionality (including user interface services) from information system management functionality.
STIG ID:
CNTR-OS-000500 |
SRG: SRG-APP-000211-CTR-000530 |
Severity: medium |
CCI: CCI-001082 |
Vulnerability Id: V-257545 |
Vulnerability Discussion
Red Hat Enterprise Linux CoreOS (RHCOS) is a single-purpose container operating system. RHCOS is only supported as a component of the OpenShift Container Platform. Remote management of the RHCOS nodes is performed at the OpenShift Container Platform API level.
Any direct access to the RHCOS nodes is unnecessary. RHCOS only has two user accounts defined, root(0) and core(1000). These are the only two user accounts that should exist on the RHCOS nodes. As any administrative access or actions are to be done through the OpenShift Container Platform's administrative APIs, direct logon access to the RHCOS host must be disabled.
Check
Verify that root and core are the only user accounts on the nodes by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; cat /etc/passwd' 2>/dev/null; done
The output will look similar to:
root:x:0:0:root:/root:/bin/bash
core:x:1000:1000:CoreOS Admin:/var/home/core:/bin/bash
containers:x:993:995:User for housing the sub ID range for containers:/var/home/containers:/sbin/nologin
If there are any user accounts in addition to root, containers, and core, this is a finding.
Verify the root and core users are set to disable password logon by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e "^root" -e "^core" /etc/shadow' 2>/dev/null; done
The output will look similar to:
root:*:18367:0:99999:7:::
core:*:18939:0:99999:7:::
If the password entry has anything other than '*', this is a finding.
Fix
Disable and remove passwords from root and core accounts by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'usermod -p "*" root; usermod -p "*" core' 2>/dev/null; done
Remove any additional user accounts from the nodes by executing the following:
oc debug node/<node_name> -- chroot /host /bin/bash -c 'userdel <user_name>'
OpenShift must protect authenticity of communications sessions with the use of FIPS-validated 140-2 or 140-3 validated cryptography.
STIG ID:
CNTR-OS-000510 |
SRG: SRG-APP-000219-CTR-000550 |
Severity: high |
CCI: CCI-001184,CCI-001350,CCI-002450,CCI-002890,CCI-003123 |
Vulnerability Id: V-257546 |
Vulnerability Discussion
FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes.
Because FIPS must be enabled before the operating system used by the cluster boots for the first time, FIPS cannot be disabled after a cluster is deployed.
OpenShift employs industry-validated cryptographic algorithms, key management practices, and secure protocols, reducing the likelihood of cryptographic vulnerabilities and attacks.
Satisfies: SRG-APP-000219-CTR-000550, SRG-APP-000635-CTR-001405, SRG-APP-000126-CTR-000275, SRG-APP-000411-CTR-000995, SRG-APP-000412-CTR-001000, SRG-APP-000416-CTR-001015, SRG-APP-000514-CTR-001315
Check
Validate the OpenShift cluster is running with FIPS enabled on each node by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; sysctl crypto.fips_enabled' 2>/dev/null; done
If any lines of output end in anything other than 1, this is a finding.
Fix
Reinstall the OpenShift cluster in FIPS mode. The file install-config.yaml has a top-level key that enables FIPS mode for all nodes and the cluster platform layer. If the install-config.yaml was not backed up prior to consumption as part of the installation, recreate it. An example install-config.yaml with some sections trimmed out for brevity, and the "fips: true" key applied at the top level is shown below:
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform:
    aws:
      [...]
  replicas: 3
compute:
- name: worker
  platform:
    aws:
  replicas: 3
metadata:
  name: fips-cluster
networking:
  [...]
platform:
  aws:
    [...]
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
fips: true
Once the install-config.yaml is saved with corresponding correct information for the installation infrastructure, run the installer to create a cluster that uses FIPS Validated/Modules in Process cryptographic libraries. The command to install a cluster and consume the install-config.yaml is:
> ./openshift-install create cluster --dir=<installation_directory> --log-level=info
Where <installation_directory> is the directory that contains install-config.yaml.
Additional details can be found here: https://docs.openshift.com/container-platform/4.8/installing/installing-fips.html
OpenShift runtime must isolate security functions from nonsecurity functions.
STIG ID:
CNTR-OS-000540 |
SRG: SRG-APP-000233-CTR-000585 |
Severity: medium |
CCI: CCI-001084 |
Vulnerability Id: V-257547 |
Vulnerability Discussion
An isolation boundary provides access control and protects the integrity of the hardware, software, and firmware that perform security functions. Security functions are the hardware, software, and/or firmware of the information system responsible for enforcing the system security policy and supporting the isolation of code and data on which the protection is based.
Operating systems implement code separation (i.e., separation of security functions from nonsecurity functions) in several ways, including through the provision of security kernels via processor rings or processor modes. For nonkernel code, security function isolation is often achieved through file system protections that serve to protect the code on disk and address space protections that protect executing code.
Developers and implementers can increase the assurance in security functions by employing well-defined security policy models; structured, disciplined, and rigorous hardware and software development techniques; and sound system/security engineering principles. Implementation may include isolation of memory space and libraries. Operating systems restrict access to security functions using access control mechanisms and by implementing least privilege capabilities.
Check
Verify that Red Hat Enterprise Linux CoreOS (RHCOS) enforces the isolation of security functions via SELinux by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; getenforce' 2>/dev/null; done
If "SELinux" is not active and not in "Enforcing" mode, this is a finding.
Fix
Apply the machine config by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 05-kernelarg-selinux-enforcing-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
kernelArguments:
- enforcing=1
" | oc apply -f -
done
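Changing kernel arguments causes the Machine Config Operator to roll the update out pool by pool and reboot nodes. A hedged way to confirm the rollout and re-verify, using the same oc access as the check:
# Watch the MachineConfigPools; proceed once all pools report UPDATED=True and UPDATING=False.
oc get mcp
# Then re-run the check to confirm SELinux is enforcing on every node.
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; getenforce' 2>/dev/null; done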
OpenShift must prevent unauthorized and unintended information transfer via shared system resources and enable page poisoning.
STIG ID:
CNTR-OS-000560 |
SRG: SRG-APP-000243-CTR-000600 |
Severity: medium |
CCI: CCI-001090 |
Vulnerability Id: V-257548 |
Vulnerability Discussion
Enabling page poisoning in OpenShift improves memory safety, mitigates memory corruption vulnerabilities, aids in fault isolation, and assists with debugging. It enhances the overall security and stability of the platform, reducing the risk of memory-related exploits and improving the resilience of applications running on OpenShift.
Check
Check the current CoreOS boot loader configuration has page poisoning enabled by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep page_poison /boot/loader/entries/*.conf|| echo "not found"' 2>/dev/null; done
If "page_poison" is not set to "1" or returns "not found", this is a finding.
Fix
Apply the machine config to enable page poisoning by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 05-kernelarg-page-poison-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
kernelArguments:
- page_poison=1
" | oc apply -f -
done
OpenShift must disable virtual syscalls.
STIG ID:
CNTR-OS-000570 |
SRG: SRG-APP-000243-CTR-000600 |
Severity: medium |
CCI: CCI-001090 |
Vulnerability Id: V-257549 |
Vulnerability Discussion
Virtual syscalls are a mechanism that allows user-space programs to make privileged system calls without transitioning to kernel mode. However, this feature can introduce additional security risks. Disabling virtual syscalls helps to mitigate potential vulnerabilities associated with this mechanism. By reducing the attack surface and limiting the ways in which user-space programs can interact with the kernel, OpenShift can enhance the overall security posture of the platform.
Check
Check the current CoreOS boot loader configuration has virtual syscalls disabled by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep vsyscall=none /boot/loader/entries/*.conf || echo "not found"' 2>/dev/null; done
If "vsyscall" is not set to "none" or returns "not found", this is a finding.
Fix
Apply the machine config to disable virtual syscalls by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 05-kernelarg-vsyscall-none-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
kernelArguments:
- vsyscall=none
" | oc apply -f -
done
OpenShift must enable poisoning of SLUB/SLAB objects.
STIG ID:
CNTR-OS-000580 |
SRG: SRG-APP-000243-CTR-000600 |
Severity: medium |
CCI: CCI-001090 |
Vulnerability Id: V-257550 |
Vulnerability Discussion
By enabling poisoning of SLUB/SLAB objects, OpenShift can detect and identify use-after-free scenarios more effectively. The poisoned objects are marked as invalid or inaccessible, causing crashes or triggering alerts when an application attempts to access them. This helps identify and mitigate potential security vulnerabilities before they can be exploited.
Check
Verify that Red Hat Enterprise Linux CoreOS (RHCOS) is configured to enable poisoning of SLUB/SLAB objects to mitigate use-after-free vulnerabilities by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep slub_debug /boot/loader/entries/*.conf ' 2>/dev/null; done
If "slub_debug" is not set to "P" or is missing, this is a finding.
Fix
Apply the machine config to enable poisoning of SLUB/SLAB objects by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 05-kernelarg-slub-debug-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
kernelArguments:
- slub_debug=P
" | oc apply -f -
done
OpenShift must set the sticky bit for world-writable directories.
STIG ID:
CNTR-OS-000590 |
SRG: SRG-APP-000243-CTR-000600 |
Severity: medium |
CCI: CCI-001090 |
Vulnerability Id: V-257551 |
Vulnerability Discussion
Removing world-writable permissions or setting the sticky bit helps enforce access control on directories within the OpenShift platform. World-writable permissions allow any user to modify or delete files within the directory, which can introduce security risks. By removing these permissions or setting the sticky bit, OpenShift restricts modifications to the directory's owner and prevents unauthorized or unintended changes by other users.
Check
Verify that all world-writable directories have the sticky bit set. List any world-writable directories that do not have the sticky bit set by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; find / -type d \( -perm -0002 -a ! -perm -1000 ! -path "/var/lib/containers/*" ! -path "/var/lib/kubelet/pods/*" ! -path "/sysroot/ostree/deploy/*" \) -print 2>/dev/null' 2>/dev/null; done
If there are any directories listed in the results, this is a finding.
Fix
Fix the directory permissions by either removing the world-writable permission or setting the sticky bit, by executing the following:
oc debug node/<node_name> -- chroot /host /bin/bash -c 'chmod XXXX <directory>'
where
node_name: The name of the node to connect to (oc get node)
XXXX: Either 1777 (sticky bit) or 0755 (remove group and world write permission)
directory: The directory on which to correct the permissions
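Alternatively, a hedged sketch that applies the sticky bit to every directory reported by the check on every node, using the same find exclusions as the check (review the check output before running it, since 0755 may be the preferred correction for some directories):
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'find / -type d \( -perm -0002 -a ! -perm -1000 ! -path "/var/lib/containers/*" ! -path "/var/lib/kubelet/pods/*" ! -path "/sysroot/ostree/deploy/*" \) -exec chmod +t {} \; 2>/dev/null' 2>/dev/null; done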
OpenShift must restrict access to the kernel buffer.
STIG ID:
CNTR-OS-000600 |
SRG: SRG-APP-000243-CTR-000600 |
Severity: medium |
CCI: CCI-001090 |
Vulnerability Id: V-257552 |
Vulnerability Discussion
Restricting access to the kernel buffer in OpenShift is crucial for preventing unauthorized access, protecting system stability, mitigating kernel-level attacks, preventing information leakage, and adhering to the principle of least privilege. It enhances the security posture of the platform and helps maintain the confidentiality, integrity, and availability of critical system resources.
Check
Verify the Red Hat Enterprise Linux CoreOS (RHCOS) is configured to restrict access to the kernel message buffer.
Check the status of the kernel.dmesg_restrict kernel parameter by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; sysctl kernel.dmesg_restrict' 2>/dev/null; done
If "kernel.dmesg_restrict" is not set to "1" or is missing, this is a finding.
Fix
Apply the machine config to restrict access to the kernel message buffer by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-sysctl-kernel-dmesg-restrict-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,kernel.dmesg_restrict%3D1%0A
mode: 0644
path: /etc/sysctl.d/75-sysctl_kernel_dmesg_restrict.conf
overwrite: true
" | oc apply -f -
done
OpenShift must prevent kernel profiling.
STIG ID:
CNTR-OS-000610 |
SRG: SRG-APP-000243-CTR-000600 |
Severity: medium |
CCI: CCI-001090 |
Vulnerability Id: V-257553 |
Vulnerability Discussion
Kernel profiling involves monitoring and analyzing the behavior of the kernel, including its internal operations and system calls. This level of access and visibility into the kernel can potentially be exploited by attackers to gather sensitive information or launch attacks. By preventing kernel profiling, the attack surface is minimized and the risk of unauthorized access or malicious activities targeting the kernel is reduced.
Kernel profiling can introduce additional overhead and resource utilization, potentially impacting the stability and performance of the system. Profiling tools and techniques often involve instrumenting the kernel code, injecting hooks, or collecting detailed data, which may interfere with the normal operation of the kernel. By disallowing kernel profiling, OpenShift helps ensure the stability and reliability of the platform, preventing any potential disruptions caused by profiling activities.
Check
Verify the Red Hat Enterprise Linux CoreOS (RHCOS) is configured to prevent kernel profiling by unprivileged users.
Check the status of the kernel.perf_event_paranoid kernel parameter by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; sysctl kernel.perf_event_paranoid' 2>/dev/null; done
If "kernel.perf_event_paranoid" is not set to "2" or is missing, this is a finding.
Fix
Apply the machine config to prevent kernel profiling by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-sysctl-kernel-perf-event-paranoid-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,kernel.perf_event_paranoid%3D2%0A
mode: 0644
path: /etc/sysctl.d/75-sysctl_kernel_perf_event_paranoid.conf
overwrite: true
" | oc apply -f -
done
OpenShift must restrict individuals the ability to launch organizational-defined Denial-of-Service (DOS) attacks against other information systems by setting a default Resource Quota.
STIG ID:
CNTR-OS-000620 |
SRG: SRG-APP-000246-CTR-000605 |
Severity: medium |
CCI: CCI-001094 |
Vulnerability Id: V-257554 |
Vulnerability Discussion
OpenShift allows administrators to define resource quotas on a namespace basis. This allows tailoring of the shared resources based on a project needs. However, when a new project is created, unless a default project resource quota is configured, that project will not have any limits or quotas defined. This could allow someone to create a new project and then deploy services that exhaust or overuse the shared cluster resources. Thus, it is necessary to ensure that there is a default resource quota configured for all new projects. A Cluster Admin may increase resource quotas on a given project namespace, if that project requires additional resources at any time.
Check
Check for Resource Quota. Verify a default project template is defined by executing the following:
oc get project.config.openshift.io/cluster -o jsonpath="{.spec.projectRequestTemplate.name}"
If no project request template is in use by the project config, this is a finding.
Verify the project template includes a default resource quota.
oc get templates/<template_name> -n openshift-config -o jsonpath="{.objects[?(.kind=='ResourceQuota')]}{'\n'}"
Replace <template_name> with the name of the project request template returned from the earlier query.
If the project template is not defined, or there are no ResourceQuota definitions in it, this is a finding.
Fix
Configure a default resource quota to protect resource over utilization by performing the following steps:
1. Create a bootstrap project template (if not already created) by executing the following:
oc adm create-bootstrap-project-template -o yaml > template.yaml
2. Edit the template and add a ResourceQuota object definition before the parameters section.
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: example
  spec:
    hard:
      persistentvolumeclaims: "10"
      requests.storage: "50Gi"
...
parameters:
3. Apply the project template to the cluster by executing the following:
oc create -f template.yaml -n openshift-config
4. Set the default cluster project request template by executing the following:
oc patch project.config.openshift.io/cluster --type=merge -p '{"spec":{"projectRequestTemplate":{"name": "<template_name>"}}}'
Details regarding the configuration of resource quotas can be reviewed at https://docs.openshift.com/container-platform/4.8/applications/quotas/quotas-setting-per-project.html.
OpenShift must restrict individuals the ability to launch organizational-defined Denial-of-Service (DOS) attacks against other information systems by rate-limiting.
STIG ID:
CNTR-OS-000630 |
SRG: SRG-APP-000246-CTR-000605 |
Severity: medium |
CCI: CCI-001094 |
Vulnerability Id: V-257555 |
Vulnerability Discussion
By setting rate limits, OpenShift can control the number of requests or connections allowed from a single source within a specific period. This prevents an excessive influx of requests that can overwhelm the application and degrade its performance or availability.
Setting rate limits also ensures fair resource allocation, prevents service degradation, protects backend systems, and enhances overall security. This helps maintain the availability, performance, and security of the applications hosted on the platform, contributing to a reliable and robust application infrastructure.
OpenShift has an option to set the rate limit for Routes (refer to link below) when creating new Routes. All routes outside the OpenShift namespaces and the kube namespaces must use the rate-limiting annotations.
https://docs.openshift.com/container-platform/4.9/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration
Check
Verify that all namespaces except those that start with kube-* or openshift-* use the rate-limiting annotation by executing the following:
oc get routes --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.annotations["haproxy.router.openshift.io/rate-limit-connections"] == "true" | not) | .metadata.name]'
If the above command returns any route names, this is a finding.
Fix
Add the haproxy.router.openshift.io/rate-limit-connections annotation to any routes outside the kube-* or openshift-* namespaces by executing the following:
oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/rate-limit-connections=true"
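For example (route and namespace names are placeholders), the rate-limit annotations documented at the link below can be combined to enable rate limiting and cap the HTTP request rate per client IP:
oc annotate route my-route -n my-namespace --overwrite=true "haproxy.router.openshift.io/rate-limit-connections=true" "haproxy.router.openshift.io/rate-limit-connections.rate-http=100"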
https://docs.openshift.com/container-platform/4.9/networking/routes/route-configuration.html
OpenShift must display an explicit logout message indicating the reliable termination of authenticated communication sessions.
STIG ID:
CNTR-OS-000650 |
SRG: SRG-APP-000297-CTR-000705 |
Severity: low |
CCI: CCI-002364 |
Vulnerability Id: V-257556 |
Vulnerability Discussion
The OpenShift CLI tool includes an explicit logout option.
The web console's default logout will invalidate the user's session token and redirect back to the console page, which will redirect the user to the authentication page. There is no explicit logout message. In addition, if the identity provider type is OIDC, the session token from the SSO provider will remain valid, which would effectively keep the user logged in. To correct this, the web console must be configured to redirect the user to a logout page. If using an OIDC provider, this would be the logout page for that provider.
Check
Verify the logout redirect setting in web console configuration is set by executing the following:
oc get console.config.openshift.io cluster -o jsonpath='{.spec.authentication.logoutRedirect}{"\n"}'
If nothing is returned, this is a finding.
Fix
Configure the web console's logout redirect to direct to an appropriate logout page. If OpenShift is configured to use an OIDC provider, then the redirect needs to first go to the OIDC provider's logout page, and then it can be redirected to another logout page as needed.
Run the following command to update the console:
oc patch console.config.openshift.io cluster --type merge -p '{"spec":{"authentication":{"logoutRedirect":"<LOGOUT_URL>"}}}'
where LOGOUT_URL is set to the logout page.
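For example, with a hypothetical identity provider logout endpoint:
oc patch console.config.openshift.io cluster --type merge -p '{"spec":{"authentication":{"logoutRedirect":"https://sso.example.mil/logout"}}}'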
Container images instantiated by OpenShift must execute using least privileges.
STIG ID:
CNTR-OS-000660 |
SRG: SRG-APP-000342-CTR-000775 |
Severity: high |
CCI: CCI-000382,CCI-001090,CCI-002233 |
Vulnerability Id: V-257557 |
Vulnerability Discussion
Container images running on OpenShift must support running as any arbitrary UID. OpenShift will then assign a random, nonprivileged UID to the running container instance. This avoids the risk from containers running with specific UIDs that could map to host service accounts, or an even greater risk of running as root level service.
OpenShift uses the default security context constraints (SCC), restricted, to prevent containers from running as root or other privileged user IDs. Pods must be configured to use an SCC policy that allows the container to run as a specific UID, including root(0) when approved. Only a cluster administrator may grant the change of an SCC policy.
https://docs.openshift.com/container-platform/4.8/openshift_images/create-images.html#images-create-guide-openshift_create-images
Satisfies: SRG-APP-000342-CTR-000775, SRG-APP-000142-CTR-000330, SRG-APP-000243-CTR-000595
Check
Check SCC:
1. Identify any SCC policy that allows containers to access the host network or filesystem resources, or allows privileged containers or where runAsUser is not MustRunAsRange by executing the following:
oc get scc -ojson | jq '.items[]|select(.allowHostIPC or .allowHostPID or .allowHostPorts or .allowHostNetwork or .allowHostDirVolumePlugin or .allowPrivilegedContainer or .runAsUser.type != "MustRunAsRange" )|.metadata.name,{"Group:":.groups},{"User":.users}'
For each SCC listed, if any of those users or groups are anything other than the following, this is a finding:
* system:cluster-admins
* system:nodes
* system:masters
* system:admin
* system:serviceaccount:openshift-infra:build-controller
* system:serviceaccount:openshift-infra:pv-recycler-controller
* system:serviceaccount:openshift-machine-api:machine-api-termination-handler
The group "system:authenticated" is the default group for any authenticated user, this group should only be associated with the restricted profile. If this group is listed under any other SCC Policy, or the restricted SCC policy has been altered to allow any of the nonpermitted actions, this is a finding.
2. Determine if there are any cluster roles or local roles that allow the use of nonpermitted SCC policies. The following commands will print the role's name and namespace, followed by a list of resource names and whether that resource is an SCC.
oc get clusterrole.rbac -ojson | jq -r '.items[]|select(.rules[]?|select( (.apiGroups[]? == ("security.openshift.io")) and (.resources[]? == ("securitycontextconstraints")) and (.verbs[]? == ("use"))))|.metadata.name,{"scc":(.rules[]?|select((.resources[]? == ("securitycontextconstraints"))).resourceNames[]?)}'
oc get role.rbac --all-namespaces -ojson | jq -r '.items[]|select(.rules[]?|select( (.apiGroups[]? == ("security.openshift.io")) and (.resources[]? == ("securitycontextconstraints")) and (.verbs[]? == ("use"))))|.metadata.name,{"scc":(.rules[]?|select((.resources[]? == ("securitycontextconstraints"))).resourceNames[]?)}'
Excluding platform-specific roles, identify any roles that allow use of nonpermitted SCC policies. For example, the following output shows that the role 'examplePrivilegedRole' allows use of the 'privileged' SCC.
examplePrivilegedRole
{
"scc": "privileged"
}
3. Determine if there are any role bindings to cluster or local roles that allow use of nonpermitted SCCs by executing the following:
oc get clusterrolebinding.rbac -ojson | jq -r '.items[]|select(.roleRef.kind == ("ClusterRole","Role") and .roleRef.name == (<CLUSTER_ROLES>))|{ "crb": .metadata.name, "roleRef": .roleRef, "subjects": .subjects}'
oc get rolebinding.rbac --all-namespaces -ojson | jq -r '.items[]|select(.roleRef.kind == ("ClusterRole","Role") and .roleRef.name == (<ROLES>))|{ "crb": .metadata.name, "roleRef": .roleRef, "subjects": .subjects}'
Where <CLUSTER_ROLES> and <ROLES> are comma-separated lists of the roles allowing use of nonpermitted SCC policies, as identified above. For example:
... .roleRef.name == ("system:openshift:scc:privileged","system:openshift:scc:hostnetwork","system:openshift:scc:hostaccess") ...
Excluding any platform namespaces (kube-*,openshift-*), if there are any rolebindings to roles that are not permitted, this is a finding.
Fix
For users and groups that are defined in the SCC policy, remove the users or groups by editing the corresponding SCC policy:
oc edit scc <scc_name>
The following instructions will remove the user or group from the cluster role binding for the SCC policy.
Remove a user from the SCC policy binding by executing the following:
oc adm policy remove-scc-from-user <scc_name> <user_name>
Remove a group from the SCC policy binding by executing the following:
oc adm policy remove-scc-from-group <scc_name> <group_name>
Remove a service account from the SCC policy binding by executing the following:
oc project <namespace>
oc adm policy remove-scc-from-user <scc_name> -z <service_account_name>
Remove any roles that allow use of nonpermitted SCC policies (excluding platform-defined roles) by executing the following:
oc delete clusterrole.rbac <role_name>
or
oc delete role.rbac <role_name> -n <namespace>
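For example, if the check showed the "system:authenticated" group bound to the "privileged" SCC, it could be removed with (SCC and group names here are illustrative of that case):
oc adm policy remove-scc-from-group privileged system:authenticated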
Red Hat Enterprise Linux CoreOS (RHCOS) must allocate audit record storage capacity to store at least one week's worth of audit records, when audit records are not immediately sent to a central audit record storage facility.
STIG ID:
CNTR-OS-000670 |
SRG: SRG-APP-000357-CTR-000800 |
Severity: low |
CCI: CCI-001849 |
Vulnerability Id: V-257558 |
Vulnerability Discussion
To ensure RHCOS has a sufficient storage capacity in which to write the audit logs, operating systems need to be able to allocate audit record storage capacity. The task of allocating audit record storage capacity is performed during initial installation of the operating system.
Check
Verify RHCOS allocates audit record storage capacity to store at least one week of audit records when audit records are not immediately sent to a central audit record storage facility.
Check the size of the partition to which audit records are written (with the example being /var/log/audit/) by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; df -h /var/log/audit/' 2>/dev/null; done
Filesystem Size Used Avail Use% Mounted on
/dev/sdb4 1.0T 27G 998G 3% /var
If the audit record partition is not allocated for sufficient storage capacity, this is a finding.
Note: The partition size needed to capture a week of audit records is based on the activity level of the system and the total storage capacity available. Typically, 10.0 GB of storage space for audit records should be sufficient. If the partition used is not exclusively for audit logs, then determine the amount of additional space needed to support the partition reserving enough space for audit logs.
Fix
Reinstall the cluster, generating custom ignition configs to allocate audit record storage capacity.
1. Generate manifest files for the cluster by executing the following:
openshift-install create manifests --dir <installation_directory>
2. Create a Butane config that configures an additional partition, using the following example content:
variant: openshift
version: 4.9.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/<device_name>
    partitions:
    - label: var
      start_mib: <partition_start_offset>
      size_mib: <partition_size>
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [rw, nodev, nosuid, noexec, ...]
    with_mount_unit: true
3. Create a manifest from the Butane config by executing the following:
butane <installation_directory>/98-var-partition.bu -o <installation_directory>/openshift/98-var-partition.yaml
4. Create the ignition config files by executing the following:
openshift-install create ignition-configs --dir <installation_directory>
OpenShift must configure Alert Manger Receivers to notify SA and ISSO of all audit failure events requiring real-time alerts.
STIG ID:
CNTR-OS-000690 |
SRG: SRG-APP-000360-CTR-000815 |
Severity: medium |
CCI: CCI-001858,CCI-002702 |
Vulnerability Id: V-257559 |
Vulnerability Discussion
It is critical for the appropriate personnel to be aware if a system is at risk of failing to process audit logs as required. Without a real-time alert, security personnel may be unaware of an impending failure of the audit capability and system operation may be adversely affected.
Alerts provide organizations with urgent messages. Real-time alerts provide these messages immediately (i.e., the time from event detection to alert occurs in seconds or less).
Satisfies: SRG-APP-000360-CTR-000815, SRG-APP-000474-CTR-001180
Check
Verify the AlertManager config includes a configured receiver.
1. From the Administrator perspective on the OpenShift web console, navigate to Administration >> Cluster Settings >> Configuration >> Alertmanager.
2. View the list of receivers and inspect the configuration.
3. Verify that at least one receiver is configured as either PagerDuty, Webhook, Email, or Slack according to the organization's policy.
If an alert receiver is not configured according to the organizational policy, this is a finding.
Fix
Create an alert notification receiver.
1. From the Administrator perspective on the OpenShift web console, navigate to Administration >> Cluster Settings >> Configuration >> Alertmanager.
2. Select "Create Receiver".
3. Set the name and choose a Receiver Type.
4. Complete the form per the organization's policy.
5. Click "Create".
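For reference, receivers created through the console are stored in the Alertmanager configuration (typically the alertmanager-main secret in the openshift-monitoring namespace). The following is a minimal sketch of what that configuration might contain for a webhook receiver; the receiver name and URL are hypothetical placeholders, not values required by this STIG:
route:
  receiver: Default
  routes:
  - match:
      severity: critical
    receiver: soc-webhook
receivers:
- name: Default
- name: soc-webhook
  webhook_configs:
  - url: https://alerts.example.mil/audit-failure-hook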
Refer to the following documentation for more information:
https://docs.openshift.com/container-platform/4.8/monitoring/managing-alerts.html#sending-notifications-to-external-systems_managing-alerts
OpenShift must enforce access restrictions and support auditing of the enforcement actions.
STIG ID:
CNTR-OS-000720 |
SRG: SRG-APP-000381-CTR-000905 |
Severity: medium |
CCI: CCI-001814,CCI-002234 |
Vulnerability Id: V-257560 |
Vulnerability Discussion
Enforcing access restrictions helps protect the OpenShift environment and its resources from unauthorized access, misuse, or malicious activities. By implementing access controls, OpenShift ensures that only authorized users or processes can access sensitive data, make changes to configurations, or perform privileged actions. This helps prevent unauthorized individuals or entities from compromising the system's security and integrity.
Enforcing access restrictions and auditing the enforcement actions ensures accountability for actions performed within the OpenShift environment. It helps identify the individuals or processes responsible for specific activities, whether they are legitimate actions or potential security breaches. This accountability discourages unauthorized or malicious behavior and supports incident response and forensic investigations.
Auditing the enforcement actions provides administrators with visibility into the system's security posture, access patterns, and potential security risks. It helps identify anomalies, detect suspicious activities, and monitor compliance with established security policies. This operational visibility enables timely detection and response to security incidents, ensuring the ongoing security and stability of the OpenShift environment.
Satisfies: SRG-APP-000381-CTR-000905, SRG-APP-000343-CTR-000780
Check
Verify OpenShift is configured to audit the execution of the "execve" system call by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e "execpriv" /etc/audit/audit.rules' 2>/dev/null; done
Confirm the following rules exist on each node:
-a always,exit -F arch=b32 -S execve -C uid!=euid -F euid=0 -k execpriv
-a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -k execpriv
-a always,exit -F arch=b32 -S execve -C gid!=egid -F egid=0 -k execpriv
-a always,exit -F arch=b64 -S execve -C gid!=egid -F egid=0 -k execpriv
If the above rules are not listed on each node, this is a finding.
Fix
Apply the machine config to audit the execution of "execve" by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-audit-rules-suid-privilege-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch=b32%20-S%20execve%20-C%20uid%21=euid%20-F%20euid=0%20-k%20execpriv%0A-a%20always%2Cexit%20-F%20arch=b64%20-S%20execve%20-C%20uid%21=euid%20-F%20euid=0%20-k%20execpriv%0A-a%20always%2Cexit%20-F%20arch=b32%20-S%20execve%20-C%20gid%21=egid%20-F%20egid=0%20-k%20execpriv%0A-a%20always%2Cexit%20-F%20arch=b64%20-S%20execve%20-C%20gid%21=egid%20-F%20egid=0%20-k%20execpriv%0A
mode: 0644
path: /etc/audit/rules.d/75-audit-suid-privilege-function.rules
overwrite: true
" | oc apply -f -
done
OpenShift must prevent the installation of patches, service packs, device drivers, or operating system components without verification they have been digitally signed using a certificate that is recognized and approved by the organization.
STIG ID:
CNTR-OS-000740 |
SRG: SRG-APP-000384-CTR-000915 |
Severity: medium |
CCI: CCI-001764 |
Vulnerability Id: V-257561 |
Vulnerability Discussion
Integrity of the OpenShift platform is handled by the cluster version operator. The cluster version operator will by default GPG verify the integrity of the release image before applying it. The release image contains a sha256 digest of machine-os-content which is used by the machine config operators for updates. On the host, the container runtime (podman) verifies the integrity of that sha256 when pulling the image before the machine config operator reads its content. Hence, there is end-to-end GPG-verified integrity for the operating system updates (as well as the rest of the cluster components which run as regular containers).
Check
To verify integrity of the cluster version, execute the following:
oc get clusterversion version
If the Cluster Version Operator is not installed or the AVAILABLE status is not set to "True", this is a finding.
Run the following command to retrieve the Cluster Version objects in the system:
oc get clusterversion version -o yaml
If "verified: true", under status history for each item is not present, this is a finding.
Fix
By default, the integrity of RHCOS is checked by the Cluster Version Operator on the OpenShift platform. If the integrity cannot be verified, the cluster must be reinstalled.
Refer to instructions:
https://docs.openshift.com/container-platform/4.10/installing/index.html
OpenShift must set server token max age no greater than eight hours.
STIG ID:
CNTR-OS-000760 |
SRG: SRG-APP-000400-CTR-000960 |
Severity: medium |
CCI: CCI-002007 |
Vulnerability Id: V-257562 |
Vulnerability Discussion
The setting for OAuth server token max age is used to control the maximum duration for which an issued OAuth access token remains valid. Access tokens serve as a form of authentication and authorization in OAuth-based systems. By setting a maximum age for these tokens, OpenShift helps mitigate security risks associated with long-lived tokens. If a token is compromised, its impact is limited to the maximum age duration, as the token will expire and become invalid after that period. It reduces the window of opportunity for unauthorized access and enhances the security of the system.
By setting a maximum age for access tokens, OpenShift encourages the use of token refresh rather than relying on the same token for an extended period. Regular token refresh helps maintain a higher level of security by ensuring that tokens are periodically revalidated and rotated.
Check
To check if the OAuth server token max age is configured, execute the following:
oc get oauth cluster -ojsonpath='{.spec.tokenConfig.accessTokenMaxAgeSeconds}'
If the OAuth server timeout value returned is greater than "28800" or is missing, this is a finding.
Check the OAuth client token value (this can also be set on each client).
Check the token max age configuration of all OAuth clients by executing the following:
oc get oauthclients -ojson | jq -r '.items[] | { accessTokenMaxAgeSeconds: .accessTokenMaxAgeSeconds}'
If the output returns a timeout value of >"28800" for any client, this is a finding.
Fix
To set the OAuth server token max age, edit the OAuth server object by executing the following:
oc patch oauth cluster --type merge -p '{"spec":{"tokenConfig":{"accessTokenMaxAgeSeconds": 28800}}}'
To set the OAuth client token max age, edit the OAuth client object by executing the following:
for cli in $(oc get oauthclient -oname); do oc patch oauthclient $cli --type=merge -p '{"accessTokenMaxAgeSeconds": 28800}'; done
Vulnerability scanning applications must implement privileged access authorization to all OpenShift components, containers, and container images for selected organization-defined vulnerability scanning activities.
STIG ID:
CNTR-OS-000770 |
SRG: SRG-APP-000414-CTR-001010 |
Severity: medium |
CCI: CCI-001067 |
Vulnerability Id: V-257563 |
Vulnerability Discussion
OpenShift uses service accounts to provide applications running on or off the platform access to the API service using the enforced RBAC policies. Vulnerability scanning applications that need access to the container platform may use a service account to grant that access. That service account can then be bound to the appropriate role required. The highest level of access granted is the cluster-admin role. Any account bound to that role can access and modify anything on the platform. It is strongly recommended to limit the number of accounts bound to that role. Instead, there are other predefined cluster-level roles that may support the scanning tool, such as the view or edit cluster roles. Additionally, custom roles may be defined to tailor-fit access as needed by the scanning tools.
Check
If no vulnerability scanning tool is used, this requirement is Not Applicable.
Identify the service accounts used by the vulnerability scanning tools. If the tool runs as a container on the platform, then service account information can be found in the pod details by executing the following:
(Use "oc get pods" to list the pods.)
oc get pod -o jsonpath='{.spec.serviceAccount}{"\n"}'
If no service account exists for the vulnerability scanning tool, this is a finding.
View cluster role bindings to determine which role the service account is bound to by executing the following:
oc get clusterrolebinding -ojson | jq '.items[]|select(.subjects[]?|select(.kind == "ServiceAccount" and .name == "ingress-to-route-controller"))|{ "crb": .metadata.name, "roleRef": .roleRef, "subjects": .subjects}'
Find the role to which the service account is bound. If the service account is not bound to a cluster role, or the role does not provide sufficient access, this is a finding.
Fix
If no vulnerability scanning tool is used, this requirement is Not Applicable.
Create a service account if one does not already exist.
Change to the appropriate namespace by executing the following:
oc project
Create Service Account in the Project by executing the following:
oc create sa
Verify creation of the Service Account by executing the following:
oc get sa | grep
Bind to the appropriate cluster RBAC role by executing the following:
oc adm policy add-cluster-role-to-user -z
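As an end-to-end illustration only, the following sketch uses hypothetical names (a "vuln-scan" namespace, a "scanner-sa" service account, and the built-in "view" cluster role); substitute the namespace, account name, and role required by the organization's scanning tool:
oc project vuln-scan
oc create sa scanner-sa
oc get sa | grep scanner-sa
oc adm policy add-cluster-role-to-user view -z scanner-sa -n vuln-scan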
For more information, refer to the following guides:
https://docs.openshift.com/container-platform/4.8/authentication/using-rbac.html
https://docs.openshift.com/container-platform/4.8/authentication/understanding-and-creating-service-accounts.html
https://docs.openshift.com/container-platform/4.8/authentication/using-service-accounts-in-applications.html
OpenShift keystore must implement encryption to prevent unauthorized disclosure of information at rest within the container platform.
STIG ID:
CNTR-OS-000780 |
SRG: SRG-APP-000429-CTR-001060 |
Severity: medium |
CCI: CCI-002476 |
Vulnerability Id: V-257564 |
Vulnerability Discussion
By default, etcd data is not encrypted in OpenShift Container Platform. Enable etcd encryption for the cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties. When users enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:
Secrets
Config maps
Routes
OAuth access tokens
OAuth authorize tokens
When users enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. Users must have these keys to restore from an etcd backup.
Check
Review the API server encryption configuration by executing the following:
oc edit apiserver
EXAMPLE OUTPUT
spec:
encryption:
type: aescbc
If the encryption type is not "aescbc", this is a finding.
Fix
Set API encryption type by executing the following:
oc edit apiserver
Set the encryption field type to aescbc:
spec:
encryption:
type: aescbc
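The same change can also be applied non-interactively; a minimal sketch assuming the default APIServer object named "cluster":
oc patch apiserver cluster --type merge -p '{"spec":{"encryption":{"type":"aescbc"}}}'
Encryption of the existing etcd data proceeds in the background after the change and may take some time to complete.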
Additional details about the configuration can be found in the documentation:
https://docs.openshift.com/container-platform/4.8/security/encrypting-etcd.html
OpenShift must protect against or limit the effects of all types of Denial-of-Service (DoS) attacks by employing organization-defined security safeguards by including a default resource quota.
STIG ID:
CNTR-OS-000800 |
SRG: SRG-APP-000435-CTR-001070 |
Severity: medium |
CCI: CCI-002385 |
Vulnerability Id: V-257565 |
Vulnerability Discussion
DoS attacks that are internal to the container platform (exploited or otherwise malicious applications) can have a limited blast radius by adhering to least-privilege RBAC and network access controls:
https://docs.openshift.com/container-platform/4.8/post_installation_configuration/network-configuration.html#post-install-configuring-network-policy
Additionally, applications can be further limited using the OpenShift Service Mesh Operator.
DoS attacks coming from outside the cluster (ingress) can also be limited using an external cloud load balancer or by using 3scale API Gateway:
https://docs.openshift.com/container-platform/4.8/security/container_security/security-platform.html
Resource quotas must be set on a given namespace or across multiple namespaces. Using resource quotas will help to mitigate a DoS attack by limiting how much CPU, memory, and pods may be consumed in a project. This helps protect other projects (namespaces) from being denied resources to process.
https://docs.openshift.com/container-platform/4.8/applications/quotas/quotas-setting-per-project.html
Check
Verify the new project template includes a default resource quota by executing the following:
oc get templates/project-request -n openshift-config -o jsonpath="{.objects[?(.kind=='ResourceQuota')]}{'\n'}"
Review the ResourceQuota definition. If nothing is returned, this is a finding.
Fix
Configure a default resource quota as necessary to protect resource over utilization.
1. Create a bootstrap project template by executing the following:
oc adm create-bootstrap-project-template -o yaml > template.yaml
2. Edit the template and add a ResourceQuota object definition before the parameters section.
- apiVersion: v1
kind: ResourceQuota
metadata:
name: example
spec:
hard:
persistentvolumeclaims: "10"
requests.storage: "50Gi"
...
parameters:
3. Apply the project template to the cluster by executing the following:
oc create -f template.yaml -n openshift-config
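Note: For new projects to pick up the modified template, the cluster project configuration typically must reference it. The following is a sketch assuming the template is named "project-request", the default name produced by step 1:
oc patch project.config.openshift.io/cluster --type merge -p '{"spec":{"projectRequestTemplate":{"name":"project-request"}}}'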
Details regarding the configuration of resource quotas can be reviewed at https://docs.openshift.com/container-platform/4.8/applications/quotas/quotas-setting-per-project.html.
OpenShift must protect against or limit the effects of all types of Denial-of-Service (DoS) attacks by defining resource quotas on a namespace.
STIG ID:
CNTR-OS-000810 |
SRG: SRG-APP-000435-CTR-001070 |
Severity: medium |
CCI: CCI-001094,CCI-002385,CCI-002824 |
Vulnerability Id: V-257566 |
Vulnerability Discussion
OpenShift allows administrators to define resource quotas on a namespace basis. This allows tailoring of the shared resources based on a project needs. However, when a new project is created, unless a default project resource quota is configured, that project will not have any limits or quotas defined. This could allow someone to create a new project and then deploy services that exhaust or overuse the shared cluster resources.
It is necessary to ensure that all existing namespaces with user-defined workloads have an applied resource quota configured.
Using resource quotas will help to mitigate a DoS attack by limiting how much CPU, memory, and pods may be consumed in a project. This helps protect other projects (namespaces) from being denied resources to process.
https://docs.openshift.com/container-platform/4.8/applications/quotas/quotas-setting-per-project.html
Satisfies: SRG-APP-000435-CTR-001070, SRG-APP-000246-CTR-000605, SRG-APP-000450-CTR-001105
Check
Note: CNTR-OS-000140 is a prerequisite to this control. A Network Policy must exist to run this check.
Verify that each user namespace has a ResourceQuota defined by executing the following:
for ns in $(oc get namespaces -ojson | jq -r '.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default") | .metadata.name '); do oc get resourcequota -n$ns; done
If the above returns any lines saying "No resources found in namespace.", this is a finding. Empty output is not a finding.
Fix
Add a resource quota to an existing project namespace by performing the following steps:
1. Create a .yaml file and insert the desired resource quota content. The following is an example resource quota definition.
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
namespace:
spec:
hard:
pods: "4"
requests.cpu: "1"
requests.memory: 1Gi
requests.ephemeral-storage: 2Gi
limits.cpu: "2"
limits.memory: 2Gi
limits.ephemeral-storage: 4Gi
2. Apply the ResourceQuota definition to the project namespace by executing the following:
oc apply -f .yaml -n
Details regarding the configuration of resource quotas can be reviewed at https://docs.openshift.com/container-platform/4.8/applications/quotas/quotas-setting-per-project.html.
OpenShift must protect the confidentiality and integrity of transmitted information.
STIG ID:
CNTR-OS-000820 |
SRG: SRG-APP-000439-CTR-001080 |
Severity: medium |
CCI: CCI-002418 |
Vulnerability Id: V-257567 |
Vulnerability Discussion
OpenShift provides two types of application-level ingress: Routes and Ingresses. Routes have been a part of OpenShift since version 3. Ingresses were promoted out of beta in August 2020 (Kubernetes v1.19). Routes provide three types of TLS configuration options: Edge, Passthrough, and Re-encrypt. Each of those options provides TLS encryption over HTTP for inbound transmissions originating outside the cluster. Ingresses have an associated IngressController that manages the routing and proxying of inbound transmissions.
Check
Verify that routes and ingress are using secured transmission ports and protocols by executing the following:
oc get routes --all-namespaces
Review the ingress ports. If the Ingress is not using a secure TLS transport, this is a finding.
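To narrow the review, a jq sketch such as the following can surface routes that have no TLS configuration at all; routes that do set tls still need to be reviewed for the termination type required by the organization:
oc get routes --all-namespaces -o json | jq -r '.items[] | select(.spec.tls == null) | "\(.metadata.namespace)/\(.metadata.name)"'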
Fix
Delete any Route or Ingress that does not use a secure transport.
oc delete route -n
or
oc delete ingress -n
Red Hat Enterprise Linux CoreOS (RHCOS) must implement nonexecutable data to protect its memory from unauthorized code execution.
STIG ID:
CNTR-OS-000860 |
SRG: SRG-APP-000450-CTR-001105 |
Severity: medium |
CCI: CCI-002824 |
Vulnerability Id: V-257568 |
Vulnerability Discussion
The NX bit is a hardware feature that prevents the execution of code from data memory regions. By enabling NX bit execute protection, OpenShift ensures that malicious code or exploits cannot execute from areas of memory that are intended for data storage. This helps protect against various types of buffer overflow attacks, where an attacker attempts to inject and execute malicious code in data memory.
Check
Verify the NX (no-execution) bit flag is set on the system by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; dmesg | grep Execute ' 2>/dev/null; done
Example Output:([ 0.000000] NX (Execute Disable) protection: active)
If "dmesg" does not show "NX (Execute Disable) protection active", check the cpuinfo settings by executing the following command:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep nx /proc/cpuinfo | uniq' 2>/dev/null; done
(Example Output: flags : fpu vme de pse tsc ms nx rdtscp lm constant_tsc...)
If "flags" does not contain the "nx" flag, this is a finding.
Fix
The NX bit execute protection must be enabled in the system BIOS. The nodes must be reinstalled. Follow the steps found here for more information:
https://access.redhat.com/solutions/2936741
Red Hat Enterprise Linux CoreOS (RHCOS) must implement Address Space Layout Randomization (ASLR) to protect its memory from unauthorized code execution.
STIG ID:
CNTR-OS-000870 |
SRG: SRG-APP-000450-CTR-001105 |
Severity: medium |
CCI: CCI-002824 |
Vulnerability Id: V-257569 |
Vulnerability Discussion
ASLR is a security technique that randomizes the memory layout of processes, making it more difficult for attackers to predict the location of system components and exploit memory-based vulnerabilities. By implementing ASLR, OpenShift reduces the effectiveness of common attacks such as buffer overflow, return-oriented programming (ROP), and other memory corruption exploits.
ASLR enhances the resilience of the OpenShift platform by introducing randomness into the memory layout. This randomization makes it harder for attackers to exploit vulnerabilities and launch successful attacks. Even if a vulnerability exists in the system, the randomized memory layout introduced by ASLR reduces the chances of the attacker being able to reliably exploit it, increasing the overall security of the platform.
ASLR is particularly effective in mitigating remote code execution attacks. By randomizing the memory layout, ASLR prevents attackers from precisely predicting the memory addresses needed to execute malicious code. This makes it significantly more challenging for attackers to successfully exploit vulnerabilities and execute arbitrary code on the system.
Protection of Shared Libraries: ASLR also protects shared libraries used by applications running on OpenShift. By randomizing the base addresses of shared libraries, ASLR makes it harder for attackers to leverage vulnerabilities in shared libraries to compromise applications or gain unauthorized access to the system. It adds an extra layer of protection to prevent attacks targeting shared library vulnerabilities.
Check
Verify Red Hat Enterprise Linux CoreOS (RHCOS) implements ASLR by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; sysctl kernel.randomize_va_space' 2>/dev/null; done
If "kernel.randomize_va_space" is not set to "2", this is a finding.
Fix
Apply the machine config to implement ASLR by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-sysctl-kernel-randomize-va-space-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,kernel.randomize_va_space%3D2%0A
mode: 0644
path: /etc/sysctl.d/75-sysctl_kernel_randomize_va_space.conf
overwrite: true
" | oc apply -f -
done
OpenShift must remove old components after updated versions have been installed.
STIG ID:
CNTR-OS-000880 |
SRG: SRG-APP-000454-CTR-001110 |
Severity: medium |
CCI: CCI-002617 |
Vulnerability Id: V-257570 |
Vulnerability Discussion
Previous versions of OpenShift components that are not removed from the container platform after updates have been installed may be exploited by adversaries by causing older components to execute which contain vulnerabilities. When these components are deleted, the likelihood of this happening is removed.
Satisfies: SRG-APP-000454-CTR-001110, SRG-APP-000454-CTR-001115
Check
Ensure the imagepruner is configured and is not in a suspended state by executing the following:
oc get imagepruners.imageregistry.operator.openshift.io/cluster -o jsonpath='{.spec}{"\n"}'
Review the settings. If "suspend" is set to "true", this is a finding.
Fix
Enable the image pruner to automate the pruning of images from the cluster by executing the following:
oc patch imagepruners.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"suspend":false}}'
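In addition to unsuspending the pruner, the pruning cadence and retention can be tuned on the same object; a sketch using hypothetical values (a daily run at midnight, keeping three tag revisions):
oc patch imagepruners.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"suspend":false,"schedule":"0 0 * * *","keepTagRevisions":3}}'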
For additional details on configuring the image pruner operator, refer to the following document:
https://docs.openshift.com/container-platform/4.8/applications/pruning-objects.html#pruning-images_pruning-objects
OpenShift must contain the latest images with most recent updates and execute within the container platform runtime as authorized by IAVM, CTOs, DTMs, and STIGs.
STIG ID:
CNTR-OS-000890 |
SRG: SRG-APP-000456-CTR-001125 |
Severity: medium |
CCI: CCI-002605 |
Vulnerability Id: V-257571 |
Vulnerability Discussion
It is critical to the security and stability of the container platform and the software services running on the platform to ensure that images are deployed through a trusted software supply chain. The OpenShift platform can be configured to limit and control which image source repositories may be used by the platform and the users of the platform. Configuring the platform to only allow users to deploy images from trusted sources lowers the risk of a user deploying unsafe or untested images that would be detrimental to the security and stability of the platform.
To help users manage images, OpenShift uses image streams to provide a level of abstraction for the users. In this way, users can trigger automatic redeployments as images are updated. It is also possible to configure the image stream to periodically check the image source repository for any updates and automatically pull in the latest updates.
Check
Verify the image source policy is configured by executing the following:
oc get image.config.openshift.io/cluster -o jsonpath='{.spec.registrySources}{"\nAllowedRegistriesForImport: "}{.spec.allowedRegistriesForImport}{"\n"}'
If nothing is returned, this is a finding.
If the registries listed under allowedRegistries, insecureRegistries, or AllowedRegistriesForImport are not from trusted sources as defined by the organization, this is a finding.
Fix
Edit the cluster image config resource to define the allowed registries by executing the following:
oc edit image.config.openshift.io/cluster
The following is an example configuration. For a detailed explanation of the configuration properties, refer to https://docs.openshift.com/container-platform/4.8/openshift_images/image-configuration.html.
----------------------------------------------------------------------
apiVersion: config.openshift.io/v1
kind: Image
metadata:
annotations:
release.openshift.io/create-only: "true"
creationTimestamp: "2019-05-17T13:44:26Z"
generation: 1
name: cluster
resourceVersion: "8302"
selfLink: /apis/config.openshift.io/v1/images/cluster
uid: e34555da-78a9-11e9-b92b-06d6c7da38dc
spec:
allowedRegistriesForImport:
- domainName: quay.io
insecure: false
additionalTrustedCA:
name: myconfigmap
registrySources:
allowedRegistries:
- example.com
- quay.io
- registry.redhat.io
- image-registry.openshift-image-registry.svc:5000
- reg1.io/myrepo/myapp:latest
insecureRegistries:
- insecure.com
status:
internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
----------------------------------------------------------------------
OpenShift runtime must have updates installed within the period directed by an authoritative source (e.g., IAVM, CTOs, DTMs, and STIGs).
STIG ID:
CNTR-OS-000900 |
SRG: SRG-APP-000456-CTR-001130 |
Severity: medium |
CCI: CCI-002605 |
Vulnerability Id: V-257572 |
Vulnerability Discussion
OpenShift runtime must be carefully monitored for vulnerabilities, and when problems are detected, they must be remediated quickly. A vulnerable runtime exposes all containers it supports, as well as the host itself, to potentially significant risk. Organizations must use tools to look for Common Vulnerabilities and Exposures (CVEs) in the runtimes deployed, to upgrade any instances at risk, and to ensure that orchestrators only allow deployments to properly maintained runtimes.
Satisfies: SRG-APP-000456-CTR-001130, SRG-APP-000456-CTR-001125
Check
To list all the imagestreams and identify which imagestream tags are configured to periodically check for updates (importPolicy = { scheduled: true }), execute the following:
oc get imagestream --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{range .spec.tags[*]}{"\t"}{.name}{": "}{.importPolicy}{"\n"}{end}{end}'
The output will be similar to:
httpd
2.4: {}
2.4-el7: {}
2.4-el8: {}
latest: {}
:
installer
latest: {"scheduled":true}
:
installer-artifacts
latest: {"scheduled":true}
:
Review the listing. For each imagestream tag that should check for updates but does not have the value '{"scheduled":true}', this is a finding.
Fix
For container images that are not scheduled to check for updates that otherwise should, update the imagestream to schedule updates for each tag by executing the following:
oc patch imagestream NAME -n NAMESPACE --type merge -p '{"spec":{"tags":[{"name":"TAG_NAME","importPolicy":{"scheduled":true}}]}}'
where,
NAME: The imagestream name to update
NAMESPACE: The namespace the imagestream is in. This will most often be 'openshift'.
TAG_NAME: The imagestream tag to update
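For example, to schedule update checks for the "latest" tag of the "httpd" imagestream shown in the sample output above, the patch might look like the following sketch. Note that a JSON merge patch replaces the entire tags list, so include every tag that should remain defined:
oc patch imagestream httpd -n openshift --type merge -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"scheduled":true}}]}}'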
The Compliance Operator must be configured.
STIG ID:
CNTR-OS-000910 |
SRG: SRG-APP-000472-CTR-001170 |
Severity: medium |
CCI: CCI-002696 |
Vulnerability Id: V-257573 |
Vulnerability Discussion
The Compliance Operator enables continuous compliance monitoring within OpenShift. It regularly assesses the environment against defined compliance policies and automatically detects and reports any deviations. This helps organizations maintain a proactive stance towards compliance, identify potential issues in real-time, and take corrective actions promptly.
The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster.
The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content. This allows an organization to define organizational policy to align with the SSP, combine it with standardized vendor-provided content, and periodically scan the platform in accordance with organization-defined policy.
Check
If Red Hat OpenShift Compliance Operator is not used, this check is Not Applicable.
Note: If Red Hat OpenShift Compliance Operator is not used, run the checks manually.
Review the cluster configuration to validate that all required security functions are being validated with the Compliance Operator.
To determine if any scans have been applied to the cluster and the status of the scans, execute the following:
oc get compliancescan -n openshift-compliance
Example output:
NAME PHASE RESULT
ocp4-cis DONE NON-COMPLIANT
ocp4-cis-manual DONE NON-COMPLIANT
ocp4-cis-node-master DONE NON-COMPLIANT
ocp4-cis-node-master-manual DONE NON-COMPLIANT
ocp4-cis-node-worker DONE NON-COMPLIANT
ocp4-cis-node-worker-manual DONE NON-COMPLIANT
ocp4-moderate DONE NON-COMPLIANT
ocp4-moderate-manual DONE NON-COMPLIANT
rhcos4-moderate-master DONE NON-COMPLIANT
rhcos4-moderate-master-manual DONE NON-COMPLIANT
rhcos4-moderate-worker DONE NON-COMPLIANT
rhcos4-moderate-worker-manual DONE NON-COMPLIANT
If no ComplianceScan names return, the scans do not align to the organizationally-defined appropriate security functions, the command returns with an error, or any of the results show "NON-COMPLIANT" as their result, then this is a finding.
Fix
If Red Hat OpenShift Compliance Operator is not used, this check is Not Applicable.
The compliance operator must be leveraged to ensure that components are configured in alignment with the SSP. Install the Compliance Operator by executing the following:
oc apply -f - << 'EOF'
---
apiVersion: project.openshift.io/v1
kind: Project
metadata:
labels:
kubernetes.io/metadata.name: openshift-compliance
openshift.io/cluster-monitoring: "true"
name: openshift-compliance
spec: {}
...
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: compliance-operator
namespace: openshift-compliance
spec:
targetNamespaces:
- openshift-compliance
...
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: compliance-operator
namespace: openshift-compliance
spec:
channel: release-0.1
installPlanApproval: Automatic
name: compliance-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
...
EOF
Following installation of the Compliance Operator, a ScanSettingBinding object that configures the Compliance Operator to use the desired profile sets must be created. TailoredProfiles enable customization of controls to meet specific organizational controls defined in the SSP and can be based on existing profiles or written from scratch in standard SCAP format. If users have the definition for ScanSettingBinding that aligns profiles with ScanSettings in a YAML file named my-scansettingbinding.yml, users would apply that ScanSettingBinding by executing the following:
oc apply -f my-scansettingbinding.yml -n openshift-compliance
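A minimal sketch of such a my-scansettingbinding.yml, assuming the vendor-provided ocp4-moderate and rhcos4-moderate profiles and the default ScanSetting (the binding name and profile selection are illustrative and must be aligned with the SSP):
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: stig-scan
  namespace: openshift-compliance
profiles:
- name: ocp4-moderate
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
- name: rhcos4-moderate
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1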
For more information about the compliance operator and its use, including the creation of TailoredProfiles and the ScanSettings available to meet specific security functions or organizational goals defined in the SSP, refer to https://docs.openshift.com/container-platform/4.8/security/compliance_operator/compliance-operator-understanding.html.
OpenShift must perform verification of the correct operation of security functions: upon startup and/or restart; upon command by a user with privileged access; and/or every 30 days.
STIG ID:
CNTR-OS-000920 |
SRG: SRG-APP-000473-CTR-001175 |
Severity: medium |
CCI: CCI-002699 |
Vulnerability Id: V-257574 |
Vulnerability Discussion
Security functionality includes, but is not limited to, establishing system accounts, configuring access authorization (i.e., permissions, privileges), setting events to be audited, and setting intrusion detection parameters.
The Compliance Operator enables continuous compliance monitoring within OpenShift. It regularly assesses the environment against defined compliance policies and automatically detects and reports any deviations. This helps organizations maintain a proactive stance towards compliance, identify potential issues in real-time, and take corrective actions promptly.
The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster.
The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content. This allows an organization to define organizational policy to align with the SSP, combine it with standardized vendor-provided content, and periodically scan the platform in accordance with organization-defined policy.
Check
If Red Hat OpenShift Compliance Operator is not used, this check is Not Applicable.
Review the cluster configuration to validate that all required security functions are being validated with the Compliance Operator.
To map the schedule of every profile through its ScanSettingBinding and output the schedules on which each Profile or TailoredProfile is run, execute the following commands:
declare -A binding_profiles
declare -A binding_schedule
while read binding setting profiles; do binding_profiles[$binding]="$profiles"; binding_schedule[$binding]=$(oc get scansetting -n openshift-compliance $setting -ojsonpath='{.schedule}'); done < <(oc get scansettingbinding -n openshift-compliance -ojsonpath='{range .items[*]}{.metadata.name} {.settingsRef.name} {range .profiles[*]}{.name} {end}{"\n"}{end}')
for binding in "${!binding_profiles[@]}"; do for profile in ${binding_profiles[$binding]}; do echo "$profile: ${binding_schedule[$binding]}"; done; done
If any error is returned, this is a finding.
If the schedules are not at least monthly or within the organizationally defined periodicity, this is a finding.
Check the profiles that are bound to schedules by executing the following:
To determine which rules are enforced by the profiles that are currently bound to the scheduled periodicities, execute the following commands:
for binding in "${!binding_profiles[@]}"; do for profile in ${binding_profiles[$binding]}; do for rule in $(oc get profile.compliance $profile -n openshift-compliance -ojsonpath='{range .rules[*]}{@}{"\n"}{end}'); do echo "$rule: ${binding_schedule[$binding]}"; done; done; done | sort -u
If the profiles that are bound to schedules do not cover the organization-designed security functions, this is a finding.
Fix
If Red Hat OpenShift Compliance Operator is not used, this check is Not Applicable.
The compliance operator must be leveraged to ensure that components are configured in alignment with the SSP at a desired schedule. Install the Compliance Operator by executing the following:
oc apply -f - << 'EOF'
---
apiVersion: project.openshift.io/v1
kind: Project
metadata:
labels:
kubernetes.io/metadata.name: openshift-compliance
openshift.io/cluster-monitoring: "true"
name: openshift-compliance
spec: {}
...
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: compliance-operator
namespace: openshift-compliance
spec:
targetNamespaces:
- openshift-compliance
...
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: compliance-operator
namespace: openshift-compliance
spec:
channel: release-0.1
installPlanApproval: Automatic
name: compliance-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
...
EOF
Following installation of the Compliance Operator, a ScanSettingBinding object that configures the Compliance Operator to use the desired scan cadence must be created. If users have the definition for ScanSettingBinding in a YAML file named my-scansettingbinding.yml, users would apply that ScanSettingBinding by executing the following:
oc apply -f my-scansettingbinding.yml -n openshift-compliance
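The scan cadence itself is defined on the ScanSetting that the binding references. The following is a minimal sketch with a hypothetical monthly schedule (cron syntax) and the usual worker and master roles; depending on the operator version, additional fields such as rawResultStorage may also need to be set:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: monthly-setting
  namespace: openshift-compliance
schedule: "0 1 1 * *"
roles:
- worker
- master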
For more information about the compliance operator and its use, including the configurability of scheduling of scan cadence in ScanSetting resources and the role-based access control requirements for manually triggered scans, refer to https://docs.openshift.com/container-platform/4.8/security/compliance_operator/compliance-operator-understanding.html.
OpenShift must generate audit records when successful/unsuccessful attempts to modify privileges occur.
STIG ID:
CNTR-OS-000930 |
SRG: SRG-APP-000495-CTR-001235 |
Severity: medium |
CCI: CCI-000172 |
Vulnerability Id: V-257575 |
Vulnerability Discussion
Audit records provide a crucial source of information for security monitoring and incident response. By generating audit records for privilege modification attempts, OpenShift enables administrators and security teams to track and investigate any unauthorized or suspicious changes to privileges. These records serve as an essential source of evidence for detecting and responding to potential security incidents.
Audit records for unsuccessful attempts to modify privileges help in identifying unauthorized activities or potential attacks. If an unauthorized entity attempts to modify privileges, the audit records can serve as an early warning sign of a security threat. By monitoring and analyzing such records, administrators can detect and mitigate potential security breaches before they escalate.
Audit records play a vital role in forensic analysis and investigation. In the event of a security incident or suspected compromise, audit logs for privilege modifications provide valuable information for understanding the scope and impact of the incident.
Check
Verify OpenShift is configured to generate audit records when successful/unsuccessful attempts to modify privileges occur by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e "key=unsuccessful-create" -e "key=unsuccessful-modification" -e "key=delete" -e "key=unsuccessful-access" -e "actions" -e "key=perm_mod" -e "audit_rules_usergroup_modification" -e "module-change" -e "logins" /etc/audit/audit.rules' 2>/dev/null; done
Confirm the following rules exist on each node:
-w /etc/group -p wa -k audit_rules_usergroup_modification
-w /etc/gshadow -p wa -k audit_rules_usergroup_modification
-w /etc/passwd -p wa -k audit_rules_usergroup_modification
-w /etc/security/opasswd -p wa -k audit_rules_usergroup_modification
-w /etc/shadow -p wa -k audit_rules_usergroup_modification
-a always,exit -F arch=b32 -S openat,open_by_handle_at -F a2&0x40 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S openat,open_by_handle_at -F a2&0x40 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S open -F a1&0x40 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S open -F a1&0x40 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S openat,open_by_handle_at -F a2&0x40 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S openat,open_by_handle_at -F a2&0x40 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S open -F a1&0x40 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S open -F a1&0x40 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S creat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S creat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S creat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b64 -S creat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-create
-a always,exit -F arch=b32 -S openat,open_by_handle_at -F a2&0x203 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S openat,open_by_handle_at -F a2&0x203 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S open -F a1&0x203 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S open -F a1&0x203 -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S openat,open_by_handle_at -F a2&0x203 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S openat,open_by_handle_at -F a2&0x203 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S open -F a1&0x203 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S open -F a1&0x203 -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S truncate,ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S truncate,ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S truncate,ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b64 -S truncate,ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-modification
-a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-access
-a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-access
-a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-access
-a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-access
-w /etc/sudoers.d -p wa -k actions
-w /etc/sudoers -p wa -k actions
-a always,exit -F arch=b32 -S chmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S chown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S delete_module -F key=module-change
-a always,exit -F arch=b64 -S delete_module -F key=module-change
-a always,exit -F arch=b32 -S finit_module -F key=module-change
-a always,exit -F arch=b64 -S finit_module -F key=module-change
-a always,exit -F arch=b32 -S init_module -F key=module-change
-a always,exit -F arch=b64 -S init_module -F key=module-change
-w /var/log/lastlog -p wa -k logins
-a always,exit -F arch=b32 -S lchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S renameat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S renameat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b32 -S rename -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S rename -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b32 -S rmdir -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S rmdir -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b32 -S setxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S setxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S umount2 -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S umount2 -F auid>=1000 -F auid!=unset -F key=perm_mod
If the above rules are not listed on each node, this is a finding.
Fix
Apply the machine config to generate audit records when successful/unsuccessful attempts to modify privileges by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-modify-privileges-rules-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,%23%23%20Make%20the%20loginuid%20immutable.%20This%20prevents%20tampering%20with%20the%20auid.%0A--loginuid-immutable%0A
mode: 0644
path: /etc/audit/rules.d/11-loginuid.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20chmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20chmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-chmod_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20chown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20chown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-chown_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchmod_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchmodat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchmodat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchmodat_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchown_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchownat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchownat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchownat_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fremovexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fsetxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lchown_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lremovexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lsetxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-removexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20setxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20setxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-setxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20umount2%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20umount%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-umount_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20umount2%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20umount2%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-umount2_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/usermod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_usermod_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/unix_update%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_unix_update_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/kmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_kmod_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/setfacl%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_setfacl_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/chacl%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_chacl_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/chcon%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_chcon_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/semanage%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_semanage_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/setfiles%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_setfiles_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/setsebool%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_setsebool_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20rename%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20rename%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-rename-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20renameat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20renameat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-renameat-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20rmdir%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20rmdir%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-rmdir-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20unlink%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20unlink%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-unlink-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20unlinkat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20unlinkat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-unlinkat-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20delete_module%20-k%20module-change%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20delete_module%20-k%20module-change%0A
mode: 0644
path: /etc/audit/rules.d/75-kernel-module-loading-delete.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20finit_module%20-k%20module-change%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20finit_module%20-k%20module-change%0A
mode: 0644
path: /etc/audit/rules.d/75-kernel-module-loading-finit.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20init_module%20-k%20module-change%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20init_module%20-k%20module-change%0A
mode: 0644
path: /etc/audit/rules.d/75-kernel-module-loading-init.rules
overwrite: true
- contents:
source: data:,-w%20/var/log/lastlog%20-p%20wa%20-k%20logins%0A
mode: 0644
path: /etc/audit/rules.d/75-lastlog_login_events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20mount%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20mount%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-mount_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/chage%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_chage_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/chsh%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_chsh_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/crontab%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_crontab_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/gpasswd%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_gpasswd_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/newgrp%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_newgrp_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/pam_timestamp_check%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_pam_timestamp_check_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/passwd%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_passwd_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/libexec/openssh/ssh-keysign%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_libexec_openssh_ssh-keysign_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/su%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_su_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/sudo%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_sudo_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/sudoedit%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_sudoedit_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/unix_chkpwd%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_unix_chkpwd_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/userhelper%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_userhelper_execution.rules
overwrite: true
- contents:
source: data:,-w%20/etc/sudoers.d/%20-p%20wa%20-k%20actions%0A-w%20/etc/sudoers%20-p%20wa%20-k%20actions%0A
mode: 0644
path: /etc/audit/rules.d/75-audit-sysadmin-actions.rules
overwrite: true
- contents:
source: data:,%23%23%20This%20content%20is%20a%20section%20of%20an%20Audit%20config%20snapshot%20recommended%20for%20Red%2520Hat%2520Enterprise%2520Linux%2520CoreOS%25204%20systems%20that%20target%20OSPP%20compliance.%0A%23%23%20The%20following%20content%20has%20been%20retreived%20on%202019-03-11%20from%3A%20https%3A//github.com/linux-audit/audit-userspace/blob/master/rules/30-ospp-v42.rules%0A%0A%23%23%20The%20purpose%20of%20these%20rules%20is%20to%20meet%20the%20requirements%20for%20Operating%0A%23%23%20System%20Protection%20Profile%20%28OSPP%29v4.2.%20These%20rules%20depends%20on%20having%0A%23%23%2010-base-config.rules%2C%2011-loginuid.rules%2C%20and%2043-module-load.rules%20installed.%0A%0A%23%23%20Unsuccessful%20file%20creation%20%28open%20with%20O_CREAT%29%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20openat%2Copen_by_handle_at%20-F%20a2%260100%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20openat%2Copen_by_handle_at%20-F%20a2%260100%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%20-F%20a1%260100%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%20-F%20a1%260100%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20openat%2Copen_by_handle_at%20-F%20a2%260100%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20openat%2Copen_by_handle_at%20-F%20a2%260100%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%20-F%20a1%260100%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%20-F%20a1%260100%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20creat%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20creat%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20creat%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20creat%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-create%0A%0A%23%23%20Unsuccessful%20file%20modifications%20%28open%20for%20write%20or%20truncate%29%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20openat%2Copen_by_handle_at%20-F%20a2%2601003%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20openat%2Copen_by_handle_at%20-F%20a2%2601003%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%20-F%20a1%2601003%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%20-F%20a1%2601003%20-F%20exit%3D-E
ACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20openat%2Copen_by_handle_at%20-F%20a2%2601003%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20openat%2Copen_by_handle_at%20-F%20a2%2601003%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%20-F%20a1%2601003%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%20-F%20a1%2601003%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20truncate%2Cftruncate%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20truncate%2Cftruncate%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20truncate%2Cftruncate%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20truncate%2Cftruncate%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-modification%0A%0A%23%23%20Unsuccessful%20file%20access%20%28any%20other%20opens%29%20This%20has%20to%20go%20last.%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%2Ccreat%2Ctruncate%2Cftruncate%2Copenat%2Copen_by_handle_at%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-access%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%2Ccreat%2Ctruncate%2Cftruncate%2Copenat%2Copen_by_handle_at%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-access%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20open%2Ccreat%2Ctruncate%2Cftruncate%2Copenat%2Copen_by_handle_at%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-access%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20open%2Ccreat%2Ctruncate%2Cftruncate%2Copenat%2Copen_by_handle_at%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-access%0A
mode: 0644
path: /etc/audit/rules.d/30-ospp-v42-remediation.rules
overwrite: true
- contents:
source: data:,-w%20/etc/group%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_group_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/gshadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_gshadow_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/security/opasswd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_security_opasswd_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/passwd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_passwd_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/shadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_shadow_usergroup_modification.rules
overwrite: true
" | oc apply -f -
done
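The source values in these machine configs are percent-encoded audit rules. To review what a given entry writes to disk, the data URL can be decoded locally; a minimal sketch, assuming python3 is available on the administrative workstation (the encoded string below is the lastlog rule from the fix above):
# decode a percent-encoded MachineConfig source into a readable audit rule
python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1].split(",", 1)[1]), end="")' 'data:,-w%20/var/log/lastlog%20-p%20wa%20-k%20logins%0A'
The command prints -w /var/log/lastlog -p wa -k logins, which should match the corresponding rule in the check text.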
OpenShift must generate audit records when successful/unsuccessful attempts to modify security objects occur.
STIG ID:
CNTR-OS-000940 |
SRG: SRG-APP-000496-CTR-001240 |
Severity: medium |
CCI: CCI-000172 |
Vulnerability Id: V-257576 |
Vulnerability Discussion
OpenShift and its components must generate audit records when security objects are modified. All components must use the same standard so that the events can be tied together to establish what took place within the overall container platform, correlate and investigate the events relating to an incident, and identify those responsible.
Without audit record generation, unauthorized users can modify security objects for malicious purposes without detection, creating vulnerabilities within the container platform.
Satisfies: SRG-APP-000496-CTR-001240, SRG-APP-000497-CTR-001245, SRG-APP-000498-CTR-001250
Check
Verify the Red Hat Enterprise Linux CoreOS (RHCOS) is configured to generate audit records when successful/unsuccessful attempts to modify security categories or objects occur by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e "key=privileged" -e "key=perm_mod" /etc/audit/audit.rules' 2>/dev/null; done
Confirm the following rules exist on each node:
-a always,exit -F arch=b32 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F path=/usr/bin/chcon -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/restorecon -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/semanage -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/setfiles -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/setsebool -F auid>=1000 -F auid!=unset -F key=privileged
If the above rules are not listed on each node, this is a finding.
Fix
Apply the machine config to configure modification audit records by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-modify-categories-secobjects-rules-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fremovexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fsetxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lremovexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lsetxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-removexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/chcon%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_chcon_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/restorecon%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_restorecon_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/semanage%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_semanage_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/setfiles%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_setfiles_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/sbin/setsebool%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_sbin_setsebool_execution.rules
overwrite: true
" | oc apply -f -
done
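Applying a machine config causes the Machine Config Operator to roll the change out to every node in the targeted pools, rebooting nodes as needed; the same applies to all of the fixes in this section. A simple way to follow progress with standard oc commands:
oc get mcp
oc get nodes
The UPDATED column for each pool returns to True once all of its nodes are running the new configuration.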
OpenShift must generate audit records when successful/unsuccessful attempts to delete privileges occur.
STIG ID:
CNTR-OS-000950 |
SRG: SRG-APP-000499-CTR-001255 |
Severity: medium |
CCI: CCI-000172 |
Vulnerability Id: V-257577 |
Vulnerability Discussion
Audit records for unsuccessful attempts to delete privileges help in identifying unauthorized activities or potential attacks. If an unauthorized entity attempts to remove privileges, the audit records can serve as an early warning sign of a security threat. By monitoring and analyzing such records, administrators can detect and mitigate potential security breaches before they escalate.
Audit records play a vital role in forensic analysis and investigation. In the event of a security incident or suspected compromise, audit logs for privilege deletions provide valuable information for understanding the scope and impact of the incident.
Check
Verify OpenShift is configured to generate audit records when successful/unsuccessful attempts to delete security privileges occur by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e "key=delete" -e "key=perm_mod" -e "key=privileged" -e "audit_rules_usergroup_modification" /etc/audit/audit.rules' 2>/dev/null; done
Confirm the following rules exist on each node:
-w /etc/group -p wa -k audit_rules_usergroup_modification
-w /etc/gshadow -p wa -k audit_rules_usergroup_modification
-w /etc/passwd -p wa -k audit_rules_usergroup_modification
-w /etc/security/opasswd -p wa -k audit_rules_usergroup_modification
-w /etc/shadow -p wa -k audit_rules_usergroup_modification
-a always,exit -F arch=b32 -S chmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S chown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S renameat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S renameat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b32 -S rename -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S rename -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b32 -S rmdir -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S rmdir -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b32 -S setxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S setxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S umount2 -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S umount2 -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S unlinkat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S unlinkat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b32 -S unlink -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S unlink -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F path=/usr/bin/chage -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/chcon -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/chsh -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/crontab -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/gpasswd -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/newgrp -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/passwd -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/sudoedit -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/sudo -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/su -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/umount -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/libexec/pt_chown -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/pam_timestamp_check -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/postdrop -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/postqueue -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/semanage -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/setfiles -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/setsebool -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/unix_chkpwd -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/userhelper -F auid>=1000 -F auid!=unset -F key=privileged
If the above rules are not listed on each node, this is a finding.
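The check above inspects the on-disk rules; the rules actually loaded into the running kernel can be compared against it as well. A minimal sketch using the standard auditctl utility through oc debug (the listing may format keys slightly differently than the rules files):
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; auditctl -l | grep -e "key=delete" -e "key=privileged"' 2>/dev/null; done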
Fix
Apply the machine config to generate audit records when successful/unsuccessful attempts to delete security privileges occur by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-delete-privileges-rules-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20chmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20chmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-chmod_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20chown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20chown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-chown_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchmod%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchmod_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchmodat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchmodat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchmodat_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchown_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fchownat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fchownat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fchownat_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fremovexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lchown%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lchown_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lremovexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-removexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20rename%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20rename%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-rename-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20renameat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20renameat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-renameat-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20rmdir%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20rmdir%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-rmdir-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20unlink%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20unlink%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-unlink-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20unlinkat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20unlinkat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-unlinkat-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/su%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_su_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/sudo%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_sudo_execution.rules
overwrite: true
- contents:
source: data:,-w%20/etc/sudoers.d/%20-p%20wa%20-k%20actions%0A-w%20/etc/sudoers%20-p%20wa%20-k%20actions%0A
mode: 0644
path: /etc/audit/rules.d/75-audit-sysadmin-actions.rules
overwrite: true
- contents:
source: data:,-w%20/etc/group%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_group_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/gshadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_gshadow_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/passwd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_passwd_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/shadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_shadow_usergroup_modification.rules
overwrite: true
" | oc apply -f -
done
OpenShift must generate audit records when successful/unsuccessful attempts to delete security objects occur.
STIG ID:
CNTR-OS-000960 |
SRG: SRG-APP-000501-CTR-001265 |
Severity: medium |
CCI: CCI-000172 |
Vulnerability Id: V-257578 |
Vulnerability Discussion
By generating audit records for security object deletions, OpenShift enables administrators and security teams to track and investigate any unauthorized or suspicious removal of security objects. These records serve as valuable evidence for detecting and responding to potential security incidents.
Audit records for unsuccessful attempts to delete security objects help in identifying unauthorized activities or potential attacks. If an unauthorized entity attempts to delete security objects, the audit records can serve as an early warning sign of a security threat. By monitoring and analyzing such records, administrators can detect and mitigate potential security breaches before they escalate.
Audit records play a vital role in forensic analysis and investigation. In the event of a security incident or suspected compromise, audit logs for security object deletions provide valuable information for understanding the scope and impact of the incident.
Satisfies: SRG-APP-000501-CTR-001265, SRG-APP-000502-CTR-001270
Check
Verify the Red Hat Enterprise Linux CoreOS (RHCOS) is configured to generate audit records when successful/unsuccessful attempts to delete security objects or categories of information occur by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e "key=access" -e "key=delete" -e "key=unsuccessful-delete" -e "key=privileged" -e "key=perm_mod" /etc/audit/audit.rules' 2>/dev/null; done
Confirm the following rules exist on each node:
-a always,exit -F arch=b32 -S unlink,unlinkat,rename,renameat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-delete
-a always,exit -F arch=b64 -S unlink,unlinkat,rename,renameat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-delete
-a always,exit -F arch=b32 -S unlink,unlinkat,rename,renameat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-delete
-a always,exit -F arch=b64 -S unlink,unlinkat,rename,renameat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=unsuccessful-delete
-a always,exit -F arch=b32 -S chmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S chown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S chown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchmodat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchmod -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchownat -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S fsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lchown -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lremovexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S lsetxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S removexattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S renameat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S renameat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S renameat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b64 -S renameat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S renameat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S renameat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S rename -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S rename -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S rename -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b64 -S rename -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S rename -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S rename -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S rmdir -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S rmdir -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b32 -S setxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S setxattr -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S umount2 -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b64 -S umount2 -F auid>=1000 -F auid!=unset -F key=perm_mod
-a always,exit -F arch=b32 -S unlinkat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S unlinkat -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S unlinkat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b64 -S unlinkat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S unlinkat -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S unlinkat -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S unlink -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S unlink -F auid>=1000 -F auid!=unset -F key=delete
-a always,exit -F arch=b64 -S unlink -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b64 -S unlink -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S unlink -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F arch=b32 -S unlink -F exit=-EPERM -F auid>=1000 -F auid!=unset -F key=access
-a always,exit -F path=/usr/bin/chage -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/chcon -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/chsh -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/crontab -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/gpasswd -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/newgrp -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/passwd -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/sudoedit -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/sudo -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/su -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/bin/umount -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/libexec/pt_chown -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/pam_timestamp_check -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/postdrop -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/postqueue -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/semanage -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/setfiles -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/setsebool -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/unix_chkpwd -F auid>=1000 -F auid!=unset -F key=privileged
-a always,exit -F path=/usr/sbin/userhelper -F auid>=1000 -F auid!=unset -F key=privileged
If the above rules are not listed on each node, this is a finding.
Fix
Apply the machine config to generate audit records when security objects or categories are deleted by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-delete-sec-objects-levels-rules-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,%23%23%20Unsuccessful%20file%20delete%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20unlink%2Cunlinkat%2Crename%2Crenameat%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-delete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20unlink%2Cunlinkat%2Crename%2Crenameat%20-F%20exit%3D-EACCES%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-delete%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20unlink%2Cunlinkat%2Crename%2Crenameat%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-delete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20unlink%2Cunlinkat%2Crename%2Crenameat%20-F%20exit%3D-EPERM%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dunsuccessful-delete%0A
mode: 0644
path: /etc/audit/rules.d/30-ospp-v42-4-delete-failed.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20fsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20fsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-fsetxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lremovexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lremovexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20lsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20lsetxattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-lsetxattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20removexattr%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dperm_mod%0A
mode: 0644
path: /etc/audit/rules.d/75-removexattr_dac_modification.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/chcon%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_chcon_execution.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20rename%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20rename%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-rename-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20renameat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20renameat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-renameat-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20rmdir%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20rmdir%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-rmdir-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20unlink%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20unlink%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-unlink-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20unlinkat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20unlinkat%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Ddelete%0A
mode: 0644
path: /etc/audit/rules.d/75-unlinkat-file-deletion-events.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20path%3D/usr/bin/chage%20-F%20auid%3E%3D1000%20-F%20auid%21%3Dunset%20-F%20key%3Dprivileged%0A
mode: 0644
path: /etc/audit/rules.d/75-usr_bin_chage_execution.rules
overwrite: true
" ; done | oc apply -f -
OpenShift must generate audit records when successful/unsuccessful logon attempts occur.
STIG ID:
CNTR-OS-000970 |
SRG: SRG-APP-000503-CTR-001275 |
Severity: medium |
CCI: CCI-000172 |
Vulnerability Id: V-257579 |
Vulnerability Discussion
Audit records provide valuable information for security monitoring and intrusion detection. By generating audit logs for logon attempts, OpenShift enables administrators and security teams to track and investigate any unauthorized or suspicious access attempts. These records serve as a vital source of information for detecting and responding to potential security breaches or unauthorized logon activities.
Generating audit records for logon attempts supports user accountability. Audit logs provide a trail of logon activities, allowing administrators to attribute specific logon events to individual users or entities. This promotes accountability and helps in identifying any unauthorized access attempts or suspicious behavior by specific users.
By monitoring logon activity logs, administrators and security teams can identify unusual or suspicious patterns of logon attempts. Forensic analysts can examine these records to reconstruct the timeline of logon activities and determine the scope and nature of the incident.
Check
Verify that logons are audited by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n ""$HOSTNAME ""; grep ""logins"" /etc/audit/audit.rules /etc/audit/rules.d/*' 2>/dev/null; done
The output will look similar to:
node-name /etc/audit/:-w /var/run/faillock -p wa -k logins
/etc/audit/:-w /var/log/lastlog -p wa -k logins
If the two rules above are not found on each node, this is a finding.
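Once these rules are active, recorded logon events can be reviewed with the standard ausearch utility; a minimal sketch (ausearch reports no matches until a matching event has been written):
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; ausearch -k logins --start today' 2>/dev/null; done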
Fix
Apply the machine config to audit logons by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-logon-attempts-rules-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,-w%20/var/run/faillock%20-p%20wa%20-k%20logins%0A
mode: 0644
path: /etc/audit/rules.d/75-faillock_login_events.rules
overwrite: true
- contents:
source: data:,-w%20/var/log/lastlog%20-p%20wa%20-k%20logins%0A
mode: 0644
path: /etc/audit/rules.d/75-lastlog_login_events.rules
overwrite: true
- contents:
source: data:,-w%20/etc/sudoers.d/%20-p%20wa%20-k%20actions%0A-w%20/etc/sudoers%20-p%20wa%20-k%20actions%0A
mode: 0644
path: /etc/audit/rules.d/75-audit-sysadmin-actions.rules
overwrite: true
- contents:
source: data:,-w%20/etc/group%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_group_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/gshadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_gshadow_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/security/opasswd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_security_opasswd_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/passwd%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_passwd_usergroup_modification.rules
overwrite: true
- contents:
source: data:,-w%20/etc/shadow%20-p%20wa%20-k%20audit_rules_usergroup_modification%0A
mode: 0644
path: /etc/audit/rules.d/30-etc_shadow_usergroup_modification.rules
overwrite: true
" | oc apply -f -
done
Red Hat Enterprise Linux CoreOS (RHCOS) must be configured to audit the loading and unloading of dynamic kernel modules.
STIG ID:
CNTR-OS-000980 |
SRG: SRG-APP-000504-CTR-001280 |
Severity: medium |
CCI: CCI-000172 |
Vulnerability Id: V-257580 |
Vulnerability Discussion
By generating audit logs for the loading and unloading of dynamic kernel modules, OpenShift enables administrators and security teams to track and investigate any unauthorized or suspicious changes to the kernel modules. These records serve as a vital source of information for detecting and responding to potential security breaches or unauthorized module manipulations.
Audit records play a crucial role in forensic analysis and investigation. In the event of a security incident or suspected compromise, audit logs for dynamic kernel module loading and unloading provide valuable information for understanding the sequence of events and identifying any unauthorized or malicious module manipulations.
Audit records for module loading and unloading can be used for system performance analysis and troubleshooting. By reviewing these records, administrators can identify any problematic or misbehaving modules that may affect system performance or stability. This helps in diagnosing and resolving issues related to kernel modules more effectively.
Check
Verify the audit rules capture loading and unloading of kernel modules by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e module-load -e module-unload -e module-change /etc/audit/rules.d/* /etc/audit/audit.rules' 2>/dev/null; done
Confirm the following rules exist on each node:
-a always,exit -F arch=b32 -S init_module,finit_module -F key=module-load
-a always,exit -F arch=b64 -S init_module,finit_module -F key=module-load
-a always,exit -F arch=b32 -S delete_module -F key=module-unload
-a always,exit -F arch=b64 -S delete_module -F key=module-unload
-a always,exit -F arch=b32 -S delete_module -k module-change
-a always,exit -F arch=b64 -S delete_module -k module-change
-a always,exit -F arch=b32 -S finit_module -k module-change
-a always,exit -F arch=b64 -S finit_module -k module-change
-a always,exit -F arch=b32 -S init_module -k module-change
-a always,exit -F arch=b64 -S init_module -k module-change
If the above rules are not listed for each node, this is a finding.
Fix
Apply the machine config to capture the loading and unloading of kernel modules in the audit rules by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-kernel-modules-rules-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,%23%23%20These%20rules%20watch%20for%20kernel%20module%20insertion.%20By%20monitoring%0A%23%23%20the%20syscall%2C%20we%20do%20not%20need%20any%20watches%20on%20programs.%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20init_module%2Cfinit_module%20-F%20key%3Dmodule-load%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20init_module%2Cfinit_module%20-F%20key%3Dmodule-load%0A-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20delete_module%20-F%20key%3Dmodule-unload%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20delete_module%20-F%20key%3Dmodule-unload%0A
mode: 0644
path: /etc/audit/rules.d/43-module-load.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20delete_module%20-k%20module-change%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20delete_module%20-k%20module-change%0A
mode: 0644
path: /etc/audit/rules.d/75-kernel-module-loading-delete.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20finit_module%20-k%20module-change%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20finit_module%20-k%20module-change%0A
mode: 0644
path: /etc/audit/rules.d/75-kernel-module-loading-finit.rules
overwrite: true
- contents:
source: data:,-a%20always%2Cexit%20-F%20arch%3Db32%20-S%20init_module%20-k%20module-change%0A-a%20always%2Cexit%20-F%20arch%3Db64%20-S%20init_module%20-k%20module-change%0A
mode: 0644
path: /etc/audit/rules.d/75-kernel-module-loading-init.rules
overwrite: true
" | oc apply -f -
done
OpenShift audit records must record user access start and end times.
STIG ID:
CNTR-OS-000990 |
SRG: SRG-APP-000505-CTR-001285 |
Severity: medium |
CCI: CCI-000172 |
Vulnerability Id: V-257581 |
Vulnerability Discussion
OpenShift must generate audit records showing start and end times for users and services acting on behalf of a user accessing the registry and keystore. These components must use the same standard so that the events can be tied together to establish what took place within the overall container platform, correlate and investigate the events relating to an incident, and identify those responsible.
Check
Verify the Red Hat Enterprise Linux CoreOS (RHCOS) is configured to generate audit records showing starting and ending times for user access by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -e "-k session" /etc/audit/audit.rules' 2>/dev/null; done
Confirm the following rules exist on each node:
-w /var/log/btmp -p wa -k session
-w /var/log/utmp -p wa -k session
If the above rules are not listed on each node, this is a finding.
Fix
Apply the machine config to record user access start and end times by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 75-session-start-end-time-$mcpool
labels:
machineconfiguration.openshift.io/role: $mcpool
spec:
config:
ignition:
version: 3.1.0
storage:
files:
- contents:
source: data:,-w%20/var/log/btmp%20-p%20wa%20-k%20session%0A
mode: 0644
path: /etc/audit/rules.d/75-var_log_btmp_write_events.rules
overwrite: true
- contents:
source: data:,-w%20/var/log/utmp%20-p%20wa%20-k%20session%0A
mode: 0644
path: /etc/audit/rules.d/75-var_log_utmp_write_events.rules
overwrite: true
" | oc apply -f -
done
OpenShift must generate audit records when concurrent logons from different workstations and systems occur.
STIG ID:
CNTR-OS-001000 |
SRG: SRG-APP-000506-CTR-001290 |
Severity: medium |
CCI: CCI-000172 |
Vulnerability Id: V-257582 |
Vulnerability Discussion
OpenShift and its components must generate audit records for concurrent logons from workstations performing remote maintenance, runtime instances, and connections to the container registry and keystore. All components must use the same standard so the events can be tied together to understand what took place within the overall container platform. This makes it possible to establish and correlate the events relating to an incident and to identify those responsible.
Check
Verify that concurrent logons are audited by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep "logins" /etc/audit/audit.rules /etc/audit/rules.d/*' 2>/dev/null; done
The output will look similar to:
node-name /etc/audit/:-w /var/run/faillock -p wa -k logins
/etc/audit/:-w /var/log/lastlog -p wa -k logins
If the two rules above are not found on each node, this is a finding.
Fix
Apply the machine config so concurrent logons are audited by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 75-concurrent-logons-rules-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,-w%20/var/run/faillock%20-p%20wa%20-k%20logins%0A
        mode: 0644
        path: /etc/audit/rules.d/75-faillock_login_events.rules
        overwrite: true
      - contents:
          source: data:,-w%20/var/log/lastlog%20-p%20wa%20-k%20logins%0A
        mode: 0644
        path: /etc/audit/rules.d/75-lastlog_login_events.rules
        overwrite: true
" | oc apply -f -
done
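Optionally, once the rules are in place, audit events tagged with the "logins" key can be pulled from a node's audit log with ausearch. This is an illustrative spot-check rather than part of the STIG procedure, and it will report no matches until a login-related event has occurred:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo "$HOSTNAME"; ausearch -k logins --start today | tail -n 5' 2>/dev/null; done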
Red Hat Enterprise Linux CoreOS (RHCOS) must disable SSHD service.
STIG ID:
CNTR-OS-001010 |
SRG: SRG-APP-000141-CTR-000315 |
Severity: high |
CCI: CCI-000381,CCI-000877 |
Vulnerability Id: V-257583 |
Vulnerability Discussion
Any direct remote access to the RHCOS nodes is not allowed. RHCOS is a single-purpose container operating system and is only supported as a component of the OpenShift Container Platform. Remote management of the RHCOS nodes is performed at the OpenShift Container Platform API level.
Disabling the SSHD service reduces the attack surface and potential vulnerabilities associated with SSH access. SSH is a commonly targeted vector by malicious actors, and disabling the service eliminates the potential risks associated with unauthorized SSH access or exploitation of SSH-related vulnerabilities.
By disabling SSHD, OpenShift can restrict access to the platform to only authorized channels and protocols. This helps mitigate the risk of unauthorized access attempts and reduces the exposure to potential brute-force attacks or password-guessing attacks against SSH.
Disabling SSHD encourages the use of more secure and controlled access mechanisms, such as API-based access or secure remote management tools provided by OpenShift. These mechanisms offer better access control and auditing capabilities, allowing administrators to manage and monitor access to the platform more effectively.
Satisfies: SRG-APP-000141-CTR-000315, SRG-APP-000185-CTR-000490
Check
Verify the SSHD service is inactive and disabled by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; systemctl is-enabled sshd.service; systemctl is-active sshd.service' 2>/dev/null; done
If the SSHD service is either active or enabled, this is a finding.
Fix
Apply the machine config to disable the SSHD service by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 80-sshd-service-disable-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    systemd:
      units:
      - name: sshd.service
        enabled: false
" | oc apply -f -
done
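With SSHD disabled, ad hoc node access is performed through the OpenShift API rather than SSH. A minimal sketch using oc debug is shown below; the node name worker-0 is hypothetical (list real names with oc get nodes):
oc debug node/worker-0
# inside the debug shell, switch to the host filesystem:
chroot /host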
Red Hat Enterprise Linux CoreOS (RHCOS) must disable USB Storage kernel module.
STIG ID:
CNTR-OS-001020 |
SRG: SRG-APP-000141-CTR-000315 |
Severity: medium |
CCI: CCI-000381 |
Vulnerability Id: V-257584 |
Vulnerability Discussion
Disabling the USB Storage kernel module helps protect against potential data exfiltration or unauthorized access to sensitive data. USB storage devices can be used to transfer data in and out of the system, which poses a risk if unauthorized or untrusted devices are connected. By disabling the USB Storage kernel module, OpenShift can prevent the use of USB storage devices and reduce the risk of data breaches or unauthorized data transfers.
USB storage devices can potentially introduce malware or malicious code into the system. Disabling the USB Storage kernel module helps mitigate the risk of malware infections or the introduction of malicious software from external storage devices. It prevents unauthorized execution of code from USB storage devices, reducing the attack surface and protecting the system from potential security threats.
Disabling USB storage prevents unauthorized data transfers to and from the system. This helps enforce data loss prevention (DLP) policies and mitigates the risk of sensitive or confidential data being copied or stolen using USB storage devices. It adds an additional layer of control to protect against data leakage or unauthorized data movement.
Check
Verify the operating system disables the ability to load the USB Storage kernel module by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -r usb-storage /etc/modprobe.d/* | grep -i "/bin/true"' 2>/dev/null; done
The output should include the following line:
install usb-storage /bin/true
If the command does not return any output, or the line is commented out, and use of USB Storage is not documented with the Information System Security Officer (ISSO) as an operational requirement, this is a finding.
Fix
Apply the machine config to disable loading of the USB Storage kernel module by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 80-kernmod-usb-storage-disable-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,install%20usb-storage%20/bin/true%0A
        mode: 0644
        path: /etc/modprobe.d/75-kernel_module_usb-storage_disabled.conf
        overwrite: true
" | oc apply -f -
done
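After the pools finish updating, the override can be confirmed with a modprobe dry run. This is an optional check that assumes the nodes have rebooted onto the new machine config; a compliant node reports the install /bin/true directive instead of an insmod line:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; modprobe -n -v usb-storage' 2>/dev/null; done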
Red Hat Enterprise Linux CoreOS (RHCOS) must use USBGuard for hosts that include a USB Controller.
STIG ID:
CNTR-OS-001030 |
SRG: SRG-APP-000141-CTR-000315 |
Severity: medium |
CCI: CCI-000381 |
Vulnerability Id: V-257585 |
Vulnerability Discussion
USBGuard adds an extra layer of security to the overall OpenShift infrastructure. It provides an additional control mechanism to prevent potential security threats originating from USB devices. By monitoring and controlling USB access, USBGuard helps mitigate risks associated with unauthorized or malicious devices that may attempt to exploit vulnerabilities within the system.
Check
1. Determine if the host devices include a USB Controller by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; lspci' 2>/dev/null; done
If there is not a USB Controller, then this requirement is Not Applicable.
2. Verify the USBGuard service is installed by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; rpm -q usbguard' 2>/dev/null; done
If the output returns "package usbguard is not installed", this is a finding.
3. Verify that USBGuard is set up to log to the Linux audit log by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; grep -r AuditBackend /etc/usbguard/usbguard-daemon.conf' 2>/dev/null; done
The output should return:
"AuditBackend=LinuxAudit". If it does not, this is a finding.
4. Verify that USBGuard has a policy configured by executing the following:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; usbguard list-rules' 2>/dev/null; done
If USBGuard is not found or the results do not match the organizationally defined rules, this is a finding.
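As an optional aid when reviewing the policy, the devices currently blocked by USBGuard on each node can be listed. This is illustrative only and assumes the usbguard daemon is installed and running:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo "$HOSTNAME"; usbguard list-devices --blocked' 2>/dev/null; done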
Fix
If there is not a USB Controller, this requirement is Not Applicable.
1. Install the service by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 80-usbguard-extensions-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
  extensions:
  - usbguard
" | oc apply -f -
done
2. Wait for the nodes to reboot.
Note: Monitor progress with "oc get mcp". The update is complete when none of the pools report UPDATING as True.
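As an alternative to polling "oc get mcp", the command below blocks until every pool reports the Updated condition. This is a convenience sketch that assumes a reasonably current oc client; size the timeout to the cluster:
oc wait mcp --all --for=condition=Updated --timeout=30m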
3. Enable the service by executing the following:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 80-usbguard-service-enable-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    systemd:
      units:
      - name: usbguard.service
        enabled: true
" | oc apply -f -
done
4. Configure USBGuard to log to the audit log:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 80-usbguard-daemon-conf-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,%23%0A%23%20Rule%20set%20file%20path.%0A%23%0A%23%20The%20USBGuard%20daemon%20will%20use%20this%20file%20to%20load%20the%20policy%0A%23%20rule%20set%20from%20it%20and%20to%20write%20new%20rules%20received%20via%20the%0A%23%20IPC%20interface.%0A%23%0A%23%20RuleFile%3D%2Fpath%2Fto%2Frules.conf%0A%23%0ARuleFile%3D%2Fetc%2Fusbguard%2Frules.conf%0A%0A%23%0A%23%20Rule%20set%20folder%20path.%0A%23%0A%23%20The%20USBGuard%20daemon%20will%20use%20this%20folder%20to%20load%20the%20policy%0A%23%20rule%20set%20from%20it%20and%20to%20write%20new%20rules%20received%20via%20the%0A%23%20IPC%20interface.%20Usually%2C%20we%20set%20the%20option%20to%0A%23%20%2Fetc%2Fusbguard%2Frules.d%2F.%20The%20USBGuard%20daemon%20is%20supposed%20to%0A%23%20behave%20like%20any%20other%20standard%20Linux%20daemon%20therefore%20it%0A%23%20loads%20rule%20files%20in%20alpha-numeric%20order.%20File%20names%20inside%0A%23%20RuleFolder%20directory%20should%20start%20with%20a%20two-digit%20number%0A%23%20prefix%20indicating%20the%20position%2C%20in%20which%20the%20rules%20are%0A%23%20scanned%20by%20the%20daemon.%0A%23%0A%23%20RuleFolder%3D%2Fpath%2Fto%2Frulesfolder%2F%0A%23%0ARuleFolder%3D%2Fetc%2Fusbguard%2Frules.d%2F%0A%0A%23%0A%23%20Implicit%20policy%20target.%0A%23%0A%23%20How%20to%20treat%20devices%20that%20don%27t%20match%20any%20rule%20in%20the%0A%23%20policy.%20One%20of%3A%0A%23%0A%23%20%2A%20allow%20%20-%20authorize%20the%20device%0A%23%20%2A%20block%20%20-%20block%20the%20device%0A%23%20%2A%20reject%20-%20remove%20the%20device%0A%23%0AImplicitPolicyTarget%3Dblock%0A%0A%23%0A%23%20Present%20device%20policy.%0A%23%0A%23%20How%20to%20treat%20devices%20that%20are%20already%20connected%20when%20the%0A%23%20daemon%20starts.%20One%20of%3A%0A%23%0A%23%20%2A%20allow%20%20%20%20%20%20%20%20-%20authorize%20every%20present%20device%0A%23%20%2A%20block%20%20%20%20%20%20%20%20-%20deauthorize%20every%20present%20device%0A%23%20%2A%20reject%20%20%20%20%20%20%20-%20remove%20every%20present%20device%0A%23%20%2A%20keep%20%20%20%20%20%20%20%20%20-%20just%20sync%20the%20internal%20state%20and%20leave%20it%0A%23%20%2A%20apply-policy%20-%20evaluate%20the%20ruleset%20for%20every%20present%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20device%0A%23%0APresentDevicePolicy%3Dapply-policy%0A%0A%23%0A%23%20Present%20controller%20policy.%0A%23%0A%23%20How%20to%20treat%20USB%20controllers%20that%20are%20already%20connected%0A%23%20when%20the%20daemon%20starts.%20One%20of%3A%0A%23%0A%23%20%2A%20allow%20%20%20%20%20%20%20%20-%20authorize%20every%20present%20device%0A%23%20%2A%20block%20%20%20%20%20%20%20%20-%20deauthorize%20every%20present%20device%0A%23%20%2A%20reject%20%20%20%20%20%20%20-%20remove%20every%20present%20device%0A%23%20%2A%20keep%20%20%20%20%20%20%20%20%20-%20just%20sync%20the%20internal%20state%20and%20leave%20it%0A%23%20%2A%20apply-policy%20-%20evaluate%20the%20ruleset%20for%20every%20present%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20device%0A%23%0APresentControllerPolicy%3Dkeep%0A%0A%23%0A%23%20Inserted%20device%20policy.%0A%23%0A%23%20How%20to%20treat%20USB%20devices%20that%20are%20already%20connected%0A%23%20%2Aafter%2A%20the%20daemon%20starts.%20One%20of%3A%0A%23%0A%23%20%2A%20block%20%20%20%20%20%20%20%20-%20deauthorize%20every%20present%20device%0A%23%20%2A%20reject%20%20%20%20%20%20%20-%20remove%20every%20present%20device%0A%23%20%2A%20apply-policy%20-%20evaluate%20the%20ruleset%20for%20every%20present%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20device%0A%23%0AInsertedDevicePolicy%3Dapply-policy%0A%0A%23%0A%23%20Control%20which%20devices%20are%20authorized%20by%20default.%0A%23%0A%23%20The%20USBGuard%20daemon%20modifies%20some%20the%20default%20authorization%20state%20attributes%0A%23%20of%20controller%20devices.%20This%20setting%2C%20enables%20you%20to%20define%20what%20value%20the%0A%23%20default%20authorization%20is%20set%20to.%0A%23%0A%23%20%2A%20keep%20%20%20%20%20%20%20%20%20-%20do%20not%20change%20the%20authorization%20state%0A%23%20%2A%20none%20%20%20%20%20%20%20%20%20-%20every%20new%20device%20starts%20out%20deauthorized%0A%23%20%2A%20all%20%20%20%20%20%20%20%20%20%20-%20every%20new%20device%20starts%20out%20authorized%0A%23%20%2A%20internal%20%20%20%20%20-%20internal%20devices%20start%20out%20authorized%2C%20external%20devices%20start%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20out%20deauthorized%20%28this%20requires%20the%20ACPI%20tables%20to%20properly%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20label%20internal%20devices%2C%20and%20kernel%20support%29%0A%23%0A%23AuthorizedDefault%3Dnone%0A%0A%23%0A%23%20Restore%20controller%20device%20state.%0A%23%0A%23%20The%20USBGuard%20daemon%20modifies%20some%20attributes%20of%20controller%0A%23%20devices%20like%20the%20default%20authorization%20state%20of%20new%20child%20device%0A%23%20instances.%20Using%20this%20setting%2C%20you%20can%20control%20whether%20the%0A%23%20daemon%20will%20try%20to%20restore%20the%20attribute%20values%20to%20the%20state%0A%23%20before%20modification%20on%20shutdown.%0A%23%0A%23%20SECURITY%20CONSIDERATIONS%3A%20If%20set%20to%20true%2C%20the%20USB%20authorization%0A%23%20policy%20could%20be%20bypassed%20by%20performing%20some%20sort%20of%20attack%20on%20the%0A%23%20daemon%20%28via%20a%20local%20exploit%20or%20via%20a%20USB%20device%29%20to%20make%20it%20shutdown%0A%23%20and%20restore%20to%20the%20operating-system%20default%20state%20%28known%20to%20be%20permissive%29.%0A%23%0ARestoreControllerDeviceState%3Dfalse%0A%0A%23%0A%23%20Device%20manager%20backend%0A%23%0A%23%20Which%20device%20manager%20backend%20implementation%20to%20use.%20One%20of%3A%0A%23%0A%23%20%2A%20uevent%20%20%20-%20Netlink%20based%20implementation%20which%20uses%20sysfs%20to%20scan%20for%20present%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20devices%20and%20an%20uevent%20netlink%20socket%20for%20receiving%20USB%20device%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20related%20events.%0A%23%20%2A%20umockdev%20-%20umockdev%20based%20device%20manager%20capable%20of%20simulating%20devices%20based%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20on%20umockdev-record%20files.%20Useful%20for%20testing.%0A%23%0ADeviceManagerBackend%3Duevent%0A%0A%23%21%21%21%20WARNING%3A%20It%27s%20good%20practice%20to%20set%20at%20least%20one%20of%20the%20%21%21%21%0A%23%21%21%21%20%20%20%20%20%20%20%20%20%20two%20options%20bellow.%20If%20none%20of%20them%20are%20set%2C%20%20%21%21%21%0A%23%21%21%21%20%20%20%20%20%20%20%20%20%20the%20daemon%20will%20accept%20IPC%20connections%20from%20%20%20%21%21%21%0A%23%21%21%21%20%20%20%20%20%20%20%20%20%20anyone%2C%20thus%20allowing%20anyone%20to%20modify%20the%20%20%20%20%21%21%21%0A%23%21%21%21%20%20%20%20%20%20%20%20%20%20rule%20set%20and%20%28de%29authorize%20USB%20devices.%20%20%20%20%20%20%20%21%21%21%0A%0A%23%0A%23%20Users%20allowed%20to%20use%20the%20IPC%20interface.%0A%23%0A%23%20A%20space%20delimited%20list%20of%20usernames%20that%20the%20daemon%20will%0A%23%20accept%20IPC%20connections%20from.%0A%23%0A%23%20IPCAllowedUsers%3Dusername1%20username2%20...%0A%23%0AIPCAllowedUsers%3Droot%0A%0A%23%0A%23%20Groups%20allowed%20to%20use%20the%20IPC%20interface.%0A%23%0A%23%20A%20space%20delimited%20list%20of%20groupnames%20that%20the%20daemon%20will%0A%23%20accept%20IPC%20connections%20from.%0A%23%0A%23%20IPCAllowedGroups%3Dgroupname1%20groupname2%20...%0A%23%0AIPCAllowedGroups%3Dwheel%0A%0A%23%0A%23%20IPC%20access%20control%20definition%20files%20path.%0A%23%0A%23%20The%20files%20at%20this%20location%20will%20be%20interpreted%20by%20the%20daemon%0A%23%20as%20access%20control%20definition%20files.%20The%20%28base%29name%20of%20a%20file%0A%23%20should%20be%20in%20the%20form%3A%0A%23%0A%23%20%20%20%5Buser%5D%5B%3A%3Cgroup%3E%5D%0A%23%0A%23%20and%20should%20contain%20lines%20in%20the%20form%3A%0A%23%0A%23%20%20%20%3Csection%3E%3D%5Bprivilege%5D%20...%0A%23%0A%23%20This%20way%20each%20file%20defines%20who%20is%20able%20to%20connect%20to%20the%20IPC%0A%23%20bus%20and%20what%20privileges%20he%20has.%0A%23%0AIPCAccessControlFiles%3D%2Fetc%2Fusbguard%2FIPCAccessControl.d%2F%0A%0A%23%0A%23%20Generate%20device%20specific%20rules%20including%20the%20%22via-port%22%0A%23%20attribute.%0A%23%0A%23%20This%20option%20modifies%20the%20behavior%20of%20the%20allowDevice%0A%23%20action.%20When%20instructed%20to%20generate%20a%20permanent%20rule%2C%0A%23%20the%20action%20can%20generate%20a%20port%20specific%20rule.%20Because%0A%23%20some%20systems%20have%20unstable%20port%20numbering%2C%20the%20generated%0A%23%20rule%20might%20not%20match%20the%20device%20after%20rebooting%20the%20system.%0A%23%0A%23%20If%20set%20to%20false%2C%20the%20generated%20rule%20will%20still%20contain%0A%23%20the%20%22parent-hash%22%20attribute%20which%20also%20defines%20an%20association%0A%23%20to%20the%20parent%20device.%20See%20usbguard-rules.conf%285%29%20for%20more%0A%23%20details.%0A%23%0ADeviceRulesWithPort%3Dfalse%0A%0A%23%0A%23%20USBGuard%20Audit%20events%20log%20backend%0A%23%0A%23%20One%20of%3A%0A%23%0A%23%20%2A%20FileAudit%20-%20Log%20audit%20events%20into%20a%20file%20specified%20by%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20AuditFilePath%20setting%20%28see%20below%29%0A%23%20%2A%20LinuxAudit%20-%20Log%20audit%20events%20using%20the%20Linux%20Audit%0A%23%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20subsystem%20%28using%20audit_log_user_message%29%0A%23%0AAuditBackend%3DLinuxAudit%0A%0A%23%0A%23%20USBGuard%20audit%20events%20log%20file%20path.%0A%23%0A%23AuditFilePath%3D%2Fvar%2Flog%2Fusbguard%2Fusbguard-audit.log%0A%0A%23%0A%23%20Hides%20personally%20identifiable%20information%20such%20as%20device%20serial%20numbers%20and%0A%23%20hashes%20of%20descriptors%20%28which%20include%20the%20serial%20number%29%20from%20audit%20entries.%0A%23%0A%23HidePII%3Dfalse
        mode: 0600
        path: /etc/usbguard/usbguard-daemon.conf
        overwrite: true
" | oc apply -f -
done
5. Create a USBGuard policy:
Below is an example policy. Replace the URL-encoded string in the file's contents.source data URI with an organizationally approved policy:
for mcpool in $(oc get mcp -oname | sed "s:.*/::" ); do
echo "apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 80-usbguard-rules-conf-$mcpool
  labels:
    machineconfiguration.openshift.io/role: $mcpool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:,allow%20with-interface%20one-of%20%7B%2003%3A00%3A01%2003%3A01%3A01%20%7D%20if%20%21allowed-matches%28with-interface%20one-of%20%7B%2003%3A00%3A01%2003%3A01%3A01%20%7D%29
        mode: 0600
        path: /etc/usbguard/rules.conf
        overwrite: true
" | oc apply -f -
done
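One way to produce an organization-specific rule set is to generate a baseline policy on a representative node and then URL-encode it for the contents.source data URI above. The sketch below is illustrative: the node name worker-0 is hypothetical, and the encoding step assumes python3 is available on the administrative workstation:
# generate a baseline policy that allows currently attached devices (review before use)
oc debug node/worker-0 -- chroot /host usbguard generate-policy > rules.conf
# URL-encode the policy into a data URI suitable for the MachineConfig above
python3 -c 'import urllib.parse,sys; print("data:," + urllib.parse.quote(sys.stdin.read()))' < rules.conf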
OpenShift must continuously scan components, containers, and images for vulnerabilities.
STIG ID:
CNTR-OS-001060 |
SRG: SRG-APP-000516-CTR-001335 |
Severity: medium |
CCI: CCI-000366 |
Vulnerability Id: V-257586 |
Vulnerability Discussion
Finding vulnerabilities quickly within the container platform and within containers deployed within the platform is important to keep the overall platform secure. When a vulnerability within a component or container is unknown or allowed to remain unpatched, other containers and customers within the platform become vulnerable. The vulnerability can lead to the loss of application data, organizational infrastructure data, and Denial-of-Service (DoS) to hosted applications.
Vulnerability scanning can be performed by the container platform or by external applications.
Check
To check if the Container Security Operator is running, execute the following:
oc get deploy -n openshift-operators container-security-operator -ojsonpath='{.status.readyReplicas}'
If this command returns an error or the number 0, and a separate tool is not being used to perform continuous vulnerability scans of components, containers, and container images, this is a finding.
Fix
Vulnerability scanning can be performed by the Container Security Operator, Red Hat Advanced Cluster Security (formerly StackRox), or by external applications. Follow the instructions from the application vendor if using an external tool for vulnerability scanning. To install the Container Security Operator into the cluster, run the following:
oc apply -f - << 'EOF'
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  labels:
    operators.coreos.com/container-security-operator.openshift-operators: ''
  name: container-security-operator
  namespace: openshift-operators
spec:
  channel: stable-3.8
  installPlanApproval: Automatic
  name: container-security-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
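To confirm the operator deployed successfully and is producing results, the commands below can be used. They are an illustrative sketch that assumes the Container Security Operator is in use; it reports findings as ImageManifestVuln resources in the namespaces of affected workloads, so the second command returns nothing until a vulnerable image is detected:
oc get csv -n openshift-operators | grep container-security-operator
oc get imagemanifestvuln --all-namespaces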
OpenShift must use FIPS-validated SHA-2 or higher hash function for digital signature generation and verification (nonlegacy use).
STIG ID:
CNTR-OS-001080 |
SRG: SRG-APP-000610-CTR-001385 |
Severity: medium |
CCI: CCI-000803 |
Vulnerability Id: V-257587 |
Vulnerability Discussion
Using a FIPS-validated SHA-2 or higher hash function for digital signature generation and verification in OpenShift ensures strong cryptographic security, compliance with industry standards, and protection against known attacks. It promotes the integrity, authenticity, and nonrepudiation of digital signatures, which are essential for secure communication and data exchange in the OpenShift platform.
SHA1 is disabled in digital signatures when FIPS mode is enabled. OpenShift must verify that the certificates in /etc/kubernetes and /etc/pki are using sha256 signatures.
Check
Verify the use of a FIPS-compliant hash function for digital signature generation and validation, by executing and reviewing the following commands:
update-crypto-policies --show
If the return is not "FIPS", this is a finding.
Verify the certificate signature algorithms by executing the following:
openssl x509 -in /etc/kubernetes/kubelet-ca.crt -noout -text | grep Algorithm
openssl x509 -in /etc/kubernetes/ca.crt -noout -text | grep Algorithm
If any of the crypto-policies listed are not FIPS compliant, this is a finding. Details of algorithms can be reviewed at the following knowledge base article:
https://access.redhat.com/articles/3642912
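As a convenience when reviewing multiple certificates, a loop similar to the following can be run from the cluster; it is a sketch that assumes the certificates of interest reside under /etc/kubernetes on each node:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo "$HOSTNAME"; for f in /etc/kubernetes/*.crt; do echo -n "$f: "; openssl x509 -in "$f" -noout -text | grep -m1 "Signature Algorithm"; done' 2>/dev/null; done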
Fix
Reinstall the OpenShift cluster in FIPS mode. The file install-config.yaml has a top-level key that enables FIPS mode for all nodes and the cluster platform layer. If the install-config.yaml was not backed up prior to consumption as part of the installation, it must be recreated. An example install-config.yaml with some sections trimmed out for brevity, and the "fips: true" key applied at the top level is shown below:
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform:
    aws:
      [...]
  replicas: 3
compute:
- name: worker
  platform:
    aws:
  replicas: 3
metadata:
  name: fips-cluster
networking:
  [...]
platform:
  aws:
    [...]
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
fips: true
After saving the install-config.yaml with the corresponding correct information, run the installer to create a cluster that uses FIPS-validated Modules in Process cryptographic libraries. The command to install a cluster and consume the install-config.yaml is:
> ./openshift-install create cluster --dir=<installation_directory> --log-level=info
Where <installation_directory> is the directory that contains install-config.yaml.
Additional details can be found here: https://docs.openshift.com/container-platform/4.8/installing/installing-fips.html
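FIPS mode on the resulting cluster can be spot-checked from the nodes. This optional verification reads the kernel FIPS flag, which reports 1 when FIPS mode is enabled:
for node in $(oc get node -oname); do oc debug $node -- chroot /host /bin/bash -c 'echo -n "$HOSTNAME "; cat /proc/sys/crypto/fips_enabled' 2>/dev/null; done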