Deploying ScanCentral SAST Controller in Kubernetes

This document describes how to configure and use the scancentral-sast 24.4 Helm chart for ScanCentral SAST Controller container orchestration in Kubernetes. You can find the ScanCentral SAST Helm chart at https://hub.docker.com/r/fortifydocker/helm-scancentral-sast.

Table of contents

  - Kubernetes versions
  - Tool prerequisites
  - Installation
  - Upgrading
  - Values

Kubernetes versions

These charts were tested using the following Kubernetes versions:

Tool prerequisites

OpenText recommends that you use the same tool versions to avoid unpredictable results.

Installation

The following instructions are for example purposes and are based on a default Kubernetes environment running under Linux, using the default namespace. Windows systems might require different syntax for certain commands, and other Kubernetes cluster providers might require additional or different configuration. Your Kubernetes administrator might require the use of specific namespaces or other configuration adjustments.

Installation prerequisites

Installation steps

  1. Creating an image pull secret
  2. Installing ScanCentral SAST components
  3. Verifying the ScanCentral SAST API is available
  4. Integrating ScanCentral SAST with Fortify Software Security Center
  5. Special considerations for testing environments

Creating an image pull secret

By default, the Fortify ScanCentral SAST components Helm chart references its images directly from DockerHub. For Kubernetes to pull these images using the default configuration, you must create an image pull secret and store it in your installation namespace in Kubernetes. If you are replicating these images to a local repository, you can skip this task and update the relevant image values in the Helm chart to reference your local repository. To create an image pull secret:
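
A minimal sketch of creating the secret with kubectl, assuming DockerHub credentials; the secret name fortifydocker-credentials is illustrative and can be anything, as long as it matches the name you pass to the chart:

    kubectl create secret docker-registry fortifydocker-credentials \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<DockerHub username> \
      --docker-password=<DockerHub password or access token>

Reference the secret at install time through the chart's imagePullSecrets list value. With the usual Helm convention for pull-secret lists this looks like --set imagePullSecrets[0].name=fortifydocker-credentials, but verify the exact shape against your chart version.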

Installing ScanCentral SAST components

The following command installs the SAST Controller and sensor (on Linux by default) using the recommended defaults. In some cases, you might need to customize these values using the '--set' parameter or by creating a 'values.yaml' file and passing it on the command line with the '-f' flag. For more information about the values you can override, see the Helm Chart Values table.
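
For example, a minimal 'values.yaml' might look like the following sketch. The keys come from the Values table at the end of this document; the replica count of 2 is only an illustration:

    controller:
      sscUrl: "https://<SSC service FQDN>/ssc"
    workers:
      linux:
        replicas: 2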

  1. Use the following Helm commands to perform the installation:

    helm install sast oci://registry-1.docker.io/fortifydocker/helm-scancentral-sast --version 24.4.0-2 \
      --set-file=secrets.fortifyLicense=fortify.license \
      --set controller.sscUrl="https://<SSC service FQDN>/ssc"
  2. Verify that your SAST pods (sast-scancentral-sast-controller-0 and sast-scancentral-sast-worker-linux-0) are running successfully. It might take a few minutes for the pods to reach the 1/1 Running state. You can run the following command multiple times, or add the -w flag to watch for changes:

    kubectl get pods
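
    The output should look similar to the following (illustrative only; the pod names come from the chart defaults, and the ages and restart counts will differ):

      NAME                                   READY   STATUS    RESTARTS   AGE
      sast-scancentral-sast-controller-0     1/1     Running   0          5m
      sast-scancentral-sast-worker-linux-0   1/1     Running   0          5m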

Verifying the ScanCentral SAST API is available

  1. Open a new terminal shell.

  2. To access your ScanCentral SAST endpoint, set up port forwarding through kubectl.

    For example, to forward the ScanCentral SAST service on localhost port 8082:

    kubectl port-forward svc/sast-scancentral-sast-controller 8082:80
  3. Verify that you can access the ScanCentral SAST API endpoint from your browser.

    http://localhost:8082/scancentral-ctrl/
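
    Alternatively, you can check the endpoint from the command line with curl, assuming the port-forward from the previous step is still running:

      curl http://localhost:8082/scancentral-ctrl/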
    
    

Integrating ScanCentral SAST with Fortify Software Security Center

  1. Configure your ScanCentral SAST Service URL in Fortify Software Security Center by performing the following steps:

  2. Access the Fortify Software Security Center web page by opening a new terminal shell and setting up port forwarding on the Software Security Center service through kubectl.

    For example, to forward the Software Security Center service on localhost port 8081:

    kubectl port-forward svc/ssc-service 8081:443
  3. Access the Software Security Center web application from your browser.

    https://localhost:8081
    
  4. After you log in to Software Security Center, select the Administration view.

  5. Expand Configuration, and then select ScanCentral SAST.

  6. Specify the ScanCentral SAST Controller URL and the shared secret between SSC and the ScanCentral Controller. To retrieve the shared secret, run the following command:

      kubectl get secret --namespace <scancentral namespace> sast-scancentral-sast -o jsonpath="{.data.scancentral-ssc-scancentral-ctrl-secret}" | base64 -d
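
      If you prefer to choose this shared secret yourself, the chart exposes it as the secrets.sscScanCentralCtrlSecret value (see the Values table); you could add an option like the following to the helm install command from the Installation steps:

        --set secrets.sscScanCentralCtrlSecret=<shared secret>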
  7. Restart the Software Security Center pod to apply the change by running the following command:

     kubectl rollout restart statefulset ssc-webapp
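
     Optionally, wait for the restart to complete before continuing:

       kubectl rollout status statefulset ssc-webapp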
  8. Log back in to Software Security Center. The ScanCentral view should now appear in the header, and you can access the SAST page from that view.

  9. See the Software Security Center and ScanCentral SAST documentation for information about submitting scan requests.

Special considerations for testing environments

By default, the Helm chart defines container resource requests and limits based on recommended best-practice values intended to prevent performance issues and unexpected Kubernetes evictions of containers and pods. These values are often too large for a small test environment that does not require the same level of resources.

To disable these settings, paste the following values into a file called "resource_override.yaml" and add it to the install command line with the '-f' flag (for example, -f resource_override.yaml).

WARNING: Using the following settings in production is not supported and can lead to unstable behavior.

    # Set all Kubernetes resources except for the datastores to best-effort mode (no resource requirements).
    # DO NOT null out the resource configuration for the 'datastore' containers; doing so results in
    # unexpected evictions due to how that service allocates memory.
    resources:
      requests:
        cpu: null
        memory: null
      limits:
        cpu: null
        memory: null
    wise:
      resources: null
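
For example, with the override file in your working directory, the install command from the Installation steps becomes:

    helm install sast oci://registry-1.docker.io/fortifydocker/helm-scancentral-sast --version 24.4.0-2 \
      --set-file=secrets.fortifyLicense=fortify.license \
      --set controller.sscUrl="https://<SSC service FQDN>/ssc" \
      -f resource_override.yaml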

Upgrading

Preparing for the upgrade

This release of the ScanCentral SAST Helm chart has many changes that are not compatible with the previous chart. However, because all state for ScanCentral SAST is stored in the database, no data is lost.

Performing the upgrade

  1. Remove the previous ScanCentral SAST Helm deployment. If you do not remember the release name, use the following example command to find it.

    helm -n <scancentral namespace> list
  2. After you identify the previous ScanCentral SAST installation, uninstall that Helm chart.

    helm -n <scancentral namespace> uninstall <release name>
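
    Note: Depending on how the Controller's persistent volume claim was provisioned, it might remain after the uninstall. You can list any leftover claims with the following command:

      kubectl -n <scancentral namespace> get pvc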
  3. Now perform the steps listed in Installation.

Values

The following values are exposed by the Helm chart. Unless specified as Required, override values only when your environment makes it necessary.

Required

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| controller.sscUrl | string | "" | Specifies the URL of Software Security Center. |
| secrets.fortifyLicense | Required if secrets.secretName is blank | "" | fortify.license file contents. (Tip) Use the "--set-file=secrets.fortifyLicense=<FORTIFY_LICENSE_PATH>" option when running a Helm install or upgrade. |

Other Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| controller.additionalEnvironment | object | {} | Defines any additional environment variables to add to the resulting pod. |
| controller.affinity | pod.affinity | {} | Defines Node Affinity configurations to add to resulting Kubernetes pods. |
| controller.enabled | bool | true | Specifies whether to deploy the Controller component. |
| controller.image.pullPolicy | string | "IfNotPresent" | Specifies the image pull behavior. |
| controller.image.repository | string | "fortifydocker/scancentral-sast-controller" | Specifies the Docker repository from which to pull the Docker image. |
| controller.image.tag | string | "24.4.0" | Specifies the version of the Docker image to pull. |
| controller.ingress.annotations | object | {} | Specifies ingress annotations. |
| controller.ingress.className | string | "" | Specifies the ingress class. |
| controller.ingress.enabled | bool | false | Specifies whether to enable the ingress. |
| controller.ingress.hosts | list | [{"host":"scancentral-sast-controller.local","paths":[{"path":"/","pathType":"Prefix"}]}] | Defines ingress host configurations. |
| controller.ingress.tls | list | [] | Defines ingress TLS configurations. Example: [{"hosts":["some-host"],"secretName":"some-name"}]. |
| controller.nodeSelector | pod.nodeSelector | kubernetes.io/os: linux | Defines node selection constraint configurations to add to resulting Kubernetes pods. |
| controller.persistence.accessMode | string | "ReadWriteOnce" | Persistent Volume access mode (DEPRECATED: use persistence.accessModes instead). Used when 'existingClaim' is not defined. Should generally not be modified. |
| controller.persistence.accessModes | list | ["ReadWriteOnce"] | Persistent Volume access modes. Used when 'existingClaim' is not defined. Should generally not be modified. |
| controller.persistence.annotations | object | {} | Persistent Volume Claim annotations. Used when 'existingClaim' is not defined. |
| controller.persistence.enabled | bool | true | Specifies whether to persist Controller data across pod reboots. 'false' is not supported in production environments. |
| controller.persistence.existingClaim | string | "" | Provides a pre-configured Kubernetes PersistentVolumeClaim. This is the recommended approach for production. |
| controller.persistence.selector | object | {} | Specifies the selector to match an existing Persistent Volume. Used when 'existingClaim' is not defined. |
| controller.persistence.size | string | "10Gi" | Specifies the Persistent Volume size. Used when 'existingClaim' is not defined. |
| controller.persistence.storageClass | string | "" | Specifies the Persistent Volume storage class. Used when 'existingClaim' is not defined. If defined, "storageClassName: <storageClass>". If set to "-", "storageClassName: ''", which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set, which selects the default provisioner. WARNING: Due to the risk of data loss on application uninstall, OpenText strongly recommends against directly configuring the storageClass and related options in a production environment. This mechanism will be removed in a future update. |
| controller.podAnnotations | pod.annotations | {} | Defines annotations to add to resulting Kubernetes pods. |
| controller.podLabels | pod.labels | {} | Defines labels to add to resulting Kubernetes pods. |
| controller.resources | object | {"limits":{"cpu":1,"memory":"8Gi"},"requests":{"cpu":1,"memory":"8Gi"}} | Specifies resource requests (guaranteed resources) and limits for the pod. By default, these are configured to the minimum hardware requirements described in the ScanCentral SAST documentation. Your workload might require larger values. Consult the documentation and ensure these values are appropriate for your environment. |
| controller.service.port | int | 80 | Specifies the port for the Kubernetes service. |
| controller.service.type | string | "ClusterIP" | Specifies which Kubernetes service type to use for the Controller (ClusterIP, NodePort, or LoadBalancer). |
| controller.sscRemoteIp | string | "0.0.0.0/0" | Specifies the allowed remote IP addresses for Software Security Center. Only requests with a matching remote IP address are allowed. The default IP address is resolved from ssc_url. Set this value if the Controller accesses Software Security Center through a reverse proxy server. This value can be a comma-separated list of IP addresses or CIDR network ranges. |
| controller.thisUrl | string | "" | Specifies the URL of the ScanCentral SAST Controller used to provide artifacts from the Controller. |
| controller.tolerations | pod.tolerations | [] | Specifies Toleration configurations to add to the resulting Kubernetes pods. |
| fullnameOverride | string | .Release.Name | Overrides the fully qualified app name of the release. |
| imagePullSecrets | list | [] | Specifies a secret in the same namespace to use for pulling any of the images used by the current release. You must provide this if pulling images directly from DockerHub. |
| nameOverride | string | .Chart.Name | Overrides the name of this chart. |
| secrets.clientAuthToken | string | "" | Kubernetes secret that stores the authentication token clients use to connect to the Controller. |
| secrets.secretName | string | "" | Specifies a Kubernetes secret containing ScanCentral SAST sensitive data. If empty, a secret is created automatically using the sensitive properties in this section. If not empty, the existing secret referenced by "secretName" is used and the sensitive entries in this section are ignored. |
| secrets.sscScanCentralCtrlSecret | string | "" | Secret that stores authentication credentials for Software Security Center. |
| secrets.workerAuthToken | string | "" | Kubernetes secret that stores the authentication token sensors use to connect to the Controller. |
| trustedCertificates | list | [] | Specifies a list of certificates in PEM format to be added to the ScanCentral SAST Controller and sensor trust store. (Tip) Use the "--set-file=trustedCertificates[]=" argument when running a Helm install or upgrade. Example: --set-file=trustedCertificates[0]=cert0.crt --set-file=trustedCertificates[1]=cert1.crt |
| workers.linux.additionalEnvironment | list | [] | Defines any additional environment variables to add to the resulting pod. |
| workers.linux.affinity | pod.affinity | {} | Defines Node Affinity configurations to add to resulting Kubernetes pods. |
| workers.linux.autoUpdate.enabled | bool | true | Specifies whether to update Rulepacks on the sensor prior to starting. |
| workers.linux.autoUpdate.locale | string | "en" | Specifies the Rulepack locale. |
| workers.linux.autoUpdate.proxy.host | string | nil | Specifies the FQDN (not a URL) of the autoupdate proxy. |
| workers.linux.autoUpdate.proxy.password | string | nil | Specifies the autoupdate proxy password. |
| workers.linux.autoUpdate.proxy.port | string | nil | Specifies the autoupdate proxy server port. |
| workers.linux.autoUpdate.proxy.username | string | nil | Specifies the autoupdate proxy username. |
| workers.linux.autoUpdate.server.acceptKey | bool | false | Automatically accepts the update server's public key. |
| workers.linux.autoUpdate.server.acceptSslCertificate | bool | false | Automatically accepts the update server's SSL certificate public key. |
| workers.linux.autoUpdate.server.url | string | nil | Specifies the URL of the update server. Leave empty to use the default Fortify update server. |
| workers.linux.controllerProxyHost | string | nil | Specifies a proxy host used to connect to the Controller. |
| workers.linux.controllerProxyPassword | string | nil | Specifies the proxy password. |
| workers.linux.controllerProxyPort | string | nil | Specifies the proxy host port used to connect to the Controller. |
| workers.linux.controllerProxyUser | string | nil | Specifies the username used to connect to the proxy. |
| workers.linux.controllerUrl | string | nil | Specifies the Controller URL. If empty, it is configured automatically based on the endpoint of the Controller installed by the chart. If the chart's Controller is disabled, this property is required. |
| workers.linux.enabled | bool | true | Specifies whether to deploy this component. |
| workers.linux.image.pullPolicy | string | "IfNotPresent" | Specifies the image pull behavior. |
| workers.linux.image.repository | string | "fortifydocker/scancentral-sast-sensor" | Specifies the Docker repository from which to pull the Docker image. |
| workers.linux.image.tag | string | "24.4.0" | Specifies the version of the Docker image to pull. |
| workers.linux.nodeSelector | pod.nodeSelector | kubernetes.io/os: linux | Defines node selection constraint configurations to add to resulting Kubernetes pods. |
| workers.linux.os | string | "linux" | Specifies the sensor operating system (linux/windows). |
| workers.linux.persistence.accessMode | string | "ReadWriteOnce" | Persistent Volume access mode (DEPRECATED: use persistence.accessModes instead). Used when 'existingClaim' is not defined. |
| workers.linux.persistence.accessModes[0] | string | "ReadWriteOnce" | Persistent Volume access modes. Used when 'existingClaim' is not defined. |
| workers.linux.persistence.annotations | object | {} | Persistent Volume Claim annotations. Used when 'existingClaim' is not defined. |
| workers.linux.persistence.enabled | bool | false | Specifies whether to use an external persistent store for temporary worker data. Using this option in production is not recommended, and it will be removed in a future release. |
| workers.linux.persistence.selector | object | {} | Specifies the selector to match an existing Persistent Volume. Used when 'existingClaim' is not defined. |
| workers.linux.persistence.size | string | "10Gi" | Specifies the Persistent Volume size. Used when 'existingClaim' is not defined. |
| workers.linux.persistence.storageClass | string | "" | Specifies the Persistent Volume storage class. Used when 'existingClaim' is not defined. |
| workers.linux.podAnnotations | pod.annotations | {} | Defines annotations to add to resulting Kubernetes pods. |
| workers.linux.podLabels | pod.labels | {} | Defines labels to add to resulting Kubernetes pods. |
| workers.linux.replicas | int | 1 | Specifies the number of sensor replicas. |
| workers.linux.resources | object | {"limits":{"cpu":8,"memory":"32Gi"},"requests":{"cpu":8,"memory":"32Gi"}} | Specifies resource requests (guaranteed resources) and limits for the pod. The default values are based on a generalized baseline and should be adjusted based on the sizing calculations in the ScanCentral SAST documentation. |
| workers.linux.restapiConnectTimeout | int | 10000 | Specifies the REST API connect timeout. |
| workers.linux.restapiReadTimeout | int | 30000 | Specifies the REST API read timeout. |
| workers.linux.scanTimeout | string | nil | Specifies the scan timeout. |
| workers.linux.sscProxyHost | string | nil | Specifies the proxy host used to connect to Software Security Center. |
| workers.linux.sscProxyPassword | string | nil | Specifies the proxy password. |
| workers.linux.sscProxyPort | string | nil | Specifies the proxy host port used to connect to Software Security Center. |
| workers.linux.sscProxyUser | string | nil | Specifies the username used to connect to the proxy. |
| workers.linux.tolerations | pod.tolerations | [] | Defines Toleration configurations to add to resulting Kubernetes pods. |
| workers.linux.topologySpreadConstraints | list | [] | Defines topology spread constraints, which can be used to balance load and manage "noisy neighbor" scenarios. |
| workers.linux.uuidDerivedFromPodName | bool | true | Specifies whether to assign UUIDs to sensors based on the namespace and pod name. Sensors keep the same UUID across restarts even if persistence is disabled. |
| workers.linux.workerCleanupAge | int | 168 | Specifies the sensor cleanup age. |
| workers.linux.workerCleanupInterval | int | 1 | Specifies the sensor cleanup interval. |