Kubernetes
Managing K8s cluster access through StrongDM's proxy network
What is a cluster in StrongDM?

A cluster in StrongDM refers to a Kubernetes cluster combined with the credentials needed to access it. StrongDM proxies kubectl commands through its network, letting you control exactly what users and applications can do inside a cluster.

Flow: User → kubectl command → StrongDM client → Node (gateway/relay) → K8s cluster (kube-apiserver)
Key benefit: StrongDM nodes inject credentials at the "last-mile" hop. Sensitive credentials are never exposed to users in any form — they are decrypted only when a cryptographically valid proxy requests them on behalf of a valid user session.

K8s RBAC overview

Kubernetes Role-Based Access Control (RBAC) limits what resources users and apps can access within a cluster. StrongDM leverages the K8s API to manage these controls.

Kind               | Function                                           | Scope
-------------------|----------------------------------------------------|--------------------------
Role               | Defines access to resources within the cluster     | Specified namespace only
ClusterRole        | Same as Role, but cluster-wide                     | All namespaces
RoleBinding        | Grants permissions defined in a Role to identities | Specified namespace only
ClusterRoleBinding | Grants permissions cluster-wide                    | All namespaces
Note: Discovery functionality requires the cluster resource to use a ClusterRole. RoleBindings can be used with either Roles or ClusterRoles.
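The namespace scoping described above can be illustrated with a minimal Role/RoleBinding pair. This is a sketch only — the names, namespace, and subject are hypothetical, not part of any StrongDM configuration:

```yaml
# Illustrative only: a namespace-scoped Role and its RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader        # hypothetical name
  namespace: staging      # scope: this namespace only
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: User
  name: alice@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role              # a RoleBinding may also reference a ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Swapping both kinds to ClusterRole/ClusterRoleBinding (and dropping the namespace) extends the same grant to all namespaces.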
Communicating with the K8s API

The StrongDM Node sends API requests to the kube-apiserver using a request-response model. Each request includes three types of data:

Baseline credentials
Grants the node access to the cluster. Level of access is defined on the K8s cluster itself.
Roles & rolebindings
Define which identities the baseline credential may impersonate, allowing it to act with higher privileges on behalf of a user.
kubectl commands
The StrongDM user's actual requests, passed through to the K8s cluster.
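In Kubernetes terms, this impersonation rests on the `impersonate` verb. A sketch of a ClusterRole that would let the baseline credential act as other users or groups follows — the ClusterRole name is hypothetical:

```yaml
# Illustrative sketch: grant the baseline credential the ability
# to impersonate users and groups (the mechanism identity aliases rely on).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strongdm-impersonator   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["users", "groups"]
  verbs: ["impersonate"]
```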
Baseline credentials by cluster type

Because different Cloud Service Providers have their own identity abstraction methods, this module uses baseline credentials as a universal term for the identity required to connect to a K8s cluster.

Cluster type | Credential field in StrongDM
-------------|------------------------------------------------------------------------------
AWS EKS      | Cluster Name + Region (instance profile) or Access Key ID + Secret Access Key
GKE          | Service Account Key (JSON file)
AKS          | API Token
AWS best practice: Use instance profiles for EKS. This leverages the EC2 instance's built-in key rotation — only the cluster name and region are required.

Authentication methods

Leased credentials

When using leased credentials, the credential configured on the K8s resource (bound to a ClusterRole) is the only identity passed to the cluster. All connections share this credential and its privilege level.

  • Simplest configuration — baseline credential is sufficient
  • All K8s connections share the same permissions
  • Less suitable for environments requiring granular per-user control
Because of the broad ClusterRole scope, leased credentials are not recommended for environments that need granular Kubernetes access control.
Identity aliases

Identity aliases allow the StrongDM gateway/relay to send user-specific information to the K8s API with each request, enabling per-user impersonation.

  • Maps StrongDM user identities to K8s users or groups
  • The email identity set is created in StrongDM by default
  • Requires a Healthcheck Username (e.g. sdm-health) and a Discovery Username
# Example: privilege escalation ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fse-users-aliases-bindings
subjects:
- kind: User
  name: fse-users
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: alice@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:                  # roleRef is required; the name below is illustrative
  kind: ClusterRole
  name: cluster-admin     # substitute the ClusterRole you actually grant
  apiGroup: rbac.authorization.k8s.io
If Healthcheck or Discovery usernames are misconfigured, identity aliases will not work.

Resource parameters

Cluster type options

StrongDM supports all major managed K8s services. The cluster type you select determines what credentials are required:

  • Elastic Kubernetes Service (EKS) — instance profile (best practice) or IAM access keys
  • Google Kubernetes Engine (GKE) — service account JSON key file
  • Azure Kubernetes Service (AKS) — API token
  • Kubernetes (Pod Identity) — node deployed inside the cluster using Helm charts
Enable resource discovery

When Enable Resource Discovery is turned on, StrongDM collects Users, Service Accounts, and Group information from the Kubernetes cluster. This data is then used in access workflows.

Flow: Discovery scans the K8s cluster's RBAC groups and collects groups, service accounts, and users; the collected data then feeds JIT access grants and standing access grants.
Important: Discovery requires the Kubernetes resource to be configured with a ClusterRole on the K8s cluster.
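The exact rules StrongDM requires for that ClusterRole are defined in its documentation; as a sketch only, a read-only ClusterRole sufficient to enumerate RBAC objects might look like this (name and rule set are assumptions):

```yaml
# Illustrative sketch: cluster-wide read access to the RBAC objects
# discovery needs. Check StrongDM's docs for the exact required rules.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strongdm-discovery      # hypothetical name
rules:
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["get", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
  verbs: ["get", "list"]
```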

Preparing the StrongDM client

To proxy kubectl commands through StrongDM, the user's kubectl tool must point to the StrongDM client loopback address.

# kubeconfig (~/.kube/config)
apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:10003   # StrongDM loopback + assigned port
  name: my-cluster

Generate this config automatically using:

# All clusters
sdm kubernetes update-config

# Specific cluster
sdm kubernetes update-config <CLUSTER_NAME>

Or via the StrongDM client UI: Settings → Update kubectl configuration

Knowledge check — Kubernetes

1. Why are leased credentials not recommended for environments requiring granular control?
A. They only work with EKS clusters
B. All connections share the same ClusterRole permissions, preventing per-user access control
C. They require a separate Healthcheck username for each user
D. Leased credentials cannot access GKE clusters
✓ Correct! With leased credentials, all K8s connections share the same ClusterRole and its permissions — there's no per-user differentiation.
✗ Not quite. The key limitation is that leased credentials apply the same ClusterRole permissions to every connection, making per-user granular control impossible.
2. What is the recommended AWS best practice for EKS baseline credentials?
A. Static IAM Access Key ID + Secret Access Key
B. Service Account JSON key file
C. Instance profiles — key management is handled by the EC2 host
D. API Token from the AWS console
✓ Correct! Instance profiles leverage the EC2 instance's built-in key rotation, requiring only the cluster name and region in StrongDM.
✗ Not quite. Instance profiles are the AWS best practice because they use the EC2 host for key management, avoiding long-lived static credentials.
Datasources
Database resources and policy-based access controls
What is a datasource?

A datasource is a combination of a specific database and the credentials required to access it. Once access is granted, Policy-based Action Controls (PBAC) govern what users can do with the database.

StrongDM currently supports 46 datasource types. Some databases offer more granular controls than others — PostgreSQL-based databases have the richest policy support.

Common parameters

All datasource types share these properties regardless of the underlying database.

Display name

The name used to identify the datasource in the StrongDM console. Cannot include special characters.

Datasource type

Specifies the database engine. Some databases have multiple types based on connection requirements. For example, MongoDB can be configured as:

  • Single Host
  • Replica Set
  • Sharded Cluster
Hostname & port

Hostname is the address (hostname or IP) used to connect to the database.

Port is auto-populated with the default port for the selected datasource type. Update it only if the database listens on a non-default port.

Secret store & credentials

The Secret Store field specifies where the datasource credentials are stored. It defaults to Strong Vault, which requires no additional configuration.

Username and password fields appear only when:

  • No secret store integration is configured, or
  • Strong Vault is selected as the secret store

For external secret stores (e.g. HashiCorp Vault, AWS Secrets Manager), credentials are fetched dynamically at runtime.

PostgreSQL-based datasources

StrongDM provides the most granular policy controls for PostgreSQL and compatible databases. Policy controls for other databases are limited to connection-level control only.

Supported for granular controls
  • Aurora PostgreSQL
  • Aurora PostgreSQL (IAM)
  • Azure Database for PostgreSQL
  • Azure PostgreSQL (Managed Identity)
  • Citus
  • CockroachDB
  • Greenplum
  • PostgreSQL
  • PostgreSQL (mTLS)
  • RDS PostgreSQL (IAM)
  • Redshift

Example: policy control for a destructive operation

StrongDM policies can detect high-risk SQL commands and trigger an action (MFA challenge, block, alert) before they reach the database:

# StrongDM Policy Editor action identifier
SQL::Action::"dropTable"

# Corresponds to the PostgreSQL command DROP TABLE, which removes tables
# from the database. Only the table owner, schema owner, or superuser
# can drop a table.
Use case: Configure a policy to require MFA whenever a user executes DROP TABLE, protecting against accidental or malicious schema destruction.
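StrongDM policies are written in a Cedar-based language. The following is a hypothetical sketch of such a rule — the `mfa` context attribute and the overall policy shape are assumptions for illustration, not confirmed StrongDM syntax:

```cedar
// Hypothetical sketch: block dropTable unless MFA was satisfied.
// "context.mfa" is an assumed attribute name, not confirmed by this module.
forbid (
  principal,
  action == SQL::Action::"dropTable",
  resource
) unless {
  context has mfa && context.mfa == true
};
```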

Knowledge check — datasources

Which statement about datasource username/password fields is correct?
A. They are always required for every datasource
B. They only appear when Strong Vault is selected or no secret store integration is configured
C. They are only available for PostgreSQL datasources
D. They replace the hostname field when credentials are stored externally
✓ Correct! Username/password fields appear only when using Strong Vault or when no external secret store is configured. External stores fetch credentials dynamically.
✗ Not quite. These fields appear only when Strong Vault is selected or when no secret store integration is configured.
Resource discovery
Automatically finding and onboarding cloud resources
The problem resource discovery solves

Manual resource onboarding works when you have a known, static inventory. But what if:

  • The administrator lacks a comprehensive, reliable inventory of resources to onboard?
  • Resources are created and terminated dynamically as part of cloud workflows?
Solution: Resource discovery

Key concepts

Connector
Scans cloud environments at scheduled intervals to search for and catalog resources.
Managed resource
A cloud resource that is already protected and proxied by StrongDM.
Discovered resource
Found by the connector but not yet onboarded to StrongDM. Can be promoted to a managed resource.

What can be discovered?

Cloud provider        | Discoverable resource types
----------------------|----------------------------
Amazon Web Services   | EC2, RDS, EKS
Google Cloud Platform | GCE, Cloud SQL, GKE
Microsoft Azure       | VM, SQL, AKS

Configuring a connector

Connectors are configured at Settings → Connectors in the Admin UI. Each connector needs:

Cloud type (AWS / GCP / Azure)

Selects the cloud platform the connector will scan: AWS, GCP, or Azure.

Node assignment

The connector relies on a Node (gateway, relay, or proxy worker) to perform scanning. The node must have read-only permissions on the cloud environment.

For AWS, required permissions include:

  • AmazonEC2ReadOnlyAccess — managed policy
  • AmazonRDSReadOnlyAccess — managed policy
  • Custom EKS read-only policy — no managed policy exists for this
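Because no AWS managed policy covers EKS read-only access, a custom IAM policy must be attached to the node's role. A minimal sketch follows — the exact action list should be taken from StrongDM's documentation; the actions shown are standard EKS read-only actions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EKSReadOnly",
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
```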
StrongDM documentation assumes the selected node was created within the same cloud environment it will scan.
Crawl interval

Connectors scan on a configurable schedule. Current options are:

  • 12 hours
  • 24 hours

Administrators can also manually trigger a scan at any time via Actions → Schedule Scan.

Discovery workflow

  1. Configure — set up a connector with the appropriate cloud type, node, and crawl interval
  2. Discover — at each crawl interval the connector scans the cloud environment and reports what it finds
  3. View — scan results appear at Resources → Discovered Resources. Filter by Kind, Cloud, or Status (Managed/Unmanaged)
  4. Manage — select an unmanaged resource and click Actions → Manage to onboard it to StrongDM. Discovery tag information is automatically carried over.
What information is collected for each discovered resource?

Resource details

  • Resource name
  • Tags (imported into StrongDM resource)
  • Endpoint details
  • Resource type

Metadata

  • First seen timestamp
  • Last seen timestamp
  • Updated at timestamp
  • Region (for cloud resources)

Knowledge check — resource discovery

An unmanaged resource has been discovered by a connector. What does "unmanaged" mean in this context?
A. The resource has no cloud tags attached to it
B. The resource is publicly accessible and not secured
C. StrongDM knows the resource exists but cannot yet proxy or manage access to it
D. The node does not have permission to scan this resource type
✓ Correct! An unmanaged resource is one StrongDM has discovered but not yet onboarded. It must be promoted via the Manage button before StrongDM can proxy traffic to it.
✗ Not quite. "Unmanaged" means StrongDM is aware the resource exists but has not onboarded it — so it cannot proxy or manage access to it yet.
Logs
Audit trails for platform activity and resource access
Overview

StrongDM logs are a core part of the platform. They provide a comprehensive audit trail for all activities within StrongDM and for every query made against protected resources.

Log types

Flow: activity logs record what happens in the StrongDM client and platform; session logs capture traffic between the client and the resource at the node; query logs record what the node sends to the resource.
Activity logs
Record admin and user actions within the StrongDM platform — logins, resource grants, policy changes, etc.
Session logs
Capture the full session between a user and a resource. SSH and Kubernetes sessions can be replayed from the Admin UI.
Query logs
Record individual queries sent from the StrongDM node to a datasource — including SQL statements for database resources.
Viewing Kubernetes session replays

Kubernetes session replays are available in the Admin UI at Logs → Kubernetes after the session ends. Each replay record shows:

  • Date and target cluster
  • Privilege level used
  • StrongDM user and authentication method
  • Session duration
  • First command executed

Replays can be viewed in the Admin UI or played back via the CLI:

sdm kubernetes play <SESSION_ID>
Error logs

Error logs record errors that occur throughout the StrongDM network — for example, failed connection attempts or credential issues. These are covered in detail in StrongDM 302 - Troubleshooting.

Module summary

What you've learned
  • How StrongDM proxies kubectl commands through its network to Kubernetes clusters
  • K8s RBAC concepts: Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings
  • Baseline credentials and how they differ across EKS, GKE, and AKS
  • Leased credentials vs identity aliases for authentication
  • How resource discovery and JIT/standing access grants work together
  • Datasource types and the special granular controls available for PostgreSQL
  • How connectors discover cloud resources and the manage workflow
  • The three StrongDM log types: activity, session, and query logs

Final knowledge check

What must be true for a StrongDM node to successfully perform resource discovery on an AWS environment?
A. The node must be the same version as the StrongDM Admin UI
B. The node must have read-only permissions to retrieve resource information from the cloud environment
C. A datasource must already be configured for the cloud account
D. Resource discovery only works with gateway nodes, not relays
✓ Correct! The StrongDM node must have read-only cloud permissions (e.g. AmazonEC2ReadOnlyAccess for EC2, AmazonRDSReadOnlyAccess for RDS) to scan and report discovered resources.
✗ Not quite. The key requirement is that the node has the appropriate read-only cloud permissions to scan the environment.