A cluster in StrongDM refers to a Kubernetes cluster combined with the credentials needed to access it. StrongDM proxies kubectl commands through its network, letting you control exactly what users and applications can do inside a cluster.
Traffic flow: kubectl cmd → gateway/relay → kube-apiserver
K8s RBAC overview
Kubernetes Role-Based Access Control (RBAC) limits what resources users and apps can access within a cluster. StrongDM leverages the K8s API to manage these controls.
| Kind | Function | Scope |
|---|---|---|
| Role | Defines access to resources within the cluster | Specified namespace only |
| ClusterRole | Same as Role, but cluster-wide | All namespaces |
| RoleBinding | Grants permissions defined in a Role to identities | Specified namespace only |
| ClusterRoleBinding | Grants permissions cluster-wide | All namespaces |
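As an illustration of the table above, a minimal Role and RoleBinding pair might look like the following sketch (the namespace, role name, and subject identity are hypothetical):

```yaml
# Hypothetical example: read-only access to pods in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]            # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane@example.com     # hypothetical identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding use the same structure, minus the `namespace` field.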
The StrongDM Node sends API requests to the kube-apiserver using a request-response model. Each request includes three types of data.
Because different Cloud Service Providers have their own identity abstraction methods, this module uses baseline credentials as a universal term for the identity required to connect to a K8s cluster.
| Cluster type | Credential field in StrongDM |
|---|---|
| AWS EKS | Cluster Name + Region (instance profile) or Access Key ID + Secret Access Key |
| GKE | Service Account Key (JSON file) |
| AKS | API Token |
Authentication methods
When using leased credentials, the credential configured on the K8s resource (typically bound to a ClusterRole) is the only identity passed to the cluster. All connections share this credential and its privilege level.
- Simplest configuration — baseline credential is sufficient
- All K8s connections share the same permissions
- Less suitable for environments requiring granular per-user control
Identity aliases allow the StrongDM gateway/relay to send user-specific information to the K8s API with each request, enabling per-user impersonation.
- Maps StrongDM user identities to K8s users or groups
- The email identity set is created in StrongDM by default
- Requires a Healthcheck Username (e.g. `sdm-health`) and a Discovery Username
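On the cluster side, each impersonated user or group still needs its own RBAC grant. A minimal sketch, assuming a StrongDM identity alias mapped to a hypothetical `sdm-developers` group:

```yaml
# Hypothetical: grant the impersonated "sdm-developers" group
# read-only access cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sdm-developers-view
subjects:
  - kind: Group
    name: sdm-developers       # group name sent via identity alias (assumed)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                   # Kubernetes' built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```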
Resource parameters
StrongDM supports all major managed K8s services. The cluster type you select determines what credentials are required:
- Elastic Kubernetes Service (EKS) — instance profile (best practice) or IAM access keys
- Google Kubernetes Engine (GKE) — service account JSON key file
- Azure Kubernetes Service (AKS) — API token
- Kubernetes (Pod Identity) — node deployed inside the cluster using Helm charts
When Enable Resource Discovery is turned on, StrongDM collects Users, Service Accounts, and Group information from the Kubernetes cluster. This data is then used in access workflows.
RBAC groups
Preparing the StrongDM client
To proxy kubectl commands through StrongDM, the user's kubectl tool must point to the StrongDM client loopback address.
This configuration can be generated automatically with the StrongDM CLI, or via the StrongDM client UI: Settings → Update kubectl configuration
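The resulting kubeconfig simply points the cluster server at the local StrongDM client. A minimal sketch (the port, cluster name, and user entry are illustrative assumptions; the generated file will differ):

```yaml
# Hypothetical kubeconfig fragment pointing kubectl at the StrongDM client.
apiVersion: v1
kind: Config
clusters:
  - name: strongdm-demo-cluster        # assumed name
    cluster:
      server: https://127.0.0.1:9999   # loopback address; port is illustrative
contexts:
  - name: strongdm-demo-cluster
    context:
      cluster: strongdm-demo-cluster
      user: strongdm
current-context: strongdm-demo-cluster
users:
  - name: strongdm
    user: {}                           # auth is handled by the StrongDM client
```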
Knowledge check — Kubernetes
A datasource is a combination of a specific database and the credentials required to access it. Once access is granted, policy-based access controls (PBAC) govern what users can do with the database.
Common parameters
All datasource types share these properties regardless of the underlying database.
Name — the name used to identify the datasource in the StrongDM console. It cannot include special characters.
Type — specifies the database engine. Some databases have multiple types based on connection requirements. For example, MongoDB can be configured as:
- Single Host
- Replica Set
- Sharded Cluster
Hostname — the address (hostname or IP) used to connect to the database.
Port — auto-populated with the default port for the selected datasource type. Update it only if the database listens on a non-default port.
Secret Store — specifies where the datasource credentials are stored. Defaults to Strong Vault, which requires no additional configuration.
Username and password fields appear only when:
- No secret store integration is configured, or
- Strong Vault is selected as the secret store
For external secret stores (e.g. HashiCorp Vault, AWS Secrets Manager), credentials are fetched dynamically at runtime.
PostgreSQL-based datasources
StrongDM provides the most granular policy controls for PostgreSQL and compatible databases. Policy controls for other databases are limited to the connection level.
Example: policy control for a destructive operation
StrongDM policies can detect high-risk SQL commands and trigger an action (MFA challenge, block, alert) before they reach the database:
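A sketch of such a policy follows. It uses Cedar-style syntax; the entity model and context attributes shown here are illustrative assumptions, not StrongDM's actual policy schema:

```
// Illustrative Cedar-style sketch (hypothetical schema):
// block any query containing DROP TABLE unless MFA was verified.
forbid (principal, action, resource)
when { context.query like "*DROP TABLE*" }
unless { context.mfaVerified };
```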
One common target is `DROP TABLE`, protecting against accidental or malicious schema destruction.

Knowledge check — datasources
Manual resource onboarding works when you have a known, static inventory. But what if:
- The administrator lacks a comprehensive, reliable inventory of resources to onboard?
- Resources are created and terminated dynamically as part of cloud workflows?
Key concepts
What can be discovered?
| Cloud provider | Discoverable resource types |
|---|---|
| Amazon Web Services | EC2, RDS, EKS |
| Google Cloud Platform | GCE, Cloud SQL, GKE |
| Microsoft Azure | VM, SQL, AKS |
Configuring a connector
Connectors are configured at Settings → Connectors in the Admin UI. Each connector needs:
Selects the cloud platform the connector will scan. There are three types: AWS, GCP, and Azure.
The connector relies on a Node (gateway, relay, or proxy worker) to perform scanning. The node must have read-only permissions on the cloud environment.
For AWS, required permissions include:
- `AmazonEC2ReadOnlyAccess` — managed policy
- `AmazonRDSReadOnlyAccess` — managed policy
- Custom EKS read-only policy — no managed policy exists for this
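The custom EKS read-only policy has to be written by hand. A sketch of what it might contain (verify the exact action list against current AWS documentation before use):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EKSReadOnly",
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
```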
Connectors scan on a configurable schedule. Current options are:
- 12 hours
- 24 hours
Administrators can also manually trigger a scan at any time via Actions → Schedule Scan.
Discovery workflow
- Configure — set up a connector with the appropriate cloud type, node, and crawl interval
- Discover — at each crawl interval the connector scans the cloud environment and reports what it finds
- View — scan results appear at Resources → Discovered Resources. Filter by Kind, Cloud, or Status (Managed/Unmanaged)
- Manage — select an unmanaged resource and click Actions → Manage to onboard it to StrongDM. Discovery tag information is automatically carried over.
Resource details
- Resource name
- Tags (imported into StrongDM resource)
- Endpoint details
- Resource type
Metadata
- First seen timestamp
- Last seen timestamp
- Updated at timestamp
- Region (for cloud resources)
Knowledge check — resource discovery
StrongDM logs are a core part of the platform. They provide a comprehensive audit trail for all activities within StrongDM and for every query made against protected resources.
Log types
Kubernetes session replays are available in the Admin UI at Logs → Kubernetes after the session ends. Each replay record shows:
- Date and target cluster
- Privilege level used
- StrongDM user and authentication method
- Session duration
- First command executed
Replays can be viewed in the Admin UI or played back via the CLI.
Error logs record errors that occur throughout the StrongDM network — for example, failed connection attempts or credential issues. These are covered in detail in StrongDM 302 - Troubleshooting.
Module summary
- How StrongDM proxies kubectl commands through its network to Kubernetes clusters
- K8s RBAC concepts: Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings
- Baseline credentials and how they differ across EKS, GKE, and AKS
- Leased credentials vs identity aliases for authentication
- How resource discovery and JIT/standing access grants work together
- Datasource types and the special granular controls available for PostgreSQL
- How connectors discover cloud resources and the manage workflow
- The three StrongDM log types: activity, session, and query logs