EKS Authentication Reimagined: The Definitive Guide to Access Entries
The aws-auth ConfigMap era is over. This is the complete technical reference covering architecture, entry types, access policies, namespace scoping, Terraform implementation, migration strategy, and every production gotcha you will not find in the AWS docs.
By Chinmaya Kumar Mishra — Principal Platform Engineer · CKA · AWS SAA · May 2026 · 28 min read
Why This Guide Exists
It’s 1:47 AM. Your on-call phone rings. A frantic engineer just ran kubectl edit configmap aws-auth -n kube-system to add a new pipeline role and introduced a YAML indentation error. The API server accepted the malformed mapping without complaint, but the authenticator can no longer parse it. Now nobody — not the team, not the CI/CD system, not the break-glass admin role — can authenticate to the cluster.
This is not a hypothetical. It happens at real organisations operating EKS at scale. The aws-auth ConfigMap is arguably the single most dangerous object in an EKS cluster — a Kubernetes resource responsible for cluster-wide authentication that can be accidentally deleted, silently corrupted, or overwritten by anyone with kubectl write access to the kube-system namespace.
EKS Access Entries change all of that fundamentally. Introduced in late 2023 and now GA, Access Entries move cluster authentication out of the Kubernetes data plane and into the AWS API — where it belongs.
This guide covers the complete mental model: the architecture, the types, the policies, the Terraform patterns, the migration steps, and the production pitfalls that most guides skip entirely.
Table of Contents
- The Problem with aws-auth
- The Architecture Shift
- Authentication Modes
- How Authentication Works — Step by Step
- Access Entry Types
- Access Policies — Deep Dive
- Namespace Scoping — The Blast Radius Boundary
- The RBAC Bridge
- Terraform Implementation
- AWS CLI Reference
- Migration Guide — aws-auth to Access Entries
- Production Hardening
- Common Pitfalls
- What Access Entries Don’t Do
1. The Problem with aws-auth
The aws-auth ConfigMap was never designed to be a production-grade authentication mechanism. It was a Kubernetes community workaround introduced before AWS had a proper API surface to expose cluster identity management. A stopgap that became load-bearing infrastructure for thousands of production clusters.
Here is an honest accounting of its failure modes:
- Accidental deletion locks the entire cluster — no access for anyone
- YAML indentation errors are silently accepted by kubectl apply but rejected at authentication time
- No CloudTrail audit trail — kubectl edit changes are invisible to AWS
- No validation before apply — no admission webhook protects it by default
- Race conditions when multiple automation tools write concurrently
- Lives inside the cluster — unavailable during control plane issues
- No native AWS API — requires kubectl or a Kubernetes SDK to manage
- Duplicate rolearn entries cause silent authentication failures
Beyond operational risk, aws-auth had a deeper architectural flaw: it violated separation of concerns. Authentication — answering “who are you?” — is an AWS IAM concern. But the answer lived in a Kubernetes object, managed by Kubernetes tooling, evaluated by the Kubernetes API server. Anything with kubectl write access to kube-system could accidentally or maliciously affect cluster-wide authentication.
The aws-auth ConfigMap was never meant to be a long-term authentication solution. EKS Access Entries are the architectural correction that should have existed from day one.
2. The Architecture Shift
EKS Access Entries move cluster authentication from the Kubernetes data plane into the AWS service layer. This is not a configuration change — it is a fundamental architectural realignment.
The Old Model: aws-auth ConfigMap
In the legacy model, EKS authentication worked like this:
1. A client presents an AWS IAM identity via a token generated by aws eks get-token
2. The EKS control plane validates the token against AWS IAM
3. The control plane looks up the IAM identity in the aws-auth ConfigMap in the kube-system namespace
4. If a matching entry is found, the IAM identity is mapped to a Kubernetes username and groups
5. Kubernetes RBAC evaluates the request against that username/group
The critical flaw is step 3. The aws-auth ConfigMap lived inside the cluster itself — a Kubernetes resource that could be accidentally deleted, corrupted, or misconfigured, locking out the very administrators who needed to fix it.
┌──────────────────────────────────────────┐
│ AWS IAM Layer │
│ IAM Role / User presents token │
└──────────────────┬───────────────────────┘
│ token validation
▼
┌──────────────────────────────────────────┐
│ EKS Control Plane │
│ Token validated — now... │
└──────────────────┬───────────────────────┘
│ reaches INTO cluster
▼
┌──────────────────────────────────────────┐
│ aws-auth ConfigMap ← WEAK LINK │
│ kube-system / INSIDE CLUSTER │
│ ✗ Deletable ✗ No audit ✗ YAML traps │
└──────────────────┬───────────────────────┘
│
▼
┌──────────────────────────────────────────┐
│ Kubernetes RBAC │
└──────────────────────────────────────────┘
The New Model: EKS Access Entries
In the new model, after token validation, the EKS control plane queries the EKS Service Layer — AWS-managed infrastructure that lives outside the cluster, protected by IAM, and fully independent of cluster health.
┌──────────────────────────────────────────┐
│ AWS IAM Layer │
│ IAM Role / User presents token │
└──────────────────┬───────────────────────┘
│ token validation
▼
┌──────────────────────────────────────────┐
│ EKS Control Plane │
│ Token validated — now... │
└──────────────────┬───────────────────────┘
│ queries AWS service layer
▼
┌──────────────────────────────────────────┐
│ EKS Access Entries │
│ AWS Service Layer │
│ OUTSIDE the cluster │
│ ✓ CloudTrail ✓ IAM-protected │
│ ✓ IaC native ✓ Cannot be kubectl'd │
└──────────────────┬───────────────────────┘
│ resolved identity + policies
▼
┌──────────────────────────────────────────┐
│ Kubernetes RBAC │
└──────────────────────────────────────────┘
The cluster cannot poison its own authentication system. This is the key guarantee Access Entries provide.
3. Authentication Modes
EKS clusters expose an authentication mode setting at the cluster level. There are exactly three options:
CONFIG_MAP
The legacy mode. Only aws-auth is used. Access Entries are not evaluated. This is the default for clusters created before Access Entries became available.
API_AND_CONFIG_MAP
The transition mode. Both aws-auth and Access Entries are evaluated. This exists specifically for gradual migration — you can incrementally move IAM principals from aws-auth to Access Entries without a hard cutover. If an identity exists in both, the Access Entry takes precedence.
API
The target state. Only Access Entries are used. The aws-auth ConfigMap is ignored entirely. This is the recommended end state for all clusters.
CONFIG_MAP ──→ API_AND_CONFIG_MAP (migration bridge) ──→ API (target)
⚠ One-directional. You cannot revert. Plan carefully.
Critical: Authentication mode migration is strictly one-directional: CONFIG_MAP → API_AND_CONFIG_MAP → API. You cannot revert. Always use API_AND_CONFIG_MAP as your intermediate state and verify all principals authenticate before switching to API.
Checking Your Current Mode
# Check current authentication mode
aws eks describe-cluster \
--name my-cluster \
--query 'cluster.accessConfig.authenticationMode' \
--output text
# Expected output: CONFIG_MAP | API_AND_CONFIG_MAP | API
New Cluster Default
Clusters created after November 2023 can be provisioned directly in API mode. For greenfield clusters, there is no reason to start with CONFIG_MAP. Set authentication_mode = "API" in Terraform from day one and skip the migration entirely.
4. How Authentication Works — Step by Step
Understanding the precise request lifecycle helps you debug authentication failures systematically.
1. The client obtains a bearer token via aws eks get-token — a presigned sts:GetCallerIdentity URL, base64-encoded, with a 15-minute TTL.
2. kubectl sends the request to the EKS API server with that bearer token in the Authorization header.
3. The EKS authenticator calls STS to verify the token and confirm the caller's IAM identity ARN.
4. The authenticator checks Access Entries in the AWS service layer and resolves the Kubernetes username, groups, and associated Access Policies for that ARN.
5. Kubernetes RBAC evaluates the request against the resolved identity and returns 200 OK or 403 Forbidden.
The Token Mechanics
EKS uses a pre-signed STS URL as the bearer token. When you run aws eks get-token, it generates a base64-encoded URL for sts:GetCallerIdentity — time-limited to 15 minutes. The EKS authenticator calls STS to verify this URL and extract the caller’s IAM ARN. The ARN is then matched against Access Entries. No shared secrets. No static tokens. No session state.
Tokens expire after 15 minutes. Kubernetes clients with a valid kubeconfig exec credential plugin automatically refresh tokens before expiry. If you see Unauthorized errors on long-running operations, check that your exec plugin is correctly configured to refresh.
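To see these mechanics for yourself, two commands are enough. A quick sketch — the cluster name is a placeholder, and the token prefix noted in the comment is how the CLI currently encodes the presigned URL:
# Which IAM identity will the token represent?
aws sts get-caller-identity
# Mint a token. The value starts with "k8s-aws-v1." followed by the
# base64url-encoded presigned sts:GetCallerIdentity URL (valid ~15 minutes)
aws eks get-token --cluster-name my-cluster \
  --query 'status.token' --output text | cut -c1-40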
5. Access Entry Types
Not all principals are equal, and Access Entries reflect this through a type field. This is the most commonly misunderstood aspect of the feature — many engineers assume Access Entries work the same way for nodes and humans. They do not.
Node-type entries authenticate infrastructure to the control plane. Standard entries authenticate humans and workloads to the cluster API. Conflating these two concerns is the most common misconfiguration.
STANDARD
For human operators and CI/CD service roles — any IAM role or user that needs kubectl access to the cluster. Developers, platform engineers, pipeline runners, and administrators all use this type. Standard entries support both AWS-managed Access Policies and custom Kubernetes RBAC groups.
Supports: Access Policies ✓ | Custom kubernetes_groups ✓
EC2_LINUX
For EC2-based Linux worker nodes. When a node joins the cluster, its EC2 instance profile needs to be recognised as a valid node identity. This entry type handles that — EKS implicitly assigns system:nodes and system:bootstrappers groups. You do not manually assign Access Policies to this type.
Supports: Access Policies ✗ | Auto-assigned implicit groups ✓
EC2_WINDOWS
Identical to EC2_LINUX but for Windows worker nodes. Same implicit group assignments, different OS context.
Supports: Access Policies ✗ | Auto-assigned implicit groups ✓
FARGATE_LINUX
For Fargate-based workloads. The Fargate pod execution role needs cluster-level authentication. EKS handles group assignment automatically at pod scheduling time.
Supports: Access Policies ✗ | Auto-assigned implicit groups ✓
HYBRID_LINUX
For on-premises or edge nodes connected via EKS Hybrid Nodes. The newest entry type, introduced alongside the Hybrid Nodes feature in EKS 1.28+.
Supports: Access Policies ✗ | Auto-assigned implicit groups ✓
Important: Attempting to associate an Access Policy with a node-type entry (EC2_LINUX, EC2_WINDOWS, FARGATE_LINUX, HYBRID_LINUX) returns an error. Only STANDARD entries support Access Policy associations and custom Kubernetes groups. This catches many engineers when first implementing Access Entries.
Entry Type Defaults
If you omit the type field, EKS does not infer it from the principal ARN — the entry defaults to STANDARD. A node role created without an explicit EC2_LINUX type will not receive the implicit system:nodes and system:bootstrappers groups, and its nodes will fail to join. Always set the type explicitly in IaC to avoid surprises.
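To confirm what EKS actually recorded for an entry, describe-access-entry returns the type on the entry object. A quick check, with a placeholder principal ARN:
# Confirm the type EKS recorded for an entry
aws eks describe-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/node-group-role \
  --query 'accessEntry.type' --output text
# Expected for a node role entry: EC2_LINUX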
6. Access Policies — Deep Dive
Access Policies are AWS-managed permission sets. You cannot create custom ones. The five general-purpose policies below cover human and workload access, each mapping to a well-understood Kubernetes permission profile.
AmazonEKSClusterAdminPolicy
Maps to cluster-admin ClusterRole. Full control over all resources in all namespaces — including RBAC itself. A principal with this policy can create, modify, and delete ClusterRoles and ClusterRoleBindings, effectively granting themselves or anyone else any permission in the cluster.
Reserve for: Break-glass access and Terraform automation that genuinely needs cluster-wide write. Nothing else.
Scope: Cluster only | RBAC writes: Yes (full)
AmazonEKSAdminPolicy
Broad admin access minus the ability to manage RBAC resources. Cannot create or modify ClusterRoles or ClusterRoleBindings. This prevents privilege escalation — an admin can operate the cluster but cannot elevate their own or others’ permissions. In multi-tenant environments this distinction is critical.
Assign to: Team leads, senior platform engineers who need broad operational access.
Scope: Cluster or Namespace | RBAC writes: No
AmazonEKSEditPolicy
Maps to the Kubernetes built-in edit ClusterRole. Can create, update, and delete most namespaced resources — Deployments, Services, ConfigMaps, Secrets — but cannot modify RBAC resources. Standard developer access level.
Assign to: Application developers, team engineers deploying into their namespaces.
Scope: Cluster or Namespace | RBAC writes: No
AmazonEKSViewPolicy
Maps to the Kubernetes built-in view ClusterRole. Read-only access to namespaced resources. Cannot read Secrets by default (this is intentional in the upstream Kubernetes view role).
Assign to: On-call engineers who need visibility without write access, observability tooling, dashboards.
Scope: Cluster or Namespace | RBAC writes: No
AmazonEKSAdminViewPolicy
Cluster-wide read-only access including RBAC resources — ClusterRoles, ClusterRoleBindings, ServiceAccounts. Distinct from AmazonEKSViewPolicy, which maps to the upstream view role and omits RBAC objects. This policy provides visibility into the entire cluster’s permission structure.
Assign to: Security teams and compliance auditors needing full cluster visibility.
Scope: Cluster only | RBAC writes: No (read-only)
A common confusion: AmazonEKSViewPolicy and AmazonEKSAdminViewPolicy sound similar but serve different purposes. The former is the namespaced view role for developers. The latter is cluster-wide read including RBAC resources for security auditors. Assign accordingly.
Listing Available Policies
# List all available access policies
aws eks list-access-policies --output table
# Look up a specific policy's ARN; the permission details for each policy are
# documented in the EKS User Guide (there is no per-policy describe call)
aws eks list-access-policies \
  --query "accessPolicies[?name=='AmazonEKSEditPolicy'].arn" --output text
7. Namespace Scoping — The Blast Radius Boundary
Every policy association on a STANDARD Access Entry carries an access scope. This is not a cluster-level setting — it is per-policy-association. A single Access Entry can have multiple policy associations with completely different scopes.
Scope Types
Cluster scope: The policy applies across the entire cluster. No namespace restriction. Use for platform teams, security tooling, and break-glass roles that genuinely need cluster-wide access.
Namespace scope: The policy applies only to the specified namespaces. You can list multiple namespaces in a single policy association.
The Recommended Multi-Policy Pattern
For engineering team roles in multi-tenant clusters, the recommended baseline is: broad read, narrow write.
IAM Role: team-payments-engineer
│
├── AmazonEKSViewPolicy → scope: CLUSTER
│ (read everything for debugging — pods, logs, events anywhere)
│
├── AmazonEKSEditPolicy → scope: NAMESPACE [payments, payments-staging]
│ (write only in their own namespaces)
│
└── kubernetes_groups: ["team:payments", "oncall:payments"]
(for custom RBAC — e.g. access to specific Secrets via RoleBinding)
This pattern gives developers the observability they need to debug issues anywhere in the cluster while limiting their write blast radius to their own namespaces. All three associations live on a single Access Entry.
Scope is per-association, not per-entry. A common mistake is thinking you set scope on the Access Entry itself. You don’t. Scope is set on each individual policy association. One entry can simultaneously have AmazonEKSViewPolicy with cluster scope and AmazonEKSEditPolicy with namespace scope. Plan your association matrix before provisioning.
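After provisioning, a quick sanity check of the association matrix with kubectl auth can-i catches scope mistakes early. A sketch assuming the team-payments-engineer example above:
# Authenticated as the team-payments-engineer role:
kubectl auth can-i list pods --all-namespaces       # yes (cluster-scoped view)
kubectl auth can-i create deployments -n payments   # yes (namespace-scoped edit)
kubectl auth can-i create deployments -n identity   # no  (outside the edit scope)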
8. The RBAC Bridge
AWS-managed Access Policies cover the 80% case — view, edit, admin, cluster-admin. But production environments always have requirements that don’t fit: access to specific Custom Resource Definitions, fine-grained Secret read access, or organisation-specific role hierarchies like pipeline-deployer or security-auditor.
The kubernetes_groups field on an Access Entry is your bridge to Kubernetes RBAC for everything custom. Any strings you place here are assigned to the authenticated principal as Kubernetes group memberships — exactly as if you had listed them in aws-auth under groups.
# Your Access Entry assigns group: "platform:oncall"
# (via kubernetes_groups in Terraform — shown in Section 9)
---
# Define a ClusterRole for your custom permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: platform-oncall
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "pods/exec"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch"] # can restart deployments
  - apiGroups: ["argoproj.io"] # Custom CRD — ArgoCD
    resources: ["applications"]
    verbs: ["get", "list"]
---
# Bind the ClusterRole to the group from the Access Entry
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-oncall-binding
subjects:
  - kind: Group
    name: "platform:oncall" # matches kubernetes_groups value exactly
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: platform-oncall
  apiGroup: rbac.authorization.k8s.io
Composability Principle
AWS Access Policies and Kubernetes RBAC groups are additive. A principal’s effective permissions are the union of everything granted by their Access Policies and everything granted via their Kubernetes group memberships. Use Access Policies for standard patterns; use RBAC groups for everything custom. Never recreate standard patterns in custom RBAC — that’s duplication without benefit.
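You can observe the union from the client side. A sketch assuming the platform:oncall example above and a reasonably recent kubectl (auth whoami needs roughly v1.27+); the argocd namespace is illustrative:
# Show the username and groups the API server resolved for your token
kubectl auth whoami
# Permission granted by the custom RBAC group, not by any Access Policy
kubectl auth can-i get applications.argoproj.io -n argocd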
9. Terraform Implementation
Cluster Configuration
resource "aws_eks_cluster" "main" {
name = var.cluster_name
role_arn = aws_iam_role.cluster.arn
version = "1.30"
access_config {
# For new clusters: set API from day one — no migration needed
authentication_mode = "API"
# For existing clusters being migrated: use "API_AND_CONFIG_MAP"
bootstrap_cluster_creator_admin_permissions = false
# Set false — manage cluster creator access explicitly via Access Entry
# This prevents an invisible auto-created ClusterAdmin entry
}
vpc_config {
subnet_ids = var.subnet_ids
}
}
bootstrap_cluster_creator_admin_permissions: When true (the default), EKS automatically creates a STANDARD Access Entry granting AmazonEKSClusterAdminPolicy to the IAM principal that created the cluster. Set this to false and create the admin entry explicitly in Terraform so it is tracked in state and auditable. Invisible auto-created entries are an operational surprise waiting to happen.
Node Group Access Entry
# Node group IAM role — type EC2_LINUX
# No policy association needed — EKS assigns groups implicitly
resource "aws_eks_access_entry" "node_group" {
cluster_name = aws_eks_cluster.main.name
principal_arn = aws_iam_role.node_group.arn
type = "EC2_LINUX" # always set explicitly — do not rely on inference
tags = var.common_tags
}
Engineering Team Access — Multi-Policy Pattern
# STANDARD entry for an engineering team IAM role
resource "aws_eks_access_entry" "team_payments_engineer" {
cluster_name = aws_eks_cluster.main.name
principal_arn = "arn:aws:iam::123456789012:role/team-payments-engineer"
type = "STANDARD"
kubernetes_groups = ["team:payments", "oncall:payments"]
tags = merge(var.common_tags, {
Team = "payments"
Purpose = "engineer-access"
})
}
# Policy 1: Cluster-wide view (for debugging anywhere in cluster)
resource "aws_eks_access_policy_association" "payments_view_cluster" {
cluster_name = aws_eks_cluster.main.name
principal_arn = "arn:aws:iam::123456789012:role/team-payments-engineer"
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
access_scope {
type = "cluster"
}
depends_on = [aws_eks_access_entry.team_payments_engineer]
# depends_on is mandatory — policy association must be created after entry
}
# Policy 2: Namespace-scoped edit (write only in their namespaces)
resource "aws_eks_access_policy_association" "payments_edit_ns" {
cluster_name = aws_eks_cluster.main.name
principal_arn = "arn:aws:iam::123456789012:role/team-payments-engineer"
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy"
access_scope {
type = "namespace"
namespaces = ["payments", "payments-staging"]
}
depends_on = [aws_eks_access_entry.team_payments_engineer]
}
depends_on is required here. Policy associations must be created after their parent Access Entry, and because this example hardcodes principal_arn as a plain string, Terraform sees no implicit dependency; its parallel execution engine may try to create the association before the entry exists, resulting in a ResourceNotFoundException. Referencing the entry's attributes instead (for example principal_arn = aws_eks_access_entry.team_payments_engineer.principal_arn) creates the same ordering implicitly. This is the most common Terraform apply failure when first implementing Access Entries.
Looping Over Multiple Teams — Production Pattern
locals {
team_access = {
"payments" = {
role_arn = "arn:aws:iam::123456789012:role/team-payments-engineer"
namespaces = ["payments", "payments-staging"]
groups = ["team:payments"]
}
"identity" = {
role_arn = "arn:aws:iam::123456789012:role/team-identity-engineer"
namespaces = ["identity", "identity-staging"]
groups = ["team:identity"]
}
"platform" = {
role_arn = "arn:aws:iam::123456789012:role/team-platform-engineer"
namespaces = [] # platform team gets cluster scope — see below
groups = ["team:platform", "oncall:platform"]
}
}
}
resource "aws_eks_access_entry" "teams" {
for_each = local.team_access
cluster_name = aws_eks_cluster.main.name
principal_arn = each.value.role_arn
type = "STANDARD"
kubernetes_groups = each.value.groups
}
# View policy: cluster scope for all teams
resource "aws_eks_access_policy_association" "teams_view" {
for_each = local.team_access
cluster_name = aws_eks_cluster.main.name
principal_arn = each.value.role_arn
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
access_scope { type = "cluster" }
depends_on = [aws_eks_access_entry.teams]
}
# Edit policy: namespace scope for non-platform teams only
resource "aws_eks_access_policy_association" "teams_edit" {
for_each = { for k, v in local.team_access : k => v if length(v.namespaces) > 0 }
cluster_name = aws_eks_cluster.main.name
principal_arn = each.value.role_arn
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy"
access_scope {
type = "namespace"
namespaces = each.value.namespaces
}
depends_on = [aws_eks_access_entry.teams]
}
Break-Glass Admin Entry
# A dedicated break-glass role — not used for normal operations
# Protected by SCP to require MFA and restrict to corporate IPs
resource "aws_eks_access_entry" "break_glass" {
cluster_name = aws_eks_cluster.main.name
principal_arn = "arn:aws:iam::123456789012:role/eks-break-glass"
type = "STANDARD"
tags = {
Purpose = "break-glass"
Alert = "true" # tag-based CloudWatch alarm triggers on any use
}
}
resource "aws_eks_access_policy_association" "break_glass_admin" {
cluster_name = aws_eks_cluster.main.name
principal_arn = "arn:aws:iam::123456789012:role/eks-break-glass"
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
access_scope { type = "cluster" }
depends_on = [aws_eks_access_entry.break_glass]
}
10. AWS CLI Reference
# ─── CLUSTER AUTH MODE ─────────────────────────────────────────
# Check current mode
aws eks describe-cluster --name my-cluster \
--query 'cluster.accessConfig'
# Update auth mode (one-directional — cannot revert)
aws eks update-cluster-config --name my-cluster \
--access-config authenticationMode=API_AND_CONFIG_MAP
# Wait for update to complete
aws eks wait cluster-active --name my-cluster
# ─── ACCESS ENTRIES ────────────────────────────────────────────
# List all access entries
aws eks list-access-entries --cluster-name my-cluster
# Create a STANDARD entry
aws eks create-access-entry \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/my-role \
--type STANDARD \
--kubernetes-groups team:platform oncall:platform
# Create a node entry
aws eks create-access-entry \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/node-group-role \
--type EC2_LINUX
# Describe a specific entry
aws eks describe-access-entry \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/my-role
# Delete an entry
aws eks delete-access-entry \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/my-role
# ─── POLICY ASSOCIATIONS ───────────────────────────────────────
# Associate a policy — cluster scope
aws eks associate-access-policy \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/my-role \
--policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
--access-scope type=cluster
# Associate a policy — namespace scope
aws eks associate-access-policy \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/my-role \
--policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
--access-scope '{"type":"namespace","namespaces":["payments","payments-staging"]}'
# JSON form shown; the CLI shorthand syntax is awkward for namespace lists
# List policies on an entry
aws eks list-associated-access-policies \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/my-role
# Disassociate a policy
aws eks disassociate-access-policy \
--cluster-name my-cluster \
--principal-arn arn:aws:iam::123456789012:role/my-role \
--policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy
11. Migration Guide — aws-auth to Access Entries
Migration on a live production cluster requires precision. Each step is sequenced to avoid any authentication disruption.
Step 1 — Audit your current aws-auth ConfigMap
Extract all current IAM mappings. This is your migration checklist — every entry needs an equivalent Access Entry.
kubectl get configmap aws-auth -n kube-system -o yaml > aws-auth-backup.yaml
cat aws-auth-backup.yaml
Note every rolearn, its mapped Kubernetes username, and its Kubernetes groups. You will recreate each as an Access Entry with equivalent Access Policies and kubernetes_groups.
Step 2 — Switch to API_AND_CONFIG_MAP mode
Zero impact — both systems active. Do not skip this and jump to API mode.
aws eks update-cluster-config --name my-cluster \
--access-config authenticationMode=API_AND_CONFIG_MAP
# Wait for update to complete (~2 minutes)
aws eks wait cluster-active --name my-cluster
Step 3 — Create Access Entries for all principals
Start with the most critical principals — cluster admins, CI/CD pipelines, node groups. Verify each one by authenticating with that role after creation.
# After creating each entry, verify authentication with that role
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/my-role \
  --role-session-name verify-access-entry \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
# With the assumed credentials exported, verify kubectl access
aws eks update-kubeconfig --name my-cluster
kubectl auth can-i get pods --namespace default
Step 4 — Remove entries from aws-auth incrementally
Start with the least-critical principals. Remove their rolearn entries from aws-auth. Verify they still authenticate (now via Access Entry, not aws-auth). Work up to the most critical principals last — never remove cluster admin access from aws-auth until its Access Entry equivalent is confirmed working.
Step 5 — Switch to API mode
Only once aws-auth is empty and all principals have been verified via Access Entries. This is final and irreversible.
aws eks update-cluster-config --name my-cluster \
--access-config authenticationMode=API
Step 6 — Delete the aws-auth ConfigMap (optional)
In API mode, aws-auth is ignored entirely. Deleting it removes the operational temptation to edit it and makes it clear the cluster has fully migrated.
kubectl delete configmap aws-auth -n kube-system
12. Production Hardening
Restrict Who Can Manage Access Entries
Use IAM policies or SCPs to restrict which roles can create, modify, or delete Access Entries. Only your Terraform automation role should have mutation access.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAccessEntryRead",
"Effect": "Allow",
"Action": [
"eks:ListAccessEntries",
"eks:DescribeAccessEntry",
"eks:ListAssociatedAccessPolicies"
],
"Resource": "*"
},
{
"Sid": "DenyAccessEntryMutationForNonTerraform",
"Effect": "Deny",
"NotAction": [
"eks:ListAccessEntries",
"eks:DescribeAccessEntry",
"eks:ListAssociatedAccessPolicies"
],
"Resource": "arn:aws:eks:*:*:access-entry/*",
"Condition": {
"StringNotEquals": {
"aws:PrincipalArn":
"arn:aws:iam::123456789012:role/platform-terraform"
}
}
}
]
}
CloudTrail Alerting on Access Entry Changes
Every Access Entry mutation produces a CloudTrail event. Set up EventBridge rules to alert on the following event names — especially for production clusters:
- CreateAccessEntry
- DeleteAccessEntry
- AssociateAccessPolicy
- DisassociateAccessPolicy
- UpdateAccessEntry
- UpdateClusterConfig — specifically for authentication mode changes
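A minimal EventBridge sketch for the rule side. The rule name is arbitrary, and you still need to attach a target (an SNS topic or your incident tooling) with put-targets:
# Alert on Access Entry mutations recorded by CloudTrail
aws events put-rule \
  --name eks-access-entry-changes \
  --event-pattern '{
    "source": ["aws.eks"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
      "eventSource": ["eks.amazonaws.com"],
      "eventName": ["CreateAccessEntry", "DeleteAccessEntry", "AssociateAccessPolicy",
                    "DisassociateAccessPolicy", "UpdateAccessEntry", "UpdateClusterConfig"]
    }
  }'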
Drift Detection via Terraform
Run terraform plan in your pipeline on a schedule — not just on code changes — to detect out-of-band Access Entry modifications made via the AWS Console or CLI. Access Entries are AWS API objects, so your Terraform code and state are the source of truth. Any manual change will surface as drift on the next plan.
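A scheduled pipeline job can key off plan's detailed exit code. A minimal sketch (0 means no changes, 1 means error, 2 means changes or drift):
terraform plan -input=false -lock=false -detailed-exitcode
rc=$?
if [ "$rc" -eq 2 ]; then
  echo "Drift detected: Access Entries (or other resources) changed outside Terraform"
elif [ "$rc" -ne 0 ]; then
  echo "terraform plan failed" >&2
fi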
Enforce Minimum IAM Permissions on Entry Creation
Tag Access Entries consistently and use tag-based IAM conditions to enforce policies. For example, require the Team tag on all Access Entries and alert when untagged entries appear — they may indicate out-of-band creation.
13. Common Pitfalls
Pitfall 1 — Jumping to API mode without validating all principals
The most catastrophic migration error. Switch to API_AND_CONFIG_MAP first. Test every IAM principal that needs cluster access. Only switch to API after full validation. Skipping this locks out principals with no recovery path except recreating Access Entries from outside the cluster.
Pitfall 2 — Assigning AmazonEKSClusterAdminPolicy too broadly
This policy grants the ability to modify RBAC — including creating ClusterRoleBindings that grant any permission. One compromised role with ClusterAdmin can grant itself or anyone else full cluster access. Reserve this for dedicated break-glass and Terraform automation roles only. For team leads and senior engineers, use AmazonEKSAdminPolicy instead.
Pitfall 3 — Missing node group Access Entries during cluster recreation
If you delete and recreate an EKS cluster, all Access Entries are destroyed. Your Terraform must include node group Access Entries or nodes will fail to join. This catches teams who manage the cluster and node groups in separate Terraform states without explicit depends_on wiring between them.
Pitfall 4 — IAM role session ARN vs role ARN mismatch
Access Entries match on the base IAM role ARN, not on assumed-role session ARNs. If you accidentally create an entry with the assumed-role ARN format (arn:aws:sts::123456789012:assumed-role/...) instead of the IAM role ARN format (arn:aws:iam::123456789012:role/...), it will never match. Always use the base IAM role ARN.
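The mapping back from a session ARN to the base role ARN is mechanical. For example, checking the identity you currently hold:
# Example: the session ARN STS reports for an assumed role
aws sts get-caller-identity --query 'Arn' --output text
# arn:aws:sts::123456789012:assumed-role/team-payments-engineer/ci-session
#
# The Access Entry must use the base IAM role ARN instead:
#   arn:aws:iam::123456789012:role/team-payments-engineer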
Pitfall 5 — Duplicate groups during API_AND_CONFIG_MAP migration
During migration, if both an Access Entry’s kubernetes_groups and the legacy aws-auth assign groups to the same principal, the union of both sets applies. This can cause a principal to have more permissions than intended. Audit group assignments carefully during the transition period.
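During the transition window it is worth auditing both sources of group membership for each principal. A sketch, with a placeholder role ARN:
# Groups the legacy ConfigMap still assigns
kubectl get configmap aws-auth -n kube-system -o yaml | grep -B1 -A3 'rolearn'
# Groups the Access Entry assigns
aws eks describe-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/my-role \
  --query 'accessEntry.kubernetesGroups'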
Pitfall 6 — Forgetting bootstrap_cluster_creator_admin_permissions
When true (the default), an invisible Access Entry granting ClusterAdmin is automatically created for the IAM principal that provisioned the cluster. This entry exists outside your Terraform state. Set bootstrap_cluster_creator_admin_permissions = false and create the admin entry explicitly so it is tracked, audited, and managed as code.
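Two quick checks help confirm nothing lives outside your Terraform state; the accessConfig field name below is as exposed by describe-cluster:
# Was the creator's admin entry auto-created at provisioning time?
aws eks describe-cluster --name my-cluster \
  --query 'cluster.accessConfig.bootstrapClusterCreatorAdminPermissions'
# List every entry and reconcile against what Terraform manages
aws eks list-access-entries --cluster-name my-cluster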
14. What Access Entries Don’t Do
They don’t replace IRSA
IRSA (IAM Roles for Service Accounts) handles pod-level IAM identity — the ability for a running pod to call AWS APIs using a scoped IAM role. Access Entries handle cluster API authentication — the ability to run kubectl commands. These are orthogonal concerns. A mature cluster needs both. Do not conflate them.
They don’t enforce network-level access
Access Entries control what an authenticated principal can do inside Kubernetes, not whether they can reach the API server endpoint. VPN requirements, private endpoint configuration, and security group rules on the API server are separate, independent controls.
They don’t support custom Access Policies
The Access Policy catalogue is AWS-managed and you cannot create custom policies. For permissions beyond the standard profiles covered above, use the kubernetes_groups field to assign Kubernetes group memberships, then manage fine-grained permissions with your own ClusterRoles and RoleBindings.
They don’t replace Kubernetes audit logging
Access Entries record mutations to authentication configuration in CloudTrail. The Kubernetes audit log — recording which kubectl commands were executed, what resources were accessed, and what changes were made inside the cluster — is configured separately via EKS control plane logging settings. Both are needed for complete observability.
They don’t apply retroactively to existing sessions
Modifying an Access Entry (adding or removing a policy association, changing Kubernetes groups) takes effect on the next authentication token — within 15 minutes for interactive users. Existing active sessions are not immediately revoked. If you need immediate revocation, the IAM role itself needs to be restricted.
Conclusion
EKS Access Entries are not a minor feature increment — they represent a genuine architectural correction to how cluster authentication should work. Moving authentication out of the Kubernetes data plane and into the AWS API layer eliminates an entire class of operational risk that anyone who has managed aws-auth at scale will recognise immediately.
The mental model to carry forward: Access Entries are your IAM-to-Kubernetes identity bridge. Access Policies are your AWS-managed permission sets for standard patterns. Kubernetes RBAC groups are your extension point for everything custom. And authentication mode is the dial that lets you migrate safely without a hard cutover.
For new clusters, there is no decision to make — set API mode from day one. For existing clusters, use API_AND_CONFIG_MAP, migrate incrementally, validate every principal, then switch to API. Delete the ConfigMap when you’re done. Don’t look back.
The aws-auth era is over. It ended at 1:47 AM the last time someone ran kubectl edit and introduced a YAML indent error. Let it stay over.
If this article was useful, the same design thinking applies to IRSA, Security Groups for Pods, and Kyverno admission control — topics covered in my other articles on zero-trust EKS security and federated observability at scale.
Chinmaya Kumar Mishra is a Principal Platform Engineer and EKS Architect with 18 years of engineering experience. CKA · AWS SAA · CKS in progress. Published on Medium.