Create an AWS EKS Cluster for Cloudmersive Private Cloud CLI Only
11/30/2025 - Cloudmersive Support


0. Prereqs (one‑time on your machine)

You need:

  • AWS CloudShell with administrator privileges

You do not need eksctl or the web console.
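
If you want to confirm the session is ready before you start, an optional sanity check is to print the identity and CLI version CloudShell is using:

aws sts get-caller-identity
aws --version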


1. Set fixed variables (names, region, sizes)

Run this once in your shell:

export AWS_REGION=us-east-1

export CLUSTER_NAME=eks-windows-demo
export VPC_STACK_NAME=eks-windows-vpc

export CLUSTER_ROLE_NAME=eksWindowsClusterRole
export LINUX_NODE_ROLE_NAME=eksWindowsLinuxNodeRole
export WINDOWS_NODE_ROLE_NAME=eksWindowsWindowsNodeRole

export LINUX_NODEGROUP_NAME=eks-windows-linux-ng
export WINDOWS_NODEGROUP_NAME=eks-windows-win-ng

# Instance types & sizes
export LINUX_INSTANCE_TYPE=t3.medium
export WINDOWS_INSTANCE_TYPE=m5.large

export LINUX_DESIRED_SIZE=2
export WINDOWS_DESIRED_SIZE=2

We’ll refer only to these variables from now on—no manual name filling.
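
If you want to confirm nothing was mistyped, one optional check is to echo the exported values back:

env | grep -E 'AWS_REGION|CLUSTER_NAME|VPC_STACK_NAME|ROLE_NAME|NODEGROUP_NAME|INSTANCE_TYPE|DESIRED_SIZE'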


2. Create an EKS-ready VPC via CloudFormation

Use the official amazon-eks-vpc-private-subnets.yaml template that creates a VPC with 2 private + 2 public subnets suitable for EKS.

aws cloudformation create-stack \
  --region $AWS_REGION \
  --stack-name $VPC_STACK_NAME \
  --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml \
  --parameters \
    ParameterKey=VpcBlock,ParameterValue=10.0.0.0/16 \
    ParameterKey=PrivateSubnet01Block,ParameterValue=10.0.0.0/19 \
    ParameterKey=PrivateSubnet02Block,ParameterValue=10.0.32.0/19 \
    ParameterKey=PublicSubnet01Block,ParameterValue=10.0.64.0/20 \
    ParameterKey=PublicSubnet02Block,ParameterValue=10.0.80.0/20

Wait for it to finish:

aws cloudformation wait stack-create-complete \
  --region $AWS_REGION \
  --stack-name $VPC_STACK_NAME

Grab the subnet IDs directly from the stack (no manual copying):

export PRIVATE_SUBNET_1=$(aws cloudformation describe-stack-resources \
  --region $AWS_REGION \
  --stack-name $VPC_STACK_NAME \
  --query "StackResources[?LogicalResourceId=='PrivateSubnet01'].PhysicalResourceId" \
  --output text)

export PRIVATE_SUBNET_2=$(aws cloudformation describe-stack-resources \
  --region $AWS_REGION \
  --stack-name $VPC_STACK_NAME \
  --query "StackResources[?LogicalResourceId=='PrivateSubnet02'].PhysicalResourceId" \
  --output text)

export PUBLIC_SUBNET_1=$(aws cloudformation describe-stack-resources \
  --region $AWS_REGION \
  --stack-name $VPC_STACK_NAME \
  --query "StackResources[?LogicalResourceId=='PublicSubnet01'].PhysicalResourceId" \
  --output text)

export PUBLIC_SUBNET_2=$(aws cloudformation describe-stack-resources \
  --region $AWS_REGION \
  --stack-name $VPC_STACK_NAME \
  --query "StackResources[?LogicalResourceId=='PublicSubnet02'].PhysicalResourceId" \
  --output text)

export CLUSTER_SUBNET_IDS="${PRIVATE_SUBNET_1},${PRIVATE_SUBNET_2},${PUBLIC_SUBNET_1},${PUBLIC_SUBNET_2}"

(You could restrict the cluster to the private subnets only, but using all four is fine for a general lab.)
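
Before moving on, it's worth confirming that all four lookups actually returned subnet IDs (an empty value usually means a typo in the stack name or region):

echo "Private subnets: $PRIVATE_SUBNET_1 $PRIVATE_SUBNET_2"
echo "Public subnets:  $PUBLIC_SUBNET_1 $PUBLIC_SUBNET_2"
echo "Cluster subnets: $CLUSTER_SUBNET_IDS"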


3. Create IAM roles (cluster + Linux node + Windows node)

3.1 Cluster IAM role (eksWindowsClusterRole)

Trust policy for EKS control plane:

cat > eks-cluster-role-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name $CLUSTER_ROLE_NAME \
  --assume-role-policy-document file://eks-cluster-role-trust-policy.json

aws iam attach-role-policy \
  --role-name $CLUSTER_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

# Needed for Windows VPC resource controller / Windows IPAM
aws iam attach-role-policy \
  --role-name $CLUSTER_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController

3.2 Node IAM roles (Linux + Windows)

Common node trust policy:

cat > eks-node-role-trust-policy.json <<EOF
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Principal": {
        "Service": [
          "ec2.amazonaws.com"
        ]
      }
    }
  ]
}
EOF

Create Linux node IAM role:

aws iam create-role \
  --role-name $LINUX_NODE_ROLE_NAME \
  --assume-role-policy-document file://eks-node-role-trust-policy.json

aws iam attach-role-policy \
  --role-name $LINUX_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy \
  --role-name $LINUX_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly

aws iam attach-role-policy \
  --role-name $LINUX_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

Create Windows node IAM role (same policies as the Linux role; the CNI policy is still attached):

aws iam create-role \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --assume-role-policy-document file://eks-node-role-trust-policy.json

aws iam attach-role-policy \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly

aws iam attach-role-policy \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

Grab the ARNs for all three roles:

export CLUSTER_ROLE_ARN=$(aws iam get-role \
  --role-name $CLUSTER_ROLE_NAME \
  --query "Role.Arn" \
  --output text)

export LINUX_NODE_ROLE_ARN=$(aws iam get-role \
  --role-name $LINUX_NODE_ROLE_NAME \
  --query "Role.Arn" \
  --output text)

export WINDOWS_NODE_ROLE_ARN=$(aws iam get-role \
  --role-name $WINDOWS_NODE_ROLE_NAME \
  --query "Role.Arn" \
  --output text)
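
Each of these should resolve to an ARN of the form arn:aws:iam::<account-id>:role/<role-name>; a quick echo confirms all three are set:

echo "Cluster role ARN:      $CLUSTER_ROLE_ARN"
echo "Linux node role ARN:   $LINUX_NODE_ROLE_ARN"
echo "Windows node role ARN: $WINDOWS_NODE_ROLE_ARN"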

4. Create the EKS cluster (control plane only)

We let EKS choose the latest supported Kubernetes version; Windows nodes are supported on every Kubernetes version that EKS currently offers, as long as you use Windows Server 2019 or 2022 AMIs.

aws eks create-cluster \
  --region $AWS_REGION \
  --name $CLUSTER_NAME \
  --role-arn $CLUSTER_ROLE_ARN \
  --resources-vpc-config subnetIds=$CLUSTER_SUBNET_IDS,endpointPublicAccess=true

Wait until it’s active:

aws eks wait cluster-active \
  --region $AWS_REGION \
  --name $CLUSTER_NAME
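
Control plane creation typically takes on the order of ten minutes. If you want to double-check before continuing, describe the cluster and confirm its status is ACTIVE:

aws eks describe-cluster \
  --region $AWS_REGION \
  --name $CLUSTER_NAME \
  --query "cluster.{status:status,version:version,endpoint:endpoint}" \
  --output table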

5. Hook up kubectl

Configure kubeconfig for the cluster:

aws eks update-kubeconfig \
  --region $AWS_REGION \
  --name $CLUSTER_NAME

Quick sanity check:

kubectl get svc

You should see at least the kubernetes service.


6. Enable Windows support in the cluster

Follow the Windows support doc steps: attach AmazonEKSVPCResourceController (already done in step 3.1) and enable Windows IPAM via the amazon-vpc-cni ConfigMap.

6.1 Ensure VPC Resource Controller policy is attached (already, but verify)

aws iam list-attached-role-policies \
  --role-name $CLUSTER_ROLE_NAME

You should see AmazonEKSClusterPolicy and AmazonEKSVPCResourceController in the output.
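
If you prefer a terser check, the same call with a JMESPath query prints just the policy names:

aws iam list-attached-role-policies \
  --role-name $CLUSTER_ROLE_NAME \
  --query "AttachedPolicies[].PolicyName" \
  --output text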

6.2 Enable Windows IPAM in Amazon VPC CNI

cat > vpc-resource-controller-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: amazon-vpc-cni
  namespace: kube-system
data:
  enable-windows-ipam: "true"
EOF

kubectl apply -f vpc-resource-controller-configmap.yaml

This tells the VPC resource controller to assign IP addresses to pods scheduled onto Windows nodes (the aws-node CNI DaemonSet itself does not run on Windows).
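
To confirm the setting landed, you can read the ConfigMap back and look for enable-windows-ipam: "true" in the data section:

kubectl get configmap amazon-vpc-cni -n kube-system -o yaml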


7. Configure aws-auth ConfigMap (Linux + Windows node roles)

We’ll map:

  • Linux node role → the standard node groups (system:bootstrappers, system:nodes)
  • Windows node role → the same groups plus eks:kube-proxy-windows (required for DNS resolution on Windows nodes)
  • The IAM identity you’re using now → Kubernetes system:masters (cluster admin)

Get your AWS account & caller ARNs:

export ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export CALLER_ARN=$(aws sts get-caller-identity --query "Arn" --output text)

Create aws-auth-configmap.yaml with the correct ARNs baked in:

cat > aws-auth-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${LINUX_NODE_ROLE_ARN}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: ${WINDOWS_NODE_ROLE_ARN}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
        - eks:kube-proxy-windows
  mapUsers: |
    - userarn: ${CALLER_ARN}
      username: admin
      groups:
        - system:masters
EOF
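
Because the heredoc expands the shell variables as the file is written, it's worth a quick grep to confirm real ARNs (not empty strings) were baked in before applying:

grep -E 'rolearn|userarn' aws-auth-configmap.yaml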

kubectl apply -f aws-auth-configmap.yaml

You can inspect it if you like:

kubectl get configmap aws-auth -n kube-system -o yaml

You should see the eks:kube-proxy-windows group under the Windows role entry, as in the docs.


8. Create the Linux managed node group

We need Linux nodes for CoreDNS and other system pods; Windows-only clusters are not supported.

aws eks create-nodegroup \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name $LINUX_NODEGROUP_NAME \
  --node-role $LINUX_NODE_ROLE_ARN \
  --subnets $PRIVATE_SUBNET_1 $PRIVATE_SUBNET_2 \
  --scaling-config minSize=1,maxSize=3,desiredSize=$LINUX_DESIRED_SIZE \
  --instance-types $LINUX_INSTANCE_TYPE \
  --ami-type AL2023_x86_64_STANDARD \
  --disk-size 20 \
  --capacity-type ON_DEMAND

Wait for it to become active:

aws eks wait nodegroup-active \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name $LINUX_NODEGROUP_NAME

Check that nodes joined:

kubectl get nodes -o wide

You should see some kubernetes.io/os=linux nodes.
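
If you want to narrow the output to just the Linux nodes, the same label selector approach used later for Windows works here too:

kubectl get nodes -o wide \
  --selector=kubernetes.io/os=linux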


9. Create the Windows managed node group (2 nodes)

Now the fun part: a Windows managed node group using the EKS-optimized Windows Server 2022 Core AMI (AMI type WINDOWS_CORE_2022_x86_64).

aws eks create-nodegroup \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name $WINDOWS_NODEGROUP_NAME \
  --node-role $WINDOWS_NODE_ROLE_ARN \
  --subnets $PRIVATE_SUBNET_1 $PRIVATE_SUBNET_2 \
  --scaling-config minSize=$WINDOWS_DESIRED_SIZE,maxSize=4,desiredSize=$WINDOWS_DESIRED_SIZE \
  --instance-types $WINDOWS_INSTANCE_TYPE \
  --ami-type WINDOWS_CORE_2022_x86_64 \
  --disk-size 80 \
  --capacity-type ON_DEMAND

Wait for it to become active:

aws eks wait nodegroup-active \
  --region $AWS_REGION \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name $WINDOWS_NODEGROUP_NAME

Verify that Windows nodes are registered:

kubectl get nodes -o wide

Filter just Windows nodes:

kubectl get nodes -o wide \
  --selector=kubernetes.io/os=windows

You should see 2 nodes there.
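
Windows nodes can take a few extra minutes to register and become Ready after the node group reports active. If you'd rather block until they are schedulable, kubectl wait does the job (a sketch; adjust the timeout to taste):

kubectl wait node \
  --selector=kubernetes.io/os=windows \
  --for=condition=Ready \
  --timeout=15m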


10. (Optional) Test with a Windows Pod

Windows pods must use a nodeSelector so they only land on Windows nodes.

cat > windows-iis-demo.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-iis-demo
  labels:
    app: windows-iis-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: windows-iis-demo
  template:
    metadata:
      labels:
        app: windows-iis-demo
    spec:
      nodeSelector:
        kubernetes.io/os: windows
        kubernetes.io/arch: amd64
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: windows-iis-demo-svc
spec:
  type: LoadBalancer
  selector:
    app: windows-iis-demo
  ports:
  - port: 80
    targetPort: 80
EOF

kubectl apply -f windows-iis-demo.yaml

Then:

kubectl get pods -o wide
kubectl get svc windows-iis-demo-svc

Once the service has an external hostname/IP, you can hit it and confirm Windows containers are actually running on your Windows nodes.
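
One way to do that from the shell is to pull the load balancer hostname out of the Service and send a request to it (a sketch; the hostname can take a few minutes to start answering, and the first pull of the Windows Server Core IIS image is large, so allow time for the pod to become Ready):

export IIS_LB_HOSTNAME=$(kubectl get svc windows-iis-demo-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

curl -I "http://$IIS_LB_HOSTNAME"

A successful response should show an HTTP 200 status and a Server: Microsoft-IIS header.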


11. Next Steps

You are now ready to install Cloudmersive Private Cloud into this cluster.
