Creating an AWS EKS cluster and solving kubectl connection issues (due to older awscli version)



After I had just successfully created a Kubernetes cluster in AWS' managed Kubernetes service EKS (Amazon Elastic Kubernetes Service), my local kubectl was unable to communicate with the cluster. Time to find out why - but let's start at the beginning.

Creating a new EKS cluster

To create a new EKS cluster, it is recommended to use the eksctl command. This command can be downloaded from the GitHub releases:

ck@mint ~ $ ARCH=amd64
ck@mint ~ $ PLATFORM=$(uname -s)_$ARCH
ck@mint ~ $ curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"
ck@mint ~ $ curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check
eksctl_Linux_amd64.tar.gz: OK
ck@mint ~ $ tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz
ck@mint ~ $ sudo mv /tmp/eksctl /usr/local/bin

The command should now be available:

ck@mint ~ $ eksctl version
0.176.0

To be able to communicate with the AWS account, we need to use the AWS cli and export the necessary keys and tokens (see How to use AWS CLI with MFA for how to do this):

ck@mint ~ $ export AWS_ACCESS_KEY_ID="xxx"
ck@mint ~ $ export AWS_SECRET_ACCESS_KEY="xxx"
ck@mint ~ $ export AWS_SESSION_TOKEN="xxx="
ck@mint ~ $ aws ec2 describe-instances | head
{
    "Reservations": []
}
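
In case you need to obtain such temporary credentials yourself, a minimal sketch using aws sts could look like this (the MFA device ARN and the 6-digit token code are placeholders):

ck@mint ~ $ # request temporary credentials (valid for one hour) and copy the returned
ck@mint ~ $ # AccessKeyId, SecretAccessKey and SessionToken into the exports above
ck@mint ~ $ aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/ck --token-code 123456 --duration-seconds 3600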

The connection to the AWS account works (there are no EC2 instances created yet), so now we can create the EKS cluster. eksctl uses the exported AWS credentials to provision the cluster and spin up its worker nodes as EC2 instances.

ck@mint ~ $ eksctl create cluster --name ck-test-cluster --version 1.28 --region eu-central-1 --nodegroup-name ck-test-nodes --nodes 3 --nodes-min 1 --nodes-max 4 --vpc-cidr 10.111.0.0/16 --tags Creator=CK,Org=Infiniroot,Environment=TEST --managed
2024-04-25 14:16:57 [i]  eksctl version 0.176.0
2024-04-25 14:16:57 [i]  using region eu-central-1
2024-04-25 14:16:57 [i]  setting availability zones to [eu-central-1b eu-central-1a eu-central-1c]
2024-04-25 14:16:57 [i]  subnets for eu-central-1b - public:10.111.0.0/19 private:10.111.96.0/19
2024-04-25 14:16:57 [i]  subnets for eu-central-1a - public:10.111.32.0/19 private:10.111.128.0/19
2024-04-25 14:16:57 [i]  subnets for eu-central-1c - public:10.111.64.0/19 private:10.111.160.0/19
2024-04-25 14:16:57 [i]  nodegroup "ck-test-cluster" will use "" [AmazonLinux2/1.28]
2024-04-25 14:16:57 [i]  using Kubernetes version 1.28
[...]
2024-04-25 14:32:55 [i]  nodegroup "ck-test-nodes" has 3 node(s)
2024-04-25 14:32:55 [i]  node "ip-10-111-16-180.eu-central-1.compute.internal" is ready
2024-04-25 14:32:55 [i]  node "ip-10-111-42-190.eu-central-1.compute.internal" is ready
2024-04-25 14:32:55 [i]  node "ip-10-111-67-91.eu-central-1.compute.internal" is ready
2024-04-25 14:32:55 [y]  created 1 managed nodegroup(s) in cluster "ck-test-cluster"
2024-04-25 14:32:55 [x]  getting Kubernetes version on EKS cluster: error running `kubectl version`: exit status 1 (check 'kubectl version')
2024-04-25 14:32:55 [i]  cluster should be functional despite missing (or misconfigured) client binaries
2024-04-25 14:32:55 [y]  EKS cluster "ck-test-cluster" in "eu-central-1" region is ready

This created a new EKS cluster named "ck-test-cluster" with Kubernetes version 1.28 in the AWS region eu-central-1 (Frankfurt). A node group (the compute nodes) named ck-test-nodes with 3 nodes and a new VPC with the internal network range 10.111.0.0/16 were created as well. You can also set some tags to help identify or describe a cluster.

Once the cluster is built, it should show up in eksctl:

ck@mint ~ $ eksctl get cluster
NAME            REGION        EKSCTL CREATED
ck-test-cluster    eu-central-1    True
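
The node group and the cluster's tags can be inspected as well. A quick sketch (the exact output will vary with your eksctl and aws cli versions):

ck@mint ~ $ # list the managed node group that was created for this cluster
ck@mint ~ $ eksctl get nodegroup --cluster ck-test-cluster --region eu-central-1
ck@mint ~ $ # show the cluster status and tags straight from the EKS API
ck@mint ~ $ aws eks describe-cluster --name ck-test-cluster --region eu-central-1 --query "cluster.{Status: status, Tags: tags}"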

The next step is to use kubectl to communicate and deploy to the cluster.

Create a new standalone kubectl config file

The eksctl create command above should already have added the new cluster configuration to .kube/config. However, if you want a standalone config without any other existing clusters or contexts, you can re-create the config from scratch:

ck@mint ~ $ mv .kube/config{,.bkp}
ck@mint ~ $ aws eks update-kubeconfig --region eu-central-1 --name ck-test-cluster
Added new context arn:aws:eks:eu-central-1:xxx:cluster/ck-test-cluster to /home/ck/.kube/config
ck@mint ~ $ export KUBECONFIG=~/.kube/config
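
If you would rather leave an existing .kube/config untouched, the same command can write a dedicated file instead (a sketch; the file path is just an example):

ck@mint ~ $ # write the cluster context into its own kubeconfig file and point kubectl at it
ck@mint ~ $ aws eks update-kubeconfig --region eu-central-1 --name ck-test-cluster --kubeconfig ~/.kube/ck-test-cluster.config
ck@mint ~ $ export KUBECONFIG=~/.kube/ck-test-cluster.config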

Now with the kubectl config file written, we can start talking to the cluster. At least I thought so...

kubectl error: invalid apiVersion

My first check to verify that kubectl is able to communicate with the cluster is to list the nodes. But this already failed:

ck@mint ~ $ kubectl get nodes
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Taking a closer look at the config file, it seems the mentioned invalid apiVersion was added under the user's exec configuration:

ck@mint ~ $ cat .kube/config
apiVersion: v1
clusters:
[...]
users:
- name: arn:aws:eks:eu-central-1:xxx:cluster/ck-test-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-central-1
      - eks
      - get-token
      - --cluster-name
      - ck-test-cluster
      command: aws

My first guess was an incompatible kubectl version, but the installed kubectl turned out to be 1.29, which is just one minor version above the Kubernetes cluster (1.28) and should be compatible.
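
For reference, the locally installed client version can be checked like this (the server side can obviously not be queried while authentication is broken):

ck@mint ~ $ # show only the kubectl client version
ck@mint ~ $ kubectl version --client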

Maybe v1alpha1 is the issue, as the client.authentication.k8s.io/v1alpha1 ExecCredential API was removed in Kubernetes 1.24. Let's try v1beta1 instead.
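
The edit itself is a one-liner (a sketch; opening .kube/config in an editor works just as well):

ck@mint ~ $ # bump the exec plugin apiVersion from v1alpha1 to v1beta1 in the kubeconfig
ck@mint ~ $ sed -i 's|client.authentication.k8s.io/v1alpha1|client.authentication.k8s.io/v1beta1|' ~/.kube/config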

After editing .kube/config and setting apiVersion: client.authentication.k8s.io/v1beta1, another error showed up:

ck@mint ~ $ kubectl get node
E0425 15:00:42.587159   76590 memcache.go:265] couldn't get current server API group list: Get "https://xxx.gr7.eu-central-1.eks.amazonaws.com/api?timeout=32s": getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"

Although I just changed v1alpha1 to v1beta1 in the kubectl config, the new error again mentions this outdated version tag. Where is this coming from?!

As I couldn't rule out an error during the cluster creation, I decided to delete the EKS cluster at this point:

ck@mint ~ $ eksctl delete cluster --name ck-test-cluster

It's an outdated aws cli!

After some research I finally came across a very helpful hint in a Stack Overflow question:

> You need to update your AWS CLI 

This makes sense: kubectl complains about decoding the stdout of the credential plugin, i.e. the output of aws eks get-token, so the outdated apiVersion must be coming from the aws cli itself. And indeed, the currently installed aws cli is the distribution's awscli deb package in version 1.22.34 - which seems dated.

ck@mint ~ $ dpkg -l awscli
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-==========================================
ii  awscli         1.22.34-1    all          Universal Command Line Environment for AWS
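
That the outdated apiVersion really comes from the aws cli can be checked by looking at the ExecCredential the credential plugin prints for kubectl; with such an old awscli, the apiVersion field is expected to still show v1alpha1 (a quick check, not a fix):

ck@mint ~ $ # inspect the apiVersion in the output of the credential plugin kubectl invokes
ck@mint ~ $ aws eks get-token --cluster-name ck-test-cluster --region eu-central-1 | grep apiVersion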

The AWS CLI documentation mentions that version 2 is the most recent version:

> The AWS CLI version 2 is the most recent major version of the AWS CLI and supports all of the latest features. Some features introduced in version 2 are not backported to version 1 and you must upgrade to access those features.

On the other hand, looking at the PyPI project, the newest awscli available is currently version 1.32.91 from April 24th, 2024 (one day old).
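
For completeness: installing AWS CLI version 2 would have been an alternative. Its documented installation on Linux x86_64 looks roughly like this (not used here, just a sketch):

ck@mint ~ $ # download and run the official AWS CLI v2 installer
ck@mint ~ $ curl -sL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
ck@mint ~ $ unzip -q awscliv2.zip
ck@mint ~ $ sudo ./aws/install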

Install awscli from PyPI using pip3

Let's try the newer awscli version from PyPI first!

First remove the outdated system/deb package, then install the newest awscli version using pip3:

ck@mint ~ $ sudo apt-get remove awscli
ck@mint ~ $ sudo pip3 install awscli
Collecting awscli
[...]
Successfully installed awscli-1.32.91 botocore-1.34.91 docutils-0.16 rsa-4.7.2 s3transfer-0.10.1

Verify the new version is installed:

ck@mint ~ $ aws --version
aws-cli/1.32.91 Python/3.10.12 Linux/6.5.0-28-generic botocore/1.34.91
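
Future awscli updates can be pulled from PyPI the same way (a sketch; add --user if you prefer an installation under ~/.local instead of a system-wide one):

ck@mint ~ $ # upgrade to the latest awscli release on PyPI
ck@mint ~ $ sudo pip3 install --upgrade awscli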

Re-create the cluster and run a new kubectl test

As mentioned above, I had already deleted the cluster at this point. I'm actually not sure whether kubectl would have worked against the old cluster now, or whether the cluster itself had been created with outdated information by the old aws cli. Either way, creating a new EKS cluster from scratch was (most likely) the wise thing to do anyway.

ck@mint ~ $ eksctl create cluster --name ck-test-cluster --version 1.28 --region eu-central-1 --nodegroup-name ck-test-nodes --nodes 3 --nodes-min 1 --nodes-max 4 --vpc-cidr 10.111.0.0/16 --tags Creator=CK,Org=Infiniroot,Environment=TEST --managed
2024-04-25 14:16:57 [i]  eksctl version 0.176.0
[...]
2024-04-25 17:41:27 [i]  nodegroup "ck-test-cluster" has 3 node(s)
2024-04-25 17:41:27 [i]  node "ip-10-111-29-59.eu-central-1.compute.internal" is ready
2024-04-25 17:41:27 [i]  node "ip-10-111-40-114.eu-central-1.compute.internal" is ready
2024-04-25 17:41:27 [i]  node "ip-10-111-64-119.eu-central-1.compute.internal" is ready
2024-04-25 17:41:27 [y]  created 1 managed nodegroup(s) in cluster "ck-test-cluster"
2024-04-25 17:41:27 [i]  kubectl command should work with "/home/ckadm/.kube/config", try 'kubectl get nodes'
2024-04-25 17:41:27 [y]  EKS cluster "ck-test-cluster" in "eu-central-1" region is ready

ck@mint ~ $ eksctl get cluster
NAME            REGION        EKSCTL CREATED
ck-test-cluster    eu-central-1    True

Now create a new .kube/config file and then attempt the kubectl communication again:

ck@mint ~ $ rm .kube/config
ck@mint ~ $ aws eks update-kubeconfig --region eu-central-1 --name ck-test-cluster
Added new context arn:aws:eks:eu-central-1:xxx:cluster/ck-test-cluster to /home/ckadm/.kube/config

Let's check the contents of the kubectl config file:

ck@mint ~ $ cat /home/ckadm/.kube/config
apiVersion: v1
clusters:
[...]
users:
- name: arn:aws:eks:eu-central-1:xxx:cluster/ck-test-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - eu-central-1
      - eks
      - get-token
      - --cluster-name
      - ck-test-cluster
      - --output
      - json
      command: aws

Hey, look at this! The apiVersion is now v1beta1!

What about kubectl now? Final attempt:

ck@mint ~ $ kubectl get node
NAME                                             STATUS   ROLES    AGE     VERSION
ip-10-111-29-59.eu-central-1.compute.internal    Ready    <none>   2m47s   v1.28.5-eks-5e0fdde
ip-10-111-40-114.eu-central-1.compute.internal   Ready    <none>   2m46s   v1.28.5-eks-5e0fdde
ip-10-111-64-119.eu-central-1.compute.internal   Ready    <none>   2m47s   v1.28.5-eks-5e0fdde

Eureka! The cluster's worker nodes are showing up; kubectl is now working!
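
As a last smoke test, the control plane endpoint and the system pods can be checked as well (a quick sketch):

ck@mint ~ $ # verify the API endpoint kubectl talks to
ck@mint ~ $ kubectl cluster-info
ck@mint ~ $ # on EKS the kube-system pods (coredns, kube-proxy, aws-node) should all be Running
ck@mint ~ $ kubectl get pods -n kube-system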

