
[release-0.21] Update docs for latest EKS-A v0.21 and kubernetes v0.31 #8947

Merged
@@ -47,7 +47,7 @@ MGMT_CLUSTER_KUBECONFIG=${MGMT_CLUSTER}/${MGMT_CLUSTER}-eks-a-cluster.kubeconfig
BACKUP_DIRECTORY=backup-mgmt

# Substitute the EKS Anywhere release version with whatever CLI version you are using
-EKSA_RELEASE_VERSION=v0.17.3
+EKSA_RELEASE_VERSION=v0.21.0
BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
CLI_TOOLS_IMAGE=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksa.cliTools.uri")

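If the release lookup fails (for example, a mistyped version), both variables resolve to empty strings. A quick sanity check before using them; this is a minimal sketch, not part of the upstream page:

```bash
# Verify the lookup succeeded before running any backup commands
if [ -z "$BUNDLE_MANIFEST_URL" ] || [ -z "$CLI_TOOLS_IMAGE" ]; then
  echo "Could not resolve EKS Anywhere release $EKSA_RELEASE_VERSION" >&2
  exit 1
fi
echo "Bundle manifest: $BUNDLE_MANIFEST_URL"
echo "CLI tools image: $CLI_TOOLS_IMAGE"
```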
@@ -301,7 +301,7 @@ systemctl restart kubelet

```bash
# Substitute the EKS Anywhere release version with whatever CLI version you are using
-EKSA_RELEASE_VERSION=v0.18.3
+EKSA_RELEASE_VERSION=v0.21.0
BUNDLE_MANIFEST_URL=$(curl -s https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE_VERSION\").bundleManifestUrl")
CLI_TOOLS_IMAGE=$(curl -s $BUNDLE_MANIFEST_URL | yq ".spec.versionsBundles[0].eksa.cliTools.uri")
4 changes: 2 additions & 2 deletions docs/content/en/docs/clustermgmt/cluster-flux.md
@@ -353,7 +353,7 @@ Follow these steps if you want to use your initial cluster to create and manage
### Upgrade cluster using Gitops

1. To upgrade the cluster using Gitops, modify the workload cluster yaml file with the desired changes.
-As an example, to upgrade a cluster with version 1.24 to 1.25 you would change your spec:
+As an example, to upgrade a cluster with version 1.30 to 1.31 you would change your spec:
```bash
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
Expand All @@ -369,7 +369,7 @@ Follow these steps if you want to use your initial cluster to create and manage
kind: VSphereMachineConfig
name: dev
...
kubernetesVersion: "1.25"
kubernetesVersion: "1.31"
...
```

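With GitOps, the edit above takes effect once it lands in the repository Flux tracks; roughly, and with an illustrative file path:

```bash
# Commit the kubernetesVersion bump; Flux reconciles it into the workload cluster
git add clusters/dev/eksa-cluster.yaml   # path is illustrative
git commit -m "Upgrade dev cluster to Kubernetes 1.31"
git push
```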
4 changes: 2 additions & 2 deletions docs/content/en/docs/clustermgmt/cluster-terraform.md
@@ -188,7 +188,7 @@ Follow these steps if you want to use your initial cluster to create and manage
### Upgrade cluster using Terraform

1. To upgrade a workload cluster using Terraform, modify the desired fields in the Terraform resource file.
-As an example, to upgrade a cluster with version 1.24 to 1.25 you would modify your Terraform cluster resource:
+As an example, to upgrade a cluster with version 1.30 to 1.31 you would modify your Terraform cluster resource:
```bash
manifest = {
"apiVersion" = "anywhere.eks.amazonaws.com/v1alpha1"
@@ -198,7 +198,7 @@ Follow these steps if you want to use your initial cluster to create and manage
"namespace" = "default"
}
"spec" = {
"kubernetesVersion" = "1.25"
"kubernetesVersion" = "1.31"
...
...
}
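# Rough follow-up (not part of the upstream page): once the resource above is
# updated, the change rolls out via the standard Terraform workflow, e.g.
#   terraform plan    # preview the node rollout the version bump will trigger
#   terraform apply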
@@ -22,7 +22,7 @@ description: >
- It is highly recommended to run the `eksctl anywhere upgrade cluster` command with the `--no-timeouts` option when the command is executed through automation. This prevents the CLI from timing out and enables cluster operators to fix issues preventing the upgrade from completing while the process is running.
- In EKS Anywhere version `v0.15.0`, we introduced the EKS Anywhere cluster lifecycle controller that runs on management clusters and manages workload clusters. The EKS Anywhere lifecycle controller enables you to use Kubernetes API-compatible clients such as `kubectl`, GitOps, or Terraform for managing workload clusters. In this EKS Anywhere version, the EKS Anywhere cluster lifecycle controller rolls out new nodes in workload clusters when management clusters are upgraded. In EKS Anywhere version `v0.16.0`, this behavior was changed such that management clusters can be upgraded separately from workload clusters.
- When running workload cluster upgrades after upgrading a management cluster, a machine rollout may be triggered on workload clusters during the workload cluster upgrade, even if the changes to the workload cluster spec didn't require one (for example scaling down a worker node group).
-- Starting with EKS Anywhere `v0.18.0`, the `osImageURL` must include the Kubernetes minor version (`Cluster.Spec.KubernetesVersion` or `Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion` in the cluster spec). For example, if the Kubernetes version is 1.29, the `osImageURL` must include 1.29, 1_29, 1-29 or 129. If you are upgrading Kubernetes versions, you must have a new OS image with your target Kubernetes version components.
+- Starting with EKS Anywhere `v0.18.0`, the `osImageURL` must include the Kubernetes minor version (`Cluster.Spec.KubernetesVersion` or `Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion` in the cluster spec). For example, if the Kubernetes version is 1.31, the `osImageURL` must include 1.31, 1_31, 1-31 or 131. If you are upgrading Kubernetes versions, you must have a new OS image with your target Kubernetes version components.
- If you are running EKS Anywhere in an airgapped environment, you must download the new artifacts and images prior to initiating the upgrade. Reference the [Airgapped Upgrades page]({{< relref "./airgapped-upgrades" >}}) for more information.

### Upgrade Version Skew
@@ -88,7 +88,7 @@ If you don't have any available hardware that match this requirement in the clus

To perform a cluster upgrade you can modify your cluster specification `kubernetesVersion` field to the desired version.

-As an example, to upgrade a cluster with version 1.24 to 1.25 you would change your spec as follows:
+As an example, to upgrade a cluster with version 1.30 to 1.31 you would change your spec as follows:

```
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
Expand All @@ -104,7 +104,7 @@ spec:
kind: TinkerbellMachineConfig
name: dev
...
kubernetesVersion: "1.25"
kubernetesVersion: "1.31"
...
```

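Applying the edited spec uses the upgrade command covered on this page; a usage sketch, with `--no-timeouts` following the automation recommendation above:

```bash
# Roll out the new Kubernetes version defined in the modified spec
eksctl anywhere upgrade cluster -f cluster.yaml --no-timeouts
```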
@@ -249,7 +249,7 @@ spec:
datacenterRef:
kind: TinkerbellDatacenterConfig
name: my-cluster-name
kubernetesVersion: "1.25"
kubernetesVersion: "1.31"
managementCluster:
name: my-cluster-name
workerNodeGroupConfigurations:
@@ -35,7 +35,7 @@ Each EKS Anywhere version includes all components required to create and manage
- Management components (Cluster API controller, EKS Anywhere controller, provider-specific controllers)
- Cluster components (Kubernetes, Cilium)

-You can find details about each EKS Anywhere release in the EKS Anywhere release manifest. The release manifest contains references to the corresponding bundle manifest for each EKS Anywhere version. Within the bundle manifest, you will find the components included in a specific EKS Anywhere version. The images running in your deployment use the same URI values specified in the bundle manifest for that component. For example, see the [bundle manifest](https://anywhere-assets.eks.amazonaws.com/releases/bundles/59/manifest.yaml) for EKS Anywhere version `v0.18.7`.
+You can find details about each EKS Anywhere release in the EKS Anywhere release manifest. The release manifest contains references to the corresponding bundle manifest for each EKS Anywhere version. Within the bundle manifest, you will find the components included in a specific EKS Anywhere version. The images running in your deployment use the same URI values specified in the bundle manifest for that component. For example, see the [bundle manifest](https://anywhere-assets.eks.amazonaws.com/releases/bundles/81/manifest.yaml) for EKS Anywhere version `v0.21.0`.

To upgrade the EKS Anywhere version of a management or standalone cluster, you install a new version of the `eksctl anywhere` CLI, change the `eksaVersion` field in your management or standalone cluster's spec yaml, and then run the `eksctl anywhere upgrade management-components -f cluster.yaml` (as of EKS Anywhere version v0.19) or `eksctl anywhere upgrade cluster -f cluster.yaml` command. The `eksctl anywhere upgrade cluster` command upgrades both management and cluster components.

@@ -6,6 +6,6 @@ There are a few dimensions of versioning to consider in your EKS Anywhere deploy

- **Management clusters to workload clusters**: Management clusters can be at most 1 EKS Anywhere minor version greater than the EKS Anywhere version of workload clusters. Workload clusters cannot have an EKS Anywhere version greater than management clusters.
- **Management components to cluster components**: Management components can be at most 1 EKS Anywhere minor version greater than the EKS Anywhere version of cluster components.
-- **EKS Anywhere version upgrades**: Skipping EKS Anywhere minor versions during upgrade is not supported (`v0.17.x` to `v0.19.x`). We recommend you upgrade one EKS Anywhere minor version at a time (`v0.17.x` to `v0.18.x` to `v0.19.x`).
-- **Kubernetes version upgrades**: Skipping Kubernetes minor versions during upgrade is not supported (`v1.26.x` to `v1.28.x`). You must upgrade one Kubernetes minor version at a time (`v1.26.x` to `v1.27.x` to `v1.28.x`).
+- **EKS Anywhere version upgrades**: Skipping EKS Anywhere minor versions during upgrade is not supported (`v0.19.x` to `v0.21.x`). We recommend you upgrade one EKS Anywhere minor version at a time (`v0.19.x` to `v0.20.x` to `v0.21.x`).
+- **Kubernetes version upgrades**: Skipping Kubernetes minor versions during upgrade is not supported (`v1.29.x` to `v1.31.x`). You must upgrade one Kubernetes minor version at a time (`v1.29.x` to `v1.30.x` to `v1.31.x`).
- **Kubernetes control plane and worker nodes**: As of Kubernetes v1.28, worker nodes can be up to 3 minor versions lower than the Kubernetes control plane minor version. In earlier Kubernetes versions, worker nodes could be up to 2 minor versions lower than the Kubernetes control plane minor version.
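Since minor versions cannot be skipped, a two-minor-version jump is performed as two sequential upgrades. A minimal sketch, assuming a single-document cluster.yaml and the yq/eksctl tools used elsewhere in these docs:

```bash
# Step Kubernetes from 1.29 to 1.31 one minor version at a time
for v in 1.30 1.31; do
  yq -i ".spec.kubernetesVersion = \"$v\"" cluster.yaml
  eksctl anywhere upgrade cluster -f cluster.yaml --no-timeouts
done
```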
@@ -72,7 +72,7 @@ To the format output in json, add `-o json` to the end of the command line.

To perform a cluster upgrade you can modify your cluster specification `kubernetesVersion` field to the desired version.

-As an example, to upgrade a cluster with version 1.26 to 1.27 you would change your spec
+As an example, to upgrade a cluster with version 1.30 to 1.31 you would change your spec

```
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
Expand All @@ -88,7 +88,7 @@ spec:
kind: VSphereMachineConfig
name: dev
...
kubernetesVersion: "1.27"
kubernetesVersion: "1.31"
...
```

4 changes: 2 additions & 2 deletions docs/content/en/docs/clustermgmt/security/cluster-iam-auth.md
@@ -82,10 +82,10 @@ ${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-aws.kubeconfig
1. Ensure the IAM role/user ARN mapped in the cluster is configured on the local machine from which you are trying to access the cluster.
2. Install the `aws-iam-authenticator client` binary on the local machine.
* We recommend installing the binary referenced in the latest `release manifest` of the kubernetes version used when creating the cluster.
-* The below commands can be used to fetch the installation uri for clusters created with `1.27` kubernetes version and OS `linux`.
+* The below commands can be used to fetch the installation uri for clusters created with `1.31` kubernetes version and OS `linux`.
```bash
CLUSTER_NAME=my-cluster-name
-KUBERNETES_VERSION=1.27
+KUBERNETES_VERSION=1.31
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
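# Sketch of the remaining (collapsed) steps, assuming the fetch commands on the
# full page resolve the download uri into a hypothetical AUTHENTICATOR_URL variable:
#   curl -fsSL "$AUTHENTICATOR_URL" -o aws-iam-authenticator
#   chmod +x aws-iam-authenticator
#   sudo mv aws-iam-authenticator /usr/local/bin/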
2 changes: 1 addition & 1 deletion docs/content/en/docs/concepts/support-versions.md
@@ -56,7 +56,7 @@ Bottlerocket, Ubuntu, and Red Hat Enterprise Linux (RHEL) can be used as operati
|------------|------------------------------|---------------------------------|
| Ubuntu | 22.04 | 0.17 and above
| | 20.04 | 0.5 and above
-| Bottlerocket | 1.22.0 | 0.21
+| Bottlerocket | 1.26.1 | 0.21
| | 1.20.0 | 0.20
| | 1.19.1 | 0.19
| | 1.15.1 | 0.18
@@ -58,7 +58,7 @@ spec:
machineGroupRef:
kind: CloudStackMachineConfig
name: my-cluster-name-etcd
kubernetesVersion: "1.28"
kubernetesVersion: "1.31"
managementCluster:
name: my-cluster-name
workerNodeGroupConfigurations:
2 changes: 1 addition & 1 deletion docs/content/en/docs/getting-started/optional/etcd.md
@@ -61,7 +61,7 @@ spec:
machineGroupRef:
kind: VSphereMachineConfig
name: my-cluster-name-etcd
kubernetesVersion: "1.27"
kubernetesVersion: "1.31"
workerNodeGroupConfigurations:
- count: 1
machineGroupRef:
4 changes: 2 additions & 2 deletions docs/content/en/docs/packages/prereq.md
@@ -155,7 +155,7 @@ You can get a list of the available packages from the command line:
```bash
export CLUSTER_NAME=<your-cluster-name>
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-eksctl anywhere list packages --kube-version 1.27
+eksctl anywhere list packages --kube-version 1.31
```

Example command output:
@@ -181,5 +181,5 @@ The example shows how to install the `harbor` package from the [curated package

```bash
export CLUSTER_NAME=<your-cluster-name>
-eksctl anywhere generate package harbor --cluster ${CLUSTER_NAME} --kube-version 1.27 > harbor-spec.yaml
+eksctl anywhere generate package harbor --cluster ${CLUSTER_NAME} --kube-version 1.31 > harbor-spec.yaml
```
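The generated spec can then be reviewed and installed; a usage sketch based on the package commands in this guide:

```bash
# Install the harbor package from the generated (and optionally edited) spec
eksctl anywhere create packages -f harbor-spec.yaml
```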
5 changes: 4 additions & 1 deletion docs/content/en/docs/whatsnew/changelog.md
@@ -52,7 +52,7 @@ description: >
- GPU support for Nutanix provider ([#8745](https://github.com/aws/eks-anywhere/pull/8745))
- Support for worker nodes failure domains on Nutanix ([#8837](https://github.com/aws/eks-anywhere/pull/8837))

-### Changed
+### Upgraded
- Added EKS-D for 1-31:
- [`v1-31-eks-6`](https://distro.eks.amazonaws.com/releases/1-31/6/)
- Cert Manager: `v1.14.7` to `v1.15.3`
@@ -71,6 +71,9 @@ description: >
- Hook: `v0.8.1` to `v0.9.1`
- Troubleshoot: `v0.93.2` to `v0.107.4`

+### Changed
+- Use HookOS embedded images in Tinkerbell Templates by default ([#8708](https://github.com/aws/eks-anywhere/pull/8708) and [#3471](https://github.com/aws/eks-anywhere-build-tooling/pull/3471))

### Removed
- Support for Kubernetes v1.26

2 changes: 1 addition & 1 deletion docs/content/en/docs/workloadmgmt/gpu-sample-cluster.md
@@ -26,7 +26,7 @@ toc_hide: true
datacenterRef:
kind: TinkerbellDatacenterConfig
name: gpu-test
kubernetesVersion: "1.27"
kubernetesVersion: "1.31"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
2 changes: 1 addition & 1 deletion docs/content/en/docs/workloadmgmt/using-gpus.md
@@ -9,7 +9,7 @@ description: >

The [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html) allows GPUs to be exposed to applications in Kubernetes clusters much like CPUs. Instead of provisioning a special OS image for GPU nodes with the required drivers and dependencies, a standard OS image can be used for both CPU and GPU nodes. The NVIDIA GPU Operator can be used to provision the required software components for GPUs such as the NVIDIA drivers, Kubernetes device plugin for GPUs, and the NVIDIA Container Toolkit. See the [licensing section](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html#licenses-and-contributing) of the NVIDIA GPU Operator documentation for information on the NVIDIA End User License Agreements.

-In the example on this page, a single-node EKS Anywhere cluster on bare metal is used with an Ubuntu 20.04 image produced from image-builder without modifications and Kubernetes version 1.27.
+In the example on this page, a single-node EKS Anywhere cluster on bare metal is used with an Ubuntu 20.04 image produced from image-builder without modifications and Kubernetes version 1.31.

### 1. Configure an EKS Anywhere cluster spec and hardware inventory

16 changes: 8 additions & 8 deletions docs/data/version_support.yml
@@ -21,7 +21,7 @@
# receiving_patches: Whether or not the release is receiving patches.
eksa:
- version: '0.21'
-released: 2024-10-31
+released: 2024-10-30
kube_versions: ['1.31', '1.30', '1.29', '1.28', '1.27']
receiving_patches: true

@@ -121,31 +121,31 @@ eksa:
kube:
- version: '1.31'
releasedIn: '0.21'
-expectedEndOfLifeDate: 2025-10-23
+expectedEndOfLifeDate: 2025-12-31

- version: '1.30'
releasedIn: '0.20'
-expectedEndOfLifeDate: 2025-06-23
+expectedEndOfLifeDate: 2025-08-31

- version: '1.29'
releasedIn: '0.19'
-expectedEndOfLifeDate: 2025-03-23
+expectedEndOfLifeDate: 2025-04-30

- version: '1.28'
releasedIn: '0.18'
-expectedEndOfLifeDate: 2024-12-01
+expectedEndOfLifeDate: 2024-12-31

- version: '1.27'
releasedIn: '0.16'
-expectedEndOfLifeDate: 2024-08-01
+expectedEndOfLifeDate: 2024-08-31

- version: '1.26'
releasedIn: '0.15'
-expectedEndOfLifeDate: 2024-06-01
+expectedEndOfLifeDate: 2024-05-31

- version: '1.25'
releasedIn: '0.14'
-expectedEndOfLifeDate: 2024-05-01
+expectedEndOfLifeDate: 2024-03-31

- version: '1.24'
releasedIn: '0.12'
6 changes: 3 additions & 3 deletions docs/developer/manifests.md
@@ -21,8 +21,8 @@ Each CLI is built with a particular EKS-A semver in its metadata. This pins each
Dev releases are a bit special: we generate new ones all the time, very fast. For this reason, we don't use a simple major.minor.patch semver, but we include build metadata. In particular we use `v{major}.{minor}.{patch}-dev+build.{number}` with `number` being a monotonically increasing integer that is bumped every time a new dev release is built.

The version we use for the first part depends on the HEAD: `main` vs release branches:
-- For `main`, we use the next minor version to the latest tag available. For example, if the latest prod release is `v0.18.5`, the version used for dev releases will be `v0.19.0-dev+build.{number}`. This aligns with the fact that the code in `main` belongs to the next future prod release `v0.19.0`.
-- For `release-*` branches, we use the next patch version to the latest available tag for that minor version. For example, for `release-0.17`, if the latest latest prod release is for v0.17 is `v0.17.7`, dev releases will follow `v0.17.8-dev+build.{number}`.
+- For `main`, we use the next minor version to the latest tag available. For example, if the latest prod release is `v0.21.3`, the version used for dev releases will be `v0.22.0-dev+build.{number}`. This aligns with the fact that the code in `main` belongs to the next future prod release `v0.22.0`.
+- For `release-*` branches, we use the next patch version to the latest available tag for that minor version. For example, for `release-0.21`, if the latest prod release for v0.21 is `v0.21.5`, dev releases will follow `v0.21.6-dev+build.{number}`.

In order to avoid the dev Release manifest growing forever, we trim the included releases to a max size, dropping always the oldest one. Take this in mind if using a particular version locally. If you do it for too long, it might become unavailable. If it does, just rebuild your CLI.

@@ -32,6 +32,6 @@ When a CLI is built for dev E2E tests, it's given the latest available EKS-A dev
### Locally building the CLI
When writing and testing code for the CLI/Controller, most of the time we don't care about particular releases and we just want to use the latest available Bundles that contain the latest available set of components. This verifies that our changes are compatible with the current state of EKS-A dependencies.

-To avoid having to rebuild the CLI every time we want to refresh the pulled Bundles or even having to care about fetching the latest version, we introduced a special build metadata identifier `+latest`. This instructs the CLI to not look for an exact match with an EKS-A version, but select the newest one that matches our pre-release. For example: if the release manifest has two releases [`v0.19.0-dev+build.1234`, `v0.19.0-dev+build.1233`], then if the CLI has version `v0.19.0-dev+latest`, then the release `v0.19.0-dev+build.1234` will be selected.
+To avoid having to rebuild the CLI every time we want to refresh the pulled Bundles or even having to care about fetching the latest version, we introduced a special build metadata identifier `+latest`. This instructs the CLI to not look for an exact match with an EKS-A version, but select the newest one that matches our pre-release. For example: if the release manifest has two releases [`v0.22.0-dev+build.1234`, `v0.22.0-dev+build.1233`], then if the CLI has version `v0.22.0-dev+latest`, then the release `v0.22.0-dev+build.1234` will be selected.

This is the default behavior when building a CLI locally: the Makefile will calculate the appropriate major.minor.patch based on the current HEAD and its closest branch ancestor (either `main` or a `release-*` branch). If you wish to pin your local CLI to a particular version, pass the `DEV_GIT_VERSION` to the make target.
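For example, a sketch (the make target name here is a placeholder; use the CLI build target from the repository's Makefile):

```bash
# Pin a locally built CLI to an exact dev release instead of +latest
make build-cli DEV_GIT_VERSION=v0.21.6-dev+build.4321   # target name hypothetical
```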