The NDB operator brings automated and simplified database administration, provisioning, and life-cycle management to Kubernetes.
- Access to an NDB Server.
- A Kubernetes cluster to run against, which should have network connectivity to the NDB server. The operator uses the current context in your kubeconfig file, i.e. whatever cluster kubectl cluster-info shows (see the quick check after this list).
- The operator-sdk installed.
- A clone of the source code (this repository).
- Cert-manager (only when running on non-OpenShift clusters). Follow the instructions here.
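A quick way to confirm which cluster the operator will target and that the NDB server is reachable (a sketch; replace the NDB IP with your own, and note that an authentication error from the API still proves reachability):
kubectl config current-context           # cluster the operator will use
kubectl cluster-info                     # confirms connectivity to the Kubernetes API server
curl -k https://<NDB IP>:8443/era/v0.9   # basic reachability check for the NDB API; -k skips certificate verification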
With the pre-requisites completed, the NDB Operator can be deployed in one of the following ways:
Runs the controller outside the Kubernetes cluster as a process, but installs the CRDs, services and RBAC entities within the Kubernetes cluster. Generally used during development (without running webhooks):
make install run
Runs the controller as a pod and installs the CRDs, services and RBAC entities within the Kubernetes cluster. Used to run the operator from the container image defined in the Makefile. Make sure that cert-manager is installed if not using OpenShift.
make deploy
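To confirm the deployment came up, check the controller pod. This is a sketch that assumes the default kubebuilder layout (namespace ndb-operator-system and the control-plane=controller-manager label); adjust if the Makefile overrides these:
kubectl get pods -n ndb-operator-system                                  # controller pod should be Running
kubectl logs -n ndb-operator-system -l control-plane=controller-manager -f   # follow the controller logs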
The Helm charts for the NDB Operator project are available on artifacthub.io and can be installed by following the instructions here.
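A minimal sketch of the Helm route; the repository URL, repo name and namespace below are placeholders, so use the values listed on the chart's artifacthub.io page:
helm repo add <repo-name> <helm-repo-url>     # repo URL as published on artifacthub.io
helm repo update
helm install ndb-operator <repo-name>/ndb-operator \
  --namespace ndb-operator --create-namespace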
To deploy the operator from this repository on an OpenShift cluster, create a bundle and then install the operator via the operator-sdk.
# Export these environment variables to override the variables set in the Makefile
export DOCKER_USERNAME=dockerhub-username
export VERSION=x.y.z
export IMG=docker.io/$DOCKER_USERNAME/ndb-operator:v$VERSION
export BUNDLE_IMG=docker.io/$DOCKER_USERNAME/ndb-operator-bundle:v$VERSION
# Build and push the container image to the container registry
make docker-build docker-push
# Build the bundle (follow the prompts for input), then build and push the bundle image to the container registry
make bundle bundle-build bundle-push
# Install the operator (run on the OpenShift cluster)
operator-sdk run bundle $BUNDLE_IMG
NOTE:
The container and bundle image creation steps can be skipped if existing images are present in the container registry.
Define the secrets that will be used by the NDBServer and Database resources:
apiVersion: v1
kind: Secret
metadata:
  name: ndb-secret-name
type: Opaque
stringData:
  username: username-for-ndb-server
  password: password-for-ndb-server
  ca_certificate: |
    -----BEGIN CERTIFICATE-----
    CA CERTIFICATE (ca_certificate is optional)
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Secret
metadata:
  name: db-instance-secret-name
type: Opaque
stringData:
  password: password-for-the-database-instance
  ssh_public_key: SSH-PUBLIC-KEY
Create the secrets:
kubectl apply -f <path/to/secrets-manifest.yaml>
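To confirm the secrets exist (names here match the sample manifest above):
kubectl get secret ndb-secret-name db-instance-secret-name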
Define the NDBServer resource that describes how to reach your NDB server:
apiVersion: ndb.nutanix.com/v1alpha1
kind: NDBServer
metadata:
  labels:
    app.kubernetes.io/name: ndbserver
    app.kubernetes.io/instance: ndbserver
    app.kubernetes.io/part-of: ndb-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: ndb-operator
  name: ndb
spec:
  # Name of the secret created earlier that holds the credentials for NDB (username, password and ca_certificate)
  credentialSecret: ndb-secret-name
  # NDB Server's API URL
  server: https://[NDB IP]:8443/era/v0.9
  # Set to true to skip SSL certificate validation; should be false if ca_certificate is provided in the credential secret
  skipCertificateVerification: true
Create the NDBServer resource using:
kubectl apply -f <path/to/NDBServer-manifest.yaml>
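To check that the operator can reach NDB, inspect the resource status (a sketch; assumes the CRD registers ndbserver as the singular resource name):
kubectl get ndbserver ndb
kubectl describe ndbserver ndb   # status fields are populated by the operator once it connects to NDB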
Create a Database Resource. A database can either be provisioned or cloned on NDB based on the inputs specified in the database manifest.
Manifest to provision a new database instance (isClone: false):
apiVersion: ndb.nutanix.com/v1alpha1
kind: Database
metadata:
  # This name will be used within the Kubernetes cluster
  name: db
spec:
  # Name of the NDBServer resource created earlier
  ndbRef: ndb
  isClone: false
  # Details of the database instance to be provisioned
  databaseInstance:
    # Cluster id of the cluster where the database has to be provisioned
    # Can be fetched from the GET /clusters endpoint
    clusterId: "Nutanix Cluster Id"
    # The database instance name on NDB
    name: "Database-Instance-Name"
    # The description of the database instance
    description: Database Description
    # Names of the databases on that instance
    databaseNames:
      - database_one
      - database_two
      - database_three
    # Name of the credentials secret for the database instance
    # data: password, ssh_public_key
    credentialSecret: db-instance-secret-name
    size: 10
    timezone: "UTC"
    type: postgres
    # You can specify any (or none) of these types of profiles: compute, software, network, dbParam
    # If not specified, the corresponding Out-of-Box (OOB) profile will be used wherever applicable
    # Name is case-sensitive. ID is the UUID of the profile. The profile should be in the "READY" state
    # "id" & "name" are optional. If none is provided, OOB may be resolved to any profile of that type
    profiles:
      compute:
        id: ""
        name: ""
      # A software profile is a mandatory input for closed-source engines: SQL Server & Oracle
      software:
        name: ""
        id: ""
      network:
        id: ""
        name: ""
      dbParam:
        name: ""
        id: ""
      # Only applicable for MSSQL databases
      dbParamInstance:
        name: ""
        id: ""
    timeMachine:                      # Optional block; if removed, the SLA defaults to NONE
      sla: "NAME OF THE SLA"
      dailySnapshotTime: "12:34:56"   # Time for the daily snapshot in hh:mm:ss format
      snapshotsPerDay: 4              # Number of snapshots per day
      logCatchUpFrequency: 90         # Frequency (in minutes)
      weeklySnapshotDay: "WEDNESDAY"  # Day of the week for the weekly snapshot
      monthlySnapshotDay: 24          # Day of the month for the monthly snapshot
      quarterlySnapshotMonth: "Jan"   # Start month of the quarterly snapshot
    additionalArguments:              # Optional block; additional arguments that are unique to database engines
      listener_port: "8080"
Manifest to clone from an existing database (isClone: true):
apiVersion: ndb.nutanix.com/v1alpha1
kind: Database
metadata:
  # This name will be used within the Kubernetes cluster
  name: db
spec:
  # Name of the NDBServer resource created earlier
  ndbRef: ndb
  isClone: true
  # Details of the clone to be created
  clone:
    # Type of the database to be cloned
    type: postgres
    # The clone instance name on NDB
    name: "Clone-Instance-Name"
    # The description of the clone instance
    description: Database Description
    # Cluster id of the cluster where the clone has to be created
    # Can be fetched from the GET /clusters endpoint
    clusterId: "Nutanix Cluster Id"
    # You can specify any (or none) of these types of profiles: compute, software, network, dbParam
    # If not specified, the corresponding Out-of-Box (OOB) profile will be used wherever applicable
    # Name is case-sensitive. ID is the UUID of the profile. The profile should be in the "READY" state
    # "id" & "name" are optional. If none is provided, OOB may be resolved to any profile of that type
    profiles:
      compute:
        id: ""
        name: ""
      # A software profile is a mandatory input for closed-source engines: SQL Server & Oracle
      software:
        name: ""
        id: ""
      network:
        id: ""
        name: ""
      dbParam:
        name: ""
        id: ""
      # Only applicable for MSSQL databases
      dbParamInstance:
        name: ""
        id: ""
    # Name of the credentials secret for the clone instance
    # data: password, ssh_public_key
    credentialSecret: clone-instance-secret-name
    timezone: "UTC"
    # ID of the database to clone from; can be fetched from the NDB REST API Explorer
    sourceDatabaseId: source-database-id
    # ID of the snapshot to clone from; can be fetched from the NDB REST API Explorer
    snapshotId: snapshot-id
    additionalArguments:              # Optional block; additional arguments that are unique to database engines
      expireInDays: 3
Create the Database resource:
kubectl apply -f <path/to/database-manifest.yaml>
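Provisioning or cloning can take a while; you can watch the resource until the operator reports it ready. A sketch using the fully qualified resource name, assuming the CRD registers databases as the plural under the ndb.nutanix.com group:
kubectl get databases.ndb.nutanix.com db -w        # watch status updates from the operator
kubectl describe databases.ndb.nutanix.com db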
Below are the various optional additionalArguments you can specify, along with examples of their corresponding values. Arguments that have defaults are indicated.
Provisioning Additional Arguments:
# Postgres
additionalArguments:
  listener_port: "1111"                           # Default: "5432"

# MySQL
additionalArguments:
  listener_port: "1111"                           # Default: "3306"

# MongoDB
additionalArguments:
  listener_port: "1111"                           # Default: "27017"
  log_size: "150"                                 # Default: "100"
  journal_size: "150"                             # Default: "100"

# MSSQL
additionalArguments:
  sql_user_name: "mazin"                          # Default: "sa"
  authentication_mode: "mixed"                    # Default: "windows". Options are "windows" or "mixed". Must also specify sql_user_password
  server_collation: "<server-collation>"          # Default: "SQL_Latin1_General_CP1_CI_AS"
  database_collation: "<server-collation>"        # Default: "SQL_Latin1_General_CP1_CI_AS"
  dbParameterProfileIdInstance: "<id-instance>"   # Default: Fetched from profile
  vm_dbserver_admin_password: "<admin-password>"  # Default: Fetched from database secret
  sql_user_password: "<sql-user-password>"        # NO Default. Must specify authentication_mode as "mixed"
  windows_domain_profile_id: <domain-profile-id>  # NO Default. Must specify vm_db_server_user
  vm_db_server_user: <vm-db-server-user>          # NO Default. Must specify windows_domain_profile_id
  vm_win_license_key: <licenseKey>                # NO Default
Cloning Additional Arguments:
MSSQL:
- windows_domain_profile_id
- era_worker_service_user
- sql_service_startup_account
- vm_win_license_key
- target_mountpoints_location
- expireInDays
- expiryDateTimezone
- deleteDatabase
- refreshInDays
- refreshTime
- refreshDateTimezone

MongoDB:
- expireInDays
- expiryDateTimezone
- deleteDatabase
- refreshInDays
- refreshTime
- refreshDateTimezone

Postgres:
- expireInDays
- expiryDateTimezone
- deleteDatabase
- refreshInDays
- refreshTime
- refreshDateTimezone

MySQL:
- expireInDays
- expiryDateTimezone
- deleteDatabase
- refreshInDays
- refreshTime
- refreshDateTimezone
To deregister the database and delete the VM run:
kubectl delete -f <path/to/database-manifest.yaml>
To delete the NDBServer resource run:
kubectl delete -f <path/to/NDBServer-manifest.yaml>
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:
make generate manifests
Install the CRDs into the Kubernetes cluster:
make install
Run your controller locally (this will run in the foreground, so switch to a new terminal if you want to leave it running):
make run
NOTES:
- You can also run this in one step by running:
make install run
- Run make --help for more information on all potential make targets.
More information can be found via the Kubebuilder Documentation.
Build and push your image to the location specified by IMG:
make docker-build docker-push IMG=<some-registry>/ndb-operator:tag
Deploy the controller to the cluster with the image specified by IMG:
make deploy IMG=<some-registry>/ndb-operator:tag
Uninstall the operator based on the installation/deployment environment:
# Stops the controller process
ctrl + c
# Uninstalls the CRDs
make uninstall
# Removes the deployment, crds, services and rbac entities
make undeploy
# NAME: name of the release created during installation
helm uninstall NAME
# Removes the operator installed from the bundle via the operator-sdk
operator-sdk cleanup ndb-operator --delete-all
This project aims to follow the Kubernetes Operator pattern. It uses controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
Once a custom resource of kind Database is created, the reconciler provisions (or clones) the database on NDB and creates a Service and an Endpoints object that maps to the IP address of the database instance. Application pods/deployments can use this service to interact with the databases provisioned on NDB through the native Kubernetes service.
Pods can specify an initContainer to wait for the service (and hence the database instance) to get created before they start up.
initContainers:
  - name: init-db
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup <<Database CR Name>>-svc.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for database service; sleep 2; done"]
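Once the database is ready, the Service and its Endpoints can be inspected directly (a sketch; replace <Database CR Name> with the metadata.name of your Database resource, e.g. db from the manifests above):
kubectl get svc,endpoints <Database CR Name>-svc   # Endpoints should show the IP address of the provisioned database instance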
See the contributing docs.
This code is developed in the open with input from the community through issues and PRs. A Nutanix engineering team serves as the maintainer. Documentation is available in the project repository. Issues and enhancement requests can be submitted in the Issues tab of this repository. Please search for and review the existing open issues before submitting a new issue.
Copyright 2022-2023 Nutanix, Inc.
The project is released under version 2.0 of the Apache license.