PolarSPARC
Introduction to Terraform
Bhaskar S | 03/12/2023
Overview
Terraform is an open source Infrastructure as Code (IaC) tool that enables system administrators to provision and manage Enterprise infrastructure (public cloud or on-prem) in a consistent and predictable manner.
The idea behind Terraform is that system administrators can describe the Enterprise infrastructure configuration in the form of human-readable text, which can be version controlled and maintained just like code.
Terraform integrates with the various infrastructure services via Providers. Think of them as extensions (or plugins) to the core Terraform platform.
Providers are developed and distributed by partners and vendors via the Terraform Registry.
The Terraform infrastructure configuration code is stored in text files with the extension .tf.
Basics
In the following section, we will describe the commonly used elements of a Terraform configuration file, so one can get started quickly.
The following is the generic template for a Terraform configuration file:
variable "variable-name" {
  type        = variable-type
  description = "some meaningful description of the variable"
  default     = default-value
  sensitive   = false | true
}

provider "provider-name" {
  provider-arguments
}

resource "resource-type-1" "resource-name-1" {
  resource-arguments-1
}

resource "resource-type-2" "resource-name-2" {
  resource-arguments-2
}

resource "resource-type-3" "resource-name-3" {
  resource-arguments-3

  depends_on = [resource-type-1.resource-name-1, resource-type-2.resource-name-2]
}

output "output-name" {
  value       = output-value
  description = "some meaningful description of the output"
  sensitive   = false | true
}
The following are the definitions for the various Terraform elements from the above template:
variable :: refers to a user-supplied input value, which can be referenced in the Terraform configuration file as var.variable-name. There can be more than one input variable defined in the configuration file
variable-type :: can be a primitive type such as string, number, or bool OR a collection type like list or map
description :: a string that provides useful information to the user
default-value :: indicates the default value to use in case the value is not supplied by the user
sensitive :: if set to true, the value of the variable will NOT appear in either the logs or the terminal
provider :: indicates the provider plugin that will be used in the configuration file. As indicated earlier, the provider is the binary extension offered by the respective service owner. For example, for any of the resources to be provisioned in the Amazon cloud, use the aws provider. For any of the resources to be provisioned in the Microsoft cloud, use the azurerm provider, and so on.
There can be multiple providers referenced in the configuration file
provider-arguments :: defines the various parameters that are needed to initialize and configure the specific provider
resource :: indicates the specific resource on the provider that needs to be created or modified
resource-arguments :: defines the various parameters that are needed to initialize and configure the specific resource on the specific provider
depends_on :: indicates that the resource on which it is defined depends on the listed resources on the specific provider, meaning they need to be set up before this one
output :: displays the specified value in the logs or the terminal for the user
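To make the above template concrete, the following is a minimal, hypothetical configuration that exercises each of these elements using the null provider (the names greeting-text and print-greeting are purely illustrative and are not part of the demonstration that follows):

#
### Minimal illustrative example
#

variable "greeting-text" {
  type        = string
  description = "The text to echo using the local-exec provisioner"
  default     = "Hello from Terraform !!!"
  sensitive   = false
}

provider "null" {
}

resource "null_resource" "print-greeting" {
  provisioner "local-exec" {
    command = "echo '${var.greeting-text}'"
  }
}

output "greeting-used" {
  value       = var.greeting-text
  description = "The greeting text that was echoed"
  sensitive   = false
}

Notice how the input variable is referenced as var.greeting-text in both the resource and the output.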
Installation and Setup
We will perform the installation and setup on a VirtualBox VM running Ubuntu 22.04 LTS.
Also, the logged in username will be alice.
Open a Terminal window to perform the various steps.
To perform a system update and install the prerequisite software, execute the following command:
$ sudo apt update && sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
The following would be a typical trimmed output:
...[ SNIP ]...
ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
ca-certificates set to manually installed.
The following additional packages will be installed:
  python3-software-properties software-properties-gtk
The following NEW packages will be installed:
  apt-transport-https curl
The following packages will be upgraded:
  python3-software-properties software-properties-common software-properties-gtk
3 upgraded, 2 newly installed, 0 to remove and 14 not upgraded.
...[ SNIP ]...
To add the Docker package repository, execute the following commands:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
The following would be a typical output:
deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu jammy stable
To install docker, execute the following command:
$ sudo apt update && sudo apt install docker-ce -y
The following would be a typical trimmed output:
...[ SNIP ]...
Get:5 https://download.docker.com/linux/ubuntu jammy InRelease [48.9 kB]
Get:6 https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages [13.6 kB]
...[ SNIP ]...
To add the logged-in user alice to the group docker, execute the following command:
$ sudo usermod -aG docker ${USER}
Reboot the Ubuntu 22.04 LTS VM for the changes to take effect.
To verify the docker installation was successful, execute the following command:
$ docker info
The following would be a typical output:
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.16.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
  scan: Docker Scan (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-scan

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 23.0.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 2456e983eb9e37e47538f59ea18f2043c9a73640
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.19.0-32-generic
 Operating System: Ubuntu 22.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.832GiB
 Name: xubuntu-vm-1
 ID: 859dad55-839f-4a1a-90de-9212fab79df8
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
For the hands-on demonstration, we will setup a single node development cluster using the lightweight implementation of Kubernetes called Minikube.
To download and install the minikube binary, execute the following commands:
$ cd $HOME/Downloads
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install -o root -g root -m 0755 minikube-linux-amd64 /usr/local/bin/minikube
$ rm -f minikube*
To verify the version of the minikube binary, execute the following command:
$ minikube version
The following would be a typical output:
minikube version: v1.29.0
commit: ddac20b4b34a9c8c857fc602203b6ba2679794d3
To start a single node minikube cluster on the VM, execute the following command:
$ minikube start
The following would be a typical output:
minikube v1.29.0 on Ubuntu 22.04
Automatically selected the docker driver
Using Docker driver with root privileges
Starting control plane node minikube in cluster minikube
Pulling base image ...
Downloading Kubernetes v1.26.1 preload ...
    > gcr.io/k8s-minikube/kicbase...:  407.19 MiB / 407.19 MiB  100.00% 52.15 M
    > preloaded-images-k8s-v18-v1...:  397.05 MiB / 397.05 MiB  100.00% 6.52 Mi
Creating docker container (CPUs=2, Memory=2200MB) ...
Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
  * Generating certificates and keys ...
  * Booting up control plane ...
  * Configuring RBAC rules ...
Configuring bridge CNI (Container Networking Interface) ...
  * Using image gcr.io/k8s-minikube/storage-provisioner:v5
Verifying Kubernetes components...
Enabled addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
To verify the status of the minikube cluster, execute the following command:
$ minikube status
The following would be a typical output:
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
We need to create a storage mount point in minikube. To do that, login to the minikube single node cluster by executing the following command:
$ minikube ssh
The shell prompt will change to indicate we are in minikube environment and the following would be a typical output:
Last login: Thu Mar 11 20:37:43 2023 from 192.168.49.1
docker@minikube:~$
Execute the following commands in the minikube environment to create the mount point /pv-storage located in the root directory, and then exit:
docker@minikube:~$ sudo mkdir -p /pv-storage
docker@minikube:~$ exit
On the host VM, we will create a shared persistent directory called $HOME/Downloads/pv-storage that will be attached to the Kubernetes cluster (single node cluster) at the mount point directory called /pv-storage.
To create the shared persistent directory on the host VM, execute the following command:
$ mkdir -p $HOME/Downloads/pv-storage
Open a new Terminal window on the host VM and execute the following command to mount the shared persistent directory on the minikube cluster:
$ minikube mount $HOME/Downloads/pv-storage:/pv-storage
The following would be a typical output:
Mounting host path /home/alice/Downloads/pv-storage into VM as /pv-storage ...
  * Mount type:
  * User ID: docker
  * Group ID: docker
  * Version: 9p2000.L
  * Message Size: 262144
  * Options: map[]
  * Bind Address: 192.168.49.1:34935
Userspace file server: ufs starting
Successfully mounted /home/alice/Downloads/pv-storage to /pv-storage

NOTE: This process must stay alive for the mount to be accessible ...
To download and install the kubectl binary, execute the following commands:
$ cd $HOME/Downloads
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
$ rm -f kubectl*
To verify the version of the kubectl binary, execute the following command:
$ kubectl version --output=yaml
The following would be a typical output:
clientVersion:
  buildDate: "2023-02-22T13:39:03Z"
  compiler: gc
  gitCommit: fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b
  gitTreeState: clean
  gitVersion: v1.26.2
  goVersion: go1.19.6
  major: "1"
  minor: "26"
  platform: linux/amd64
kustomizeVersion: v4.5.7
serverVersion:
  buildDate: "2023-01-18T15:51:25Z"
  compiler: gc
  gitCommit: 8f94681cd294aa8cfd3407b8191f6c70214973a4
  gitTreeState: clean
  gitVersion: v1.26.1
  goVersion: go1.19.5
  major: "1"
  minor: "26"
  platform: linux/amd64
Now, to download and install the terraform binary, execute the following commands:
$ cd $HOME/Downloads
$ wget https://releases.hashicorp.com/terraform/1.4.0/terraform_1.4.0_linux_amd64.zip && unzip terraform_1.4.0_linux_amd64.zip
$ sudo install -o root -g root -m 0755 terraform /usr/local/bin/terraform
$ rm -f terraform*
To verify the version of the terraform binary, execute the following command:
$ terraform version
The following would be a typical output:
Terraform v1.4.0
on linux_amd64
Finally, we will create a directory on the host VM, which will act as the root for the Terraform demonstration. To do that, execute the following commands:
$ mkdir -p $HOME/Downloads/TF
$ cd $HOME/Downloads/TF
VOILA !!! - with this we have completed the necessary setup for the demonstration.
Hands-on Terraform
In the article Hands-on Kubernetes Storage, we demonstrated how one can create local persistent storage on the host and use it in the Kubernetes environment. We will replicate the same Local Storage use case using Terraform.
For the demonstration, we will make use of two providers - the null and the kubernetes providers.
The null provider is a do-nothing provider that can be used for executing commands, while the kubernetes provider allows us to provision resources in our minikube cluster.
The following is the basic definition of our Terraform configuration file:
#
### Terraform Configuration
#

provider "null" {
}

resource "null_resource" "create-html" {
  provisioner "local-exec" {
    command = "ssh -o \"StrictHostKeyChecking no\" -i $HOME/.minikube/machines/minikube/id_rsa docker@$(minikube ip) \"echo 'HOORAY - From Persistent Volume (local) !!!' > /pv-storage/index.html\""
  }
}
The first step is to perform a Terraform initialization, which prepares the current user's working directory by downloading the specified provider(s) and caching them locally. To do that, execute the following command:
$ terraform init
The following would be a typical output:
Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/null...
- Installing hashicorp/null v3.2.1...
- Installed hashicorp/null v3.2.1 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
To list all the contents in the current working directory, execute the following command:
$ ls -al
The following would be a typical output:
total 20
drwxrwxr-x 3 alice alice 4096 Mar 11 21:16 .
drwxr-xr-x 4 alice alice 4096 Mar 11 20:38 ..
-rw-rw-r-- 1 alice alice  371 Mar 11 21:11 main.tf
drwxr-xr-x 3 alice alice 4096 Mar 11 21:16 .terraform
-rw-r--r-- 1 alice alice 1152 Mar 11 21:16 .terraform.lock.hcl
The second step is to perform a Terraform validation, which verifies the syntax and usage of the various constructs in the configuration file(s). To do that, execute the following command:
$ terraform validate
The following would be a typical trimmed output:
Success! The configuration is valid.
The third step is to create a Terraform plan, which lets one preview the changes that will be made to the infrastructure. To do that, execute the following command:
$ terraform plan -out=./vers-1.tfplan
The following would be a typical output:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.create-html will be created
  + resource "null_resource" "create-html" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

-----------------------------------------------------------------------------------------------------------------------------------

Saved the plan to: ./vers-1.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "./vers-1.tfplan"
The last step is to execute the Terraform plan, which actually makes changes to the infrastructure. To do that, execute the following command:
$ terraform apply "./vers-1.tfplan"
The following would be a typical output:
null_resource.create-html: Creating...
null_resource.create-html: Provisioning with 'local-exec'...
null_resource.create-html (local-exec): Executing: ["/bin/sh" "-c" "ssh -o \"StrictHostKeyChecking no\" -i $HOME/.minikube/machines/minikube/id_rsa docker@$(minikube ip) \"echo 'HOORAY - From Persistent Volume (local) !!!' > /pv-storage/index.html\""]
null_resource.create-html: Creation complete after 0s [id=5735136570202919635]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
COOL !!! - we have successfully provisioned a resource (creation of the HTML content in our minikube cluster) using Terraform.
We will now add a Terraform variables file and modify the Terraform configuration file to provision a persistent volume in our minikube cluster.
The following is the definition of our Terraform variables file:
#
### Terraform Variables
#

variable "config-path" {
  type        = string
  description = "The path to the Kubernetes config file"
  default     = "~/.kube/config"
}

variable "config-context" {
  type        = string
  description = "The current context of the Kubernetes cluster"
  default     = "minikube"
}
The following is the modified definition of our Terraform configuration file:
#
### Terraform Configuration
#

provider "null" {
}

provider "kubernetes" {
  config_path    = var.config-path
  config_context = var.config-context
}

resource "null_resource" "create-html" {
  provisioner "local-exec" {
    command = "ssh -o \"StrictHostKeyChecking no\" -i $HOME/.minikube/machines/minikube/id_rsa docker@$(minikube ip) \"echo 'HOORAY - From Persistent Volume (local) !!!' > /pv-storage/index.html\""
  }
}

resource "kubernetes_persistent_volume" "pv-storage" {
  metadata {
    name = "pv-storage"
  }
  spec {
    storage_class_name = "standard"
    capacity = {
      storage = "2Gi"
    }
    access_modes                     = ["ReadWriteOnce"]
    persistent_volume_reclaim_policy = "Delete"
    persistent_volume_source {
      host_path {
        path = "/pv-storage"
      }
    }
  }
}
To initialize Terraform, execute the following command:
$ terraform init
The following would be a typical output:
Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/null from the dependency lock file
- Finding latest version of hashicorp/kubernetes...
- Installing hashicorp/kubernetes v2.18.1...
- Installed hashicorp/kubernetes v2.18.1 (signed by HashiCorp)
- Using previously-installed hashicorp/null v3.2.1

Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
To validate Terraform file(s), execute the following command:
$ terraform validate
The following would be a typical trimmed output:
Success! The configuration is valid.
To create a Terraform execution plan, execute the following command:
$ terraform plan -out=./vers-2.tfplan
The following would be a typical output:
null_resource.create-html: Refreshing state... [id=5735136570202919635]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kubernetes_persistent_volume.pv-storage will be created
  + resource "kubernetes_persistent_volume" "pv-storage" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "pv-storage"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + access_modes                     = [
              + "ReadWriteOnce",
            ]
          + capacity                         = {
              + "storage" = "2Gi"
            }
          + persistent_volume_reclaim_policy = "Delete"
          + storage_class_name               = "standard"
          + volume_mode                      = "Filesystem"

          + persistent_volume_source {
              + host_path {
                  + path = "/pv-storage"
                }
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

-----------------------------------------------------------------------------------------------------------------------------------

Saved the plan to: ./vers-2.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "./vers-2.tfplan"
Notice something interesting in the plan output above ???
Terraform indicates that there is just ONE addition and no other changes to our minikube cluster, even though we left the previous create-html resource as is in the configuration file.
Once again, let us list the contents of the current working directory by executing the following command:
$ ls -al
The following would be a typical output:
total 40
drwxrwxr-x 3 alice alice 4096 Mar 11 22:14 .
drwxr-xr-x 4 alice alice 4096 Mar 11 20:38 ..
-rw-rw-r-- 1 alice alice  822 Mar 11 22:06 main.tf
drwxr-xr-x 3 alice alice 4096 Mar 11 21:48 .terraform
-rw-r--r-- 1 alice alice 2204 Mar 11 22:07 .terraform.lock.hcl
-rw-rw-r-- 1 alice alice 2885 Mar 11 22:14 terraform.tfstate
-rw-rw-r-- 1 alice alice  578 Mar 11 22:14 terraform.tfstate.backup
-rw-rw-r-- 1 alice alice  299 Mar 11 22:05 variables.tf
-rw-rw-r-- 1 alice alice 2208 Mar 11 21:48 vers-1.tfplan
-rw-rw-r-- 1 alice alice 4067 Mar 11 22:11 vers-2.tfplan
Notice the presence of the file terraform.tfstate. This is the state file created and maintained by Terraform. The state file maintains the mapping between what is defined in the configuration file and the current state of the infrastructure. This is how Terraform is able to detect changes to the infrastructure.
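For illustration, the following is a hand-trimmed sketch of what terraform.tfstate might contain at this point. It is a JSON document; the top-level fields shown are standard, but the serial number and the trimmed instance attributes here are illustrative:

{
  "version": 4,
  "terraform_version": "1.4.0",
  "serial": 1,
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "null_resource",
      "name": "create-html",
      "provider": "provider[\"registry.terraform.io/hashicorp/null\"]",
      "instances": [
        {
          "attributes": {
            "id": "5735136570202919635"
          }
        }
      ]
    }
  ]
}

Since this file records the real state of the infrastructure, it should be treated as sensitive and should normally NOT be committed to version control.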
To execute the Terraform plan, execute the following command:
$ terraform apply "./vers-2.tfplan"
The following would be a typical output:
kubernetes_persistent_volume.pv-storage: Creating... kubernetes_persistent_volume.pv-storage: Creation complete after 0s [id=pv-storage] Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
To display the details of the available Persistent Volumes in our minikube cluster, execute the following command:
$ kubectl get pv
The following would be a typical output:
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-storage   2Gi        RWO            Delete           Available           standard                21s
GOOD !!! - we have successfully provisioned the local storage in our minikube cluster using Terraform.
Once again, we will modify the Terraform configuration file to provision a persistent volume claim which references the just created persistent volume from our minikube cluster.
The following is the modified definition of our Terraform configuration file:
#
### Terraform Configuration
#

provider "null" {
}

provider "kubernetes" {
  config_path    = var.config-path
  config_context = var.config-context
}

resource "null_resource" "create-html" {
  provisioner "local-exec" {
    command = "ssh -o \"StrictHostKeyChecking no\" -i $HOME/.minikube/machines/minikube/id_rsa docker@$(minikube ip) \"echo 'HOORAY - From Persistent Volume (local) !!!' > /pv-storage/index.html\""
  }
}

resource "kubernetes_persistent_volume" "pv-storage" {
  metadata {
    name = "pv-storage"
  }
  spec {
    storage_class_name = "standard"
    capacity = {
      storage = "2Gi"
    }
    access_modes                     = ["ReadWriteOnce"]
    persistent_volume_reclaim_policy = "Delete"
    persistent_volume_source {
      host_path {
        path = "/pv-storage"
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "pv-storage-claim" {
  metadata {
    name = "pv-storage-claim"
  }
  spec {
    storage_class_name = "standard"
    access_modes       = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "2Gi"
      }
    }
    volume_name = "${kubernetes_persistent_volume.pv-storage.metadata[0].name}"
  }
  depends_on = [kubernetes_persistent_volume.pv-storage]
}
Once again, to initialize Terraform, execute the following command:
$ terraform init
The following would be a typical output:
Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Using previously-installed hashicorp/kubernetes v2.18.1
- Using previously-installed hashicorp/null v3.2.1

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Once again, to validate Terraform file(s), execute the following command:
$ terraform validate
The following would be a typical trimmed output:
Success! The configuration is valid.
Once again, to create a Terraform execution plan, execute the following command:
$ terraform plan -out=./vers-3.tfplan
The following would be a typical output:
null_resource.create-html: Refreshing state... [id=5735136570202919635]
kubernetes_persistent_volume.pv-storage: Refreshing state... [id=pv-storage]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kubernetes_persistent_volume_claim.pv-storage-claim will be created
  + resource "kubernetes_persistent_volume_claim" "pv-storage-claim" {
      + id               = (known after apply)
      + wait_until_bound = true

      + metadata {
          + generation       = (known after apply)
          + name             = "pv-storage-claim"
          + namespace        = "default"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + access_modes       = [
              + "ReadWriteOnce",
            ]
          + storage_class_name = "standard"
          + volume_name        = "pv-storage"

          + resources {
              + requests = {
                  + "storage" = "2Gi"
                }
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

-----------------------------------------------------------------------------------------------------------------------------------

Saved the plan to: ./vers-3.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "./vers-3.tfplan"
Once again, to execute the Terraform plan, execute the following command:
$ terraform apply "./vers-3.tfplan"
The following would be a typical output:
kubernetes_persistent_volume_claim.pv-storage-claim: Creating...
kubernetes_persistent_volume_claim.pv-storage-claim: Creation complete after 0s [id=default/pv-storage-claim]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
To display the details of the available Persistent Volume Claims in our minikube cluster, execute the following command:
$ kubectl get pvc
The following would be a typical output:
NAME               STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-storage-claim   Bound    pv-storage   2Gi        RWO            standard       44s
COOL !!! - we have successfully deployed the storage request definition to our minikube cluster.
We will now modify both the Terraform variables and configuration files to deploy a webserver in our minikube cluster.
The following is the modified definition of the Terraform variables file:
#
### Terraform Variables
#

variable "config-path" {
  type        = string
  description = "The path to the Kubernetes config file"
  default     = "~/.kube/config"
}

variable "config-context" {
  type        = string
  description = "The current context of the Kubernetes cluster"
  default     = "minikube"
}

variable "nginx-server" {
  type        = string
  description = "The name of the nginx server that will be deployed to Kubernetes"
  default     = "nginx-server"
}

variable "nginx-storage" {
  type        = string
  description = "The name of the storage volume used by the nginx server in Kubernetes"
  default     = "nginx-storage"
}
The following is the modified definition of the Terraform configuration file:
#
### Terraform Configuration
#

provider "null" {
}

provider "kubernetes" {
  config_path    = var.config-path
  config_context = var.config-context
}

resource "null_resource" "create-html" {
  provisioner "local-exec" {
    command = "ssh -o \"StrictHostKeyChecking no\" -i $HOME/.minikube/machines/minikube/id_rsa docker@$(minikube ip) \"echo 'HOORAY - From Persistent Volume (local) !!!' > /pv-storage/index.html\""
  }
}

resource "kubernetes_persistent_volume" "pv-storage" {
  metadata {
    name = "pv-storage"
  }
  spec {
    storage_class_name = "standard"
    capacity = {
      storage = "2Gi"
    }
    access_modes                     = ["ReadWriteOnce"]
    persistent_volume_reclaim_policy = "Delete"
    persistent_volume_source {
      host_path {
        path = "/pv-storage"
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "pv-storage-claim" {
  metadata {
    name = "pv-storage-claim"
  }
  spec {
    storage_class_name = "standard"
    access_modes       = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "2Gi"
      }
    }
    volume_name = "${kubernetes_persistent_volume.pv-storage.metadata[0].name}"
  }
  depends_on = [kubernetes_persistent_volume.pv-storage]
}

resource "kubernetes_deployment" "nginx-deploy" {
  metadata {
    name = "nginx-deploy"
    labels = {
      app = var.nginx-server
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = var.nginx-server
      }
    }
    template {
      metadata {
        labels = {
          app = var.nginx-server
        }
      }
      spec {
        container {
          name  = var.nginx-server
          image = "nginx:1.23.3"
          port {
            name           = var.nginx-server
            container_port = 80
          }
          volume_mount {
            name       = var.nginx-storage
            mount_path = "/usr/share/nginx/html"
          }
        }
        volume {
          name = var.nginx-storage
          persistent_volume_claim {
            claim_name = "pv-storage-claim"
          }
        }
      }
    }
  }
  depends_on = [kubernetes_persistent_volume_claim.pv-storage-claim]
}

output "nginx-deploy" {
  value      = "${kubernetes_deployment.nginx-deploy.metadata[0].name} deployed successfully !!!"
  depends_on = [kubernetes_deployment.nginx-deploy]
}
One last time, to initialize Terraform, execute the following command:
$ terraform init
The output will be similar to that of the earlier terraform init runs from above.
One last time, to validate Terraform file(s), execute the following command:
$ terraform validate
The output will be similar to that of the earlier terraform validate runs from above.
One last time, to create a Terraform execution plan, execute the following command:
$ terraform plan -out=./vers-4.tfplan
The following would be a typical output:
null_resource.create-html: Refreshing state... [id=5735136570202919635]
kubernetes_persistent_volume.pv-storage: Refreshing state... [id=pv-storage]
kubernetes_persistent_volume_claim.pv-storage-claim: Refreshing state... [id=default/pv-storage-claim]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kubernetes_deployment.nginx-deploy will be created
  + resource "kubernetes_deployment" "nginx-deploy" {
      + id               = (known after apply)
      + wait_for_rollout = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "app" = "nginx-server"
            }
          + name             = "nginx-deploy"
          + namespace        = "default"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + min_ready_seconds         = 0
          + paused                    = false
          + progress_deadline_seconds = 600
          + replicas                  = "1"
          + revision_history_limit    = 10

          + selector {
              + match_labels = {
                  + "app" = "nginx-server"
                }
            }

          + template {
              + metadata {
                  + generation       = (known after apply)
                  + labels           = {
                      + "app" = "nginx-server"
                    }
                  + name             = (known after apply)
                  + resource_version = (known after apply)
                  + uid              = (known after apply)
                }
              + spec {
                  + automount_service_account_token  = true
                  + dns_policy                       = "ClusterFirst"
                  + enable_service_links             = true
                  + host_ipc                         = false
                  + host_network                     = false
                  + host_pid                         = false
                  + hostname                         = (known after apply)
                  + node_name                        = (known after apply)
                  + restart_policy                   = "Always"
                  + service_account_name             = (known after apply)
                  + share_process_namespace          = false
                  + termination_grace_period_seconds = 30

                  + container {
                      + image                      = "nginx:1.23.3"
                      + image_pull_policy          = (known after apply)
                      + name                       = "nginx-server"
                      + stdin                      = false
                      + stdin_once                 = false
                      + termination_message_path   = "/dev/termination-log"
                      + termination_message_policy = (known after apply)
                      + tty                        = false

                      + port {
                          + container_port = 80
                          + name           = "nginx-server"
                          + protocol       = "TCP"
                        }

                      + volume_mount {
                          + mount_path        = "/usr/share/nginx/html"
                          + mount_propagation = "None"
                          + name              = "nginx-storage"
                          + read_only         = false
                        }
                    }

                  + volume {
                      + name = "nginx-storage"

                      + persistent_volume_claim {
                          + claim_name = "pv-storage-claim"
                          + read_only  = false
                        }
                    }
                }
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + nginx-deploy = "nginx-deploy deployed successfully !!!"

-----------------------------------------------------------------------------------------------------------------------------------

Saved the plan to: ./vers-4.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "./vers-4.tfplan"
Finally, to apply the saved Terraform plan, execute the following command:
$ terraform apply "./vers-4.tfplan"
The following would be a typical output:
kubernetes_deployment.nginx-deploy: Creating...
kubernetes_deployment.nginx-deploy: Creation complete after 8s [id=default/nginx-deploy]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

nginx-deploy = "nginx-deploy deployed successfully !!!"
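The nginx-deploy message under Outputs comes from an output element, as described in the generic template earlier. A minimal sketch of an output block that would produce that exact message (the article's actual configuration may compute the value from resource attributes instead of a literal):

```terraform
# Hypothetical sketch: an output block producing the success message
# seen in the apply output above
output "nginx-deploy" {
  value       = "nginx-deploy deployed successfully !!!"
  description = "Status message for the nginx webserver deployment"
  sensitive   = false
}
```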
To display the details of the underlying application pod(s), execute the following command:
$ kubectl get po
The following would be a typical output:
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-65f5df4858-8fkr2   1/1     Running   0          16s
GREAT !!! - we have successfully deployed the webserver application to our minikube cluster.
Now is the time to test if our webserver container is able to serve the index.html file from the local storage volume attached to the minikube cluster.
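For context, the index.html file was seeded into the volume by the null_resource.create-html resource seen in the Terraform state earlier. A hypothetical sketch of such a resource, assuming a local-exec provisioner that writes the file into the hostPath directory backing the persistent volume via minikube ssh (the actual command in the article's configuration may differ):

```terraform
# Hypothetical sketch of null_resource.create-html: writes the test
# index.html into the hostPath directory (/pv-storage) on the minikube
# node that backs the persistent volume
resource "null_resource" "create-html" {
  provisioner "local-exec" {
    command = "minikube ssh -- \"echo 'HOORAY - From Persistent Volume (local) !!!' | sudo tee /pv-storage/index.html\""
  }
}
```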
To access the webserver from the deployed pod, execute the following commands:
$ kubectl exec -it nginx-deploy-65f5df4858-8fkr2 -- /bin/bash
root@nginx-deploy-65f5df4858-8fkr2:/# curl http://localhost
The following would be a typical output:
HOORAY - From Persistent Volume (local) !!!
VOILA !!! - we have successfully tested the access to the local storage volume attached to our minikube cluster.
To exit from the webserver pod, execute the following command:
root@nginx-deploy-65f5df4858-8fkr2:/# exit
It is time to perform clean-up by deleting all the deployed resources from our minikube cluster.
To delete the webserver deployment, execute the following command:
$ terraform destroy -auto-approve
The following would be a typical output:
null_resource.create-html: Refreshing state... [id=5735136570202919635]
kubernetes_persistent_volume.pv-storage: Refreshing state... [id=pv-storage]
kubernetes_persistent_volume_claim.pv-storage-claim: Refreshing state... [id=default/pv-storage-claim]
kubernetes_deployment.nginx-deploy: Refreshing state... [id=default/nginx-deploy]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  - destroy

Terraform will perform the following actions:

  # kubernetes_deployment.nginx-deploy will be destroyed
  - resource "kubernetes_deployment" "nginx-deploy" {
      - id               = "default/nginx-deploy" -> null
      - wait_for_rollout = true -> null

      - metadata {
          - annotations      = {} -> null
          - generation       = 1 -> null
          - labels           = {
              - "app" = "nginx-server"
            } -> null
          - name             = "nginx-deploy" -> null
          - namespace        = "default" -> null
          - resource_version = "33772" -> null
          - uid              = "1749244f-38d3-407c-b9b4-259032454785" -> null
        }

      - spec {
          - min_ready_seconds         = 0 -> null
          - paused                    = false -> null
          - progress_deadline_seconds = 600 -> null
          - replicas                  = "1" -> null
          - revision_history_limit    = 10 -> null

          - selector {
              - match_labels = {
                  - "app" = "nginx-server"
                } -> null
            }

          - strategy {
              - type = "RollingUpdate" -> null

              - rolling_update {
                  - max_surge       = "25%" -> null
                  - max_unavailable = "25%" -> null
                }
            }

          - template {
              - metadata {
                  - annotations = {} -> null
                  - generation  = 0 -> null
                  - labels      = {
                      - "app" = "nginx-server"
                    } -> null
                }
              - spec {
                  - active_deadline_seconds          = 0 -> null
                  - automount_service_account_token  = true -> null
                  - dns_policy                       = "ClusterFirst" -> null
                  - enable_service_links             = true -> null
                  - host_ipc                         = false -> null
                  - host_network                     = false -> null
                  - host_pid                         = false -> null
                  - node_selector                    = {} -> null
                  - restart_policy                   = "Always" -> null
                  - share_process_namespace          = false -> null
                  - termination_grace_period_seconds = 30 -> null

                  - container {
                      - args                       = [] -> null
                      - command                    = [] -> null
                      - image                      = "nginx:1.23.3" -> null
                      - image_pull_policy          = "IfNotPresent" -> null
                      - name                       = "nginx-server" -> null
                      - stdin                      = false -> null
                      - stdin_once                 = false -> null
                      - termination_message_path   = "/dev/termination-log" -> null
                      - termination_message_policy = "File" -> null
                      - tty                        = false -> null

                      - port {
                          - container_port = 80 -> null
                          - host_port      = 0 -> null
                          - name           = "nginx-server" -> null
                          - protocol       = "TCP" -> null
                        }

                      - resources {
                          - limits   = {} -> null
                          - requests = {} -> null
                        }

                      - volume_mount {
                          - mount_path        = "/usr/share/nginx/html" -> null
                          - mount_propagation = "None" -> null
                          - name              = "nginx-storage" -> null
                          - read_only         = false -> null
                        }
                    }

                  - volume {
                      - name = "nginx-storage" -> null

                      - persistent_volume_claim {
                          - claim_name = "pv-storage-claim" -> null
                          - read_only  = false -> null
                        }
                    }
                }
            }
        }
    }

  # kubernetes_persistent_volume.pv-storage will be destroyed
  - resource "kubernetes_persistent_volume" "pv-storage" {
      - id = "pv-storage" -> null

      - metadata {
          - annotations      = {} -> null
          - generation       = 0 -> null
          - labels           = {} -> null
          - name             = "pv-storage" -> null
          - resource_version = "24590" -> null
          - uid              = "cd536e4d-d453-463e-a260-6854de327ddf" -> null
        }

      - spec {
          - access_modes                     = [
              - "ReadWriteOnce",
            ] -> null
          - capacity                         = {
              - "storage" = "2Gi"
            } -> null
          - mount_options                    = [] -> null
          - persistent_volume_reclaim_policy = "Delete" -> null
          - storage_class_name               = "standard" -> null
          - volume_mode                      = "Filesystem" -> null

          - claim_ref {
              - name      = "pv-storage-claim" -> null
              - namespace = "default" -> null
            }

          - persistent_volume_source {
              - host_path {
                  - path = "/pv-storage" -> null
                }
            }
        }
    }

  # kubernetes_persistent_volume_claim.pv-storage-claim will be destroyed
  - resource "kubernetes_persistent_volume_claim" "pv-storage-claim" {
      - id               = "default/pv-storage-claim" -> null
      - wait_until_bound = true -> null

      - metadata {
          - annotations      = {} -> null
          - generation       = 0 -> null
          - labels           = {} -> null
          - name             = "pv-storage-claim" -> null
          - namespace        = "default" -> null
          - resource_version = "24592" -> null
          - uid              = "39be4394-25b0-42c0-8169-315d41d8010a" -> null
        }

      - spec {
          - access_modes       = [
              - "ReadWriteOnce",
            ] -> null
          - storage_class_name = "standard" -> null
          - volume_name        = "pv-storage" -> null

          - resources {
              - limits   = {} -> null
              - requests = {
                  - "storage" = "2Gi"
                } -> null
            }
        }
    }

  # null_resource.create-html will be destroyed
  - resource "null_resource" "create-html" {
      - id = "5735136570202919635" -> null
    }

Plan: 0 to add, 0 to change, 4 to destroy.

Changes to Outputs:
  - nginx-deploy = "nginx-deploy deployed successfully !!!" -> null

null_resource.create-html: Destroying... [id=5735136570202919635]
null_resource.create-html: Destruction complete after 0s
kubernetes_deployment.nginx-deploy: Destroying... [id=default/nginx-deploy]
kubernetes_deployment.nginx-deploy: Destruction complete after 0s
kubernetes_persistent_volume_claim.pv-storage-claim: Destroying... [id=default/pv-storage-claim]
kubernetes_persistent_volume_claim.pv-storage-claim: Destruction complete after 2s
kubernetes_persistent_volume.pv-storage: Destroying... [id=pv-storage]
kubernetes_persistent_volume.pv-storage: Destruction complete after 0s

Destroy complete! Resources: 4 destroyed.
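Notice the order of destruction in the output above: the deployment and the persistent volume claim are removed before the persistent volume they depend on. Terraform derives this ordering from its dependency graph; where a dependency is not implied by references, it can be declared explicitly with depends_on, as in the generic template from the Basics section. A hypothetical sketch:

```terraform
# Hypothetical sketch: an explicit depends_on forces this resource to be
# created after (and destroyed before) the listed resources, mirroring
# the ordering observed in the destroy output above
resource "kubernetes_deployment" "nginx-deploy" {
  # ... resource arguments ...
  depends_on = [
    kubernetes_persistent_volume_claim.pv-storage-claim,
    null_resource.create-html,
  ]
}
```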
BINGO !!! - we have successfully demonstrated the use of Terraform as an infrastructure management tool on our minikube cluster.
References