In this lab, you will deploy HashiCorp Vault. Vault is an identity-based secrets and encryption management system. We will use it in a later lab to show how we can use Vault to securely store our application's secrets and inject them into our application's pods.
Workload Identity #
We touched on Workload Identity in a previous lab, but it is now time to dive into it further so that we can leverage it for our Vault deployment. Recall that Workload Identity allows workloads in GKE to impersonate GCP IAM service accounts. In this instance, we will use Workload Identity to let Vault access our KMS key ring and encryption key.
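If you want to double-check that Workload Identity is enabled on your cluster before proceeding, you can query the cluster's workload pool. The cluster name and zone below are placeholders for your environment:

```shell
# Prints the Workload Identity pool if it is enabled (empty otherwise).
# <YOUR_CLUSTER_NAME> and <YOUR_CLUSTER_ZONE> are placeholders.
gcloud container clusters describe <YOUR_CLUSTER_NAME> \
  --zone <YOUR_CLUSTER_ZONE> \
  --format="value(workloadIdentityConfig.workloadPool)"
```

When Workload Identity is enabled, the output is the pool name in the form <YOUR_PROJECT_ID>.svc.id.goog.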
Creating the Dedicated GCP Service Account #
First, let’s create the service account in GCP and assign the necessary IAM roles.
You can create the service account through the UI or with the following gcloud command:
gcloud iam service-accounts create sa-vault
Next, we need to assign IAM roles to the service account. HashiCorp Vault requires three IAM roles at a minimum:
- Compute Viewer
- Cloud KMS CryptoKey Encrypter/Decrypter
- Cloud KMS Viewer
Use the following gcloud command to assign the above permissions to your service account. Note, you will need to run this command separately for each IAM role.
Hint: If you need help finding the IAM role ID to use in the below command, use the Roles page under IAM & Admin in the cloud console.
gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> \
--member "serviceAccount:sa-vault@<YOUR_PROJECT_ID>.iam.gserviceaccount.com" \
--role "<IAM_ROLE_ID>"
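As a sketch, the three separate runs of the command above can be collapsed into one loop. The role IDs below are the IDs those display names typically map to; confirm them on the Roles page before running:

```shell
# Bind each required role to the Vault service account.
# Role IDs are assumed from the display names above; verify them
# on the Roles page under IAM & Admin in the cloud console.
for role in roles/compute.viewer \
            roles/cloudkms.cryptoKeyEncrypterDecrypter \
            roles/cloudkms.viewer; do
  gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> \
    --member "serviceAccount:sa-vault@<YOUR_PROJECT_ID>.iam.gserviceaccount.com" \
    --role "$role"
done
```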
Once you have added the IAM roles, the last step is to link the GCP service account to the future Kubernetes service account that will run the Vault pods. Note that the Kubernetes service account does not actually exist yet, but we will create it when we deploy Vault. For now, we can assume the following:
- Vault will be deployed in the vault namespace
- The Kubernetes service account will be called vault
Use the following gcloud command to map the GCP service account to the future Vault Kubernetes service account:
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:<YOUR_PROJECT_ID>.svc.id.goog[<VAULT_NAMESPACE_NAME>/<VAULT_SERVICE_ACCOUNT_NAME>]" \
sa-vault@<YOUR_PROJECT_ID>.iam.gserviceaccount.com
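To sanity-check the binding, you can read back the GCP service account's IAM policy; the output should list roles/iam.workloadIdentityUser with the Kubernetes member you just added:

```shell
# Inspect the service account's IAM policy to confirm the
# workloadIdentityUser binding was created.
gcloud iam service-accounts get-iam-policy \
  sa-vault@<YOUR_PROJECT_ID>.iam.gserviceaccount.com
```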
Deploying HashiCorp Vault with Helm and ArgoCD #
In the previous lab, you deployed ArgoCD. In this lab, you will now use ArgoCD to deploy Vault via its Helm chart.
You can deploy applications to ArgoCD with ArgoCD’s Application CustomResourceDefinition (CRD).
Create a dedicated directory for this lab and switch into it:
cd ~
mkdir vault-workload-identity && cd vault-workload-identity
We are going to deploy Vault with the following ArgoCD Application manifest. Copy it as a starting point and be sure to update the necessary values for your environment.
Note: Do not change the HOSTNAME or VAULT_K8S_POD_NAME.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vault
  namespace: argocd
spec:
  destination:
    namespace: vault
    server: https://kubernetes.default.svc
  source:
    repoURL: https://helm.releases.hashicorp.com
    targetRevision: 0.24.1
    chart: vault
    helm:
      values: |-
        global:
          tlsDisable: true
        injector:
          enabled: false
        server:
          image:
            repository: hashicorp/vault
            tag: 1.13.2
          dataStorage:
            size: 5Gi
          service:
            enabled: true
            type: ClusterIP
          ingress:
            enabled: true
            activeService: false
            annotations:
              nginx.ingress.kubernetes.io/ssl-passthrough: "true"
              cert-manager.io/cluster-issuer: letsencrypt-production
              nginx.ingress.kubernetes.io/backend-protocol: HTTP
              nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
            ingressClassName: "nginx"
            hosts:
              - host: "vault.<YOUR_STUDENT_ID>.<RANDOM_ID>.workshops.acceleratorlabs.ca"
            tls:
              - secretName: vault-tls
                hosts:
                  - "vault.<YOUR_STUDENT_ID>.<RANDOM_ID>.workshops.acceleratorlabs.ca"
          serviceAccount:
            annotations:
              iam.gke.io/gcp-service-account: <YOUR_GCP_VAULT_SERVICE_ACCOUNT_EMAIL>
          ha:
            enabled: true
            replicas: 3
            apiAddr: "http://$(VAULT_K8S_POD_NAME).vault-internal:8200"
            raft:
              enabled: true
              setNodeId: true
              config: |
                listener "tcp" {
                  tls_disable = true
                  address = "[::]:8200"
                  cluster_address = "[::]:8201"
                }
                storage "raft" {
                  path = "/vault/data"
                  retry_join {
                    auto_join = "provider=k8s namespace=vault label_selector=\"component=server,app.kubernetes.io/name=vault\""
                    auto_join_scheme = "http"
                  }
                }
                seal "gcpckms" {
                  project    = "<YOUR_PROJECT_ID>"
                  region     = "<YOUR_KMS_REGION>"
                  key_ring   = "<YOUR_KMS_KEY_RING_NAME>"
                  crypto_key = "<YOUR_KMS_ENCRYPTION_KEY_NAME>"
                }
                service_registration "kubernetes" {}
                api_addr = "http://HOSTNAME.vault-internal:8200"
                cluster_addr = "http://HOSTNAME.vault-internal:8201"
                ui = true
          extraEnvironmentVars:
            VAULT_ADDR: http://$(VAULT_K8S_POD_NAME).vault-internal:8200
        ui:
          enabled: true
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Once you have filled out all the necessary pieces, save the file. You can now move on to deploying Vault by applying your manifest to your cluster.
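Assuming you saved the manifest as vault-application.yaml (the filename is your choice), apply it with:

```shell
# Create the ArgoCD Application object; ArgoCD then takes over
# deploying the Vault Helm chart into the vault namespace.
kubectl apply -f vault-application.yaml
```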
You can now go into the ArgoCD UI, click on the Vault tile, and watch ArgoCD roll out all of the objects that stand up Vault in your cluster.
You can validate that your Vault pods have deployed exclusively to the second node pool by running the following command:
kubectl get pods --namespace vault --output wide
In the output, the NODE column shows the node each pod is scheduled on; nodes from the second node pool will have a 2 in their name.
Initializing Vault #
Once the Vault pods are deployed, they will stay in a non-ready state until we initialize Vault. This essentially means that we create the cryptographic barrier for Vault. To do this, we will run the command vault operator init in one of the Vault pods.
kubectl exec -it vault-0 --namespace vault -- vault operator init
You will see an output similar to below:
Recovery Key 1: GxVm+6AOrAalUfJeqtJjAgGWad9wG2Mp+NrnHjAZKx1p
Recovery Key 2: DuwTn/Mwc4Deafd4lNt2LJksQ8yf8AiQpBCRUt1EG4gV
Recovery Key 3: /CBbHVI47VDqvG3drj1uIpsdhDmyvgW8pzzTGQG45nPC
Recovery Key 4: RePHQwjMFfbjjh2Cz7o3BGo3Up92NthBwyE2miwP7+ls
Recovery Key 5: 21nqKd+gy4tbuN4hAFwWacBS1xVn3hy0M8/yYLwzK/SC
Initial Root Token: hvs.ErLNjAznXDU1gsBKjRAPzH13
Success! Vault is initialized
Recovery key initialized with 5 key shares and a key threshold of 3. Please
securely distribute the key shares printed above.
Make note of your root token and recovery keys.
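Because the manifest configures the gcpckms seal, the pods should unseal automatically after initialization and transition to ready. You can confirm each node's state with vault status (vault-0 shown; repeat for vault-1 and vault-2):

```shell
# A healthy node reports "Initialized  true" and "Sealed  false".
kubectl exec vault-0 --namespace vault -- vault status
```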
You can also validate that ArgoCD is reporting that your Vault deployment is healthy by using the argocd tool:
argoPass=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
argocd login argocd.<YOUR_STUDENT_ID>.<RANDOM_ID>.workshops.acceleratorlabs.ca --username admin --password $argoPass
argocd app list
You will see an output similar to below:
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
argocd/vault https://kubernetes.default.svc vault default Synced Healthy Auto-Prune <none> https://helm.releases.hashicorp.com 0.24.1
Confirm with an instructor that your Vault cluster is healthy before moving to the next lab.