Storage
Kubernetes Volumes
- Same lifetime as its pod
- Data preserved across container restarts
- When the pod goes away, its volumes go away with it
Using Volumes¶
- Pod spec indicates which volumes to provide for the pod (spec.volumes)
- Pod spec indicates where to mount these volumes (spec.containers[].volumeMounts)
- Seen from the container's perspective as part of the filesystem
- Volumes cannot mount onto other volumes
- No hard links to other volumes
- Each pod must specify where each volume is mounted
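A minimal sketch of the two halves working together — an emptyDir volume declared once under spec.volumes and mounted via volumeMounts (the names volume-demo, cache-volume, and /cache are chosen here for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  volumes:
    - name: cache-volume   # declared once at the pod level
      emptyDir: {}         # scratch space; lives and dies with the pod
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: cache-volume   # must match an entry in spec.volumes
          mountPath: /cache    # where the container sees the volume
```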
Types of Volumes¶
Kubernetes supports several types of volumes, including emptyDir, hostPath, nfs, persistentVolumeClaim, and cloud-provider volumes such as AWS EBS.
AWS storage class for your Amazon EKS cluster¶
Create an AWS storage class manifest file for your storage class.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
```
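Assuming the manifest above is saved as gp2-storage-class.yaml (a filename chosen here for illustration), it can be applied with:

```shell
kubectl apply -f gp2-storage-class.yaml
```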
List the existing storage classes for your cluster.
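The standard command for this step:

```shell
kubectl get storageclass
```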
Choose a storage class and set it as your default by setting the storageclass.kubernetes.io/is-default-class=true annotation.
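Setting the annotation is typically done with kubectl patch — here assuming the gp2 class from above:

```shell
kubectl patch storageclass gp2 -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```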
Verify that the storage class is now set as the default.
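Listing the storage classes again will mark the default class with (default) next to its name, along the lines of:

```shell
kubectl get storageclass
# NAME            PROVISIONER             ...
# gp2 (default)   kubernetes.io/aws-ebs   ...
```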
Persistent Volumes¶
Volume Mode¶
You can set the value of volumeMode to Block to use a raw block device, or Filesystem to use a filesystem. Filesystem is the default if the value is omitted.
Access Modes¶
A PersistentVolume can be mounted on a host in any way supported by the resource provider. Each PV gets its own set of access modes describing that specific PV’s capabilities.
- ReadWriteOnce (RWO) – the volume can be mounted as read-write by a single node
- ReadOnlyMany (ROX) – the volume can be mounted read-only by many nodes
- ReadWriteMany (RWX) – the volume can be mounted as read-write by many nodes
Reclaim Policy¶
Current reclaim policies are:
- Retain – manual reclamation
- Recycle – basic scrub (rm -rf /thevolume/*)
- Delete – associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.
NFS Volumes¶
An NFS volume is useful for two reasons:
- Data already stored on the NFS server is not deleted when a pod is destroyed. Data is persistent.
- An NFS volume can be mounted by multiple pods at the same time, so it can be used to share data between pods!
This is really useful for running applications that need a filesystem shared between multiple application servers. For example, you can use an NFS volume to run WordPress on Kubernetes.
A sample Deployment that mounts `10.10.10.1:/data/shared` as `/shared` inside the pod:
```yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nfs-volume
          nfs:
            server: 10.10.10.1
            path: /data/shared
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs-volume
              mountPath: /shared
```
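One way to confirm the mount, assuming the Deployment above has been applied (the filename is an assumption):

```shell
kubectl apply -f nginx-nfs-deployment.yaml
kubectl exec deploy/nginx-deployment -- df -h /shared
```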
PV and PVC using NFS¶
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-jeevandk
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Mi
  mountOptions:
    - vers=4.1
    - proto=tcp
    - port=2049
  nfs:
    path: /home/jeevandk
    server: 10.163.128.223
  persistentVolumeReclaimPolicy: Retain # NFS does not support Delete; only Retain/Recycle apply
  storageClassName: nfs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-jeevandk
  namespace: rstudio
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  volumeName: pv-jeevandk
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Mi
```
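Assuming the two manifests above are saved together as pv-pvc.yaml (filename chosen here for illustration), applying them and checking that the claim binds to the volume looks like:

```shell
kubectl apply -f pv-pvc.yaml
kubectl get pv pv-jeevandk
kubectl get pvc pvc-jeevandk -n rstudio   # STATUS should show Bound
```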
PVC as volumes¶
Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod’s namespace and uses it to get the PersistentVolume backing the claim. The volume is then mounted to the host and into the pod.
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```
hostPath volumes¶
A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        # this field is optional
        type: Directory
```
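A quick check, assuming the manifest above is saved as hostpath-pod.yaml (filename assumed). Note that with type: Directory, /data must already exist on the node; Kubernetes will not create it:

```shell
kubectl apply -f hostpath-pod.yaml
kubectl exec test-pd -- ls /test-pd   # lists the contents of the node's /data
```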
NFS Dynamic Provisioner¶
Deploy a dynamic NFS provisioner with Helm (the --name flag is Helm 2 syntax; Helm 3 takes the release name as a positional argument instead):

```shell
helm install --name nfs-client-provisioner \
  --set nfs.server=10.158.53.104 \
  --set nfs.path=/export \
  stable/nfs-client-provisioner
```
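Once the provisioner is running, PVCs that reference its storage class get volumes provisioned automatically on the NFS export. A sketch of such a claim — the class name nfs-client is the chart's default and may differ in your setup (verify with kubectl get storageclass):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client   # chart default; an assumption here
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```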