Cleanup Kubernetes documentation (#6678)

master
Nitish Tiwari 6 years ago committed by GitHub
parent b99aaab42e
commit 7b7be66fa1
  1. docs/orchestration/kubernetes-yaml/README.md (603 changes)
  2. docs/orchestration/kubernetes/README.md (638 changes)
  3. docs/orchestration/kubernetes/minio-distributed-daemonset.yaml (0 changes)
  4. docs/orchestration/kubernetes/minio-distributed-headless-service.yaml (0 changes)
  5. docs/orchestration/kubernetes/minio-distributed-service.yaml (0 changes)
  6. docs/orchestration/kubernetes/minio-distributed-statefulset.yaml (0 changes)
  7. docs/orchestration/kubernetes/minio-gcs-gateway-deployment.yaml (0 changes)
  8. docs/orchestration/kubernetes/minio-gcs-gateway-service.yaml (0 changes)
  9. docs/orchestration/kubernetes/minio-standalone-deployment.yaml (0 changes)
  10. docs/orchestration/kubernetes/minio-standalone-pvc.yaml (0 changes)
  11. docs/orchestration/kubernetes/minio-standalone-service.yaml (0 changes)
  12. docs/orchestration/minikube/README.md (47 changes)
  13. docs/orchestration/minikube/minio_distributed.sh (58 changes)
  14. docs/orchestration/minikube/statefulset.yaml (78 changes)

docs/orchestration/kubernetes-yaml/README.md
@@ -1,603 +0,0 @@
# Cloud Native Deployment of Minio on Kubernetes [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
## Table of Contents
- [Prerequisites](#prerequisites)
- [Minio Standalone Server Deployment](#minio-standalone-server-deployment)
  - [Standalone Quickstart](#standalone-quickstart)
  - [Create Persistent Volume Claim](#create-persistent-volume-claim)
  - [Create Deployment](#create-minio-deployment)
  - [Create LoadBalancer Service](#create-minio-service)
  - [Update existing Minio Deployment](#update-existing-minio-deployment)
  - [Resource cleanup](#standalone-resource-cleanup)
- [Minio Distributed Server Deployment](#minio-distributed-server-deployment)
  - [Distributed Quickstart](#distributed-quickstart)
  - [Create Minio Headless Service](#create-minio-headless-service)
  - [Create Minio Statefulset](#create-minio-statefulset)
  - [Create LoadBalancer Service](#create-minio-service)
  - [Update existing Minio StatefulSet](#update-existing-minio-statefulset)
  - [Deploying on cluster nodes with local host path](#deploying-on-cluster-nodes-with-local-host-path)
  - [Resource cleanup](#distributed-resource-cleanup)
- [Minio GCS Gateway Deployment](#minio-gcs-gateway-deployment)
  - [GCS Gateway Quickstart](#gcs-gateway-quickstart)
  - [Create GCS Credentials Secret](#create-gcs-credentials-secret)
  - [Create Minio GCS Gateway Deployment](#create-minio-gcs-gateway-deployment)
  - [Create Minio LoadBalancer Service](#create-minio-loadbalancer-service)
  - [Update Existing Minio GCS Deployment](#update-existing-minio-gcs-deployment)
  - [Resource cleanup](#gcs-gateway-resource-cleanup)
## Prerequisites
To run this example, you need a Kubernetes cluster (version 1.4 or later) installed and running, and the [`kubectl`](https://kubernetes.io/docs/tasks/kubectl/install/) command line tool installed in your path. Please see the [getting started guides](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
## Minio Standalone Server Deployment
The following section describes the process to deploy a standalone [Minio](https://minio.io/) server on Kubernetes. The deployment uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.
This section uses the following core components of Kubernetes:
- [_Pods_](https://kubernetes.io/docs/user-guide/pods/)
- [_Services_](https://kubernetes.io/docs/user-guide/services/)
- [_Deployments_](https://kubernetes.io/docs/user-guide/deployments/)
- [_Persistent Volume Claims_](https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims)
### Standalone Quickstart
Run the commands below to get started quickly:
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-standalone-pvc.yaml?raw=true
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-standalone-deployment.yaml?raw=true
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-standalone-service.yaml?raw=true
```
### Create Persistent Volume Claim
Minio needs persistent storage to store objects. Without persistent storage, the data is kept in the container file system and is wiped out as soon as the container restarts.
Create a persistent volume claim (PVC) to request storage for the Minio instance. Kubernetes looks for PVs matching the PVC request in the cluster and binds one to the PVC automatically.
This is the PVC description.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. This is used in deployment.
  name: minio-pv-claim
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
    # The volume is mounted as read-write by a single node
    - ReadWriteOnce
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 10Gi
```
Create the PersistentVolumeClaim
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-standalone-pvc.yaml?raw=true
persistentvolumeclaim "minio-pv-claim" created
```
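Before creating the deployment, you can check that the claim was bound to a volume. The VOLUME, STORAGECLASS and column layout below depend on your cluster and kubectl version, so treat the output as illustrative only:
```sh
kubectl get pvc minio-pv-claim
NAME             STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minio-pv-claim   Bound     pvc-...   10Gi       RWO            standard       1m
```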
### Create Minio Deployment
A deployment encapsulates replica sets and pods, so if a pod goes down, the replication controller makes sure another pod comes up automatically. This way you won't need to bother about pod failures and will have a stable Minio service available.
This is the deployment description.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio
spec:
  strategy:
    # Specifies the strategy used to replace old Pods by new ones
    # Refer: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
    type: Recreate
  template:
    metadata:
      labels:
        # This label is used as a selector in Service definition
        app: minio
    spec:
      # Volumes used by this deployment
      volumes:
      - name: data
        # This volume is based on PVC
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim
      containers:
      - name: minio
        # Volume mounts for this container
        volumeMounts:
        # Volume 'data' is mounted to path '/data'
        - name: data
          mountPath: "/data"
        # Pulls the latest Minio image from Docker Hub
        image: minio/minio:RELEASE.2018-10-18T00-28-58Z
        args:
        - server
        - /data
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
        # Readiness probe detects situations when Minio server instance
        # is not ready to accept traffic. Kubernetes doesn't forward
        # traffic to the pod until the readiness checks pass.
        readinessProbe:
          httpGet:
            path: /minio/health/ready
            port: 9000
          initialDelaySeconds: 120
          periodSeconds: 20
        # Liveness probe detects situations where Minio server instance
        # is not working properly and needs restart. Kubernetes automatically
        # restarts the pods if liveness checks fail.
        livenessProbe:
          httpGet:
            path: /minio/health/live
            port: 9000
          initialDelaySeconds: 120
          periodSeconds: 20
```
Create the Deployment
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-standalone-deployment.yaml?raw=true
deployment "minio-deployment" created
```
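To verify that the deployment came up, list the deployments and the pod selected by the `app: minio` label from the template above:
```sh
kubectl get deployments
kubectl get pods -l app=minio   # the pod should reach Running once the PVC is bound and the image is pulled
```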
### Create Minio Service
Now that you have a Minio deployment running, you may either want to access it internally (within the cluster) or expose it as a Service onto an external (outside of your cluster, maybe public internet) IP address, depending on your use case. You can achieve this using Services. There are three major service types: the default type is ClusterIP, which exposes a service to connections from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
In this example, we expose the Minio Deployment by creating a LoadBalancer service. This is the service description.
```yaml
apiVersion: v1
kind: Service
metadata:
  # This name uniquely identifies the service
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    # Looks for labels `app:minio` in the namespace and applies the spec
    app: minio
```
Create the Minio service
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-standalone-service.yaml?raw=true
service "minio-service" created
```
The `LoadBalancer` service takes a couple of minutes to launch. To check if the service was created successfully, run the command
```sh
kubectl get svc minio-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio-service 10.55.248.23 104.199.249.165 9000:31852/TCP 1m
```
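Once the external IP is assigned, any S3-compatible client can use it. For example, with the Minio client (`mc`), using the example IP above and the access/secret keys from the deployment (the bucket name is only illustrative):
```sh
mc config host add myminio http://104.199.249.165:9000 minio minio123
mc mb myminio/testbucket
mc ls myminio
```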
### Update existing Minio Deployment
You can update an existing Minio deployment to use a newer Minio release. To do this, use the `kubectl set image` command:
```sh
kubectl set image deployment/minio-deployment minio=<replace-with-new-minio-image>
```
Kubernetes will restart the deployment to update the image. On a successful update, you will get a message like this:
```
deployment "minio-deployment" image updated
```
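For example, to move to the image tag used elsewhere in this document (purely illustrative; substitute whichever release you actually want) and watch the rollout complete:
```sh
kubectl set image deployment/minio-deployment minio=minio/minio:RELEASE.2018-10-18T00-28-58Z
kubectl rollout status deployment/minio-deployment
```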
### Standalone Resource cleanup
You can clean up the cluster using
```sh
kubectl delete deployment minio \
&& kubectl delete pvc minio-pv-claim \
&& kubectl delete svc minio-service
```
## Minio Distributed Server Deployment
The following document describes the process to deploy a [distributed Minio](https://docs.minio.io/docs/distributed-minio-quickstart-guide) server on Kubernetes. This example uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.
This example uses the following core components of Kubernetes:
- [_Pods_](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
- [_Services_](https://kubernetes.io/docs/concepts/services-networking/service/)
- [_Statefulsets_](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/)
### Distributed Quickstart
Run the commands below to get started quickly:
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-distributed-headless-service.yaml?raw=true
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-distributed-statefulset.yaml?raw=true
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-distributed-service.yaml?raw=true
```
### Create Minio Headless Service
Headless Service controls the domain within which StatefulSets are created. The domain managed by this Service takes the form: `$(service name).$(namespace).svc.cluster.local` (where “cluster.local” is the cluster domain), and the pods in this domain take the form: `$(pod-name-{i}).$(service name).$(namespace).svc.cluster.local`. This is required to get a DNS resolvable URL for each of the pods created within the Statefulset.
This is the Headless service description.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  clusterIP: None
  ports:
    - port: 9000
      name: minio
  selector:
    app: minio
```
Create the Headless Service
```sh
$ kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-distributed-headless-service.yaml?raw=true
service "minio" created
```
### Create Minio Statefulset
A StatefulSet provides a deterministic name and a unique identity to each pod, making it easy to deploy stateful distributed applications. To launch distributed Minio you need to pass drive locations as parameters to the minio server command. Then, you’ll need to run the same command on all the participating pods. StatefulSets offer a perfect way to handle this requirement.
This is the Statefulset description.
```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  # This name uniquely identifies the StatefulSet
  name: minio
spec:
  serviceName: minio
  replicas: 4
  selector:
    matchLabels:
      app: minio # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: minio # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: minio
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        image: minio/minio:RELEASE.2018-10-18T00-28-58Z
        args:
        - server
        - http://minio-0.minio.default.svc.cluster.local/data
        - http://minio-1.minio.default.svc.cluster.local/data
        - http://minio-2.minio.default.svc.cluster.local/data
        - http://minio-3.minio.default.svc.cluster.local/data
        ports:
        - containerPort: 9000
        # These volume mounts are persistent. Each pod in the StatefulSet
        # gets a volume mounted based on this field.
        volumeMounts:
        - name: data
          mountPath: /data
        # Liveness probe detects situations where Minio server instance
        # is not working properly and needs restart. Kubernetes automatically
        # restarts the pods if liveness checks fail.
        livenessProbe:
          httpGet:
            path: /minio/health/live
            port: 9000
          initialDelaySeconds: 120
          periodSeconds: 20
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
```
Create the Statefulset
```sh
$ kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-distributed-statefulset.yaml?raw=true
statefulset "minio" created
```
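The pods are created in order, `minio-0` through `minio-3`, and each one gets its own claim from the `volumeClaimTemplates` section. You can watch them come up with:
```sh
kubectl get pods -l app=minio --watch
kubectl get pvc   # expect one claim per pod: data-minio-0 ... data-minio-3
```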
### Create Minio Service
Now that you have a Minio statefulset running, you may either want to access it internally (within the cluster) or expose it as a Service onto an external (outside of your cluster, maybe public internet) IP address, depending on your use case. You can achieve this using Services. There are three major service types: the default type is ClusterIP, which exposes a service to connections from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
In this example, we expose the Minio Deployment by creating a LoadBalancer service. This is the service description.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
```
Create the Minio service
```sh
$ kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-distributed-service.yaml?raw=true
service "minio-service" created
```
The `LoadBalancer` service takes a couple of minutes to launch. To check if the service was created successfully, run the command
```sh
$ kubectl get svc minio-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio-service 10.55.248.23 104.199.249.165 9000:31852/TCP 1m
```
### Update existing Minio StatefulSet
You can update an existing Minio StatefulSet to use a newer Minio release. To do this, use the `kubectl patch statefulset` command:
```sh
kubectl patch statefulset minio --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"<replace-with-new-minio-image>"}]'
```
On successful update, you should see the output below
```
statefulset "minio" patched
```
Then delete all the pods in your StatefulSet one by one as shown below. Kubernetes will restart those pods for you, using the new image.
```sh
kubectl delete pod minio-0
```
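To roll the update across the whole StatefulSet, you can repeat that for each pod. A small loop along these lines (a sketch: `kubectl wait` needs a reasonably recent kubectl, and the sleep only gives the old pod time to start terminating) keeps the cluster serving while pods restart one by one:
```sh
for i in 0 1 2 3; do
  # delete one pod; the StatefulSet controller recreates it with the new image
  kubectl delete pod "minio-$i"
  sleep 10
  # wait for the replacement pod to become Ready before touching the next one
  kubectl wait --for=condition=ready "pod/minio-$i" --timeout=5m
done
```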
### Distributed Resource cleanup
You can clean up the cluster using
```sh
kubectl delete statefulset minio \
&& kubectl delete svc minio \
&& kubectl delete svc minio-service
```
### Deploying on cluster nodes with local host path
If your cluster does not have a storage solution or PV abstraction, you must explicitly define what nodes you wish to run Minio on, and define a homogeneous path to a local fast block device available on every host.
This must be changed in the example daemonset: [minio-distributed-daemonset.yaml](minio-distributed-daemonset.yaml)
Specifically, the `hostPath`:
```yaml
hostPath:
  path: /data/minio/
```
And the list of hosts:
```yaml
- http://hostname1:9000/data/minio
- http://hostname2:9000/data/minio
- http://hostname3:9000/data/minio
- http://hostname4:9000/data/minio
```
Once deployed, tag the defined host with the `minio-server=true` label:
```bash
kubectl label node hostname1 minio-server=true
kubectl label node hostname2 minio-server=true
kubectl label node hostname3 minio-server=true
kubectl label node hostname4 minio-server=true
```
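For the label to matter, the daemonset's pod template has to select on it. The fragment below is only a sketch of what that selection typically looks like; check `minio-distributed-daemonset.yaml` itself for the exact fields it uses:
```yaml
spec:
  template:
    spec:
      # schedule Minio pods only on nodes labelled minio-server=true
      nodeSelector:
        minio-server: "true"
```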
## Minio GCS Gateway Deployment
The following section describes the process to deploy [Minio](https://minio.io/) GCS Gateway on Kubernetes. The deployment uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.
This section uses the following core components of Kubernetes:
- [_Secrets_](https://kubernetes.io/docs/concepts/configuration/secret/)
- [_Services_](https://kubernetes.io/docs/user-guide/services/)
- [_Deployments_](https://kubernetes.io/docs/user-guide/deployments/)
### GCS Gateway Quickstart
Create the Google Cloud Service credentials file using the steps mentioned [here](https://github.com/minio/minio/blob/master/docs/gateway/gcs.md#create-service-account-key-for-gcs-and-get-the-credentials-file).
Use the path of the file generated above to create a Kubernetes `secret`.
```sh
kubectl create secret generic gcs-credentials --from-file=/path/to/gcloud/credentials/application_default_credentials.json
```
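You can confirm the secret exists, without printing the credentials themselves, with:
```sh
kubectl get secret gcs-credentials
kubectl describe secret gcs-credentials   # shows the key name and data size, not the contents
```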
Then download the `minio-gcs-gateway-deployment.yaml` file
```sh
wget https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-gcs-gateway-deployment.yaml?raw=true
```
Replace `gcp_project_id` in that file with your GCS project ID. Then run
```sh
kubectl create -f minio-gcs-gateway-deployment.yaml
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-gcs-gateway-service.yaml?raw=true
```
### Create GCS Credentials Secret
A `secret` is intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.
Create the Google Cloud Service credentials file using the steps mentioned [here](https://github.com/minio/minio/blob/master/docs/gateway/gcs.md#create-service-account-key-for-gcs-and-get-the-credentials-file).
Use the path of the file generated above to create a Kubernetes `secret`.
```sh
kubectl create secret generic gcs-credentials --from-file=/path/to/gcloud/credentials/application_default_credentials.json
```
### Create Minio GCS Gateway Deployment
A deployment encapsulates replica sets and pods, so if a pod goes down, the replication controller makes sure another pod comes up automatically. This way you won't need to bother about pod failures and will have a stable Minio service available.
Minio Gateway uses GCS as its storage backend and needs a GCP project ID to identify your credentials. Update the `gcp_project_id` argument with your GCS project ID. This is the deployment description.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the secret created earlier
      volumes:
      - name: gcs-credentials
        secret:
          # Name of the Secret created earlier
          secretName: gcs-credentials
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:RELEASE.2018-10-18T00-28-58Z
        args:
        - gateway
        - gcs
        - gcp_project_id
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        # Google Cloud Service uses this variable
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: "/etc/credentials/application_default_credentials.json"
        ports:
        - containerPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: gcs-credentials
          mountPath: "/etc/credentials"
          readOnly: true
```
Create the Deployment
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-gcs-gateway-deployment.yaml?raw=true
deployment "minio-deployment" created
```
### Create Minio LoadBalancer Service
Now that you have a Minio deployment running, you may either want to access it internally (within the cluster) or expose it as a Service onto an external (outside of your cluster, maybe public internet) IP address, depending on your use case. You can achieve this using Services. There are three major service types: the default type is ClusterIP, which exposes a service to connections from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
In this example, we expose the Minio Deployment by creating a LoadBalancer service. This is the service description.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
```
Create the Minio service
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/minio-gcs-gateway-service.yaml?raw=true
service "minio-service" created
```
The `LoadBalancer` service takes a couple of minutes to launch. To check if the service was created successfully, run the command
```sh
kubectl get svc minio-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio-service 10.55.248.23 104.199.249.165 9000:31852/TCP 1m
```
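As with the standalone deployment, any S3-compatible client can now talk to the gateway and operate on buckets in your GCS project. A quick smoke test with the Minio client (`mc`), using the example external IP above and the access/secret keys from the deployment (the bucket name is only illustrative):
```sh
mc config host add mygcs http://104.199.249.165:9000 minio minio123
mc mb mygcs/gateway-smoke-test
mc ls mygcs
```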
### Update Existing Minio GCS Deployment
You can update an existing Minio deployment to use a newer Minio release. To do this, use the `kubectl set image` command:
```sh
kubectl set image deployment/minio-deployment minio=<replace-with-new-minio-image>
```
Kubernetes will restart the deployment to update the image. On a successful update, you will get a message like this:
```
deployment "minio-deployment" image updated
```
### GCS Gateway Resource Cleanup
You can clean up the cluster using
```sh
kubectl delete deployment minio-deployment \
&& kubectl delete secret gcs-credentials
```

docs/orchestration/kubernetes/README.md
@@ -1,151 +1,611 @@
# Deploy Minio on Kubernetes [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
Kubernetes concepts like Deployments and StatefulSets provide perfect platform to deploy Minio server in standalone, distributed or gateway mode. There are multiple options to deploy Minio on Kubernetes, you can choose the one that best suits your requirements.
- Helm Chart: Minio Helm Chart offers customizable and easy Minio deployment with a single command. Refer [Minio Helm Chart repository documentation](https://github.com/helm/charts/tree/master/stable/minio) for more details.
- YAML File: Minio can be deployed with `yaml` files via `kubectl`. This document outlines steps required to deploy Minio using `yaml` files.
## Table of Contents
- [Prerequisites](#prerequisites)
- [Minio Standalone Server Deployment](#minio-standalone-server-deployment)
  - [Standalone Quickstart](#standalone-quickstart)
  - [Create Persistent Volume Claim](#create-persistent-volume-claim)
  - [Create Deployment](#create-minio-deployment)
  - [Create LoadBalancer Service](#create-minio-service)
  - [Update existing Minio Deployment](#update-existing-minio-deployment)
  - [Resource cleanup](#standalone-resource-cleanup)
- [Minio Distributed Server Deployment](#minio-distributed-server-deployment)
  - [Distributed Quickstart](#distributed-quickstart)
  - [Create Minio Headless Service](#create-minio-headless-service)
  - [Create Minio Statefulset](#create-minio-statefulset)
  - [Create LoadBalancer Service](#create-minio-service)
  - [Update existing Minio StatefulSet](#update-existing-minio-statefulset)
  - [Deploying on cluster nodes with local host path](#deploying-on-cluster-nodes-with-local-host-path)
  - [Resource cleanup](#distributed-resource-cleanup)
- [Minio GCS Gateway Deployment](#minio-gcs-gateway-deployment)
  - [GCS Gateway Quickstart](#gcs-gateway-quickstart)
  - [Create GCS Credentials Secret](#create-gcs-credentials-secret)
  - [Create Minio GCS Gateway Deployment](#create-minio-gcs-gateway-deployment)
  - [Create Minio LoadBalancer Service](#create-minio-loadbalancer-service)
  - [Update Existing Minio GCS Deployment](#update-existing-minio-gcs-deployment)
  - [Resource cleanup](#gcs-gateway-resource-cleanup)
## Prerequisites
To run this example, you need Kubernetes version >=1.4 cluster installed and running, and that you have installed the [`kubectl`](https://kubernetes.io/docs/tasks/kubectl/install/) command line tool in your path. Please see the [getting started guides](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
Kubernetes concepts like Deployments and StatefulSets provide perfect platform to deploy Minio server in standalone, distributed or shared mode. There are multiple options to deploy Minio on Kubernetes, you can choose the one that suits you the most.
- Minio [Helm](https://helm.sh) Chart offers a customizable and easy Minio deployment, with a single command. Read more about Minio Helm deployment [here](#prerequisites).
- You can also explore Kubernetes [Minio example](https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes-yaml/README.md) to deploy Minio using `.yaml` files.
- If you'd like to get started with Minio on Kubernetes without having to create a real container cluster, you can also [deploy Minio locally](https://raw.githubusercontent.com/minio/minio/master/docs/orchestration/minikube/README.md) with MiniKube.
<a name="prerequisites"></a>
## 1. Prerequisites
* Kubernetes 1.4+ with Beta APIs enabled for default standalone mode.
* Kubernetes 1.5+ with Beta APIs enabled to run Minio in [distributed mode](#distributed-minio).
* PV provisioner support in the underlying infrastructure.
* Helm package manager [installed](https://github.com/kubernetes/helm#install) on your Kubernetes cluster.
## 2. Deploy Minio using Helm Chart
Install Minio chart by
```bash
$ helm install stable/minio
```
Above command deploys Minio on the Kubernetes cluster in the default configuration. Below section lists all the configurable parameters of the Minio chart and their default values.
### Configuration
| Parameter | Description | Default |
|----------------------------|-------------------------------------|---------------------------------------------------------|
| `image` | Minio image name | `minio/minio` |
| `imageTag` | Minio image tag. Possible values listed [here](https://hub.docker.com/r/minio/minio/tags/).| `RELEASE.2018-10-18T00-28-58Z`|
| `imagePullPolicy` | Image pull policy | `Always` |
| `mode` | Minio server mode (`standalone`, `shared` or `distributed`)| `standalone` |
| `numberOfNodes` | Number of nodes (applicable only for Minio distributed mode). Should be 4 <= x <= 16 | `4` |
| `accessKey` | Default access key | `AKIAIOSFODNN7EXAMPLE` |
| `secretKey` | Default secret key | `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` |
| `configPath` | Default config file location | `~/.minio` |
| `mountPath` | Default mount location for persistent drive| `/export` |
| `serviceType` | Kubernetes service type | `LoadBalancer` |
| `servicePort` | Kubernetes port where service is exposed| `9000` |
| `persistence.enabled` | Use persistent volume to store data | `true` |
| `persistence.size` | Size of persistent volume claim | `10Gi` |
| `persistence.storageClass` | Type of persistent volume claim | `generic` |
| `persistence.accessMode` | ReadWriteOnce or ReadOnly | `ReadWriteOnce` |
| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `100m` |
You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```bash
$ helm install --name my-release \
  --set persistence.size=100Gi \
  stable/minio
```
The above command deploys Minio server with a 100Gi backing persistent volume.
## Minio Standalone Server Deployment
The following section describes the process to deploy standalone [Minio](https://minio.io/) server on Kubernetes. The deployment uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.
This section uses following core components of Kubernetes:
- [_Pods_](https://kubernetes.io/docs/user-guide/pods/)
- [_Services_](https://kubernetes.io/docs/user-guide/services/)
- [_Deployments_](https://kubernetes.io/docs/user-guide/deployments/)
- [_Persistent Volume Claims_](https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims)
### Standalone Quickstart
Run the below commands to get started quickly
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-standalone-pvc.yaml?raw=true
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-standalone-deployment.yaml?raw=true
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-standalone-service.yaml?raw=true
```
### Create Persistent Volume Claim
Minio needs persistent storage to store objects. If there is no
persistent storage, the data stored in Minio instance will be stored in the container file system and will be wiped off as soon as the container restarts.
Create a persistent volume claim (PVC) to request storage for the Minio instance. Kubernetes looks out for PVs matching the PVC request in the cluster and binds it to the PVC automatically.
This is the PVC description.
```sh
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
# This name uniquely identifies the PVC. This is used in deployment.
name: minio-pv-claim
spec:
# Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
accessModes:
# The volume is mounted as read-write by a single node
- ReadWriteOnce
resources:
# This is the request for storage. Should be available in the cluster.
requests:
storage: 10Gi
```
Alternately, you can provide a YAML file that specifies parameter values while installing the chart. For example,
```bash
$ helm install --name my-release -f values.yaml stable/minio
```
### Distributed Minio
Create the PersistentVolumeClaim
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-standalone-pvc.yaml?raw=true
persistentvolumeclaim "minio-pv-claim" created
```
### Create Minio Deployment
A deployment encapsulates replica sets and pods, so if a pod goes down, the replication controller makes sure another pod comes up automatically. This way you won't need to bother about pod failures and will have a stable Minio service available.
This is the deployment description.
```sh
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# This name uniquely identifies the Deployment
name: minio
spec:
strategy:
# Specifies the strategy used to replace old Pods by new ones
# Refer: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
type: Recreate
template:
metadata:
labels:
# This label is used as a selector in Service definition
app: minio
spec:
# Volumes used by this deployment
volumes:
- name: data
# This volume is based on PVC
persistentVolumeClaim:
# Name of the PVC created earlier
claimName: minio-pv-claim
containers:
- name: minio
# Volume mounts for this container
volumeMounts:
# Volume 'data' is mounted to path '/data'
- name: data
mountPath: "/data"
# Pulls the lastest Minio image from Docker Hub
image: minio/minio:RELEASE.2018-10-18T00-28-58Z
args:
- server
- /data
env:
# Minio access key and secret key
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
ports:
- containerPort: 9000
# Readiness probe detects situations when Minio server instance
# is not ready to accept traffic. Kubernetes doesn't forward
# traffic to the pod till readiness checks fail.
readinessProbe:
httpGet:
path: /minio/health/ready
port: 9000
initialDelaySeconds: 120
periodSeconds: 20
# Liveness probe detects situations where Minio server instance
# is not working properly and needs restart. Kubernetes automatically
# restarts the pods if liveness checks fail.
livenessProbe:
httpGet:
path: /minio/health/live
port: 9000
initialDelaySeconds: 120
periodSeconds: 20
```
This chart provisions a Minio server in standalone mode, by default. To provision Minio server in [distributed mode](https://docs.minio.io/docs/distributed-minio-quickstart-guide), set the `mode` field to `distributed`,
```bash
$ helm install --set mode=distributed stable/minio
```
This provisions Minio server in distributed mode with 4 nodes. To change the number of nodes in your distributed Minio server, set the `numberOfNodes` field,
Create the Deployment
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-standalone-deployment.yaml?raw=true
deployment "minio-deployment" created
```
### Create Minio Service
Now that you have a Minio deployment running, you may either want to access it internally (within the cluster) or expose it as a Service onto an external (outside of your cluster, maybe public internet) IP address, depending on your use case. You can achieve this using Services. There are three major service types: the default type is ClusterIP, which exposes a service to connections from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
In this example, we expose the Minio Deployment by creating a LoadBalancer service. This is the service description.
```sh
apiVersion: v1
kind: Service
metadata:
# This name uniquely identifies the service
name: minio-service
spec:
type: LoadBalancer
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
# Looks for labels `app:minio` in the namespace and applies the spec
app: minio
```
```bash
$ helm install --set mode=distributed,numberOfNodes=8 stable/minio
```
This provisions Minio server in distributed mode with 8 nodes. Note that the `numberOfNodes` value should be an integer between 4 and 16 (inclusive).
Create the Minio service
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-standalone-service.yaml?raw=true
service "minio-service" created
```
The `LoadBalancer` service takes couple of minutes to launch. To check if the service was created successfully, run the command
```sh
kubectl get svc minio-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio-service 10.55.248.23 104.199.249.165 9000:31852/TCP 1m
```
#### StatefulSet [limitations](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/#limitations) applicable to distributed Minio
* StatefulSets need persistent storage, so the `persistence.enabled` flag is ignored when `mode` is set to `distributed`.
* When uninstalling a distributed Minio release, you'll need to manually delete volumes associated with the StatefulSet.
### Shared Minio
To provision Minio servers in [shared mode](https://github.com/minio/minio/blob/master/docs/shared-backend/README.md), set the `mode` field to `shared`,
```bash
$ helm install --set mode=shared stable/minio
```
This provisions 4 Minio server nodes backed by single storage. To change the number of nodes in your shared Minio deployment, set the `numberOfNodes` field,
```bash
$ helm install --set mode=shared,numberOfNodes=8 stable/minio
```
This provisions Minio server in shared mode with 8 nodes.
### Persistence
This chart provisions a PersistentVolumeClaim and mounts corresponding persistent volume to default location `/export`. You'll need physical storage available in the Kubernetes cluster for this to work. If you'd rather use `emptyDir`, disable PersistentVolumeClaim by:
```bash
$ helm install --set persistence.enabled=false stable/minio
```
### Update existing Minio Deployment
You can update an existing Minio deployment to use a newer Minio release. To do this, use the `kubectl set image` command:
```sh
kubectl set image deployment/minio-deployment minio=<replace-with-new-minio-image>
```
Kubernetes will restart the deployment to update the image. You will get a message as shown below, on successful update:
```
deployment "minio-deployment" image updated
```
### Standalone Resource cleanup
You can cleanup the cluster using
```sh
kubectl delete deployment minio \
&& kubectl delete pvc minio-pv-claim \
&& kubectl delete svc minio-service
```
## Minio Distributed Server Deployment
The following document describes the process to deploy [distributed Minio](https://docs.minio.io/docs/distributed-minio-quickstart-guide) server on Kubernetes. This example uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.
This example uses following core components of Kubernetes:
- [_Pods_](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
- [_Services_](https://kubernetes.io/docs/concepts/services-networking/service/)
- [_Statefulsets_](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/)
### Distributed Quickstart
Run the below commands to get started quickly
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-distributed-headless-service.yaml?raw=true
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-distributed-statefulset.yaml?raw=true
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-distributed-service.yaml?raw=true
```
### Create Minio Headless Service
Headless Service controls the domain within which StatefulSets are created. The domain managed by this Service takes the form: `$(service name).$(namespace).svc.cluster.local` (where “cluster.local” is the cluster domain), and the pods in this domain take the form: `$(pod-name-{i}).$(service name).$(namespace).svc.cluster.local`. This is required to get a DNS resolvable URL for each of the pods created within the Statefulset.
This is the Headless service description.
```sh
apiVersion: v1
kind: Service
metadata:
name: minio
labels:
app: minio
spec:
clusterIP: None
ports:
- port: 9000
name: minio
selector:
app: minio
```
Create the Headless Service
```sh
$ kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-distributed-headless-service.yaml?raw=true
service "minio" created
```
> *"An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever."*
## 3. Updating Minio Release using Helm
You can update an existing Minio Helm Release to use a newer Minio Docker image. To do this, use the `helm upgrade` command:
### Create Minio Statefulset
A StatefulSet provides a deterministic name and a unique identity to each pod, making it easy to deploy stateful distributed applications. To launch distributed Minio you need to pass drive locations as parameters to the minio server command. Then, you’ll need to run the same command on all the participating pods. StatefulSets offer a perfect way to handle this requirement.
This is the Statefulset description.
```sh
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
# This name uniquely identifies the StatefulSet
name: minio
spec:
serviceName: minio
replicas: 4
selector:
matchLabels:
app: minio # has to match .spec.template.metadata.labels
template:
metadata:
labels:
app: minio # has to match .spec.selector.matchLabels
spec:
containers:
- name: minio
env:
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
image: minio/minio:RELEASE.2018-10-18T00-28-58Z
args:
- server
- http://minio-0.minio.default.svc.cluster.local/data
- http://minio-1.minio.default.svc.cluster.local/data
- http://minio-2.minio.default.svc.cluster.local/data
- http://minio-3.minio.default.svc.cluster.local/data
ports:
- containerPort: 9000
# These volume mounts are persistent. Each pod in the PetSet
# gets a volume mounted based on this field.
volumeMounts:
- name: data
mountPath: /data
# Liveness probe detects situations where Minio server instance
# is not working properly and needs restart. Kubernetes automatically
# restarts the pods if liveness checks fail.
livenessProbe:
httpGet:
path: /minio/health/live
port: 9000
initialDelaySeconds: 120
periodSeconds: 20
# These are converted to volume claims by the controller
# and mounted at the paths mentioned above.
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
```bash
$ helm upgrade --set imageTag=<replace-with-minio-docker-image-tag> <helm-release-name> stable/minio
```
Create the Statefulset
```sh
$ kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-distributed-statefulset.yaml?raw=true
statefulset "minio" created
```
### Create Minio Service
Now that you have a Minio statefulset running, you may either want to access it internally (within the cluster) or expose it as a Service onto an external (outside of your cluster, maybe public internet) IP address, depending on your use case. You can achieve this using Services. There are three major service types: the default type is ClusterIP, which exposes a service to connections from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
In this example, we expose the Minio Deployment by creating a LoadBalancer service. This is the service description.
```sh
apiVersion: v1
kind: Service
metadata:
name: minio-service
spec:
type: LoadBalancer
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
```
Create the Minio service
```sh
$ kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-distributed-service.yaml?raw=true
service "minio-service" created
```
The `LoadBalancer` service takes couple of minutes to launch. To check if the service was created successfully, run the command
```sh
$ kubectl get svc minio-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio-service 10.55.248.23 104.199.249.165 9000:31852/TCP 1m
```
### Update existing Minio StatefulSet
You can update an existing Minio StatefulSet to use a newer Minio release. To do this, use the `kubectl patch statefulset` command:
```sh
kubectl patch statefulset minio --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"<replace-with-new-minio-image>"}]'
```
On successful update, you should see the output below
```
statefulset "minio" patched
```
Then delete all the pods in your StatefulSet one by one as shown below. Kubernetes will restart those pods for you, using the new image.
```sh
kubectl delete pod minio-0
```
### Distributed Resource cleanup
You can clean up the cluster using
```sh
kubectl delete statefulset minio \
&& kubectl delete svc minio \
&& kubectl delete svc minio-service
```
### Deploying on cluster nodes with local host path
If your cluster does not have a storage solution or PV abstraction, you must explicitly define what nodes you wish to run Minio on, and define a homogeneous path to a local fast block device available on every host.
This must be changed in the example daemonset: [minio-distributed-daemonset.yaml](minio-distributed-daemonset.yaml)
Specifically, the `hostPath`:
```yaml
hostPath:
  path: /data/minio/
```
And the list of hosts:
```yaml
- http://hostname1:9000/data/minio
- http://hostname2:9000/data/minio
- http://hostname3:9000/data/minio
- http://hostname4:9000/data/minio
```
Once deployed, tag the defined host with the `minio-server=true` label:
```bash
kubectl label node hostname1 minio-server=true
kubectl label node hostname2 minio-server=true
kubectl label node hostname3 minio-server=true
kubectl label node hostname4 minio-server=true
```
On successful update, you should see the output below
```bash
Release "your-helm-release" has been upgraded. Happy Helming!
```
## 4. Uninstalling the Chart
Assuming your release is named as `my-release`, delete it using the command:
```bash
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
### Notes
* An instance of a chart running in a Kubernetes cluster is called a release. Helm automatically assigns a unique release name after installing the chart. You can also set your preferred name by:
```bash
$ helm install --name my-release stable/minio
```
* To override the default keys, pass the access and secret keys as arguments to helm install.
```bash
$ helm install --set accessKey=myaccesskey,secretKey=mysecretkey \
  stable/minio
```
## Minio GCS Gateway Deployment
The following section describes the process to deploy [Minio](https://minio.io/) GCS Gateway on Kubernetes. The deployment uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.
This section uses following core components of Kubernetes:
- [_Secrets_](https://kubernetes.io/docs/concepts/configuration/secret/)
- [_Services_](https://kubernetes.io/docs/user-guide/services/)
- [_Deployments_](https://kubernetes.io/docs/user-guide/deployments/)
### GCS Gateway Quickstart
Create the Google Cloud Service credentials file using the steps mentioned [here](https://github.com/minio/minio/blob/master/docs/gateway/gcs.md#create-service-account-key-for-gcs-and-get-the-credentials-file).
Use the path of the file generated above to create a Kubernetes `secret`.
```sh
kubectl create secret generic gcs-credentials --from-file=/path/to/gcloud/credentials/application_default_credentials.json
```
Then download the `minio-gcs-gateway-deployment.yaml` file
```sh
wget https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-gcs-gateway-deployment.yaml?raw=true
```
Update the section `gcp_project_id` with your GCS project ID. Then run
```sh
kubectl create -f minio-gcs-gateway-deployment.yaml
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-gcs-gateway-service.yaml?raw=true
```
### Create GCS Credentials Secret
A `secret` is intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.
Create the Google Cloud Service credentials file using the steps mentioned [here](https://github.com/minio/minio/blob/master/docs/gateway/gcs.md#create-service-account-key-for-gcs-and-get-the-credentials-file).
Use the path of the file generated above to create a Kubernetes `secret`.
```sh
kubectl create secret generic gcs-credentials --from-file=/path/to/gcloud/credentials/application_default_credentials.json
```
### Create Minio GCS Gateway Deployment
A deployment encapsulates replica sets and pods, so if a pod goes down, the replication controller makes sure another pod comes up automatically. This way you won't need to bother about pod failures and will have a stable Minio service available.
Minio Gateway uses GCS as its storage backend and needs a GCP project ID to identify your credentials. Update the `gcp_project_id` argument with your GCS project ID. This is the deployment description.
```sh
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# This name uniquely identifies the Deployment
name: minio-deployment
spec:
strategy:
type: Recreate
template:
metadata:
labels:
# Label is used as selector in the service.
app: minio
spec:
# Refer to the secret created earlier
volumes:
- name: gcs-credentials
secret:
# Name of the Secret created earlier
secretName: gcs-credentials
containers:
- name: minio
# Pulls the default Minio image from Docker Hub
image: minio/minio:RELEASE.2018-10-18T00-28-58Z
args:
- gateway
- gcs
- gcp_project_id
env:
# Minio access key and secret key
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
# Google Cloud Service uses this variable
- name: GOOGLE_APPLICATION_CREDENTIALS
value: "/etc/credentials/application_default_credentials.json"
ports:
- containerPort: 9000
# Mount the volume into the pod
volumeMounts:
- name: gcs-credentials
mountPath: "/etc/credentials"
readOnly: true
```
Create the Deployment
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-gcs-gateway-deployment.yaml?raw=true
deployment "minio-deployment" created
```
### Create Minio LoadBalancer Service
Now that you have a Minio deployment running, you may either want to access it internally (within the cluster) or expose it as a Service onto an external (outside of your cluster, maybe public internet) IP address, depending on your use case. You can achieve this using Services. There are three major service types: the default type is ClusterIP, which exposes a service to connections from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
In this example, we expose the Minio Deployment by creating a LoadBalancer service. This is the service description.
```sh
apiVersion: v1
kind: Service
metadata:
name: minio-service
spec:
type: LoadBalancer
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
```
Create the Minio service
```sh
kubectl create -f https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/minio-gcs-gateway-service.yaml?raw=true
service "minio-service" created
```
The `LoadBalancer` service takes couple of minutes to launch. To check if the service was created successfully, run the command
```sh
kubectl get svc minio-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio-service 10.55.248.23 104.199.249.165 9000:31852/TCP 1m
```
### Update Existing Minio GCS Deployment
You can update an existing Minio deployment to use a newer Minio release. To do this, use the `kubectl set image` command:
```sh
kubectl set image deployment/minio-deployment minio=<replace-with-new-minio-image>
```
Kubernetes will restart the deployment to update the image. You will get a message as shown below, on successful update:
```
deployment "minio-deployment" image updated
```
### GCS Gateway Resource Cleanup
You can cleanup the cluster using
```sh
kubectl delete deployment minio-deployment \
&& kubectl delete secret gcs-credentials
```
### Explore Further

docs/orchestration/minikube/README.md
@@ -1,47 +0,0 @@
# Deploy distributed Minio locally with minikube [![Slack](https://slack.minio.io/slack?type=svg)](https://slack.minio.io) [![Go Report Card](https://goreportcard.com/badge/minio/minio)](https://goreportcard.com/report/minio/minio) [![Docker Pulls](https://img.shields.io/docker/pulls/minio/minio.svg?maxAge=604800)](https://hub.docker.com/r/minio/minio/) [![codecov](https://codecov.io/gh/minio/minio/branch/master/graph/badge.svg)](https://codecov.io/gh/minio/minio)
Minikube runs a single-node Kubernetes cluster inside a VM on your computer. This makes it easy to deploy a distributed Minio server on Kubernetes running locally on your computer.
## 1. Prerequisites
[Minikube](https://github.com/kubernetes/minikube/blob/master/README.md#installation) and [`kubectl`](https://kubernetes.io/docs/user-guide/prereqs/)
installed on your system.
## 2. Steps
* Download `minio_distributed.sh` and `statefulset.yaml`
```sh
wget https://raw.githubusercontent.com/minio/minio/master/docs/orchestration/minikube/minio_distributed.sh
wget https://raw.githubusercontent.com/minio/minio/master/docs/orchestration/minikube/statefulset.yaml
```
* Execute the `minio_distributed.sh` script from the command line.
```sh
./minio_distributed.sh
```
After the script executes successfully, you should see output like this
```sh
service "minio-public" created
service "minio" created
statefulset "minio" created
```
This means Minio is deployed on your local Minikube installation.
Note that the service `minio-public` is a [clusterIP](https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types) service. It exposes the service on a cluster-internal IP. To connect to your Minio instances via the `kubectl port-forward` command, execute
```
kubectl port-forward minio-0 9000:9000
```
The Minio server can now be accessed at `http://localhost:9000`, with the access key and secret key specified in the `statefulset.yaml` file.
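With the port-forward in place, the usual Minio client commands work against localhost, for example (keys as set in `statefulset.yaml`, bucket name illustrative):
```sh
mc config host add myminikube http://localhost:9000 minio minio123
mc mb myminikube/local-bucket
mc ls myminikube
```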
## 3. Notes
Minikube currently does not support dynamic provisioning, so we manually create PersistentVolumes (PV) and PersistentVolumeClaims (PVC). Once the PVs and PVCs are created, we apply the `statefulset.yaml` configuration file to create the distributed Minio setup.
This setup runs on a single laptop/computer, so only one disk is used as the backend for all the Minio instance PVs. Minio sees these PVs as separate disks and therefore reports the available storage incorrectly.

docs/orchestration/minikube/minio_distributed.sh
@@ -1,58 +0,0 @@
#!/usr/bin/env bash
# Minio Cloud Storage, (C) 2014, 2015, 2016, 2017 Minio, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -exuo pipefail
# Clean up anything from a prior run:
kubectl delete statefulsets,persistentvolumes,persistentvolumeclaims,services,poddisruptionbudget -l app=minio
# Make persistent volumes and (correctly named) claims. We must create the
# claims here manually even though that sounds counter-intuitive. For details
# see https://github.com/kubernetes/contrib/pull/1295#issuecomment-230180894.
for i in $(seq 0 3); do
cat <<EOF | kubectl create -f -
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv${i}
  labels:
    type: local
    app: minio
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/minio/data-store/${i}"
EOF
cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-minio-${i}
  labels:
    app: minio
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
done;
kubectl create -f statefulset.yaml

docs/orchestration/minikube/statefulset.yaml
@@ -1,78 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the object store. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: minio-public
  labels:
    app: minio
spec:
  ports:
  - port: 9000
    targetPort: 9000
  selector:
    app: minio
---
apiVersion: v1
kind: Service
metadata:
  # This service only exists to create DNS entries for each pod in the stateful
  # set such that they can resolve each other's IP addresses. It does not
  # create a load-balanced ClusterIP and should not be used directly by clients
  # in most circumstances.
  name: minio
  labels:
    app: minio
spec:
  ports:
  - port: 9000
    targetPort: 9000
  clusterIP: None
  selector:
    app: minio
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: "minio"
  replicas: 4
  template:
    metadata:
      labels:
        app: minio
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-minio
      containers:
      - name: minio
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        image: minio/minio:RELEASE.2018-10-18T00-28-58Z
        imagePullPolicy: IfNotPresent
        args: ["server", "http://minio-0.minio.default.svc.cluster.local/data", "http://minio-1.minio.default.svc.cluster.local/data", "http://minio-2.minio.default.svc.cluster.local/data", "http://minio-3.minio.default.svc.cluster.local/data"]
        ports:
        - containerPort: 9000
        # These volume mounts are persistent. Each pod in the StatefulSet
        # gets a volume mounted based on this field.
        volumeMounts:
        - name: data
          mountPath: /data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi