From 1f69a75efa5513029a66cea868621162be5bc03b Mon Sep 17 00:00:00 2001
From: Dee Koder
Date: Thu, 29 Jun 2017 11:41:21 -0700
Subject: [PATCH] Updated docs with latest images. (#4611)

---
 docs/distributed/README.md   | 4 ++--
 docs/erasure/README.md       | 2 +-
 docs/multi-tenancy/README.md | 6 +++---
 docs/orchestration/README.md | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/distributed/README.md b/docs/distributed/README.md
index 968693cbc..410bae976 100644
--- a/docs/distributed/README.md
+++ b/docs/distributed/README.md
@@ -69,7 +69,7 @@ minio.exe server http://192.168.1.11/C:/data http://192.168.1.12/C:/data ^
 http://192.168.1.17/C:/data http://192.168.1.18/C:/data
 ```

-![Distributed Minio, 8 nodes with 1 disk each](https://raw.githubusercontent.com/minio/minio/master/docs/screenshots/Architecture-diagram_distributed_8.png)
+![Distributed Minio, 8 nodes with 1 disk each](https://github.com/minio/minio/blob/master/docs/screenshots/Architecture-diagram_distributed_8.jpg?raw=true)

 Example 2: Start distributed Minio instance with 4 drives each on 4 nodes, by running this command on all the 4 nodes.
@@ -103,7 +103,7 @@ minio.exe server http://192.168.1.11/C:/data1 http://192.168.1.11/C:/data2 ^
 http://192.168.1.14/C:/data3 http://192.168.1.14/C:/data4
 ```

-![Distributed Minio, 4 nodes with 4 disks each](https://raw.githubusercontent.com/minio/minio/master/docs/screenshots/Architecture-diagram_distributed_16.png)
+![Distributed Minio, 4 nodes with 4 disks each](https://github.com/minio/minio/blob/master/docs/screenshots/Architecture-diagram_distributed_16.jpg?raw=true)

 ## 3. Test your setup
diff --git a/docs/erasure/README.md b/docs/erasure/README.md
index cc4deef19..5c4b5f151 100644
--- a/docs/erasure/README.md
+++ b/docs/erasure/README.md
@@ -10,7 +10,7 @@ Erasure code is a mathematical algorithm to reconstruct missing or corrupted dat
 Erasure code protects data from multiple drive failures, unlike RAID or replication.
 For example, RAID6 can protect against two drive failures, whereas with Minio erasure code you can lose up to half of the drives and the data remains safe. Further, Minio's erasure code works at the object level and can heal one object at a time. With RAID, healing can only be performed at the volume level, which translates into long downtime. Because Minio encodes each object individually with a high parity count, storage servers, once deployed, should not require drive replacement or healing for the lifetime of the server. Minio's erasure-coded backend is designed for operational efficiency and takes full advantage of hardware acceleration whenever available.

-![Erasure](https://raw.githubusercontent.com/minio/minio/master/docs/screenshots/erasure-code.png?raw=true)
+![Erasure](https://github.com/minio/minio/blob/master/docs/screenshots/erasure-code.jpg?raw=true)

 ## What is Bit Rot protection?
diff --git a/docs/multi-tenancy/README.md b/docs/multi-tenancy/README.md
index 2a4fdeaff..920afd17a 100644
--- a/docs/multi-tenancy/README.md
+++ b/docs/multi-tenancy/README.md
@@ -12,7 +12,7 @@ minio --config-dir ~/tenant2 server --address :9002 /disk1/data/tenant2
 minio --config-dir ~/tenant3 server --address :9003 /disk1/data/tenant3
 ```

-![Example-1](https://raw.githubusercontent.com/minio/minio/master/docs/screenshots/Example-1.png)
+![Example-1](https://github.com/minio/minio/blob/master/docs/screenshots/Example-1.jpg?raw=true)

 #### Example 2: Single host, multiple drives (erasure code)
@@ -22,7 +22,7 @@ minio --config-dir ~/tenant1 server --address :9001 /disk1/data/tenant1 /disk2/d
 minio --config-dir ~/tenant2 server --address :9002 /disk1/data/tenant2 /disk2/data/tenant2 /disk3/data/tenant2 /disk4/data/tenant2
 minio --config-dir ~/tenant3 server --address :9003 /disk1/data/tenant3 /disk2/data/tenant3 /disk3/data/tenant3 /disk4/data/tenant3
 ```

-![Example-2](https://raw.githubusercontent.com/minio/minio/master/docs/screenshots/Example-2.png)
+![Example-2](https://github.com/minio/minio/blob/master/docs/screenshots/Example-2.jpg?raw=true)

 ## Distributed Deployment
 To host multiple tenants in a distributed environment, run several distributed Minio instances concurrently.
@@ -45,7 +45,7 @@ export MINIO_SECRET_KEY=
 minio --config-dir ~/tenant3 server --address :9003 http://192.168.10.11/disk1/data/tenant3 http://192.168.10.12/disk1/data/tenant3 http://192.168.10.13/disk1/data/tenant3 http://192.168.10.14/disk1/data/tenant3
 ```

-![Example-3](https://raw.githubusercontent.com/minio/minio/master/docs/screenshots/Example-3.png)
+![Example-3](https://github.com/minio/minio/blob/master/docs/screenshots/Example-3.jpg?raw=true)

 ## Cloud Scale Deployment
 For large-scale multi-tenant Minio deployments, we recommend using one of the popular container orchestration platforms, e.g. Kubernetes, DC/OS or Docker Swarm. Refer to [this document](https://docs.minio.io/docs/minio-deployment-quickstart-guide) to get started with Minio on orchestration platforms.
diff --git a/docs/orchestration/README.md b/docs/orchestration/README.md
index e3aa535af..5f0ac24e6 100644
--- a/docs/orchestration/README.md
+++ b/docs/orchestration/README.md
@@ -20,4 +20,4 @@ Minio is built ground up on the cloud-native premise. With features like erasure
 In a typical modern infrastructure deployment, applications, databases, key-stores, etc. already live in containers and are managed by orchestration platforms. Minio brings robust, scalable, AWS S3 compatible object storage to the mix.

-![Cloud-native](https://raw.githubusercontent.com/NitishT/minio/master/docs/screenshots/Minio_Cloud_Native_Arch.png?raw=true)
+![Cloud-native](https://github.com/minio/minio/blob/master/docs/screenshots/Minio_Cloud_Native_Arch.jpg?raw=true)
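The common case in this patch swaps `raw.githubusercontent.com/minio/minio/master/...` links for `github.com/minio/minio/blob/master/...?raw=true` links and `.png` screenshots for `.jpg`. A minimal shell sketch of that rewrite (the `rewrite_image_url` helper is hypothetical, not part of the PR, and covers only the simple case — URLs that end in `.png` with no existing query string and already point at `minio/minio`):

```sh
# Hypothetical helper illustrating the URL rewrite applied across these docs:
#   raw.githubusercontent.com/minio/minio/master -> github.com/minio/minio/blob/master
#   trailing .png                                -> .jpg?raw=true
rewrite_image_url() {
  printf '%s\n' "$1" \
    | sed -e 's|raw\.githubusercontent\.com/minio/minio/master|github.com/minio/minio/blob/master|' \
          -e 's|\.png$|.jpg?raw=true|'
}

rewrite_image_url "https://raw.githubusercontent.com/minio/minio/master/docs/screenshots/Example-1.png"
# prints https://github.com/minio/minio/blob/master/docs/screenshots/Example-1.jpg?raw=true
```

The other variants in the diff (a link that already carried `?raw=true`, and the orchestration image that also moved from a fork to `minio/minio`) were edited by hand and are not handled by this sketch.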