MinIO HDFS gateway adds Amazon S3 API support to Hadoop HDFS filesystem. Applications can use both the S3 and file APIs concurrently without requiring any data migration. Since the gateway is stateless and shared-nothing, you may elastically provision as many MinIO instances as needed to distribute the load.
Using Docker is experimental; most Hadoop environments are not dockerized and may require additional steps to get this working properly. In that situation, you are better off using the binary.
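For reference, here is a minimal sketch of launching the gateway from the binary; the namenode address `hdfs://namenode:8200` and the credentials are placeholders and should be replaced with values for your cluster:
```
export MINIO_ACCESS_KEY=access_key
export MINIO_SECRET_KEY=secret_key
minio gateway hdfs hdfs://namenode:8200
```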
*MinIO gateway* comes with an embedded web-based object browser. Point your web browser to http://127.0.0.1:9000 to ensure that your server has started successfully.
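You can also check liveness from the command line; this sketch assumes the standard MinIO health endpoint is available on your build:
```
# returns HTTP 200 when the server is up
curl -i http://127.0.0.1:9000/minio/health/live
```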
`mc` provides a modern alternative to UNIX commands such as ls, cat, cp, mirror, diff, etc. It supports filesystems and Amazon S3-compatible cloud storage services.
### Configure `mc`
```
mc config host add myhdfs http://gateway-ip:9000 access_key secret_key
```
### List buckets on HDFS
```
mc ls myhdfs
[2017-02-22 01:50:43 PST] 0B user/
[2017-02-26 21:43:51 PST] 0B datasets/
[2017-02-26 22:10:11 PST] 0B assets/
```
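The other `mc` subcommands work the same way against the gateway. A short sketch, assuming the `datasets` bucket from the listing above and a hypothetical local file:
```
# upload a local file through the S3 API
mc cp ./weather-2017.csv myhdfs/datasets/

# read it back
mc cat myhdfs/datasets/weather-2017.csv
```
Since the gateway stores objects in HDFS itself, the uploaded object then remains accessible to Hadoop applications through the file API, per the concurrent-access note above.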
### Known limitations
The gateway inherits the following limitations of the HDFS storage layer:
- No bucket policy support (HDFS has no such concept)
- No bucket notification support (HDFS has no support for fsnotify)
- No server side encryption support (Intentionally not implemented)
- No server side compression support (Intentionally not implemented)
## Roadmap
- Additional metadata support for PutObject operations
- Additional metadata support for Multipart operations
- Background append to provide concurrency support for multipart operations