You have now successfully created a ZFS pool. For further reading, please refer to the [ZFS Quickstart Guide](https://www.freebsd.org/doc/handbook/zfs-quickstart.html).
However, this pool is not taking advantage of any ZFS features, so let's create a ZFS filesystem on this pool and set compression on it.
```sh
zfs create minio-example/compressed-objects
zfs set compression=lz4 minio-example/compressed-objects
```
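To verify that the compression setting took effect, you can query the dataset property (standard `zfs get` usage, run against the dataset created above):

```sh
# Show the effective compression setting for the dataset
zfs get compression minio-example/compressed-objects
```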
To keep monitoring your pool, use:
```sh
zpool status
  pool: minio-example
 state: ONLINE
  scan: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        minio-example    ONLINE       0     0     0
          md0            ONLINE       0     0     0

errors: No known data errors
```
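Beyond `zpool status`, you can periodically verify on-disk data integrity with a scrub; `zpool status` then reports the scrub's progress in the `scan:` field. A minimal example against the pool created above:

```sh
# Walk all data in the pool and verify checksums
zpool scrub minio-example
```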
#### Step 2.
Now start the Minio server on ``/minio-example/compressed-objects``, changing the permissions so that this directory is accessible by a normal user.
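A minimal sketch of these two steps, assuming an unprivileged user named `minio-user` (the user name is an assumption for illustration) and a `minio` binary already on the PATH:

```sh
# Assumption: minio-user is an existing unprivileged user
chown -R minio-user:minio-user /minio-example/compressed-objects
# Run the server as that user, serving the compressed dataset
su -m minio-user -c 'minio server /minio-example/compressed-objects'
```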
GetObject() holds a read lock on `fs.json`.

```go
_, err = io.CopyBuffer(writer, reader, buf)
... after successful copy operation unlocks the read lock ...
```
When a concurrent PutObject is requested on the same object, PutObject() attempts to take a write lock on `fs.json`.
On minio3
Once the lock is acquired, minio2 validates whether the file really exists, to avoid holding a lock on an fd whose file has already been deleted. But this opens a race with a third server that attempts to write the same file before minio2 can complete that validation: it is potentially possible that `fs.json` is created in the meantime, so the lock acquired by minio2 might be invalid and can lead to an inconsistency.
This is a known problem and cannot be solved by POSIX fcntl locks; it is considered one of the limits of a shared filesystem.