The globalMaxObjectSize limit comes from the S3 spec, presumably
due to certain limitations of the S3 infrastructure. MinIO has
no such limitation and can stream a larger file instead.
So we are going to bump this limit to 16GiB.
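A minimal sketch of the change, assuming the limit is a package-level
constant (the name comes from the message above; the exact location
and prior value are assumptions):

```go
// globalMaxObjectSize - maximum size accepted for a single uploaded
// object, raised to 16GiB since MinIO can stream larger files.
const globalMaxObjectSize = 16 * 1024 * 1024 * 1024 // 16GiB
```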
Fixes #3825
This function was returning BucketNotFound for all errors,
which hides the fact that disks could be corrupted.
This commit fixes the behavior by returning the underlying
errors, translated to their Object API equivalents.
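A hedged sketch of the shape of the fix; `getBucketInfo` and
`toObjectErr` follow MinIO naming conventions, but these exact
signatures are assumptions:

```go
// GetBucketInfo no longer collapses every lower-layer failure into
// BucketNotFound, which masked disk corruption.
func (xl xlObjects) GetBucketInfo(bucket string) (BucketInfo, error) {
	bucketInfo, err := xl.getBucketInfo(bucket)
	if err != nil {
		// Translate the underlying error to its Object API
		// equivalent instead of always returning BucketNotFound.
		return BucketInfo{}, toObjectErr(err, bucket)
	}
	return bucketInfo, nil
}
```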
Add missing protection against deleting multiple objects
in parallel. Currently this API deletes objects without
proper locking, which can cause a significant number of races.
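A minimal self-contained sketch of the idea, not MinIO's actual
namespace-lock code: take a per-object lock around each delete so
concurrent multi-delete requests cannot race; the delete callback
stands in for the real object-layer call:

```go
package main

import "sync"

// locks maps "bucket/object" to its mutex; a toy stand-in for MinIO's
// namespace locking.
var locks sync.Map

func lockFor(bucket, object string) *sync.Mutex {
	l, _ := locks.LoadOrStore(bucket+"/"+object, &sync.Mutex{})
	return l.(*sync.Mutex)
}

// deleteObjects serializes deletes per object instead of deleting
// without any locking.
func deleteObjects(bucket string, objects []string,
	deleteObject func(bucket, object string) error) []error {
	errs := make([]error, len(objects))
	for i, object := range objects {
		l := lockFor(bucket, object)
		l.Lock()
		errs[i] = deleteObject(bucket, object)
		l.Unlock()
	}
	return errs
}
```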
Ignore a disk that failed to perform an action, to avoid
inconsistencies when the disk comes back in the middle of a
sequence of write changes.
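A hedged sketch of the approach: once a disk errors during a write
sequence, drop it from the working set for the remainder of the
operation, so a disk that reconnects mid-write isn't written to with
partial state. `StorageAPI` and `AppendFile` mirror MinIO names, but
this exact loop is an assumption:

```go
// appendToDisks writes buf to every live disk; a disk that fails is
// set to nil so later steps of the same operation skip it, even if
// the disk comes back online in the meantime.
func appendToDisks(disks []StorageAPI, volume, path string, buf []byte) {
	for i, disk := range disks {
		if disk == nil {
			continue // already ignored earlier in this operation
		}
		if err := disk.AppendFile(volume, path, buf); err != nil {
			disks[i] = nil // ignore for the rest of the write
		}
	}
}
```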
This check is removed to avoid a redundant requirement
that causes problems on production setups.
Here is the list of time checks as they happen:
- Fresh connect (during server startup) - CORRECT
- A reconnect after network disconnect - CORRECT
- For each RPC call - INCORRECT.
Verifying time on each RPC aggravates the situation where
an RPC call is rejected under load on a production setup:
a 3-second window might not be enough for the call to be
initiated and received by the server.
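A minimal sketch of the skew check involved; after this change it
runs only on connect and reconnect, never per RPC. The names are
illustrative assumptions; only the 3-second window comes from the
message:

```go
import (
	"fmt"
	"time"
)

// maxAllowedSkew mirrors the 3-second window mentioned above.
const maxAllowedSkew = 3 * time.Second

// isTimeValid reports whether client and server clocks are within the
// allowed skew; called on fresh connect and on reconnect only.
func isTimeValid(clientTime, serverTime time.Time) error {
	if d := serverTime.Sub(clientTime); d < -maxAllowedSkew || d > maxAllowedSkew {
		return fmt.Errorf("time skew %v exceeds %v", d, maxAllowedSkew)
	}
	return nil
}
```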
Currently we document the address as IP:PORT, which doesn't
convey whether HOSTNAME:PORT is also allowed. This change
clarifies the docs by calling it ADDRESS:PORT, which
encompasses both a HOSTNAME and an IP.
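To illustrate why ADDRESS:PORT is the accurate wording, Go's standard
address parsing accepts both forms, as in this self-contained sketch:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Both an IP and a hostname are valid to the left of the colon,
	// which is what ADDRESS:PORT conveys.
	for _, addr := range []string{"192.168.1.10:9000", "myminio.example.com:9000"} {
		host, port, err := net.SplitHostPort(addr)
		if err != nil {
			fmt.Println("invalid:", addr, err)
			continue
		}
		fmt.Printf("host=%s port=%s\n", host, port)
	}
}
```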
Fixes #3799
This PR is a readability cleanup:
- rename getOrderedDisks to shuffleDisks
- rename getOrderedPartsMetadata to shufflePartsMetadata
The distribution is now the second argument instead of the
primary input argument, for brevity.
Also replace type-cast `int64(0)` initializations with direct
declarations such as `var variable int64` everywhere.
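A hedged sketch of both cleanups; only the new names and the argument
order come from the message, the shuffle body is an illustrative guess:

```go
// shuffleDisks reorders disks according to the erasure distribution;
// distribution is now the second argument.
func shuffleDisks(disks []StorageAPI, distribution []int) []StorageAPI {
	shuffled := make([]StorageAPI, len(disks))
	for i, blockIndex := range distribution {
		shuffled[blockIndex-1] = disks[i]
	}
	return shuffled
}

// Preferred zero-value declaration over a cast:
var totalSize int64 // instead of totalSize := int64(0)
```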
In completeMultipart, an existing object is renamed to a
temporary location before being overwritten. Make sure it is
deleted even if subsequent calls fail. Additionally, move the
check for whether the parent dir is a file earlier, so the
entire operation fails up front.
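A hedged sketch of the reordered flow; every helper here
(`isParentDirObject`, `renameToTemp`, `deleteTempObject`,
`commitNewObject`) is a hypothetical stand-in for the real XL code:

```go
func completeMultipart(bucket, object string) error {
	// Moved earlier: fail the whole operation up front if a parent
	// prefix of the object is itself a file.
	if isParentDirObject(bucket, object) {
		return errParentIsObject
	}
	tempObj := newUUID()
	if renameToTemp(bucket, object, tempObj) {
		// Guarantee the temporary copy is deleted even if
		// subsequent calls fail.
		defer deleteTempObject(tempObj)
	}
	return commitNewObject(bucket, object)
}
```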
Ref #3784
Content-Encoding set to "aws-chunked" is an S3-specific API
value that has no meaning for the stored object, so it is not
saved. This is how S3 behaves as well for an object uploaded
with a streaming signature.
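A self-contained sketch of dropping the transport-only token before
metadata is saved; `trimAwsChunked` is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strings"
)

// trimAwsChunked removes the transport-only "aws-chunked" token from a
// Content-Encoding value, keeping any real encodings the client set.
func trimAwsChunked(encoding string) string {
	var kept []string
	for _, enc := range strings.Split(encoding, ",") {
		if e := strings.TrimSpace(enc); e != "" && e != "aws-chunked" {
			kept = append(kept, e)
		}
	}
	return strings.Join(kept, ",")
}

func main() {
	fmt.Println(trimAwsChunked("aws-chunked"))      // ""
	fmt.Println(trimAwsChunked("aws-chunked,gzip")) // "gzip"
}
```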
Make sure to skip reserved bucket names in `ListBuckets()`;
the current code didn't skip them properly. Also generalize
this behavior for both XL and FS.
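A hedged sketch of the shared filtering, assuming a predicate like
`isReservedOrInvalidBucket` (the real helper may differ) that both
the XL and FS listings call:

```go
// filterReservedBuckets drops reserved names (e.g. MinIO's internal
// ".minio.sys" bucket) from a raw listing; shared by XL and FS.
func filterReservedBuckets(raw []BucketInfo) []BucketInfo {
	buckets := raw[:0]
	for _, b := range raw {
		if isReservedOrInvalidBucket(b.Name) {
			continue // skip internal/reserved buckets
		}
		buckets = append(buckets, b)
	}
	return buckets
}
```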