This commit takes the existing remove bucket functionality written by
brendanashworth, integrates it into the current UI with a dropdown for
each bucket, and fixes small issues that were present, such as the
dropdown not disappearing after the user clicks 'Delete' for certain
buckets. This feature only deletes a bucket that is empty (one that
has no objects).
Fixes#4166
- Add storage class metadata validation for the request header
  (see the sketch after this list)
- Change storage class header values to be consistent with AWS S3
- Refactor the internal method to take only the required argument
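A minimal sketch of the header validation, using the AWS S3 storage
class names STANDARD and REDUCED_REDUNDANCY; the function and error
names are assumptions for illustration, not the server's identifiers.
```
package cmd

import (
	"errors"
	"net/http"
)

var errInvalidStorageClass = errors.New("invalid x-amz-storage-class value")

// validateStorageClass checks the request's storage class header against
// the AWS-compatible values; an absent header selects the default class.
func validateStorageClass(r *http.Request) error {
	switch r.Header.Get("x-amz-storage-class") {
	case "", "STANDARD", "REDUCED_REDUNDANCY":
		return nil
	default:
		return errInvalidStorageClass
	}
}
```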
HealFile() does not handle the case where an empty file is lost on
some disks. Since Reed-Solomon erasure coding doesn't handle restoring
empty data, HealFile() now creates empty files directly, similar to
CreateFile().
This adds configurable data and parity options on a per-object
basis. To use variable parity (a sketch follows the list):
- Set environment variables to configure variable parity
- Then add the x-amz-storage-class header to PutObject requests
  with the relevant storage class values
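As a sketch of how the two pieces fit together, assume the environment
variables carry values of the form "EC:<parity>" and the header picks
which variable applies; the variable and function names below are
illustrative assumptions, not the server's actual code.
```
package cmd

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parityFor returns the parity drive count for a request, chosen by the
// x-amz-storage-class header and env-configured "EC:<parity>" settings.
func parityFor(storageClass string) (int, error) {
	env := "MINIO_STORAGE_CLASS_STANDARD" // illustrative variable names
	if storageClass == "REDUCED_REDUNDANCY" {
		env = "MINIO_STORAGE_CLASS_RRS"
	}
	val := os.Getenv(env) // e.g. "EC:4" -> 4 parity drives
	if !strings.HasPrefix(val, "EC:") {
		return 0, fmt.Errorf("unexpected storage class value: %q", val)
	}
	return strconv.Atoi(strings.TrimPrefix(val, "EC:"))
}
```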
Fixes#4997
- Use it to send the Content-MD5 header, correctly encoded, to the S3
  gateway (sketched below)
- Fixes a bug in PutObject (including anonymous PutObject) and
  PutObjectPart with the S3 gateway, found when testing with Mint.
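The Content-MD5 header carries the base64 encoding of the raw 16-byte
MD5 digest, not the hex string. A minimal sketch of the encoding:
```
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

// contentMD5 returns the value expected in the Content-MD5 header:
// the base64 encoding of the binary MD5 sum of the payload.
func contentMD5(payload []byte) string {
	sum := md5.Sum(payload)
	return base64.StdEncoding.EncodeToString(sum[:])
}

func main() {
	fmt.Println(contentMD5([]byte("hello, world")))
}
```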
Manta is an object storage service by [Joyent](https://www.joyent.com/).
This PR adds initial support for Manta. It is not intended as
production-ready; it is provided so that feedback can be obtained.
This PR allows 'minio update' to not only show the update banner
but also perform in-place upgrades.
Updates are done safely by validating the sha256 of the downloaded
binary.
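A minimal sketch of the safety check, assuming the expected digest is
available as a hex string published alongside the release binary; the
function name is illustrative.
```
package cmd

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
)

// verifySHA256 compares the sha256 of the downloaded binary with the
// expected hex-encoded digest before the update is applied in place.
func verifySHA256(binary []byte, expectedHex string) error {
	sum := sha256.Sum256(binary)
	if hex.EncodeToString(sum[:]) != expectedHex {
		return errors.New("sha256 mismatch: refusing to apply update")
	}
	return nil
}
```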
Fixes#4781
This PR handles the following situations (sketched below):
- secure endpoints provided: the server fails to start
  if TLS is not configured
- insecure endpoints provided: the server starts regardless
  of whether TLS is configured
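A minimal sketch of the rule, using a url-scheme check and a
hypothetical tlsConfigured flag; the real server inspects its endpoint
and certificate configuration rather than plain strings.
```
package cmd

import (
	"errors"
	"strings"
)

// checkEndpoints enforces the rules above: https endpoints require TLS
// to be configured, while http endpoints start regardless of TLS.
func checkEndpoints(endpoints []string, tlsConfigured bool) error {
	for _, ep := range endpoints {
		if strings.HasPrefix(ep, "https://") && !tlsConfigured {
			return errors.New("secure endpoint provided but TLS is not configured")
		}
	}
	return nil
}
```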
Fixes#5251
- Adds a metadata argument to the CopyObjectPart API to facilitate
implementing encryption for copying APIs too.
- Updates vendored minio-go; this version implements the
  CopyObjectPart client API for use with the S3 gateway.
Fixes#4885
This check incorrectly rejects most valid filenames. The only names Sia
forbids are those with leading forward slashes or path traversal
characters, and it's better to simply let Sia reject invalid names on
its own rather than try to anticipate its errors:
https://github.com/NebulousLabs/Sia/blob/master/doc/api/Renter.md#path-parameters-4
The problem in the existing code was the following line:
```
start := int(keyCrc%uint32(cardinality)) | 1
```
For any cardinality N, the trailing bitwise '| 1' forces the result to
be odd, so the computed start index always lands on an odd value. As
the test cases show, this can lead to many objects being allocated the
same set of disks, or at least to the first disk always being an
odd-indexed one. This introduces a performance problem for the
majority of objects under concurrent load.
Removing `| 1` gives a cleaner distribution; the new code is:
```
start := int(keyCrc % uint32(cardinality))
```
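A small, self-contained demonstration of the skew: with `| 1` every
start index is forced odd, while the plain modulo spreads starts
across both parities. The keys and cardinality here are arbitrary.
```
package main

import (
	"fmt"
	"hash/crc32"
)

func main() {
	const cardinality = 16 // e.g. number of disks in the erasure set
	keys := []string{"bucket/obj-1", "bucket/obj-2", "bucket/obj-3", "bucket/obj-4"}
	for _, key := range keys {
		keyCrc := crc32.ChecksumIEEE([]byte(key))
		oldStart := int(keyCrc%uint32(cardinality)) | 1 // always odd
		newStart := int(keyCrc % uint32(cardinality))   // even and odd possible
		fmt.Printf("%-14s old start=%2d  new start=%2d\n", key, oldStart, newStart)
	}
}
```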
Thanks to Krishna Srinivas for pointing out the bitwise
situation here.
This change introduces the following simplified steps to follow
during config migration.
```
// Steps to move from version N to version N+1
// 1. Add new struct serverConfigVN+1 in config-versions.go
// 2. Set configCurrentVersion to "N+1"
// 3. Set serverConfigCurrent to serverConfigVN+1
// 4. Add new migration function (ex. func migrateVNToVN+1()) in config-migrate.go
// 5. Call migrateVNToVN+1() from migrateConfig() in config-migrate.go
// 6. Make changes in config-current_test.go for any test change
```
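As an illustration of steps 1-5, here is a minimal sketch of what a
hypothetical migration function could look like; the struct names,
fields, and JSON handling below are assumptions for the example, not
the actual config code.
```
package cmd

import (
	"encoding/json"
	"os"
)

// Illustrative old and new config versions (step 1 above).
type serverConfigVN struct {
	Version    string `json:"version"`
	Credential string `json:"credential"`
}

type serverConfigVNPlus1 struct {
	Version    string `json:"version"`
	Credential string `json:"credential"`
	Region     string `json:"region"` // field added in the new version
}

// migrateVNToVNPlus1 loads the old config, copies its fields into the
// new struct, bumps the version and writes the file back (step 4 above).
func migrateVNToVNPlus1(configFile string) error {
	data, err := os.ReadFile(configFile)
	if err != nil {
		return err
	}
	var old serverConfigVN
	if err = json.Unmarshal(data, &old); err != nil {
		return err
	}
	cur := serverConfigVNPlus1{
		Version:    "N+1", // configCurrentVersion (step 2 above)
		Credential: old.Credential,
	}
	data, err = json.MarshalIndent(cur, "", "\t")
	if err != nil {
		return err
	}
	return os.WriteFile(configFile, data, 0o644)
}
```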
In the current implementation we faked the MakeBucket operations
to allow S3 clients to behave properly. Instead, we can create a
placeholder zero-byte file whose name is the hexadecimal
representation of the bucket name itself.
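A minimal sketch of the placeholder idea; the directory layout and
helper name are illustrative assumptions.
```
package gateway

import (
	"encoding/hex"
	"os"
	"path/filepath"
)

// createBucketPlaceholder writes a zero-byte file whose name is the hex
// encoding of the bucket name, marking the bucket as created.
func createBucketPlaceholder(dir, bucket string) error {
	name := hex.EncodeToString([]byte(bucket)) // "mybucket" -> "6d796275636b6574"
	f, err := os.Create(filepath.Join(dir, name))
	if err != nil {
		return err
	}
	return f.Close()
}
```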
The Sia gateway had an upload bug that prevented the user's uploads
from reaching the Sia backend. The PutObject function called
fsRemoveFile at the end, which didn't give the Sia backend enough time
to upload the file to the Sia network.
This adds a goroutine that watches the file upload progress and doesn't
delete the file until the upload is 100% complete.
Note that this solution has the limitation that if the minio process
dies in the middle of an upload, it will leave orphaned files in the
SIA_TEMP directory that the user will need to remove manually.
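A minimal sketch of the watcher idea; uploadProgress is a hypothetical
stand-in for querying the Sia renter API, and the polling interval is
an assumption, not the gateway's actual code.
```
package sia

import (
	"os"
	"time"
)

// uploadProgress is a hypothetical stand-in for a call to the Sia
// renter API that reports how much of the file has been uploaded (0-100).
func uploadProgress(siaObj string) (float64, error) {
	// ... query the renter's file list and return the matching progress ...
	return 100, nil // pretend the upload has completed for this sketch
}

// watchAndCleanup polls the upload progress in a goroutine and removes
// the temporary file only once the upload reaches 100%.
func watchAndCleanup(siaObj, tmpFile string) {
	go func() {
		for {
			pct, err := uploadProgress(siaObj)
			if err == nil && pct >= 100 {
				os.Remove(tmpFile) // Sia now has the full file; safe to delete
				return
			}
			time.Sleep(5 * time.Second)
		}
	}()
}
```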
This PR changes the behavior of DecryptRequest.
Instead of returning `object-tampered` when the client-provided key is
wrong, DecryptRequest now returns `access-denied`.
This matches AWS S3 behavior.
Fixes#5202
Apache Spark sends GetObject requests with a trailing "/".
This PR updates getObjectInfo to stat files even when the object
name is sent with a trailing "/".
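A minimal sketch of the idea against a plain filesystem path; the real
getObjectInfo goes through the object layer, so the function below is
only an illustration.
```
package cmd

import (
	"os"
	"strings"
)

// statWithTrailingSlash stats an object path and, when the name ends in
// "/" but points at a regular file, retries without the trailing slash.
func statWithTrailingSlash(fsPath string) (os.FileInfo, error) {
	fi, err := os.Stat(fsPath)
	if err != nil && strings.HasSuffix(fsPath, "/") {
		fi, err = os.Stat(strings.TrimSuffix(fsPath, "/"))
	}
	return fi, err
}
```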
Fixes#2965
Previously ListenBucketNotificationHandler could deadlock with
PutObjectHandler's eventNotify call when a client closes its
connection. This change removes the cyclic dependency between the
channel and map of ARN to channels by using a separate done channel to
signal that the client has quit.
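A minimal sketch of the done-channel pattern described above, with
hypothetical names; the real handler also manages the map of ARN to
channels, which is omitted here.
```
package cmd

// NotificationEvent is a placeholder for the real event payload type.
type NotificationEvent struct {
	EventName string
	Key       string
}

// sendEvent forwards an event to the listening client without risking a
// permanent block: when the client disconnects its doneCh is closed and
// the send is abandoned, breaking the producer/consumer deadlock.
func sendEvent(eventCh chan<- NotificationEvent, doneCh <-chan struct{}, ev NotificationEvent) bool {
	select {
	case eventCh <- ev:
		return true
	case <-doneCh:
		return false // client quit; drop the event
	}
}
```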
This change exposes public data types so that gateway implementations
can be developed externally rather than being maintained in our repo.
All publicly exported structs are maintained in object-api-datatypes.go:
completePart --> CompletePart
uploadMetadata --> MultipartInfo
All other exported errors are in object-api-errors.go.
The S3 spec requires that a MethodNotAllowed error be returned if the
object name is part of the URL.
Fix the postpolicy-related unit tests so they do not set the object
name as part of the target URL.
Fixes#5141
On Windows, a preceding "/" will cause problems if the command line
already has C:/<export-folder>/ in it. The final resulting path on
Windows might become C:/C:/, which prevents the minio server from
starting properly in distributed mode on Windows.
As a special case, make sure to trim off the separator.
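A minimal sketch of the special case, assuming the endpoint arrives as
a URL-style path such as "/C:/export"; the detection logic here is
simplified for illustration.
```
package main

import (
	"fmt"
	"runtime"
)

// trimWindowsDrivePath removes the leading "/" from paths such as
// "/C:/export" so that joining them with a drive-prefixed command line
// does not produce a broken "C:/C:/export" style path.
func trimWindowsDrivePath(p string) string {
	if runtime.GOOS == "windows" && len(p) > 2 && p[0] == '/' && p[2] == ':' {
		return p[1:]
	}
	return p
}

func main() {
	fmt.Println(trimWindowsDrivePath("/C:/export")) // "C:/export" on Windows
}
```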
NOTE: It is also perfectly fine for Windows users to provide a path
without C:/, since we then treat it as a relative path and resolve
the full filesystem path. Providing the C:/ style is only necessary
for paths on other drives, such as F:/, D:/, etc.
An additional benefit is that this style also supports UNC paths.
Fixes#5136
This change replaces the current SSE-C key derivation scheme. The old
scheme derives a unique object encryption key from the client-provided
key. This key derivation was not invertible, which means a client
cannot change its key without also changing the object encryption key.
AWS S3 allows users to update their SSE-C keys by executing an SSE-C
COPY with source == destination. AWS probably updates just the
metadata (which is a very cheap operation). The old key derivation
scheme would require a complete copy of the object, because the minio
server would not be able to derive the same object encryption key from
a different client-provided key (without breaking the cryptographic
hash function).
This change makes the key derivation invertible.