Verify() was being called by the caller after the data had been
successfully read, i.e. after io.EOF. This disconnect opens a race
under concurrent access to such an object. Verification is not
necessary outside of the Read() call; we can simply do the checksum
verification right inside Read() at io.EOF.
This approach simplifies the usage.
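A minimal sketch of the idea (type and field names here are illustrative,
not the actual ones): the checksum is accumulated as data flows through
Read(), and compared exactly once when the underlying reader returns
io.EOF, so no separate Verify() call can race with concurrent readers.
```
package main

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"fmt"
	"hash"
	"io"
)

// verifyReader verifies a checksum inside Read() at io.EOF instead of
// relying on the caller to invoke a separate Verify() afterwards.
type verifyReader struct {
	src      io.Reader
	hasher   hash.Hash
	expected []byte
}

func (r *verifyReader) Read(p []byte) (int, error) {
	n, err := r.src.Read(p)
	r.hasher.Write(p[:n]) // hash.Hash.Write never returns an error
	if err == io.EOF && !bytes.Equal(r.hasher.Sum(nil), r.expected) {
		return n, errors.New("checksum mismatch")
	}
	return n, err
}

func main() {
	data := []byte("hello")
	sum := sha256.Sum256(data)
	r := &verifyReader{src: bytes.NewReader(data), hasher: sha256.New(), expected: sum[:]}
	_, err := io.ReadAll(r)
	fmt.Println(err) // <nil>: verified at io.EOF
}
```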
In some cases the cache manager returns an ErrCacheFull error when
creating a new cache buffer, but the code still sends object data to the
nil cache buffer.
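A sketch of the fix pattern (the cache API shown is a stand-in for the
real one): tee object data into the cache only when a buffer was
actually created.
```
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"strings"
)

var errCacheFull = errors.New("cache is full")

// createCacheBuffer stands in for the cache manager: on errCacheFull
// it returns a nil buffer.
func createCacheBuffer(size int64) (io.Writer, error) {
	if size > 1024 {
		return nil, errCacheFull
	}
	return new(bytes.Buffer), nil
}

func putObject(data io.Reader, size int64, disk io.Writer) error {
	w := disk
	if cacheBuf, err := createCacheBuffer(size); err == nil {
		// Only tee into the cache when creation succeeded; previously
		// object data was sent to the nil buffer even on errCacheFull.
		w = io.MultiWriter(disk, cacheBuf)
	}
	_, err := io.Copy(w, data)
	return err
}

func main() {
	var disk bytes.Buffer
	fmt.Println(putObject(strings.NewReader("object data"), 1<<20, &disk))
	fmt.Println(disk.String())
}
```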
Don't print the error errFileNotFound, as it is expected that concurrent
complete-multipart-uploads or abort-multipart-uploads would have deleted
the file, and the file may not be found.
Fixes: https://github.com/minio/minio/issues/5056
Every so often we get requests for creating
directories/prefixes, and we end up rejecting
them. This PR implements this and
allows empty directories without adding any new file
to the backend.
Existing lower-level APIs themselves are leveraged to provide
this behavior. Only the FS backend supports this for
the time being, as desired.
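A rough sketch of the behavior, under the assumption that a zero-byte
object whose name ends in "/" maps to plain directory creation on the
FS backend (helper names are invented):
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// putObjectDir materializes a prefix object such as "photos/2017/" as
// an empty directory; no new file is written to the backend.
func putObjectDir(fsRoot, bucket, object string) error {
	if !strings.HasSuffix(object, "/") {
		return fmt.Errorf("%q is not a directory object", object)
	}
	return os.MkdirAll(filepath.Join(fsRoot, bucket, object), 0o777)
}

func main() {
	root, _ := os.MkdirTemp("", "fs")
	defer os.RemoveAll(root)
	fmt.Println(putObjectDir(root, "testbucket", "photos/2017/")) // <nil>
}
```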
s3cmd CLI fails when trying to upload a file to the Azure gateway.
Previous fixes in Azure to handle client-side encryption alone
did not completely address the problem.
We possibly need to convert all the x-amz-meta-<name>
headers, i.e. specifically <name> should be converted into a
C# identifier as mentioned in the docs for `Put Blob`:
https://docs.microsoft.com/en-us/rest/api/storageservices/put-blob
```
s3cmd put README.md s3://myanis/
upload: 'README.md' -> 's3://myanis/README.md' [1 of 1]
4598 of 4598 100% in 0s 47.24 kB/s done
upload: 'README.md' -> 's3://myanis/README.md' [1 of 1]
4598 of 4598 100% in 0s 50.47 kB/s done
ERROR: S3 error: 400 (InvalidArgument): Your metadata headers are not supported.
```
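A simplified sketch of the conversion; the gateway's actual encoding may
differ (in particular it needs to be reversible on GET, which this lossy
version is not), but it shows the C# identifier constraint: letters,
digits and underscores only, not starting with a digit.
```
package main

import (
	"fmt"
	"unicode"
)

// toCSharpIdentifier maps the <name> part of an x-amz-meta-<name>
// header to a valid C# identifier as required by Azure's Put Blob.
func toCSharpIdentifier(name string) string {
	out := []rune(name)
	for i, r := range out {
		if !(unicode.IsLetter(r) || unicode.IsDigit(r) || r == '_') {
			out[i] = '_' // e.g. '-' becomes '_'
		}
	}
	if len(out) == 0 || unicode.IsDigit(out[0]) {
		out = append([]rune{'_'}, out...)
	}
	return string(out)
}

func main() {
	// s3cmd sends keys such as "s3cmd-attrs", which Azure rejects.
	fmt.Println(toCSharpIdentifier("s3cmd-attrs")) // s3cmd_attrs
}
```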
There is a separate issue with s3cmd after this fix is applied, where
the ETag is wrongly validated: https://github.com/s3tools/s3cmd/issues/880
But that is an upstream s3cmd problem, which wrongly interprets the ETag
to be the md5sum of the content that was uploaded.
This PR addresses a long-standing dependency on the
`gopkg.in/check.v1` project used for our tests.
All tests are rewritten to use the default Go
testing framework instead.
There was no reason for us to use an external
package when the Go tools are sufficient for this.
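As an illustration of the migration pattern (the test below is invented,
not one of the converted tests), a check.v1 suite method becomes a plain
testing function:
```
package mypkg

import (
	"path"
	"testing"
)

// pathJoin is a stand-in for a function under test.
func pathJoin(elem ...string) string { return path.Join(elem...) }

// Before, with gopkg.in/check.v1:
//
//	func (s *MySuite) TestPathJoin(c *C) {
//		c.Assert(pathJoin("a", "b"), Equals, "a/b")
//	}
//
// After, with only the standard library:
func TestPathJoin(t *testing.T) {
	if got, want := pathJoin("a", "b"), "a/b"; got != want {
		t.Fatalf("pathJoin() = %q, want %q", got, want)
	}
}
```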
This is done to avoid the repeated declaration of not-implemented
functions for each gateway. It also avoids a possible bug in Go,
https://github.com/golang/go/issues/18468, which has already been
triggered on multiple of our PRs.
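The sharing mechanism is plain struct embedding; a minimal sketch with
illustrative names:
```
package main

import (
	"errors"
	"fmt"
)

var errNotImplemented = errors.New("not implemented")

// gatewayUnsupported declares the "not implemented" stubs exactly once.
type gatewayUnsupported struct{}

func (gatewayUnsupported) ListMultipartUploads(bucket string) error { return errNotImplemented }
func (gatewayUnsupported) HealObject(bucket, object string) error   { return errNotImplemented }

// azureObjects embeds the defaults and overrides only what it supports.
type azureObjects struct {
	gatewayUnsupported
}

func (azureObjects) ListMultipartUploads(bucket string) error { return nil }

func main() {
	var a azureObjects
	fmt.Println(a.ListMultipartUploads("b")) // <nil>: overridden
	fmt.Println(a.HealObject("b", "o"))      // not implemented: inherited
}
```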
- Add release-time conversion helpers (see the sketch after this list).
- Split GetCurrentReleaseTime() into two simpler functions.
- Avoid appending strings when assembling user-agent string.
- Reorder release info URLs to check the newer URLs earlier.
- Remove trivial low-level functions created solely for the purpose of
writing tests.
- Remove some unnecessary tests.
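A hedged sketch of what the release-time conversion helpers can look
like, assuming the RELEASE.2017-09-29T19-16-56Z tag format (an RFC3339
UTC timestamp with ':' replaced by '-'); helper names are illustrative:
```
package main

import (
	"fmt"
	"strings"
	"time"
)

const releaseTagTimeLayout = "2006-01-02T15-04-05Z"

// releaseTagToTime converts a release tag to its build time.
func releaseTagToTime(tag string) (time.Time, error) {
	fields := strings.SplitN(tag, ".", 2)
	if len(fields) != 2 || fields[0] != "RELEASE" {
		return time.Time{}, fmt.Errorf("%q is not a release tag", tag)
	}
	return time.Parse(releaseTagTimeLayout, fields[1])
}

// timeToReleaseTag is the inverse conversion.
func timeToReleaseTag(t time.Time) string {
	return "RELEASE." + t.UTC().Format(releaseTagTimeLayout)
}

func main() {
	t, err := releaseTagToTime("RELEASE.2017-09-29T19-16-56Z")
	fmt.Println(t, err)
	fmt.Println(timeToReleaseTag(t))
}
```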
The Amazon S3 API expects every incoming stream to have a content-length
set; it was superfluous for us to support an object layer that also
accepts streams of unknown size. This PR removes such requirements
and explicitly errors out if the input size is less than zero.
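The guard itself is small; a sketch assuming a PutObject-shaped entry
point:
```
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

var errInvalidArgument = errors.New("invalid argument")

// putObject insists on a known, non-negative content length and errors
// out explicitly instead of accepting unknown-sized streams.
func putObject(data io.Reader, size int64) error {
	if size < 0 {
		return errInvalidArgument
	}
	_, err := io.CopyN(io.Discard, data, size)
	return err
}

func main() {
	fmt.Println(putObject(strings.NewReader("abc"), -1)) // invalid argument
	fmt.Println(putObject(strings.NewReader("abc"), 3))  // <nil>
}
```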
* Enable ListMultipartUploads and ListObjectParts for FS.
Previously we had disabled ListMultipartUploads and ListObjectParts
to see if any clients break. Docker registry broke. This patch
enables ListMultipartUploads and ListObjectParts; however,
ListMultipartUploads with prefix-based listing is not
supported (which is not used by docker registry anyway),
i.e. ListMultipartUploads will need the exact object name.
Gateway implementation of ListObjectsV1 does not validate maxKeys range.
Raise an InvalidArgument when maxKeys is negative so that ListObjects
call is compatible with S3 on all gateways.
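The added validation amounts to a guard at the top of the handler; a
sketch with illustrative names:
```
package main

import (
	"errors"
	"fmt"
)

var errInvalidArgument = errors.New("InvalidArgument")

// listObjectsV1 rejects a negative maxKeys up front so gateways match
// S3's behavior; the error maps to a 400 InvalidArgument response.
func listObjectsV1(bucket, prefix string, maxKeys int) error {
	if maxKeys < 0 {
		return errInvalidArgument
	}
	// ... perform the actual listing ...
	return nil
}

func main() {
	fmt.Println(listObjectsV1("bucket", "", -1)) // InvalidArgument
}
```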
Gateway interface implementations of GetBucketInfo() under the
Azure and S3 gateways did not perform any bucket name input
validation, resulting in incorrect responses where the tests
expect InvalidBucketName.
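A simplified sketch of the missing check, based on S3's documented
bucket naming rules (3-63 characters; lowercase letters, digits, dots
and hyphens; starting and ending with a letter or digit); the real
validation may be stricter:
```
package main

import (
	"fmt"
	"regexp"
)

// validBucketName is a simplified form of S3's bucket naming rules.
var validBucketName = regexp.MustCompile(`^[a-z0-9][a-z0-9.\-]{1,61}[a-z0-9]$`)

// getBucketInfo validates the bucket name before talking to the
// backend, so an invalid name yields InvalidBucketName rather than a
// backend-specific response.
func getBucketInfo(bucket string) error {
	if !validBucketName.MatchString(bucket) {
		return fmt.Errorf("InvalidBucketName: %q", bucket)
	}
	// ... query the backend ...
	return nil
}

func main() {
	fmt.Println(getBucketInfo("Bad_Bucket")) // InvalidBucketName
	fmt.Println(getBucketInfo("testbucket")) // <nil>
}
```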
Fixes #4983
When running `make test` in Docker, two test cases cause hanging.
This patch fixes the problem by removing those test cases.
Thanks to @ws141 for identifying the problem.
The reedsolomon library now avoids allocations during reconstruction.
This change exploits that to reduce memory allocations and GC pressure
during healing and reading.
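A hedged sketch of the pattern with github.com/klauspost/reedsolomon: a
missing shard is marked as a zero-length slice that keeps its capacity,
so Reconstruct() can reuse that memory instead of allocating a fresh
buffer (assuming a library version with this behavior).
```
package main

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
)

func main() {
	enc, err := reedsolomon.New(4, 2) // 4 data shards, 2 parity shards
	if err != nil {
		panic(err)
	}
	shards := make([][]byte, 6)
	for i := range shards {
		shards[i] = make([]byte, 1024)
	}
	if err := enc.Encode(shards); err != nil {
		panic(err)
	}
	// Simulate a lost shard: truncate to zero length but keep capacity,
	// so reconstruction can write into the existing buffer.
	shards[2] = shards[2][:0]
	if err := enc.Reconstruct(shards); err != nil {
		panic(err)
	}
	ok, err := enc.Verify(shards)
	fmt.Println(ok, err) // true <nil>
}
```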
Previously we were wrongly adding `?` as part
of the resource name; add a test case to check
that this is handled properly.
Thanks to @kannappanr for reproducing this.
Without this change, a presigned URL generated with the following
command would fail with a signature mismatch.
```
aws s3 presign s3://testbucket/functional-tests.sh
```
It can happen that an incoming PutObject() request has
inputs of the following form, e.g.:
- bucketName is 'testbucket'
- objectName is '/'
bucketName exists and was previously created, but there
are no other objects in this bucket. In a situation like
this, parentDirIsObject() goes into an infinite loop.
Verifying whether '/' is an object fails on both backends,
but the resulting `path.Dir('/')` returns `'/'`; this causes
the closure to loop onto itself.
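A sketch of the terminating walk (isObject is a stand-in for the actual
backend check):
```
package main

import (
	"fmt"
	"path"
)

// isObject stands in for the backend lookup of an existing object.
func isObject(bucket, entry string) bool { return false }

// parentDirIsObject checks whether any parent of `object` exists as an
// object. Stopping at "." and "/" is the fix: path.Dir("/") returns
// "/" itself, so without the guard the walk never makes progress when
// the object name is "/".
func parentDirIsObject(bucket, object string) bool {
	for parent := path.Dir(object); parent != "." && parent != "/"; parent = path.Dir(parent) {
		if isObject(bucket, parent) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(path.Dir("/"))                        // "/": the source of the loop
	fmt.Println(parentDirIsObject("testbucket", "/")) // false, and it terminates
}
```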
Fixes #4940
This change removes the ReadFileWithVerify function from the
StorageAPI. ReadFile was basically a redirection to ReadFileWithVerify.
This change removes the redirection and moves the logic of
ReadFileWithVerify directly into ReadFile.
This removes a lot of unnecessary code in all StorageAPI implementations.
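A sketch of the merged shape, assuming a nil-able verifier parameter
(the names and signature here are illustrative only):
```
package main

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"fmt"
	"hash"
)

// BitrotVerifier carries the expected checksum; a nil verifier means a
// plain read, which is what the old ReadFile used to do.
type BitrotVerifier struct {
	hasher   hash.Hash
	expected []byte
}

// ReadFile verifies only when a verifier is supplied, replacing the
// former ReadFile -> ReadFileWithVerify redirection with one function.
func ReadFile(data, buf []byte, verifier *BitrotVerifier) (int64, error) {
	n := copy(buf, data)
	if verifier != nil {
		verifier.hasher.Write(buf[:n])
		if !bytes.Equal(verifier.hasher.Sum(nil), verifier.expected) {
			return int64(n), errors.New("bitrot detected")
		}
	}
	return int64(n), nil
}

func main() {
	data := []byte("shard")
	sum := sha256.Sum256(data)
	buf := make([]byte, len(data))
	_, err := ReadFile(data, buf, &BitrotVerifier{sha256.New(), sum[:]})
	fmt.Println(err) // <nil>
}
```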
Fixes #4946
* review: fix doc and typos
Previously, init multipart upload stored the metadata of an object,
which is used for complete multipart. This patch makes the Azure
gateway store the metadata information of the init multipart object
in Azure under the name 'minio.sys.tmp/multipart/v1/<UPLOAD-ID>/meta.json'
and use this information on complete multipart.
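A sketch of the persisted shape, under the assumption that the metadata
is simply JSON-encoded under that temporary object name (the concrete
schema is not part of this message):
```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	uploadID := "6a1f7b7a" // hypothetical upload ID
	meta := map[string]string{
		"Content-Type":     "application/octet-stream",
		"X-Amz-Meta-Owner": "alice",
	}
	// Written at init-multipart time, read back on complete-multipart.
	name := fmt.Sprintf("minio.sys.tmp/multipart/v1/%s/meta.json", uploadID)
	blob, _ := json.Marshal(meta)
	fmt.Println(name, string(blob))
}
```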
This change refactors the ObjectLayer PutObject and PutObjectPart
functions. Instead of passing an io.Reader and a size to PUT operations,
ObjectLayer expects a HashReader.
A HashReader verifies the MD5 sum (and the SHA256 sum if required) of the object.
This change updates all PutObject(Part) calls and removes unnecessary code
in all ObjectLayer implementations.
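A minimal sketch of what such a HashReader can look like (names are
illustrative; SHA256 handling is omitted for brevity): it hashes bytes
as they stream through and verifies against the client-supplied MD5.
```
package main

import (
	"crypto/md5"
	"encoding/hex"
	"errors"
	"fmt"
	"hash"
	"io"
	"strings"
)

// HashReader wraps the object stream and hashes bytes as they are
// read, so every PutObject/PutObjectPart path verifies uniformly.
type HashReader struct {
	src    io.Reader
	md5Hex string
	md5    hash.Hash
}

func NewHashReader(src io.Reader, md5Hex string) *HashReader {
	return &HashReader{src: src, md5Hex: md5Hex, md5: md5.New()}
}

func (r *HashReader) Read(p []byte) (int, error) {
	n, err := r.src.Read(p)
	r.md5.Write(p[:n])
	return n, err
}

// Verify compares the accumulated MD5 with the client-supplied one.
func (r *HashReader) Verify() error {
	if sum := hex.EncodeToString(r.md5.Sum(nil)); sum != r.md5Hex {
		return errors.New("md5 mismatch: " + sum)
	}
	return nil
}

func main() {
	body := "object data"
	sum := md5.Sum([]byte(body))
	r := NewHashReader(strings.NewReader(body), hex.EncodeToString(sum[:]))
	io.Copy(io.Discard, r)  // the PUT path consumes the stream
	fmt.Println(r.Verify()) // <nil>
}
```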
Fixes #4923
This is an improvement upon the existing implementation:
it avoids transferring access and secret keys over
the network. This change only exchanges JWT tokens
generated by an RPC client. Even if the JWT can be
traced over the network on a non-TLS connection, this
change makes sure that we never really expose the
secret key over the network.