Object upload from the browser should save any additional
incoming metadata. It should also send bucket notifications
once they are configured.
Fixes #2292
* Unsatisfied conditions will return AccessDenied instead of MissingFields
* Require form-field `file` in POST policy and make `filename` an optional attribute
* S3 feature: replace `${filename}` in the Key with the filename attribute passed in the multipart form (see the sketch below)
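A minimal Go sketch of that substitution, assuming a helper named `replaceFilename` (illustrative, not the handler's actual code):

```
package main

import (
	"fmt"
	"strings"
)

// replaceFilename substitutes the ${filename} placeholder in the object key
// with the filename received through the multipart form-field `file`.
func replaceFilename(objectKey, formFilename string) string {
	return strings.Replace(objectKey, "${filename}", formFilename, -1)
}

func main() {
	fmt.Println(replaceFilename("user-uploads/${filename}", "photo.jpg"))
	// prints: user-uploads/photo.jpg
}
```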
While the existing code worked, it went through an entire cycle
of constructing the event structure only to end up not sending it.
Avoid this in the first place by returning early if
notifications are not set on the bucket.
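A rough sketch of that early return; the configuration lookup and event type names below are assumed for illustration only:

```
package main

import "fmt"

// ObjectEvent is a trimmed-down stand-in for the real event structure.
type ObjectEvent struct {
	Bucket, Object, EventType string
}

// notificationConfigs maps bucket name -> whether notifications are set.
// In the real server this would be the parsed notification configuration.
var notificationConfigs = map[string]bool{}

// eventNotify returns early when the bucket has no notification config,
// so the event payload is never constructed needlessly.
func eventNotify(ev ObjectEvent) {
	if !notificationConfigs[ev.Bucket] {
		return // no targets configured, skip payload construction entirely
	}
	payload := fmt.Sprintf("%s on %s/%s", ev.EventType, ev.Bucket, ev.Object)
	fmt.Println("sending:", payload)
}

func main() {
	eventNotify(ObjectEvent{Bucket: "photos", Object: "a.jpg", EventType: "s3:ObjectCreated:Put"}) // dropped quietly
	notificationConfigs["photos"] = true
	eventNotify(ObjectEvent{Bucket: "photos", Object: "a.jpg", EventType: "s3:ObjectCreated:Put"}) // now sent
}
```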
* Implement basic S3 notifications through queues
Supports multiple queues and three basic queue types:
1. NilQueue -- messages don't get sent anywhere
2. LogQueue -- messages get logged
3. AmqpQueue -- messages are sent to an AMQP queue
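A minimal Go sketch of the queue abstraction behind the three types above; the interface and type names are illustrative and the AMQP publish is stubbed out:

```
package main

import "log"

// queue abstracts a notification target; the three variants below mirror
// the queue types listed above.
type queue interface {
	Send(msg string) error
}

// nilQueue drops every message.
type nilQueue struct{}

func (nilQueue) Send(string) error { return nil }

// logQueue writes every message to the server log.
type logQueue struct{}

func (logQueue) Send(msg string) error {
	log.Println("notification:", msg)
	return nil
}

// amqpQueue would publish to an AMQP exchange; shown here as a stub since
// the real implementation depends on an AMQP client library.
type amqpQueue struct {
	url, exchange string
}

func (q amqpQueue) Send(msg string) error {
	// publish msg to q.exchange at q.url using an AMQP client
	return nil
}

func main() {
	queues := []queue{nilQueue{}, logQueue{}, amqpQueue{url: "amqp://localhost", exchange: "minio"}}
	for _, q := range queues {
		_ = q.Send(`{"EventName":"s3:ObjectCreated:Put"}`)
	}
}
```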
* api: Implement bucket notification.
Supports two different queue types:
- AMQP
- Elasticsearch
* Add support for Redis
In current master, ListObjectsV2 was merged into ListObjectsHandler,
which also implements the V1 API.
Move the detection of the ListObjects version to its rightful place
in the HTTP router.
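A sketch of what that router-level dispatch might look like, assuming the V2 request is identified by the `list-type=2` query parameter (handler names are illustrative):

```
package main

import (
	"fmt"
	"net/http"
)

// listObjectsRouter dispatches bucket listing calls at the router layer:
// the presence of `list-type=2` selects the V2 handler, everything else
// falls back to the V1 handler.
func listObjectsRouter(w http.ResponseWriter, r *http.Request) {
	if r.URL.Query().Get("list-type") == "2" {
		listObjectsV2Handler(w, r)
		return
	}
	listObjectsV1Handler(w, r)
}

func listObjectsV1Handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "ListObjects V1")
}

func listObjectsV2Handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "ListObjects V2")
}

func main() {
	http.HandleFunc("/", listObjectsRouter)
	http.ListenAndServe(":9000", nil)
}
```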
This patch removes debug logging altogether; instead it adds the
ability to trace errors properly, pointing back to the origin of
the problem.
To enable tracing, set the "MINIO_TRACE" environment variable to "1"
or "true", which prints back traces whenever there is an error that
is unhandled or at the handler layer.
By default this tracing is turned off and only user-level logging is
provided.
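A minimal sketch of how such a check might be wired up; the function names here are assumptions, not the actual helpers in the patch:

```
package main

import (
	"fmt"
	"os"
	"runtime/debug"
)

// traceEnabled reports whether MINIO_TRACE is set to "1" or "true".
func traceEnabled() bool {
	v := os.Getenv("MINIO_TRACE")
	return v == "1" || v == "true"
}

// errorIf logs the plain message by default and prints a back trace
// only when tracing is enabled.
func errorIf(err error, msg string) {
	if err == nil {
		return
	}
	fmt.Println(msg+":", err)
	if traceEnabled() {
		debug.PrintStack()
	}
}

func main() {
	errorIf(fmt.Errorf("disk not found"), "unable to initialize backend")
}
```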
The S3 API returns a BucketAlreadyExists error when another user already owns a bucket with that name.
If the user creating the bucket already owns it, S3 returns BucketAlreadyOwnedByYou.
As Minio has only one user, it should behave accordingly.
Otherwise it causes failures in applications that ignore the creation of an already existing bucket in their own account, but fail when the bucket name is taken by someone else.
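A sketch of the intended error mapping, with the sentinel error and helper names assumed for illustration:

```
package main

import (
	"errors"
	"fmt"
)

// errBucketExists stands in for the object layer's "bucket exists" error.
var errBucketExists = errors.New("bucket exists")

// toS3ErrorCode maps a MakeBucket failure for an existing bucket to the
// S3 error code. With a single-tenant server every existing bucket is
// owned by the caller, hence BucketAlreadyOwnedByYou rather than
// BucketAlreadyExists.
func toS3ErrorCode(err error) string {
	if errors.Is(err, errBucketExists) {
		return "BucketAlreadyOwnedByYou"
	}
	return "InternalError"
}

func main() {
	fmt.Println(toS3ErrorCode(errBucketExists))
}
```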
- Rename dir.go as 'fs-multipart-dir.go'
- Move the push/pop to fs-multipart.go and rename them as save/lookup.
- Rename objectInfo instances in fs-multipart as multipartObjInfo.
- Marker should be escaped outside in handlers.
- Delimiter should be handled outside in handlers.
- Add missing comments and change the function names.
- Handle the case of 'maxKeys' set to '0'; it is a valid
case and should be treated as such (see the sketch below).
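A small illustration of that listing behaviour, with a simplified in-memory `listObjects` standing in for the real implementation:

```
package main

import "fmt"

// listObjects treats maxKeys == 0 as a valid request: it returns an empty,
// non-truncated result rather than rejecting the call.
func listObjects(keys []string, maxKeys int) (result []string, isTruncated bool) {
	if maxKeys == 0 {
		return nil, false // valid request, nothing to list
	}
	if maxKeys >= len(keys) {
		return keys, false
	}
	return keys[:maxKeys], true
}

func main() {
	keys := []string{"a.txt", "b.txt", "c.txt"}
	fmt.Println(listObjects(keys, 0))
	fmt.Println(listObjects(keys, 2))
}
```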
This type of check is added to make sure that we can support
custom regions.
ListBuckets and GetBucketLocation always use "us-east-1"; the rest
should look for the configured region.
Fixes #1278
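A minimal sketch of such a region check; `configuredRegion` and `isValidRegion` are illustrative names, not the actual identifiers:

```
package main

import "fmt"

// configuredRegion stands in for the region read from the server config.
var configuredRegion = "us-west-2"

// isValidRegion checks the region claimed in a request. ListBuckets and
// GetBucketLocation always use "us-east-1"; every other API must match
// the configured region.
func isValidRegion(reqRegion, api string) bool {
	if api == "ListBuckets" || api == "GetBucketLocation" {
		return reqRegion == "us-east-1"
	}
	return reqRegion == configuredRegion
}

func main() {
	fmt.Println(isValidRegion("us-east-1", "ListBuckets")) // true
	fmt.Println(isValidRegion("us-east-1", "PutObject"))   // false, must be us-west-2
	fmt.Println(isValidRegion("us-west-2", "PutObject"))   // true
}
```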
Signature calculation has now moved out from being a package to the
top level as a layered mechanism.
When the payload is calculated with a body, goroutines are started
to simultaneously write and calculate the SHA sum. Errors are sent
over the writer so that the lower layer removes the temporary files
properly.
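A self-contained Go sketch of that pattern, using an io.Pipe so one goroutine hashes the stream while the caller writes it; the function name and error-handling details are assumptions:

```
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"strings"
)

// writeAndHash streams the payload to dst while a goroutine computes the
// SHA-256 in parallel via an io.Pipe. Errors are propagated through
// CloseWithError so the consumer can abort and clean up temporary files.
func writeAndHash(dst io.Writer, payload io.Reader) (string, error) {
	pr, pw := io.Pipe()
	sumCh := make(chan string, 1)

	go func() {
		h := sha256.New()
		if _, err := io.Copy(h, pr); err != nil {
			sumCh <- ""
			return
		}
		sumCh <- hex.EncodeToString(h.Sum(nil))
	}()

	// Every byte written to dst is also fed to the hashing goroutine.
	if _, err := io.Copy(io.MultiWriter(dst, pw), payload); err != nil {
		pw.CloseWithError(err) // lets the hashing goroutine stop early
		return "", err
	}
	pw.Close()
	return <-sumCh, nil
}

func main() {
	var out strings.Builder
	sum, err := writeAndHash(&out, strings.NewReader("hello world"))
	fmt.Println(sum, err)
}
```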
This API takes XML input in the following form.
```
<?xml version="1.0" encoding="UTF-8"?>
<Delete>
<Quiet>true</Quiet>
<Object>
<Key>Key</Key>
</Object>
<Object>
<Key>Key</Key>
</Object>
...
</Delete>
```
and responds with the list of successful deletions and the list of
errors for the objects that could not be deleted.
```
<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Deleted>
<Key>sample1.txt</Key>
</Deleted>
<Error>
<Key>sample2.txt</Key>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
</Error>
</DeleteResult>
```
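For reference, the request and response bodies above could be modeled with encoding/xml structs roughly like the following (field and type names are illustrative, not the repository's actual definitions):

```
package main

import (
	"encoding/xml"
	"fmt"
)

// ObjectIdentifier names one key to delete.
type ObjectIdentifier struct {
	Key string `xml:"Key"`
}

// DeleteObjectsRequest mirrors the <Delete> request body shown above.
type DeleteObjectsRequest struct {
	XMLName xml.Name           `xml:"Delete"`
	Quiet   bool               `xml:"Quiet"`
	Objects []ObjectIdentifier `xml:"Object"`
}

// DeleteError reports a key that could not be removed.
type DeleteError struct {
	Key     string `xml:"Key"`
	Code    string `xml:"Code"`
	Message string `xml:"Message"`
}

// DeleteObjectsResponse mirrors the <DeleteResult> response shown above.
type DeleteObjectsResponse struct {
	XMLName        xml.Name           `xml:"http://s3.amazonaws.com/doc/2006-03-01/ DeleteResult"`
	DeletedObjects []ObjectIdentifier `xml:"Deleted"`
	Errors         []DeleteError      `xml:"Error"`
}

func main() {
	body := `<Delete><Quiet>true</Quiet><Object><Key>sample1.txt</Key></Object></Delete>`
	var req DeleteObjectsRequest
	if err := xml.Unmarshal([]byte(body), &req); err != nil {
		panic(err)
	}
	fmt.Printf("quiet=%v keys=%d\n", req.Quiet, len(req.Objects))
}
```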
This commit makes the code cleaner and reduces repetition in the code
base. Specifically, it reduces the clutter in setObjectHeaders. It also
merges encodeSuccessResponse and encodeErrorResponse because they
served no distinct purpose. Finally, it adds a simple test for
generateRequestID.