On Unix systems it is possible to cap the memory available to a
process using `ulimit -v`, which sets `syscall.RLIMIT_AS`. If a
process exceeds this limit, the kernel proactively OOM-kills it.
To avoid this, instead of blindly defaulting our cache size to 8GB,
we should check whether the current system limit is lower and size
the cache accordingly.
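A minimal sketch of that check, assuming a `getMaxCacheSize` helper and an 8GB default (the names are illustrative, not the actual implementation):

```go
package main

import "syscall"

// defaultCacheSize is the cache size we would like to use when the
// system imposes no tighter limit (8 GiB).
const defaultCacheSize = uint64(8 * 1024 * 1024 * 1024)

// getMaxCacheSize clamps the default cache size to the soft
// address-space limit (RLIMIT_AS) when one is set.
func getMaxCacheSize() uint64 {
	cacheSize := defaultCacheSize
	var rlimit syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_AS, &rlimit); err == nil {
		// rlimit.Cur is the soft limit currently enforced by the
		// kernel; it is a huge value when no limit is set, so the
		// comparison is safe either way.
		if rlimit.Cur < cacheSize {
			cacheSize = rlimit.Cur
		}
	}
	return cacheSize
}
```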
Currently `xl.json` saves algorithm information for bit-rot
verification. Since the bit-rot algorithms can change in the
future, make sure erasureReadFile doesn't default to a particular
algorithm; instead, use the one recorded in checkSumInfo.
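A hedged sketch of what "use the checkSumInfo" could look like (the algorithm names and the helper are illustrative, not the actual code):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"hash"

	"golang.org/x/crypto/blake2b"
)

// newBitrotHash picks the hash implementation named in xl.json's
// checkSumInfo rather than hard-coding one in erasureReadFile.
func newBitrotHash(algo string) (hash.Hash, error) {
	switch algo {
	case "blake2b":
		return blake2b.New512(nil)
	case "sha256":
		return sha256.New(), nil
	default:
		return nil, fmt.Errorf("unsupported bit-rot algorithm %q", algo)
	}
}
```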
Object upload from the browser should save additional incoming
metadata. It should also notify through bucket notifications
once they are set.
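For illustration, the metadata half could look like this sketch (the helper name and prefix handling are assumptions):

```go
package main

import "strings"

// extractMetadata keeps only the x-amz-meta-* fields from the
// browser upload's form values so they can be stored with the object.
func extractMetadata(formValues map[string]string) map[string]string {
	metadata := make(map[string]string)
	for key, value := range formValues {
		if strings.HasPrefix(strings.ToLower(key), "x-amz-meta-") {
			metadata[key] = value
		}
	}
	return metadata
}
```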
Fixes #2292
* Unsatisfied conditions will return AccessDenied instead of MissingFields
* Require form-field `file` in POST policy and make `filename` an optional attribute
* S3 feature: replace `${filename}` in the object key with the `filename` attribute passed in the multipart form (see the sketch below)
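The substitution itself is small; one way it could look (the function name is made up):

```go
package main

import "strings"

// substituteFilename applies S3's POST-policy convention: any
// "${filename}" occurrence in the form's key is replaced with the
// name of the uploaded file.
func substituteFilename(formKey, filename string) string {
	return strings.Replace(formKey, "${filename}", filename, -1)
}
```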
While the existing code worked, it went through an entire cycle
of constructing the event structure only to end up not sending it.
Avoid this in the first place by returning early if notifications
are not set on the bucket.
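A sketch of the early return (the helper names and the lookup are assumptions):

```go
package main

// bucketNotifications stands in for the real notification config.
var bucketNotifications = map[string]bool{}

func isBucketNotificationSet(bucket string) bool {
	return bucketNotifications[bucket]
}

// eventNotify returns before constructing the event payload when the
// bucket has no notification configuration set.
func eventNotify(bucket string, build func() []byte, send func([]byte)) {
	if !isBucketNotificationSet(bucket) {
		return // avoid building an event we would never send
	}
	send(build())
}
```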
Fresh disks can be provided in any order; we need to make sure
to preserve the existing disk order and populate the fresh disks
into the new positions.
Thanks to Anis Elleuch <vadmeste@gmail.com> for finding this issue.
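A sketch of the ordering logic (the UUID lookup and types are assumptions, not the actual code): formatted disks keep their saved positions, fresh disks fill whatever slots remain, in the order they were given.

```go
package main

// reorderDisks places each formatted disk at its saved position (by
// UUID) and fills the remaining slots with fresh disks in input order.
func reorderDisks(disks, savedUUIDs []string, diskUUID func(string) string) []string {
	ordered := make([]string, len(disks))
	var fresh []string
	for _, disk := range disks {
		uuid := diskUUID(disk) // returns "" for a fresh, unformatted disk
		pos := -1
		if uuid != "" {
			for i, saved := range savedUUIDs {
				if saved == uuid {
					pos = i
					break
				}
			}
		}
		if pos >= 0 {
			ordered[pos] = disk
		} else {
			fresh = append(fresh, disk)
		}
	}
	// Fresh disks take the remaining positions, preserving input order.
	for i := range ordered {
		if ordered[i] == "" && len(fresh) > 0 {
			ordered[i], fresh = fresh[0], fresh[1:]
		}
	}
	return ordered
}
```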
* XL/erasure-read: optimize memory allocation during erasure-read by using a temporary buffer pool.
With this change, the buffer needed by erasureReadFile during GetObject is allocated only once.
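An illustrative buffer pool built on `sync.Pool`, assuming a fixed block size (a sketch, not the actual pool implementation):

```go
package main

import "sync"

const blockSize = 10 * 1024 * 1024 // assumed erasure block size

// bufPool hands out reusable buffers so each erasure read does not
// allocate a fresh one.
var bufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, blockSize)
		return &b
	},
}

// withBuffer runs read with a pooled buffer and returns it afterwards.
func withBuffer(read func(buf []byte)) {
	bp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bp) // return the buffer for reuse
	read(*bp)
}
```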
* fs: Set nextMarker regardless of whether it has a slash.
* tests: Use listObjects to clean up remaining tree-walk goroutines.
* tests: Use slices to hold data instead of enumerating test cases by hand
... also fixed the numbering of the test cases.
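The pattern being described is Go's usual table-driven test; a generic sketch (the function under test is a stand-in):

```go
package main

import "testing"

// isValidObjectName stands in for whatever the real tests exercise.
func isValidObjectName(name string) bool { return name != "" }

func TestObjectNames(t *testing.T) {
	// Test cases live in a slice: adding one is a single new element,
	// and the loop index gives every case a stable number.
	testCases := []struct {
		name    string
		isValid bool
	}{
		{"valid-object", true},
		{"", false},
	}
	for i, testCase := range testCases {
		if got := isValidObjectName(testCase.name); got != testCase.isValid {
			t.Errorf("Test %d: expected %v, got %v", i+1, testCase.isValid, got)
		}
	}
}
```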
* Implement basic S3 notifications through queues
Supports multiple queues and three basic queue types (see the sketch after this list):
1. NilQueue -- messages don't get sent anywhere
2. LogQueue -- messages get logged
3. AmqpQueue -- messages are sent to an AMQP queue
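One possible shape for that abstraction (an assumption, not the actual interface); the AMQP variant would implement the same interface on top of an AMQP client:

```go
package main

import "log"

// Queue is what all three queue types would implement.
type Queue interface {
	Send(event []byte) error
}

// NilQueue: messages don't get sent anywhere.
type NilQueue struct{}

func (NilQueue) Send([]byte) error { return nil }

// LogQueue: messages get logged.
type LogQueue struct{ logger *log.Logger }

func (q LogQueue) Send(event []byte) error {
	q.logger.Print(string(event))
	return nil
}
```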
* api: Implement bucket notification.
Supports two different queue types
- AMQP
- ElasticSearch.
* Add support for Redis
The reason is that any function relying on `getLoadBalancedQuorumDisks`
cannot possibly have idempotent behavior.
The problem is that, given a set of N disks, it returns just a
shuffled N/2 of them. In a scenario where we have N/2 failed disks,
the set returned by `getLoadBalancedQuorumDisks` does not necessarily
coincide with the failed disks, so calls using these disks might
succeed or fail randomly at different intervals in time.
This change proposes moving to `getLoadBalancedDisks()` and using
the shuffled N disks as a whole. Since we are not reducing our
solution space, we will most likely hit a good disk. This also
provides consistent behavior for all functions which rely on
shuffled disks, as sketched below.
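A sketch of the difference (the types and helpers are illustrative):

```go
package main

import "math/rand"

type StorageAPI interface{} // stand-in for the storage interface

// shuffle returns a randomly permuted copy of disks.
func shuffle(disks []StorageAPI) []StorageAPI {
	out := make([]StorageAPI, len(disks))
	copy(out, disks)
	rand.Shuffle(len(out), func(i, j int) { out[i], out[j] = out[j], out[i] })
	return out
}

// Quorum variant: only a shuffled half is returned, so with N/2
// failed disks a caller may randomly receive only bad disks.
func loadBalancedQuorum(disks []StorageAPI) []StorageAPI {
	shuffled := shuffle(disks)
	return shuffled[:len(shuffled)/2]
}

// Proposed variant: all N disks, shuffled. The solution space is not
// reduced, so a caller walking the slice always reaches a good disk
// if one exists.
func loadBalanced(disks []StorageAPI) []StorageAPI {
	return shuffle(disks)
}
```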
Fixes #2242