Existing objects are renamed to a temporary location in
completeMultipart before being overwritten. We make sure
to delete that temporary object even if subsequent calls fail.
Additionally, move the check that verifies whether the parent
directory is a file earlier, so the entire operation fails upfront.
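A minimal sketch of this cleanup pattern, assuming hypothetical helper
names (`renameObject`, `deleteObject` and `newUUID` are stand-ins, not
the actual functions):
```
package sketch

// Stand-ins for the storage-layer helpers; only the defer-based cleanup
// pattern is the point of this hedged sketch.
var (
	minioMetaTmpBucket = ".minio.sys/tmp"
	newUUID            func() string
	renameObject       func(srcBucket, srcObject, dstBucket, dstObject string) error
	deleteObject       func(bucket, object string) error
)

// replaceObject illustrates the guarantee described above: the object
// being overwritten is renamed into the temporary namespace first, and
// its removal is deferred so it happens even if a later step fails.
func replaceObject(bucket, object string) error {
	tmpObj := newUUID()
	if err := renameObject(bucket, object, minioMetaTmpBucket, tmpObj); err == nil {
		// Runs on every return path of replaceObject, success or failure.
		defer deleteObject(minioMetaTmpBucket, tmpObj)
	}
	// ... rename the completed upload into place, persist metadata, etc.
	return nil
}
```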
Ref #3784
Make sure to skip reserved bucket names in `ListBuckets()`.
The current code didn't skip them properly; also generalize
this behavior for both XL and FS.
The current implementation didn't honor quorum properly and didn't
handle the generated errors properly. This patch addresses that
and also moves the common `cleanupMultipartUploads` code into an
XL-specific private function.
Fixes #3665
This is implemented so that issues like the one in the
following flow don't affect the behavior of the operation.
```
GetObjectInfo()
.... --> Time window for mutation (no lock held)
.... --> Time window for mutation (no lock held)
GetObject()
```
This happens when two simultaneous uploads are made to the
same object; the object info returned to the client can be
wrong in that window.
Another classic example is "CopyObject" API itself
which reads from a source object and copies to
destination object.
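A minimal sketch of the intended pattern, assuming a plain
`sync.RWMutex` and stand-in `getObjectInfo`/`getObject` functions
rather than the server's actual lock implementation:
```
package sketch

import (
	"io"
	"sync"
)

// Hypothetical stand-ins for the object layer; only the locking pattern
// is the point of this sketch.
type objectInfo struct{ Size int64 }

var (
	nsLock        sync.RWMutex
	getObjectInfo func(bucket, object string) (objectInfo, error)
	getObject     func(bucket, object string, offset, length int64, w io.Writer) error
)

// readObjectConsistently holds a single read lock across both calls so
// a concurrent overwrite cannot mutate the object in between them.
func readObjectConsistently(bucket, object string, w io.Writer) error {
	nsLock.RLock()
	defer nsLock.RUnlock()

	info, err := getObjectInfo(bucket, object)
	if err != nil {
		return err
	}
	return getObject(bucket, object, 0, info.Size, w)
}
```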
Fixes #3370
Fixes #2912
This change brings in changes at multiple places
- Reuse buffers at almost all locations ranging
from rpc, fs, xl, checksum etc.
- Change caching behavior to disable itself
under low memory conditions, i.e. < 8GB of RAM.
- Only objects up to 1/10th the size of the cache
are cached; for example, if the cache size is 4GB,
the maximum object size which will be cached
is 400MB. This change is an optimization to cache
more objects rather than a few larger objects.
- If the object cache is enabled, the default GC
percent has been reduced to 20% in line
with newly observed GC behavior. If cache
utilization reaches 75% of the maximum value,
the GC percent is further reduced to 10% to make
GC more aggressive.
- Do not use *bytes.Buffer* due to its growth
requirements: for every allocation *bytes.Buffer*
allocates an additional buffer for its internal
purposes. This is undesirable for us, so we
implemented a new cappedWriter which is capped at a
desired size; beyond that, all writes are rejected
(see the sketch below).
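A hedged sketch of the cappedWriter idea; names and details are
illustrative, not the exact implementation:
```
package sketch

import "errors"

// errWriteCapExceeded is illustrative; the real error value may differ.
var errWriteCapExceeded = errors.New("write exceeds the configured cap")

// cappedWriter writes into a preallocated slice and rejects anything
// that would grow past the cap, avoiding the extra growth allocations
// that bytes.Buffer performs internally.
type cappedWriter struct {
	buf []byte // preallocated backing slice
	n   int    // bytes written so far
}

func newCappedWriter(capSize int64) *cappedWriter {
	return &cappedWriter{buf: make([]byte, capSize)}
}

func (c *cappedWriter) Write(p []byte) (int, error) {
	if c.n+len(p) > len(c.buf) {
		return 0, errWriteCapExceeded
	}
	copy(c.buf[c.n:], p)
	c.n += len(p)
	return len(p), nil
}

// Bytes returns the written portion of the buffer.
func (c *cappedWriter) Bytes() []byte { return c.buf[:c.n] }
```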
Possible fix for #3403.
XL multipart fails to remove temporary files when an error occurs during upload. This case covers the scenario where an upload is canceled manually by the client in the middle of the job.
- abstract out instrumentation information.
- use separate lockInstance type that encapsulates the nsMutex, volume,
path and opsID as the frontend or top-level lock object.
- Reads and writes of uploads.json in XL now use quorum for
newMultipart, completeMultipart and abortMultipart operations.
- Each disk's `uploads.json` file is read and updated independently when
adding or removing an upload id from the file. Quorum is used to
decide if the high-level operation actually succeeded (see the sketch
below).
- Refactor FS code to simplify the flow, and fix a bug while reading
uploads.json.
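The quorum decision mentioned above, as a hedged sketch with
illustrative names (`errXLWriteQuorum` stands in for the project's
write-quorum error):
```
package sketch

import "errors"

// errXLWriteQuorum stands in for the project's write-quorum error.
var errXLWriteQuorum = errors.New("write quorum not met")

// reduceWriteQuorumErrs illustrates the quorum decision: each disk's
// uploads.json is read and updated independently, the per-disk errors
// are collected, and the high-level operation only succeeds when at
// least writeQuorum disks reported success.
func reduceWriteQuorumErrs(errs []error, writeQuorum int) error {
	success := 0
	for _, err := range errs {
		if err == nil {
			success++
		}
	}
	if success < writeQuorum {
		return errXLWriteQuorum
	}
	return nil
}
```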
opsID, a variable on the stack, changes over the course of the
CompleteMultipartUpload function in xl-v1-multipart.go. It was
being used in a function closure passed to a defer
statement. Variables captured by a closure are evaluated at the
time the deferred function actually runs, not when defer is
declared, so it is incorrect to depend on the values of stack
variables at the end of the function, when deferred functions are
executed.
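A runnable illustration of this defer pitfall (not the actual MinIO
code); passing the value as an argument binds it at defer time:
```
package main

import "fmt"

// A deferred closure observes the value of a captured variable when it
// runs, while a deferred call evaluates its arguments immediately.
func main() {
	opsID := "op-1"
	defer func() {
		fmt.Println("closure sees:", opsID) // prints "op-2"
	}()
	defer func(id string) {
		fmt.Println("argument sees:", id) // prints "op-1"
	}(opsID)
	opsID = "op-2"
}
```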
In a multipart upload scenario, disks going down and coming back up
can lead to certain parts missing on the disk/server that was
going down. This is a valid case since these blocks can be
missing and should be healed through the heal operation. But we are
not supposed to fail prematurely since we have enough data on
the other disks within read-quorum.
This fix relaxes the previous assumption and fixes a major corruption
issue reproduced by @vadmeste.
Fixes #2976
These messages are based on our prep stage during XL
initialization and print a more informative message regarding
drive information.
This change also does a much needed refactoring.
- Using gjson for constructing xlMetaV1{} in realXLMeta.
- Tests for parsing and constructing xlMetaV1{} using gjson.
- Changes made since benchmarks showed 30-40% improvement in speed.
- Follow up comments in issue https://github.com/minio/minio/issues/2208
for more details.
- gjson parsing of parts from xl.json for listParts.
- gjson parsing of statInfo from xl.json for getObjectInfo.
- Vendorizing gjson dependency.
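A hedged example of the gjson-based parsing; the field paths below
only loosely follow the xl.json layout and are illustrative:
```
package main

import (
	"fmt"

	"github.com/tidwall/gjson"
)

func main() {
	// Trimmed-down xl.json-like document; field paths are illustrative.
	xlMetaJSON := `{"stat":{"size":1024,"modTime":"2016-08-21T00:00:00Z"},
		"parts":[{"number":1,"etag":"abc","size":1024}]}`

	// Stat info for getObjectInfo, read without a full unmarshal.
	size := gjson.Get(xlMetaJSON, "stat.size").Int()
	modTime := gjson.Get(xlMetaJSON, "stat.modTime").String()
	fmt.Println("size:", size, "modTime:", modTime)

	// Parts for listParts.
	gjson.Get(xlMetaJSON, "parts").ForEach(func(_, part gjson.Result) bool {
		fmt.Println("part", part.Get("number").Int(), part.Get("size").Int())
		return true // continue iterating
	})
}
```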
- Instrumentation for locks.
- Detailed test coverage.
- Adding RPC control handler to fetch lock instrumentation.
- RPC control handlers suite tests with a test RPC server.
This patch introduces a new command line 'control'
- minio control
To manage a minio server, connecting through the GoRPC API frontend.
- minio control heal
Implemented for healing objects.
This API is a precursor to implementing `minio lambda` and `mc` continuous replication.
This new API is an extension to the BucketNotification APIs.
// Request
```
GET /bucket?notificationARN=arn:minio:lambda:us-east-1:10:minio HTTP/1.1
...
...
```
// Response
```
{"Records": ...}
...
...
...
{"Records": ...}
```
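A hedged client-side sketch for consuming this stream; the endpoint,
port and lack of authentication are assumptions for illustration only:
```
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// Endpoint and ARN follow the request example above.
	resp, err := http.Get("http://localhost:9000/bucket?notificationARN=arn:minio:lambda:us-east-1:10:minio")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The body is a long-lived stream of JSON records, so decode them
	// one by one as they arrive.
	dec := json.NewDecoder(resp.Body)
	for {
		var event map[string]interface{}
		if err := dec.Decode(&event); err != nil {
			log.Fatal(err) // stream closed or malformed record
		}
		log.Printf("notification: %v", event["Records"])
	}
}
```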
Currently `xl.json` saves algorithm information for bit-rot
verification. Since the bit-rot algorithms can change in the
future, make sure erasureReadFile doesn't default to
a particular algorithm. Instead, use the checkSumInfo.
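A hedged sketch of the intent, with illustrative types and algorithm
names (the actual checkSumInfo structure and supported algorithms may
differ):
```
package sketch

import (
	"crypto/sha256"
	"fmt"
	"hash"

	"golang.org/x/crypto/blake2b"
)

// checkSumInfo mirrors the idea of the per-part checksum entry in
// xl.json; fields are illustrative.
type checkSumInfo struct {
	Name      string // part name
	Algorithm string // recorded at write time, e.g. "blake2b"
	Hash      string // expected checksum
}

// verifierFor builds the bit-rot verifier from the recorded algorithm
// instead of assuming a fixed default.
func verifierFor(ck checkSumInfo) (hash.Hash, error) {
	switch ck.Algorithm {
	case "blake2b":
		return blake2b.New512(nil)
	case "sha256":
		return sha256.New(), nil
	default:
		return nil, fmt.Errorf("unsupported bit-rot algorithm %q", ck.Algorithm)
	}
}
```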
The reason is that any function relying on `getLoadBalancedQuorumDisks`
cannot possibly have idempotent behavior.
The problem comes from the fact that, given a set of N disks, it returns
just a shuffled N/2 of them. In a scenario where we have N/2
failed disks, the set returned by `getLoadBalancedQuorumDisks`
does not necessarily coincide with the failed disks, so calls using
these disks might succeed or fail randomly at different points in time.
This change proposes moving to `getLoadBalancedDisks()`
and using the shuffled N disks as a whole. Since we are not reducing our
solution space, most of the time we will hit a good disk. This
also provides consistent behavior for all the functions which rely
on shuffled disks.
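A hedged sketch of the proposed `getLoadBalancedDisks()` behavior,
using a stand-in `StorageAPI` type:
```
package sketch

import "math/rand"

// StorageAPI stands in for the disk interface; the point is only the
// shuffling behavior.
type StorageAPI interface{}

// getLoadBalancedDisks returns all N disks in a shuffled order instead
// of a shuffled N/2 subset, so callers keep the full solution space and
// behave consistently even when several disks are down.
func getLoadBalancedDisks(disks []StorageAPI) []StorageAPI {
	shuffled := make([]StorageAPI, len(disks))
	for i, j := range rand.Perm(len(disks)) {
		shuffled[i] = disks[j]
	}
	return shuffled
}
```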
Fixes #2242
* unit-tests: Unit tests for erasureCreateFile and erasureReadFile.
* appendFile() should return errXLWriteQuorum.
* TestErasureReadFileOffsetLength() tests erasureReadFile() for different offset and lengths.
* Fix for the failure seen in the erasure read unit test case. Issue #2227
* Move common erasure setup code to newErasureTestSetup()
* Review fixes. Add few more test cases for erasureReadFile.