In a multipart upload scenario, disks going down and coming back up
can lead to certain parts missing from the disk/server that went
down. This is a valid case: those blocks can be missing and should
be repaired through the heal operation. We should not fail
prematurely, since the remaining disks still hold enough data to
satisfy read quorum.
This fix relaxes the previous assumption and fixes a major corruption
issue reproduced by @vadmeste.
Fixes #2976
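A minimal, self-contained sketch of the relaxed check, assuming nil entries in the per-disk error list mean the read succeeded; the names `canReadWithQuorum` and `errFileNotFound` are illustrative, not the repository's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

var errFileNotFound = errors.New("file not found")

// canReadWithQuorum reports whether enough disks answered without error to
// satisfy the read quorum; disks that lost a part while they were down are
// tolerated here and repaired later by the heal operation.
func canReadWithQuorum(diskErrs []error, readQuorum int) bool {
	online := 0
	for _, err := range diskErrs {
		if err == nil {
			online++
		}
	}
	return online >= readQuorum
}

func main() {
	// 4 disks, one of which lost the part while it was down: still readable
	// with a read quorum of 3.
	errs := []error{nil, nil, errFileNotFound, nil}
	fmt.Println(canReadWithQuorum(errs, 3)) // true
}
```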
- Using gjson for constructing xlMetaV1{} in realXLMeta (a sketch follows this list).
- Test for constructing xlMetaV1{} by parsing with gjson.
- Changes made since benchmarks showed a 30-40% improvement in speed.
- See the follow-up comments in issue https://github.com/minio/minio/issues/2208
for more details.
- gjson parsing of parts from xl.json for listParts.
- gjson parsing of statInfo from xl.json for getObjectInfo.
- Vendorizing gjson dependency.
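For illustration, a small self-contained sketch of field extraction with gjson; the xl.json field paths and values shown here are assumptions for the example, not the exact xlMetaV1 layout:

```go
package main

import (
	"fmt"

	"github.com/tidwall/gjson"
)

func main() {
	xlJSON := `{
		"version": "1.0.0",
		"stat": {"size": 1024, "modTime": "2016-10-26T10:00:00Z"},
		"parts": [{"number": 1, "size": 1024, "name": "part.1"}]
	}`

	// gjson extracts individual fields without unmarshalling the whole
	// document, which is where the speed improvement comes from.
	version := gjson.Get(xlJSON, "version").String()
	size := gjson.Get(xlJSON, "stat.size").Int()

	for _, part := range gjson.Get(xlJSON, "parts").Array() {
		fmt.Println(part.Get("number").Int(), part.Get("name").String())
	}
	fmt.Println(version, size)
}
```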
Currently `xl.json` saves the algorithm used for bit-rot
verification. Since the bit-rot algorithm can change in the
future, make sure erasureReadFile doesn't default to a
particular algorithm; instead, use the algorithm recorded in
the checkSumInfo.
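A hedged sketch of the idea: the reader looks up the hash from the algorithm name recorded in the checksum info rather than hard-coding one. The helper name and the algorithm strings below are illustrative assumptions:

```go
// Package xlsketch sketches picking the bit-rot hash from checkSumInfo.
package xlsketch

import (
	"crypto/sha256"
	"crypto/sha512"
	"fmt"
	"hash"
)

// newBitrotHash returns the hasher for the algorithm name recorded in
// xl.json, so a future algorithm change only needs a new case here instead
// of a hard-coded default in erasureReadFile.
func newBitrotHash(algorithm string) (hash.Hash, error) {
	switch algorithm {
	case "sha256":
		return sha256.New(), nil
	case "sha512":
		return sha512.New(), nil
	default:
		return nil, fmt.Errorf("unsupported bit-rot algorithm: %s", algorithm)
	}
}
```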
The reason is that any function relying on `getLoadBalancedQuorumDisks`
cannot have idempotent behavior.
The problem is that, given a set of N disks, it returns only a
shuffled subset of N/2 disks. In a scenario where N/2 disks have
failed, the subset returned by `getLoadBalancedQuorumDisks` is not
guaranteed to avoid those failed disks, so calls using it may succeed
or fail randomly at different points in time.
The proposed change is to move to `getLoadBalancedDisks()`
and use all N shuffled disks as a whole. Since we are not reducing
the solution space, most calls will eventually hit a good disk. This
also provides consistent behavior for all the functions which rely
on shuffled disks.
Fixes #2242
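A rough sketch of the proposed behavior, shuffling the full disk set instead of picking a fixed N/2 subset; the `StorageAPI` stand-in and function body are assumptions for illustration, not minio's actual implementation:

```go
// Package xlsketch illustrates the proposed disk shuffling.
package xlsketch

import "math/rand"

// StorageAPI is a stand-in for minio's storage interface.
type StorageAPI interface{}

// getLoadBalancedDisks returns all N disks in randomized order, preserving
// the whole solution space for the caller instead of a fixed N/2 subset.
func getLoadBalancedDisks(disks []StorageAPI) []StorageAPI {
	shuffled := make([]StorageAPI, len(disks))
	copy(shuffled, disks)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled
}
```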
* XL: Refactor of xl.PutObjectPart and erasureCreateFile.
* GetCheckSum and AddCheckSum methods for xlMetaV1 (sketched after this list).
* Simple unit test case for erasureCreateFile()
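An illustrative sketch of the per-part checksum bookkeeping; the struct layout, field names, and error text below are assumptions that mirror the commit summary rather than the repository's exact code:

```go
// Package xlsketch illustrates per-part checksum bookkeeping on xlMetaV1.
package xlsketch

import "errors"

// checkSumInfo records the bit-rot hash for a single part file.
type checkSumInfo struct {
	Name      string `json:"name"`
	Algorithm string `json:"algorithm"`
	Hash      string `json:"hash"`
}

// xlMetaV1 is trimmed here to the checksum list relevant to this sketch.
type xlMetaV1 struct {
	Erasure struct {
		Checksum []checkSumInfo `json:"checksum"`
	} `json:"erasure"`
}

// AddCheckSum records (or replaces) the checksum entry for a part file.
func (m *xlMetaV1) AddCheckSum(c checkSumInfo) {
	for i, existing := range m.Erasure.Checksum {
		if existing.Name == c.Name {
			m.Erasure.Checksum[i] = c
			return
		}
	}
	m.Erasure.Checksum = append(m.Erasure.Checksum, c)
}

// GetCheckSum looks up the checksum entry recorded for a part file.
func (m xlMetaV1) GetCheckSum(name string) (checkSumInfo, error) {
	for _, c := range m.Erasure.Checksum {
		if c.Name == name {
			return c, nil
		}
	}
	return checkSumInfo{}, errors.New("checksum not found for " + name)
}
```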
This change is needed to make reading from objects future proof
in terms of handling online disks. Our current counter is not
based on affirmative knowledge and relies on an arithmetic
sequence, which can lead to bugs.
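A minimal sketch of counting based on affirmative knowledge, assuming nil slice entries mark offline disks; the name `countOnlineDisks` and the `StorageAPI` stand-in are illustrative only:

```go
// Package xlsketch illustrates affirmative counting of online disks.
package xlsketch

// StorageAPI is a stand-in for minio's storage interface.
type StorageAPI interface{}

// countOnlineDisks derives the online count by inspecting each slot (nil
// entries mark failed disks) instead of relying on an arithmetic sequence.
func countOnlineDisks(disks []StorageAPI) (online int) {
	for _, disk := range disks {
		if disk != nil {
			online++
		}
	}
	return online
}
```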
Using modTime simplifies the understanding of `xl.json` and future
tooling / debugging of the format.
Each metadata op has a list of errors which can be
ignored. This is essentially needed when
- disks are not found
- disks are found but cannot be accessed (permission denied)
- disks are there but fresh disks were added
This is needed since we don't yet have healing code in place
that would have healed the freshly added disks.
Fixes #2072
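A minimal sketch of the "ignorable errors" idea: each metadata operation carries a list of errors that should not abort the overall call. The error values and names below are illustrative stand-ins, not the actual error set:

```go
// Package xlsketch illustrates per-operation ignorable errors.
package xlsketch

import "errors"

var (
	errDiskNotFound     = errors.New("disk not found")
	errDiskAccessDenied = errors.New("disk access denied")
	errVolumeNotFound   = errors.New("volume not found") // e.g. a fresh, unformatted disk
)

// Errors a metadata read can safely ignore as long as quorum is met.
var objMetadataOpIgnoredErrs = []error{
	errDiskNotFound,
	errDiskAccessDenied,
	errVolumeNotFound,
}

// isErrIgnored reports whether err is in the ignored list for this operation.
func isErrIgnored(err error, ignoredErrs []error) bool {
	for _, ignored := range ignoredErrs {
		if errors.Is(err, ignored) {
			return true
		}
	}
	return false
}
```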
AppendFile ensures that it appends the entire buffer and returns
an error otherwise. This patch removes the need for the caller
to inspect the returned 'n' to detect short writes.
Ref #1893
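A hedged sketch of the contract: the call either appends the whole buffer or returns an error, so callers never compare a returned byte count against len(buf). The function writes through `os` directly here for illustration; the real AppendFile lives in the storage layer:

```go
// Package xlsketch illustrates the full-buffer append contract.
package xlsketch

import (
	"io"
	"os"
)

// appendFile writes buf fully to the end of the file at path, returning an
// error on any short write instead of a byte count.
func appendFile(path string, buf []byte) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()

	n, err := f.Write(buf)
	if err != nil {
		return err
	}
	if n < len(buf) {
		// Surface short writes as a hard error; the caller no longer
		// needs to inspect 'n'.
		return io.ErrShortWrite
	}
	return nil
}
```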
Strided erasure distribution uses a new randomized
block distribution for each Put operation. This
information is captured inside `xl.json` for subsequent
Get operations.
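A small sketch of generating such a randomized distribution; the helper name and 1-based indexing are assumptions for illustration:

```go
// Package xlsketch illustrates the randomized erasure block distribution.
package xlsketch

import "math/rand"

// randErasureDistribution returns a fresh 1-based permutation describing
// which disk holds which erasure-coded block; it is generated per Put and
// recorded in xl.json so Get can reconstruct the same ordering.
func randErasureDistribution(numBlocks int) []int {
	distribution := make([]int, numBlocks)
	for i, blockIndex := range rand.Perm(numBlocks) {
		distribution[i] = blockIndex + 1
	}
	return distribution
}
```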
* xl/selfheal: self-heal based on read quorum on GET.
* xl: getReadableDisks() also returns whether self-heal is needed, so this info can be used by ReadFile/SelfHeal/StatFile.
* xl: trigger self-heal from StatFile (see the sketch below).
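An illustrative sketch of this flow, assuming nil disk entries mark stale or missing copies; all function signatures and the `StorageAPI` stand-in are assumptions, not minio's actual code:

```go
// Package xlsketch illustrates the self-heal trigger flow.
package xlsketch

// StorageAPI is a stand-in for minio's storage interface.
type StorageAPI interface{}

// getReadableDisks returns the disks usable for the read and a flag telling
// the caller that some disks are stale or missing and need healing.
func getReadableDisks(disks []StorageAPI) (readable []StorageAPI, healNeeded bool) {
	for _, disk := range disks {
		if disk != nil {
			readable = append(readable, disk)
		} else {
			healNeeded = true
		}
	}
	return readable, healNeeded
}

// statFile checks disk availability and, if healing is needed, triggers the
// self-heal callback asynchronously so the read path is not blocked. The
// actual stat of the object is elided in this sketch.
func statFile(disks []StorageAPI, volume, path string, heal func(volume, path string)) {
	if _, healNeeded := getReadableDisks(disks); healNeeded {
		go heal(volume, path)
	}
}
```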