- Data from disk was being read after bitrot verification and
returned for GetObject. Strictly speaking this does not
guarantee bitrot protection, as disks may return bad data
even temporarily.
- This fix reads data from disk, verifies it for bitrot and
then returns the verified data to the client directly.
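A minimal sketch of the fixed flow, assuming a hypothetical helper and SHA-256 standing in for the configured bitrot algorithm: the block is read, verified, and only then written to the client.

```go
package bitrot

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"io"
)

// readAndVerify is a hypothetical helper: read one block from disk, verify
// its bitrot checksum, and only then write the data out to the client.
func readAndVerify(disk io.Reader, blockSize int64, expectedSum []byte, w io.Writer) error {
	buf := make([]byte, blockSize)
	n, err := io.ReadFull(disk, buf)
	if err != nil && err != io.ErrUnexpectedEOF {
		return err
	}
	buf = buf[:n]

	// Verify before any byte reaches the caller; a mismatch means the disk
	// returned corrupted data and the read must fail.
	sum := sha256.Sum256(buf) // SHA-256 stands in for the configured bitrot algorithm
	if !bytes.Equal(sum[:], expectedSum) {
		return errors.New("bitrot: checksum mismatch, not returning data")
	}

	_, err = w.Write(buf)
	return err
}
```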
This PR implements an object layer which
combines the input erasure sets of XL layers
into a unified namespace.
This object layer extends the existing
erasure coded implementation. The design
assumes that providing > 16 disks is a
static configuration, i.e. if you started
the setup with 32 disks as 4 sets of 8
disks each, then you would always need to
provide those 4 sets.
Some design details and restrictions:
- Objects are distributed using consistent ordering
to a unique erasure coded set (a placement sketch
follows this list).
- Each set has its own dsync, so locks are synchronized
properly at the set (erasure layer) level.
- Each set still has a maximum requirement of 16 disks;
you can start with multiple such sets statically.
- Sets are a static collection of disks and cannot be
changed; there is no elastic expansion or removal allowed.
- ListObjects() across sets can be noticeably
slower since List happens on all servers,
and the results are merged at the sets layer.
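A minimal placement sketch for the first restriction above, assuming a simple CRC-based hash (the actual hash used by the sets layer may differ):

```go
package sets

import "hash/crc32"

// setIndex hashes the object key and reduces it modulo the number of
// erasure sets, so a given key always maps to the same set.
func setIndex(object string, setCount int) int {
	return int(crc32.ChecksumIEEE([]byte(object)) % uint32(setCount))
}
```

With 32 disks arranged as 4 sets of 8, setIndex("testbucket/photo.jpg", 4) always picks the same set for that key, which is what makes the per-set placement deterministic.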
Fixes #5465 Fixes #5464 Fixes #5461 Fixes #5460 Fixes #5459 Fixes #5458 Fixes #5488 Fixes #5489 Fixes #5497 Fixes #5496
An incoming PutObject() request might have inputs of the
following form, e.g.:
- bucketName is 'testbucket'
- objectName is '/'
bucketName exists and was previously created, but there
are no other objects in this bucket. In a situation like
this parentDirIsObject() goes into an infinite loop.
Verifying whether '/' is an object fails on both backends,
but `path.Dir('/')` returns `'/'`, which causes the closure
to loop onto itself.
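A minimal sketch of the loop and the stop condition that fixes it, with `isObject` as a placeholder for the real backend check:

```go
package xl

import "path"

// parentDirIsObject walks up the prefix checking whether any parent is an
// object. path.Dir("/") returns "/", so without the explicit root check the
// walk never terminates when objectName is "/".
func parentDirIsObject(bucket, parent string, isObject func(bucket, prefix string) bool) bool {
	for parent != "." && parent != "/" {
		if isObject(bucket, parent) {
			return true
		}
		parent = path.Dir(parent) // Dir("/") == "/", Dir("a") == "."
	}
	return false
}
```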
Fixes #4940
The reason is that any function relying on `getLoadBalancedQuorumDisks`
cannot have idempotent behavior.
The problem is that, given a set of N disks, it returns just a
shuffled subset of N/2 disks. In a scenario where N/2 disks
have failed, successive calls to `getLoadBalancedQuorumDisks`
do not return the same subset of disks, so calls using these
disks might succeed or fail randomly at different points in time.
The proposed change is to move to `getLoadBalancedDisks()`
and use the shuffled N disks as a whole; most of the time we will
hit a good disk since we are not reducing our solution space. This
also provides consistent behavior for all the functions which rely
on shuffled disks.
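A minimal sketch of the proposed behavior, with `StorageAPI` standing in for the real disk interface:

```go
package xl

import "math/rand"

// StorageAPI stands in for the real disk interface.
type StorageAPI interface{}

// getLoadBalancedDisks returns all N disks in shuffled order instead of a
// quorum-sized subset, so the caller's view of the disks (failed ones
// included) stays consistent across calls.
func getLoadBalancedDisks(disks []StorageAPI) []StorageAPI {
	shuffled := make([]StorageAPI, len(disks))
	copy(shuffled, disks)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled
}
```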
Fixes #2242
Each metadata op has a list of errors which can be
ignored; this is essentially needed when
- disks are not found
- disks are found but cannot be accessed (permission denied)
- disks are present but were freshly added
This is needed since we don't yet have healing code in place
that would have healed the freshly added disks.
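A minimal sketch of such an ignored-error list, using hypothetical sentinel errors in place of the real ones:

```go
package xl

import "errors"

// Hypothetical sentinel errors standing in for the real ones.
var (
	errDiskNotFound     = errors.New("disk not found")
	errDiskAccessDenied = errors.New("disk access denied")
	errFileNotFound     = errors.New("file not found") // e.g. a freshly added disk
)

// baseIgnoredErrs is the per-operation list of errors that metadata ops may
// skip while aggregating results from all disks.
var baseIgnoredErrs = []error{errDiskNotFound, errDiskAccessDenied, errFileNotFound}

// isErrIgnored reports whether err is in the ignored list.
func isErrIgnored(err error, ignoredErrs []error) bool {
	for _, ignored := range ignoredErrs {
		if errors.Is(err, ignored) {
			return true
		}
	}
	return false
}
```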
Fixes #2072
Previously xl.isObject() returned false if one of the disks didn't
have the object, although the object may be present on another disk.
This patch fixes the issue by returning false only if the given
prefix doesn't exist on any of the disks.
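A minimal sketch of the fixed check, with `StorageAPI` and `statMetaFile` as hypothetical placeholders:

```go
package xl

// StorageAPI stands in for the real disk interface.
type StorageAPI interface{}

// isObject treats the prefix as an object if any disk has its metadata, and
// returns false only when every disk reports it missing.
func isObject(disks []StorageAPI, bucket, prefix string,
	statMetaFile func(disk StorageAPI, bucket, prefix string) error) bool {
	for _, disk := range disks {
		if disk == nil {
			continue
		}
		if statMetaFile(disk, bucket, prefix) == nil {
			return true // present on at least one disk
		}
	}
	return false // missing on all disks
}
```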
Fixes #1855
* xl/selfheal: selfheal based on read quorum on GET
* xl: getReadableDisks() also returns whether self-heal is needed so that this info can be used by ReadFile/SelfHeal/StatFile.
* xl: trigger selfheal from StatFile.
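A minimal sketch of the trigger described above, with hypothetical names; read quorum still gates the request, and the heal runs in the background:

```go
package xl

import "errors"

var errReadQuorumNotMet = errors.New("read quorum not met") // hypothetical

// statWithSelfHeal serves the request when read quorum is met and, if some
// disks are missing the file, kicks off a self-heal asynchronously.
func statWithSelfHeal(onlineDisks, missingDisks, readQuorum int, heal func()) error {
	if onlineDisks < readQuorum {
		return errReadQuorumNotMet
	}
	if missingDisks > 0 {
		go heal() // self-heal triggered from GET/StatFile
	}
	return nil
}
```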