As an optimization of the healing, HealObjects() avoids sending an
object to the background healing subsystem when the object is
present on all disks.
However, HealObjects() should also check the scan type: if it is a
deep scan, the object should always be passed to the healing
subsystem (see the sketch after the list below).
- Background Heal routine receives heal requests from a channel, either to
heal format, buckets or objects
- Daily sweeper lists all objects in all buckets; these objects
don't necessarily have read quorum, so they can be removed if
they are unhealable
- Heal daily ops receives objects from the daily sweeper
and sends them to the heal routine.
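A minimal sketch of the intended scan-type check, with hypothetical names
(`scanMode`, `healDeepScan`, `maybeQueueHeal`, `healTask`) standing in for
the server's real types:
```
package heal

type scanMode int

const (
	healNormalScan scanMode = iota
	healDeepScan
)

type healTask struct {
	bucket, object string
}

// maybeQueueHeal skips the background heal only for normal scans; a deep
// scan always forwards the object so that part bitrot can be verified.
func maybeQueueHeal(mode scanMode, presentOnAllDisks bool, task healTask, healCh chan<- healTask) {
	if mode != healDeepScan && presentOnAllDisks {
		// Fast path: nothing obviously missing, skip the heal queue.
		return
	}
	healCh <- task
}
```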
This PR implements an object layer which
combines input erasure sets of XL layers
into a unified namespace.
This object layer extends the existing
erasure coded implementation; this design
assumes that providing > 16 disks is
a static configuration as well, i.e. if you started
the setup with 32 disks as 4 sets with 8 disks per
pack, then you would need to provide 4 sets always.
Some design details and restrictions:
- Objects are distributed using consistent ordering
to a unique erasure coded layer (see the sketch
after this list).
- Each pack has its own dsync, so locks are synchronized
properly at the pack (erasure layer) level.
- Each pack still has the maximum-of-16-disks
requirement; you can start with multiple
such sets statically.
- Sets are a static set of disks and cannot be
changed; there is no elastic expansion allowed.
- Sets are a static set of disks and cannot be
changed; there is no elastic removal allowed.
- ListObjects() across sets can be noticeably
slower since List happens on all servers,
and the results are merged at this sets layer.
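One simple way to realize the "consistent ordering" above is to hash the
object name and reduce it modulo the number of sets, so every node
independently picks the same set for the same name. The helper name and the
choice of CRC32 below are assumptions for illustration, not the actual
implementation:
```
package sets

import "hash/crc32"

// getSetIndex deterministically maps an object name to one erasure set,
// so the same name always lands on the same set regardless of which node
// computes it.
func getSetIndex(object string, setCount int) int {
	if setCount <= 0 {
		return 0
	}
	return int(crc32.ChecksumIEEE([]byte(object))) % setCount
}
```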
Fixes #5465 Fixes #5464 Fixes #5461 Fixes #5460 Fixes #5459 Fixes #5458 Fixes #5488 Fixes #5489 Fixes #5497 Fixes #5496
- Changes related to moving admin APIs
- admin APIs now have an endpoint under /minio/admin
- admin APIs are now versioned - a new API to serve the version is
added at "GET /minio/admin/version", and all API operations have the
path prefix /minio/admin/v1/<operation>
- new service stop API added
- credentials change API is moved to /minio/admin/v1/config/credential
- credentials change API and configuration get/set API now require TLS
so that credentials are protected
- all API requests now receive JSON
- heal APIs are disabled as they will be changed substantially
- Heal API changes
Heal API is now provided at a single endpoint with the ability for a
client to start a heal sequence on all the data in the server, a
single bucket, or under a prefix within a bucket.
When a heal sequence is started, the server returns a unique token
that needs to be used for subsequent 'status' requests to fetch heal
results.
On each status request from the client, the server returns heal result
records that it has accumulated since the previous status request. The
server accumulates up to 1000 records and pauses healing further
objects until the client requests status. If the client does not
request any further records for a long time, the server aborts the
heal sequence automatically.
A heal result record is returned for each entity healed on the server,
such as system metadata, object metadata, buckets and objects, and has
information about the before and after states on each disk.
A client may request to force restart a heal sequence - this causes
the running heal sequence to be aborted at the next safe spot and a
new heal sequence to be started.
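To illustrate the start/poll flow described above, here is a rough
client-side sketch. The endpoint path, query parameter and JSON field names
are assumptions for illustration only, and request signing/TLS details are
omitted:
```
package healclient

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// startResp and statusResp are placeholder shapes; the real API returns a
// richer set of heal result records per status call.
type startResp struct {
	Token string `json:"clientToken"`
}

type statusResp struct {
	Items []json.RawMessage `json:"items"`
	Done  bool              `json:"done"`
}

// pollHeal starts a heal sequence, keeps the returned token, and polls the
// status endpoint until the server reports the sequence is finished. Each
// poll drains the records accumulated since the previous one, which lets
// the server resume healing further objects.
func pollHeal(endpoint string) error {
	resp, err := http.Post(endpoint, "application/json", nil)
	if err != nil {
		return err
	}
	var start startResp
	err = json.NewDecoder(resp.Body).Decode(&start)
	resp.Body.Close()
	if err != nil {
		return err
	}

	for {
		st, err := http.Get(fmt.Sprintf("%s?clientToken=%s", endpoint, start.Token))
		if err != nil {
			return err
		}
		var status statusResp
		err = json.NewDecoder(st.Body).Decode(&status)
		st.Body.Close()
		if err != nil {
			return err
		}
		if status.Done {
			return nil
		}
		time.Sleep(time.Second) // avoid hammering the server between polls
	}
}
```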
This change makes data types public so that
we can ask projects to implement gateways
externally rather than maintaining them in our repo.
All publicly exported structs are maintained in object-api-datatypes.go
completePart --> CompletePart
uploadMetadata --> MultipartInfo
All other exported errors are in object-api-errors.go
In the algorithm to check if an object requires healing, in addition to
checking that all disks have xl.json present, we should check that all
parts of the object are present and have valid blake2b checksums.
Also fixed a minor compilation error in heal-objects-list.go.
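A minimal sketch of the part-verification step, assuming each part's
checksum is an unkeyed BLAKE2b-512 hex digest recorded in xl.json; the
helper and its arguments are illustrative rather than the actual XL backend
code:
```
package xlstorage

import (
	"encoding/hex"
	"io"
	"os"

	"golang.org/x/crypto/blake2b"
)

// verifyPartChecksum hashes one part file with unkeyed BLAKE2b-512 and
// compares the result to the hex digest recorded for that part.
func verifyPartChecksum(partPath, expectedHex string) (bool, error) {
	f, err := os.Open(partPath)
	if err != nil {
		return false, err
	}
	defer f.Close()

	h, err := blake2b.New512(nil) // nil key => plain BLAKE2b-512
	if err != nil {
		return false, err
	}
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == expectedHex, nil
}
```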
For listing of objects needing heal, we list all objects present on all
the disks and return the set union. We were incorrectly dropping objects
that had not already been seen on previously scanned disks.
Sample directory layout of disks in a 4-disk setup:
`/tmp/1`, `/tmp/2`, `/tmp/3`, `/tmp/4` are directories used as disks here.
`test` is the bucket, `obj1` and `obj2` are the objects.
```
/tmp/1/test
└── obj2
    ├── part.1
    ├── part.2
    └── xl.json
/tmp/2/test
└── obj1
    ├── part.1
    ├── part.2
    └── xl.json
/tmp/3/test
├── obj1
│   ├── part.1
│   ├── part.2
│   └── xl.json
└── obj2
    ├── part.1
    ├── part.2
    └── xl.json
/tmp/4/test
[This is empty]
```
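Given a layout like the one above, a small sketch of the set-union merge
looks like this (per-disk entries are plain strings here for brevity):
```
package heallist

import "sort"

// mergeEntries keeps every entry seen on any disk exactly once, so obj1 and
// obj2 from the layout above both survive even though neither is present on
// every disk.
func mergeEntries(perDisk [][]string) []string {
	seen := make(map[string]struct{})
	for _, entries := range perDisk {
		for _, e := range entries {
			seen[e] = struct{}{}
		}
	}
	merged := make([]string, 0, len(seen))
	for e := range seen {
		merged = append(merged, e)
	}
	sort.Strings(merged)
	return merged
}
```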
Previously, erasure backend's `listDirFactory` may return errors which
were explicitly ignored. With this change, it returns nil. Superfluous
checks at higher-layers for ignored errors are removed as well.
Make sure to skip reserved bucket names in `ListBuckets()`;
the current code didn't skip them properly. Also generalize
this behavior for both XL and FS.
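A sketch of the shared filtering step; the exact reserved names ("minio",
the ".minio" prefix) are assumptions for illustration:
```
package objectapi

import "strings"

// isReservedBucket reports whether a bucket name is reserved for internal
// server use and must not appear in user-facing listings.
func isReservedBucket(bucket string) bool {
	return bucket == "minio" || strings.HasPrefix(bucket, ".minio")
}

// filterReservedBuckets drops reserved names from a listing so both the XL
// and FS backends share the same behavior.
func filterReservedBuckets(buckets []string) []string {
	out := make([]string, 0, len(buckets))
	for _, b := range buckets {
		if !isReservedBucket(b) {
			out = append(out, b)
		}
	}
	return out
}
```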
This is implemented so that issues like the one in the
following flow don't affect the behavior of the operation.
```
GetObjectInfo()
.... --> Time window for mutation (no lock held)
.... --> Time window for mutation (no lock held)
GetObject()
```
This happens when two simultaneous uploads are made
to the same object; the client may then receive wrong
info about the object.
Another classic example is "CopyObject" API itself
which reads from a source object and copies to
destination object.
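The fix, conceptually, is to take one read lock that spans both the
metadata lookup and the data read. A simplified sketch, with a plain
sync.RWMutex standing in for the server's namespace lock:
```
package objectapi

import "sync"

// objLock stands in for the per-object namespace read/write lock; the name
// and its scope are illustrative, not the server's actual locking types.
var objLock sync.RWMutex

// statThenRead takes a single read lock covering both the metadata lookup
// and the data read, so no concurrent upload can mutate the object between
// the two steps.
func statThenRead(getInfo func() (size int64, err error), read func(size int64) error) error {
	objLock.RLock()
	defer objLock.RUnlock()

	size, err := getInfo()
	if err != nil {
		return err
	}
	return read(size)
}
```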
Fixes #3370 Fixes #2912
- abstract out instrumentation information.
- use separate lockInstance type that encapsulates the nsMutex, volume,
path and opsID as the frontend or top-level lock object.
- Instrumentation for locks.
- Detailed test coverage.
- Adding RPC control handler to fetch lock instrumentation.
- RPC control handlers suite tests with a test RPC server.
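A self-contained sketch of what such a lockInstance might look like; the
types and field names here are illustrative, and the real instrumentation
(lock origin, timestamps, opsID tracking) is omitted:
```
package lockinstrument

import "sync"

// nsLockMap is a minimal stand-in for the namespace lock map; the real type
// also keeps instrumentation such as who requested a lock and when.
type nsLockMap struct {
	mu    sync.Mutex
	locks map[string]*sync.RWMutex
}

func (n *nsLockMap) getLock(volume, path string) *sync.RWMutex {
	n.mu.Lock()
	defer n.mu.Unlock()
	if n.locks == nil {
		n.locks = make(map[string]*sync.RWMutex)
	}
	key := volume + "/" + path
	if _, ok := n.locks[key]; !ok {
		n.locks[key] = &sync.RWMutex{}
	}
	return n.locks[key]
}

// lockInstance is the top-level lock object: it bundles the lock map with a
// volume, path and opsID so callers only have to say Lock()/Unlock().
type lockInstance struct {
	ns           *nsLockMap
	volume, path string
	opsID        string
}

func (li *lockInstance) Lock()   { li.ns.getLock(li.volume, li.path).Lock() }
func (li *lockInstance) Unlock() { li.ns.getLock(li.volume, li.path).Unlock() }
```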
This patch introduces a new command line 'control'
- minio control
To manage a minio server, connecting through the GoRPC API frontend.
- minio control heal
Is implemented for healing objects.
The reason is that any function relying on `getLoadBalancedQuorumDisks`
cannot possibly have idempotent behavior.
The problem is that, given a set of N disks, it returns just a
shuffled N/2 of them. In a scenario where we have N/2
failed disks, the value returned by `getLoadBalancedQuorumDisks`
does not necessarily map to the same failed disks, so calls using such
disks might succeed or fail randomly at different points in time.
The proposed change is to move to `getLoadBalancedDisks()`
and use the shuffled N disks as a whole. Since we are not reducing our
solution space, most of the time we will hit a good disk. This
also provides consistent behavior for all the functions which rely
on shuffled disks.
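A sketch of the proposed helper, shuffling the full disk slice; disks are
plain endpoint strings here for illustration:
```
package xl

import "math/rand"

// getLoadBalancedDisks returns all N disks in a shuffled order instead of
// only a shuffled half, so a caller that walks the slice eventually reaches
// every live disk.
func getLoadBalancedDisks(disks []string) []string {
	shuffled := make([]string, len(disks))
	copy(shuffled, disks)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled
}
```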
Fixes #2242