- total number of S3 API calls per server
- maximum wait duration for any S3 API call
This implementation is primarily meant for situations
where the HDDs are not capable enough to handle the
incoming workload and there is no way to throttle the
client. This feature allows the MinIO server to throttle
itself so that we do not overwhelm the HDDs; a sketch of
the idea follows.
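A minimal sketch of the idea, assuming a hypothetical `maxClients` middleware; the names (`limit`, `deadline`) are illustrative, not the actual implementation:

```go
package sketch

import (
	"net/http"
	"time"
)

// maxClients limits concurrent requests to limit and rejects callers
// that cannot acquire a slot within deadline.
func maxClients(h http.Handler, limit int, deadline time.Duration) http.Handler {
	sem := make(chan struct{}, limit) // counting semaphore
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case sem <- struct{}{}: // got a slot
			defer func() { <-sem }()
			h.ServeHTTP(w, r)
		case <-time.After(deadline): // waited too long for a slot
			http.Error(w, "server busy", http.StatusServiceUnavailable)
		}
	})
}
```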
- acquire a single leader lock for all background operations
- healing, crawling and applying lifecycle policies.
- simplify lifecycle to avoid network calls, which was a
bug in the implementation - we should hold the leader
lock and do everything from there, since we have access
to the entire namespace.
- make listing and walking not interfere with requests by
slowing themselves down, like the crawler does.
- effectively use the global context everywhere to ensure
proper shutdown in cache, lifecycle and healing
- don't read `format.json` for Prometheus metrics in
the StorageInfo() call.
- Add conservative timeouts up to 3 minutes
for internode communication
- Add aggressive timeouts of 30 seconds
for gateway communication (see the sketch below)
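A rough sketch of the two timeout classes, assuming plain context deadlines; `withCallTimeout` and the constant names are made up, not the actual code paths:

```go
package sketch

import (
	"context"
	"time"
)

const (
	internodeTimeout = 3 * time.Minute  // conservative: server-to-server calls
	gatewayTimeout   = 30 * time.Second // aggressive: gateway backend calls
)

// withCallTimeout derives a per-call context so a slow peer cannot
// stall the caller indefinitely.
func withCallTimeout(ctx context.Context, gateway bool) (context.Context, context.CancelFunc) {
	if gateway {
		return context.WithTimeout(ctx, gatewayTimeout)
	}
	return context.WithTimeout(ctx, internodeTimeout)
}
```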
Fixes #9105
Fixes #8732
Fixes #8881
Fixes #8376
Fixes #9028
- extract userTags from Get/Head request (#1249)
- fix: Context cancellation not handled (#1250)
- Check for correct http status in remove object tagging (#1248)
- simplify extracting metadata in Head/Get object (#1245)
- fix: close and remove .minio.part file on errors (#1243)
The NAS gateway creates non-multipart uploads with mode 0666,
but multipart uploads are created with a differing mode of 0644.
Both modes should be equal; otherwise files end up with different
permissions depending on their size. This patch solves that by
using 0666 in both cases, as sketched below.
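A minimal sketch of the fix, with a hypothetical `createUploadFile` helper standing in for both upload paths:

```go
package sketch

import "os"

// Both multipart and non-multipart uploads now create files with the
// same permissive mode, so permissions no longer depend on file size.
const uploadFileMode = os.FileMode(0666)

func createUploadFile(path string) (*os.File, error) {
	return os.OpenFile(path, os.O_CREATE|os.O_WRONLY, uploadFileMode)
}
```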
This is to improve responsiveness for all
admin API operations and to allow callers
to cancel any on-going admin operations
if they happen to be waiting too long; a
cancellation sketch follows.
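An illustrative sketch of a cancellable admin operation; `runAdminOp` and its step-based shape are assumptions, not the actual handler code:

```go
package sketch

import "context"

// runAdminOp executes an operation in steps and stops as soon as the
// caller cancels or abandons the request.
func runAdminOp(ctx context.Context, steps []func() error) error {
	for _, step := range steps {
		if err := ctx.Err(); err != nil {
			return err // caller canceled; stop immediately
		}
		if err := step(); err != nil {
			return err
		}
	}
	return nil
}
```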
Canonicalize the ENVs so that we can bring these ENVs
in as part of the config values in a subsequent change.
- fix location of per bucket usage to `.minio.sys/buckets/<bucket_name>/usage-cache.bin`
- fix location of the overall usage in `json` at `.minio.sys/buckets/.usage.json`
(avoids conflicts with a bucket named `usage.json`)
- fix location of the overall usage in `msgp` at `.minio.sys/buckets/.usage.bin`
(avoids conflicts with a bucket named `usage.bin`; see the layout sketch below)
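A sketch of the resulting layout; the identifiers below are made up, only the paths come from this change:

```go
package sketch

import "path"

const minioMetaBucket = ".minio.sys"

// Per-bucket usage cache lives under the bucket's metadata directory.
func bucketUsageCachePath(bucket string) string {
	return path.Join(minioMetaBucket, "buckets", bucket, "usage-cache.bin")
}

// Overall usage files are dot-prefixed so they can never collide with
// user buckets named "usage.json" or "usage.bin".
var (
	usageJSONPath = path.Join(minioMetaBucket, "buckets", ".usage.json")
	usageBinPath  = path.Join(minioMetaBucket, "buckets", ".usage.bin")
)
```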
As an optimization of healing, HealObjects() avoids sending an
object to the background healing subsystem when the object is
present on all disks.
However, HealObjects() should have checked the scan type: if this
is a deep scan, always pass the object to the healing subsystem,
as sketched below.
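A sketch of the corrected check, with made-up type and function names; the point is that a deep scan always queues the object:

```go
package sketch

type healScanMode int

const (
	healNormalScan healScanMode = iota
	healDeepScan
)

// shouldQueueForHeal mirrors the corrected check: a deep scan always
// queues the object; a normal scan may skip objects present on all disks.
func shouldQueueForHeal(mode healScanMode, presentOnAllDisks bool) bool {
	if mode == healDeepScan {
		return true
	}
	return !presentOnAllDisks
}
```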
Currently, the tree walk needed to list objects in a specific
set quits listing as soon as it finds no entries on a disk, which
is wrong.
This affected background healing, because the latter uses the
tree walk directly. If an object does not exist on the first
disk, for example, it would seem as if the object does not
exist at all and no healing work is needed.
This commit fixes the behavior; a simplified sketch follows.
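A simplified sketch of the fix, with illustrative names: consult every disk before concluding an entry does not exist, instead of trusting the first disk:

```go
package sketch

// entryExists consults every disk before concluding an entry is gone,
// instead of trusting the first disk that reports nothing.
func entryExists(disks []func(entry string) bool, entry string) bool {
	for _, hasEntry := range disks {
		if hasEntry != nil && hasEntry(entry) {
			return true // present on at least one disk: keep it in the walk
		}
	}
	return false // missing on every disk
}
```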
The staleness of a lock should be determined by
a quorum number of entries returning stale;
this allows for situations when locks are held
while nodes are down - we don't accidentally
clear locks when they are valid and correct.
Also, lock maintenance should be run by all servers,
not just one server; stale-lock cleanup needs to run
outside the requirement of holding distributed locks.
A quorum sketch follows.
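A sketch of the quorum rule with hypothetical names; a lock is cleared only when at least a quorum of peers report it stale:

```go
package sketch

// isLockStale clears a lock only when at least quorum peers report it
// stale; fewer stale answers can simply mean some nodes are down while
// the lock is still validly held.
func isLockStale(staleVotes []bool, quorum int) bool {
	stale := 0
	for _, v := range staleVotes {
		if v {
			stale++
		}
	}
	return stale >= quorum
}
```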
Thanks @klauspost for reproducing this issue
Some AWS SDKs implicitly rely on this value at times
to calculate the right number of parts during a parallel
GetObject request. This feature is used along with
content-range, so we should support it as well; a sketch
follows.
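An illustrative sketch; presumably this is the `x-amz-mp-parts-count` response header that AWS S3 returns for part-numbered GetObject requests, and `setPartsCountHeader` is a made-up helper:

```go
package sketch

import (
	"net/http"
	"strconv"
)

// setPartsCountHeader advertises the total number of parts on a
// ranged/part GetObject response, mirroring AWS S3 behavior.
func setPartsCountHeader(w http.ResponseWriter, partsCount int) {
	if partsCount > 0 {
		w.Header().Set("x-amz-mp-parts-count", strconv.Itoa(partsCount))
	}
}
```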
This commit fixes the env. variable in the
KMS guide used to specify the CA certificates
for the KES server.
Previously, the env. variable `MINIO_KMS_KES_CAPATH` had
been used - which works in non-containerized environments
due to how MinIO merges the config file and environment
variables. In containerized environments (e.g. Docker)
this does not work, and trying to specify `MINIO_KMS_KES_CAPATH`
instead of `MINIO_KMS_KES_CA_PATH` eventually leads to MinIO not
trusting the certificate presented by the KES server.
See: cfd12914e1/cmd/crypto/config.go (L186)
- avoid setting the last heal activity when starting self-healing,
which can confuse users into thinking that a self-healing
cycle was already performed.
- add info about the next background healing round
An OperationTimedOut error occurs when locking
times out while trying to acquire a lock. This
error should be returned appropriately to the
client with HTTP status 408 (Request Timeout).
This translation was broken; fix it, as sketched below.
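A minimal sketch of the corrected translation, with made-up names; only the 408 mapping comes from the change:

```go
package sketch

import (
	"errors"
	"net/http"
)

var errOperationTimedOut = errors.New("operation timed out")

// toHTTPStatus maps a lock-acquisition timeout to 408 instead of a
// generic 500.
func toHTTPStatus(err error) int {
	if errors.Is(err, errOperationTimedOut) {
		return http.StatusRequestTimeout // 408
	}
	return http.StatusInternalServerError
}
```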
The bulk delete API was using cleanupObjectsBulk(), which calls
the posix listing and delete APIs to remove each object's internal
files in the backend (xl.json and parts) one by one.
Add DeletePrefixes to the storage API to remove the content
of a directory in a single call.
Also use one removal goroutine per disk to accelerate removal,
as sketched below.
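A sketch of the per-disk fan-out, assuming a hypothetical `DeletePrefix` method on each disk; error handling is simplified:

```go
package sketch

import "sync"

type disk interface {
	DeletePrefix(prefix string) error
}

// deletePrefixAll fans out one removal goroutine per disk so slow disks
// do not serialize the whole bulk delete.
func deletePrefixAll(disks []disk, prefix string) {
	var wg sync.WaitGroup
	for _, d := range disks {
		wg.Add(1)
		go func(d disk) {
			defer wg.Done()
			_ = d.DeletePrefix(prefix) // per-disk errors elided for brevity
		}(d)
	}
	wg.Wait()
}
```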