This is a consolidation effort to avoid naked
strings in the codebase. Whenever possible, use
constants that can be reused elsewhere.
This also fixes issues reported by `goconst ./...`.
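As a minimal illustration of the pattern (the constant and
function names here are hypothetical, not taken from the
actual diff):
```go
package main

import "fmt"

// Before: "application/json" appeared as a naked string at
// every call site. After: one named constant that can be
// reused elsewhere. Names are illustrative only.
const mimeApplicationJSON = "application/json"

func contentType(ext string) string {
	if ext == ".json" {
		return mimeApplicationJSON
	}
	return "application/octet-stream"
}

func main() {
	fmt.Println(contentType(".json"))
}
```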
This is to utilize an optimized sha256 checksum
implementation written by @fwessels.
blake2b lacks such optimizations on the ARM platform,
so this switch gives us a significant performance
boost there. As expected, blake2b on ARM64 is slower:
```
BenchmarkSize1K-4 30000 44015 ns/op 23.26 MB/s
BenchmarkSize8K-4 5000 335448 ns/op 24.42 MB/s
BenchmarkSize32K-4 1000 1333960 ns/op 24.56 MB/s
BenchmarkSize128K-4 300 5328286 ns/op 24.60 MB/s
```
sha256 on ARM64 is faster by more than an order of
magnitude, coming close to the AVX performance of blake2b:
```
BenchmarkHash8Bytes-4 1000000 1446 ns/op 5.53 MB/s
BenchmarkHash1K-4 500000 3229 ns/op 317.12 MB/s
BenchmarkHash8K-4 100000 14430 ns/op 567.69 MB/s
BenchmarkHash1M-4 1000 1640126 ns/op 639.33 MB/s
```
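For context, the optimized package is designed as a drop-in
replacement for crypto/sha256; a minimal usage sketch,
assuming the github.com/minio/sha256-simd import path:
```go
package main

import (
	"fmt"
	"io"
	"strings"

	// Drop-in replacement for crypto/sha256 with
	// SIMD/ARM-optimized implementations.
	sha256 "github.com/minio/sha256-simd"
)

func main() {
	// One-shot sum over a byte slice.
	sum := sha256.Sum256([]byte("hello"))
	fmt.Printf("%x\n", sum)

	// Streaming sum via the standard hash.Hash interface.
	h := sha256.New()
	io.Copy(h, strings.NewReader("hello"))
	fmt.Printf("%x\n", h.Sum(nil))
}
```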
This change brings in changes at multiple places:
- Reuse buffers at almost all locations, ranging
from rpc, fs, xl, checksum, etc. (one common
reuse pattern is sketched after this list).
- Change caching behavior to disable itself
under low-memory conditions, i.e. < 8GB of RAM.
- Only cache objects up to 1/10th the size of
the cache: for example, if the cache size is 4GB,
the maximum object size that will be cached
is 400MB. This is an optimization to cache
many objects rather than a few larger ones.
- If the object cache is enabled, the default GC
percent is reduced to 20% in light of newly
observed GC behavior (see the GC sketch below).
If cache utilization reaches 75% of the maximum,
the GC percent is further reduced to 10% to make
GC more aggressive.
- Do not use *bytes.Buffer* because of how it
grows: each time it grows, *bytes.Buffer*
allocates a new, larger internal buffer and
copies the data over. This is undesirable for
us, so a new cappedWriter was implemented,
capped to a desired size, beyond which all
writes are rejected (sketched below).
Possible fix for #3403.
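A hedged illustration of one common buffer-reuse pattern in
Go (sync.Pool); the actual change may reuse buffers through
a different mechanism:
```go
package main

import (
	"fmt"
	"sync"
)

// A pool of reusable 4KB scratch buffers. sync.Pool is one
// common way to reuse buffers in Go; it stands in here for
// whatever reuse scheme the actual change employs.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 4096) },
}

func process(data []byte) {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf) // return the buffer for reuse
	n := copy(buf, data)
	fmt.Println("processed", n, "bytes")
}

func main() {
	process([]byte("hello"))
}
```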
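The GC tuning described above maps onto the standard
runtime/debug API; a minimal sketch (the 20%/10%/75%
values come from the description, the helper name and
signature are hypothetical):
```go
package objcache

import "runtime/debug"

// tuneGC applies the GC policy described above: 20% by
// default when the object cache is enabled, 10% once cache
// utilization crosses 75%. Illustrative only.
func tuneGC(cacheUsed, cacheMax uint64) {
	if cacheUsed >= cacheMax*3/4 {
		debug.SetGCPercent(10) // aggressive GC at >= 75% utilization
		return
	}
	debug.SetGCPercent(20)
}
```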
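A minimal sketch of a capped writer (names and exact error
semantics are illustrative; the real cappedWriter may differ):
```go
package capped

import "errors"

// errTooLarge is returned once a write would exceed the cap.
var errTooLarge = errors.New("write exceeds capped size")

// cappedWriter accumulates into a pre-allocated buffer and
// rejects any write that would grow it past a fixed cap,
// avoiding the grow-and-copy behavior of bytes.Buffer.
type cappedWriter struct {
	buf   []byte
	limit int
}

func newCappedWriter(limit int) *cappedWriter {
	return &cappedWriter{buf: make([]byte, 0, limit), limit: limit}
}

func (w *cappedWriter) Write(p []byte) (int, error) {
	if len(w.buf)+len(p) > w.limit {
		return 0, errTooLarge // reject writes beyond the cap
	}
	w.buf = append(w.buf, p...)
	return len(p), nil
}
```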
Currently `xl.json` saves algorithm information for bit-rot
verification. Since the bit-rot algorithms can change in the
future, make sure erasureReadFile doesn't default to
a particular algorithm; instead, use the checkSumInfo.
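Conceptually, the verifier is chosen from the persisted name
rather than hard-coded; a hedged sketch (function and
algorithm names are illustrative, not the actual MinIO code):
```go
package bitrot

import (
	"errors"
	"hash"

	sha256 "github.com/minio/sha256-simd"
	"golang.org/x/crypto/blake2b"
)

// newBitrotHash picks the verifier from the algorithm name
// persisted in xl.json's checkSumInfo instead of assuming a
// default. Names here are illustrative only.
func newBitrotHash(algo string) (hash.Hash, error) {
	switch algo {
	case "sha256":
		return sha256.New(), nil
	case "blake2b":
		return blake2b.New512(nil)
	default:
		return nil, errors.New("unknown bit-rot algorithm: " + algo)
	}
}
```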
If the requested offset/length of an object is equal to
erasureInfo.BlockSize, getBlockInfo() returns one block more
than the actual end block. This patch fixes the issue.
This patch also adds a unit test for getting objects with big files.
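The off-by-one is the classic inclusive-end-block
calculation; a hedged sketch of the corrected arithmetic
(an illustrative stand-in, getBlockInfo's real signature
differs):
```go
package erasure

// blockRange computes the inclusive range of blocks
// covering [offset, offset+length) for a given block size.
func blockRange(offset, length, blockSize int64) (start, end int64) {
	start = offset / blockSize
	// Buggy form: end = (offset + length) / blockSize. When
	// offset+length is an exact multiple of blockSize (e.g.
	// offset=0, length=blockSize) it returns one block too many.
	end = (offset + length - 1) / blockSize
	return start, end
}
```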
This requires skipping the necessary parts of dataBlocks
during the decoding phase, and properly skipping those
entries as needed.
Thanks to Karthic for reproducing this important issue.
Fixes #1503
* xl/selfheal: selfheal based on read quorum on GET
* xl: getReadableDisks() also returns whether self-heal is needed, so that this info can be used by ReadFile/SelfHeal/StatFile.
* xl: trigger selfheal from StatFile.