In the current implementation we created as many dsync clients
as there are endpoints (host plus path), which is not the
expected implementation. Dsync was expected to operate per
endpoint host alone, such that if you have 4 servers, each with
4 disks, we only need 4 dsync clients and 4 dsync servers.
Instead we currently create one client and server per endpoint,
which is unexpected and should be avoided.
This PR brings the implementation back to its original
intention. This issue was found in #5160.
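A minimal sketch of the intended counting, using hypothetical endpoint and host types rather than minio's actual ones:
```go
package main

import "fmt"

// endpoint is a hypothetical stand-in for minio's endpoint type.
type endpoint struct {
	Host string // e.g. "node1:9000"
	Path string // e.g. "/disk1"
}

// dsyncHosts returns the unique hosts: one dsync client per host,
// regardless of how many disks (paths) each host exports.
func dsyncHosts(endpoints []endpoint) []string {
	seen := make(map[string]bool)
	var hosts []string
	for _, ep := range endpoints {
		if !seen[ep.Host] {
			seen[ep.Host] = true
			hosts = append(hosts, ep.Host)
		}
	}
	return hosts
}

func main() {
	// 2 servers x 2 disks -> 4 endpoints, but only 2 dsync clients.
	eps := []endpoint{
		{"node1:9000", "/disk1"}, {"node1:9000", "/disk2"},
		{"node2:9000", "/disk1"}, {"node2:9000", "/disk2"},
	}
	fmt.Println(dsyncHosts(eps)) // [node1:9000 node2:9000]
}
```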
This fix removes the logrus package dependency and refactors console
logging as the only logging mechanism by removing file logging support.
It rearranges the log message format and adds stack trace information
whenever trace information is not available in the error structure.
It also adds `--json` flag support for server logging.
When the minio server is started with the `--json` flag, all log
messages are displayed in JSON format, with no start-up or
informational log messages.
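For example, assuming an export path of /export:
```
$ minio server --json /export
```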
Fixes #5265 #5220 #5197
This change introduces the following simplified steps to follow
during config migration.
```
// Steps to move from version N to version N+1
// 1. Add new struct serverConfigVN+1 in config-versions.go
// 2. Set configCurrentVersion to "N+1"
// 3. Set serverConfigCurrent to serverConfigVN+1
// 4. Add new migration function (ex. func migrateVNToVN+1()) in config-migrate.go
// 5. Call migrateVNToVN+1() from migrateConfig() in config-migrate.go
// 6. Update config-current_test.go with any required test changes
```
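A hypothetical illustration of steps 1 and 4, with made-up struct names, fields and version strings (not minio's actual config layout):
```go
package config

// The old struct (already present in config-versions.go).
type serverConfigV1 struct {
	Version string `json:"version"`
	Region  string `json:"region"`
}

// Step 1: add the new struct for the next version.
type serverConfigV2 struct {
	Version string `json:"version"`
	Region  string `json:"region"`
	Browser bool   `json:"browser"` // hypothetical field added in v2
}

// Step 4: the migration function converts the on-disk v1 config to
// v2, filling defaults for any newly added fields.
func migrateV1ToV2(old *serverConfigV1) *serverConfigV2 {
	return &serverConfigV2{
		Version: "2",
		Region:  old.Region,
		Browser: true, // default for the new field
	}
}
```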
This makes lock RPCs similar to other RPCs where requests to the local
server bypass the network. Requests to the local lock-subsystem may
bypass the network layer and directly access the locking
data-structures.
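A minimal sketch of the idea, with hypothetical names for both locker implementations:
```go
package lock

// Both the in-process locker and the RPC client satisfy one
// interface, so requests to the local host can bypass the network.
type Locker interface {
	Lock(volume, path string) error
	Unlock(volume, path string) error
}

// localLocker operates directly on in-memory locking data-structures.
type localLocker struct{}

func (l *localLocker) Lock(volume, path string) error   { return nil } // stub
func (l *localLocker) Unlock(volume, path string) error { return nil } // stub

// lockRPCClient performs the same operations over the network.
type lockRPCClient struct{ host string }

func (c *lockRPCClient) Lock(volume, path string) error   { return nil } // stub: RPC call
func (c *lockRPCClient) Unlock(volume, path string) error { return nil } // stub: RPC call

// newLocker picks the implementation: local requests skip the network.
func newLocker(host string, isLocal bool) Locker {
	if isLocal {
		return &localLocker{}
	}
	return &lockRPCClient{host: host}
}
```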
This incidentally fixes #4451.
This patch fixes the following:
* Previously, fatalIf() never wrote logs to any target other than the first logging target.
* The quiet flag was not honored for progress messages other than startup messages.
* Removes console package usage for progress messages.
* Rename GenericArgs to AuthRPCArgs
* Rename GenericReply to AuthRPCReply
* Remove authConfig.loginMethod and add authConfig.ServiceName
* Rename loginServer to AuthRPCServer
* Rename RPCLoginArgs to LoginRPCArgs
* Rename RPCLoginReply to LoginRPCReply
* Version and RequestTime are added to LoginRPCArgs and verified on
the server side, not the client side.
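A minimal sketch of that server-side verification, with illustrative field names and an assumed 15-minute drift window:
```go
package auth

import (
	"errors"
	"time"
)

// LoginRPCArgs as described above; fields here are illustrative.
type LoginRPCArgs struct {
	Username    string
	Password    string
	Version     string
	RequestTime time.Time
}

const loginRPCVersion = "1.0.0" // assumed version string

// IsValid is called by the server; the client no longer verifies these.
func (args LoginRPCArgs) IsValid() error {
	if args.Version != loginRPCVersion {
		return errors.New("login RPC version mismatch")
	}
	// Reject requests whose timestamp drifts too far from server time.
	if d := time.Since(args.RequestTime); d < -15*time.Minute || d > 15*time.Minute {
		return errors.New("login request time out of allowed window")
	}
	return nil
}
```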
* Fix data race in the lockMaintenance loop.
- Add a lockStat type to group counters (sketched below)
- Remove unnecessary helper functions
- Fix stats computation on force unlock
- Remove unnecessary checks and clean up comments
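A minimal sketch of such a lockStat type, with illustrative counter names:
```go
package lock

// lockStat groups the lock counters into one type; the field names
// here are illustrative, not minio's actual ones.
type lockStat struct {
	total   int64 // lock requests seen so far
	blocked int64 // requests currently waiting for the lock
	granted int64 // requests currently holding the lock
}
```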
This is implemented so that issues like the one in the
following flow don't affect the behavior of the operation.
```
GetObjectInfo()
.... --> Time window for mutation (no lock held)
.... --> Time window for mutation (no lock held)
GetObject()
```
This happens when two simultaneous uploads are made
to the same object: the object can return wrong
info to the client.
Another classic example is "CopyObject" API itself
which reads from a source object and copies to
destination object.
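A minimal sketch of the remedy, using a plain sync.RWMutex to stand in for the namespace lock (helper names are hypothetical):
```go
package objects

import "sync"

// Illustrative only: a single RWMutex stands in for minio's
// per-object namespace lock.
var objLock sync.RWMutex

type objectInfo struct{ size int64 }

func getObjectInfo() objectInfo { return objectInfo{} } // stub
func readObject(oi objectInfo)  { _ = oi }              // stub

// statThenRead holds one read lock across both calls, closing the
// mutation windows shown above.
func statThenRead() {
	objLock.RLock()
	defer objLock.RUnlock()

	oi := getObjectInfo() // metadata and data now come from the
	readObject(oi)        // same consistent view of the object
}
```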
Fixes #3370 Fixes #2912
This patch fixes a possible bug that is rarely reproduced and
has been seen only once.
```
panic: runtime error: index out of range
goroutine 136 [running]:
panic(0xac1a40, 0xc4200120b0)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
github.com/minio/minio/vendor/github.com/minio/dsync.lock.func1(0xc4203d2240, 0x4, 0xc420474080, 0x4, 0x4, 0xc4202abb60, 0x0, 0xa86d01, 0xefcfc0, 0xc420417a80)
/go/src/github.com/minio/minio/vendor/github.com/minio/dsync/drwmutex.go:170 +0x69b
created by github.com/minio/minio/vendor/github.com/minio/dsync.lock
/go/src/github.com/minio/minio/vendor/github.com/minio/dsync/drwmutex.go:191 +0xf4
```
- Abstract out instrumentation information.
- Use a separate lockInstance type that encapsulates the nsMutex, volume,
path and opsID as the frontend or top-level lock object (sketched below).
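A minimal sketch of such a type, with nsLockMap stubbed out:
```go
package lock

// nsLockMap is stubbed here; it stands for the shared namespace
// lock map (the nsMutex mentioned above).
type nsLockMap struct{}

// lockInstance is the hypothetical top-level lock object.
type lockInstance struct {
	ns           *nsLockMap // shared namespace lock map
	volume, path string     // the resource being locked
	opsID        string     // operation ID kept for instrumentation
}
```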
For command line arguments we currently follow
- <node-1>:/path ... <node-n>:/path
This patch changes this to
- http://<node-1>/path ... http://<node-n>/path
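For example, a hypothetical 4-node setup would now be started as:
```
$ minio server http://node-1/export http://node-2/export \
    http://node-3/export http://node-4/export
```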
Added a clear subcommand for control lock with the following options:
```
1. Clear lock named 'bucket/object' (exact match).
$ minio control lock clear http://localhost:9000/bucket/object
2. Clear all locks with names that start with 'bucket/prefix' (wildcard match).
$ minio control lock --recursive clear http://localhost:9000/bucket/prefix
3. Clear all locks older than 10 minutes.
$ minio control lock --older-than=10m clear http://localhost:9000/
4. Clear all locks with names that start with 'bucket/a' and that are older than 1 hour.
$ minio control lock --recursive --older-than=1h clear http://localhost:9000/bucket/a
```
These messages are based on our prep stage during XL setup
and print a more informative message regarding
drive information.
This change also does a much-needed refactoring.
- Instrumentation for locks.
- Detailed test coverage.
- Add an RPC control handler to fetch lock instrumentation.
- Add suite tests for the RPC control handlers with a test RPC server.
This PR contains various fixes for the distributed release:
- Use DRWMutex in namespace-lock only for a single Lock()/RLock() call, in conformance with server-side rw-locking as implemented in minio/dsync
- Implement missing cases in lock-rpc-server to catch Unlock() for active read locks and RUnlock() for an active write lock
- Refactor RPCClient to release the local mutex while making the actual RPC.Call() (see the sketch below)
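A minimal sketch of that refactor, assuming a hypothetical RPCClient wrapper around net/rpc (field names are illustrative):
```go
package rpcclient

import (
	"net/rpc"
	"sync"
)

// RPCClient guards a lazily established connection with a mutex.
type RPCClient struct {
	mu   sync.Mutex
	conn *rpc.Client // established elsewhere, e.g. via rpc.Dial
}

// Call grabs the connection under the mutex, but releases the mutex
// before the blocking network call so other goroutines aren't held up.
func (c *RPCClient) Call(method string, args, reply interface{}) error {
	c.mu.Lock()
	conn := c.conn // copy the handle under the lock
	c.mu.Unlock()  // do NOT hold the mutex across the network call
	return conn.Call(method, args, reply)
}
```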
This patch brings in the removal of debug logging altogether; instead
we bring in the functionality of being able to trace errors properly,
pointing back to the origination of the problem.
To enable tracing, set the "MINIO_TRACE" environment variable to "1"
or "true"; back traces are then printed whenever there is an error
that is unhandled or at the handler layer.
By default this tracing is turned off and only user-level logging is
provided.
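For example (export path illustrative):
```
$ MINIO_TRACE=1 minio server /export
```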