In a distributed setup, the server should not perform any operation on
the storage layer after it is exported via RPC. For example, cleaning up
temporary directories under `.minio.sys/tmp` may interfere with ongoing
PUT objects being served by the distributed setup.
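A minimal sketch of the intended ordering; `cleanupTmp` and `registerStorageRPC` are hypothetical stand-ins for the real functions:

```go
package storage

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupTmp wipes leftover temporary files from a previous run.
func cleanupTmp(dir string) error {
	if err := os.RemoveAll(dir); err != nil {
		return err
	}
	return os.MkdirAll(dir, 0700)
}

// registerStorageRPC stands in for exporting the disk over RPC.
func registerStorageRPC(exportPath string) error {
	fmt.Println("exporting", exportPath, "via RPC")
	return nil
}

// initStorage shows the required ordering: all local cleanup happens
// strictly before the disk becomes reachable by peers.
func initStorage(exportPath string) error {
	// Safe: no RPC client can reach this disk yet.
	if err := cleanupTmp(filepath.Join(exportPath, ".minio.sys", "tmp")); err != nil {
		return err
	}
	// From here on the disk is shared; mutating .minio.sys/tmp
	// could race with in-flight PUTs from other nodes.
	return registerStorageRPC(exportPath)
}
```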
* Implements a Peer RPC router that sends info to all Minio servers in the cluster.
* Bucket notifications are propagated to all nodes via this RPC router.
* Bucket listener configuration is persisted to a separate object layer
file (`listener.json`), and peer RPCs are used to communicate changes
throughout the cluster.
* When events are generated, RPC calls send them to the other servers
where bucket listeners may be connected (sketched after this list).
* Some bucket notification tests are now disabled as they cannot work in
the new design.
* Minor fix in `funcFromPC` to use `path.Join`
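A minimal sketch of the event broadcast over `net/rpc`; the peer addresses, the `EventArgs` payload, and the `Peer.SendEvent` method name are assumptions for illustration:

```go
package notify

import (
	"net/rpc"
	"sync"
)

// EventArgs is a hypothetical payload carrying one bucket event.
type EventArgs struct {
	Bucket string
	Events []string
}

// broadcastEvent fans a generated event out to every peer, so nodes
// with connected bucket listeners can relay it. Delivery is
// best-effort: an unreachable peer is simply skipped.
func broadcastEvent(peers []string, args EventArgs) {
	var wg sync.WaitGroup
	for _, addr := range peers {
		wg.Add(1)
		go func(addr string) {
			defer wg.Done()
			client, err := rpc.DialHTTP("tcp", addr)
			if err != nil {
				return // peer down; deliver to the rest
			}
			defer client.Close()
			var reply struct{}
			// "Peer.SendEvent" is a hypothetical registered method.
			_ = client.Call("Peer.SendEvent", &args, &reply)
		}(addr)
	}
	wg.Wait()
}
```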
- Servers do not exit on invalid credentials; instead they print a message and wait.
- Servers do not exit on a version mismatch; instead they print a message and wait.
- Servers do not exit on time differences between nodes; instead they print a message and wait.
These messages are modeled on our prep stage during XL setup
and print more informative messages regarding
drive information.
This change also does a much-needed refactoring.
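A minimal sketch of the print-and-wait loop; the sentinel error names and the retry interval are assumptions:

```go
package cmd

import (
	"errors"
	"log"
	"time"
)

// Hypothetical sentinels for the three recoverable startup checks.
var (
	errInvalidCredentials = errors.New("invalid credentials on peer")
	errVersionMismatch    = errors.New("server version mismatch")
	errTimeSkew           = errors.New("time difference between nodes too large")
)

// retryInit sketches the print-and-wait behavior: recoverable
// cluster misconfigurations are logged and retried rather than
// aborting the server; anything else still fails hard.
func retryInit(check func() error) {
	for {
		err := check()
		if err == nil {
			return
		}
		switch err {
		case errInvalidCredentials, errVersionMismatch, errTimeSkew:
			log.Printf("waiting for cluster: %v (retrying)", err)
			time.Sleep(3 * time.Second)
		default:
			log.Fatalf("unrecoverable error: %v", err)
		}
	}
}
```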
* Test code for controller-handler operations:
* Heal operations
* List operation
* Switch to the standard `testing` package, moving away from gocheck (see the example after this list)
* Minor refactors
* Remove extra call to initGracefulShutdown
* Remove dead code in mainControl:
Dead code was identified by the TestControlMain() test function,
which always passes.
* Add tests for control-*-main.go
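A hypothetical example of the plain `testing` style these tests now use, with `healStatus` as a stand-in for a real controller-handler call:

```go
package control

import "testing"

// healStatus is a stand-in for a controller-handler call under test.
func healStatus() string { return "ok" }

// TestHealStatus shows the plain "testing" style, replacing gocheck
// suites and assertion helpers with the standard library.
func TestHealStatus(t *testing.T) {
	if got, want := healStatus(), "ok"; got != want {
		t.Fatalf("heal status: got %q, want %q", got, want)
	}
}
```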
During initialization, when a disk was down, the network disk
reported an incorrect error rather than errDiskNotFound.
This resulted in incorrect error handling during
the prepInitStorage() stage.
Fixes #2577
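A minimal sketch of the fix, assuming the network layer surfaces failures as `*net.OpError`:

```go
package storage

import (
	"errors"
	"net"
)

// errDiskNotFound is the sentinel prepInitStorage() expects for a
// disk that is down.
var errDiskNotFound = errors.New("disk not found")

// toStorageErr sketches the fix: network-level failures from a
// remote disk are mapped to errDiskNotFound, so a down network disk
// is handled the same way as a missing local one.
func toStorageErr(err error) error {
	if err == nil {
		return nil
	}
	if _, ok := err.(*net.OpError); ok {
		return errDiskNotFound
	}
	return err
}
```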
This change initializes RPC servers associated with local disks. It
makes object layer initialization on-demand, namely on the first
request to the object layer.
Also adds a lock RPC service and vendors minio/dsync.
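A minimal sketch of the on-demand initialization, with `newObjectLayer` as a hypothetical constructor:

```go
package cmd

import "sync"

// ObjectLayer stands in for the real object layer interface.
type ObjectLayer interface{}

var (
	objLayerMu sync.Mutex
	objLayer   ObjectLayer
)

// newObjectLayer is a hypothetical constructor; the real one would
// wait for enough disks (and the dsync lock service) to come up.
func newObjectLayer() (ObjectLayer, error) { return struct{}{}, nil }

// getObjectLayer initializes the object layer lazily, on the first
// request that needs it, instead of at server startup.
func getObjectLayer() (ObjectLayer, error) {
	objLayerMu.Lock()
	defer objLayerMu.Unlock()
	if objLayer != nil {
		return objLayer, nil
	}
	layer, err := newObjectLayer()
	if err != nil {
		return nil, err
	}
	objLayer = layer
	return objLayer, nil
}
```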
The certs directory was created only if the config was not present;
we expect the 'certs' directory to be present at all
times, which makes it easier to document.
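A minimal sketch, assuming the config directory path is already resolved:

```go
package cmd

import (
	"os"
	"path/filepath"
)

// createCertsDir sketches the new behavior: the certs directory is
// created unconditionally at startup, not only when the config file
// is missing.
func createCertsDir(configDir string) error {
	return os.MkdirAll(filepath.Join(configDir, "certs"), 0700)
}
```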
On Unix systems it is possible to set the max memory used by
running processes using 'ulimit -m' or 'syscall.RLIMIT_AS'.
When a process exceeds this limit, the kernel proactively
kills the server with OOM. To avoid this problem with our
default cache size of 8GB, we check whether the current system
limits are lower and set the cache size appropriately.
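A minimal sketch of the limit check using `syscall.Getrlimit` (Unix only):

```go
package cmd

import "syscall"

const defaultCacheSize = 8 << 30 // 8GB default read cache

// availableCacheSize sketches the check: if the address-space limit
// (syscall.RLIMIT_AS) is lower than our 8GB default, shrink the
// cache so the kernel does not OOM-kill the server.
func availableCacheSize() uint64 {
	var limit syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_AS, &limit); err != nil {
		return defaultCacheSize
	}
	if limit.Cur < defaultCacheSize {
		return limit.Cur
	}
	return defaultCacheSize
}
```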
By default the server heals/creates missing directories and re-populates
`format.json`. In some scenarios, when a disk is down for maintenance,
it is beneficial for users to ignore such disks rather than
mistakenly use the `root` partition.
Fixes #2128
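A minimal sketch of the opt-out; the `ignoredDisks` set and the helper name are assumptions:

```go
package xl

import "errors"

// errDiskNotFound is the sentinel for an unreachable disk.
var errDiskNotFound = errors.New("disk not found")

// shouldHealDisk sketches the opt-out: by default missing directories
// and format.json are re-created, but a disk the operator listed as
// ignored (e.g. down for maintenance) is skipped, so a mistakenly
// mounted root partition is never written to.
func shouldHealDisk(endpoint string, err error, ignoredDisks map[string]bool) bool {
	if ignoredDisks[endpoint] {
		return false // operator asked us to leave this disk alone
	}
	// An unreachable disk cannot be healed anyway.
	return err != errDiskNotFound
}
```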
The object cache implementation is the XL cache, which defaults
to 8GB of read cache. Currently GetObject() transparently
writes to this cache upon the first client read and then
serves subsequent reads from the same cache.
Expiration is not implemented yet.
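A minimal sketch of the read-through behavior; unlike the real cache it buffers the whole object in one call rather than streaming, and has no size bound:

```go
package objcache

import (
	"bytes"
	"io"
	"sync"
)

// Cache sketches the XL read-through cache: the first read of an
// object populates an in-memory entry and later reads are served
// from it. Expiration is deliberately absent, matching the text.
type Cache struct {
	mu      sync.Mutex
	entries map[string][]byte
}

func New() *Cache { return &Cache{entries: make(map[string][]byte)} }

// GetObject serves from cache on a hit; on a miss it reads from the
// backend (a stand-in for the erasure-coded disks), stores the
// bytes, and serves that copy.
func (c *Cache) GetObject(key string, backend io.Reader) (io.Reader, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if buf, ok := c.entries[key]; ok {
		return bytes.NewReader(buf), nil // cache hit
	}
	buf, err := io.ReadAll(backend) // first read populates the cache
	if err != nil {
		return nil, err
	}
	c.entries[key] = buf
	return bytes.NewReader(buf), nil
}
```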
This commit replaces the call to `errorIf` with `fatalIf`, so that the
minio server exits with a non-zero exit status if something fails, e.g.
the port was already opened by another process.
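A minimal sketch of the two helpers' semantics (the real signatures differ):

```go
package cmd

import (
	"log"
	"os"
)

// errorIf logs the failure and lets the caller continue.
func errorIf(err error, msg string) {
	if err != nil {
		log.Printf("%s: %v", msg, err)
	}
}

// fatalIf logs the failure and exits with a non-zero status, so
// startup errors (like a busy port) are no longer silently survived.
func fatalIf(err error, msg string) {
	if err != nil {
		log.Printf("%s: %v", msg, err)
		os.Exit(1)
	}
}
```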
This patch removes debug logging altogether; instead it brings in the
ability to trace errors properly, pointing back to the origin
of the problem.
To enable tracing, set the "MINIO_TRACE" environment variable to "1"
or "true", which prints back traces whenever an error is unhandled
or reaches the handler layer.
By default tracing is turned off and only user-level logging is
provided.
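A minimal sketch of the behavior; the real implementation records the call chain where the error originated, while this stand-in just prints the runtime stack at log time:

```go
package cmd

import (
	"fmt"
	"os"
	"runtime/debug"
)

// traceEnabled reflects the MINIO_TRACE environment variable.
var traceEnabled = os.Getenv("MINIO_TRACE") == "1" ||
	os.Getenv("MINIO_TRACE") == "true"

// errorIf sketches the handler-layer logging: the user-level message
// is always printed, the back trace only when tracing is on.
func errorIf(err error, msg string) {
	if err == nil {
		return
	}
	fmt.Fprintf(os.Stderr, "%s: %v\n", msg, err)
	if traceEnabled {
		debug.PrintStack() // stand-in for the recorded cause trace
	}
}
```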
The functionality provided by minhttp will be implemented
cleanly through our own APIs. Since we are not going to
send SIGUSR2 and manage configuration in that manner, it
doesn't make sense to use minhttp.
Fixes #1586
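A minimal sketch of graceful serving using only the modern standard library, in the spirit of replacing minhttp:

```go
package cmd

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

// serveGracefully sketches replacing minhttp with plain standard
// library APIs: serve until SIGINT/SIGTERM, then drain in-flight
// requests. There is deliberately no SIGUSR2 zero-downtime restart.
func serveGracefully(addr string, handler http.Handler) error {
	srv := &http.Server{Addr: addr, Handler: handler}

	ctx, stop := signal.NotifyContext(context.Background(),
		syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	errCh := make(chan error, 1)
	go func() { errCh <- srv.ListenAndServe() }()

	select {
	case err := <-errCh:
		return err // e.g. the port is already in use
	case <-ctx.Done():
		log.Println("signal received, shutting down gracefully")
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		return srv.Shutdown(shutdownCtx)
	}
}
```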