fix: docs typos and keywords

Branch: master · Harshavardhana, 4 years ago · parent 6a66f142d4 · commit b43906f6ee
Changed files:

1. docs/bucket/notifications/README.md (4 changed lines)
2. docs/config/README.md (2 changed lines)
3. docs/distributed/DESIGN.md (8 changed lines)
4. docs/distributed/README.md (4 changed lines)
5. pkg/s3select/internal/parquet-go/gen-go/parquet/parquet.go (2 changed lines)
6. pkg/s3select/internal/parquet-go/parquet.thrift (2 changed lines)

docs/bucket/notifications/README.md

@@ -907,7 +907,7 @@ MINIO_NOTIFY_POSTGRES_MAX_OPEN_CONNECTIONS (number) maximum number o
 ```
 > NOTE: If the `max_open_connections` key or the environment variable `MINIO_NOTIFY_POSTGRES_MAX_OPEN_CONNECTIONS` is set to `0`, There will be no limit set on the number of
-> open connections to the database. This setting is generally NOT recommended as the behaviour may be inconsistent during recursive deletes in `namespace` format.
+> open connections to the database. This setting is generally NOT recommended as the behavior may be inconsistent during recursive deletes in `namespace` format.
 MinIO supports persistent event store. The persistent store will backup events when the PostgreSQL connection goes offline and replays it when the broker comes back online. The event store can be configured by setting the directory path in `queue_dir` field and the maximum limit of events in the queue_dir in `queue_limit` field. For eg, the `queue_dir` can be `/home/events` and `queue_limit` can be `1000`. By default, the `queue_limit` is set to 100000.
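
For context on the settings being corrected above: the connection cap and the persistent event store are configured on the `notify_postgres` target. A minimal sketch, assuming the key names listed in this README; the alias `myminio`, the target ID `1`, and the connection string are placeholders, and `queue_dir`/`queue_limit` reuse the example values from the paragraph above.

```
# Hypothetical target configuration; adjust the connection string for your database.
mc admin config set myminio/ notify_postgres:1 \
  connection_string="host=localhost port=5432 dbname=minio_events sslmode=disable" \
  table="bucketevents" format="namespace" \
  max_open_connections="2" \
  queue_dir="/home/events" queue_limit="1000"

# Restart so the updated notification target takes effect.
mc admin service restart myminio/
```
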
docs/bucket/notifications/README.md

@@ -1041,7 +1041,7 @@ MINIO_NOTIFY_MYSQL_COMMENT (sentence) optionally add a co
 ```
 > NOTE: If the `max_open_connections` key or the environment variable `MINIO_NOTIFY_MYSQL_MAX_OPEN_CONNECTIONS` is set to `0`, There will be no limit set on the number of
-> open connections to the database. This setting is generally NOT recommended as the behaviour may be inconsistent during recursive deletes in `namespace` format.
+> open connections to the database. This setting is generally NOT recommended as the behavior may be inconsistent during recursive deletes in `namespace` format.
 `dsn_string` is required and is of form `"<user>:<password>@tcp(<host>:<port>)/<database>"`
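
The MySQL target follows the same pattern; a hedged sketch using the `dsn_string` form quoted above, with placeholder credentials and host.

```
# dsn_string follows <user>:<password>@tcp(<host>:<port>)/<database> (placeholders).
mc admin config set myminio/ notify_mysql:1 \
  dsn_string="minio_user:secretpassword@tcp(db.example.com:3306)/miniodb" \
  table="bucketevents" format="namespace" max_open_connections="2"
mc admin service restart myminio/
```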

docs/config/README.md

@@ -296,7 +296,7 @@ Once set the crawler settings are automatically applied without the need for ser
 ### Healing
-Healing is enabled by default. The following configuration settings allow for more staggered delay in terms of healing. The healing system by default adapts to the system speed and pauses up to '1sec' per object when the system has `max_io` number of concurrent requests. It is possible to adjust the `max_delay` and `max_io` values thereby increasing the healing speed. The delays between each operation of the healer can be adjusted by the `mc admin config set alias/ max_delay=1s` and maximum concurrent requests allowed before we start slowing things down can be configured with `mc admin config set alias/ max_io=30` . By default the wait delay is `1sec` beyond 10 cocurrent operations. This means the healer will sleep *1 second* at max for each heal operation if there are more than *10* concurrent client requests.
+Healing is enabled by default. The following configuration settings allow for more staggered delay in terms of healing. The healing system by default adapts to the system speed and pauses up to '1sec' per object when the system has `max_io` number of concurrent requests. It is possible to adjust the `max_delay` and `max_io` values thereby increasing the healing speed. The delays between each operation of the healer can be adjusted by the `mc admin config set alias/ max_delay=1s` and maximum concurrent requests allowed before we start slowing things down can be configured with `mc admin config set alias/ max_io=30` . By default the wait delay is `1sec` beyond 10 concurrent operations. This means the healer will sleep *1 second* at max for each heal operation if there are more than *10* concurrent client requests.
 In most setups this is sufficient to heal the content after drive replacements. Setting `max_delay` to a *lower* value and setting `max_io` to a *higher* value would make heal go faster.
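
As a rough sketch of the tuning described in that paragraph, using the exact keys it quotes (depending on the MinIO release, these may need to be set under the `heal` subsystem rather than at the top level):

```
# Tolerate up to 50 concurrent client requests before the healer backs off,
# and cap the per-object pause at 500ms instead of the 1s default.
mc admin config set alias/ max_io=50
mc admin config set alias/ max_delay=500ms
```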

docs/distributed/DESIGN.md

@@ -94,22 +94,22 @@ Input for the key is the object name specified in `PutObject()`, returns a uniqu
 - MinIO does erasure coding at the object level not at the volume level, unlike other object storage vendors. This allows applications to choose different storage class by setting `x-amz-storage-class=STANDARD/REDUCED_REDUNDANCY` for each object uploads so effectively utilizing the capacity of the cluster. Additionally these can also be enforced using IAM policies to make sure the client uploads with correct HTTP headers.
-- MinIO also supports expansion of existing clusters in server sets. Each zone is a self contained entity with same SLA's (read/write quorum) for each object as original cluster. By using the existing namespace for lookup validation MinIO ensures conflicting objects are not created. When no such object exists then MinIO simply uses the least used zone.
+- MinIO also supports expansion of existing clusters in server pools. Each zone is a self contained entity with same SLA's (read/write quorum) for each object as original cluster. By using the existing namespace for lookup validation MinIO ensures conflicting objects are not created. When no such object exists then MinIO simply uses the least used zone.
-__There are no limits on how many server sets can be combined__
+__There are no limits on how many server pools can be combined__
 ```
 minio server http://host{1...32}/export{1...32} http://host{5...6}/export{1...8}
 ```
-In above example there are two server sets
+In above example there are two server pools
 - 32 * 32 = 1024 drives zone1
 - 2 * 8 = 16 drives zone2
 > Notice the requirement of common SLA here original cluster had 1024 drives with 16 drives per erasure set, second zone is expected to have a minimum of 16 drives to match the original cluster SLA or it should be in multiples of 16.
-MinIO places new objects in server sets based on proportionate free space, per zone. Following pseudo code demonstrates this behavior.
+MinIO places new objects in server pools based on proportionate free space, per zone. Following pseudo code demonstrates this behavior.
 ```go
 func getAvailableZoneIdx(ctx context.Context) int {
 serverPools := z.getServerPoolsAvailableSpace(ctx)
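
The pseudocode above is truncated by the diff view. The following self-contained Go sketch (all names are hypothetical, not MinIO's actual implementation) illustrates the proportionate-free-space placement it describes: a pool is chosen at random, weighted by its available capacity, so emptier pools receive proportionally more new objects.

```go
package main

import (
	"fmt"
	"math/rand"
)

// poolFreeSpace stands in for the per-pool capacity report that a call like
// getServerPoolsAvailableSpace() would return (hypothetical type).
type poolFreeSpace struct {
	idx       int
	available uint64 // free bytes in this server pool
}

// pickPoolIdx picks a pool index at random, weighted by free space.
// Pools with zero available space are never selected.
func pickPoolIdx(pools []poolFreeSpace) int {
	var total uint64
	for _, p := range pools {
		total += p.available
	}
	if total == 0 {
		return -1 // no usable pool
	}
	choose := rand.Uint64() % total
	var acc uint64
	for _, p := range pools {
		acc += p.available
		if choose < acc {
			return p.idx
		}
	}
	return pools[len(pools)-1].idx // unreachable while total > 0
}

func main() {
	pools := []poolFreeSpace{{idx: 0, available: 900}, {idx: 1, available: 100}}
	counts := map[int]int{}
	for i := 0; i < 10000; i++ {
		counts[pickPoolIdx(pools)]++
	}
	// Roughly 90% of placements should land in pool 0.
	fmt.Println(counts)
}
```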

docs/distributed/README.md

@@ -77,10 +77,10 @@ For example:
 minio server http://host{1...4}/export{1...16} http://host{5...12}/export{1...16}
 ```
-Now the server has expanded total storage by _(newly_added_servers\*m)_ more disks, taking the total count to _(existing_servers\*m)+(newly_added_servers\*m)_ disks. New object upload requests automatically start using the least used cluster. This expansion strategy works endlessly, so you can perpetually expand your clusters as needed. When you restart, it is immediate and non-disruptive to the applications. Each group of servers in the command-line is called a zone. There are 2 server sets in this example. New objects are placed in server sets in proportion to the amount of free space in each zone. Within each zone, the location of the erasure-set of drives is determined based on a deterministic hashing algorithm.
+Now the server has expanded total storage by _(newly_added_servers\*m)_ more disks, taking the total count to _(existing_servers\*m)+(newly_added_servers\*m)_ disks. New object upload requests automatically start using the least used cluster. This expansion strategy works endlessly, so you can perpetually expand your clusters as needed. When you restart, it is immediate and non-disruptive to the applications. Each group of servers in the command-line is called a zone. There are 2 server pools in this example. New objects are placed in server pools in proportion to the amount of free space in each zone. Within each zone, the location of the erasure-set of drives is determined based on a deterministic hashing algorithm.
 > __NOTE:__ __Each zone you add must have the same erasure coding set size as the original zone, so the same data redundancy SLA is maintained.__
-> For example, if your first zone was 8 drives, you could add further server sets of 16, 32 or 1024 drives each. All you have to make sure is deployment SLA is multiples of original data redundancy SLA i.e 8.
+> For example, if your first zone was 8 drives, you could add further server pools of 16, 32 or 1024 drives each. All you have to make sure is deployment SLA is multiples of original data redundancy SLA i.e 8.
 ## 3. Test your setup
 To test this setup, access the MinIO server via browser or [`mc`](https://docs.min.io/docs/minio-client-quickstart-guide).
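
To make the SLA note concrete, here is a hypothetical further expansion (hostnames are placeholders): the first zone above has 4 hosts * 16 = 64 drives, which typically maps to 16-drive erasure sets, so any additional pool should contribute drives in multiples of 16, for example a third pool of 8 hosts * 16 drives.

```
# All pools, existing and new, are listed together on one command line.
minio server http://host{1...4}/export{1...16} \
             http://host{5...12}/export{1...16} \
             http://host{13...20}/export{1...16}
```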

pkg/s3select/internal/parquet-go/gen-go/parquet/parquet.go

@@ -7767,7 +7767,7 @@ func (p *ColumnIndex) String() string {
 // position in the list, matching the position of the column in the schema.
 //
 // Without column_orders, the meaning of the min_value and max_value fields is
-// undefined. To ensure well-defined behaviour, if min_value and max_value are
+// undefined. To ensure well-defined behavior, if min_value and max_value are
 // written to a Parquet file, column_orders must be written as well.
 //
 // The obsolete min and max fields are always sorted by signed comparison

pkg/s3select/internal/parquet-go/parquet.thrift

@@ -870,7 +870,7 @@ struct FileMetaData {
 * position in the list, matching the position of the column in the schema.
 *
 * Without column_orders, the meaning of the min_value and max_value fields is
-* undefined. To ensure well-defined behaviour, if min_value and max_value are
+* undefined. To ensure well-defined behavior, if min_value and max_value are
 * written to a Parquet file, column_orders must be written as well.
 *
 * The obsolete min and max fields are always sorted by signed comparison
