Use separate sync.Pool for writes/reads
Avoid passing buffers to io.CopyBuffer(): if the reader implements
io.WriterTo or the writer implements io.ReaderFrom, it is useless for
sync.Pool to allocate buffers of its own, since they are completely
ignored by the Go implementation of io.CopyBuffer.
Improve this wherever we see it to be optimal; this allows us to be
more efficient in memory usage.
```
// copyBuffer is the actual implementation of Copy and CopyBuffer.
// if buf is nil, one is allocated.
func copyBuffer(dst Writer, src Reader, buf []byte) (written int64, err error) {
	// If the reader has a WriteTo method, use it to do the copy.
	// Avoids an allocation and a copy.
	if wt, ok := src.(WriterTo); ok {
		return wt.WriteTo(dst)
	}
	// Similarly, if the writer has a ReadFrom method, use it to do the copy.
	if rt, ok := dst.(ReaderFrom); ok {
		return rt.ReadFrom(src)
	}
```
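A minimal sketch of how a caller can honor this fast path; the helper
name, pool and buffer size here are assumptions for illustration, not
the actual MinIO code:
```
import (
	"io"
	"sync"
)

// copyBufPool holds reusable 32 KiB copy buffers (size is an assumption).
var copyBufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 32*1024)
		return &b
	},
}

// copyWithPool borrows a pool buffer only when io.CopyBuffer will
// actually use it; otherwise io.Copy takes the WriteTo/ReadFrom fast
// path and a pooled buffer would be ignored anyway.
func copyWithPool(dst io.Writer, src io.Reader) (int64, error) {
	if _, ok := src.(io.WriterTo); ok {
		return io.Copy(dst, src)
	}
	if _, ok := dst.(io.ReaderFrom); ok {
		return io.Copy(dst, src)
	}
	bufp := copyBufPool.Get().(*[]byte)
	defer copyBufPool.Put(bufp)
	return io.CopyBuffer(dst, src, *bufp)
}
```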
From readahead package
```
// WriteTo writes data to w until there's no more data to write or when an error occurs.
// The return value n is the number of bytes written.
// Any error encountered during the write is also returned.
func (a *reader) WriteTo(w io.Writer) (n int64, err error) {
	if a.err != nil {
		return 0, a.err
	}
	n = 0
	for {
		err = a.fill()
		if err != nil {
			return n, err
		}
		n2, err := w.Write(a.cur.buffer())
		a.cur.inc(n2)
		n += int64(n2)
		if err != nil {
			return n, err
		}
	}
}
```
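For example, wrapping the source with the readahead package gives it
the WriteTo shown above, so io.Copy takes the fast path and no extra
copy buffer needs to be pooled (a usage sketch; the function and
variable names are illustrative):
```
import (
	"io"

	"github.com/klauspost/readahead"
)

func pipe(dst io.Writer, src io.Reader) (int64, error) {
	// ra implements io.WriterTo, so io.Copy delegates to the
	// WriteTo shown above and our own buffer is never needed.
	ra := readahead.NewReader(src)
	defer ra.Close()
	return io.Copy(dst, ra)
}
```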
The current implementation didn't honor quorum properly and didn't
handle the generated errors correctly. This patch addresses both, and
also moves the common code `cleanupMultipartUploads` into an
xl-specific private function.
Fixes #3665
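As a rough illustration of the quorum check described above (the
helper name, signature, and error text here are hypothetical, not the
actual xl code):
```
import "errors"

// reduceQuorumErrs is a hypothetical helper: given the per-disk
// errors from an operation, it fails unless at least writeQuorum
// disks succeeded.
func reduceQuorumErrs(errs []error, writeQuorum int) error {
	success := 0
	for _, err := range errs {
		if err == nil {
			success++
		}
	}
	if success < writeQuorum {
		return errors.New("write quorum not satisfied")
	}
	return nil
}
```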
`principalId`, i.e. the user identity, is kept as AccessKey in
accordance with the S3 spec.
Additionally, responseElements{} are added:
- `x-amz-request-id` - a hexadecimal of the event time itself in nanosecs.
- `x-minio-origin-server` - points to the server generating the event.
Fixes #3556
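A hedged sketch of how these fields might be assembled; the function
name and the eventTime/serverAddr parameters are assumptions:
```
import (
	"fmt"
	"time"
)

// responseElements builds the notification fields described above.
func responseElements(eventTime time.Time, serverAddr string) map[string]string {
	return map[string]string{
		// Hexadecimal of the event time itself in nanoseconds.
		"x-amz-request-id": fmt.Sprintf("%X", eventTime.UnixNano()),
		// The server generating the event.
		"x-minio-origin-server": serverAddr,
	}
}
```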
This commit makes the code cleaner and reduces repetition in the code
base. Specifically, it reduces the clutter in setObjectHeaders. It also
merges encodeSuccessResponse and encodeErrorResponse, since they served
no distinct purpose. Finally, it adds a simple test for
generateRequestID.
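The test could look something like this minimal sketch (assuming a
zero-argument generateRequestID; the actual assertions in the commit
may differ):
```
import "testing"

func TestGenerateRequestID(t *testing.T) {
	// Request IDs should be non-empty and unique across calls.
	a, b := generateRequestID(), generateRequestID()
	if a == "" || b == "" {
		t.Fatal("expected non-empty request IDs")
	}
	if a == b {
		t.Fatalf("expected unique request IDs, got %q twice", a)
	}
}
```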
Golang 1.6 is now the default version for the build.
Additionally set 'GODEBUG=cgocheck=0' for now, until
we fix the erasure coding package.
Read more here: https://tip.golang.org/doc/go1.6#cgo
- Over the course of a project's history, every maintainer needs to update
its dependency packages. The problem with godep is essentially that it
manipulates GOPATH; this manipulation leads to static objects created at
different locations, which end up conflicting with the overall functionality
of golang. This also leads to broken builds. There is no easier way out of
this other than asking developers to run 'godep restore' all the time, which
as a practice doesn't sound like a clean solution. On the other hand,
'godep restore' has its own set of problems.
- govendor is the right tool, but a stop-gap tool while we wait for golang's
official 1.5 version, which fixes this vendoring issue once and for all.
- govendor provides consistency in how import paths are handled, unlike
manipulating GOPATH.
This has advantages:
- no more compiled objects being referenced in GOPATH, and no build-time
GOPATH mangling, which leads to conflicts.
- proper import paths referencing the exact package a project depends on.
govendor is simple and provides the minimal tooling necessary to achieve this.
For now this is the right solution.
- All test files have been renamed to their respective <package>_test names,
in accordance with
https://github.com/golang/go/wiki/CodeReviewComments#import-dot.
Dot imports are largely used in testing, but external test packages are
needed to avoid namespace collisions and circular dependencies (see the
sketch below).
- Never use _* in package names other than "_test"; change fragment_v1 to
expose fragment, just like 'gopkg.in/check.v1'.
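A hedged sketch of the external test package convention described in the
first point; the import path and Version() API are hypothetical:
```
package fragment_test

import (
	"testing"

	// A dot import is acceptable inside an external _test package: it
	// cannot collide with the package under test or create an import cycle.
	. "example.com/project/fragment" // hypothetical import path
)

func TestVersion(t *testing.T) {
	if Version() == "" { // Version is a hypothetical exported API
		t.Fatal("expected non-empty version")
	}
}
```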