* Disable incorrect check for hidden services in Socket
Hidden services can only be accessed with an HTTP proxy, in which
case the host seen by the Socket class will be the proxy, not the
target host.
Hidden services are already filtered in `Request#initialize`.
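  As a rough illustration of where that filtering lives, a minimal sketch of an `.onion` check at request-build time, before any proxy rewriting; the constant name and the `proxy` option are hypothetical, not Mastodon's actual code:

  ```ruby
  require 'uri'

  class Request
    HIDDEN_SERVICE_TLD = /\.onion\z/i.freeze # hypothetical constant

    def initialize(verb, url, **options)
      @url = URI.parse(url)

      # The real target host is still visible here; once an HTTP proxy is
      # in play, the Socket class only ever sees the proxy's host.
      if @url.host =~ HIDDEN_SERVICE_TLD && options[:proxy].nil?
        raise ArgumentError, "hidden services require an HTTP proxy: #{@url.host}"
      end

      @verb    = verb
      @options = options
    end
  end
  ```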
* Use our Socket class to connect to HTTP proxies
Avoid the timeout logic being bypassed
* Add support for IP addresses in Request::Socket
* Refactor a bit, no need to keep the DNS resolver around
* Add request pool to improve delivery performance
Fix #7909
* Ensure connection is closed when exception interrupts execution
* Remove Timeout#timeout from socket connection
* Fix infinite retry loop on HTTP::ConnectionError
* Close sockets on failure, reduce idle time to 90 seconds
* Add MAX_REQUEST_POOL_SIZE option to limit concurrent connections to the same server
* Use a shared pool size, 512 by default, to stay below the open file limit
* Add some tests
* Add more tests
* Reduce MAX_IDLE_TIME from 90 to 30 seconds, reap every 30 seconds
* Use a shared pool that returns the preferred connection but repurposes other ones when needed
* Fix wrong connection being returned on subsequent calls within the same thread
* Reduce mutex calls on flushes from 2 to 1 and add test for reaping
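  Taken together, the pool commits above amount to roughly the following shape. This is a minimal sketch assuming the http.rb gem; the class and method names are illustrative, and enforcement of `MAX_REQUEST_POOL_SIZE` is omitted for brevity:

  ```ruby
  require 'monitor'
  require 'http'

  class RequestPool
    MAX_POOL_SIZE = ENV.fetch('MAX_REQUEST_POOL_SIZE', 512).to_i # shared cap (enforcement omitted)
    MAX_IDLE_TIME = 30 # seconds

    Entry = Struct.new(:client, :last_used_at)

    def initialize
      @idle = Hash.new { |hash, key| hash[key] = [] } # origin => [Entry, ...]
      @lock = Monitor.new
    end

    # Borrow a persistent client for the origin, preferring an idle one,
    # and return it to the pool afterwards.
    def with(origin)
      entry = @lock.synchronize { @idle[origin].pop } ||
              Entry.new(HTTP.persistent(origin), Time.now)
      result = yield entry.client
      entry.last_used_at = Time.now
      @lock.synchronize { @idle[origin].push(entry) }
      result
    rescue
      # Close sockets on failure instead of re-pooling a broken connection.
      entry&.client&.close
      raise
    end

    # Reap connections idle for longer than MAX_IDLE_TIME; a single
    # synchronize keeps mutex traffic on flushes to one call.
    def flush!
      @lock.synchronize do
        now = Time.now
        @idle.each_value do |entries|
          entries.reject! do |entry|
            (now - entry.last_used_at > MAX_IDLE_TIME).tap do |stale|
              entry.client.close if stale
            end
          end
        end
      end
    end
  end
  ```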
* Fix connect timeout not being enforced
The loop was catching the timeout exception that should have stopped execution, so the attempt on the next IP no longer ran inside a timed block, which led to requests taking much longer than 10 seconds.
* Use timeout on each IP attempt, but limit to 2 attempts
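  A sketch of the fixed shape, with the timeout scoped to each attempt rather than around the whole loop; here `Socket.tcp`'s `connect_timeout` stands in for the removed `Timeout#timeout`, and the address list is assumed to be pre-resolved:

  ```ruby
  require 'socket'

  CONNECT_TIMEOUT = 10 # seconds per attempt (illustrative value)

  def open_socket(port, addresses)
    outer_exception = nil

    addresses.take(2).each do |address| # cap at two attempts
      begin
        # connect_timeout bounds this attempt alone; a timeout raises
        # Errno::ETIMEDOUT and we simply move on to the next address.
        return Socket.tcp(address.to_s, port, connect_timeout: CONNECT_TIMEOUT)
      rescue SystemCallError => e
        outer_exception = e
      end
    end

    raise outer_exception || Errno::EHOSTUNREACH
  end
  ```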
* Fix code style issue
* Do not break Request#perform if no block given
* Update method stub in spec for Request
* Move timeout inside the begin/rescue block
* Use Resolv::DNS with a timeout of 1 second to get IP addresses
* Update Request spec to stub Resolv::DNS instead of Addrinfo
* Fix Resolv::DNS stubs in Request spec
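  The resolution step itself then looks roughly like this; `Resolv::DNS#timeouts=` bounds each query, which `Addrinfo` offers no knob for (a sketch, not the exact implementation):

  ```ruby
  require 'resolv'

  def resolve_addresses(host)
    Resolv::DNS.open do |dns|
      dns.timeouts = 1 # seconds per query

      addresses  = dns.getresources(host, Resolv::DNS::Resource::IN::A).map(&:address)
      addresses += dns.getresources(host, Resolv::DNS::Resource::IN::AAAA).map(&:address)
      addresses
    end
  end
  ```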
* If an Update is signed with a known key, skip the re-following procedure
Because it means the remote actor did *not* lose their database
* Add CLI method for rotating keys
bin/tootctl accounts rotate [USERNAME]
Generates a new RSA key per account and sends out an Update activity
signed with the old key.
* Key rotation: Space out Update fan-outs every 5 minutes per 1000 accounts
* Skip suspended accounts in key rotation
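  Putting the three rotation commits together, a hypothetical sketch; the worker class, column names, and Sidekiq scheduling details are illustrative assumptions, not Mastodon's exact code:

  ```ruby
  require 'openssl'

  def rotate_key!(account, index)
    return if account.suspended? # suspended accounts are skipped

    old_key = account.private_key
    keypair = OpenSSL::PKey::RSA.new(2048)

    account.update!(
      private_key: keypair.to_pem,
      public_key: keypair.public_key.to_pem
    )

    # Space out fan-outs: 5 extra minutes of delay per 1000 accounts, and
    # sign the Update with the *old* key so remote servers can verify it
    # against the key they still have cached.
    delay = (index / 1000) * 5 * 60
    ActivityPub::UpdateDistributionWorker.perform_in(delay, account.id, 'sign_with' => old_key)
  end
  ```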
If Mastodon accesses a hidden service via a transparent proxy, the private-address check has to be skipped, since `.onion` hosts resolve to private addresses.
I was previously using `HIDDEN_SERVICE_VIA_TRANSPARENT_PROXY` to provide that behavior. However, that variable turned out to be redundant, since it is always used together with `ALLOW_ACCESS_TO_HIDDEN_SERVICE`. Therefore, I folded the `HIDDEN_SERVICE_VIA_TRANSPARENT_PROXY` setting into `ALLOW_ACCESS_TO_HIDDEN_SERVICE`.
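A minimal sketch of what the combined flag then guards, assuming the resolved address arrives as a string; the range list and error message are illustrative:

```ruby
require 'ipaddr'

PRIVATE_RANGES = %w(10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 127.0.0.0/8 fd00::/8)
                 .map { |range| IPAddr.new(range) } # illustrative list

def check_private_address!(address, host)
  # With the combined flag set, a `.onion` host is *expected* to resolve
  # to a private address (the transparent proxy), so the check is skipped.
  return if ENV['ALLOW_ACCESS_TO_HIDDEN_SERVICE'] == 'true' && host.end_with?('.onion')

  ip = IPAddr.new(address)
  if PRIVATE_RANGES.any? { |range| range.family == ip.family && range.include?(ip) }
    raise "refusing to connect to private address #{address} for #{host}"
  end
end
```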
* Add support for HTTP client proxy
* Add access control for darknet
Suppress the error when accessing the darknet via a transparent proxy
* Fix the code pointed out in review
* Lint
* Fix an omission + lint
* any? -> include?
* Change detection method to a regexp to avoid test failures
The `to_s` method of `HTTP::Response` blocks until it has received the whole
content, no matter how big it is. This means it may waste time receiving
unacceptably large files, and may consume memory and disk in the process.
This change solves the inefficiency by checking the response length while
receiving.
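A sketch of the streaming check, assuming the http.rb response API (`Body#readpartial` returns `nil` at end of body); the limit value and error message are illustrative:

```ruby
def body_with_limit(response, limit = 1_048_576)
  # Fail fast when the server declares an oversized body up front.
  declared = response.headers['Content-Length']
  raise 'response body too large' if declared && declared.to_i > limit

  contents = +''
  # Read incrementally and abort as soon as the running total crosses the
  # limit, instead of letting #to_s buffer an arbitrarily large body first.
  while (chunk = response.body.readpartial)
    contents << chunk
    raise 'response body too large' if contents.bytesize > limit
  end
  contents
end
```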
HTTP connections must be explicitly closed in many cases, and letting the
`perform` method close connections makes its callers less redundant and
prevents them from forgetting to close connections.
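A minimal sketch of that shape, assuming an http.rb client held behind a `http_client` method; not the exact implementation:

```ruby
class Request
  # `http_client` is assumed to return a configured HTTP::Client (http.rb).
  def perform
    response = http_client.request(@verb, @url.to_s, @options)

    if block_given?
      yield response
    else
      # No block given: read the body fully so it survives the close below,
      # keeping Request#perform usable either way.
      response.flush
    end
  ensure
    # Closing in one place means callers cannot leak connections, even
    # when an exception interrupts execution.
    http_client.close
  end
end
```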
* request: in the event of failure, try other IPs (#6761)
In the case where a name has multiple A/AAAA records, we should
try subsequent records instead of immediately failing when we have a
failure on the first IP address.
This significantly improves delivery success when there are network
connectivity problems affecting only IPv4 or IPv6.
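  A sketch of the failover loop under those assumptions, using the standard library's `Addrinfo`; the method name is illustrative:

  ```ruby
  require 'socket'

  def open_with_failover(host, port)
    outer_exception = nil

    # Walk every resolved A/AAAA record; give up only after the last one.
    Addrinfo.foreach(host, nil, nil, :SOCK_STREAM) do |addrinfo|
      begin
        return TCPSocket.new(addrinfo.ip_address, port)
      rescue SystemCallError => e
        outer_exception ||= e # remember the first failure for the final raise
      end
    end

    raise outer_exception || SocketError.new("no usable addresses for #{host}")
  end
  ```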
* fix method call style
* request_spec: adjust test case to use Addrinfo
* request: Request/open: move private addr check to within begin/rescue
* request_spec: add case to test failover, fix exception check
* Double Addrinfo.foreach so that it correctly yields instances
I see no reason to allow more redirects than that. Usually one redirect is
HTTP->HTTPS, then maybe the URL structure changed, but more than that
is highly unlikely to be a legitimate use case.
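Assuming the http.rb client, the cap can be expressed directly through the redirector; two hops here is an illustrative value matching the reasoning above:

```ruby
require 'http'

# One hop for the HTTP->HTTPS upgrade, one more for a moved URL; a third
# redirect raises HTTP::Redirector::TooManyRedirectsError.
response = HTTP.follow(max_hops: 2).get('http://example.com/resource')
```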
* Add Request class with HTTP signature generator
Spec: https://tools.ietf.org/html/draft-cavage-http-signatures-06
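  A minimal sketch of a draft-cavage-06 style signer over `(request-target)`, `Host`, and `Date`; the `keyId` format is illustrative, and the caller must send the same `Host` and `Date` headers it signed:

  ```ruby
  require 'openssl'
  require 'base64'
  require 'time'
  require 'uri'

  def signature_header(verb, url, key, key_id)
    date   = Time.now.utc.httpdate
    signed = "(request-target): #{verb.to_s.downcase} #{url.path}\n" \
             "host: #{url.host}\n" \
             "date: #{date}"

    # RSA-SHA256 over the signing string, base64-encoded without line breaks.
    signature = Base64.strict_encode64(key.sign(OpenSSL::Digest.new('SHA256'), signed))

    [
      %(keyId="#{key_id}"),
      %(algorithm="rsa-sha256"),
      %(headers="(request-target) host date"),
      %(signature="#{signature}")
    ].join(',')
  end

  # Usage (illustrative):
  # key = OpenSSL::PKey::RSA.new(File.read('private.pem'))
  # signature_header(:get, URI('https://example.com/actor'), key,
  #                  'https://example.com/actor#main-key')
  ```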
* Add HTTP signature verification concern
* Add test for SignatureVerification concern
* Add basic test for Request class
* Make PuSH subscribe/unsubscribe requests use new Request class
Incidentally fix lease_seconds not being set and sent properly, and
set the new minimum subscription duration to 1 day
* Make all PuSH workers use new Request class
* Make Salmon sender use new Request class
* Make FetchLinkService use new Request class
* Make FetchAtomService use the new Request class
* Make Remotable use the new Request class
* Make ResolveRemoteAccountService use the new Request class
* Add more tests
* Allow a ±30 second window for signed requests to remain valid
* Disable time window validation for signed requests, restore 7 days
as the PuSH subscription duration (which was the previous default due to a bug)