* Fix Delete and Create-related locks expiring too fast
Fixes #16238
By default, RedisLock expires after 10 seconds, which may not be enough to
process statuses, especially when they have attached media files.
This commit extends those 10 seconds to 15 minutes, which should be more than
enough to handle any status, while remaining short enough not to waste many
Sidekiq job retries in the exceedingly rare case in which a Sidekiq process
crashes while processing a `Create` or `Delete`.
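For context, a minimal sketch of the locking pattern involved, assuming the redis-lock style API used here; the key name, helper calls, and error class are illustrative:

```ruby
# Sketch only: hold a lock for up to 15 minutes while handling an incoming
# activity. Key name, process_status and the error class are illustrative.
lock_options = { redis: Redis.current, key: "create:#{object_uri}", autorelease: 15.minutes.seconds }

RedisLock.acquire(lock_options) do |lock|
  if lock.acquired?
    process_status
  else
    raise Mastodon::RaceConditionError
  end
end
```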
* Fix other RedisLock autorelease durations
Fixes #15645
- things that only perform a few simple database queries (e.g. finding and
saving a record) have been left unchanged, so they still use the default
10-second duration
- things that perform significantly more complex database queries have been
changed to a 5-minute timeout
- things that perform multiple HTTP queries have been changed to a 15-minute
timeout
If a status with a hashtag becomes very popular, it stands to
reason that the hashtag should have a chance at trending
Fix no stats being recorded for hashtags that are not allowed
to trend, and stop ignoring bots
Remove references to hashtags in the profile directory from the code
and the admin UI
* Add tests
* Ensure deleted statuses are marked as such
* Save some redis memory by not storing URIs in delete_upon_arrival values
* Avoid possible race condition when processing incoming Deletes
* Avoid potential duplicate Delete forwards
* Lower lock durations to reduce issues in case of a hard crash of the Rails process
* Check for `lock.acquired?` and improve comment
* Refactor RedisLock usage in app/lib/activitypub
* Fix using incorrect or non-existent sender for relaying Deletes
* Update devise-two-factor to unreleased fork for Rails 6 support
Update tests to match new `rotp` version.
* Update nsa gem to unreleased fork for Rails 6 support
* Update rails to 6.1.3 and rails-i18n to 6.0
* Update to unreleased fork of pluck_each for Rails 6 support
* Run "rails app:update"
* Add missing ActiveStorage config file
* Use config.ssl_options instead of removed ApplicationController#force_ssl
Disabled the force_ssl-related tests, as this behavior no longer seems easily
testable.
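A sketch of the replacement configuration, using the standard ActionDispatch::SSL options; the excluded health-check path is illustrative:

```ruby
# config/application.rb (sketch): site-wide HTTPS redirect replacing the
# removed controller-level force_ssl; the excluded path is only an example.
config.ssl_options = {
  redirect: {
    exclude: ->(request) { request.path.start_with?('/health') }
  }
}
```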
* Fix nonce directives by removing Rails 5 specific monkey-patching
* Fix fixture_file_upload deprecation warning
* Fix yield-based test failing with Rails 6
* Use Rails 6's index_with when possible
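For reference, `Enumerable#index_with` builds a hash keyed by the collection's elements:

```ruby
# Rails 6's Enumerable#index_with keys a hash by the collection's elements:
%w(alice bob).index_with(&:length)                     # => { "alice" => 5, "bob" => 3 }

# equivalent pre-Rails 6 form:
%w(alice bob).map { |name| [name, name.length] }.to_h  # => { "alice" => 5, "bob" => 3 }
```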
* Use ActiveSupport::Cache::Store#delete_multi from Rails 6
This yields better performance when deleting an account
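`delete_multi` removes several cache keys in one call; the key names below are illustrative:

```ruby
# One call instead of one delete per key (key names are examples):
Rails.cache.delete_multi(['accounts/42', 'accounts/42/bio'])
```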
* Disable Rails 6.1's automatic preload link headers
Since Rails 6.1, ActionView adds preload links for JavaScript files
to the Link header by default.
In our case, that bloats the headers too much and can potentially cause
issues with reverse proxies. Furthermore, we don't need those links,
as we already output them as HTML link tags.
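The opt-out is a single setting, assuming the standard Rails 6.1 configuration key:

```ruby
# config/application.rb (sketch): do not emit preload Link headers, since the
# corresponding tags are already rendered in the HTML.
config.action_view.preload_links_header = false
```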
* Switch to Rails 6.0 default config
* Switch to Rails 6.1 default config
* Do not include autoload paths in the load path
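This corresponds to the Rails 6 setting below (a sketch of the intended configuration):

```ruby
# config/application.rb (sketch): with Zeitwerk, autoload paths no longer
# need to be on $LOAD_PATH.
config.add_autoload_paths_to_load_path = false
```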
* Update twitter-text from 1.14 to 3.1.0
* Disable emoji parsing
* Properly depend on twitter-text for url detection
* Fix some URLs being wrongly detected client-side
* Add test for server-side validation of non-autolinkable URLs
* Fix server-side status length counting
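A hedged sketch of the twitter-text 3.x calls this relies on for URL detection; the sample text is illustrative:

```ruby
require 'twitter-text'

text = 'Check this out: https://example.com/a/b'

# URL detection with the twitter-text 3.x extractor:
Twitter::TwitterText::Extractor.extract_urls(text)
# => ["https://example.com/a/b"]

# The indexed variant is what server-side length counting needs, so each URL
# can be replaced by a fixed-length placeholder before counting characters:
Twitter::TwitterText::Extractor.extract_urls_with_indices(text)
# => [{ url: "https://example.com/a/b", indices: [16, 39] }]
```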
* Change ResolveAccountService's handling of skip_webfinger
Change it so it never makes any webfinger query, as the name would imply.
* Add tests
* Change FollowService to not take a URI for target_account
* Restore domain-block check in FollowService
* Fix tests
* Disable NewCops
* Update TargetRubyVersion
* Fix Lint/MissingSuper for ActiveModelSerializers::Model
* Fix Lint/MissingSuper for feed
* Fix Lint/FloatComparison
* Do not use instance variables
* Fix being able to import more than allowed number of follows
Without this commit, if someone imports a second list of accounts to follow
before the first one has been processed, imports are queued for both whole
lists, even if together they exceed the account's allowed number of outgoing
follows.
This commit changes the individual queued imports so they are no longer exempt
from the follow-limit check (they remain exempt from the rate-limiting check,
though).
* Catch validation errors to not re-queue failed follows
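A hedged sketch of the resulting worker behavior; the class name, arguments, and rescued errors are illustrative rather than the exact ones in the codebase:

```ruby
# Sketch only: perform a queued follow without bypassing the follow limit,
# and swallow validation failures instead of letting Sidekiq retry them.
class FollowRequestImportWorker
  include Sidekiq::Worker

  def perform(account_id, target_account_id)
    account = Account.find(account_id)
    target  = Account.find(target_account_id)

    FollowService.new.call(account, target)
  rescue ActiveRecord::RecordNotFound, ActiveRecord::RecordInvalid
    # e.g. the follow limit was exceeded or the account is gone: drop the job.
    true
  end
end
```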
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
As a regression from the recent optimizations, mentions were left untouched
until `account.destroy`, which would then delete them individually,
executing queries to find and delete the associated notifications and
resulting in a massive slowdown.
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Delete status records by batches of 50
* Do not precompute values that are only used once
* Do not generate redis events for removal of public toots older than two weeks
* Filter reported toots a priori for polls and status deletion
* Do not process reblogs when cleaning up public timelines
As in Mastodon proper, reblogs don't appear in public timelines
* Clean the deleted account's own feed in one go
* Refactor Account#clean_feed_manager and List#clean_feed_manager
* Delete instead of destroy a few more associations
* Fix preloading
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Fix deleting polls not deleting notifications
* Fix fav notification deletion when deleting a toot
* Refactor DeleteAccountService spec
* Add DeleteAccountService tests for other associations and notifications
* Add favourite handling spec in status removal
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Fix account deletion workers being queued multiple times for a single account
* Fix poll votes being unnecessarily instantiated on poll deletion
* Fix favourites being unnecessarily instantiated on status deletion
* Remove inaccurate comments
* Delete polls instead of destroying them
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
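Several of the items above replace `destroy` with `delete`; the difference, sketched with an illustrative association:

```ruby
# destroy_all instantiates every vote and runs its callbacks:
poll.votes.destroy_all

# delete_all issues a single DELETE statement without instantiating records,
# so dependent rows (e.g. notifications) must then be removed explicitly.
poll.votes.delete_all
```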
* Fix ResolveAccountService accepting mismatching acct: URI
* Set attributes that should be updated regardless of suspension
* Fix key fetching
* Automatically merge remote accounts with duplicate `uri`
* Add tests
* Add "tootctl accounts fix-duplicates"
Finds duplicate accounts sharing the same ActivityPub `id`, re-fetches them,
and merges them under the canonical `acct:` URI.
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
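A hedged sketch of how such duplicates can be located; the merge step itself is elided:

```ruby
# Remote accounts whose ActivityPub id (`uri`) is shared with at least one
# other row; sketch only.
duplicate_uris = Account.remote.group(:uri).having('count(*) > 1').pluck(:uri)

duplicate_uris.each do |uri|
  # re-fetch the actor and merge the duplicate rows under the canonical acct: URI
end
```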
* Improve searching for private toots from URL
Most of the time, when sharing toots, people use the toot URL rather than
the toot URI, which makes sense since it is the user-facing URL.
In Mastodon's case, the URL and URI are different, and Mastodon does not
have an index on the URL, which means searching for a private toot by URL is
done with a slow query that will only succeed for very recent toots.
This change gets rid of the slow query and attempts to guess the URI from the
URL instead, as Mastodon's URLs are predictable.
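As an illustration of the guessing step (the route patterns are Mastodon's standard ones; the helper name is made up):

```ruby
# A status URL like https://example.com/@alice/108470 maps to the URI
# https://example.com/users/alice/statuses/108470, so the URI can be derived
# without a database lookup. Helper name is illustrative.
def uri_from_url(url)
  match = url.match(%r{\Ahttps://(?<host>[^/]+)/@(?<username>[^/]+)/(?<id>\d+)\z})
  return if match.nil?

  "https://#{match[:host]}/users/#{match[:username]}/statuses/#{match[:id]}"
end

uri_from_url('https://example.com/@alice/108470')
# => "https://example.com/users/alice/statuses/108470"
```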
* Add tests
* Only return status with guessed uri if url matches
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Add indication to admin UI of whether a report has been forwarded
* Rework how forwarded status is displayed
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
Extract the logic for determining which ActivityPub inboxes deletes are sent
to into its own class, and explicitly include the person the status replied
to (even if not mentioned), people who favourited it, and people who replied
to it (though the latter is still not recursive)
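A hedged sketch of the audience the extracted class gathers; association and column names follow the schema, but the method itself is illustrative:

```ruby
# Sketch only: accounts that should receive the Delete, beyond followers.
def reached_account_ids(status)
  [status.in_reply_to_account_id].compact +
    status.mentions.pluck(:account_id) +
    status.favourites.pluck(:account_id) +
    Status.where(in_reply_to_id: status.id).pluck(:account_id)
end

inboxes = Account.where(id: reached_account_ids(status))
                 .where.not(domain: nil)
                 .pluck(:inbox_url)
                 .uniq
```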
* Fix webfinger redirect handling in ResolveAccountService
ResolveAccountService#process_webfinger! handled a one-step webfinger
redirection, but only accepted the result if it matched the exact URI passed
as input, defeating the point of a redirection check.
Instead, use the same logic as in `ActivityPub::FetchRemoteAccountService`,
updating the resulting `acct:` URI with the result of the first webfinger
query.
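A hedged sketch of the redirect handling; the method and helper names are illustrative:

```ruby
# Sketch only: if the webfinger subject names a different acct:, follow it
# with exactly one more lookup and treat that answer as canonical, instead of
# rejecting any result that differs from the input.
def process_webfinger(acct)
  webfinger = webfinger!("acct:#{acct}")
  confirmed = webfinger.subject.delete_prefix('acct:')

  return [confirmed, webfinger] if confirmed.casecmp(acct).zero?

  webfinger = webfinger!("acct:#{confirmed}")
  [webfinger.subject.delete_prefix('acct:'), webfinger]
end
```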
* Add tests
Nginx can be configured to bypass the proxy cache when a special header
is present in the request. If the response is cacheable, it then replaces
the cached entry for that request. Proxy caching of media files is
desirable when using object storage as a way of minimizing bandwidth
costs, but it has the drawback of leaving deleted media files around for
the configured cache duration. A cache buster makes those media files
unavailable immediately, which especially matters when suspending and
unsuspending an account.
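A hedged sketch of what the cache buster sends; the header name is an example and must match whatever the reverse proxy is configured to honor:

```ruby
require 'net/http'

# Sketch only: request the media URL with the cache-bypass header so the
# fresh (now missing) response replaces the stale cached copy.
def bust_cache(url)
  uri = URI.parse(url)

  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.get(uri.request_uri, 'X-Cache-Purge' => 'true')
  end
end
```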
* Fix crash in SuspendAccountWorker
`follows` is an array thanks to `to_a`
* Fix code style issue
Co-authored-by: Eugen Rochko <eugen@zeonfederated.com>