The default TaskScheduler may also mark tasks as long-running after a
while and then spawn an extra thread regardless, but this way we make
sure not to block the thread pool, by spawning the tasks on dedicated
threads instead of on the normal pool.
Also keep the DenyChildAttach flag, which Task.Run would have added
implicitly. See:
https://devblogs.microsoft.com/pfxteam/task-run-vs-task-factory-startnew/
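A minimal sketch of that spawning pattern, with illustrative names
(not the actual Iceshrimp code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class WorkerSpawner
{
    // Spawn a long-lived worker on a dedicated thread instead of a
    // thread-pool thread. LongRunning hints the default scheduler to
    // create a new thread up front; DenyChildAttach is kept because
    // Task.Run would have applied it implicitly. Note that with an
    // async body, the dedicated thread only runs the synchronous part
    // up to the first await.
    public static Task Spawn(Func<Task> body, CancellationToken token) =>
        Task.Factory.StartNew(body, token,
                              TaskCreationOptions.LongRunning |
                              TaskCreationOptions.DenyChildAttach,
                              TaskScheduler.Default)
            .Unwrap(); // StartNew(Func<Task>) returns Task<Task>
}
```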
Instead of backfilling every note we come across that has a reply
collection, only schedule a backfill job if someone wants to see the
replies (on GET MastoAPI /context, or Iceshrimp API /descendants)
Reply backfilling is also done on a ThreadIdOrId basis, as opposed to
the previous approach of backfilling individual notes. This gives us
finer-grained control over the recursion and frees up the job queue,
and it also makes a future implementation of context collection
backfill easier (by mapping each context collection to a thread)
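As a rough illustration, on-demand scheduling keyed by thread could
look like the sketch below; BackfillThreadJob, BackfillScheduler, and
all member names are assumptions made up for this example, not the
actual Iceshrimp types:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Jobs are keyed by thread rather than by individual note, so a single
// job covers the whole reply tree and recursion stays in our control.
public record BackfillThreadJob(string ThreadId, int MaxDepth = 20);

public class BackfillScheduler
{
    private readonly ConcurrentDictionary<string, byte> _pending = new();
    private readonly Func<BackfillThreadJob, Task>      _enqueue;

    public BackfillScheduler(Func<BackfillThreadJob, Task> enqueue)
        => _enqueue = enqueue;

    // Called from the read paths (GET /context or /descendants), not on
    // ingest: a job is only scheduled once someone requests the replies.
    public Task ScheduleIfNeededAsync(string threadId) =>
        _pending.TryAdd(threadId, 0) // de-duplicate per thread
            ? _enqueue(new BackfillThreadJob(threadId))
            : Task.CompletedTask;
}
```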
---
Currently, note threads are implicit, based on the "threadId" column
of a note, which can be null (in which case the thread id is taken to
be the same as the note's "id")
This commit turns note threads into an actual entity and, as part of
that, makes "threadId" non-nullable (by explicitly setting it to "id"
in those cases)
This is done to attach extra metadata to the entire thread; currently
that's just the time it was last backfilled, but more may be added in
the future (the context collection associated with this thread, for
example)
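As an illustration only, the entity could look roughly like this (the
"BackfilledAt" property name and everything else not mentioned above
are assumptions):

```csharp
using System;
using System.Collections.Generic;

// A thread is now a real row instead of an implicit grouping over the
// notes' "threadId" column.
public class NoteThread
{
    public required string Id           { get; set; } // matches Note.ThreadId
    public DateTime?       BackfilledAt { get; set; } // null if never backfilled
    public List<Note>      Notes        { get; set; } = new();
}

public class Note
{
    public required string Id       { get; set; }
    public required string ThreadId { get; set; } // non-nullable, defaults to Id
    public NoteThread?     Thread   { get; set; }
}
```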
---
The data format for backfill jobs has changed in a backwards-incompatible
way since the feature was introduced. We can drop all old jobs without
causing too much trouble, as they will be re-scheduled on demand
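As an illustration, the cleanup could be a raw SQL statement in the
accompanying migration; the "jobs" table and "queue" column names here
are assumptions:

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class DropStaleBackfillJobs : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Old jobs carry an incompatible payload; dropping them is safe
        // because backfill jobs are re-scheduled on demand.
        migrationBuilder.Sql("DELETE FROM \"jobs\" WHERE \"queue\" = 'backfill';");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Irreversible: deleted jobs cannot be reconstructed.
    }
}
```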
---
Signed-off-by: Laura Hausmann <laura@hausmann.dev>
While the previous fix was likely enough, a razor-thin theoretical race condition remained. This commit fixes that race condition and simplifies some if statements across the file.
This prevents a queue worker stall when a job fails to execute due to a database exception, which would leave unsaved changes in the DbContext change tracker and prevent the job status from being set to failed.
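A minimal sketch of that failure path, assuming an EF Core DbContext
shared across the job run (the Job shape and all names here are
illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Job
{
    public string    Id               { get; set; } = "";
    public JobStatus Status           { get; set; }
    public string?   ExceptionMessage { get; set; }
}

public enum JobStatus { Queued, Running, Completed, Failed }

public static class QueueWorker
{
    // Run one job and record a failure without stalling the queue.
    public static async Task RunJobAsync(DbContext db, Job job, Func<Task> process)
    {
        try
        {
            await process();
        }
        catch (Exception e)
        {
            // A database exception can leave unsaved entities in the
            // change tracker. Unless they are discarded, the SaveChanges
            // call below fails the same way, the job never gets marked as
            // failed, and the worker stalls on it. Clear the tracker first.
            db.ChangeTracker.Clear();

            job.Status           = JobStatus.Failed;
            job.ExceptionMessage = e.Message;
            db.Update(job); // re-attach; Clear() detached the entity
            await db.SaveChangesAsync();
        }
    }
}
```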