New priority queue & exponential backoff client #4745
Conversation
* main:
  meta: fix js2ts check
  meta: add support for TypeScript plugins (#4640)
nice simplifications! makes it so much easier to follow the code when we have a simple promise-based queue like p-queue
this.requests = this.opts[internalRateLimitedQueue]
} else {
  this.requests = new RateLimitedQueue(this.opts.limit)
  this.uppy.queue.concurrency = this.opts.limit
it seems odd for a plugin to mutate a deep property on the central uppy instance like this. what if more plugins were to do this, won't it lead to nondeterministic behaviour?
It's the only way to maintain backwards compatibility. If we want to make breaking changes to this API as well, that should be discussed.
ah ok, so before this PR we had one queue per plugin, but now the goal is to have a single queue. I think this has the side effect that when uploading to multiple destinations there will be lower performance, because a max of 6 requests will be enforced in total (as opposed to a max of 6 requests per destination). But I guess it's an acceptable trade-off in order to get maintainable code. In the future we might want to implement a multi-queue which dispatches requests to the corresponding queue based on the domain name the request will connect to, as I believe that's what the browser's 6-connection limit is based on (per domain)
We don't support multiple uploaders, so multiple destinations can only mean to Companion or your own backend for signing, which is only a thing on multipart. I wouldn't overcomplicate the queue in core for one upload plugin. Instead I think it's best we wait till S3 supports HTTP/2.0 and we can have high limits everywhere.
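For reference, the per-domain multi-queue idea floated above could be sketched like this. All names here (`SimpleQueue`, `MultiQueue`) are hypothetical and not part of Uppy; it only illustrates dispatching tasks to a concurrency-limited queue keyed by hostname:

```javascript
// Minimal concurrency-limited queue: runs at most `concurrency` tasks at once.
class SimpleQueue {
  constructor(concurrency) {
    this.concurrency = concurrency
    this.active = 0
    this.pending = []
  }

  add(fn) {
    return new Promise((resolve, reject) => {
      this.pending.push({ fn, resolve, reject })
      this.#next()
    })
  }

  #next() {
    if (this.active >= this.concurrency || this.pending.length === 0) return
    const { fn, resolve, reject } = this.pending.shift()
    this.active++
    Promise.resolve()
      .then(fn)
      .then(resolve, reject)
      .finally(() => {
        this.active--
        this.#next()
      })
  }
}

// Dispatches each request to a per-hostname queue, mirroring the browser's
// per-origin connection limit.
class MultiQueue {
  constructor(concurrencyPerDomain = 6) {
    this.concurrencyPerDomain = concurrencyPerDomain
    this.queues = new Map()
  }

  add(url, fn) {
    const { hostname } = new URL(url)
    if (!this.queues.has(hostname)) {
      this.queues.set(hostname, new SimpleQueue(this.concurrencyPerDomain))
    }
    return this.queues.get(hostname).add(fn)
  }
}
```

Whether the extra bookkeeping is worth it depends on how often people actually upload to multiple destinations, as discussed above.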
...options,
onTimeout: () => {
  const seconds = Math.ceil(options.timeout / 1000)
  const error = new Error(this.i18n('uploadStalled', { seconds }))
Not sure if we should put i18n strings in error messages. Error objects are not user-facing but developer-facing. Having error messages in different languages could make debugging production issues harder.
This was already the case; I just moved the code. It is user-facing though, people see this message.
Ok, I see. I just don't think that Error objects (and their message) should be user-facing, but yea we can fix it in a different PR
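One possible direction for that follow-up PR: keep the `Error` message in English for logs, and attach a stable error code that the UI layer translates for the user. A sketch with an illustrative i18n table (not Uppy's actual API):

```javascript
// Developer-facing error: message stays in English for logs and debugging,
// while `code` and `details` let the UI render a translated message.
class UploadStalledError extends Error {
  constructor(seconds) {
    super(`Upload stalled for ${seconds} seconds`)
    this.code = 'uploadStalled'
    this.details = { seconds }
  }
}

// UI layer: a hypothetical message table keyed by error code.
const messages = {
  uploadStalled: ({ seconds }) => `Upload stalled for ${seconds} seconds.`,
}

function userMessage(err) {
  const translate = messages[err.code]
  return translate ? translate(err.details) : 'An unknown error occurred.'
}
```

This way debugging production issues stays language-independent, while users still see a localised string.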
* @returns {AbortController}
* The abort signal.
*/
export function getUppyAbortController(uppy, id, additionalSignal) {
Suggested change:
- export function getUppyAbortController(uppy, id, additionalSignal) {
+ export function createUppyAbortController(uppy, id, additionalSignal) {
xhr.send(formData)
await this.uppy.queue.add(async () => {
  return uppyFetch(endpoint, { ...this.opts, body, signal })
couldn't we just call uppyFetch here? Why do we need it to return a function?
Suggested change:
- return uppyFetch(endpoint, { ...this.opts, body, signal })
+ return this.#uppyFetch(files)(endpoint, { ...this.opts, body, signal })
})
if (id) {
  uppy.on('file-removed', (file) => {
won't this leak event listeners? we never `uppy.off` any of these events as far as I can see?
const host = getSocketHost(file.remote.companionUrl)
const socket = new Socket({ target: `${host}/api/${token}`, autoOpen: false })
const { capabilities } = this.uppy.getState()
const deferred = defer()
why do we need defer here? can't we just use the new Promise constructor? I thought defer was considered an anti-pattern
Mostly because we need to return the promise itself in the queue add:

this.uppy.queue.add(() => {
  if (file.isPaused) {
    socket.send('pause', {})
  } else {
    socket.open()
  }
  return deferred.promise
}, { signal: controller.signal })

But it could probably be rewritten.
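One way it could be rewritten: do the work inside the executor of a plain `new Promise`, so no deferred object is needed. A generic sketch (the `defer` and `addTask` helpers here are illustrative, not Uppy's code):

```javascript
// The deferred pattern: resolve/reject escape the executor and are stored
// on an object, to be called later from elsewhere (e.g. socket handlers).
function defer() {
  let resolve, reject
  const promise = new Promise((res, rej) => {
    resolve = res
    reject = rej
  })
  return { promise, resolve, reject }
}

// Equivalent without defer: start the work inside the executor and hand it
// the settle callbacks directly, so resolve/reject never float free.
function addTask(startWork) {
  return new Promise((resolve, reject) => {
    startWork({ onSuccess: resolve, onError: reject })
  })
}
```

In the snippet above, the queue task could return such a `new Promise` directly and wire the socket's success/error events to `resolve`/`reject` inside the executor.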
'content-type',
'uppy-auth-token',
]
const fallbackAllowedHeaders = ['accept', 'content-type', 'uppy-auth-token']
a lot of unrelated changes - makes it hard to review. we should get our eslint config and prettier fixed
* @returns {Promise<XMLHttpRequest>}
* A Promise that resolves to the response text if the request succeeds, and rejects with an error if it fails.
*/
export function fetcher(url, options = {}) {
I see that you tried to use `ky` first, but we need upload progress, which is not supported. I think if we are implementing our own request abstraction, it should ideally have unit tests, because it becomes such an important core component of Uppy. Could we alternatively look into using `axios` instead of implementing our own wrapper? axios is extremely popular and seems to support upload events too.
I prefer our own abstraction over a library, especially since it's mostly just a promise wrapper. But you're right about tests!
Closing this. Breaking up the relevant parts into smaller PRs. Not doing the queue for now.
Problems

Consider this example, which is already duplicated between tus and S3 multipart, and shows the problem of needing to manually orchestrate `setTimeout`s and call `rateLimit` with the right values. It also keeps track of a `retryDelayIterator` and a `previousRetryDelay` to compute the next retry timeout. Because it's tightly coupled, it's hard to reuse, and regardless it exposes complexities which should be hidden away.
uppy/packages/@uppy/aws-s3-multipart/src/index.js
Lines 132 to 192 in ab0c656
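The retry orchestration described above is the kind of logic this proposal wants hidden away. A generic sketch of retry with exponential backoff (the `withRetry` name, delay values, and options are illustrative, not the PR's actual `fetcher`):

```javascript
// Retries `task` up to `retries` times, doubling the delay between attempts:
// baseDelay, 2*baseDelay, 4*baseDelay, ... The caller just awaits one promise
// and never sees setTimeout bookkeeping or retry-delay iterators.
async function withRetry(task, { retries = 3, baseDelay = 100 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await task()
    } catch (err) {
      if (attempt >= retries) throw err
      const delay = baseDelay * 2 ** attempt
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
}
```

The promise returned by `withRetry` only settles once all retries are exhausted, which is exactly the property the queue relies on below.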
Solution

Separation of concerns: a priority queue, which does that and nothing else, and a `fetcher` util with exponential backoff built in. Conceptually, the queue shouldn't care whether a retry is needed or not; it just adds promises and removes them when they resolve/reject. It is the promise inside the queue that should not resolve/reject until it has retried.

- `p-queue` added to `@uppy/core` as a centralised priority queue.
- `fetcher` added in `@uppy/utils`, a wrapper around `XMLHttpRequest` with exponential backoff.
- `@uppy/xhr-upload` refactored to leverage both.

Notes

- Removed `EventManager`, which is unnecessary abstraction as far as I'm concerned.
- Removing `RateLimitedQueue` is not in this diff yet, as the other packages still depend on it.

Breaking changes
Pitfalls

Why doesn't `fetcher` use `fetch`? I started this refactor using `ky`, but I found out that `fetch` can't consistently measure upload progress (it can for download). There is theoretically a way, by turning your file into a `ReadableStream` or pushing it through a `TransformStream` and using the `duplex: 'half'` option on `fetch`. However, that won't work for `FormData` (the default of xhr-upload) and it will only work over HTTP/2.0 or 3.0 (which S3 doesn't use).

Adoption plan
It's important that there is not just one person who feels comfortable with the queue and retrying. Therefore I propose the following:

- `@uppy/tus` (since tus handles uploading internally, only the queue needs to be swapped)
- `@uppy/aws-s3-multipart` (not easy)
- `@uppy/companion-client`. Doesn't have a queue, but would need to use the new `fetcher` util for retrying. Note that in "Companion+client stability fixes, error handling and retry" #4734 custom logic with `p-retry` is added, but this should only be temporary and we should end up with one util instead of different implementations.