pipeTo(): the stream queues should be used #653
Is this because we never explicitly state that you must use (the equivalent of) WritableStreamDefaultWriterWrite / the counterpart reader operations? But aren't the "backpressure must be enforced" requirements effective in enforcing this? Or is the issue that, since we don't use WritableStreamDefaultWriterWrite, an implementation might directly "write" by sending to the underlying sink? I guess we should state what we mean by "write" then. Although we also want to leave room for optimization, e.g. bypassing the queue when eliding intermediate identity transforms or similar... Not sure what to do here.
I guess this is related to #618 (comment).
My first implementation for Chrome didn't start a read() until the previous WritableStreamDefaultWriterWrite() had finished. It was totally compliant and passed all the tests, but had exactly this problem.
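The pattern described above can be sketched as follows. This is an illustrative reconstruction, not Chrome's actual code: a pipe loop that awaits each write before issuing the next read is spec-compliant, but it never lets the streams' queues fill, so any intermediate queuing buys nothing.

```javascript
// Illustrative sketch of the compliant-but-undesirable pattern (not real
// Chrome code). Runs on any runtime with WHATWG streams, e.g. Node 18+.
async function naivePipeTo(readable, writable) {
  const reader = readable.getReader();
  const writer = writable.getWriter();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // The next read() is delayed until this write settles, so the source is
    // only ever asked for one chunk at a time and queues stay empty.
    await writer.write(value);
  }
  await writer.close();
}
```

A conforming but queue-aware pipe would instead keep reading while earlier writes are still pending, pausing only when the backpressure signals say to.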
I have an idea for a test, although not the spec wording to go with it.

```javascript
readable
  .pipeThrough(new TransformStream(undefined, new CountQueuingStrategy({ highWaterMark: 64 })))
  .pipeTo(writable);
```

Initially writable blocks, and readable produces one chunk immediately for every call to pull(). After 64 chunks have been created, readable blocks (pull() returns a Promise that doesn't resolve) and writable starts accepting chunks. Expect 64 chunks to be written to the writable. What do you think? Is it reasonable to expect every implementation to pass this test, or is it okay for an implementation to deliver <64 in this case?
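The proposed test might be sketched like this. It is a sketch only, not the actual web-platform test: names such as `releaseWritable` and the gate mechanism are invented, and it assumes a runtime with WHATWG streams (e.g. Node 18+).

```javascript
// Illustrative sketch of the proposed test; names are invented.
const HWM = 64;
const chunksWritten = [];
let pulls = 0;

let releaseWritable;
const gate = new Promise((resolve) => { releaseWritable = resolve; });

// Source: produces one chunk synchronously per pull() for the first HWM
// pulls, then returns a promise that never settles (the source "blocks").
const readable = new ReadableStream({
  pull(controller) {
    if (pulls < HWM) {
      controller.enqueue(pulls++);
      return;
    }
    return new Promise(() => {});
  }
});

// Sink: blocked until releaseWritable() is called, then accepts chunks.
const writable = new WritableStream({
  async write(chunk) {
    await gate;
    chunksWritten.push(chunk);
  }
});

// With the transform's writable-side queue at HWM, a pipeTo() that reads
// eagerly lets the source emit all HWM chunks even while the sink is blocked.
const pipePromise = readable
  .pipeThrough(
    new TransformStream(undefined, new CountQueuingStrategy({ highWaterMark: HWM }))
  )
  .pipeTo(writable);
// Later: call releaseWritable() and expect all 64 chunks to reach the sink.
```

An implementation that only reads when the sink can accept a chunk produces far fewer than 64 chunks while the sink is blocked, which is what this arrangement is meant to catch.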
I see. So what we really want is some suggestion that you read ASAP, using only the "backpressure must be enforced" signals instead of any other signals. Here would be my spec phrasing; I would nest it under "Backpressure must be enforced".
I think it's probably reasonable to expect it to pass "eventually". Codifying that in a specific test (e.g. do we give them a grace period?) will be a bit tricky. But I think it's reasonable to adopt an approach of adding such a test, adding a code comment that it tests a "should" requirement, and seeing if anyone complains.
SGTM. My idea is that the test framework provides the grace period: if 64 chunks don't show up before the test times out, you lose. Actually, 64 is unnecessarily large; I think I'll make the HWM 16. Or maybe 4. No point in burning CPU. I am assigning myself. I don't think I'll get to this today, but I should be able to do it soon.
Fixes #653. This also removes the heuristic allowance for determining flow rate, which was somewhat in contradiction with the instruction added here not to delay reads/writes. In practice, such a heuristic would not give many gains: it would simply allow backpressure to be applied slightly sooner.
An implementation of pipeTo() that doesn't make use of the queues in the streams is compliant with the standard. Consider a direct pipe ([1]) versus the same pipe with extra queuing inserted between the two streams ([2]).

Two issues:

1. It is reasonable for content authors to expect that the extra queuing in [2] will stop the pipe from stalling due to burstiness in the sink. However, user agents are permitted to treat the two cases as exactly the same, which I think hurts the predictability of the platform.
2. Similarly, adding a queue to a ReadableStream should be effective in mitigating a bursty or high-latency source.
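The snippets originally labelled [1] and [2] are not preserved in this excerpt. A plausible reconstruction, assuming the extra queue was spliced in via an identity TransformStream (the high-water mark of 64 is an example value; requires a runtime with WHATWG streams such as Node 18+):

```javascript
// Hypothetical reconstruction of the two configurations, not the original code.
const makeReadable = () =>
  new ReadableStream({
    start(controller) {
      for (let i = 0; i < 4; i++) controller.enqueue(i);
      controller.close();
    }
  });

const received = [];
const makeWritable = () =>
  new WritableStream({ write(chunk) { received.push(chunk); } });

const done = (async () => {
  // [1] Direct pipe: queuing is only whatever the two streams themselves have.
  await makeReadable().pipeTo(makeWritable());

  // [2] The same pipe with an explicit queue spliced in via an identity
  // TransformStream; its writable-side queue is intended to absorb
  // burstiness in the sink (or, placed on the readable side, in the source).
  await makeReadable()
    .pipeThrough(
      new TransformStream(undefined, new CountQueuingStrategy({ highWaterMark: 64 }))
    )
    .pipeTo(makeWritable());
})();
```

The complaint in the issue is that a user agent may execute [2] exactly as if it were [1], so the author-visible queue has no observable effect on stalling behaviour.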