
pipeTo(): the stream queues should be used #653

Closed
ricea opened this issue Jan 16, 2017 · 6 comments

ricea (Collaborator) commented Jan 16, 2017

An implementation of pipeTo() that doesn't make use of the queues in the streams is compliant with the standard. Consider

let writable = new WritableStream(burstySink); // [1]
readable.pipeTo(writable);

versus

let writable = new WritableStream(burstySink, new CountQueuingStrategy({ highWaterMark: 100 })); // [2]
readable.pipeTo(writable);

It is reasonable for content authors to expect that the extra queuing in [2] will stop the pipe from stalling due to the burstiness of the sink. However, user agents are permitted to treat these two cases as exactly the same. I think this hurts the predictability of the platform.
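For concreteness, the burstySink above might look something like this (a hypothetical sketch; the names and timings are illustrative, not part of the issue):

// Underlying sink that only drains on a periodic "tick": each write()
// completes at the next 100 ms boundary, so the sink stalls between bursts.
const burstySink = {
  write(chunk) {
    const msUntilNextBurst = 100 - (Date.now() % 100);
    return new Promise(resolve => setTimeout(resolve, msUntilNextBurst));
  }
};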

Similarly, adding a queue to a ReadableStream should be effective in mitigating a bursty or high-latency source.
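The readable-side analogue would be something like this (again a sketch; burstySource is hypothetical):

// Up to 100 chunks of slack absorbs a source that produces in bursts,
// so the pipe keeps flowing while the source is between bursts.
let readable = new ReadableStream(burstySource,
                                  new CountQueuingStrategy({ highWaterMark: 100 }));
readable.pipeTo(writable);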

Two issues:

  1. It is difficult to balance this requirement against the existing allowance that "WritableStreamDefaultWriterGetDesiredSize(writer) may be used to determine the flow rate heuristically, e.g. by delaying reads while it is judged to be 'low' compared to the size of chunks that have been typically read."
  2. Because of 1., it is difficult to phrase the requirement in a way that makes it testable.
ricea added the piping label Jan 16, 2017
domenic (Member) commented Jan 17, 2017

> An implementation of pipeTo() that doesn't make use of the queues in the streams is compliant with the standard.

Is this because we never explicitly state you must use (the equivalent of) WritableStreamDefaultWriterWrite/the counterpart reader methods? But aren't the "backpressure must be enforced" requirements effective in enforcing this? Or is the issue that, since we don't use WritableStreamDefaultWriterWrite, an implementation might directly "write" by sending to the underlying sink?

I guess we should state what we mean by "write" then. Although we also want to leave room for optimizations, e.g. bypassing the queue when eliding intermediate identity transforms or similar... Not sure what to do here.
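Roughly, the distinction being drawn, sketched in terms of the public API (illustrative only, not spec algorithm steps):

// Queue-respecting: the chunk lands in dest's internal queue, so the
// queuing strategy and its high water mark actually take effect.
const writer = dest.getWriter();
writer.write(chunk);

// Queue-bypassing: an implementation that conceptually hands the chunk
// straight to the underlying sink, so dest's queue never fills.
// (Not something author code can do; shown only to illustrate the hazard.)
underlyingSink.write(chunk, controller);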

domenic (Member) commented Jan 17, 2017

I guess this is related to #618 (comment)

ricea (Collaborator, Author) commented Jan 18, 2017

My first implementation for Chrome didn't start a read() until the previous WritableStreamDefaultWriterWrite() had finished. It was totally compliant and passed all the tests, but had exactly this problem.
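That first implementation was shaped roughly like this (a reconstruction for illustration, not the actual Chrome code):

// Naive pipe loop: each read waits for the previous write to settle,
// so dest's queue never holds more than one chunk and any extra queue
// capacity on the writable side is wasted.
async function naivePipe(readable, writable) {
  const reader = readable.getReader();
  const writer = writable.getWriter();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    await writer.write(value); // serializes the next read behind this write
  }
  await writer.close();
}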

ricea (Collaborator, Author) commented Jan 18, 2017

I have an idea for a test, although not the spec wording to go with it.

readable
  .pipeThrough(new TransformStream(undefined, new CountQueuingStrategy({ highWaterMark: 64 })))
  .pipeTo(writable);

Initially, writable blocks, and readable produces one chunk immediately for every call to pull(). After 64 chunks have been created, readable blocks (pull() returns a promise that never resolves) and writable starts accepting chunks. Expect 64 chunks to be written to the writable.
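Spelled out, the fixture could look like this (a sketch; pullCount, releaseWrites and written are illustrative names):

let pullCount = 0;
const readable = new ReadableStream({
  pull(controller) {
    if (++pullCount <= 64) {
      controller.enqueue(pullCount); // one chunk immediately per pull()
      return;
    }
    return new Promise(() => {});    // 65th pull never settles: readable blocks
  }
});

const written = [];
let releaseWrites;
const unblocked = new Promise(resolve => { releaseWrites = resolve; });
const writable = new WritableStream({
  async write(chunk) {
    await unblocked;       // initially blocked...
    written.push(chunk);   // ...accepts chunks once releaseWrites() is called
  }
});

// After readable has blocked, call releaseWrites() and expect
// written.length to reach 64.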

What do you think? Is it reasonable to expect every implementation to pass this test, or is it okay for an implementation to deliver <64 in this case?

domenic (Member) commented Jan 18, 2017

> My first implementation for Chrome didn't start a read() until the previous WritableStreamDefaultWriterWrite() had finished. It was totally compliant and passed all the tests, but had exactly this problem.

I see. So what we really want is some suggestion that you read ASAP, using only the "backpressure must be enforced" signals instead of any other signals.

Here would be my spec phrasing. I would nest it under "Backpressure must be enforced":

  • After taking into account these backpressure signals, reading and writing should be done as fast as possible; other signals should not be used to delay the continual reading/writing.

    EXAMPLE: An implementation that waits for each write to successfully complete before proceeding to the next read/write operation violates this recommendation. In doing so, such an implementation makes the internal queue of dest useless, as it ensures dest always contains at most one queued chunk.
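In code terms, the recommended shape is closer to this (a sketch; only writer.ready is allowed to delay the loop):

// Eager pipe loop: reads proceed as fast as backpressure allows. Write
// completion is not awaited before the next read, so dest's queue can
// actually fill. (Write rejections are swallowed here for brevity.)
async function eagerPipe(readable, writable) {
  const reader = readable.getReader();
  const writer = writable.getWriter();
  for (;;) {
    await writer.ready;                  // the backpressure signal
    const { value, done } = await reader.read();
    if (done) break;
    writer.write(value).catch(() => {});
  }
  await writer.close();
}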

> What do you think? Is it reasonable to expect every implementation to pass this test, or is it okay for an implementation to deliver <64 in this case?

I think it's probably reasonable to expect it to pass "eventually". Codifying that in a specific test (e.g. do we give them a grace period?) will be a bit tricky. But I think it's reasonable to adopt an approach of adding such a test, adding a code comment that it tests a "should" requirement, and seeing if anyone complains.

ricea self-assigned this Jan 19, 2017
ricea (Collaborator, Author) commented Jan 19, 2017

sgtm.

My idea is that the test framework provides the grace period. If 64 chunks don't show up before the test times out, you lose. Actually, 64 is unnecessarily large. I think I'll make the HWM 16. Or maybe 4. No point in burning CPU.

I am assigning myself. I don't think I'll get to this today, but I should be able to do it soon.

domenic added a commit that referenced this issue Mar 31, 2017
Fixes #653. This also removes the heuristic allowance for determining flow rate, which was somewhat in contradiction with the instruction added here to not delay reads/writes. In practice, such a heuristic would not give many gains: it would simply allow backpressure to be applied slightly sooner.