Transferable streams: the double transfer problem #1063
This has left a bit of a roller coaster and created more issues that need to be fixed; it doesn't sound like an easy task to write a spec that works across multiple threads and pipes 😅. So here is a rough workaround for those who stumble upon this issue and are looking for a solution that works right now.

MessageChannel solution:

```js
// this method passes the readableStream into a MessageChannel
// in order to create a more direct communication
var worker = new Worker(`data:text/javascript,onmessage = evt => postMessage(evt.data, [evt.data]);`);

var rs = new ReadableStream({
  start(c) {
    c.enqueue('a')
    c.enqueue('b')
    c.enqueue('c')
    c.close()
  }
})

// send the transferable readableStream to a messageChannel port
var mc = new MessageChannel()
mc.port1.postMessage(rs, [rs])
mc.port1.close()

// post the port instead of the transferred stream
worker.postMessage(mc.port2, [mc.port2])

worker.onmessage = evt => {
  var port = evt.data
  // you can now terminate the worker, since you have a MessageChannel port
  // to listen to instead; the worker doesn't have anything to do with the
  // transferred readableStream anymore
  worker.terminate()

  port.onmessage = evt => {
    port.close()
    port.onmessage = null
    // success
    evt.data.pipeTo(new WritableStream({
      write(x) {
        console.log(x)
      },
      close() {
        console.log('closed')
      }
    }))
  }
}
```
I think the proper model here is that transferring a ReadableStream transfers the ability to read from the stream: the end that writes to the stream is not transferred, and the connection between the writing end and the reading end remains in place. Isn't this the same model as for message channels?
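As a rough illustration of that model (not how the spec actually implements it; the identity TransformStream and the worker script name are just for the example), the writing end stays behind while only the readable end moves:

```js
// Rough sketch: the underlying source (the writing side, via an identity
// TransformStream) stays in this realm; only the readable end is transferred,
// and the connection between the two ends is expected to survive the transfer.
const { readable, writable } = new TransformStream()

const writer = writable.getWriter()
writer.write('produced in the original realm') // the writing end never moves
writer.close()

// Only the ability to read is handed to the worker.
const worker = new Worker('./consumer-worker.js') // illustrative worker script
worker.postMessage(readable, [readable])
```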
That's the idea, yes. The problem is mostly technical: we have to figure out how to make that work. There's a whole discussion on the original issue about all the peculiarities we have to deal with, such as:
Here's a sketch of an approach which reconciles the atomic nature of transfer with the asynchronous nature of streams. I'm going to talk about the WritableStream case because I think it is the harder of the two.
After step 9, realm A has been "unhooked" and can safely be destroyed. There can be an arbitrary delay for queued writes to complete before A is "unhooked". Maybe we can force the queued chunks from A into O's queue by ignoring backpressure, to make this delay as short as possible?
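In userland terms, "ignoring backpressure" roughly means writing every queued chunk without waiting on `writer.ready` between writes. A hypothetical sketch (`flushQueuedChunks` and `queuedChunks` are illustrative names, not anything from the spec):

```js
// Hypothetical helper: hand over the chunks already queued on the old ("A")
// side to the destination writable without honouring backpressure, so that
// the old realm can be unhooked as quickly as possible.
async function flushQueuedChunks(queuedChunks, writable) {
  const writer = writable.getWriter()
  // Deliberately not awaiting writer.ready between writes: every queued chunk
  // is written immediately, no matter how full the destination's queue is.
  const writes = queuedChunks.map(chunk => writer.write(chunk))
  await Promise.all(writes)
  writer.releaseLock()
}
```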
That is an ergonomic issue: one calls postMessage, everything looks fine, so one navigates away thinking everything is good, but too quickly, so that the unhooking does not happen.
Hmm, delaying the transfer of the queued chunks is indeed quite risky. While the proposed solution could work for …

```js
const controller = new AbortController();
const reader = readable.getReader({ signal: controller.signal });
reader.read(); // causes the cross-realm readable to send a "pull" message
controller.abort(); // discards the pending read request
// At this point, we have no pending read requests, but we are already pulling...
// After some time, we receive a "chunk" message and put the chunk in the queue.
// Which means that if you now transfer the stream...
worker.postMessage(readable, { transfer: [readable] });
// ...we have to do something with the chunks in the queue first.
```

I think it's better if we transfer the entire queue synchronously as part of the transfer steps. In the transfer-receiving steps, we would re-enqueue those chunks with …

So the transfer steps for …
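A hypothetical userland illustration of the "re-enqueue on the receiving side" idea above (`makeReceivingReadable`, `transferredChunks`, and the port-based plumbing are illustrative, not spec machinery):

```js
// Sketch: the receiving side rebuilds its readable from (a) the chunks that
// were carried along synchronously with the transfer and (b) further chunks
// arriving over the cross-realm message port.
function makeReceivingReadable(transferredChunks, port) {
  return new ReadableStream({
    start(controller) {
      // Re-enqueue the chunks that came with the transfer itself.
      for (const chunk of transferredChunks) {
        controller.enqueue(chunk)
      }
      // Later chunks keep flowing over the port as before.
      port.onmessage = evt => controller.enqueue(evt.data)
    }
  })
}
```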
Is step 9 needed? Can't A close itself immediately after step 7?
The original transferable streams issue has been closed now that support has landed; however, the discussion of the double transfer problem that started at #276 (comment) and consumed the rest of the thread is not concluded.
Summarising the issue, the following code works fine:
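A rough reconstruction of that scenario, for illustration (assuming an echo worker that immediately transfers the stream back; everything except the `transferred_rs` name is illustrative):

```js
// Echo worker: transfers whatever it receives straight back to the sender.
const worker = new Worker(`data:text/javascript,onmessage = evt => postMessage(evt.data, [evt.data]);`)

const rs = new ReadableStream({
  start(c) {
    c.enqueue('a')
    c.close()
  }
})

// First transfer: main thread -> worker.
worker.postMessage(rs, [rs])

// Second transfer: worker -> main thread.
worker.onmessage = evt => {
  const transferred_rs = evt.data
  // Reading from the doubly-transferred stream works at this point.
  transferred_rs.pipeTo(new WritableStream({ write: x => console.log(x) }))
}
```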
However, if you subsequently run `worker.terminate()`, then `transferred_rs` will start returning errors. For other transferable types, no connection remains with any previous realm that the object was passed through, but in the case of streams, data is still being proxied via the worker. See the linked thread for why this is hard to fix.