[webgl] Remove BufferWithAccessor #1864
Conversation
Actually, checking the PR, I revise my comment. The goal was to remove these extra methods from Buffer, as they are not part of a minimal GPU API.
I'd be happy to do this however you prefer. Shall we just delete the methods for now (or see what that looks like in a PR, at least) and deal with changes in Transform and Deck.gl as we come to those? Or move the methods into utility functions somewhere?
Yes. But before breaking deck, I think we'd better get @Pessimistress's opinion. It may just be a question of timing: when to land this so it doesn't disrupt some other refactor in deck.
I've updated the top post with status for each method. I've left getData() and subData() on WEBGLBuffer for now; I'm not sure what to do with those two yet. Next steps on my side will be to add more unit test coverage and fix the CI failure.
Looking through DeckGL, I found things depending on these methods:
I think we'll probably want to remove use of …
I would focus on polishing the new Buffer API (which is WebGPU-aligned) so that it does what we need and we don't need the old methods.
https://github.com/visgl/luma.gl/blob/master/modules/core/src/adapter/resources/buffer.ts#L79
For reallocate, deck.gl could perhaps create its own helper; it might be doable using existing methods?
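As a rough illustration of that idea, a deck.gl-side helper could grow a buffer by creating a larger one and copying the old contents across. This is only a sketch under assumptions about the new API: the copyBufferToBuffer options shape, the Buffer usage property, and the createBuffer/createCommandEncoder signatures used here are not confirmed luma.gl code.

```ts
import type {Device, Buffer} from '@luma.gl/core';

// Hypothetical helper (not part of luma.gl): emulate reallocate() on top of the
// immutable-size Buffer API by allocating a new buffer and copying the old data.
function reallocateBuffer(device: Device, oldBuffer: Buffer, newByteLength: number): Buffer {
  if (newByteLength <= oldBuffer.byteLength) {
    return oldBuffer; // Already big enough, nothing to do.
  }

  // Assumes BufferProps accepts byteLength and usage.
  const newBuffer = device.createBuffer({byteLength: newByteLength, usage: oldBuffer.usage});

  // Assumes a WebGPU-style copyBufferToBuffer on CommandEncoder.
  const commandEncoder = device.createCommandEncoder();
  commandEncoder.copyBufferToBuffer({
    source: oldBuffer,
    destination: newBuffer,
    size: oldBuffer.byteLength
  });
  commandEncoder.finish();

  oldBuffer.destroy();
  return newBuffer;
}
```

Whether the copy step is needed at all depends on how deck.gl uses reallocate; if callers always rewrite the full contents afterwards, destroy-and-create alone would do.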
One other thought on the API – perhaps methods …
// TODO(donmccurdy): Do we have tests to confirm this is working?
const commandEncoder = source.device.createCommandEncoder();
commandEncoder.copyTextureToBuffer({
  source: source as Texture,
  width: sourceWidth,
  height: sourceHeight,
  origin: [sourceX, sourceY],
  destination: target,
  byteOffset: targetByteOffset
});
I'll try to get some tests for this added ASAP.
If we have commandEncoder.copyTextureToBuffer, do we still need readPixelsToBuffer?
I suspect that deck only uses readPixelsToArray, so maybe just port that to use commandEncoder and drop the other wrapper?
Eventually commandEncoder.copyTextureToBuffer might be all we need, yes! No strong preference there. I'm working on unit tests for it, and I think there are some important cases that aren't working correctly yet, but once that's more stable perhaps we can switch over.
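For reference, a sketch of what porting readPixelsToArray on top of the command encoder could look like. Everything here is an assumption about the evolving API (Texture exposing width/height, createBuffer accepting just a byteLength, readAsync returning the bytes), and the bytesPerPixel of 4 assumes an 8-bit RGBA texture.

```ts
import type {Device, Texture} from '@luma.gl/core';

// Sketch only: read back a full texture into CPU memory via the CommandEncoder path
// discussed above, instead of the old readPixelsToBuffer wrapper.
async function readPixelsToArray(device: Device, source: Texture): Promise<Uint8Array> {
  const bytesPerPixel = 4; // Assumes an RGBA8 texture; derive from the format in real code.
  const byteLength = source.width * source.height * bytesPerPixel;

  // May need explicit copy/read usage flags depending on the backend.
  const target = device.createBuffer({byteLength});

  const commandEncoder = device.createCommandEncoder();
  commandEncoder.copyTextureToBuffer({
    source,
    width: source.width,
    height: source.height,
    origin: [0, 0],
    destination: target,
    byteOffset: 0
  });
  commandEncoder.finish();

  const data = await target.readAsync();
  target.destroy();
  return data;
}
```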
@@ -54,7 +54,7 @@ export class WebGPUBuffer extends Buffer {
    );
  }

-  override async readAsync(byteOffset: number = 0, byteLength: number = this.byteLength): Promise<ArrayBuffer> {
+  override async readAsync(byteOffset: number = 0, byteLength: number = this.byteLength): Promise<Uint8Array> {
Changed to match the WEBGLBuffer implementation (and it would be my slight preference anyway).
We could also consider making readAsync take a target Uint8Array (optional? required?) and write data into that, so that calling this in the frame loop doesn't necessarily allocate a new buffer each time.
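To make that suggestion concrete, the signature could look something like the sketch below. This is hypothetical, not the API in this PR; `buffer` and `consume` are placeholders.

```ts
// Hypothetical variant: if `target` is provided, the bytes are written into it and the
// same array is returned; otherwise a fresh Uint8Array is allocated as today.
interface ReadableBuffer {
  readAsync(byteOffset?: number, byteLength?: number, target?: Uint8Array): Promise<Uint8Array>;
}

declare const buffer: ReadableBuffer & {byteLength: number}; // placeholder
declare function consume(data: Uint8Array): void;            // placeholder

// Frame-loop usage that reuses one scratch array and avoids a per-frame allocation.
const scratch = new Uint8Array(buffer.byteLength);
async function onFrame(): Promise<void> {
  const data = await buffer.readAsync(0, scratch.byteLength, scratch);
  consume(data); // `data` aliases `scratch`
}
```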
@@ -77,7 +77,7 @@ export abstract class Buffer extends Resource<BufferProps> {
  }

  write(data: ArrayBufferView, byteOffset?: number): void { throw new Error('not implemented'); }
-  readAsync(byteOffset?: number, byteLength?: number): Promise<ArrayBuffer> { throw new Error('not implemented'); }
+  readAsync(byteOffset?: number, byteLength?: number): Promise<Uint8Array> { throw new Error('not implemented'); }
If you haven't done so, it's probably worth your time to read up on the WebGPU spec to understand how things work there: https://www.w3.org/TR/webgpu/#buffer-mapping
Yep, have done a little work with this area of WebGPU – I don't know of anything in WebGL or WebGPU that strongly affects our choice of ArrayBuffer/Uint8Array, just trying to preserve our options for avoiding copies down the road.
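For context, this is the raw WebGPU read-back pattern from the buffer-mapping section linked above, which a WebGPU-backed readAsync would essentially wrap. This is standard WebGPU API, not luma.gl code.

```ts
// Read back a GPU buffer in WebGPU: copy into a MAP_READ staging buffer, map it,
// copy the bytes out, then unmap. MAP_READ buffers can't be combined with most other
// usages, which is why a separate staging buffer is needed.
// `source` must have been created with COPY_SRC usage.
async function readGPUBuffer(device: GPUDevice, source: GPUBuffer, byteLength: number): Promise<Uint8Array> {
  const staging = device.createBuffer({
    size: byteLength,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST
  });

  const encoder = device.createCommandEncoder();
  encoder.copyBufferToBuffer(source, 0, staging, 0, byteLength);
  device.queue.submit([encoder.finish()]);

  await staging.mapAsync(GPUMapMode.READ);
  // Copy out before unmapping; the mapped ArrayBuffer is detached by unmap().
  const result = new Uint8Array(staging.getMappedRange().slice(0));
  staging.unmap();
  staging.destroy();
  return result;
}
```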
@@ -183,7 +185,9 @@ export class WEBGLBuffer extends Buffer {
  // static INDIRECT = 0x0100;
  // static QUERY_RESOLVE = 0x0200;

-  function getWebGLTarget(usage: number): GL.ARRAY_BUFFER | GL.ELEMENT_ARRAY_BUFFER | GL.UNIFORM_BUFFER {
+  function getWebGLTarget(
Nit: Wonder why your editor/prettier cut this line differently if nothing was changed...
Since Prettier has been disabled repo-wide, I think there are a lot of formatting changes stacked up from the last few months of changes.
@@ -230,10 +229,14 @@ export class WEBGLVertexArray extends VertexArray {
  const byteLength = constantValue.byteLength * elementCount;
  const length = constantValue.length * elementCount;

+  if (this.buffer && byteLength !== this.buffer.byteLength) {
I think we should just destroy the old buffer and create a new one, instead of reallocate.
Agreed, better to destroy/create than reallocate. I wasn't sure whether that was the responsibility of webgl-vertex-array or the application in this context... we may want to decide if/when we find a test that hits this error.
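A minimal sketch of that destroy-and-recreate path, assuming the surrounding WEBGLVertexArray code keeps a device reference and that a plain byteLength is enough for createBuffer; the details would follow the actual class.

```ts
// Inside the constant-attribute path: if the cached buffer exists but is the wrong
// size, destroy it and create a fresh one instead of reallocating in place.
if (this.buffer && byteLength !== this.buffer.byteLength) {
  this.buffer.destroy();
  // Assumes `this.device` is available here; exact BufferProps may differ.
  this.buffer = this.device.createBuffer({byteLength});
}
```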
It used to be an issue where desktop GPU drivers didn't let you use a constant for attribute 0. Not sure if that is still a problem.
But in WebGPU, there is no support whatsoever for constant (disabled) attributes, so we'll need to manufacture a buffer for every constant.
deck wants to continue to leverage constant attributes on WebGL for the memory savings...
(To allow constants in WebGPU we'd probably need fancy shader transformation code that replaces attributes with uniforms, but then those uniforms need to be in dynamically generated and populated uniform buffers so it is fairly involved)
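To illustrate what "manufacture a buffer for every constant" means in practice, here is a hedged sketch; the createBuffer({data, usage}) form and the Buffer.VERTEX constant are assumptions about the new API, not confirmed luma.gl code.

```ts
import {Buffer, type Device} from '@luma.gl/core';

// Expand a constant attribute value into a real per-vertex buffer, since WebGPU has
// no concept of disabled/constant vertex attributes: the constant is repeated once
// per element.
function constantToVertexBuffer(device: Device, constantValue: Float32Array, elementCount: number): Buffer {
  const data = new Float32Array(constantValue.length * elementCount);
  for (let i = 0; i < elementCount; i++) {
    data.set(constantValue, i * constantValue.length);
  }
  return device.createBuffer({data, usage: Buffer.VERTEX});
}
```

The memory cost of this, compared with WebGL's true constant attributes, is exactly the saving deck.gl wants to keep on the WebGL path.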
Corresponding changes for DeckGL:
Removes the WebGL-specific BufferWithAccessor subclass.

I'm not sure whether the goal was to migrate its functionality to WEBGLBuffer, or to remove the cases that depend on it (e.g. not resizing buffers). Originally I moved the extra methods to WEBGLBuffer and added @deprecated tags to each, but I was happy to change that; as of the latest update, use of the additional methods has been replaced where possible, though a couple of methods (getData, subData) remain. Build and tests pass, but I haven't tested beyond that yet.

Status

- WEBGLBuffer#reallocate() - Deleted, buffer size is immutable.
- WEBGLBuffer#initialize() - Deleted, buffer size is immutable.
- WEBGLBuffer#bind() - Deleted, use CommandEncoder or other APIs.
- WEBGLBuffer#unbind() - Deleted, use CommandEncoder or other APIs.
- WEBGLBuffer#copyData() - Deleted, use CommandEncoder APIs.
- WEBGLBuffer#getData() - Deleted, use .readAsync().
- WEBGLBuffer#setData() - Deleted, buffer size is immutable.
- WEBGLBuffer#subData() - Deleted, use .write().

One notable thing here is that WEBGLBuffer is untyped, and methods like getData() (if we keep it) do not know the type of the underlying data. You can either pass the correct TypedArray class to getData(), or wrap the result in a new ArrayBufferView after receiving it.
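For example, the "wrap after receiving" option might look like the snippet below, where `buffer` stands for any Buffer instance and Float32Array is just an example element type; it assumes the returned view's byte offset is suitably aligned.

```ts
// readAsync() returns raw bytes; the caller reinterprets them with the TypedArray
// type it knows the buffer actually holds.
const bytes = await buffer.readAsync();
const values = new Float32Array(
  bytes.buffer,
  bytes.byteOffset,
  bytes.byteLength / Float32Array.BYTES_PER_ELEMENT
);
```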