First of all, maybe it is my misunderstanding of the paper, so I hope somebody could explain it to me, thanks!
In the paper, the loss is defined as

L = log p(x | z_q(x)) + ||sg[z_e(x)] - e||_2^2 + beta * ||z_e(x) - sg[e]||_2^2

where sg is the stop-gradient operator and e is the codebook defined at the beginning of the section. So, in the paper, both the codebook loss (the second term) and the commitment loss (the third term) are the MSE between z_e(x) and e.
However, in the implementation, they are computed as the MSE between z_e(x) (inputs) and z_q(x) (quantized), where the variable quantized holds the quantized encoding of the image, namely z_q(x):
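(The original snippet is not reproduced here; the following is a minimal sketch of that pattern, assuming the names inputs for z_e(x), quantized for z_q(x), and commitment_cost for the paper's beta. It is not a verbatim copy of the repository code.)

```python
import tensorflow as tf

def vq_losses(inputs, quantized, commitment_cost):
    """inputs = z_e(x) and quantized = z_q(x), both of shape [batch, H', W', D]."""
    # codebook loss: pulls the codebook entries toward the (frozen) encoder output
    q_latent_loss = tf.reduce_mean((quantized - tf.stop_gradient(inputs)) ** 2)
    # commitment loss: pulls the encoder output toward the (frozen) codebook entries
    e_latent_loss = tf.reduce_mean((tf.stop_gradient(quantized) - inputs) ** 2)
    return q_latent_loss + commitment_cost * e_latent_loss
```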
Are they actually the same thing? Why?
If what the paper states is right, how do the dimensions match between z_e(x) (H' * W' * D) and e (K * D)?
If the implementation is right, how can z_q(x) (quantized) backprop, since its calculation contains an argmin?
Probably, e in the loss formula in the paper actually stands for z_q(x). The author did not write it as z_q(x) because its calculation involves argmin, which is non-differentiable. However, it is not a problem to implement it naively as z_q(x), because the implementation stops the gradient (tf.stop_gradient in TensorFlow, .detach() in PyTorch) before the argmin operation, so it works as intended and causes no bug.
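To see why backprop works even though z_q(x) contains an argmin, here is a minimal sketch of the straight-through trick (a hypothetical TensorFlow helper; the names quantize, inputs, and codebook are illustrative, not taken from the repository):

```python
import tensorflow as tf

def quantize(inputs, codebook):
    """inputs: z_e(x), shape [batch, H', W', D]; codebook: e, shape [K, D]."""
    flat = tf.reshape(inputs, [-1, tf.shape(codebook)[1]])
    # squared distance from every encoder vector to every codebook entry, [N, K]
    distances = (
        tf.reduce_sum(flat ** 2, axis=1, keepdims=True)
        - 2.0 * tf.matmul(flat, codebook, transpose_b=True)
        + tf.reduce_sum(codebook ** 2, axis=1)
    )
    indices = tf.argmin(distances, axis=1)  # non-differentiable
    quantized = tf.reshape(tf.gather(codebook, indices), tf.shape(inputs))
    # straight-through estimator: the forward pass uses the quantized values,
    # the backward pass copies the decoder's gradient straight to inputs
    return inputs + tf.stop_gradient(quantized - inputs)
```

In the forward pass the result equals the nearest codebook entries, but in the backward pass tf.stop_gradient makes the whole expression behave like inputs, so the decoder's gradient reaches the encoder and the argmin itself is never differentiated.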
That is my new understanding.
Please close this issue if the admins think this explanation is right.