```python
mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
```
Hi, great work on ViDAR! I have a quick question regarding the image normalization settings. In the config, the normalization for image inputs is set with std=1, which means no standardization is applied to the pixel values (i.e., only mean subtraction is performed).
Could you clarify the reason behind this design choice? Was it found to be more effective during training, or is it related to the specific properties of the generative learning framework in latent space?
Thanks in advance for your explanation!
Reference: `ViDAR/projects/configs/vidar_pretrain/nusc_fullset/vidar_full_nusc_1future.py`, line 46 at commit `4d773e6`.
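For context, here is a minimal sketch (plain Python, not the repo's actual pipeline code) showing that with per-channel std fixed to 1.0, the usual `(value - mean) / std` normalization collapses to mean subtraction only; the helper name `normalize` is hypothetical:

```python
# Per-channel means from the config quoted above; std of 1.0 everywhere.
MEAN = [103.530, 116.280, 123.675]
STD = [1.0, 1.0, 1.0]

def normalize(pixel):
    """Standard per-channel normalization: (value - mean) / std.

    With STD == [1.0, 1.0, 1.0] the division is a no-op, so the
    result is identical to plain mean subtraction.
    """
    return [(v - m) / s for v, m, s in zip(pixel, MEAN, STD)]

pixel = [120.0, 130.0, 140.0]  # one example pixel
print(normalize(pixel))        # same as [v - m for v, m in zip(pixel, MEAN)]
```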