The CLI documentation states that rxdelay is a value between 0-20, with 0 being the default and disabling the feature. What isn't documented is that values between 0 and 1 invert the delay's relationship to SNR. If a user sets, for instance, an rxdelay of 0.1, the following table of delays would be applied, delaying high-SNR packets and leaving poor-SNR packets undelayed. This makes auto pathing choose the poorest route by design.
The formula that dictates the relationship above is here.
To fix this, I suggest we restrict the value range to 1-20, with 1 yielding the disabled state where there's no delay based on SNR. Based on the math, if rxdelay is set to 1 the delay is always 0 ms.
I would welcome an open discussion on whether this feature should remain disabled by default. Since auto-pathing relies on the first route that arrives, why would we not want repeaters to delay poorer-SNR packets by default, biasing toward the best path? Perhaps a lower value where the scaling is less significant, such as 1.5 or 2, should be the default for repeaters?
Once this issue is addressed, I'll open a separate PR to fill in the missing context in the CLI manual, as nothing about how this truly functions is currently explained there.