Hi, the filtering pipeline strictly enforces double precision (filtering.py > _check_filterable).
When dealing with long recordings at high sampling rates, people often reduce precision (e.g. to float32) to fit their storage and compute budgets, so strictly enforcing float64 feels like overkill. Was there a specific numerical reason for this choice?
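For context, SciPy's SOS filtering routines can stay in single precision when both the data and the filter coefficients are float32, so float32 filtering is at least mechanically possible. A minimal sketch (the sampling rate and band edges are illustrative, not taken from this library):

```python
import numpy as np
from scipy import signal

fs = 30_000  # sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)
x = rng.standard_normal(fs).astype(np.float32)  # 1 s of synthetic data

# signal.butter returns float64 coefficients by default; cast them so
# the whole pipeline stays in single precision.
sos = signal.butter(4, [300, 6000], btype="bandpass", fs=fs,
                    output="sos").astype(np.float32)

y = signal.sosfilt(sos, x)
print(y.dtype)  # float32
```

Whether single precision is numerically safe depends on the filter design (high-order or narrow-band IIR filters can lose accuracy in float32), which may be the rationale behind the check.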