The current decoding processes assume that sample_size is always an integer value (and, at the moment, we also assume it is in nanoseconds, although the unit doesn't matter too much). This works fine for the digitisers currently in use, but it is causing problems when decoding new hardware (LeCroy oscilloscopes being the main culprit) that stores the sample size as a float < 1.
There should be a check on sample_size to ensure that if the value would be corrupted by an integer cast (for example, 0.4 --> 0, or 1.9 --> 1), a warning is raised at the very least, or preferably the value is handled in some sensible way.
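One way to implement such a check is sketched below, assuming the decoding code is Python. The function name `validate_sample_size` is hypothetical, not an existing API; the idea is simply to compare the raw value against its integer cast and warn when they differ.

```python
import math
import warnings


def validate_sample_size(sample_size):
    """Hypothetical helper: warn when casting sample_size to int
    would change its value (e.g. 0.4 -> 0, or 1.9 -> 1)."""
    as_int = int(sample_size)
    if not math.isclose(sample_size, as_int):
        warnings.warn(
            f"sample_size {sample_size} is not an integer; "
            f"casting would truncate it to {as_int}",
            UserWarning,
        )
    # Return the raw value so callers can decide how to handle it
    # (keep as float, rescale units, etc.) instead of silently truncating.
    return sample_size
```

A stricter variant could raise an exception, or rescale the units (e.g. store sub-nanosecond sample sizes in picoseconds) so the value remains an integer.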