Discussion:
FPGA Filter response issue
stu@viewpointusa.com
2008-02-28 04:40:05 UTC
I am observing a difference between the 16-bit, 24-bit, and 32-bit input options of the FPGA Butterworth filter Express function in LabVIEW FPGA.  I cannot find any reference to these differences in the documentation.  I have attached a test case showing the input vs. output of the function as well as a frequency response.
I believe all options should produce the same data.  The 24-bit and 32-bit modes would support a greater input range, but should produce the same output values if an I16 input is supplied.  Am I wrong?
 
LabVIEW 8.5
 
Stu


FPGA LP Filter response.vi:
http://forums.ni.com/attachments/ni/170/304854/1/FPGA LP Filter response.vi
Meghan M
2008-02-28 23:40:04 UTC
Hi Stu,
I apologize for not understanding your question.  If these are the same results that you are seeing when running this Butterworth filter on the FPGA, then this is definitely an issue that we need to investigate.  I have raised it with our R&D team, and they are currently looking into it.  We will keep you updated.  Thanks, Stu, any feedback is greatly appreciated!
JLewis
2008-02-29 18:10:04 UTC
Hi Stu,
The short answer is that the 24 and 32-bit modes are not bit-true with the 16-bit mode for 16-bit input data. I agree that the documentation is deficient and will work to correct that (probably via the DevZone article linked from the online help). I consulted with the original designer, who provided the explanation below. Both of us are operating under the assumption that your filter response curves were all produced with a 16-bit stimulus. Let me know if that is not the case.
"The FPGA filters are designed as a trade-off between quality and FPGA resource usage. The goal was to cover most practical real world applications and the filters therefore use internal dynamic corresponding to 32 bit resulting in an overall dynamic range of 26-28 bit depending on filter order and cut-off frequency. You will always lose some bits due to scaling for internal headroom and re-quantization errors. But 26-28 bits is still much better than practically any real world signal, the best A/D converters can not give you more than 24 bit or even less.
When you input an I16 bit signal the dynamic range is internally moved up to use the upper 16 bit of the 26-28 bit range and therefore you do not loose any dynamic in the process. However if you are using the 32 bit mode but only input an I16 signal, you are applying your signal to the lower 16 bit and your output noise will now correspond to at best (26-28)-16 = (10-12 bit). You are not using your dynamic range optimally. It is like inputting a low-level input signal of 10 mV when using the 10 V input range of an acquisition board. To fix 'the problem' you need to prescale/post-scale your signal. Try for example to shift your input signal 16 bit up and your output signal 16 bit down like shown on the attached screenshot."
I hope this explanation helps. The three response curves produced with the scaled data described above are practically indistinguishable. It is possible to modify the 32-bit implementation to use a 64-bit internal path so that it behaves more like a superset of the 16-bit implementation; let me know if you have a use case for this.
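To make the scaling argument concrete, here is a minimal numeric sketch (plain Python rather than LabVIEW G, and a generic first-order fixed-point low-pass rather than NI's actual Butterworth implementation). It compares feeding a 16-bit signal straight into a wide integer filter path against pre-shifting it up 16 bits and post-shifting the result back down, measuring the error against a floating-point reference:

import math

FRAC_BITS = 15                              # coefficient fractional bits (assumed)
ALPHA_Q = int(0.05 * (1 << FRAC_BITS))      # quantized filter coefficient, ~0.05

def fixed_lowpass(samples):
    # y[n] = y[n-1] + alpha * (x[n] - y[n-1]), truncating like integer hardware
    y, out = 0, []
    for x in samples:
        y += (ALPHA_Q * (x - y)) >> FRAC_BITS   # the >> is the re-quantization step
        out.append(y)
    return out

def snr_db(reference, actual):
    err = [a - b for a, b in zip(reference, actual)]
    p_sig = sum(r * r for r in reference) / len(reference)
    p_err = sum(e * e for e in err) / len(err) or 1e-30
    return 10 * math.log10(p_sig / p_err)

N = 4096
x16 = [int(20000 * math.sin(2 * math.pi * 0.001 * n)) for n in range(N)]

# Floating-point reference of the same filter.
ref, y = [], 0.0
for x in x16:
    y += 0.05 * (x - y)
    ref.append(y)

direct = fixed_lowpass(x16)                                          # data left in the low bits
shifted = [v >> 16 for v in fixed_lowpass([x << 16 for x in x16])]   # pre/post-scaled by 16 bits

print("SNR, data in low bits      :", round(snr_db(ref, direct), 1), "dB")
print("SNR, shifted up/down by 16 :", round(snr_db(ref, shifted), 1), "dB")

With the data left in the low bits, the truncation error is a much larger fraction of the signal, which is the (26-28) - 16 bit effect described in the quote; pre/post-shifting recovers most of the lost dynamic range.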
Jim


BW response test.JPG:
http://forums.ni.com/attachments/ni/170/305413/1/BW response test.JPG
JLewis
2008-03-03 15:40:08 UTC
Hi Stu,
The filters expect the input data to be scaled such that it utilizes most of the specified range. The modifications to the example above were mainly to demonstrate the behavior: if your input signal is actually only 16 bits, the recommended course of action is to use the 16-bit filter, not to scale up and use the 32-bit version. In fact, the only difference between the 24-bit and 32-bit implementations is that for 24 bits we take advantage of your promise not to exceed 2^23 input magnitude, so we can scale up internally to give more accurate results while still preventing overflow for steady-state signals. The 32-bit implementation has no room to scale up, because it assumes the input data will fill the entire 32-bit range.
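As a rough illustration of that 2^23 "promise" (the 8-bit up-scale below is an assumed figure for illustration only, not NI's actual internal scale factor), a value kept within 2^23 can be shifted up inside a signed 32-bit path without wrapping, while a larger value cannot:

def to_i32(v):
    # wrap an integer into the signed 32-bit range, like FPGA integer hardware
    v &= 0xFFFFFFFF
    return v - (1 << 32) if v >= (1 << 31) else v

UP_SCALE = 8                                  # hypothetical internal up-scale

for x in (2**23 - 1, 2**23 + 1):              # just inside / just outside the promise
    scaled = to_i32(x << UP_SCALE)
    status = "ok" if scaled == x << UP_SCALE else "overflowed"
    print(f"x = {x}: internally scaled value = {scaled} ({status})")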
It sounds like you have a 32-bit velocity encoder, and your DC value is relatively small? If you need to retain the full 32-bit precision (i.e., scaling the encoder output down to 24 or 16 bits is not an option), then I think you will need a customized 32-bit implementation that uses 64-bit internal paths. I will work on putting an example of this together for you to try.
Jim
JLewis
2008-08-06 21:10:07 UTC
Hi Stu,
 
After some further investigation, I did find a scaling problem that was degrading the DC performance beyond the limitations I discussed above (which still apply). This has been fixed in LabVIEW 8.6 by expanding some internal paths slightly and delaying some scaling operations in order to minimize loss of precision due to underflows.
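The underflow point can be illustrated with a generic fixed-point example (this shows the general principle only, not the actual changes made in 8.6). Scaling a small value down before a multiply throws away bits that a wider intermediate path would have kept:

a, b, k = 3, 12345, 8          # small signal value, coefficient, scale shift

early = (a >> k) * b           # scale first: the small value underflows to 0
late = (a * b) >> k            # multiply in a wider path, then scale down

print("scale-then-multiply :", early)   # 0 -> the signal is lost
print("multiply-then-scale :", late)    # nonzero -> precision preserved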
 
One thing you can do to improve the behavior in LabVIEW 8.5 is to check the "Show configuration terminal" option. The original nonreconfigurable design (from LabVIEW 8.2) uses a modified (and more expensive) implementation for filters with very low cutoff frequencies (less than 0.01 * Fs), and the standard implementation for other cutoffs. With the online reconfigurability feature introduced in 8.5, we need to handle all cutoff frequencies with a single run-time filter architecture, so we used a more accurate generic implementation at the expense of an extra multiplier (this applies only if you choose to show the configuration terminal). The nonreconfigurable implementations were left as-is to maintain compatibility with existing code.
 
Thanks for the feedback!
 
Jim
