------Question for M3031-000005-05KPG------
When reading the voltage output from this sensor, what is the proper way to account for the fact that the output is ratiometric, meaning that if the excitation voltage varies, the output will vary accordingly?
I have assumed that the minimum voltage is (0.1 × excitation voltage) and the maximum voltage is (0.9 × excitation voltage), and since the output is linear I just used the general equation y = mx + b.
I just want to know if this makes sense. This question is similar to another one, but that question did not address how to deal with an excitation voltage that is not exactly 5 V. I did rearrange the equation to solve for pressure, which is what I am after.
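As a sanity check, the rearranged y = mx + b conversion described above might be sketched as follows. The function name and the full-scale pressure value are illustrative assumptions, not datasheet figures:

```python
# Sketch of the y = mx + b conversion, rearranged for pressure.
# Assumes the 10%..90% ratiometric endpoints; FULL_SCALE is a
# placeholder for this sensor's rated pressure range.

FULL_SCALE = 5.0  # hypothetical full-scale pressure (e.g. 5 kPa)

def pressure_from_voltage(v_out: float, v_exc: float) -> float:
    """Convert a ratiometric sensor voltage to pressure.

    Output spans 0.1 * Vexc (zero pressure) to 0.9 * Vexc (full
    scale), so the supply-independent quantity is v_out / v_exc.
    """
    ratio = v_out / v_exc  # normalize out the excitation voltage
    span = 0.9 - 0.1       # 80% of excitation covers full scale
    return (ratio - 0.1) / span * FULL_SCALE

# Mid-scale check: 2.5 V out at 5.0 V excitation reads half scale.
print(pressure_from_voltage(2.5, 5.0))  # 2.5 with FULL_SCALE = 5.0
```

Because the conversion works on the ratio v_out / v_exc rather than on v_out alone, an excitation that sits at, say, 4.97 V instead of 5.00 V drops out of the result, provided it is measured along with the output.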
The most convenient means of interfacing to a ratiometric sensor is usually to use an ADC that's also ratiometric/referenced to the same supply, such that minor supply variations are common to both and cancel out. From there the basic approach would be to subtract the zero-pressure (offset) result from a given reading and multiply what's left by the sensor's sensitivity or gain figure.
The equation mentioned above would seem to encompass this general idea, though things get more complicated depending on how many potential variables a person wants to account for. The offset isn't necessarily going to be ratiometric as well, for example, and things usually tend to shift a bit with temperature.
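The subtract-offset, multiply-by-gain idea with a ratiometric ADC might look like the sketch below. The converter size, zero code, and full-scale pressure are all illustrative assumptions, not datasheet values:

```python
# Sketch of the offset-and-gain approach with a ratiometric ADC:
# because the ADC's reference is the same supply that excites the
# sensor, the raw code is already proportional to Vout/Vexc, so
# supply drift cancels out of the result.

ADC_COUNTS = 4096             # hypothetical 12-bit converter
ZERO_CODE = 0.1 * ADC_COUNTS  # ~10% of range at zero pressure
FULL_SCALE_P = 5.0            # hypothetical full-scale pressure
GAIN = FULL_SCALE_P / (0.8 * ADC_COUNTS)  # pressure per ADC code

def pressure_from_code(code: float) -> float:
    """Subtract the zero-pressure offset, then scale by the gain."""
    return (code - ZERO_CODE) * GAIN

# A mid-range code (50% of the converter) should read half scale.
print(round(pressure_from_code(0.5 * ADC_COUNTS), 6))  # 2.5
```

In practice ZERO_CODE and GAIN would come from measurement or calibration rather than the nominal 10%/80% figures, which is where the offset and temperature caveats above come in.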
Thank you, that makes sense. I am using a LabJack DAQ (https://labjack.com/) and have followed their advice, as listed below, for measuring the voltage output of the sensor, which is why I wrote the equation the way I did.
The equation above is about as good a representation of things as one can make if one's simply going to take the datasheet figures at face value and hope for the best.
If one's planning on calibrating things, however, to clean up some of the ±2% slop in gain and offset allowed by the datasheet, the stability of the excitation voltage is more of a concern than its absolute accuracy. So long as the absolute value doesn't change much after calibration, it doesn't matter much whether the excitation is initially a bit above or below nominal, since it'll get accounted for in the calibration process.
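A two-point calibration of the kind described might be sketched as below. The function name is made up, and the pressures and readings are hypothetical example values, not real measurements:

```python
# Sketch of a two-point calibration: measure the sensor at two
# known pressures, then fit a gain and offset so the datasheet's
# tolerances (and the actual excitation level) drop out.

def two_point_cal(v1: float, p1: float, v2: float, p2: float):
    """Return (gain, offset) such that pressure = gain * v + offset."""
    gain = (p2 - p1) / (v2 - v1)
    offset = p1 - gain * v1
    return gain, offset

# Hypothetical readings: 0.52 V at 0 kPa and 4.47 V at 5 kPa.
gain, offset = two_point_cal(0.52, 0.0, 4.47, 5.0)
print(gain * 2.5 + offset)  # pressure implied by a 2.5 V reading
```

Note that this calibration silently absorbs whatever the excitation voltage happened to be, which is exactly why its post-calibration stability matters more than its absolute accuracy.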
If one's running things from the DAQ's 5 V aux source, then so long as the load on it is constant and the temperature is fairly stable, its stability might be surprisingly good. Something to investigate as a possible simplification, depending on your needs.
At a glance, it looks like the DAQ devices mentioned have fixed-range analog inputs, with options for ±10 V or 0~2.5 V depending on the input chosen. Measuring a 0.5~4.5 V signal directly with one of the wide-range channels would take about 2 bits of resolution off the top. One may want to consider dividing the sensor signal by 2 and using one of the narrow-range channels instead.
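For a rough feel of the numbers, the resolution given up by a signal that doesn't fill a fixed input range works out to log2(input span / signal span); a quick sketch:

```python
# Quick check of the resolution trade-off described above: how many
# bits of a fixed-range input does a given signal span actually use?
import math

def bits_lost(signal_span: float, input_span: float) -> float:
    """Bits of resolution given up by not filling the input range."""
    return math.log2(input_span / signal_span)

print(round(bits_lost(4.0, 20.0), 2))  # 0.5..4.5 V on +/-10 V -> 2.32
print(round(bits_lost(2.0, 2.5), 2))   # halved, on 0..2.5 V -> 0.32
```

Halving the signal into the 0~2.5 V range recovers roughly 2 of those bits, at the cost of the divider adding its own tolerance and temperature drift to the chain.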