Accuracy of Gyroscope

Hello,

I’m working on a class project and we’re looking to attach a gyroscope to a spoke on a wheel. It is important for us to be able to accurately know where in the wheel’s 360-degree rotation the sensor currently is.

When looking at gyroscopes on DigiKey, we see the sensitivity measurement in LSB/°/s. My understanding is that this tells us how fast the sensor can register a change in its position, but how would this tell me how accurately the sensor can report its current position?

As an example, how accurate is this sensor? https://www.digikey.com/en/products/detail/analog-devices-inc/ADXRS290BCEZ/5055894

Thank You,

Garrett

Hello Garrett!

Welcome to the forum!

The description we have for that parameter is “for digital-output devices, the typical change in output code (number of least significant bits) for a 1 degree per second change in angular rate input around the sensed axis.”
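In other words, the output code scales linearly with angular rate. As a rough illustration (the 200 LSB/°/s figure below is an assumed example, not a quote of this particular part’s specification), converting a raw reading back into °/s is just a division:

```python
# Minimal sketch: converting a raw gyro output code into angular rate.
# The sensitivity below is an assumed, illustrative figure -- check the
# datasheet value for whichever part you actually choose.

SENSITIVITY_LSB_PER_DPS = 200.0  # assumed sensitivity, LSB per (°/s)

def counts_to_dps(raw_counts):
    """Convert a signed raw output code into degrees per second."""
    return raw_counts / SENSITIVITY_LSB_PER_DPS

print(counts_to_dps(400))   # 2.0 °/s
print(counts_to_dps(-100))  # -0.5 °/s
```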

Greetings,

A gyroscope measures angular velocity (rotational speed) in the same sense that a speedometer on a car measures linear speed. One cannot simply read a gyro once and determine rotational position, for the same reason that one can’t glance at a speedometer and determine how far down the road one has traveled.

Extracting a measurement of position using a sensor that measures velocity requires that the velocity signal be integrated with respect to time. The gyro sensitivity figure is important for doing that, yes, but perhaps more important for a task like this is the amount of offset present in the signal. For the example part given, one can find this line in the datasheet:

[Datasheet excerpt: null offset specification, typically within ±9°/s]
What this is telling you is that when the part is stationary, and therefore would ideally output a signal indicating zero angular velocity, most parts will in reality indicate a velocity that’s within 9°/s of zero. This is a “typical” value, meaning it’s not guaranteed, and there’s a possibility that it could be more.

The amount of error that results if one simply takes the sensor output at face value, then, depends on what the actual offset ends up being and on the amount of time over which one integrates (adds up) that error. If the offset is 9°/s and one measures over a 1-second period, the resulting position error would be 9°. If measuring for 5 seconds, the error would be 45°.
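Put in code form, here is a minimal sketch of that integration, assuming a constant 9°/s offset and a 100 Hz sample rate (both numbers are just for illustration):

```python
# Rough sketch of how a constant rate offset becomes a growing angle error.
# The 9 °/s offset and 100 Hz sample rate are illustrative assumptions.

OFFSET_DPS = 9.0   # assumed stationary ("null") output, °/s
DT = 0.01          # sample period in seconds (100 Hz)

def integrate_angle(rate_samples_dps, dt=DT):
    """Sum rate samples times the sample period to get an angle in degrees."""
    angle_deg = 0.0
    for rate in rate_samples_dps:
        angle_deg += rate * dt
    return angle_deg

# The wheel is actually stationary (true rate 0 °/s), but the sensor
# reports its offset, so the angle estimate drifts:
print(integrate_angle([OFFSET_DPS] * 100))  # ~9° of error after 1 s
print(integrate_angle([OFFSET_DPS] * 500))  # ~45° of error after 5 s
```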

A partial answer to that is calibration: measure what the error is at rest, and subtract that value from subsequent measurements. It’s only a partial answer, because it only solves the problem if the offset never changes. Lots of things can cause such changes, but a big one is temperature. Further on down the datasheet, one sees these charts:

[Datasheet charts: null offset vs. temperature for 16 devices]
These represent how the offset of 16 different devices changed with temperature. Some do better than others, but the moral of the story is that a shift of a few °C in temperature is probably going to shift the offset by some tenths of a °/s, which is probably going to be non-trivial if you want to come up with a position estimate over the course of a few minutes.
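For what it’s worth, here is a rough sketch of what that at-rest calibration might look like (read_dps is a hypothetical function returning the gyro rate in °/s; the sample count and timing are arbitrary choices for illustration):

```python
# Sketch of the "partial answer": estimate the offset while the wheel is
# known to be at rest, then subtract it from every subsequent reading.
# read_dps is a hypothetical callable returning the gyro rate in °/s.
import time

def estimate_bias_dps(read_dps, n_samples=500, dt=0.01):
    """Average readings taken while stationary to estimate the offset in °/s."""
    total = 0.0
    for _ in range(n_samples):
        total += read_dps()
        time.sleep(dt)
    return total / n_samples

def corrected_rate_dps(read_dps, bias_dps):
    """Subtract the stored bias from a fresh reading."""
    return read_dps() - bias_dps
```

The stored bias is only as good as the assumption that the offset hasn’t drifted (with temperature or anything else) since it was measured.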

Long story short, getting position measurements using inertial sensors is not the easiest trick in the world. It can be done, but the good sensors cost $$$, there’s a lot of math, computation, & calibration involved, and the results are still probably going to be pretty crummy compared to what you get if you have the option of using a different sensor that’s capable of measuring position directly.
