C#, Xamarin

Sensor Fusion: Rolling your own


Last time, I wrote about sensors and sensor fusion. Over the last couple of years at Linknode, we’ve gained considerable practical experience with the sensors that are built into modern tablets. This blog post is a fairly technical explanation of some of that knowledge and understanding, and why we ended up doing what we did – if you don’t like maths, I suggest you skip this blog post.

I mentioned last time that most sensor fusion algorithms have been written with gaming in mind. Whatever anyone says, games are the things that push the boundaries of the hardware on mobile devices. There are two needs for speed on mobile devices these days – animating the transitions within apps and providing detailed 3D experiences within games. Mobile devices are not (yet) used for number crunching or other processor-intensive operations.

Problems

We have seen some specific problems with manufacturers' implementations of sensor fusion. A lot of these algorithms prioritise rate of change and responsiveness over absolute accuracy. With VentusAR, accuracy comes first, as the visualisations we produce could end up under expert scrutiny. One of our early clients said that they would accept a tolerance of +/- 2° from the compass (and much less in the other sensors). We provide calibration tools to allow the sensors to get within a tolerance of 0.1°. We found two main problems with the default sensor fusion algorithms:

  • Jitter – the compass/fusion output on some Android tablets would jitter unacceptably. The terrain model would jump by +/- 10° while the device was sitting on a table.
  • Inaccuracy – the wireline could be a few degrees out when trying to align it with the real-world terrain. This could sometimes be corrected by rotating the device in 3D and then pointing it at the view again.

Rolling your own

Rolling your own sensor fusion isn't too hard. We started with some basic requirements:

  • Smoothing – the sensor fusion algorithm should produce a smooth output. If the device is sitting stationary on a table, the output should not “jitter”. For example, when using My View, the terrain should be accurate and should not jitter
  • Accurate – prioritise the input from the compass over all other input – we should be using the other sensors to smooth and enhance the compass, not using the compass to provide stability to the gyroscope
  • Fast – the sensor data is read from the device either 50 or 60 times a second (50 on Android, 60 on iOS). This means the CPU has roughly 16 to 20ms to process each packet of sensor data to ensure that we are keeping up (a rough timing sketch follows this list).
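
To make that last requirement concrete, here is a minimal sketch of checking a single fusion update against the per-sample time budget. The class and method names are purely illustrative, not part of any SDK or of our production code:

    using System;
    using System.Diagnostics;

    // Illustrative only: checks that one fusion update fits inside the
    // per-sample budget (20 ms at 50 Hz on Android, ~16.7 ms at 60 Hz on iOS).
    public static class FusionTiming
    {
        public static void MeasureUpdate(Action processSample, double sampleRateHz)
        {
            double budgetMs = 1000.0 / sampleRateHz;   // time available per sample
            var stopwatch = Stopwatch.StartNew();

            processSample();                            // run one fusion update

            stopwatch.Stop();
            if (stopwatch.Elapsed.TotalMilliseconds > budgetMs)
            {
                Debug.WriteLine($"Fusion update took {stopwatch.Elapsed.TotalMilliseconds:F1} ms, " +
                                $"over the {budgetMs:F1} ms budget");
            }
        }
    }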

With these requirements in mind, we set about writing our own sensor fusion algorithm. The pseudocode for the algorithm we ended up with looks a little like this:

  • Calculate a ‘smoothed heading’
    • Calculate change between current heading value and last heading value
    • If change > THRESHOLD
      • smoothedHeading = lastHeading + LARGE_OFFSET
    • Else
      • smoothedHeading = lastHeading + SMALL_OFFSET
  • Maximise accuracy of the compass component
    • Remove the current heading component from the output of the manufacturer's sensor fusion
    • Multiply by smoothed heading from above
  • Normalize

When smoothing, if there is a big change in magnetic heading (i.e. if the change is greater than THRESHOLD), the smoothed heading will respond to it quickly; small changes, however, are smoothed over by the small offset. The values of THRESHOLD, LARGE_OFFSET and SMALL_OFFSET are device specific and were found by running appropriate testing on each device.
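
To make the smoothing step more concrete, below is a simplified C# sketch of it. This is not our production code: the constant values, the wrap-around handling and the class name are illustrative, and the “maximise accuracy” step (removing the heading component from the manufacturer's quaternion and multiplying by the smoothed heading) is not shown.

    using System;

    // Simplified sketch of the heading-smoothing step described above.
    // The constant values are illustrative; in practice THRESHOLD,
    // LARGE_OFFSET and SMALL_OFFSET are device specific and found by testing.
    public class HeadingSmoother
    {
        private const double THRESHOLD = 5.0;     // degrees
        private const double LARGE_OFFSET = 2.0;  // degrees per sample
        private const double SMALL_OFFSET = 0.2;  // degrees per sample

        private double lastHeading;

        public double Smooth(double currentHeading)
        {
            // Signed change, wrapped into -180..180 so that 359° -> 1° counts as +2°.
            double change = ((currentHeading - lastHeading + 540.0) % 360.0) - 180.0;

            // Large changes are followed quickly; small changes are damped.
            double step = Math.Abs(change) > THRESHOLD ? LARGE_OFFSET : SMALL_OFFSET;
            double smoothedHeading = lastHeading + Math.Sign(change) * step;

            // Normalise back into the 0..360 range.
            smoothedHeading = (smoothedHeading % 360.0 + 360.0) % 360.0;

            lastHeading = smoothedHeading;
            return smoothedHeading;
        }
    }

The smoothed heading is then recombined with the rest of the orientation from the manufacturer's sensor fusion, as described in the pseudocode above.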

Results

To give an example of how the smoothing function of our sensor fusion works, I exported the raw data to Excel and did some analysis on it. The graph below plots the input (raw heading) next to the output of the sensor fusion: the input to the sensor fusion classes is shown in blue and the output in orange. The x axis shows the number of data points (we're receiving approximately 50 per second, so 2000 data points cover about 40 seconds). The y axis shows the heading of the device (values between 0° and 360°).

40 seconds of smoothing data from the Linknode sensor fusion implementation

This shows our smoothing functions working correctly:

  • The peaks and troughs on the graph are less extreme
  • Curves are smoother so there is less jitter
  • The peak is delayed by approximately 15 samples. At roughly 50 samples per second this equates to approximately 0.3s, which we have decided is acceptable performance.
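
For completeness, producing the data behind the graph is just a matter of logging the raw and smoothed heading for every sample. A minimal sketch of that (the file path handling and column names are illustrative, not our actual logging code) might look like this:

    using System;
    using System.Globalization;
    using System.IO;

    // Illustrative: append each sample's raw and smoothed heading to a CSV
    // file that can then be opened in Excel for analysis.
    public sealed class HeadingLogger : IDisposable
    {
        private readonly StreamWriter writer;

        public HeadingLogger(string path)
        {
            writer = new StreamWriter(path);
            writer.WriteLine("sample,rawHeading,smoothedHeading");
        }

        public void Log(int sample, double rawHeading, double smoothedHeading)
        {
            writer.WriteLine(string.Format(CultureInfo.InvariantCulture,
                "{0},{1:F2},{2:F2}", sample, rawHeading, smoothedHeading));
        }

        public void Dispose() => writer.Dispose();
    }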

Comparison
Below are two crops of the My View function of VentusAR. The left-hand video (red wireline) shows the jitter as seen in v2.1, while the right-hand video (black wireline) shows much less jitter in v2.2.

Our custom sensor fusion has been a considerable piece of work at Linknode, and we hope this write-up is useful for other people who want to understand the way the sensors work on these types of devices.