I don't know if this thread is still open, or even whether you are still trying this approach, but I can at least contribute something, given that I tried the same thing.
As Ali said .... it's awful! The smallest measurement error in the accelerometers gets amplified by the double integration. And because acceleration constantly rises and falls while walking (with every step), this error accumulates very quickly over time.
Sorry for the bad news. I didn't want to believe it either until I tried it myself ... filtering out the unwanted measurements doesn't work either.
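To get a feeling for how fast it goes wrong, here is a quick back-of-the-envelope simulation (my own illustration, not code from any project): even a tiny constant bias b in the measured acceleration turns into a position error of roughly 0.5 * b * t^2 after the double integration.

```cpp
#include <cstdio>

int main()
{
    const double bias = 0.05;  // a very optimistic 0.05 m/s^2 accelerometer bias
    const double dt = 0.01;    // 100 Hz sampling
    double velocity = 0.0, position = 0.0;

    for (int i = 1; i <= 6000; ++i) {   // simulate 60 seconds of walking
        velocity += bias * dt;          // first integration: bias -> velocity error
        position += velocity * dt;      // second integration: velocity -> position error
        if (i % 1000 == 0)              // report every 10 seconds
            std::printf("t = %2d s  position error = %6.2f m\n", i / 100, position);
    }
    // After just one minute the error is already ~90 m -- useless for positioning.
    return 0;
}
```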
I have a different, perhaps more plausible approach, if you are interested in continuing your project (it's the approach I followed for my computer science dissertation) ... image processing!
You basically follow the theory behind optical mice: optical flow, or what is called visual ego-motion. I implemented the image processing algorithms in Android's NDK; using OpenCV via the NDK simplifies the algorithms a lot. You convert the images to grayscale (to compensate for varying illumination), apply a threshold, enhance the images (to compensate for the motion blur you get while walking), run corner detection (to improve the accuracy of the overall result), and then do template matching, which performs the actual comparison between image frames and estimates the offset in pixels.
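Here is a minimal sketch of that frame-to-frame comparison in C++ with OpenCV (the same API you would call from the NDK). The function name estimateOffset and the patch size are my own choices for illustration, not the actual dissertation code:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Estimate the pixel offset between two consecutive camera frames.
cv::Point2f estimateOffset(const cv::Mat& prevFrame, const cv::Mat& currFrame)
{
    cv::Mat prevGray, currGray;
    cv::cvtColor(prevFrame, prevGray, cv::COLOR_BGR2GRAY); // grayscale reduces illumination sensitivity
    cv::cvtColor(currFrame, currGray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(prevGray, prevGray);                   // normalize brightness between frames
    cv::equalizeHist(currGray, currGray);

    // Take a patch from the center of the previous frame...
    int pw = prevGray.cols / 3, ph = prevGray.rows / 3;
    cv::Rect patchRect((prevGray.cols - pw) / 2, (prevGray.rows - ph) / 2, pw, ph);
    cv::Mat patch = prevGray(patchRect);

    // ...and find where it moved to in the current frame (template matching).
    cv::Mat result;
    cv::matchTemplate(currGray, patch, result, cv::TM_CCOEFF_NORMED);
    cv::Point maxLoc;
    cv::minMaxLoc(result, nullptr, nullptr, nullptr, &maxLoc);

    // The shift of the best match relative to the patch's original position
    // is the frame-to-frame displacement in pixels.
    return cv::Point2f(static_cast<float>(maxLoc.x - patchRect.x),
                       static_cast<float>(maxLoc.y - patchRect.y));
}
```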
Then, through trial and error, you work out how many pixels correspond to what distance, and multiply by that value to convert the pixel offset into actual movement. This works up to a certain speed; the real problem is that the camera images are still too blurry for accurate comparison while walking. This can be improved by adjusting the camera's shutter speed or ISO (I'm still playing with this).
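The pixels-to-distance conversion then reduces to a single scale factor. A minimal sketch, assuming the estimateOffset() function from above (Tracker and metersPerPixel are illustrative names, not the author's code): walk a known distance once, divide it by the accumulated pixel offset, and use that factor from then on.

```cpp
#include <cmath>

struct Tracker {
    double metersPerPixel;   // from calibration, e.g. 10.0 m walked / 25000 px accumulated
    double traveled = 0.0;   // total distance walked so far, in meters

    // Call once per frame with the offset returned by estimateOffset().
    void update(double dxPixels, double dyPixels) {
        traveled += std::hypot(dxPixels, dyPixels) * metersPerPixel;
    }
};
```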
So I hope this helps ... otherwise google for egomotion for real-time applications. You'll eventually land on the right material and find exactly what I just explained. Enjoy :)