 Research Article
 Open Access
A graph optimization approach for motion estimation using inertial measurement unit data
ROBOMECH Journal volume 5, Article number: 14 (2018)
Abstract
This study presents a novel approach for processing motion data from a six-degree-of-freedom inertial measurement unit (IMU). Trajectory estimation through double integration of acceleration measurements results in the generation and accumulation of multiple errors. Existing IMU-based measurement methods often use constrained initial and final states to resolve these errors. The constraints on the initial and final states lead to a uniform distribution of the accumulated errors throughout the calculated trajectory so that they cancel each other. We develop a generalized method that can incorporate constraints from measurements of intermediate states. The proposed approach is inspired by graph-based simultaneous localization and mapping from robotics research. We tested the proposed method with simulated and actual IMU data and found that it estimates trajectories more accurately than conventional methods, at an acceptable increase in computational cost.
Introduction
Background
Measuring athlete motion is an important aspect of sports science. The typical instruments employed for this purpose include high-speed video cameras and infrared motion capture cameras. However, these instruments are expensive and require time and effort for installation.
Recently, small and inexpensive inertial measurement units (IMUs) have become available, making sports motion measurements accessible to a wide range of people. For example, measurement devices that can be attached to a golf club [1, 2] or a tennis racket [3] have been developed.
Considering these applications, this study seeks to improve the data-processing algorithm for extracting trajectories from IMU data. The IMU employed herein is assumed to comprise a three-axis accelerometer and a three-axis gyroscope. We aim to estimate a time series of the attitude, velocity, and position values with a time series of acceleration and angular velocity measurements.
Main issues to be addressed
The integration of IMU measurements is the simplest way to extract a trajectory from IMU data. Theoretically, attitude can be calculated by integrating the angular velocity, velocity can be calculated by integrating the acceleration, and position can be calculated by the double integration of the acceleration. However, position estimation by simple double integration amplifies the error [4]. Even a small error in the integration of the acceleration can lead to a bias drift in the velocity values, and the integration of biased velocity causes positional errors to grow rapidly. Moreover, if the measurement is performed over a long period, the attitude and velocity estimates are affected by bias drift in the angular velocity and acceleration values. Thus, it is necessary to correct the accumulated errors.
Related work
Human gait analysis is a well-studied topic in IMU-based motion estimation. Several methods that can precisely track human gait have been developed [5, 6]. Error correction methods for gait data rely on the assumption that the foot velocity is zero during the stance phase, which is the period during which the foot is in contact with the ground. The errors accumulated in the velocity data can be eliminated using this assumption [6, 7].
A method to correct motion errors without such zero-velocity assumptions needs to be developed because these assumptions are not applicable to many sports motions. Sagawa et al. developed a method for measuring the pitching motion used in baseball [8]. They constrained the target motion with known initial and final states, and the accumulated errors are corrected with respect to these constraints.
However, this method is limited in that error correction is performed with only one constraint, thereby returning inaccurate trajectories for motions that extend over a long period of time. Therefore, a more powerful error correction method is needed to measure longer sets of motions, such as a rally in a tennis match.
Contribution
Herein, we propose a novel error correction method that can incorporate multiple constraints. While the relation between the initial and final states is the only constraint employed in the conventional motion estimation method, the proposed method employs generalized constraints that can represent the difference between any two known points in time. The problem is formulated as a least-squares fit of the constraint errors. We also introduce steps that reduce the computational complexity of the algorithm to keep the computations practical. We quantify the performance of the proposed algorithm with simulated and actual data.
IMUbased motion estimation: review
Problem definition
We consider the problem of estimating a time series of the states of an IMU, where the state at time step \(i\) is \(\varvec{x}_i = [\varvec{u}_i^\top\ \varvec{v}_i^\top\ \varvec{t}_i^\top]^\top.\)
Here, \(\varvec{u}, \varvec{v},\) and \(\varvec{t}\) denote the attitude, velocity, and position, respectively. We assume that the initial state of the IMU, \(\varvec{x}_0\), is known and that the IMU collects threedimensional (3D) acceleration and 3D angular velocity over time \(\{\varvec{a}_i, \varvec{\omega }_i \}_{i=1}^{n}\).
Herein, we employ a rotation vector [9] to represent the 3D attitude \(\varvec{u}\). The definition of the rotation vector representation and the operations performed with it are given in the Appendix.
Naive integration
The simplest method for IMU state estimation is the naive integration of the measured data. We estimate the attitude by integrating 3D angular velocity data obtained by the IMU. Assuming that the time step \(\Delta t\) is small, the attitude at each time step can be incrementally calculated as follows:
\(\varvec{u}_{i} = \varvec{u}_{i-1} * \left(\varvec{\omega }_i \Delta t\right),\)
where the \(*\) operator represents the concatenation of two rotation vectors.
Velocity can be estimated through the integration of the 3D acceleration data. Note that we need to compensate for the gravitational acceleration, which is denoted by \(\varvec{g}\), as follows:
\(\varvec{v}_{i} = \varvec{v}_{i-1} + \left(\varvec{R}_i \varvec{a}_i - \varvec{g}\right) \Delta t,\)
where \(\varvec{R}_i\) represents the rotation matrix representation of the attitude \(\varvec{u}_{i}\) and \(\varvec{R}_i\varvec{a}_i\) represents the acceleration transformed into the world frame.
Position can be estimated through the integration of velocity data as follows:
\(\varvec{t}_{i} = \varvec{t}_{i-1} + \varvec{v}_i \Delta t.\)
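The naive integration above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: it keeps the attitude as a rotation matrix internally (the paper works with rotation vectors), and the gravity vector and sign convention (`g = [0, 0, 9.81]`, accelerometer reporting specific force) are assumptions.

```python
import numpy as np

def rodrigues(w):
    """Rotation matrix for a rotation vector w (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def naive_integration(acc, gyro, dt, g=np.array([0.0, 0.0, 9.81])):
    """Integrate body-frame acceleration/angular velocity (n x 3 arrays)
    into attitude, velocity, and position time series in the world frame."""
    R = np.eye(3)                # initial attitude assumed identity
    v = np.zeros(3)
    t = np.zeros(3)
    Rs, vs, ts = [], [], []
    for a_i, w_i in zip(acc, gyro):
        R = R @ rodrigues(w_i * dt)          # attitude update (body-frame increment)
        v = v + (R @ a_i - g) * dt           # gravity-compensated velocity update
        t = t + v * dt                       # position update
        Rs.append(R)
        vs.append(v.copy())
        ts.append(t.copy())
    return Rs, vs, ts
```

For a stationary IMU whose accelerometer reads exactly `[0, 0, 9.81]`, the integrated velocity and position stay at zero, which is a quick sanity check of the gravity compensation.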
Sagawa et al.’s error correction method
The aforementioned integration typically leads to significant error accumulation. We now review a method proposed by Sagawa et al. [8] for correcting such errors.
This method assumes that the initial and final states are known, so the relative difference between the initial and final states is used as a constraint on trajectory reconstruction. The estimated time-series states are corrected by uniformly distributing the accumulated errors along the time sequence so that the error at the final state is canceled out.
This method uses the following steps:

1. Initial estimates of attitude are calculated via simple integration. The accumulated errors in these attitude values are corrected using the constraint.
2. Velocities are estimated via integration using the acceleration data and the corrected attitudes. The accumulated errors are again corrected using the constraint.
3. Positions are estimated through the integration of the corrected velocities, and these estimations are corrected with respect to the constraint.
Although the algorithm in the original paper [8] was described using the rotation matrix representation, we will detail the steps in rotation vector notation for easy comparison with the proposed algorithm.
Attitude correction
The attitudes estimated via simple integration are denoted as \(\{ \tilde{\varvec{u}}_i \}^n_{i=1}\), and the known final attitude is denoted by \({\varvec{u}}_n^{\mathrm {GT}}\). The accumulated attitude error can be expressed as a single rotation:
\(\varvec{u}^{\mathrm {E}} = \left(-\tilde{\varvec{u}}_n\right) * {\varvec{u}}_n^{\mathrm {GT}}.\)
Attitude error correction is accomplished by uniformly distributing the error according to the time sequence as follows:
\(\hat{\varvec{u}}_i = \tilde{\varvec{u}}_i * \left(\tfrac{i}{n}\, \varvec{u}^{\mathrm {E}}\right).\)
Velocity correction
The velocity error is corrected so that the estimated final velocity \(\tilde{\varvec{v}}_n\) matches the given final velocity \({\varvec{v}}_n^{\mathrm {GT}}\). The accumulated error is assumed to be caused by a constant acceleration bias, \({\varvec{a}}^{\mathrm {E}}\), which is calculated as follows:
\(\varvec{a}^{\mathrm {E}} = \frac{\tilde{\varvec{v}}_n - \varvec{v}_n^{\mathrm {GT}}}{n \Delta t}.\)
The corrected velocity sequence \(\{\hat{\varvec{v}}_i\}_{i=1}^n\) is then calculated as follows:
\(\hat{\varvec{v}}_i = \tilde{\varvec{v}}_i - i \Delta t\, \varvec{a}^{\mathrm {E}}.\)
Position correction
Although the position correction step was not detailed in the original paper [8], we assume that Sagawa et al. repeated the above process. The difference between the final position estimated via simple integration, \(\tilde{\varvec{t}}_n\), and the given final position \({\varvec{t}}_n^{\mathrm {GT}}\) is assumed to be caused by a constant velocity bias, \({\varvec{v}}^{\mathrm {E}}\), which satisfies the following relation:
\(\varvec{v}^{\mathrm {E}} = \frac{\tilde{\varvec{t}}_n - \varvec{t}_n^{\mathrm {GT}}}{n \Delta t}.\)
The corrected position sequence \(\{\hat{\varvec{t}}_i\}_{i=1}^n\) is then calculated as follows:
\(\hat{\varvec{t}}_i = \tilde{\varvec{t}}_i - i \Delta t\, \varvec{v}^{\mathrm {E}}.\)
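The constant-bias correction applied to the velocity and position sequences can be sketched as a single reusable routine. This is a minimal sketch under the assumption that the accumulated error grows linearly with the step index, as described above; the exact indexing convention of the original paper is not shown here.

```python
import numpy as np

def correct_by_constant_bias(series, final_true, dt):
    """Sagawa-style correction of an integrated time series.

    Assumes the accumulated error was caused by a constant-rate bias:
    the bias is estimated from the final-state mismatch and subtracted
    proportionally to elapsed time.

    series: (n, 3) integrated estimates (velocity or position)
    final_true: known true final value (3,)
    dt: sampling interval in seconds
    """
    series = np.asarray(series, dtype=float)
    n = len(series)
    bias = (series[-1] - final_true) / (n * dt)   # constant bias estimate
    elapsed = (np.arange(1, n + 1) * dt)[:, None]  # time elapsed at each step
    return series - bias * elapsed
```

After this correction the final sample matches the given true final value exactly, while intermediate samples are shifted in proportion to elapsed time.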
Proposed method
The aforementioned error correction employed a single constraint. Thus, we extend the problem to multiple constraints.
We treat the problem as an optimization of the graphical model shown in Fig. 1. The state at each time is modeled as a graph node. Constraints representing relations between two nodes are modeled as graph edges.
Objective function
We define a vector containing all time-series state parameters as follows:
\(\varvec{x} = [\varvec{x}_1^\top\ \varvec{x}_2^\top\ \cdots\ \varvec{x}_n^\top]^\top.\)
This vector is used as an objective variable in our optimization problem.
Next, given a set of constraints, we aim to find the optimal vector \(\varvec{x}^{*}\). Constraint \(\varvec{c}_{ij}\) is a nine-dimensional vector representing the state of \(\varvec{x}_j\) relative to \(\varvec{x}_i\), which is obtained from IMU measurements and prior knowledge about the target motion. The problem is formulated as the maximization of the posterior density as follows:
Assuming that the errors in the constraints are statistically independent of each other and Gaussian, we can express the posterior density as follows:
where \(f(\varvec{x} \mid \varvec{\mu }, \varvec{\Sigma })\) is the probability density function of the normal distribution \(\mathcal {N}(\varvec{\mu }, \varvec{\Sigma })\) and \(\varvec{e}_{ij}(\varvec{x})\) denotes the error of the constraint \(\varvec{c}_{ij}\).
By taking the logarithm of Eq. (6), the optimization problem can be rewritten as the minimization of a sum of squared constraint errors:
\(\varvec{x}^{*} = \mathop{\mathrm{argmin}}_{\varvec{x}} F(\varvec{x}), \qquad F(\varvec{x}) = \sum_{\langle i,j \rangle} \varvec{e}_{ij}(\varvec{x})^{\top} \varvec{\Sigma }_{ij}^{-1}\, \varvec{e}_{ij}(\varvec{x}).\)
Constraints
Each component (attitude, velocity, and position) in the constraint \(\varvec{c}_{ij}\) is denoted as \(\varvec{u}^{(ij)}, \varvec{v}^{(ij)}\), and \(\varvec{t}^{(ij)}\), respectively. Two types of constraints are employed: constraints between consecutive nodes (\(j = i + 1\)) and constraints between non-sequential nodes (loop-closing constraints). The former type can be calculated using IMU measurements as follows:
In contrast, the non-sequential constraints can be obtained from prior knowledge about the motion or observations with external sensors. The single constraint used in Sagawa et al.'s method is given in this form. For example, a constraint that the initial (time \(t = 0\)) and final (time \(t = n\)) states are identical can be expressed as follows:
\(\varvec{c}_{0n} = \varvec{0}.\)
Error function
We define the error function as follows:
The error function returns a zero vector when the state of node j relative to node i matches \(\varvec{c}_{ij}\).
Optimization
We employ the Gauss–Newton method to minimize the function \(F\) in Eq. (8). This is an iterative method for nonlinear least-squares problems. We initialize the objective variable with an initial estimate, \(\varvec{\hat{x}}\), and iterate over the following three steps until convergence:

1. Linearization: Calculate \(\bar{F}_{\varvec{\hat{x}}}(\varvec{d})\), i.e., the linear approximation of \(F(\varvec{x})\) at \(\varvec{x} = \varvec{\hat{x}}\).
2. Minimization: Determine the update vector \(\varvec{d}\), which minimizes \(\bar{F}_{\varvec{\hat{x}}}(\varvec{d})\).
3. Update: Update the current estimate \(\varvec{\hat{x}}\) using \(\varvec{d}\).
Linearization
In the first step, we obtain the linearized objective function \(\bar{F}_{\varvec{\hat{x}}}(\varvec{d})\) as follows:
\(\bar{F}_{\varvec{\hat{x}}}(\varvec{d}) = \sum_{\langle i,j \rangle} \left(\varvec{e}_{ij}(\varvec{\hat{x}}) + \varvec{J}_{ij}\varvec{d}\right)^{\top} \varvec{\Sigma }_{ij}^{-1} \left(\varvec{e}_{ij}(\varvec{\hat{x}}) + \varvec{J}_{ij}\varvec{d}\right).\)
Here, \(\varvec{J}_{ij}\) is the Jacobian matrix of \(\varvec{e}_{ij}(\varvec{x})\) at \(\varvec{x} = \varvec{\hat{x}}\).
The linearized objective function \(\bar{F}_{\varvec{\hat{x}}}(\varvec{d})\) can be expanded into a quadratic function as follows:
\(\bar{F}_{\varvec{\hat{x}}}(\varvec{d}) = \varvec{d}^{\top} \varvec{H} \varvec{d} + 2\varvec{b}^{\top} \varvec{d} + c, \qquad \varvec{H} = \sum_{\langle i,j \rangle} \varvec{J}_{ij}^{\top} \varvec{\Sigma }_{ij}^{-1} \varvec{J}_{ij}, \qquad \varvec{b} = \sum_{\langle i,j \rangle} \varvec{J}_{ij}^{\top} \varvec{\Sigma }_{ij}^{-1} \varvec{e}_{ij}(\varvec{\hat{x}}).\)
Minimization
By setting the derivative of Eq. (10) to zero, the minimizer \(\hat{\varvec{d}}\) is obtained by solving the following linear equation:
\(\varvec{H} \hat{\varvec{d}} = -\varvec{b}.\)
\(\varvec{H}\) is a \(9n \times 9n\) matrix, and it grows rapidly as the number of nodes increases. Therefore, naive methods, including Gauss–Seidel minimization, require a large amount of computation time. Since \(\varvec{H}\) is symmetric, positive-definite, and sparse (i.e., the number of loop-closing constraints is small), the linear system can be efficiently solved via sparse Cholesky decomposition or the preconditioned conjugate gradient method [10, 11].
Update
Finally, using the obtained minimizer \(\hat{\varvec{d}}\), we update the current estimates as follows:
\(\varvec{\hat{x}} \leftarrow \varvec{\hat{x}} + \hat{\varvec{d}}.\)
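The three-step loop above can be illustrated with a generic Gauss–Newton routine applied to a toy nonlinear least-squares problem. This sketch is not the paper's solver: it uses forward-difference numerical Jacobians and a dense solve with unit information matrices, whereas the paper uses analytic Jacobians and exploits the sparsity of \(\varvec{H}\).

```python
import numpy as np

def gauss_newton(residual, x0, iters=20, eps=1e-6):
    """Generic Gauss-Newton: linearize the residuals, solve H d = -b, update.

    residual: function mapping a parameter vector to a residual vector
    x0: initial estimate
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        e = residual(x)
        # Forward-difference Jacobian (the paper uses analytic Jacobians).
        J = np.empty((len(e), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - e) / eps
        H = J.T @ J          # Gauss-Newton approximation of the Hessian
        b = J.T @ e
        # Dense solve for clarity; a sparse Cholesky factorization or
        # preconditioned CG would be used for the large sparse H in the paper.
        d = np.linalg.solve(H + 1e-12 * np.eye(len(x)), -b)
        x = x + d
        if np.linalg.norm(d) < 1e-10:
            break
    return x
```

For example, minimizing the residuals \((x_0^2 - 2,\ x_1 - 3)\) converges to \((\sqrt{2},\ 3)\) in a few iterations.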
Relevance to graph-based simultaneous localization and mapping
The formulation can be considered an extension of graph-based simultaneous localization and mapping (SLAM) [12, 13]. Graph-based SLAM produces attitude and position estimates, and the proposed method extends the state vector with velocity as an extra degree of freedom.
Improving initial estimates
The computational cost of the proposed method is higher than that of graph-based SLAM methods with the same number of nodes and constraints because \(\varvec{H}\) becomes larger. Therefore, to achieve practical processing times, we employ the initialization technique described below.
The positions estimated via simple integration are typically far from the optimum value. If we begin optimization with such poor initial estimates, the optimization will require a large amount of time to converge.
Therefore, we improve the initial estimates by solving a subproblem. The subproblem employs the same graph-based formulation as the main problem but optimizes only the attitude and velocity estimates. More specifically,
\(\varvec{x}' = [\varvec{u}_1^\top\ \varvec{v}_1^\top\ \cdots\ \varvec{u}_n^\top\ \varvec{v}_n^\top]^\top\)
is used as the objective variable. The attitude and velocity components of the constraints are also extracted and employed.
We initialize the position estimates for the main problem by integrating the attitudes and velocities returned by subproblem optimization, thereby reducing the iterations required for convergence in the main optimization problem.
Results
Simulation experiments
Experiments with simulated data were conducted as follows. IMUSim [14] was employed to generate 100 different random trajectories with ideal 100-Hz IMU measurements. The duration of each generated trajectory was 3 s.
To improve the realism of the simulation, artificial white noise was added to the ideal measurements. The standard deviations of the noise terms were 0.1 \(\mathrm {m/s^2}\) for acceleration and 0.3°/s for angular velocity. Bias errors of 0.2°/s were also added to the angular velocity data.
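The noise model described above can be reproduced with a few lines of numpy. The noise magnitudes follow the text; the axis of the gyro bias is an assumption, since only its magnitude (0.2°/s) is stated.

```python
import numpy as np

def add_imu_noise(acc, gyro, rng=None):
    """Add white noise and a constant gyro bias to ideal IMU measurements.

    Noise levels follow the simulation settings in the text:
    0.1 m/s^2 (accelerometer) and 0.3 deg/s (gyroscope) white noise,
    plus a 0.2 deg/s constant gyro bias (bias axis assumed here).
    acc, gyro: (n, 3) arrays in m/s^2 and rad/s.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    acc_noisy = acc + rng.normal(0.0, 0.1, acc.shape)
    gyro_std = np.deg2rad(0.3)
    bias = np.deg2rad(0.2) * np.array([1.0, 0.0, 0.0])  # assumed bias axis
    gyro_noisy = gyro + rng.normal(0.0, gyro_std, gyro.shape) + bias
    return acc_noisy, gyro_noisy
```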
First, we tested the processing methods for motions with only one constraint. The estimation accuracy of the proposed method was compared with those of the naive integration and Sagawa et al.’s conventional method [8].
Examples of the estimated trajectories are shown in Fig. 2, and the RMS errors of each method are summarized in Fig. 3. Regarding position and velocity estimation, the proposed method returned errors almost the same as those of the other methods, which makes sense because both methods used the same information.
However, the proposed method exhibited significantly smaller attitude errors than Sagawa et al.'s method (\(p < 0.001\), Wilcoxon signed-rank test). Typical attitude estimation results are plotted in Fig. 4, which was generated using the same motion as shown in Fig. 2. This plot shows that both correction methods accurately estimated the final attitude; however, Sagawa et al.'s method exhibited larger errors for intermediate states. This is because the method uniformly distributes the attitude errors along a single rotation axis, so the correction does not necessarily match the direction of the errors.
Next, we tested the method with multiple constraints. Several observation points were chosen in the simulated data sequences, and the true IMU states at these points were used as additional constraints. The relation between the number of constraints used and the estimation accuracy is summarized in Fig. 5. This figure clearly shows that the estimation accuracy improves as the number of constraints increases.
Trajectory estimation using real IMU data
The accuracy of trajectory estimation was evaluated using real IMU data collected with an Xsens MTi-3 IMU at 100 Hz. The measured motion is shown in Fig. 6. The IMU was moved to the four vertices of a 0.3 m \(\times\) 0.2 m rectangle and returned to the initial position and orientation. The motion lasted approximately 7 s. To calibrate the IMU's acceleration and angular velocity biases, approximately 3.0 s of motionless time was included at the beginning of the motion.
We again compared the proposed method and Sagawa et al.'s method [8] in terms of estimation accuracy. Both methods were provided the same constraint, i.e., that the initial and final states were identical. The proposed method additionally employed extra constraints defined by the relative positions of the three vertices and the assumption that the device stops moving at these vertices. The times at which the IMU visited the vertices were determined from the acceleration spikes caused by collisions with the table surface.
The estimated 3D trajectories were projected onto a two-dimensional plane (Fig. 7) for comparison with the reference trajectory. The Fréchet distance (a similarity measure) between the estimated and reference trajectories was employed for measuring errors. The errors were 0.062 m for Sagawa et al.'s method and 0.022 m for the proposed method.
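The discrete Fréchet distance used for this comparison can be computed with a standard dynamic program (the Eiter–Mannila algorithm); the text does not specify the authors' implementation, so this is a self-contained sketch of the usual discrete variant.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two polylines P and Q (n x d arrays).

    Dynamic programming over the coupling matrix: ca[i, j] is the minimal
    'leash length' needed to traverse P[:i+1] and Q[:j+1] in order.
    """
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):                       # first column: advance along P only
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):                       # first row: advance along Q only
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]
```

For two parallel straight lines offset by 1 m, the distance is exactly 1.0, and it is 0 for identical curves.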
The changes in the estimation accuracy for different numbers of constraints are plotted in Fig. 8. This figure shows that the estimation accuracy improves as the number of constraints increases.
The processing times for the proposed method and Sagawa et al.'s method were 0.58 and 0.022 s, respectively, on a laptop equipped with a Core i7-7600U processor (mean of 10 trials). Although the proposed method was slower than Sagawa et al.'s method, this performance is not a significant problem for offline motion analysis.
Attitude estimation using real IMU data
The attitude estimation accuracy was also evaluated using the MTi-3 IMU. In this evaluation, the IMU was rotated randomly and returned to the initial state, as depicted in Fig. 9. The constraint that the initial and final states are identical was employed. The mean attitude errors for the 53 collected sequences of IMU data are shown in Fig. 10. As can be seen from the figure, the estimation error of the proposed method was smaller than that of Sagawa et al.'s method (\(p = 0.0376\)).
Analysis of the effect of data frequency and length
To evaluate how the data acquisition rate affects the performance of the proposed method, we conducted an additional experiment using 1000-Hz simulated IMU data generated from the same 100 random motions employed in the simulation experiments described above. The estimation results are summarized in Fig. 11. The graph shows a trend similar to that of the 100-Hz data (Fig. 3). The mean processing time of the proposed method was 3.4 s for the 1000-Hz data, approximately 15 times slower than for the 100-Hz data (0.23 s).
The estimation accuracy for longer motions was evaluated using the same settings as the trajectory estimation experiments with the MTi-3 IMU. Two additional data sequences were collected by moving slowly along the rectangle (the same procedure as shown in Fig. 6); the durations of the sequences were approximately 14 and 21 s. The trajectory estimation results are summarized in Fig. 12. The estimation accuracy of both the proposed method and Sagawa et al.'s method degraded rapidly as the motion length increased.
Conclusions and future work
Using IMU data, the proposed motion estimation method corrects accumulated errors via least-squares fitting with constraints obtained from IMU measurements and from prior knowledge of the motion. This method can incorporate more than one constraint in the error correction steps, thereby improving estimation accuracy. The improvement in accuracy was verified by experiments using simulated and real data. When using a single constraint, the proposed method estimated attitude more accurately than the conventional method. The computation time of the proposed method was practical for motions with durations of several seconds, thanks to the use of an initialization technique. Nevertheless, the computational cost will be large for long-duration or high-frequency data. One way to alleviate this problem would be node thinning.
In future work, we plan to apply this method to actual sports data. An analysis of table tennis rallies is in progress.
The proposed method has a limitation: it assumes that the constraints are independent; however, biased measurements can violate this assumption. A mathematical model that considers gyroscope and accelerometer biases can compensate for this limitation.
References
1. Seaman A, McPhee J (2012) Comparison of optical and inertial tracking of full golf swings. Procedia Eng 34:461–466
2. TruSwing™. http://www.garmin.com.hk/products/intosports/truswing/
3. Smart Tennis Sensor for Tennis Rackets. https://www.sony.com/electronics/smartdevices/ssetn1w
4. Woodman OJ (2007) An introduction to inertial navigation. Technical report, University of Cambridge
5. Madgwick SOH, Harrison AJL, Vaidyanathan R (2011) Estimation of IMU and MARG orientation using a gradient descent algorithm. In: IEEE international conference on rehabilitation robotics, pp 1–7
6. Ojeda L, Borenstein J (2007) Non-GPS navigation for security personnel and first responders. J Navig 60(3):391–407
7. Sagawa K, Ohkubo K (2015) 2D trajectory estimation during free walking using a tiptoe-mounted inertial sensor. J Biomech 48(10):2054–2059
8. Sagawa K, Abo S, Tsukamoto T, Kondo I (2009) Forearm trajectory measurement during pitching motion using an elbow-mounted sensor. J Adv Mech Design Syst Manuf 3(4):299–311
9. Diebel J (2006) Representing attitude: Euler angles, unit quaternions, and rotation vectors. Report, Stanford University
10. Konolige K, Grisetti G, Kummerle R, Burgard W, Limketkai B, Vincent R (2010) Efficient sparse pose adjustment for 2D mapping. In: Proceedings of the IEEE international conference on intelligent robots & systems (IROS), pp 22–29
11. Montemerlo M, Thrun S (2006) Large-scale robotic 3D mapping of urban structures. In: Ang MH, Khatib O (eds) Experimental robotics IX: the 9th international symposium on experimental robotics, vol 21. Springer, Berlin, pp 141–150
12. Lu F, Milios E (1997) Globally consistent range scan alignment for environment mapping. Auton Robots 4:333–349
13. Grisetti G, Kummerle R, Stachniss C, Burgard W (2010) A tutorial on graph-based SLAM. IEEE Intell Transp Syst Mag 2(4):31–43
14. Young AD, Ling MJ, Arvind DK (2011) IMUSim: a simulation environment for inertial sensing algorithm design and evaluation. In: Proceedings of international conference on information processing in sensor networks (IPSN), pp 199–210
Authors’ contributions
The author read and approved the final manuscript.
Competing interests
The author declares that he has no competing interests.
Ethics approval and consent to participate
Not applicable.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Rotation vector definition
A rotation vector is a 3D vector representing a 3D rotation. It can be calculated from a 3D unit vector representing the rotation axis \(\varvec{a} = [a_x, a_y, a_z]^\top\) and the angle of rotation \(\theta\) [rad] as follows:
\(\varvec{v} = \theta \varvec{a}.\)
The inverse rotation can be calculated as follows:
\(\varvec{v}^{-1} = -\varvec{v}.\)
For brevity, the norm of \(\varvec{v}\) is denoted as v.
Conversion between rotation vector and unit quaternion
Conversion from a rotation vector to unit quaternion \(\varvec{q} = [q_w, q_x, q_y, q_z]^\top\) can be expressed as follows:
\(q_w = \cos \frac{v}{2}, \qquad [q_x, q_y, q_z]^\top = \frac{\sin \frac{v}{2}}{v}\, \varvec{v}.\)
Conversion from unit quaternion to rotation vector can be expressed as follows:
\(\varvec{v} = \frac{2 \arccos (q_w)}{\sqrt{1 - q_w^2}}\, [q_x, q_y, q_z]^\top.\)
Rotation vector to rotation matrix
A rotation vector can be converted to a rotation matrix as follows (Rodrigues' formula):
\(\varvec{R} = \varvec{I} + \frac{\sin v}{v} [\varvec{v}]_\times + \frac{1 - \cos v}{v^2} [\varvec{v}]_\times^2,\)
where \([\varvec{v}]_\times\) denotes the skew-symmetric cross-product matrix of \(\varvec{v}\).
Rotation vector multiplication
The product of the two rotation vectors \(\varvec{v}\) and \(\varvec{u}\) is represented by \(\varvec{v} * \varvec{u}\), which is calculated in terms of the quaternion product.
This operation represents the concatenation of two rotations (first rotation by \(\varvec{v}\), and the next rotation by \(\varvec{u}\)).
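The rotation vector operations of this appendix can be sketched in Python via quaternions. The operand-order convention (first \(\varvec{v}\), then \(\varvec{u}\) as a body-frame increment) is an assumption based on how the attitude update uses the \(*\) operator in the main text.

```python
import numpy as np

def rotvec_to_quat(v):
    """Rotation vector -> unit quaternion [qw, qx, qy, qz]."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = v / theta
    return np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))

def quat_to_rotvec(q):
    """Unit quaternion -> rotation vector."""
    qw, qv = q[0], q[1:]
    s = np.linalg.norm(qv)
    if s < 1e-12:
        return np.zeros(3)
    return 2.0 * np.arctan2(s, qw) * qv / s

def quat_mul(p, q):
    """Hamilton quaternion product p (x) q."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw])

def rotvec_mul(v, u):
    """Concatenation v * u of two rotation vectors via quaternions
    (rotation by v followed by a body-frame rotation by u)."""
    return quat_to_rotvec(quat_mul(rotvec_to_quat(v), rotvec_to_quat(u)))
```

Concatenating two 90° rotations about the z-axis yields a 180° rotation about z, and concatenating a rotation with its inverse (its negation) yields the identity, matching the definitions above.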
Derivatives of rotation vector multiplication
Derivatives of the multiplication of three rotation vectors are calculated as follows:
Here, H, G, Q, and \(\bar{Q}\) are given as follows [9]:
\(\mathrm {where\ } c:=\frac{1}{1-q_w^2}, \ \ d:=\frac{ \arccos (q_w)}{\sqrt{1-q_w^2}},\)
\(\mathrm {where\ } S:=\sin \frac{v}{2}, \ a:= v\cos \frac{v}{2}-2S,\)
Jacobian matrix calculation
The Jacobian matrix of \(\varvec{e}_{ij}(\varvec{x})\) at \(\varvec{x}=\hat{\varvec{x}}\) has the following sparse structure:
Here, \(\varvec{J}_{ij}^{(i)}\) and \(\varvec{J}_{ij}^{(j)}\) are the partial derivatives of \(\varvec{e}_{ij}(\varvec{x})\) by \(\varvec{x}_i\) and \(\varvec{x}_j\), respectively, and they can be calculated as:
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Irie, K. A graph optimization approach for motion estimation using inertial measurement unit data. Robomech J 5, 14 (2018). https://doi.org/10.1186/s40648-018-0110-1
Keywords
 Motion estimation
 Inertial measurement unit
 Graph-based simultaneous localization and mapping