Knowing 'where am I?' is an essential question that a moving vehicle must answer at all times. Among the numerous onboard sensors, a GNSS receiver for single-frequency Precise Point Positioning and a camera are competitive choices due to their fairly low cost and their potential to provide a wealth of useful information.
However, due to the degraded performance of GNSS solutions in urban canyons, a tight integration of the two sensors at the observation level is considered, i.e., jointly processing the GNSS ranges and the vision measurements in the camera image. The availability of High Definition maps (HD maps) aids vehicle positioning by providing extra information about the environment. In this project, landmark positions are retrieved through vision and the HD map, and can complement GNSS in urban canyons. The project focuses on building the mathematical model for the integration of observed landmark positions (using a single camera, chosen for its ease of implementation and low cost) with GNSS measurements, and on analyzing the performance and feasibility of this integration for vehicle positioning. The emphasis lies on the feasibility of the proposed mathematical model, which is flexible, capable of automatically using all available input, and provides a position solution with the best achievable precision.
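To illustrate the observation-level coupling, the following sketch (Python with NumPy; the geometry, noise levels, and function names are illustrative assumptions, not this project's implementation) stacks GNSS pseudorange residuals and pinhole-camera pixel residuals of known landmarks into a single weighted Gauss-Newton update of the vehicle position and receiver clock error; the camera is assumed co-located with the antenna and aligned with the world frame for brevity:

```python
# A minimal sketch of tight integration at the observation level.
# All values and names are illustrative assumptions, not the thesis code.
import numpy as np

def pseudorange_residuals(x, sat_pos, rho_obs):
    """GNSS part: rho = ||sat - p|| + c*dt, with state x = [px, py, pz, c*dt]."""
    p, cdt = x[:3], x[3]
    rng = np.linalg.norm(sat_pos - p, axis=1)
    # Jacobian rows: unit vectors from satellite to receiver, plus 1 for the clock.
    H = np.hstack([(p - sat_pos) / rng[:, None], np.ones((len(sat_pos), 1))])
    return rho_obs - (rng + cdt), H

def pixel_residuals(x, lm_pos, uv_obs, f=800.0):
    """Vision part: pinhole projection u = f*X/Z, v = f*Y/Z of known
    landmarks, with camera axes assumed aligned with the world frame."""
    d = lm_pos - x[:3]                  # landmark coordinates in the camera frame
    pred = f * d[:, :2] / d[:, 2:3]
    H = np.zeros((2 * len(lm_pos), 4))  # clock column stays zero for vision
    for i, (X, Y, Z) in enumerate(d):
        H[2 * i,     :3] = f * np.array([-1.0 / Z, 0.0, X / Z**2])
        H[2 * i + 1, :3] = f * np.array([0.0, -1.0 / Z, Y / Z**2])
    return (uv_obs - pred).ravel(), H

def gauss_newton_step(x, sat_pos, rho, lm_pos, uv, sig_rho=1.0, sig_px=0.5):
    """One update of [position, clock], weighting each sensor by its
    (assumed) measurement standard deviation."""
    r1, H1 = pseudorange_residuals(x, sat_pos, rho)
    r2, H2 = pixel_residuals(x, lm_pos, uv)
    r = np.concatenate([r1 / sig_rho, r2 / sig_px])
    H = np.vstack([H1 / sig_rho, H2 / sig_px])
    dx, *_ = np.linalg.lstsq(H, r, rcond=None)
    return x + dx
```

The point of the stacked system is that even a few pixel measurements add rows to the design matrix, so the update can remain solvable when the pseudorange rows alone would be too few, and vice versa.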
The uncertainty in the available landmark positions (for instance, errors in the HD map) is handled in two different ways: one is to include the landmark position coordinates as additional measurements in the model; the other projects the uncertainty onto the measurements in the camera image. The latter method turns out to be much more efficient. To integrate vision and GNSS measurements, a conversion is required between an ECEF (Earth-centered, Earth-fixed) coordinate frame, typically used for GNSS, and the world coordinate frame used for the camera measurements. A position offset between the GNSS antenna and the camera is accounted for, since the camera lens center does not coincide with the GNSS antenna center. In the simulation and experiment, an extended integration that leaves out this position offset is also presented and discussed, applicable for instance when the GNSS antenna is mounted very close to the camera; this further improves the redundancy and lowers the computational load.
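The second, more efficient way of handling the map uncertainty amounts to standard covariance propagation: the landmark covariance is mapped through the projection Jacobian and added to the pixel-measurement noise. A minimal sketch follows, under the same illustrative pinhole assumptions as above (focal length and noise values are made up):

```python
import numpy as np

def inflated_pixel_cov(lm_pos, cam_pos, Sigma_lm, f=800.0, sig_px=0.5):
    """R = sig_px^2 * I + J @ Sigma_lm @ J.T, where J is the Jacobian of
    the pixel coordinates (u, v) with respect to the landmark position
    (camera axes assumed aligned with the world frame)."""
    X, Y, Z = lm_pos - cam_pos
    J = f * np.array([[1.0 / Z, 0.0, -X / Z**2],
                      [0.0, 1.0 / Z, -Y / Z**2]])
    return sig_px**2 * np.eye(2) + J @ Sigma_lm @ J.T

# Example: 5 cm (1-sigma) HD-map error per axis, landmark 20 m ahead.
R = inflated_pixel_cov(np.array([2.0, 1.0, 20.0]), np.zeros(3),
                       Sigma_lm=(0.05**2) * np.eye(3))
```

Unlike carrying the landmark coordinates as extra unknowns, this keeps the state vector small, which is where the efficiency gain comes from.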
From the simulation and experiment, we conclude that the integration model is able to produce a position solution when one of the sensors fails and the other still operates, and that the extended integration model can produce a position solution even when both sensors individually fail to do so. In the scenario where GNSS fails but vision operates, the integration model produces a position solution within a quarter of a meter in local horizontal coordinates, and the GNSS measurements contribute little to the solution. Compared to the integration model, the extended integration reduces or eliminates the (typically strong) correlation between the estimates, in particular those of the camera-antenna position offset, the GNSS receiver clock error, and the vertical coordinate. Under the same scenarios, the extended integration improves the standard deviations of the vertical coordinate and the receiver clock error to within a quarter of a meter and one-third of a meter, respectively.
Further study is recommended in two directions: applying a full image-processing pipeline to obtain more realistic vision measurements, and including GNSS carrier-phase observations to replace the current GNSS positioning based on Precise Point Positioning, in order to obtain a GNSS position solution of similar quality to the vision part. The dimension of the system grows when carrier-phase measurements (with their phase ambiguities) are added, as well as two additional rotations for the camera; the extra rotations introduce a significant amount of nonlinearity into the model. A larger model with increased nonlinearity may call for an alternative model formulation.
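For reference, the extra unknowns come from the standard textbook single-frequency carrier-phase observation equation (conventional notation, not this project's):

$$\Phi = \rho + c\,(\delta t_r - \delta t^s) + T - I + \lambda N + \varepsilon_\Phi,$$

where $\rho$ is the geometric range, $\delta t_r$ and $\delta t^s$ are the receiver and satellite clock errors, $T$ and $I$ the tropospheric and ionospheric delays, $\lambda$ the carrier wavelength, and $N$ the integer ambiguity, one per satellite, that enlarges the state vector.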