- Open Access
- Authors : He Junlong, Du Feng
- Paper ID : IJERTV7IS100086
- Volume & Issue : Volume 07, Issue 10 (October – 2018)
- Published (First Online): 05-01-2019
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
Unified Calibration Method for Millimeter-Wave Radar and Machine Vision
He Junlong
School of Automotive and Transportation, Tianjin University of Technology and Education, Tianjin, China
Du Feng
School of Automotive and Transportation, Tianjin University of Technology and Education, Tianjin, China
Abstract — Firstly, the camera and the millimeter-wave radar are calibrated separately; then the joint calibration of the camera and the millimeter-wave radar is carried out. Finally, a fusion model based on the camera and the millimeter-wave radar is built to detect pedestrians. The test results show that the calibrated millimeter-wave radar and camera measure the target with high accuracy and can provide reliable data for the vehicle's passive safety system.
Keywords — Automotive safety; millimeter-wave radar; joint calibration

INTRODUCTION
Millimeter-wave radar has high detection accuracy and strong environmental adaptability, and is widely used in automotive driver-assistance collision-avoidance systems. However, millimeter-wave radar has low azimuth resolution and cannot accurately locate obstacles. In view of this, the literature [1, 2, 3, 4] fuses millimeter-wave radar and CCD camera information, using the high spatial resolution of the CCD camera to compensate for the low azimuth resolution of the radar, so that the advantages of the two sensors complement each other. Many scholars at home and abroad have studied sensor information fusion and achieved good results. The vehicle detection method based on vision and laser radar proposed by Lili Huang et al. has good reliability and can be applied to traffic monitoring and highway cruise tests [5]. In that fusion system, the lidar uses a feature clustering model to estimate the possible locations of vehicles and converts the information it acquires into image coordinates to form regions of interest; the regions of interest are then confirmed with an Adaboost-based method. Shanghai Jiao Tong University used a camera and a lidar sensor to detect and track vehicles in the field of view: the vehicle distance information acquired by the lidar is combined with the vehicle feature information the camera extracts from the image to verify vehicle presence, and a combination of Kalman filtering and particle filtering is used to track vehicles and solve the problem of tracking loss during sharp turns [6].
The Key Project of Tianjin Natural Science Foundation of China (16JCZDJC38200)
Tianjin Science and Technology Innovation Platform Project (16PTGCCX00150)
In this paper, a millimeter-wave radar and a camera are used to acquire information about the external environment, and the two data streams are fused to detect the pedestrian region in the image. This improves the reliability of pedestrian detection and overcomes the disadvantages of single-sensor detection, such as the limited amount of information and low reliability.
SENSOR CALIBRATION

Camera calibration
The camera selected in this paper is the UNIQ-301, shown in Figure 1; its performance parameters are listed in Table 1. To ensure that the installation angle of the camera meets the requirements, the internal and external parameters of the camera must be known before calibration is performed. First, the internal parameters of the camera are calibrated. The main camera calibration methods include the optimization calibration method, the camera transformation matrix calibration method, and the Zhang Zhengyou calibration method. Among them, the Zhang Zhengyou method is simple, adaptable, and accurate, and is one of the most commonly used camera calibration methods [7], so this paper also uses the Zhang Zhengyou method for camera calibration.
Fig.1. UNIQ-301
Tab.1. camera parameters

| Parameter | Value |
|---|---|
| Model | CCD |
| Resolution | 752×582 |
| Pixel size | 8.6 µm × 8.3 µm |
| Shutter | 1/60 ~ 1/31,000 s |
| Minimum illumination | 0.03 lux |
| Signal-to-noise ratio | 56 dB |
| Dimensions | 50 mm × 39 mm × 83 mm |
| Weight | 155 g |
Based on Zhang Zhengyou's calibration principle, the MATLAB calibration toolbox can complete the calibration simply, accurately, and efficiently, and generate the parameters needed by the fusion model. Camera calibration requires a suitable calibration object, usually a regular pattern such as a checkerboard, because a planar checkerboard pattern is easy to process. First, the internal parameters of the camera are calibrated. A 9×6 checkerboard is selected, with squares of 3 cm × 3 cm. In order to construct a three-dimensional scene, the board is held by hand and turned to various orientations to obtain checkerboard images: 20 grayscale images of different orientations were taken by the CCD camera, as shown in Figure 2, and loaded into the MATLAB calibration toolbox. Figure 3 intuitively shows the 3D positions of the 20 checkerboard planes relative to the camera plane.
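The paper performs this step with the MATLAB calibration toolbox; as a hedged illustration, the sketch below reproduces the same workflow with OpenCV in Python. The file pattern calib_*.png, the 9×6 inner-corner count, and the 3 cm square size are assumptions taken from the description above, not the authors' actual setup.

```python
# Minimal OpenCV equivalent of the MATLAB-toolbox calibration workflow.
# Assumptions: images are named calib_*.png (hypothetical), the board has
# 9x6 inner corners, and each square is 3 cm on a side.
import glob

import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners per row / column (assumed from the text)
SQUARE = 0.03      # square side length in meters (3 cm)

# 3D corner positions in the board's own frame (Z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, img_size = [], [], None
for fname in sorted(glob.glob("calib_*.png")):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K is the internal parameter matrix; dist holds [k1, k2, p1, p2, k3];
# rvecs/tvecs are the per-image external parameters (rotation, translation).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
print("mean reprojection RMS:", rms)
print("K =\n", K, "\ndist =", dist.ravel())
```

The per-image rvecs and tvecs play the role of the rotation matrix R (via cv2.Rodrigues) and translation vector T reported below.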
Through calibration, the camera's internal parameter matrix and distortion coefficient vector are obtained as:

Internal parameter matrix:

$$K = \begin{bmatrix} 8.112 & 0 & 2.989 \\ 0 & 8.096 & 2.831 \\ 0 & 0 & 1 \end{bmatrix}$$

Distortion coefficient vector:

$$[0.139 \quad 0.143 \quad 0.001 \quad 0.007]$$

Fig.3. extrinsic parameters visualization

Second, the external parameters of the camera are calibrated.
The external parameters of the camera consist of a rotation matrix R and a translation vector T, which are determined by the relative positions of the selected world coordinate system and the camera coordinate system; world coordinate systems at different positions correspond to different external parameters. By default, the MATLAB calibration toolbox takes as the origin of the world coordinate system the first checkerboard corner that the user clicks in a selected calibration image.
Through calibration, the camera's rotation matrix and translation vector are obtained as:

Rotation matrix:

$$R = \begin{bmatrix} 0.031 & 1 & 0.026 \\ 0.993 & 0.032 & 0.112 \\ 0.113 & 0.022 & 0.993 \end{bmatrix}$$

Translation vector:

$$T = [2.208 \quad 2.140 \quad 0.149]$$

Fig.2. calibration images

Fig.4. mean reprojection error per image
Millimeter-wave radar calibration
The millimeter-wave radar must be installed so that its horizontal angle, yaw angle, and pitch angle meet the installation requirements. The horizontal angle and the pitch angle can be measured with tools such as angle gauges and weights, and the radar mounting mechanism can be adjusted until the radar meets the angle requirements.
In order to make the normal vector of the millimeter-wave radar plane parallel to the longitudinal symmetry plane of the vehicle, rod-shaped obstacles with small cross-sectional areas are placed 5 m and 10 m directly in front of the vehicle as targets for the millimeter-wave radar. During yaw-angle calibration, the radar yaw angle is adjusted by the adjustment mechanism in equal steps; at each step the lateral distances of the two obstacles, d1 and d2, are measured, and the yaw-angle calibration factor k is calculated according to Eq. (1). When the adjustment mechanism reaches the position where k takes its minimum value, the normal vector of the radar detection surface is considered parallel to the longitudinal symmetry plane of the vehicle.

$$k = |d_1 - d_2| \tag{1}$$
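As an illustration of Eq. (1), the sketch below automates the search for the adjustment step that minimizes k. The function read_lateral_offsets is a hypothetical stand-in for reading the radar-reported lateral distances d1 and d2 of the 5 m and 10 m targets; it is not part of the paper's setup.

```python
# Sketch of the yaw-angle calibration loop built around Eq. (1).
def yaw_calibration_factor(d1: float, d2: float) -> float:
    """Eq. (1): k = |d1 - d2|. The radar boresight is parallel to the
    vehicle's longitudinal symmetry plane when k is at its minimum."""
    return abs(d1 - d2)

def calibrate_yaw(read_lateral_offsets, steps):
    """Try each yaw adjustment step and keep the one with the smallest k.

    read_lateral_offsets(step) -> (d1, d2): hypothetical callback returning
    the measured lateral distances of the 5 m and 10 m rod targets.
    """
    best_step, best_k = None, float("inf")
    for step in steps:
        d1, d2 = read_lateral_offsets(step)
        k = yaw_calibration_factor(d1, d2)
        if k < best_k:
            best_step, best_k = step, k
    return best_step, best_k
```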
MILLIMETER-WAVE RADAR AND CAMERA INFORMATION FUSION

Spatial fusion

Accurately projecting a point in space detected by the millimeter-wave radar onto a point in the image plane acquired by the camera involves conversions among the millimeter-wave radar coordinate system, the world coordinate system, the camera coordinate system, the image coordinate system, and the pixel coordinate system. The conversion relationship between them is shown in Figure 5.
Fig.5. coordinate system transformation relationship (conversion flow from the millimeter-wave radar coordinate system through the world and camera coordinate systems to the image and pixel coordinate systems, via calibration, measurement, external and internal parameters, and distortion correction)

The relative positional relationship between the millimeter-wave radar coordinate system, the world coordinate system, and the camera coordinate system is shown in Figure 6.
Fig.6. positional relationship between sensor coordinate system and world coordinate system
In Figure 6, the millimeter-wave radar coordinate system is OfXfYfZf, the world coordinate system is ObXbYbZb, and the camera coordinate system is OuXuYuZu. The YOZ planes of the three coordinate systems coincide, their XOY planes are parallel to each other, and the XOZ plane of the world coordinate system coincides with that of the camera coordinate system and is parallel to that of the radar coordinate system. Z0 and Z1 are the distances in the Z-axis direction between the radar coordinate system and the camera coordinate system, and between the camera coordinate system and the world coordinate system, respectively; H is the distance in the Y-axis direction between the radar coordinate system and the camera and world coordinate systems.
Since the millimeter-wave radar and the camera are installed at relatively fixed positions, the conversion relationship between a point P(R, α) in the radar coordinate system and its camera coordinates P(Xu, Yu, Zu) can be obtained:

$$\begin{cases} X_u = R\sin\alpha \\ Y_u = H \\ Z_u = Z_0 + R\cos\alpha \end{cases} \tag{2}$$

Converted to matrix form:

$$\begin{bmatrix} X_u \\ Y_u \\ Z_u \end{bmatrix} = \begin{bmatrix} \sin\alpha \\ 0 \\ \cos\alpha \end{bmatrix} R + \begin{bmatrix} 0 \\ H \\ Z_0 \end{bmatrix} \tag{3}$$

Assuming the camera follows an ideal linear pinhole model, the image coordinates P(x, y) of point P before distortion correction can then be obtained from the conversion relationship between the camera coordinate system and the image coordinate system:

$$Z_u \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_u \\ Y_u \\ Z_u \\ 1 \end{bmatrix} \tag{4}$$

Due to lens distortion, the image captured by the camera deviates from the real image. Therefore, the image coordinates must be corrected for distortion, giving the corrected image coordinates P(x', y'):

$$\begin{cases} x' = x + x(k_1 r^2 + k_2 r^4 + k_3 r^6) + [p_2(r^2 + 2x^2) + 2p_1 xy] \\ y' = y + y(k_1 r^2 + k_2 r^4 + k_3 r^6) + [p_1(r^2 + 2y^2) + 2p_2 xy] \end{cases} \tag{5}$$

where r² = x² + y², k1, k2, k3 are the radial distortion coefficients, and p1, p2 are the tangential distortion coefficients.

After the distortion-corrected image coordinates are obtained, the actual pixel coordinates P(u', v') of point P are obtained through the conversion relationship between the image coordinate system and the pixel coordinate system:

$$\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \tag{6}$$
After the distance R and azimuth angle α of the target are obtained by the millimeter-wave radar, the projection coordinates of the target in the image can be calculated according to the above formulas using the internal parameters obtained earlier, such as the camera focal length, principal point coordinates, and distortion parameters. Thereby, the conversion from the radar coordinate system P(R, α) to the pixel coordinate system P(u', v') is realized.
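To make the whole chain concrete, the sketch below strings Eqs. (2)-(6) into a single radar-to-pixel projection. It is a sketch under the geometry of Figure 6, not the authors' code; the symbols (f, dx, dy, u0, v0, H, Z0, k1..k3, p1, p2) follow the text, and the reconstructed form of Eqs. (2)-(3), including Yu = H, is an assumption.

```python
import math

def radar_to_pixel(R, alpha, f, dx, dy, u0, v0, H, Z0,
                   k1, k2, k3, p1, p2):
    """Project a radar detection (range R, azimuth alpha in radians)
    to pixel coordinates (u', v') via Eqs. (2)-(6)."""
    # Eqs. (2)-(3): radar polar point -> camera coordinates (assumed form).
    Xu = R * math.sin(alpha)
    Yu = H                      # assumption: vertical offset between sensors
    Zu = Z0 + R * math.cos(alpha)
    # Eq. (4): ideal pinhole projection onto the image plane.
    x = f * Xu / Zu
    y = f * Yu / Zu
    # Eq. (5): radial + tangential lens-distortion correction.
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xp = x + x * radial + (p2 * (r2 + 2 * x * x) + 2 * p1 * x * y)
    yp = y + y * radial + (p1 * (r2 + 2 * y * y) + 2 * p2 * x * y)
    # Eq. (6): image coordinates -> pixel coordinates.
    u = xp / dx + u0
    v = yp / dy + v0
    return u, v
```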
Time fusion
In the actual road-environment detection process, the environmental information at different moments differs because of vehicle motion and environmental changes; for a vehicle moving at high speed in particular, even an instantaneous time difference can lead to different environmental information. Therefore, the environmental information obtained by the different sensors must correspond to the same moment before spatial data fusion can be performed. Each sensor detects the environment based on the current time and acquires data at its own sampling frequency on the time axis. Data collected by the different sensors at the same moment are therefore extracted for fusion, and the sampling interval is kept short to ensure the real-time performance and validity of the detected data.
The millimeter-wave radar used in this paper has a sampling frequency of 40 Hz, that is, 40 frames of radar data are acquired per second with a data interval of 25 ms per frame. The camera frame rate is 30 fps, that is, 30 frames of image data are acquired per second with a data interval of 33.3 ms per frame. In this paper, time fusion is driven by the sensor with the longer sampling period, in a backward-compatible manner. For example, a time node is set every 100 ms: when the millimeter-wave radar delivers a target data message at the time node, the image information at the current moment is acquired simultaneously, and spatial data fusion is performed through the coordinate transformation, thereby completing the fusion of the millimeter-wave radar and camera data at that moment and synchronizing the radar and camera data in time. The fusion process in time is shown in Figure 7.
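A minimal sketch of this pairing rule follows: at each 100 ms node, the radar message and camera frame nearest in time are matched. The (timestamp, data) buffer format and the 50 ms tolerance are assumptions for illustration, not details given in the paper.

```python
def nearest(buffer, t, tolerance=0.05):
    """Return the (timestamp, data) item closest to time t (seconds),
    or None if nothing lies within the tolerance."""
    best = min(buffer, key=lambda item: abs(item[0] - t), default=None)
    if best is not None and abs(best[0] - t) <= tolerance:
        return best
    return None

def fuse_at_nodes(radar_msgs, camera_frames, period=0.1):
    """Walk 100 ms time nodes and emit synchronized radar/camera pairs.
    Both buffers are lists of (timestamp, data) sorted by timestamp."""
    pairs = []
    if not radar_msgs or not camera_frames:
        return pairs
    t = radar_msgs[0][0]
    while t <= radar_msgs[-1][0]:
        r = nearest(radar_msgs, t)
        c = nearest(camera_frames, t)
        if r and c:
            pairs.append((r[1], c[1]))  # fuse the paired data payloads
        t += period
    return pairs
```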
Fig.7. time fusion
TEST VERIFICATION
Since the millimeter-wave radar and the camera can each independently measure the relative position between the target and the vehicle, and the target position detected by the millimeter-wave radar can be converted into the pixel coordinate system by Eq. (6), tasks such as target recognition can be performed. Therefore, during test verification, the measurement data obtained by the millimeter-wave radar and the camera are compared with the actual data. The forward range of a collision prediction system is much shorter than that of active safety systems such as adaptive cruise control, and the lateral range of interest is limited to the vehicle's own lane. Therefore, this paper selects pedestrians within 30 m in front of the vehicle and within 2 m to the left and right as the detection targets. The measured and actual data for the two sensors are shown in Table 2.
Tab.2. test data (unit: m)

| Actual distance (longitudinal) | Actual distance (transverse) | Radar measurement (longitudinal) | Radar measurement (transverse) | Camera measurement (longitudinal) | Camera measurement (transverse) |
|---|---|---|---|---|---|
| 10.12 | 1.50 | 10.37 | 1.55 | 10.23 | 1.51 |
| 10.12 | -1.50 | 10.45 | -1.58 | 10.25 | -1.50 |
| 10.12 | 0.00 | 10.46 | 0.02 | 10.18 | -0.01 |
| 15.23 | 1.50 | 15.49 | 1.48 | 15.38 | 1.55 |
| 15.23 | -1.50 | 15.40 | -1.47 | 15.36 | -1.53 |
| 15.23 | 0.00 | 15.45 | 0.08 | 15.56 | 0.02 |
| 20.05 | 1.50 | 20.24 | 1.45 | 20.15 | 1.52 |
| 20.05 | -1.50 | 20.29 | -1.48 | 20.30 | -1.53 |
| 20.05 | 0.00 | 20.28 | 0.05 | 20.31 | 0.01 |
| 29.85 | 1.50 | 30.42 | 1.65 | 29.45 | 1.53 |
| 29.85 | -1.50 | 30.38 | -1.40 | 30.12 | -1.52 |
| 29.85 | 0.00 | 30.37 | 0.10 | 30.05 | 0.02 |
The camera measurements in Table 2 are the values obtained after conversion through the ranging model. The table shows that the longitudinal distance error of both sensors is within 0.6 m across the test area; within 20 m the camera's measurement error is smaller, while between 20 m and 30 m the radar's error is smaller. The main reason is that the actual spatial distance corresponding to one pixel grows as the camera measures more distant targets, so the spatial measurement error caused by pixel quantization during image measurement increases. The lateral distance measurement error of both sensors is within 0.15 m. Because the angular resolution of the millimeter-wave radar is limited, its lateral distance error is larger than the camera's, but the radar's longitudinal and lateral errors remain relatively uniform over the entire test range.
CONCLUSION
In this paper, the installation angles of the vehicle's millimeter-wave radar and camera are calibrated, the internal and external parameters of the camera are calibrated, and the three coordinate axes of the radar projection coordinate system and the camera projection coordinate system are unified by using the longitudinal symmetry plane of the vehicle as the calibration reference. A joint calibration of the millimeter-wave radar and the camera is thus realized.
REFERENCES

[1] Xi Guangyao, Chen Rong, Zhang Jianfeng. Obstacle Detection Based on Millimeter Wave Radar and Machine Vision Information Fusion[J]. Journal of Internet of Things, 2017, 1(02): 76-83.
[2] Zeng Jie. Research on Front Vehicle Detection Based on Radar and Machine Vision Information Fusion[A]. Southwest Automobile Information[C]. Chongqing Automotive Engineering Society, 2017: 6.
[3] Jin Lisheng, Cheng Lei, Cheng Bo. Night-time Vehicle Detection Based on Millimeter Wave Radar and Machine Vision[J]. Journal of Automotive Safety and Energy, 2016, 7(02): 167-174.
[4] Tan Lifan. Research on Front Vehicle Detection Method Combining Machine Vision and Millimeter Wave Radar[D]. Hunan University, 2018.
[5] Wang Baofeng, Qi Zhiquan, Ma Guocheng, Chen Sizhong. A Vehicle Identification Method Based on Radar and Machine Vision Information Fusion[J]. Automotive Engineering, 2015, 37(06): 674-678.