I. Install the environment under ROS

1. I first followed the official guide's one-line installer, but the configuration did not succeed and it was unclear where to run make:

wget https://raw.githubusercontent.com/oroca/oroca-ros-pkg/master/ros_install.sh && \
chmod 755 ./ros_install.sh && bash ./ros_install.sh catkin_ws kinetic

2. So instead, download and build the SDK source yourself:

git clone https://github.com/slightech/MYNT-EYE-S-SDK
cd <SDK>
make ros

This target automatically runs:

make init
make install
make samples
make tools

Once these finish, the environment is configured successfully.

3. Test by running the samples:

make samples
./samples/_output/bin/api/camera_a
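If camera_a cannot open the device, a generic sanity check (not part of the SDK docs) is to confirm the camera shows up on the USB bus:

lsusb              # the MYNT-EYE device should appear in the list
dmesg | tail -n 20 # recent kernel messages, useful right after plugging the camera in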

II. Test the camera and obtain calibration parameters

1. Test the camera

cd <sdk>
make ros                                         # only needed the first time
source wrappers/ros/devel/setup.bash
roslaunch mynt_eye_ros_wrapper mynteye.launch    # starts the camera, but does not display images by default
roslaunch mynt_eye_ros_wrapper display.launch    # alternatively, this launch file also opens RViz to display the images
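To confirm the driver is actually publishing, list and sample the topics; the topic names below are the wrapper's defaults and match the ones used later in mynteye_config.yaml:

rostopic list | grep mynteye           # image and IMU topics should be listed
rostopic hz /mynteye/left/image_raw    # check the image rate
rostopic hz /mynteye/imu/data_raw      # check the IMU rate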

2. Obtain the image calibration parameters and IMU calibration parameters of the connected device:

cd <sdk>
./samples/_output/bin/tutorials/get_img_params    # prints the camera calibration parameters
./samples/_output/bin/tutorials/get_imu_params    # prints the IMU calibration parameters
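The printed intrinsics are what go into the distortion_parameters and projection_parameters fields of the YAML in section III. While the ROS wrapper is running, they can also be cross-checked against the camera_info topic, assuming the wrapper publishes a standard sensor_msgs/CameraInfo (most ROS camera drivers do):

rostopic echo -n 1 /mynteye/left/camera_info   # K holds fx, fy, cx, cy; D holds k1, k2, p1, p2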

3. Install VINS-Mono and create the launch and config files
VINS-Mono source: https://github.com/HKUST-Aerial-Robotics/VINS-Mono

cd ~/ws_vins/src   # your VINS workspace
git clone https://github.com/HKUST-Aerial-Robotics/VINS-Mono.git
cd ../
catkin_make
source ~/ws_vins/devel/setup.bash
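If catkin_make succeeds, the three VINS-Mono packages used in the launch file later should be visible to ROS; a quick sanity check:

rospack find feature_tracker
rospack find vins_estimator
rospack find pose_graph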

TIP: if this build fails, check whether the Ceres library is missing. If so, install it as follows.
1. Open a terminal and clone the Ceres source: git clone https://github.com/ceres-solver/ceres-solver.git

2. This copies the Ceres library source onto your machine.

3. Install the dependencies and build tools:

3.1 CMake:

sudo apt-get install cmake

3.2 google-glog + gflags:

sudo apt-get install libgoogle-glog-dev libgflags-dev

3.3 BLAS & LAPACK:

sudo apt-get install libatlas-base-dev

3.4 Eigen3:

sudo apt-get install libeigen3-dev

3.5 SuiteSparse and CXSparse (optional):

sudo apt-get install libsuitesparse-dev
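The same dependencies (3.1-3.5) can also be installed in a single command:

sudo apt-get install cmake libgoogle-glog-dev libgflags-dev libatlas-base-dev libeigen3-dev libsuitesparse-dev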

4. Build the Ceres library. In a terminal, enter the ceres-solver directory and run:

mkdir build
cd build
cmake ..
make
sudo make install

This installs Ceres to the system's default location (typically /usr/local).
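A quick way to confirm the installation (assuming the default /usr/local prefix):

ls /usr/local/include/ceres     # Ceres headers
ls /usr/local/lib | grep ceres  # libceres library files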

III. Create the launch and config files

Step 1: create a new mynteye.launch file under ~/ws_vins/src/VINS-Mono/vins_estimator/launch.
Step 2: create a folder named mynteye under /home/fish/ws_vins/src/VINS-Mono/config, and create a mynteye_config.yaml file inside it.
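Both paths refer to the VINS workspace built earlier; a minimal sketch for creating the folder and the two empty files (assuming that workspace path):

mkdir -p ~/ws_vins/src/VINS-Mono/config/mynteye
touch ~/ws_vins/src/VINS-Mono/vins_estimator/launch/mynteye.launch
touch ~/ws_vins/src/VINS-Mono/config/mynteye/mynteye_config.yaml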
The contents of the two files are as follows.
mynteye.launch:

<launch>
    <!-- Paths follow the file locations created in step 2 above -->
    <arg name="config_path" default="$(find feature_tracker)/../config/mynteye/mynteye_config.yaml" />
    <arg name="vins_path" default="$(find feature_tracker)/../config/../" />

    <node name="feature_tracker" pkg="feature_tracker" type="feature_tracker" output="log">
        <param name="config_file" type="string" value="$(arg config_path)" />
        <param name="vins_folder" type="string" value="$(arg vins_path)" />
    </node>

    <node name="vins_estimator" pkg="vins_estimator" type="vins_estimator" output="screen">
        <param name="config_file" type="string" value="$(arg config_path)" />
        <param name="vins_folder" type="string" value="$(arg vins_path)" />
    </node>

    <node name="pose_graph" pkg="pose_graph" type="pose_graph" output="screen">
        <param name="config_file" type="string" value="$(arg config_path)" />
        <param name="visualization_shift_x" type="int" value="0" />
        <param name="visualization_shift_y" type="int" value="0" />
        <param name="skip_cnt" type="int" value="0" />
        <param name="skip_dis" type="double" value="0" />
    </node>
</launch>

mynteye_config.yaml:
Note: replace the relevant parameters below with your own (for example the topics and the camera/IMU parameters obtained in section II).

%YAML:1.0

#common parameters
imu_topic: "/mynteye/imu/data_raw"      # replace with your IMU topic
image_topic: "/mynteye/left/image_raw"  # replace with your camera topic
output_path: "/home/fish/ws_vins/src/VINS-Mono/config/output_path/"   # replace with your own path

#camera calibration
model_type: PINHOLE
camera_name: camera
image_width: 752    # replace with your camera parameters (obtained in section II, step 2)
image_height: 480   # replace with your camera parameters
distortion_parameters:    # replace with your distortion parameters
   k1: -0.266278
   k2: 0.0527945
   p1: -0.000182013
   p2: 0.000422317
projection_parameters:    # replace with your camera intrinsics
   fx: 365.75
   fy: 373.236
   cx: 357.402
   cy: 241.418

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0 # 0 Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
# 1 Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
# 2 Don't know anything about extrinsic parameters. You don't need to give R,T. We will try to calibrate it. Do some rotation movement at beginning.
#If you choose 0 or 1, you should write down the following matrix.
#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [-0.00646620000000000, -0.99994994000000004, -0.00763565000000000, 0.99997908999999996, -0.00646566000000000, -0.00009558000000000, 0.00004620000000000, -0.00763611000000000, 0.99997084000000003]
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.00533646000000000, -0.04302922000000000, 0.02303124000000000]

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 30            # min distance between two features
freq: 10                # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish the tracking image as a topic
equalize: 1             # if the image is too dark or too bright, turn on equalize to find enough features
fisheye: 0              # if using a fisheye lens, turn this on. A circle mask will be loaded to remove noisy edge points

#optimization parameters
max_solver_time: 0.04   # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate the parameters you provide, the better the performance
acc_n: 0.08          # accelerometer measurement noise standard deviation. #0.2 0.04
gyr_n: 0.004         # gyroscope measurement noise standard deviation. #0.05 0.004
acc_w: 0.00004       # accelerometer bias random walk noise standard deviation. #0.02
gyr_w: 2.0e-6        # gyroscope bias random walk noise standard deviation. #4.0e-5
g_norm: 9.81007      # gravity magnitude

#loop closure parameters
loop_closure: 1 # start loop closure
load_previous_pose_graph: 0 # load and reuse previous pose graph; load from 'pose_graph_save_path'
fast_relocalization: 0             # useful for real-time and large-scale projects
pose_graph_save_path: "/home/fish/ws_vins/src/VINS-Mono/config/output_path/"   # replace with your own path

#unsynchronization parameters
estimate_td: 0 # online estimate time offset between camera and imu
td: 0.0                            # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

#rolling shutter parameters
rolling_shutter: 0 # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0 # unit: s. rolling shutter read out time per frame (from data sheet).

#visualization parameters
save_image: 1                   # save images in the pose graph for visualization purposes; you can disable this by setting 0
visualize_imu_forward: 0        # output imu forward propagation to achieve low-latency and high-frequency results
visualize_camera_size: 0.4 # size of camera marker in RVIZ
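output_path and pose_graph_save_path above point to a directory that VINS-Mono writes its result and pose-graph files into; it is worth making sure the directory exists before running (adjust the path to your own workspace):

mkdir -p /home/fish/ws_vins/src/VINS-Mono/config/output_path/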

IV. Start the camera node and run VINS

1. Start the camera

cd <sdk>
source ./wrappers/ros/devel/setup.bash
roslaunch mynt_eye_ros_wrapper mynteye.launch

2. Start VINS

cd ~/ws_vins     # your VINS workspace
source devel/setup.bash
roslaunch vins_estimator mynteye.launch
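If everything came up correctly, the estimator should begin publishing odometry once initialization finishes (move the camera with some rotation to help it initialize). A quick check, assuming the default VINS-Mono node and topic names:

rostopic hz /vins_estimator/odometry    # estimated pose stream from the estimator
rostopic list | grep pose_graph         # pose graph topics should also be listed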

3. Start the visualization

cd ~/ws_vins     # your VINS workspace
source devel/setup.bash
roslaunch vins_estimator vins_rviz.launch
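Optionally, the camera and IMU topics can be recorded to a rosbag so the same sequence can be replayed into VINS-Mono offline later (topic names as configured in mynteye_config.yaml above):

rosbag record -O mynteye_vins.bag /mynteye/left/image_raw /mynteye/imu/data_raw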

Done.