Title: 服务机器人视觉目标感知研究 (Research on Visual Target Perception of Service Robot)
Author: Zhang Qiang (张强)
Degree: Doctoral
Defense Date: 2017-11-28
Degree-Granting Institution: Shenyang Institute of Automation, Chinese Academy of Sciences
Degree-Granting Place: Shenyang
Supervisors: Qu Daokui; Xu Fang
Keywords: service robot; mismatch elimination; multiple object instances; grasp detection; hand-eye calibration
Alternative Title: Research on Visual Target Perception of Service Robot
Degree Discipline: Mechatronic Engineering
Abstract (Chinese, translated): Supported by the National Key Technology R&D Program project "Development and Application Demonstration of an Embedded Real-Time Operating System for Domestic Robots", this dissertation presents an in-depth study of visual target perception for service robots, addressing three application needs: detection of specific targets, detection of feasible object grasp positions, and hand-eye calibration. The main research contents and results are as follows:

(1) Object instance detection for service robots. To address the false matches that arise in SIFT feature matching during instance detection, a mismatch-elimination algorithm constrained by the distribution region of matched features is proposed. In the offline stage, features are extracted from the reference image and the offset vector between each feature point and a reference point in the target image is computed. In the online stage, nearest-neighbor feature matching is performed and the coordinates of matched feature points are mapped toward their respective reference points; by estimating the probability density of the mapped results, the center of the valid region and the dominant scale ratio between images are computed, which determines the distribution region of valid mapped feature points and thereby eliminates false matches. Validation on public image benchmarks shows that the region-constrained SIFT mismatch-elimination algorithm adapts well to scale change, rotation, and viewpoint change.

(2) Vision-based multiple-object-instance detection for service robots. A multi-instance detection framework based on dual-layer probability density estimation is proposed, built on SIFT local scale-invariant feature matching and key-point position mapping. The first layer of density estimation estimates the dominant scale ratio between images, which provides reference clustering parameters for the second layer. After features of the object template image are re-extracted and re-matched, a cascaded mismatch-elimination algorithm removes false matches. In the second layer, adaptive clustering parameters are obtained by multiplying the reference clustering parameters by an empirical coefficient determined experimentally; density estimation with the adaptive parameters then finds all candidate object instances. Experiments show that the proposed multi-instance detection method achieves high robustness and detection efficiency.

(3) Detection of feasible grasp positions on visual targets for service robots. A deep-learning-based robust regression method for feasible grasp positions is proposed. For the fusion of heterogeneous data, a deep convolutional neural network that deeply fuses color image data with depth image data is designed. In addition, to avoid premature convergence during training, the Welsch function is introduced into the design of the loss function. Experimental results verify the effectiveness of the proposed algorithm.

(4) Finally, the visual target perception methods are integrated into an application, and the hand-eye calibration problem of service robots is studied. To address the sensitivity of hand-eye calibration to gross errors in the data, a weighted least-squares robust estimation method based on error-distribution estimation is proposed to improve the accuracy of the calibration parameters. First, the hand-eye calibration parameters are computed by ordinary least squares; the data reconstruction error is then computed; weights for the corresponding coordinate data are initialized from the distribution probability of the error values; finally, the hand-eye calibration matrix is recomputed by weighted least squares. An iterative estimation strategy is introduced to further improve accuracy. The designed hand-eye calibration experiments show that the proposed algorithm maintains high calibration accuracy in the presence of gross data errors and is suitable for integrated visual grasping applications on service robots.
Abstract (English): Supported by the National Key Technology R&D Program project "Development and application demonstration of an embedded real-time operating system for domestic robots", visual target perception for service robots has been intensively studied to meet the requirements of specific target detection, robot grasp detection, and hand-eye calibration. The main research contents and results are as follows:

(1) To solve the problem of false point matches produced by SIFT, a region-restriction-based method is proposed. In the off-line procedure, SIFT features are extracted and the vector between each feature point and the reference point of the training image is computed. During on-line detection, SIFT feature matching is employed first, and each matched key point is mapped to its reference point. The expected region center is derived by density estimation, and the region radius is determined according to the dominant scale ratio; matches whose mapped points fall outside the region are considered invalid. Experimental results on benchmark images show the robustness of the proposed method under scale, orientation, and viewpoint change.

(2) A dual-layer density-estimation-based architecture is introduced for multiple-object-instance detection in robot inventory-management applications. The approach consists of SIFT feature matching and key-point projection. The dominant scale ratio and a reference clustering threshold are estimated using the first layer of density estimation. A cascade of filters is applied after feature-template reconstruction and refined feature matching to eliminate false matches. Before the second layer of density estimation, the adaptive threshold is finalized by multiplying the reference value by an empirical coefficient identified experimentally. Adaptive-threshold-based grid voting is applied to find all candidate object instances, and false detections are eliminated by a final geometric verification with RANdom SAmple Consensus (RANSAC). Experimental results demonstrate that the approach provides high robustness and low latency for the inventory-management application.

(3) Accurate grasp detection for model-free objects plays an important role in robotics. A convolutional neural network (CNN) based approach combining regression and classification is proposed. In the CNN model, the colour and depth modalities are deeply fused to achieve accurate feature expression. Additionally, the Welsch function is introduced to enhance the robustness of the training process. Experimental results demonstrate the superiority of the proposed method.

(4) Accurate hand-eye calibration is a significant task in robotics. An error-distribution-estimation-based weighted least-squares model is introduced for robot hand-eye calibration. First, the transformation matrix is computed by traditional least-squares estimation; the error distribution is then estimated, and the data are weighted according to the density estimation; a refined result is obtained from the weighted data. Finally, an iteration scheme is proposed to further improve the calibration accuracy. A designed experiment demonstrates the robustness of the proposed approach.
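The hand-eye calibration scheme in point (4) — ordinary least squares, per-datum reconstruction error, error-distribution-based weights, weighted re-solve, iterated — can be sketched on a generic overdetermined linear system. The Gaussian weight function, the iteration count, and the use of a plain linear model instead of the hand-eye transformation matrices are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def reweighted_lstsq(A, b, iters=5, eps=1e-9):
    """Weighted least squares with error-distribution-based reweighting.

    1. Solve A x = b by ordinary least squares.
    2. Compute the per-row reconstruction error.
    3. Weight each row by how probable its error is (a Gaussian on the
       residual here), so rows with gross errors get weights near zero.
    4. Re-solve with the weighted rows, and iterate.
    """
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    for _ in range(iters):
        r = A @ x - b                        # reconstruction error per row
        sigma = np.std(r) + eps              # scale of the error distribution
        sw = np.sqrt(np.exp(-(r / sigma) ** 2))
        x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

# One grossly corrupted measurement barely disturbs the estimate.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 2))
x_true = np.array([2.0, -1.0])
b = A @ x_true
b[0] += 50.0                                 # a single gross outlier
x_robust = reweighted_lstsq(A, b)            # close to [2, -1]
```

An ordinary least-squares solve on the same data is pulled noticeably off `x_true` by the corrupted row; the reweighted solve assigns that row a near-zero weight and recovers the clean solution.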
Language: Chinese
Property Rights Order: 1
Pages: 105
Document Type: Dissertation
Source URL: [http://ir.sia.cn/handle/173321/21269]
Collection: Shenyang Institute of Automation / Other
Recommended Citation
GB/T 7714
Zhang Qiang. Research on Visual Target Perception of Service Robot [D]. Shenyang: Shenyang Institute of Automation, Chinese Academy of Sciences, 2017.
