RGB-T fusion tracking: papers, datasets & results

A list of papers, datasets (benchmarks) and results in RGB-T fusion tracking.

  • If you find this list useful, please consider giving it a star, thanks!
  • If you think some information is wrong, please feel free to contact me so I can correct it.
  • If some papers are missing and you would like to add them, please feel free to raise an issue or contact me.
  • Contact detail: xingchen.zhang@imperial.ac.uk

Papers

Review

  1. Xingchen Zhang, Ping Ye, Henry Leung, Ke Gong, Gang Xiao.
    "Object Fusion Tracking Based on Visible and Infrared Images: A Comprehensive Review". Information Fusion, Vol. 63, pp. 166-187, 2020. [paper]

2022

Journal

  1. Xiaohu Liu, Yichuang Luo, Keding Yan, Jianfei Chen, Zhiyong Lie.
    "CMC2R: Cross‐modal collaborative contextual representation for RGBT tracking". IET Image Processing, 2022.

  2. Bin Kang, Dong Liang, Junxi Mei, Xiaoyang Tan, Quan Zhou, Dengyin Zhang.
    "Robust RGB-T Tracking via Graph Attention-Based Bilinear Pooling". IEEE Transactions on Neural Networks and Learning Systems, 2022.

  3. Yadong Li, Huicheng Lai, Liejun Wang, Zhenhong Jia.
    "Multibranch Adaptive Fusion Network for RGBT Tracking". IEEE Sensors Journal, 2022.

  4. Yong Wang, Xian Wei, Xuan Tang, Jingjing Wu, Jiangxiong Fang.
    "Response map evaluation for RGBT tracking". Neural Computing and Applications, 2022.

  5. Longfeng Shen, Xiaoxiao Wang, Lei Liu, Bin Hou, Yulei Jian, Jin Tang, Bin Luo.
    "RGBT Tracking based on Cooperative Low-Rank Graph Model". Neurocomputing, 2022.

  6. Weidai Xia, Dongming Zhou, Jinde Cao, Yanyu Liu, Ruichao Hou.
    "CIRNet: An improved RGBT tracking via Cross-Modality Interaction and Re-Identification". Neurocomputing, 2022.

Conference

arXiv

  1. Zhangyong Tang, Tianyang Xu, Xiao-Jun Wu.
    "A Survey for Deep RGBT Tracking", 2022.

2021

Journal

  1. Qin Xu, Yiming Mei, Jinpei Liu, Chenglong Li
    "Multimodal Cross-Layer Bilinear Pooling for RGBT Tracking", IEEE Transactions on Multimedia, 2021. [Paper]

  2. Pengyu Zhang, Jie Zhao, Dong Wang, Huchuan Lu, Xiaoyun Yang
    "Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking", IEEE TIP, Vol. 30, 2021. [Paper]

  3. Yabin Zhu, Chenglong Li, Jin Tang, Bin Luo, Liang Wang.
    "RGBT Tracking by Trident Fusion Network", IEEE TCSVT, 2021.

  4. Tianlu Zhang, Xueru Liu, Qiang Zhang, Jungong Han.
    "SiamCDA: Complementarity- and distractor-aware RGB-T tracking based on Siamese network", IEEE TCSVT, 2021.

  5. Chenglong Li, Zhiqiang Xiang, Jin Tang, Bin Luo, Futian Wang.
    "RGBT Tracking via Noise-Robust Cross-Modal Ranking", IEEE TNNLS, 2021. [Paper]

  6. Chang Guo, Dedong Yang, Chang Li, Peng Song.
    "Dual Siamese network for RGBT tracking via fusing predicted position maps", The Visual Computer, 2021.

  7. Yong Wang, Xian Wei, Xuan Tang, Hao Shen, Huanlong Zhang.
    "Adaptive Fusion CNN Features for RGBT Object Tracking", IEEE Transactions on Intelligent Transpotation Systems, 2021. [Paper]

  8. Jiatian Mei, Dongming Zhou, Jinde Cao, Rencan Nie, Yanbu Guo.
    "HDINet: Hierarchical Dual-sensor Interation Network for RGBT Tracking", IEEE Sensors Journal, 2021.

  9. Pengyu Zhang, Dong Wang, Huchuan Lu, Xiaoyun Yang.
    "Learning Adaptive Attribute-Driven Representation for Real-Time RGB-T Tracking", International Journal of Computer Vision, 2021. [paper][Code]

arXiv

  1. Jingchao Peng, Haitao Zhao, Zhengwei Hu, Yi Zhuang, Bofan Zhang.
    "Siamese Infrared and Visible Light Fusion Network for RGB-T Tracking", arXiv, 2021. [Paper]

  2. Xiao Wang, Xiujun Shu, Shiliang Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, Feng Wu.
    "MFGNet: Dynamic Modality-Aware Filter Generation for RGB-T Tracking", arXiv, 2021. [Paper]

2020

Journal

  1. Xingchen Zhang, Ping Ye, Shengyun Peng, Jun Liu, Gang Xiao
    "DSiamMFT: An RGB-T fusion tracking method via dynamic Siamese networks using multi-layer feature fusion", Signal Processing: Image Communication, 2020. [paper]

  2. Xingchen Zhang, Ping Ye, Henry Leung, Ke Gong, Gang Xiao.
    "Object Fusion Tracking Based on Visible and Infrared Images: A Comprehensive Review". Information Fusion, 2020.[paper]

  3. Mingzheng Feng, Kechen Song, Yanyan Wang, Jie Liu, Yunhui Yan.
    "Learning Discriminative Update Adaptive Spatial-Temporal Regularized Correlation Filter for RGB-T Tracking", Journal of Visual Communication and Image Representation, Vol. 72, 2020. [paper]

  4. Hui Zhang, Lei Zhang, Li Zhuo, Jing Zhang.
    "Object tracking in RGB-T videos using modal-aware attention network and competitive learning". Sensors, 2020. [paper]

  5. Yabin Zhu, Chenglong Li, Jin Tang, Bin Luo.
    "Quality-aware Feature Aggregation Network for Robust RGBT Tracking". IEEE Transactions on Intelligent Vehicles, 2020. [paper]

  6. Xiangyuan Lan, Mang Ye, Shengping Zhang, Huiyu Zhou, Pong C. Yuen.
    "Modality-correlation-aware sparse representation for RGB-infrared object tracking".Pattern Recognition Letters, 2020. [paper]

Conference

  1. Chenglong Li, Lei Liu, Andong Lu, Qing Ji, and Jin Tang
    "Challenge-Aware RGBT Tracking". ECCV 2020. [paper]
  2. Chaoqun Wang, Chunyan Xu, Zhen Cui, Ling Zhou, Tong Zhang, Xiaoya Zhang, Jian Yang
    "Cross-Modal Pattern-Propagation for RGB-T Tracking".CVPR 2020.[paper]

arXiv

  1. Andong Lu, Cun Qian, Chenglong Li, Jin Tang, Liang Wang.
    "Duality-Gated Mutual Condition Network for RGBT tracking", arXiv, 2020. [Paper]

  2. Zhengzheng Tu, Chun Lin, Chenglong Li, Jin Tang, Bin Luo.
    "M5L: Multi-Modal Multi-Margin Metric Learning for RGBT Tracking". arXiv:2003.07650, 2020. [paper]

  3. Andong Lu, Chenglong Li, Yuqing Yan, Jin Tang, Bin Luo.
    "RGBT Tracking via Multi-Adapter Network with Hierarchical Divergence Loss". arXiv:2011.07189, 2020. [paper]

2019

Journal

  1. Xiao Yun, Yanjing Sun, Xuanxuan Yang, and Nannan Lu.
    "Discriminative fusion correlation learning for visible and infrared tracking." Mathematical Problems in Engineering, 2019. [paper]

  2. Bin Kang, Dong Liang, Wan Ding, Huiyu Zhou, and Wei-Ping Zhu.
    "Grayscale-Thermal Tracking via Inverse Sparse Representation-Based Collaborative Encoding." IEEE Transactions on Image Processing, 2019. [paper]

  3. Chengwei Luo, Bin Sun, Ke Yang, Taoran Lu, Wei-Chang Yeh.
    "Thermal infrared and visible sequences fusion tracking based on a hybrid tracking framework with adaptive weighting scheme." Infrared Physics & Technology, 2019. [paper]

  4. Xiangyuan Lan, Wei Zhang, Shengping Zhang, Deepak Kumar Jain, Huiyu Zhou
    "Robust Multi-modality Anchor Graph-based Label Prediction for RGB-Infrared Tracking", IEEE Transactions on Industrial Informatics, 2019. [paper]

  5. Xingchen Zhang, Ping Ye, Shengyun Peng, Jun Liu, Ke Gong, Gang Xiao
    "SiamFT: An RGB-infrared fusion tracking method via fully convolutional siamese networks." IEEE Access, 2019.[Paper][Code]

  6. Satbir Singh, Arun Khosla, and Rajiv Kapoor.
    "Object Tracking with a Novel Visual-Thermal Sensor Fusion Method in Template Matching." International Journal of Image, Graphics and Signal Processing, 2019. [paper]

  7. Chenglong Li, Xinyan Liang, Yijuan Lu, Nan Zhao, Jin Tang
    "RGB-T object tracking: Benchmark and baseline", Pattern Recognition, 2019. [paper]

  8. Can-Long Zhang, Yan-Ping Tang, Zhi-Xin Li, Zhi-Wen Wang.
    "Joint spatiograms for multi-modality tracking with online update", Pattern Recognition Letters, 2019. [paper]

  9. Xiangyuan Lan, Mang Ye, Rui Shao, Bineng Zhong, Deepak Kumar Jain, Huiyu Zhou.
    “Online non-negative multi-modality feature template learning for RGB-assisted infrared tracking”. IEEE Access, 2019. [paper]

  10. Sulan Zhai, Pengpeng Shao, Xinyan Liang, Xin Wang.
    "Fast RGB-T Tracking via Cross-Modal Correlation Filters." Neurocomputing, 2019. [paper]

  11. Xiangyuan Lan, Mang Ye, Rui Shao, Bineng Zhong, Pong C. Yuen, Huiyu Zhou.
    "Learning Modality-Consistency Feature Templates: A Robust RGB-Infrared Tracking System." IEEE Transactions on Industrial Electronics, 2019. [paper]

  12. Chenglong Li, Chengli Zhu, Jian Zhang, Bin Luo, Xiaohao Wu, and Jin Tang.
    "Learning Local-Global Multi-Graph Descriptors for RGB-T Object Tracking". IEEE Transactions on Circuits and Systems for Video Technology, 2019. [paper]

Conference

  1. Xingchen Zhang, Ping Ye, Dan Qiao, Junhao Zhao, Shengyun Peng, Gang Xiao
    "Object fusion tracking based on visible and infrared images using fully convolutional siamese networks." International Conference on Information Fusion (FUSION), 2019. [paper]

  2. Chenglong Li, Andong Lu, Aihua Zheng, Zhengzheng Tu, Jin Tang
    "Multi-adapter rgbt tracking." ICCVW, 2019. [paper]

  3. Matej Kristan et al.
    "The Seventh Visual Object Tracking VOT2019 Challenge Results", ICCVW, 2019. [paper]

  4. Yuan Gao, Chenglong Li, Yabin Zhu, Jin Tang, Tao He, Futian Wang
    "Deep adaptive fusion network for high performance rgbt tracking." ICCVW, 2019. [paper]

  5. Lichao Zhang, Martin Danelljan, Abel Gonzalez-Garcia, Joost van de Weijer, Fahad Shahbaz Khan
    "Multi-modal fusion for end-to-end rgb-t tracking." ICCVW. 2019. [paper]

  6. Stephane Vujasinovic, Stefan Becker, Norbert Scherer-Negenborn, and Michael Arens
    "Impact of Fused Visible-Infrared Video Streams on Visual Tracking." Iberian Conference on Pattern Recognition and Image Analysis. 2019. [paper]

  7. Yabin Zhu, Chenglong Li, Jin Tang, Bin Luo, Xiao Wang
    "Dense feature aggregation and pruning for rgbt tracking." ACM MM. 2019.[paper]

  8. Rui Yang, Yabin Zhu, Xiao Wang, Chenglong Li *, and Jin Tang.
    "Learning Target-oriented Dual Attention for Robust RGB-T Tracking". IEEE International Conference on Image Processing (ICIP), 2019. [paper]

2018

Journal

  1. Keyan Ren, Xiao Zhang, Yu Han, Yibin Hou.
    "Robust night target tracking via infrared and visible video fusion". Applications of Digital Image Processing, 2018. (asynchronous VI and IR videos)[paper]

  2. Chenglong Li, Xiaohao Wu, Nan Zhao, Xiaochun Cao, and Jin Tang.
    "Fusing Two-Stream Convolutional Neural Networks for RGB-T Object Tracking". Neurocomputing (NEUCOM), 281: 78-85, 2018.[paper]

  3. Chenglong Li, Chengli Zhu, Shaofei Zheng, Bin Luo, and Jin Tang.
    "Two-Stage Modality-Graphs Regularized Manifold Ranking for RGB-T Tracking". Signal Processing: Image Communication (SPIC), 68: 207-217, 2018. [paper]

  4. Meng Ding, Yao Yuheng, Li Wei, Yunfeng Cao.
    "Visual tracking using Locality-constrained Linear Coding and saliency map for visible light and infrared image sequences". Signal Processing: Image Communication, 2018.[paper]

Conference

  1. Xingming Zhang, Xuehan Zhang, Xuedan Du, Xiangming Zhou, Jun Yin.
    "Learning Multi-domain Convolutional Network for RGB-T Visual Tracking." CISP-BMEI, 2018. [paper]

  2. Chenglong Li, Chengli Zhu, Yan Huang, Jin Tang, Liang Wang.
    "Cross-Modal Ranking with Soft Consistency and Noisy Labels for Robust RGB-T Tracking." ECCV 2018.[paper]

  3. Xiangyuan Lan, Mang Ye, Shengping Zhang, Pong C. Yuen.
    "Robust Collaborative Discriminative Learning for RGB-Infrared Tracking". AAAI 2018.

  4. Yulong Wang, Chenglong Li, and Jin Tang.
    "Learning Soft-Consistent Correlation Filters for RGB-T Object Tracking". PRCV 2018. [paper]

  5. Ningwen Xu, Gang Xiao, Xingchen Zhang, Durga Prasad Bavirisetti.
    "Relative Object Tracking Algorithm Based on Convolutional Neural Network for Visible and Infrared Video Sequences". 4th International Conference on Virtual Reality, 2018. [paper]

  6. Ningwen Xu, Gang Xiao, Fang He, Xingchen Zhang, Durga Prasad Bavirisetti.
    "Object Tracking via Deep Multi-View Compressive Model for Visible and Infrared Sequences". Fusion 2018. [paper]

  7. Chengwei Luo, Bin Sun, Qiao Deng, Zihao Wang, Dengwei Wang.
    "Comparison of Different Level Fusion Schemes for Infrared-Visible Object Tracking: An Experimental Survey". 2018 2nd International Conference on Robotics and Automation Sciences. [paper]

2017

Journal

  1. Chenglong Li, Xiang Sun, Xiao Wang, Lei Zhang, and Jin Tang.
    "Grayscale-thermal Object Tracking via Multi-task Laplacian Sparse Representation". IEEE Transactions on Systems, Man, and Cybernetics: Systems (T-SMCS), 47(4): 673-681, 2017.[paper]

Conference

  1. Chenglong Li, Nan Zhao, Yijuan Lu, Chengli Zhu, and Jin Tang.
    "Weighted Sparse Representation Regularized Graph Learning for RGB-T Object Tracking". ACM International Conference on Multimedia (ACM MM), 2017.

2016

Journal

  1. Chenglong Li, Hui Cheng, Shiyi Hu, Xiaobai Liu, Jin Tang, Liang Lin.
    “Learning Collaborative Sparse Representation for Grayscale-Thermal Tracking”, IEEE Transactions on Image Processing, 25(12): 5743-5756, 2016. [paper]

  2. Xiao Yun, Zhongliang Jing, Bo Jin.
    "Visible and infrared tracking based on multi-view multi-kernel fusion model". Optical Review, 2016. [paper]

  3. Xiao Yun, Zhongliang Jing, Gang Xiao, Bo Jin, Canlong Zhang.
    "A compressive tracking based on time-space Kalman fusion model". Science China Information Sciences, 2016.[paper]

  4. Supriya Mangale, Madhuri Khambete.
    "Camouflaged target detection and tracking using thermal infrared and visible spectrum imaging". Advances in Intelligent Systems and Computing, 2016.[paper]

Conference

  1. Chenglong Li, Shiyi Hu, Sihan Gao, and Jin Tang.
    "Real-time Grayscale-thermal Tracking via Laplacian Sparse Representation". International Conference on Multimedia Modelling (MMM), Miami, 2016. [paper]

Before 2016

  • Erhan Gundogdu, Huseyin Ozkan, H. Seckin Demir, Hamza Ergezer, Erdem Akagündüz, S. Kubilay Pakin.
    "Comparison of infrared and visible imagery for object tracking: Toward trackers with superior IR performance". CVPR 2015. [paper]
  • Alex Lipchen Chan, Stephen R. Schnelle.
    "Fusing concurrent visible and infrared videos for improved tracking performance". Optical Engineering, 2013.[paper]
  • Huaping Liu, Fuchun Sun.
    "Fusion tracking in color and infrared images using joint sparse representation". Science China Information Sciences, 2012.[paper]
  • Alex Lipchen Chan, Stephen R. Schnelle.
    "Target tracking using concurrent visible and infrared imageries". SPIE, 2012.[paper]
  • Stephen R. Schnelle, Alex Lipchen Chan.
    "Enhanced target tracking through infrared-visible image fusion". Fusion 2011. [paper]
  • K. Senthil Kumar, G. Kavitha, R. Subramanian, G. Ramesh.
    "Visual and Thermal Image Fusion of UAV Based Target Tracking". IntechOpen, 2011. [paper]
  • Huaping Liu, Fuchun Sun.
    "Fusion tracking in color and infrared images using sequential belief propagation". IEEE International Conference on Robotics and Automation (ICRA), 2008. [paper]

Other papers

Multispectral person detection (may offer useful ideas for fusion tracking)

  • Daniel König, Michael Adam, Christian Jarvers, Georg Layher, Heiko Neumann, and Michael Teutsch.
    "Fully Convolutional Region Proposal Networks for Multispectral Person Detection". CVPR 2017.

Datasets and benchmark

GTOT

  • Paper: Chenglong Li, Hui Cheng, Shiyi Hu, Xiaobai Liu, Jin Tang, Liang Lin.
    “Learning Collaborative Sparse Representation for Grayscale-Thermal Tracking”, IEEE Transactions on Image Processing (T-IP), 25(12): 5743-5756, 2016. [paper]
  • Download Link [Google drive] [Baidu Cloud]

RGBT210

  • Paper: Chenglong Li, Nan Zhao, Yijuan Lu, Chenglin Zhu, Jin Tang.
    “Weighted Sparse Representation Regularized Graph Learning for RGB-T Object Tracking”, ACM International Conference on Multimedia (ACM MM), 2017. [paper]
  • Download Link [Google drive] [Baidu Cloud]

RGBT234

  • Paper: Chenglong Li, Xinyan Liang, Yijuan Lu, Nan Zhao, and Jin Tang.
    "RGB-T Object Tracking: Benchmark and Baseline", Pattern Recognition, 2019. [paper] [project]
  • Download Link [Google drive]

LasHeR

  • Paper: Chenglong Li, Wanlin Xue, Yaqing Jia, Zhichen Qu, Bin Luo, Jin Tang. "LasHeR: A Large-scale High-diversity Benchmark for RGBT Tracking", IEEE Transactions on Image Processing, 2021.
  • Download Link [GitHub]
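
The benchmarks above generally ship each sequence as paired visible and infrared frame folders plus per-frame bounding-box annotations, although the exact file names and annotation formats differ between datasets. The sketch below shows one way such a sequence could be iterated in Python; the visible/ and infrared/ sub-folder names and the groundTruth.txt file are hypothetical placeholders for illustration, not the official layout of any particular benchmark.

    import os
    from typing import Iterator, Tuple

    import cv2  # requires opencv-python
    import numpy as np

    def iter_rgbt_sequence(seq_dir: str) -> Iterator[Tuple[np.ndarray, np.ndarray, np.ndarray]]:
        """Yield (visible_frame, infrared_frame, gt_box) triples for one sequence.

        Assumed (hypothetical) layout:
            seq_dir/visible/*.jpg, seq_dir/infrared/*.jpg, seq_dir/groundTruth.txt
        with one 'x,y,w,h' ground-truth line per frame; real datasets may differ.
        """
        vis_files = sorted(os.listdir(os.path.join(seq_dir, "visible")))
        ir_files = sorted(os.listdir(os.path.join(seq_dir, "infrared")))
        gt_boxes = np.loadtxt(os.path.join(seq_dir, "groundTruth.txt"), delimiter=",")
        for vis_name, ir_name, box in zip(vis_files, ir_files, gt_boxes):
            # Read the two modalities; thermal frames are often stored as ordinary images too.
            vis = cv2.imread(os.path.join(seq_dir, "visible", vis_name))
            ir = cv2.imread(os.path.join(seq_dir, "infrared", ir_name))
            yield vis, ir, box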

Results
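
PR and SR in the tables below are the standard RGB-T tracking metrics, reported as percentages: PR (precision rate) is the fraction of frames whose predicted box centre lies within a distance threshold of the ground-truth centre (commonly 5 pixels on GTOT because its targets are small, and 20 pixels on RGBT210/RGBT234), while SR (success rate) is the area under the success plot obtained by sweeping an overlap (IoU) threshold from 0 to 1. The sketch below is a minimal, unofficial illustration of both metrics; the [x, y, w, h] box convention and the default thresholds are assumptions, so rely on each benchmark's official toolkit for reported numbers.

    import numpy as np

    def center_error(pred, gt):
        """Euclidean distance between box centres; boxes are rows of [x, y, w, h]."""
        pc = pred[:, :2] + pred[:, 2:] / 2
        gc = gt[:, :2] + gt[:, 2:] / 2
        return np.linalg.norm(pc - gc, axis=1)

    def iou(pred, gt):
        """Intersection-over-union of axis-aligned boxes given as [x, y, w, h]."""
        x1 = np.maximum(pred[:, 0], gt[:, 0])
        y1 = np.maximum(pred[:, 1], gt[:, 1])
        x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
        y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
        return inter / np.maximum(union, 1e-12)

    def precision_rate(pred, gt, threshold=20.0):
        """PR: share of frames with centre error below the threshold (5 px is typical for GTOT)."""
        return float(np.mean(center_error(pred, gt) <= threshold))

    def success_rate(pred, gt):
        """SR: area under the success curve over IoU thresholds swept from 0 to 1."""
        thresholds = np.linspace(0.0, 1.0, 21)
        overlaps = iou(pred, gt)
        return float(np.mean([np.mean(overlaps >= t) for t in thresholds]))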

GTOT-Results

Name       | PR   | SR   | Year | Author     |  Type    | FPS |
FANet      | 88.5 | 69.8 | 2018 | Li et al.  | DL-based | 1.3 |
SCCF       | 85   | 68.1 | 2018 | Li et al.  | CF-based | 50  |
LGMG       | 82.9 | 65.5 | 2018 | Li et al.  |          |  7  |
Cross-modal| 82.7 | 64.3 | 2018 | Li et al.  |          |  8  |
Fast RGB-T | 77   | 63.2 | 2019 | Zhai et al.| CF-based | 227 |
Weighted   | 85.12| 62.8 | 2017 | Li et al.  |          |  5  |
Fusing two | 85.2 | 62.6 | 2018 | Li et al.  | DL-based | 15  |
Two stage  | 84.2 | 62.2 | 2018 | Li et al.  |          | 7   |
CSR        | 75   | 62   | 2016 | Li et al.  |          |     |

RGBT210-Results

Name       | PR   | SR   | Year | Author     |  Type    | FPS |
LGMG       | 71.1 | 46.8 | 2018 | Li et al.  |          |  7  |
Cross-modal| 69.4 | 46.3 | 2018 | Li et al.  |          |  8  | 
SiamFT     | 65.0 | 44.3 | 2019 |Zhang et al.| DL-based | 25+ |
Weighted   | 67.5 | 43.0 | 2017 | Li et al.  |          |  5  |  
Fast RGB-T | 52.9 | 36.6 | 2019 | Zhai et al.| CF-based | 227 |

RGBT234-Results

Name         | PR   | SR   | Year |Author       |  Type    | FPS |
DAPNet       | 76.6 | 53.7 | 2019 | Li et al.   | DL-based |
FANet        | 76.4 | 53.2 | 2018 | Li et al.   | DL-based | 1.3 |
SGT          | 72.0 | 47.2 | 2018 | Li et al.   |          | 
SiamFT       | 65.9 | 44.8 | 2019 | Zhang et al.| DL-based | 25+ |
SiamFC_RGT   | 61.0 | 42.8 | 2019 | Zhang et al.| DL-based |     |  
Multi-domain | 61.7 | 38.7 | 2018 | Zhang et al.| DL-based |     |

VOT-RGBT Challenge

VOT-RGBT-2020

VOT-RGBT-2019