Chengxu Liu

I am a final-year Ph.D. student at Xi'an Jiaotong University (XJTU), advised by Xueming Qian. I was a visiting student at the Vision and Learning Lab, University of California, Merced, supervised by Ming-Hsuan Yang, and I was fortunate to intern at Megvii Research and Microsoft Research Asia (MSRA). I received my B.S. degree from the School of Information Engineering, XJTU, in 2019.

I will join XJTU as an assistant professor in the fall of 2025.

Email  /  CV  /  Google Scholar  /  LinkedIn  /  Github

profile photo

News
  • [2025/01] Two papers accepted by IEEE TMM.
  • [2024/07] One paper accepted by IEEE TMM.
  • [2024/07] Two papers accepted by ECCV 2024.
  • [2024/02] One paper accepted by CVPR 2024.
  • [2024/02] One paper accepted by IEEE TMM.
  • [2024/01] One paper accepted by AAAI 2024.
  • [2023/08] One paper accepted by IEEE TIP.
  • [2023/07] Two papers accepted by ICCV 2023.
  • [2023/06] One paper accepted by IEEE TIP.
  • [2023/05] One paper accepted by IEEE TCSVT.
  • [2022/06] One paper accepted by IEEE TNNLS.
  • [2022/03] One oral paper accepted by CVPR 2022.
  • [2021/08] One paper accepted by IEEE TCSVT.

Research

    I'm interested in image/video restoration, detection in low-quality/degraded images, and fine-grained recognition. Much of my research is about low-level vision. Representative papers are highlighted. (* indicates equal contribution)

    MISCFilter Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring
    Chengxu Liu, Xuan Wang, Xiangyu Xu, Ruhao Tian, Shuai Li, Xueming Qian, Ming-Hsuan Yang
    CVPR, 2024
    [ArXiv] [Code] [Supp]

    We introduce a new perspective that handles motion blur in image space rather than in the feature domain, and propose a novel motion-adaptive separable collaborative (MISC) filter.

    SDPDet SDPDet: Learning Scale-Separated Dynamic Proposals for End-to-End Drone-View Detection
    Nengzhong Yin*, Chengxu Liu*, Ruhao Tian, Xueming Qian
    IEEE TMM, 2024
    [PDF] [Code]

    We propose a novel one-step detector, called SDPDet, to enable effective object learning in drone-view images.

    DDRNet Decoupling Degradations with Recurrent Network for Video Restoration in Under-Display Camera
    Chengxu Liu, Xuan Wang, Yuanting Fan, Shuai Li, Xueming Qian
    AAAI, 2024
    [PDF] [ArXiv] [Code]

    We propose DDRNet, a novel network with long- and short-term video representation learning that decouples video degradations for the UDC video restoration task; it is the first work to address UDC video degradation.

    Split-Check Split-Check: Boosting Product Recognition via Instance-Level Retrieval
    Chengxu Liu, Zongyang Da, Yuanzhi Liang, Yao Xue, Guoshuai Zhao, Xueming Qian
    IEEE TII, 2023

    We propose a product recognition approach based on intelligent UVMs, called Split-Check, which first splits the regions of interest of products by detection and then checks each product by instance-level retrieval.

    FSI FSI: Frequency and Spatial Interactive Learning for Image Restoration in Under-Display Cameras
    Chengxu Liu, Xuan Wang, Shuai Li, Yuzhi Wang, Xueming Qian
    ICCV, 2023
    [PDF] [ArXiv] [Code] [Supp]

    We introduce a new perspective for handling various diffraction artifacts in UDC images by jointly exploring feature restoration in the frequency and spatial domains, and present a Frequency and Spatial Interactive Learning Network (FSI).

    CSDA CSDA: Learning Category-Scale Joint Feature for Domain Adaptive Object Detection
    Changlong Gao*, Chengxu Liu*, Yujie Dun, Xueming Qian
    ICCV, 2023
    [PDF] [Code]

    For better category-level feature alignment, we propose CSDA, a novel DAOD framework that jointly exploits category and scale information; this design enables effective alignment for objects of different scales.

    TTVFI TTVFI: Learning Trajectory-Aware Transformer for Video Frame Interpolation
    Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian
    IEEE TIP, 2023
    [PDF] [arXiv] [Code]

    We propose a novel Trajectory-aware Transformer for Video Frame Interpolation (TTVFI), which formulates the warped features with inconsistent motions as query tokens, and formulates relevant regions along motion trajectories from two original consecutive frames into keys and values.

    4DLUT 4D LUT: Learnable Context-Aware 4D Lookup Table for Image Enhancement
    Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian
    IEEE TIP, 2023
    [PDF] [arXiv] [Code]

    We propose a novel learnable context-aware 4-dimensional lookup table (4D LUT), which achieves content-dependent enhancement of different contents in each image via adaptive learning of the photo context.

    AJENet AJENet: Adaptive Joints Enhancement Network for Abnormal Behavior Detection in Office Scenario
    Chengxu Liu, Yaru Zhang, Yao Xue, Xueming Qian
    IEEE TCSVT, 2023
    [PDF]

    We focus on human joints and take a step further to enable effective learning of behavior characteristics in office scenarios. In particular, we propose a novel Adaptive Joints Enhancement Network (AJENet).

    ClassAD Anomaly Detection Framework for Unmanned Vending Machines
    Zongyang Da, Yujie Dun, Chengxu Liu, Yuanzhi Liang, Yao Xue, Xueming Qian
    KBS, 2023

    We propose an unmanned retail anomaly detection method based on deep convolutional neural networks (CNNs) called the complexity-classification anomaly detection (ClassAD) framework.

    PRUVM Product Recognition for Unmanned Vending Machines
    Chengxu Liu, Zongyang Da, Yuanzhi Liang, Yao Xue, Guoshuai Zhao, Xueming Qian
    IEEE TNNLS, 2022

    We propose a method for large-scale-category product recognition based on intelligent UVMs. The highlights of our method are mining potential similarities between large-scale product categories and optimizing through hierarchical multi-granularity labels.

    TTVSR Learning Trajectory-Aware Transformer for Video Super-Resolution
    Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian
    CVPR (Oral presentation), 2022
    [PDF] [arXiv] [Code] [Supp]

    We propose a novel Trajectory-aware Transformer for Video Super-Resolution (TTVSR), which formulates video frames into several pre-aligned trajectories consisting of continuous visual tokens. For a query token, self-attention is learned only on relevant visual tokens along spatio-temporal trajectories.

    AFN Food and Ingredient Joint Learning for Fine-Grained Recognition
    Chengxu Liu, Yuanzhi Liang, Yao Xue, Xueming Qian, Jianlong Fu
    IEEE TCSVT, 2021

    We propose an Attention Fusion Network (AFN) and a Food-Ingredient Joint Learning module for fine-grained recognition of food and ingredients.


Honors and Awards
  • [2025/01] Young Elite Scientists Sponsorship Program by CAST - Doctoral Student Track.
  • [2024/12] CMCC Scholarship, China.
  • [2024/12] National Scholarship, Ministry of Education of China.
  • [2024/06] Gold Award, 12th "Challenge Cup" National College Student Business Plan Competition.
  • [2024/01] Baidu Scholarship Top 20 in the World.
  • [2023/12] Huawei Scholarship, China.
  • [2023/12] The postgraduate "Academic Star" of Xi'an Jiaotong University (10 per year).
  • [2023/10] Principal Scholarship of Postgraduates, Xi'an Jiaotong University.
  • [2022/12] Pacesetter for Postgraduate Student (The highest honor in XJTU, 16 Ph.D. per year), Xi'an Jiaotong University.
  • [2022/12] National Scholarship, Ministry of Education of China.
  • [2022/08] National Most Commercially Valuable Award, Postgraduate Electronics Design Contest, Chinese Institute of Electronics (8/5700+, 0.14% award rate).
  • [2022/08] National 1st Prize, Postgraduate Electronics Design Contest, Chinese Institute of Electronics (32/5700+, 0.56% award rate).
  • [2022/05] Stars of Tomorrow Internship Program, Microsoft Research Asia.

Reviewer
  • Conference: SIGGRAPH 2024; CVPR 2022, 2023, 2024, 2025; ICCV 2023; ECCV 2022, 2024; NeurIPS 2024; ICLR 2025; ICML 2025.
  • Journal: IEEE TPAMI, IJCV, IEEE TIP, IEEE TMM, IEEE TNNLS, IEEE TCSVT.


Based on Jon Barron's website.