Hi, this is Songning Lai. (You can call me Sony.)
I received my BSc from the School of Information Science and Engineering (Chongxin College), Shandong University, China, where I was supervised by Prof. Zhi Liu. I am also an incoming PhD student (Fall 2025) at the AI Thrust & INFO Hub, HKUST(GZ), supervised by Prof. Yutao Yue.
My primary research interest lies in Trustworthy AI, encompassing the explainability, robustness, faithfulness, and safety of AI. In particular, I have worked extensively on Concept Bottleneck Models (CBMs) within explainability. My past research includes an investigation of the robustness and generalization of CBMs in unsupervised settings (ICLR 2024), the application of CBMs to multimodal unsupervised tasks (under review), pioneering work on continual learning with CBMs (under review), and the first exploration of CBMs in a security context, particularly backdoor attacks (two papers under review). My research has also extended to applying CBMs in medical fields (NeurIPS 2024; under review) and autonomous driving (ICRA 2025).
Beyond my work with CBMs, I have also explored robustness and faithfulness in time series (ICML 2025; under review), continual learning for time-series classification (under review), and image segmentation (under review). Prior to these efforts, my research focused on computer vision (Image and Vision Computing; ICASSP 2025), multimodal sentiment analysis (IJCNN 2024; Displays), and community detection (Neurocomputing).
Looking ahead, I am keen on pursuing joint PhD opportunities starting in Fall 2025 (with HKUST(GZ)). My future research aims to delve deeper into Trustworthy AI [ICLR24], particularly explainability, robustness, faithfulness, and safety, alongside applications in AI4Science [NeurIPS24], autonomous driving [ICRA25; IJCAI25], time series [ICML25], and Embodied AI.
If you are interested in any aspect of my work, I would love to chat and collaborate. Please email me at songninglai[at]hkust-gz[dot]edu[dot]cn.
🔥 News
- 04.2025: Our paper “IMTS is Worth Time X Channel Patches: Visual Masked Autoencoders for Irregular Multivariate Time Series Prediction” has been accepted by ICML 2025 (CCF A)!
- 04.2025: Our paper “Class Incremental Semantic Segmentation Based on Linear Closed-form Solution” has been accepted by CVPR 2025 workshop BASE!
- 04.2025: Our paper “Beyond Patterns: Harnessing Causal Logic for Autonomous Driving Trajectory Prediction” has been accepted by IJCAI 2025 (CCF A)!
- 02.2025: Our paper “Enhancing domain adaptation for plant diseases detection through Masked Image Consistency in Multi-Granularity Alignment” has been accepted by Expert Systems With Applications (JCR Q1, IF: 8.4, CCF C).
- 01.2025: Our paper “Dependable Robust Interpretable Visionary Ensemble Framework in Autonomous Driving” has been accepted by ICRA 2025 (CCF B)!
- 12.2024: Our paper “PEPL: Precision-Enhanced Pseudo-Labeling for Fine-Grained Image Classification in Semi-Supervised Learning” has been accepted at ICASSP 2025 (CCF B)!
- 09.2024: Our paper “Towards Multi-dimensional Explanation Alignment for Medical Classification” has been accepted by NeurIPS 2024 (CCF A)!
- 07.2024: Our paper on Time Series has been accepted by an IJCAI 2024 workshop.
- 06.2024: Our paper on Community Detection has been accepted by Neurocomputing (JCR Q1; CCF C).
- 03.2024: I was awarded Outstanding Graduate of Shandong Province and Outstanding Graduate of Shandong University.
- 03.2024: Our paper on Multimodal Sentiment Analysis has been accepted by IJCNN 2024 (CCF C).
- 01.2024: Our paper “Faithful Vision-Language Interpretation via Concept Bottleneck Models” has been accepted at The Twelfth International Conference on Learning Representations (ICLR 2024)!
- 10.2023: Our paper on Multimodal Sentiment Analysis has been accepted by the journal Displays (JCR Q1).
- 10.2023: Our paper on Computer Vision has been accepted by the journal Image and Vision Computing (JCR Q1; CCF C).
- 11.2022: Won the First Prize (National, top 0.6%) in the Contemporary Undergraduate Mathematical Contest in Modeling.
- 11.2022: I was glad to give an oral presentation at the international conference CISP-BMEI 2022 and won the Best Paper Award.
- 10.2022: Our paper on Bioinformatics has been accepted by CISP-BMEI 2022 (Tsinghua B).
📝 Publications (Selected)

Faithful Vision-Language Interpretation via Concept Bottleneck Models
Songning Lai, Lijie Hu, Junxiao Wang, Laure Berti and Di Wang
The Twelfth International Conference on Learning Representations (ICLR 2024). (CCF None)
We introduce the Faithful Vision-Language Concept (FVLC) model, addressing the instability of label-free Concept Bottleneck Models (CBMs). Our FVLC model demonstrates superior stability against input and concept set perturbations across four benchmark datasets, with minimal accuracy degradation compared to standard CBMs, offering a reliable solution for model interpretation.

Towards Multi-dimensional Explanation Alignment for Medical Classification
Lijie Hu†, Songning Lai†, Wenshuo Chen†, Hongru Xiao, Hongbin Lin, Lu Yu, Jingfeng Zhang, and Di Wang
The Conference on Neural Information Processing Systems (NeurIPS 2024). (CCF A)
- We propose an end-to-end framework called Med-MICN, which leverages the strengths of different XAI methods, such as concept-based models, neural-symbolic methods, saliency maps, and concept semantics.
- Its outputs are interpretable along multiple dimensions, including concept prediction, saliency maps, and concept reasoning rules, making it easier for experts to identify and correct errors.
- Med-MICN demonstrates superior performance and interpretability compared with other concept-based models and black-box baselines.

DRIVE: Dependable Robust Interpretable Visionary Ensemble Framework in Autonomous Driving
Songning Lai†, Tianlang Xue, Hongru Xiao, Lijie Hu, Jiemin Wu, Ruiqiang Xiao, Ninghui Feng, Haicheng Liao, Zhenning Yang, Yutao Yue~
IEEE International Conference on Robotics and Automation (ICRA 2025). (CCF B)
We introduce DRIVE, a framework designed to enhance the dependability and stability of explanations in end-to-end unsupervised autonomous driving models. By addressing instability issues, DRIVE improves trustworthiness through consistent and stable interpretations and outputs, as demonstrated by empirical evaluations. The framework also provides novel metrics for assessing the reliability of concept-based explainable autonomous driving systems, advancing their real-world deployment.

PEPL: Precision-Enhanced Pseudo-Labeling for Fine-Grained Image Classification in Semi-Supervised Learning
Bowen Tian†, Songning Lai†, Lujundong Li, Zhihao Shuai, Runwei Guan, Tian Wu, Yutao Yue~
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2025). (CCF B)
We introduce Precision-Enhanced Pseudo-Labeling (PEPL), a semi-supervised learning approach for fine-grained image classification that generates and refines pseudo-labels using Class Activation Maps (CAMs) to capture essential details, significantly improving accuracy and robustness over existing methods on benchmark datasets. The approach consists of initial and semantic-mixed pseudo-label generation phases to enhance the quality of labels and has been open-sourced for public use.

Songning Lai, Xifeng Hu, Jing Han, Chun Wang, Subhas Mukhopadhyay, Zhi Liu~ and Lan Ye~
2022 15th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI 2022). (Tsinghua B)
We developed a model that combines protein feature acquisition, F_Score feature selection, KNN cleaning, and SMOTE-based positive-sample synthesis with a Transformer-based BERT classifier, achieving an accuracy of up to 99.61% and an MCC of 99.1% and surpassing previous models. This work demonstrates significant potential for future applications.

Songning Lai†, Jiakang Li, Guinan Guo, Xifeng Hu, Yulong Li, Yuan Tan, Zichen Song, Yutong Liu, Zhaoxia Ren~, Chun Wang~, Danmin Miao~ and Zhi Liu~
International Joint Conference on Neural Networks (IJCNN 2024). (CCF C)
We propose a deep learning framework that captures shared information across modalities using a covariance matrix and introduces a self-supervised label generation module to extract modality-specific private information, enhancing multimodal sentiment analysis performance through multi-task learning. Extensive experiments on benchmark datasets demonstrate the model’s effectiveness in capturing subtle multimodal sentiments.

A Comprehensive Review of Community Detection in Graphs
Jiakang Li†, Songning Lai†, Zhihao Shuai, Yuan Tan, Yifan Jia, Mianyang Yu, Zichen Song, Xiaokang Peng, Ziyang Xu, Yongxin Ni, Haifeng Qiu, Jiayu Yang, Yutong Liu, Yonggang Lu~
Neurocomputing (JCR Q1, IF: 6.0, CCF C)
This review explores community detection in graphs, covering methods such as modularity-based, spectral clustering, probabilistic modelling, and deep learning, and introduces a new method, while comparing performances across datasets with and without ground truth. The review offers a comprehensive understanding of the current landscape in community detection techniques.

Multimodal Sentiment Analysis: A Survey
Songning Lai, Haoxuan Xu, Xifeng Hu, Zhaoxia Ren~ and Zhi Liu~
Displays (JCR Q1, IF: 4.3)
This review provides an overview of multimodal sentiment analysis, covering its definition, history, recent datasets, advanced models, challenges, and future prospects, offering guidance on promising research directions. The review aims to support researchers in developing more effective models in this rapidly evolving field.

Cross-domain car detection model with integrated convolutional block attention mechanism
Haoxuan Xu†, Songning Lai† and Yang Yang~
Image and Vision Computing (JCR Q1, IF: 4.7, CCF C)
We propose a Cross-Domain Car Detection Model with an integrated convolutional block Attention mechanism (CDCDMA), featuring a complete cross-domain detection framework, unpaired target-domain image generation that emphasizes car headlights, a GIOU loss function, and a two-headed CBAM, which improves detection performance by 40% over non-CDCDMA models on the SODA10M and BDD100K datasets. This model significantly enhances cross-domain car recognition, outperforming most existing advanced models.

Enhancing domain adaptation for plant diseases detection through Masked Image Consistency in Multi-Granularity Alignment
Guinan Guo, Songning Lai, Qingyang Wu, Yuntao Shou, and Wenxu Shi
Expert Systems With Applications (JCR Q1, IF: 8.4, CCF C)
This study proposes MIC-MGA, a deep learning approach combining Multi-Granularity Alignment and Masked Image Consistency, to enhance cross-domain generalization in plant disease detection. The method improves object detector architecture while integrating domain adaptation and style-augmented training to address feature distribution shifts across different environmental conditions. Experimental results demonstrate state-of-the-art performance in cross-domain detection tasks (highest mAP), showing significant potential for improving disease prevention and sustainable agriculture.
🎖 Honors and Awards
- NeurIPS 2024 Travel Award
- IEEE/EI (CISP-BMEI 2022) Best Paper Award
- First Prize (National, Top 0.6%) in the Contemporary Undergraduate Mathematical Contest in Modeling
- First Prize (National, Top 3%) in the MathorCup University Mathematical Modeling Challenge
- Second Prize in the National Undergraduate Electronic Design Contest (Shandong Province)
- Second Prize in the National Crypto-math Challenge (East China Competition)
- More than 40 university-level awards (covering academic competitions, social practice, innovation and entrepreneurship, sports, aesthetic education, volunteering, and scholarships) are not listed here.
- Outstanding Graduate of Shandong Province
- Outstanding Graduate of Shandong University
📖 Education
- Sep 2025 - Future: Hong Kong University of Science and Technology (Guangzhou) (Incoming AI PhD student, supervised by Prof. Yutao Yue)
- Apr 2024 - Sep 2025: HKUST(GZ) (Intern)
- Apr 2023 - Mar 2024: KAUST (Visiting Student)
- Sep 2020 - Jun 2024: Shandong University (BSc, EECS)
💻 Services
- Reviewer: ECAI 2024, Expert Systems with Applications, IJCNN 2024, ICML 2024, KDD 2024, ICLR 2025, ICASSP 2025, ICRA 2025, AISTATS 2025, CVPR 2025, ICML 2025, IJCAI 2025, WWW 2025, ICCV 2025, NeurIPS 2025
- Class monitor, Chongxin College, Shandong University (the class was awarded Shandong Provincial Excellent Class and Shandong University Top Ten Class)
- Outstanding Volunteer of Shandong University, with 130 hours of volunteer service in total