Research

Modern machine learning has achieved remarkable success, but it requires large amounts of high-quality data, large models, and substantial computational power for training, which also leads to a lack of reliability and interpretability in trained models. Our team aims to develop innovative models and algorithms for efficient, robust, and interpretable machine learning. In particular, we develop various tensor-based methods, leveraging tensor factorization and tensor networks, for efficient and robust representation learning as well as fast computation. We also conduct research on their theoretical analysis and on applications in computer vision and neuroscience.

Research Subjects:
  • Tensor factorization and tensor networks
  • Robust and interpretable machine learning
  • Real-world applications in computer vision and neuroscience

News

Feb 2024

Our team has one paper accepted by CVPR 2024.

Jan 2024

Our proposal for the international workshop TMME: Tensor Models for Machine lEarning - Empowering Efficiency, Interpretability, and Reliability, held in conjunction with IEEE CAI 2024, has been accepted.

Jan 2024

Our team has two papers accepted by ICLR 2024, one as a spotlight and one as a poster presentation.

Dec 2023

Our team has three papers accepted by AAAI 2024.

Sep 2023

Our team has two papers accepted by NeurIPS 2023.

Jul 2021

We organized a Special Issue in Frontiers in Physics, Tensor Network Approaches for Quantum Many-body Physics and Machine Learning.

For previous news, see the News List.


Our Videos on