Yifan Yuan, PhD, is an Assistant Professor and Master's Supervisor at the School of Artificial Intelligence, Shenzhen University. He received his Ph.D. in Computer Science and his Bachelor of Science in Physics from Fudan University in 2025 and 2019, respectively, under the supervision of Professor Junping Zhang.
His main research interests include AIGC, large models, trustworthy image and video editing, multimodal content understanding and generation, and emotional agents. His research focuses on controllability and consistency in generation and editing tasks, such as editing accuracy, semantic consistency, identity preservation, and hallucinations, biases, and safety risks in large models. He also explores practical applications of generative models in high-value vertical scenarios such as emotional companionship, virtual try-on, and digital human modeling.
Dr. Yuan has participated in multiple national and provincial-level research projects, including the National Natural Science Foundation of China, the National Key Research and Development Program of China, and projects funded by the Ministry of Education. He also serves as a reviewer for several international conferences and journals, including IEEE TIV, CVPR, ICML, AAAI, and ACM MM.
🎓 Education Background
2019.09 – 2025.03 Fudan University, Computer Science and Technology, PhD
2015.09 – 2019.06 Fudan University, Physics, Bachelor of Science
💡 Research Interests
AIGC, Multimodal Large Models, Image and Video Generation and Editing, Affective Agents, Virtual Try-On, 3D Generation, World Models, Embodied Intelligence
👭 Lab Introduction
Dr. Yuan currently leads the MUSE Lab (Multimodal Understanding and Synthesis for Explainability Lab). The lab focuses on cutting-edge areas such as AIGC, multimodal understanding and generation, large models, retrieval-augmented generation (RAG), personalized language modeling, and emotional agents, aiming to advance the research and practical application of intelligent agents that are controllable, explainable, and trustworthy.
The lab adopts a flexible and open research mechanism, encouraging students to explore at the intersection of research, engineering, and creativity. We welcome like-minded students to join us in building intelligent systems that are both warm and impactful.
📢 Admissions Information
The MUSE Lab recruits master's and undergraduate students year-round, targeting those interested in AIGC, large-scale models, image and video generation and editing, multimodal understanding and generation, emotional agents, virtual try-on, 3D generation, world models, and embodied intelligence.
💻 Project-Related Technology Stack
1. Deep Learning and Generative Model Frameworks: PyTorch, Diffusers, Transformers
2. Multimodal Understanding and Retrieval Enhancement: CLIP, BLIP, Qwen-VL, RAG
3. Parameter-Efficient and Instruction Fine-tuning: LoRA, PEFT, RLHF, Preference Alignment
4. Image and Video Generation and Editing: Stable Diffusion, ControlNet, ViT, DiT, AnimateDiff, SVD
5. 3D Generation and Virtual Try-On: NeRF, 3D Gaussian Splatting, Virtual-TryOn
6. World Models and Embodied Intelligence: DreamerV3, VLA, GenSim
7. Personalization and Emotional Agents: Persona LoRA, Memory-Augmented LLM
8. Prompt Engineering and Context Orchestration: LangChain, LlamaIndex
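To give a concrete flavor of item 3 above, here is a minimal, illustrative sketch of the LoRA idea in plain PyTorch. This is not the lab's code and `LoRALinear` is a hypothetical class name; in practice one would use the PEFT library to attach adapters to a pretrained model. The sketch shows the core mechanism: the base weight is frozen and only a low-rank update `B @ A` is trained.

```python
# Minimal LoRA (low-rank adaptation) sketch in PyTorch. The frozen base
# weight W is augmented with a trainable low-rank update B @ A, so only
# r * (d_in + d_out) parameters are trained instead of d_in * d_out.
# Illustrative only; real projects would use the PEFT library.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # freeze "pretrained" weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(64, 32, r=4)
y = layer(torch.randn(2, 64))
print(y.shape)  # torch.Size([2, 32])
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 4*64 + 32*4 = 384 trainable params vs 64*32+32 frozen
```

Because `B` is zero-initialized, the adapter starts as an identity correction, so fine-tuning begins exactly from the pretrained model's behavior.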
✅ We hope you:
1. Possess strong programming skills
2. Have a basic understanding of deep learning (familiar with CNNs, Transformers, and Diffusion Models)
3. Are interested in AIGC, large-scale models, and related fields
4. Hope to gain experience in algorithm and system development through real-world research projects
5. Have ideas, think critically, and possess strong logical and communication skills
🙌🏻 What will you gain?
1. A free research environment: I earned my PhD at 23, am young and energetic, am an ESFP (MBTI), humorous, meticulous, and responsible, and excel at supporting students comprehensively in learning, research, interests, and life. The research group has a relaxed and free atmosphere; my motto is "Happiness is paramount." I believe passion withstands the test of time, and genuine passion will help us produce impactful work, so I encourage and support you in finding a research direction that interests you.
2. One-on-one mentoring: A customized support path will be provided based on your development stage. If you are more senior, we will work together to hone your leadership and mentorship skills; if you are more junior, I will provide more hands-on technical training and research companionship. No matter what stage you are at, I hope to grow and enjoy life with you!
3. Aiming for high-quality research output: We encourage publication in Nature sub-journals and CCF Class A conferences and journals, and provide full guidance.
4. Emphasis on both research and application: You will have the opportunity to explore the real-world applications of large-scale models and we will help you connect with resources from companies such as Tencent, Alibaba, ByteDance, SenseTime, China Telecom, and China Mobile for implementation.
5. Diverse development: We encourage overseas academic exchange and cross-disciplinary collaboration. Besides conducting solid scientific research, we hope everyone will also explore life and see the world together, visit art exhibitions, concerts, and combine art and technology to create something cool! We look forward to having someone passionate about research and innovation join our team to jointly promote the development of intelligent vision and creative technologies!
📩 How to Join
We welcome students interested in exploring AIGC to join the MUSE Lab!
Please send your resume to yifanyuan@szu.edu.cn
Email subject format: “School – Name – Year – Master’s/Undergraduate – Research Interests”
Email body: please attach your resume and briefly describe your motivation for applying, your academic ranking, and your research experience. You are also welcome to mention your creativity, unique qualities, and strengths.
📝 Representative scientific research achievements
[1] Y. Yuan, G. Yang, J. Z. Wang, H. Zhang, H. Shan, F.-Y. Wang, and J. Zhang. Dissecting and Mitigating Semantic Discrepancy in Stable Diffusion for Image-to-Image Translation. IEEE/CAA Journal of Automatica Sinica, 2025, 12(4): 705-718. [DOI: 10.1109/JAS.2024.124800] SCI IF: 19.3, Top 1 (Q1)
[2] Y. Yuan*, S. Ma*, H. Shan, and J. Zhang. DO-FAM: Disentangled Non-Linear Latent Navigation for Facial Attribute Manipulation. IEEE International Conference on Acoustics, Speech and Signal Processing, 2023.
[3]Y. Yuan, S. Ma, and J. Zhang. VR-FAM: Variance-Reduced Encoder with Nonlinear Transformation for Facial Attribute Manipulation. IEEE International Conference on Acoustics, Speech and Signal Processing, 2022: 1755-1759.
[4] F. Xu, Y. Yuan, J. Zhang, and J. Z. Wang. Chapter 7: High-Speed Joint Learning of Action Units and Facial Expressions. Modeling Visual Aesthetics, Emotion, and Artistic Style. Berlin, Heidelberg: Springer Berlin Heidelberg, 2023.
[5] F. Xu, Y. Yuan, J. Zhang, and J. Z. Wang. Chapter 8: ExpressionFlow: A Microexpression Descriptor for Efficient Recognition. Modeling Visual Aesthetics, Emotion, and Artistic Style. Berlin, Heidelberg: Springer Berlin Heidelberg, 2023.
[6] G. Li, Y. Yuan, X. Ben, and J. Zhang. Spatiotemporal attention network for microexpression recognition. Journal of Image and Graphics, 2020, 25(11): 2380-2390. [DOI: 10.11834/jig.200325]
🙇🏻‍♀️ Representative Research Projects
[1] Shenzhen University Young Teachers' Research Start-up Fund, Research on Reliability Assessment and Controllable Editing Mechanism of Generative Image Editing, 2025.08-2028.08, Principal Investigator
[2] Shanghai 2024 "Science and Technology Innovation Action Plan" Popular Science Special Project, Production and Promotion of Long Videos for Popular Science on Artificial Intelligence for Teenagers, 2024.09.01-2025.08.31, Completed, Participant
[3] National Natural Science Foundation of China General Program, Research on Deep Learning Based on Graph Convolutional Neural Network and Decoupling Learning, 2022.01-2025.12, Completed, Participant
[4] Shanghai 2021 "Science and Technology Innovation Action Plan" Popular Science Special Project, Short Video Popular Science on the Frontier Progress and Application of Artificial Intelligence, 2021.06.01-2023.06.30, Completed, Participant
[5] Ministry of Education Project, Research on Planning of Human-Machine Collaborative Hybrid Enhanced Intelligent Algorithm, 2020.01-2021.12, Completed, Participant
[6] Horizontal Project, Shanghai Central Meteorological Observatory, Development Project of Heavy Precipitation Forecasting Technology Based on Machine Learning, 2020.03-2021.04, Completed, Participant
[7] National Key Research and Development Program (13th Five-Year Plan), Key Theories and Technologies of Human-Machine Collaboration Based on Teaching and Imitation Learning, Project No. 2018YFB1305104, 2019.06-2022.05, Completed, Participant
[8] Shanghai Science and Technology Innovation Action Plan, Automatic Correction Technology of Numerical Simulation Error Based on Big Data Deep Learning, Project No. 18DZ1200404, 2018.04-2021.03, Completed, Participant
[9] National Natural Science Foundation of China, Research on Multimodal and Multi-view Pedestrian Gait Recognition Based on Machine Learning, Project No. 61673118, 2017.01-2020.12, Completed, Participant