Liwei Wang
I am an Assistant Professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong (CUHK). Before coming to Hong Kong, I worked for more than two years as a Senior Researcher at Tencent AI Lab in Bellevue, USA.
I received my PhD from the Department of Computer Science at the University of Illinois at Urbana-Champaign, advised by Prof. Svetlana Lazebnik. Here is my Short Bio.
The Language and Vision (LaVi) Lab, which I founded in the Department of Computer Science and Engineering at CUHK, conducts research in Natural Language Processing (NLP) and Computer Vision, with a particular emphasis on the intersection of vision and language.
Our work encompasses a range of topics including Language+Vision, Large Language Models, Multi-modal Large Models, and Embodied AI.
If you would like to join the LaVi Lab, please send an email to lwwang@cse.cuhk.edu.hk.
Email /
Google Scholar /
Publications /
Lab website (soon)
News
- 2024/09:
We are hiring Researchers, Interns, Postdocs, and PhD students to work on Embodied AI, Vision+Language, LLMs, multi-modal LLMs, and diffusion models.
- 2024/09:
Will serve as an Area Chair of CVPR 2025.
- 2024/05:
Will serve as an Area Chair of NeurIPS 2024.
- 2023/10:
Our work on preference models for LLMs has been accepted to EMNLP 2023!
- 2023/10:
Our CLEVA, the comprehensive Chinese Language Model Evaluation Platform, has been accepted to EMNLP 2023 System Demonstrations!
- 2023/07:
I was invited to give a talk at the IAS Workshop on Mathematical Theory for Emergent Intelligence.
- 2023/07:
Our work on Vision-Language Parameter-Efficient Tuning has been accepted to ICCV 2023.
- 2023/07:
Our work on reasoning with LLMs has been accepted to ACL 2023.
- 2023/06:
Will serve as an Area Chair of CVPR 2024.
- 2022/10:
LaVi's two EMNLP 2022 long papers on dialogue research have been accepted!
- 2022/10:
Serving as an Area Chair of CVPR 2023.
- 2022/07:
Our LaVi team won the annual NLP challenge LIC 2022 (Multi-modal Video Understanding track), hosted by CCF and CIPS; check the department news here.
- 2022/07:
One ECCV paper has been accepted.
- 2022/03:
Three papers have been accepted to CVPR 2022, including our new work on Language + Vision.
- 2022/02:
One ACL 2022 long paper from our group on "Probing Pre-trained Models" has been accepted.
- 2022/02:
Serving as an Area Chair of ECCV 2022.
Publications
Towards Learning a Generalist Model for Embodied Navigation
Duo Zheng*, Shijia Huang*, Lin Zhao, Yiwu Zhong, Liwei Wang CVPR 2024 (Poster Highlight)     Code
Making Long-Context Language Models Better Multi-Hop Reasoners
Yanyang Li*, Shuo Liang*, Michael R. Lyu, Liwei Wang ACL 2024     Code
Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models
Yiwu Zhong*, Ziyuan Hu*, Michael R. Lyu, Liwei Wang EMNLP 2024     Code
Enhancing Temporal Modeling of Video LLMs via Time Gating
Zi-Yuan Hu, Yiwu Zhong, Shijia Huang, Michael R. Lyu, Liwei Wang EMNLP 2024 Findings     Code
Learning Preference Model for LLMs via Automatic Preference Data Generation
Shijia Huang*, Jianqiao Zhao*, Yanyang Li*, Liwei Wang EMNLP 2023 Long Paper
CLEVA: Chinese Language Models EVAluation Platform
Yanyang Li, Jianqiao Zhao, Duo Zheng, Zi-Yuan Hu, Zhi Chen, Xiaohui Su, Yongfeng Huang, Shijia Huang, Dahua Lin, Michael R. Lyu, Liwei Wang EMNLP 2023 System Demonstration     Project
VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control
Ziyuan Hu*, Yanyang Li*, Michael R. Lyu, Liwei Wang ICCV 2023     Code
Multi-View Transformer for 3D Visual Grounding
Shijia Huang*, Yilun Chen, Jiaya Jia, Liwei Wang CVPR 2022     Code
Eliciting Knowledge from Large Pre-Trained Models for Unsupervised Knowledge-Grounded Conversation
Yanyang Li*, Jianqiao Zhao*, Michael R. Lyu, Liwei Wang EMNLP 2022 Long Paper     Code
FlowEval: A Consensus-Based Dialogue Evaluation Framework Using Segment Act Flows
Jianqiao Zhao*, Yanyang Li*, Wanyu Du*, Yangfeng Ji, Dong Yu, Michael R. Lyu, Liwei Wang EMNLP 2022 Long Paper     Code and Dataset
Probing Structured Pruning on Multilingual Pre-trained Models: Settings, Algorithms, and Efficiency
Yanyang Li*, Fuli Luo, Runxin Xu, Songfang Huang, Fei Huang, Liwei Wang ACL 2022 Long Paper
SAT: 2D Semantics Assisted Training for 3D Visual Grounding
Zhengyuan Yang, Songyang Zhang, Liwei Wang, Jiebo Luo ICCV 2021 (Oral Presentation)     Code
Improving Weakly Supervised Visual Grounding by Contrastive Knowledge Distillation
Liwei Wang, Jing Huang, Yin Li, Kun Xu, Zhengyuan Yang, Dong Yu CVPR 2021     Code
Robust Dialogue Utterance Rewriting as Sequence Tagging
Jie Hao, Linfeng Song, Liwei Wang, Kun Xu, Zhaopeng Tu, Dong Yu EMNLP 2021     Code
Comprehensive Image Captioning via Scene Graph Decomposition
Yiwu Zhong*, Liwei Wang, Jianshu Chen, Dong Yu, Yin Li ECCV 2020     Code
Improving One-stage Visual Grounding by Recursive Sub-query Construction
Zhengyuan Yang, Tianlang Chen, Liwei Wang, Jiebo Luo ECCV 2020     Code
MART: Memory-Augmented Recurrent Transformer for Coherent Video Paragraph Captioning
Jie Lei*, Liwei Wang, Yelong Shen, Dong Yu, Tamara Berg, Mohit Bansal ACL 2020     Code
A Fast and Accurate One-Stage Approach to Visual Grounding
Zhengyuan Yang*, Boqing Gong, Liwei Wang, Wenbing Huang, Dong Yu, Jiebo Luo ICCV 2019 (Oral Presentation)     Code
Fast, Diverse and Accurate Image Captioning Guided By Part-of-Speech
Aditya Deshpande, Jyoti Aneja, Liwei Wang, Alexander Schwing, D. A. Forsyth CVPR 2019 (Oral Presentation)
Learning Two-Branch Neural Networks for Image-Text Matching Tasks
Liwei Wang, Yin Li, Jing Huang, Svetlana Lazebnik TPAMI 2018     Code
Learning structural motif representations for efficient protein structure search
Yang Liu, Qing Ye, Liwei Wang, Jian Peng Bioinformatics 2018     Code
Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space
Liwei Wang, Alex Schwing, Svetlana Lazebnik NeurIPS 2017
Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models
Bryan Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, Svetlana Lazebnik IJCV 2016     Project
Learning Deep Structure-Preserving Image-Text Embeddings
Liwei Wang, Yin Li, Svetlana Lazebnik CVPR 2016     Code