Semantic IDs are key to scaling generative recommendation. This talk discusses how current semantic ID design can limit expressiveness and introduces two directions to expand it: extending IDs from short to long forms (KDD’25), and moving from static to personalized semantic IDs (ICLR under review).
Yupeng Hou is a Ph.D. candidate at UC San Diego advised by Prof. Julian McAuley. He received his M.E. and B.E. from Renmin University of China under Prof. Wayne Xin Zhao. He has published first-author papers at ICML, KDD, and WWW, received paper awards at RecSys’22 and CIKM’22, and interned at Google DeepMind and Amazon Rufus. His recent work focuses on tokenization and generative recommendation.
This talk explores how to learn a unified “world model” from visual signals that supports generation, spatial reasoning, and causal inference. It covers recent works such as ReconX, DimensionX, and VideoScene on reconstructing continuous 3D scenes from sparse or single-view inputs, and LangScene-X on unifying language, images, and 3D representations. The talk further connects spatial MLLMs (NeurIPS 2025) and Physics3D to show how video world models can reason over long time horizons with spatial consistency.
Fangfu Liu is a third-year direct-Ph.D. student in Electronic Engineering at Tsinghua University, advised by Prof. Qifeng Chen. His research centers on visual intelligence and world models (3D AIGC and Video Generation). He has published multiple papers at TPAMI, CVPR, ECCV, NeurIPS, ICCV, ICLR, and KDD, with GitHub projects accumulating over 10K stars, and serves as a reviewer for CVPR, NeurIPS, and ICLR. Personal homepage: https://liuff19.github.io.
This talk introduces a paradigm for enhancing large language models by mining internal supervision signals, reducing reliance on costly external data and annotations. It will cover advances in multilingual capability, safety alignment, and reasoning efficiency, showcasing the feasibility and potential of internal supervision for more scalable and efficient LLM training.
Weixiang Zhao is a fifth-year Ph.D. candidate at the Research Center for Social Computing and Information Retrieval (SCIR), Harbin Institute of Technology, advised by Prof. Yanyan Zhao and Prof. Bing Qin. His research focuses on alignment of large language models and emotional dialogue systems, and he has published over twenty papers as first or co-first author at top conferences and journals, including works selected as NeurIPS Spotlight, ACL Oral & Panel, and EMNLP Oral. Personal homepage: https://circle-hit.github.io/.
Recording: https://meeting.tencent.com/crm/2YRQomkeef
Seven papers have been accepted by NeurIPS 2025: six in the Main track and one in the Datasets and Benchmarks track. Congratulations to Yuxin Chen, Yaorui Shi, Jingnan Zheng, Leheng Sheng, Chenhang Cui, and Guoqing Hu!
AlphaLab is now officially established at USTC! We look forward to exploring the infinite possibilities of intelligent technology together!
The final round of UrbanCup 2025 was held in a hackathon format, where expert judges from the conference evaluated and ranked teams based on their final presentations and technical reports. Out of 16 finalist teams, Team Bacon—composed of Yingzhi He, Leheng Sheng, Yuxin Chen, Yan Sun, and Peisen Zheng—achieved 2nd place in the competition.
This work presents our 2nd-place solution for the WWW’25 AgentSociety Challenge (Recommendation Track), which focuses on developing LLM-based agents for personalized recommendation. Our team, RecHackers, proposes an agent-voting system that leverages self-consistency across multiple LLM outputs to improve ranking accuracy. The approach follows a prompting-sampling-voting paradigm: we design prompts from user and item features, sample multiple candidate rankings from the LLM, and aggregate them with voting mechanisms such as majority voting or Borda count. This simple yet effective method delivers strong performance across real-world recommendation scenarios.
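As a rough illustration of the aggregation step, below is a minimal Python sketch of a Borda-count vote over several rankings sampled from an LLM. It assumes each sampled ranking is already a list of item IDs; the function name, item IDs, and example rankings are hypothetical and not taken from the team's actual code.

```python
# Minimal sketch of Borda-count aggregation over sampled rankings
# (illustrative only; not the competition implementation).
from collections import defaultdict

def borda_aggregate(rankings: list[list[str]]) -> list[str]:
    """Aggregate several candidate rankings into one via Borda count.

    Each ranking lists item IDs from best to worst; an item at position i
    among n items receives (n - 1 - i) points.
    """
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - 1 - position
    # Items that are consistently ranked near the top accumulate more points.
    return sorted(scores, key=scores.get, reverse=True)

# Example: three candidate rankings sampled from an LLM for the same prompt.
sampled = [
    ["item_a", "item_b", "item_c"],
    ["item_b", "item_a", "item_c"],
    ["item_a", "item_c", "item_b"],
]
print(borda_aggregate(sampled))  # ['item_a', 'item_b', 'item_c']
```

A plain majority vote could be obtained the same way by scoring only the top-ranked item of each sample instead of the full list.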
One paper has been accepted by SIGKDD 2025. Congratulations to Yingzhi He and Xiaohao Liu!
One paper has been accepted by SIGIR 2025. Congratulations to Guoqing Hu!
Three papers have been accepted by ICLR 2025. Congratulations to Chenhang Cui, Leheng Sheng, and Shuo Liu!