NEWS

Yupeng Hou's Sharing

Expanding the Expressiveness of Semantic IDs for Generative Recommendation

Semantic IDs are key to scaling generative recommendation. This talk discusses how current semantic ID design can limit expressiveness and introduces two directions to expand it: extending IDs from short to long forms (KDD’25), and moving from static to personalized semantic IDs (ICLR under review).

Yupeng Hou is a Ph.D. candidate at UC San Diego advised by Prof. Julian McAuley. He received his M.E. and B.E. from Renmin University of China under Prof. Wayne Xin Zhao. He has published first-author papers at ICML, KDD, and WWW, received paper awards at RecSys’22 and CIKM’22, and interned at Google DeepMind and Amazon Rufus. His recent work focuses on tokenization and generative recommendation.

Fangfu Liu's Sharing

Towards World Models from Generative Spatial Intelligence

This talk explores how to learn a unified “world model” from visual signals that supports generation, spatial reasoning, and causal inference. It covers recent works such as ReconX, DimensionX, and VideoScene on reconstructing continuous 3D scenes from sparse or single-view inputs, and LangScene-X on unifying language, images, and 3D representations. The talk further connects spatial MLLMs (NeurIPS 2025) and Physics3D to show how video world models can reason over long time horizons with spatial consistency.

Fangfu Liu is a third-year direct-Ph.D. student in Electronic Engineering at Tsinghua University, advised by Prof. Qifeng Chen. His research centers on visual intelligence and world models (3D AIGC and Video Generation). He has published multiple papers at TPAMI, CVPR, ECCV, NeurIPS, ICCV, ICLR, and KDD, with GitHub projects accumulating over 10K stars, and serves as a reviewer for CVPR, NeurIPS, and ICLR. Personal homepage: https://liuff19.github.io.

Weixiang Zhao's Sharing

Enhancing Large Language Models via Internal Supervision

This talk introduces a paradigm for enhancing large language models by mining internal supervision signals, reducing reliance on costly external data and annotations. It will cover advances in multilingual capability, safety alignment, and reasoning efficiency, showcasing the feasibility and potential of internal supervision for more scalable and efficient LLM training.

Weixiang Zhao is a fifth-year Ph.D. candidate at the Research Center for Social Computing and Information Retrieval (SCIR), Harbin Institute of Technology, advised by Prof. Yanyan Zhao and Prof. Bing Qin. His research focuses on alignment of large language models and emotional dialogue systems, and he has published over twenty papers as first or co-first author at top conferences and journals, including works selected as NeurIPS Spotlight, ACL Oral & Panel, and EMNLP Oral. Personal homepage: https://circle-hit.github.io/.

Recording: https://meeting.tencent.com/crm/2YRQomkeef

AlphaLab Setup

AlphaLab Setup at USTC!

An Zhang, Yaorui Shi, Yan Sun, Zijing Wu, Yi Zhang, Du Lin, Peisen Zheng, Yanzhen Luo, Changshuo Shen

AlphaLab is now officially established at USTC! We look forward to exploring the infinite possibilities of intelligent technology together!

UrbanCup 2025

2nd place in UrbanCup 2025: Urban Life Simulation with Large Language Models Empowered Agents

Peisen Zheng, Yan Sun, Yuxin Chen, Leheng Sheng, Yingzhi He. UrbanCup 2025

The final round of UrbanCup 2025 was held in a hackathon format, where expert judges from the conference evaluated and ranked teams based on their final presentations and technical reports. Out of 16 finalist teams, Team Bacon—composed of Yingzhi He, Leheng Sheng, Yuxin Chen, Yan Sun, and Peisen Zheng—achieved 2nd place in the competition.

ACL 2025

Three papers accepted by ACL 2025

AgentSociety Challenge in WWW

2nd place in the WWW'25 AgentSociety Challenge: Personalized Recommendation Agents with Self-Consistency

Zijing Wu, Leheng Sheng, Yuanlin Xia, Yi Zhang, Yuxin Chen, An Zhang. WWW 2025

This work presents the 2nd place solution for the WWW'25 AgentSociety Challenge (Recommendation Track), which focuses on developing LLM-based agents for personalized recommendation. Our team, RecHackers, proposes an agent-voting system that leverages self-consistency across multiple LLM outputs to improve ranking accuracy. The approach follows a prompting-sampling-voting paradigm: we design prompts based on user and item features, sample multiple candidate rankings from the LLMs, and aggregate them using voting mechanisms such as majority voting or Borda count. This simple yet effective method performed strongly across real-world recommendation scenarios, securing 2nd place in the competition.
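The Borda-count aggregation step mentioned above can be sketched in a few lines. This is a minimal illustration, not the team's actual code: the function name and item IDs are hypothetical, and it assumes each sampled ranking orders the same candidate items best-first.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate several candidate rankings into one via Borda count.

    Each ranking is a list of item IDs ordered best-first. An item at
    position p in a ranking of length n earns (n - p - 1) points; items
    are then sorted by total points, descending (ties broken by ID).
    """
    n = max(len(r) for r in rankings)
    scores = defaultdict(int)
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            scores[item] += n - pos - 1
    return sorted(scores, key=lambda item: (-scores[item], item))

# Three rankings sampled from an LLM (hypothetical item IDs):
samples = [
    ["a", "b", "c"],
    ["a", "c", "b"],
    ["b", "a", "c"],
]
print(borda_aggregate(samples))  # → ['a', 'b', 'c']
```

Self-consistency here comes purely from the aggregation: items that an LLM ranks highly across many stochastic samples accumulate more points than items it ranks highly only occasionally.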

KDD 2025 LLM2Rec

One paper accepted by KDD 2025

One paper has been accepted by KDD 2025. Congratulations to Yingzhi He and Xiaohao Liu!

SIGIR 2025 AlphaFuse

One paper accepted by SIGIR 2025

One paper accepted by SIGIR 2025. Congratulations to Guoqing Hu!