PLM4IR: The WSDM 2022 Workshop on
Pre-trained Language Models for Information Retrieval

Held in conjunction with WSDM'22


Phoenix, AZ, United States, February 21-25, 2022

 

Introduction

Recently, the emergence of pre-trained models (PTMs) has yielded immense success in natural language processing, speech recognition, and computer vision, significantly advancing the state of the art in these areas. In the information retrieval (IR) community, pre-trained models have also attracted much attention, and researchers have applied existing pre-training methods or even developed novel pre-training methods for different IR applications. Despite the significant progress PTMs have made in these areas, there are still many challenges to be addressed when applying these models to real-world IR scenarios. The PLM4IR workshop aims to provide a venue that brings together practitioners and researchers from academia and industry (i) to discuss the principles, limitations, and applications of pre-trained language models in IR, and (ii) to foster research on innovative algorithms, novel techniques, and new applications of PTMs to information retrieval.

Schedule

Zoom Link: https://us02web.zoom.us/j/85676029010?pwd=aDRPRlFDcVFHMm1mV0o5VVpCOG9TZz09

09:00-09:05 Opening ceremony
09:05-09:50 Keynote Speaker: Jie Tang
  Title: "WuDao: Pretrain the World" 
09:50-10:35 Keynote Speaker: Hamed Zamani
  Title: "Recent Advancements and Current Challenges in Neural Information Retrieval"  
10:40-11:10 Invited talk: Xinyu Ma
  Title: Pre-training with Representative Words Prediction for Ad-hoc Retrieval  
11:10-11:40 Invited talk: Yujia Zhou
  Title: DynamicRetriever: A Pre-training Model-based IR System with Neither Sparse nor Dense Index
11:40-12:10 Invited talk: Jingtao Zhan
  Title: DynamicRetriever: A Pre-training Model-based IR System with Neither Sparse nor Dense Index

 

Keynotes



Jie Tang is a Professor and the Associate Chair of the Department of Computer Science at Tsinghua University. He is a Fellow of the IEEE. His interests include artificial intelligence, data mining, social networks, and machine learning. He served as General Co-Chair of WWW'23, PC Co-Chair of WWW'21, CIKM'16, and WSDM'15, and as Editor-in-Chief of IEEE Transactions on Big Data and AI Open. He leads the project AMiner.org, an AI-enabled research network analysis system that has attracted more than 20 million users from 220 countries and regions. He was honored with the SIGKDD Test-of-Time Award, the UK Royal Society-Newton Advanced Fellowship Award, the NSFC Distinguished Young Scholar award, and the KDD'18 Service Award.

WuDao: Pretrain the World

Large-scale pretrained models trained on web texts have substantially advanced the state of the art in various AI tasks, such as natural language understanding, text generation, image processing, and multimodal modeling. Downstream task performance has also constantly increased over the past few years. In this talk, I will first go through three model families: autoregressive models (e.g., GPT), autoencoding models (e.g., BERT), and encoder-decoder models. Then, I will introduce WuDao, China's first homegrown super-scale intelligent model system, whose goal is to build an ultra-large-scale, cognition-oriented pretraining model focused on essential problems in general artificial intelligence from a cognitive perspective. In particular, as an example, I will elaborate on GLM (General Language Model), a novel pretraining framework designed to address this challenge. GLM has three major benefits: (1) it performs well on classification, unconditional generation, and conditional generation tasks with one single pretrained model; (2) it outperforms BERT-like models on classification due to improved pretrain-finetune consistency; (3) it naturally handles variable-length blank filling, which is crucial for many downstream tasks. Empirically, GLM substantially outperforms BERT on the SuperGLUE natural language understanding benchmark with the same amount of pre-training data.
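To make the blank-filling idea concrete, the toy Python sketch below constructs the input and target sequences for a single blank-infilling training example. It is only an illustration of the idea described in the abstract, not the actual GLM implementation (which additionally shuffles the masked spans and uses 2D positional encodings); the function name and the [MASK]/[sop]/[eop] token strings are assumptions made for this example.

  # Toy sketch of blank-infilling pre-training inputs (the idea behind GLM).
  # Not the official implementation: real GLM also shuffles spans and uses 2D positions.
  def make_blank_infilling_example(tokens, spans):
      """tokens: list of token strings; spans: (start, end) index pairs to blank out."""
      corrupted, removed = [], []
      prev = 0
      for start, end in sorted(spans):
          corrupted.extend(tokens[prev:start])
          corrupted.append("[MASK]")            # each blanked span becomes one [MASK]
          removed.append(tokens[start:end])     # the span itself becomes a generation target
          prev = end
      corrupted.extend(tokens[prev:])

      # The corrupted text is read bidirectionally (BERT-style); each blank is then
      # generated autoregressively (GPT-style), one span after another.
      generation_inputs, targets = [], []
      for span in removed:
          generation_inputs.append("[sop]")     # start-of-piece token cues generation
          generation_inputs.extend(span)        # teacher-forcing inputs for this span
          targets.extend(span + ["[eop]"])      # predict the span tokens, then end-of-piece
      return corrupted + generation_inputs, targets

  tokens = "the quick brown fox jumps over the lazy dog".split()
  inputs, targets = make_blank_infilling_example(tokens, [(1, 3), (6, 8)])
  # inputs : the [MASK] fox jumps over [MASK] dog [sop] quick brown [sop] the lazy
  # targets: quick brown [eop] the lazy [eop]   (loss is computed on the generation positions only)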



Hamed Zamani is an Assistant Professor in the Manning College of Information and Computer Sciences at the University of Massachusetts Amherst (UMass), where he also serves as the Associate Director of the Center for Intelligent Information Retrieval (CIIR), one of the top academic research labs in information retrieval worldwide. Prior to UMass, he was a Researcher at Microsoft. His research focuses on designing and evaluating statistical and machine learning models with applications to (interactive) information access systems, including search engines, recommender systems, and question answering. He is best known for his recent work in the areas of neural information retrieval and conversational information seeking. His work has led to over 70 refereed publications in the field, including a few Best Paper awards and honorable mentions, in addition to a number of open-source research tools.

Recent Advancements and Current Challenges in Neural Information Retrieval

Deep learning models for information retrieval (IR), also called Neural IR, have been studied for almost a decade. Neural IR research has observed a few paradigm shifts in the last few years. For example, the use of pre-trained large language models has recently become a popular recipe for achieving state-of-the-art retrieval performance. In this talk, I will first summarize the recent key findings by the Neural IR research community. In more detail, I will highlight recent neural network architectures and optimization methodologies for fundamental IR problems. I will further discuss recent Neural IR methods for standalone retrieval of documents from large collections. I will finally highlight the current research challenges in this area and discuss potential solutions to address these issues.
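As one concrete instance of the "popular recipe" mentioned above, the Python sketch below scores documents for a query with a dual-encoder built on a pre-trained language model, the standard setup for standalone (first-stage) dense retrieval. The checkpoint name and the mean-pooling choice are illustrative assumptions, not choices prescribed by the talk.

  # Minimal dual-encoder (dense retrieval) sketch on top of a pre-trained LM.
  # The checkpoint below is only an illustrative choice; any BERT-style encoder works.
  import torch
  from transformers import AutoTokenizer, AutoModel

  name = "sentence-transformers/msmarco-MiniLM-L-6-v3"   # assumed checkpoint
  tokenizer = AutoTokenizer.from_pretrained(name)
  encoder = AutoModel.from_pretrained(name)

  def encode(texts):
      """Mean-pool the last hidden states into one dense vector per text."""
      batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
      with torch.no_grad():
          hidden = encoder(**batch).last_hidden_state         # (batch, length, dim)
      mask = batch["attention_mask"].unsqueeze(-1).float()    # (batch, length, 1)
      return (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # (batch, dim)

  query = "what is dense retrieval"
  docs = [
      "Dense retrieval encodes queries and documents as vectors and ranks by similarity.",
      "An inverted index stores postings lists for sparse term matching.",
  ]
  scores = encode([query]) @ encode(docs).T    # dot-product relevance scores
  print(scores)                                # the first document should score higher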

Call For Papers

PLM4IR will be a forum for discussing the challenges in applying pre-trained language models (PLMs) to the information retrieval (IR) field, as well as the theory behind the models and their applications. The aims of this workshop are multi-fold: 1) establishing a bridge for communication between academic and industrial researchers, 2) providing an opportunity for researchers to present new work and early results, and 3) discussing the main challenges in designing and applying PLMs in practice.

Specifically, although many existing PLM models (e.g., BERT and ERNIE) have achieved great success on IR tasks, these models do not consider the IR-specific cues that might benefit downstream IR tasks. A common belief behind task-dependent pre-training is that a pre-training objective that more closely resembles the downstream task could lead to better fine-tuning performance with higher efficiency. Therefore, this workshop seeks to pursue two main themes:

  • How to design effective and efficient PLM models tailored for IR tasks?
  • How to understand the theoretical and practical advantages or disadvantages of PLM models in various IR tasks (e.g., first-stage retrieval and the re-ranking stage)? A minimal re-ranking sketch is given after this list.
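To make the distinction in the second theme concrete, the Python sketch below re-ranks the candidates returned by a first-stage retriever with a cross-encoder, which reads the query and each document jointly with one pre-trained model. This is a minimal illustration only; the checkpoint name is an assumption, not a recommendation of the workshop.

  # Minimal cross-encoder re-ranking sketch: the query and each candidate document
  # are scored jointly by one pre-trained model.
  import torch
  from transformers import AutoTokenizer, AutoModelForSequenceClassification

  name = "cross-encoder/ms-marco-MiniLM-L-6-v2"   # assumed re-ranker checkpoint
  tokenizer = AutoTokenizer.from_pretrained(name)
  reranker = AutoModelForSequenceClassification.from_pretrained(name)

  query = "what is dense retrieval"
  candidates = [    # e.g. the top documents returned by a first-stage retriever
      "Dense retrieval encodes queries and documents as vectors and ranks by similarity.",
      "An inverted index stores postings lists for sparse term matching.",
  ]
  pairs = tokenizer([query] * len(candidates), candidates,
                    padding=True, truncation=True, return_tensors="pt")
  with torch.no_grad():
      scores = reranker(**pairs).logits.squeeze(-1)    # one relevance score per pair
  ranked = [c for _, c in sorted(zip(scores.tolist(), candidates), reverse=True)]
  print(ranked[0])    # the candidate judged most relevant by the cross-encoder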

Specific issues that emerge here include, but are not limited to:

  • Evaluation: evaluating PLM models with complicated metrics and targets (e.g., effectiveness, time complexity, and space complexity) in IR scenarios.
  • Efficiency: improving the efficiency of inference for PLM models in practical search systems.
  • Interpretability: interpreting the results of the PLM models, as well as understanding the underlying mechanism in the models.
  • Connection: uncovering the connection between PLM models and classical IR approaches, the effect of different network components, and the benefits or risks they bring to practical IR systems.
  • Robustness: testing the robustness of PLM models with respect to noise, bias, and imbalanced distributions in data collected from IR scenarios.

These are only a few of the many research questions that arise when applying PLM models in practical applications and when studying their theoretical advantages. WSDM is uniquely positioned to host a workshop that can motivate interesting discussion and future work on PLM models for IR, in both practical use and theoretical research. All papers will be peer-reviewed in a single-blind process. We welcome many kinds of papers, including but not limited to:

  • Novel research papers
  • Demo papers
  • Work-in-progress papers
  • Visionary papers (white papers)
  • Appraisal papers of existing methods and tools (e.g., lessons learned)
  • Relevant work that has been previously published
  • Work that will be presented at the main conference of WSDM

Authors should clearly indicate in their abstracts which kind of submission their paper belongs to, to help reviewers better understand their contributions.
Submissions must be in PDF, up to 10 pages long (plus unlimited pages for references) — shorter papers are welcome — and formatted according to the standard double-column ACM Proceedings Style.
The accepted papers will be published on the workshop’s website and will not be considered archival for resubmission purposes.
Authors whose papers are accepted to the workshop will have the opportunity to participate in a spotlight and poster session, and some may also be chosen for oral presentation. For paper submission, please proceed to the submission website.

Please send enquiries to wsdm2022-plm4ir@hotmail.com

Important Dates

  • November 12, 2021: First call for papers.
  • January 15, 2022: Submission deadline.
  • January 30, 2022: Acceptance notification.
  • February 25, 2022: PLM4IR workshop.

 

 

Workshop Organizers

 
Liangjie Hong

Head of Data Science
ETSY.com
New York City, USA

Yixing Fan

Associate Professor
Institute of Computing Technology, CAS
Beijing, China

Xiaohui Xie

Postdoctoral Researcher
Tsinghua University
Beijing, China

Dehong Ma

Senior Algorithm Engineer
Baidu Inc.
Beijing, China

Shuaiqiang Wang

Senior Algorithm Engineer
Baidu Inc.
Beijing, China

Jiafeng Guo

Professor
Institute of Computing Technology, CAS
Beijing, China

Yiqun Liu

Professor
Tsinghua University
Beijing, China

Program Committee

Chen Qu (University of Massachusetts Amherst)
Marc Najork (Google)
Daniel Hill (Amazon)
Daniel Cohen (University of Massachusetts Amherst)
Xuanhui Wang (Google)
Liu Yang (University of Massachusetts Amherst)
Shangsong Liang (Sun Yat-sen University)
Michael Bendersky (Google)
Choo Hui (Amazon)
Jun Xu (Renmin University of China)
Keping Bi (University of Massachusetts Amherst)
Wayne Xin Zhao (Renmin University of China)