
How useful are latent representations for information retrieval?

SPEAKER: Jian-Yun Nie is a professor at the University of Montreal. He has worked in the areas of information retrieval and natural language processing for more than 25 years. His research spans a wide range of IR topics, including information retrieval models, cross-language information retrieval, query expansion and understanding, and the utilization of query logs. Jian-Yun Nie has published a large number of papers in IR and NLP, and his papers have been widely cited. He serves on the editorial boards of several international journals and is a PC member of the major conferences in these areas, such as SIGIR, CIKM, and ACL. He was the general chair of the SIGIR 2011 conference held in Beijing.

Starts

15:30, October 15th

Location

Teaching Building G211, North Campus

More Info

ABSTRACT: Traditional information retrieval uses words as the basic representation units. The limitations of such a representation are well known. In particular, it cannot deal with synonymous and polysemous words, which may hurt retrieval effectiveness. A series of latent representations have been used to address these problems, ranging from LSA and LDA to more recent word embeddings. In this talk, we will review these representations for IR applications. It will be shown that latent representations can help solve the problems to some extent, but cannot (yet) fully replace the traditional word-based representation. We will provide some analysis on this.
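To illustrate the synonymy problem the abstract mentions, here is a minimal toy sketch (not from the talk): a word-based score finds no overlap between a query and a synonymous document, while a simple latent representation built from made-up embedding vectors scores them as highly similar. All vectors, words, and function names below are hypothetical, chosen only for illustration.

```python
import math

def term_overlap(query, doc):
    """Word-based score: fraction of query terms that appear in the document."""
    q, d = query.split(), set(doc.split())
    return sum(t in d for t in q) / len(q)

# Hypothetical 3-d word vectors in which synonyms lie close together.
EMB = {
    "car":    (0.90, 0.10, 0.00),
    "auto":   (0.85, 0.15, 0.05),
    "repair": (0.10, 0.90, 0.10),
    "fix":    (0.12, 0.88, 0.08),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def embed(text):
    """Average the word vectors: a very simple latent representation."""
    vecs = [EMB[t] for t in text.split() if t in EMB]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

query, doc = "car repair", "auto fix"
print(term_overlap(query, doc))                    # 0.0 -- no shared words
print(round(cosine(embed(query), embed(doc)), 3))  # close to 1.0
```

The word-based score misses the document entirely, while the latent score recognizes the paraphrase; the talk's point is that this gain does not come for free, since dense representations can also blur distinctions that exact word matching preserves.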