Information Retrieval Evaluation in a Changing World :
General material designation
[Book]
Other title information
Lessons Learned from 20 Years of CLEF /
Statement of responsibility
Nicola Ferro, Carol Peters, editors.
Publication, distribution, etc.
Place of publication, distribution, etc.
Cham :
Name of publisher, distributor, etc.
Springer,
Date of publication, distribution, etc.
2019.
Physical description
Specific material designation and extent of item
1 online resource (597 pages)
Series
Series title
The Information Retrieval Series ;
Volume designation
v. 41
General note
Note text
1 Task Definition
Contents note
Note text
Intro; Foreword; Preface; Contents; Acronyms; Editorial Board; Reviewers; Part I Experimental Evaluation and CLEF; From Multilingual to Multimodal: The Evolution of CLEF over Two Decades; 1 Introduction; 1.1 Experimental Evaluation; 1.2 International Evaluation Initiatives; 2 CLEF 1.0: Cross-Language Evaluation Forum (2000-2009); 2.1 Tracks and Tasks in CLEF 1.0; 2.1.1 Multilingual Text Retrieval (2000-2009); 2.1.2 The Domain-Specific Track (2001-2008); 2.1.3 Interactive Cross-Language Retrieval (2002-2009); 2.1.4 The Question-Answering Track (2003-2015)
Note text
2.1.5 Cross-Language Retrieval in Image Collections (2003-2019); 2.1.6 Spoken Document/Speech Retrieval (2003-2007); 2.1.7 Multilingual Web Retrieval (2005-2008); 2.1.8 Geographical Retrieval (2005-2008); 2.1.9 Multilingual Information Filtering (2008-2009); 2.1.10 Cross-Language Video Retrieval (2008-2009); 2.1.11 Component-Based Evaluation (2009); 3 CLEF 2.0: Conference and Labs of the Evaluation Forum (2010-2019); 3.1 Workshops and Labs in CLEF 2.0; 3.1.1 Web People Search (2010); 3.1.2 Cross-Lingual Expert Search (2010); 3.1.3 Music Information Retrieval (2011)
Note text
3.1.4 Entity Recognition (2013); 3.1.5 Multimodal Spatial Role Labeling (2017); 3.1.6 Extracting Protests from News (2019); 3.1.7 Question Answering (2003-2015); 3.1.8 Image Retrieval (2003-2019); 3.1.9 Log File Analysis (2009-2011); 3.1.10 Intellectual Property in the Patent Domain (2009-2013); 3.1.11 Digital Text Forensics (2010-2019); 3.1.12 Cultural Heritage in CLEF (2011-2013); 3.1.13 Retrieval on Structured Datasets (2012-2014); 3.1.14 Online Reputation Management (2012-2014); 3.1.15 eHealth (2012-2019); 3.1.16 Biodiversity Identification and Prediction (2014-2019)
Note text
3.1.17 News Recommendation Evaluation (2014-2017); 3.1.18 Living Labs (2015-2016); 3.1.19 Social Book Search (2015-2016); 3.1.20 Microblog Cultural Contextualization (2016-2018); 3.1.21 Dynamic Search for Complex Tasks (2017-2018); 3.1.22 Early Risk Prediction on the Internet (eRisk, 2017-2019); 3.1.23 Evaluation of Personalised Information Retrieval (2017-2019); 3.1.24 Automatic Identification and Verification of Political Claims (2018-2019); 3.1.25 Reproducibility (2018-2019); 4 IR Tools and Test Collections; 4.1 ELRA Catalogue; 4.2 Some Publicly Accessible CLEF Test Suites
Note text
5 The CLEF Association; 6 Impact; References; The Evolution of Cranfield; 1 Introduction; 2 Cranfield Pre-TREC; 3 TREC Ad Hoc Collections; 3.1 Size; 3.2 Evaluation Measures; 3.3 Reliability Tests; 3.3.1 Effect of Topic Set Size; 3.3.2 Effect of Evaluation Measure Used; 3.3.3 Significance Testing; 4 Moving On; 4.1 Cross-Language Test Collections; 4.2 Other Tasks; 4.2.1 Filtering Tasks; 4.2.2 Focused Retrieval Tasks; 4.2.3 Web Tasks; 4.3 Size Revisited; 4.3.1 Special Measures; 4.3.2 Constructing Large Collections; 4.4 User-Based Measures; 5 Conclusion; References; How to Run an Evaluation Task
Summary or abstract note
Note text
This volume celebrates the twentieth anniversary of CLEF - the Cross-Language Evaluation Forum for the first ten years, and the Conference and Labs of the Evaluation Forum since - and traces its evolution over these first two decades. CLEF's main mission is to promote research, innovation and development of information retrieval (IR) systems by anticipating trends in information management in order to stimulate advances in the field of IR system experimentation and evaluation. The book is divided into six parts. Parts I and II provide background and context, with the first part explaining what is meant by experimental evaluation and the underlying theory, and describing how this has been interpreted in CLEF and in other internationally recognized evaluation initiatives. Part II presents research architectures and infrastructures that have been developed to manage experimental data and to provide evaluation services in CLEF and elsewhere. Parts III, IV and V represent the core of the book, presenting some of the most significant evaluation activities in CLEF, ranging from the early multilingual text processing exercises to the later, more sophisticated experiments on multimodal collections in diverse genres and media. In all cases, the focus is not only on describing "what has been achieved", but above all on "what has been learnt". The final part examines the impact CLEF has had on the research world and discusses current and future challenges, both academic and industrial, including the relevance of IR benchmarking in industrial settings. Mainly intended for researchers in academia and industry, it also offers useful insights and tips for practitioners in industry working on the evaluation and performance issues of IR tools, and graduate students specializing in information retrieval.
Other edition of the work in another medium
Title
Information Retrieval Evaluation in a Changing World : Lessons Learned from 20 Years of CLEF.