Number of records found: 59
KAPOUN, Jim

Teaching undergrads WEB evaluation: A guide for library instruction

C&RL News, July/August 1998, pp. 522-523

Online (15/06/2004)

Presents five criteria for evaluating a web page.

Information; criterion; evaluation; Internet
Assessment
|
|
|
|
KEKÄLÄINEN, Jaana; JÄRVELIN, Kalervo

Using Graded Relevance Assessments in IR Evaluation

Journal of the American Society for Information Science and Technology, vol. 53, no. 13, 2002, pp. 1120-1129

Online (13/05/2005)

This article proposes evaluation methods based on the use of nondichotomous relevance judgements in IR experiments. It is argued that evaluation methods should credit IR methods for their ability to retrieve highly relevant documents, which is desirable from the user's point of view in modern large IR environments. The proposed methods are (1) a novel application of P-R curves and average precision computations based on separate recall bases for documents of different degrees of relevance, and (2) generalized recall and precision based directly on graded relevance assessments (i.e., not dichotomizing the assessments). We demonstrate the use of the traditional and the novel evaluation measures in a case study on the effectiveness of query types, based on combinations of query structures and expansion, in retrieving documents of various degrees of relevance. The test was run with a best-match retrieval system (InQuery) in a text database consisting of newspaper articles. To gain insight into the retrieval process, one should use both graded relevance assessments and effectiveness measures that enable one to observe the differences, if any, between retrieval methods in retrieving documents of different levels of relevance. In modern times of information overload, one should pay attention, in particular, to the capability of retrieval methods to retrieve highly relevant documents. (AU) A short sketch of the generalized measures follows this record.

Information retrieval; method; evaluation; relevance judgement
Assessment
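
The generalized recall and precision named in point (2) of the abstract lend themselves to a short worked example. The following Python code is a minimal sketch, assuming graded relevance judgements normalized to [0, 1]; the function names and the toy recall base are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of generalized precision and recall over graded
# relevance judgements. Assumes each document's relevance is a score
# normalized to [0, 1]; the data below is illustrative only.

def generalized_precision(retrieved, relevance):
    """Mean relevance score of the retrieved documents."""
    if not retrieved:
        return 0.0
    return sum(relevance.get(d, 0.0) for d in retrieved) / len(retrieved)

def generalized_recall(retrieved, relevance):
    """Relevance mass retrieved, as a fraction of the total relevance
    mass in the recall base."""
    total = sum(relevance.values())
    if total == 0:
        return 0.0
    return sum(relevance.get(d, 0.0) for d in retrieved) / total

# Toy recall base: a 4-point graded scale (0-3) normalized to [0, 1].
relevance = {"d1": 1.0, "d2": 2/3, "d3": 1/3, "d4": 0.0, "d5": 1/3}
retrieved = ["d1", "d4", "d5"]

print(generalized_precision(retrieved, relevance))  # (1.0 + 0.0 + 1/3) / 3
print(generalized_recall(retrieved, relevance))     # (1.0 + 0.0 + 1/3) / (7/3)
```

Unlike dichotomous precision and recall, these measures reward a run that ranks the highly relevant "d1" above the marginally relevant "d5", which is the behaviour the article argues evaluation should credit.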
|
|
|
|
KIRK, Elizabeth E.

Evaluating Information Found on the Internet

Online (13/05/2005)

Presents different criteria for evaluating information found online.

Evaluation; quality; Internet; information
Assessment
|
|
|
|
MANI, Inderjeet; GATES, Barbara; BLOEDORN, Eric

Improving Summaries by Revising Them

In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pp. 558-565, College Park, Maryland, USA, June 1999

PDF

This paper describes a program which revises a draft text by aggregating descriptions of discourse entities, in addition to deleting extraneous information. In contrast to the knowledge-rich sentence aggregation approaches explored in the past, this approach exploits statistical parsing and robust coreference detection. In an evaluation involving the revision of topic-related summaries using informativeness measures from the TIPSTER SUMMAC evaluation, the results show gains in informativeness without compromising readability. (AU)

Evaluation; indicator; TIPSTER SUMMAC
Assessment
|
|
|
|