Number of records found: 40
|
LIDDY, Elizabeth DuRoss |
|
Discourse-level structure in abstracts |
|
ASIS '87: Proceedings of the 50th ASIS Annual Meeting, edited by Ching-chih Chen. Medford, New Jersey: Learned Information for the American Society for Information Science, 1987, pp. 138-147.
|
On line (11/05/2005) |
|
Describes research undertaken into the possibility of automatically detecting not only that concepts co-occur in abstracts, but also whether the roles they play in relation to each other are the ones of interest. A frame-like structure of abstracts was developed by tapping the expertise of professional abstractors. Results of the next stage of the research will show whether rule-governed instantiation of the abstract frame structure can be accomplished, showing how the information is related to other information in the abstract and offering the potential for retrieval results of greater precision. (DB)
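
As an illustration only: a frame-like structure of this kind can be pictured as a typed record whose slots hold the discourse-level components of an abstract, filled in by simple cue-phrase rules. The sketch below is a guess at that shape in Python; the slot names and cue phrases are hypothetical, not Liddy's.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical frame for the discourse-level structure of an abstract.
# Slot names are illustrative; Liddy's frame was derived from
# professional abstractors' expertise and may differ.
@dataclass
class AbstractFrame:
    purpose: Optional[str] = None       # why the study was undertaken
    methodology: Optional[str] = None   # how it was carried out
    results: Optional[str] = None       # what was found
    conclusions: Optional[str] = None   # what the findings mean

# Assumed cue phrases for each slot (made up for this sketch).
CUES = {
    "purpose": ("this paper", "we investigate", "the aim of"),
    "methodology": ("we used", "was developed", "method"),
    "results": ("results show", "we found", "showed that"),
    "conclusions": ("we conclude", "suggests that", "implications"),
}

def instantiate(sentences: list[str]) -> AbstractFrame:
    """Naive rule-governed instantiation: put each sentence into the
    first empty slot whose cue phrases it contains."""
    frame = AbstractFrame()
    for sent in sentences:
        low = sent.lower()
        for slot, cues in CUES.items():
            if getattr(frame, slot) is None and any(c in low for c in cues):
                setattr(frame, slot, sent)
                break
    return frame
```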
|
Services; User services; Information services; Abstracting services; Automatic abstracting; Abstracting |
Assessment |
|
|
|
|
MOENS, Marie-Francine; UYTTENDAELE, Caroline; DUMORTIER, Jos |
|
Abstracting of legal cases: the potential of clustering based on the selection of representative objects |
|
Journal of the American Society for Information Science, 1999, vol. 50, n. 2, pp. 151-161.
|
On line (11/05/2005) |
|
The SALOMON project automatically summarizes Belgian criminal cases in order to improve access to the large number of existing and future court decisions. SALOMON extracts text units from the case text to form a case summary. Such a case summary facilitates the rapid determination of the relevance of the case or may be employed in text search. An important part of the research concerns the development of techniques for the automatic recognition of representative text paragraphs (or sentences) in texts of unrestricted domains. These techniques are employed to eliminate redundant material in the case texts and to identify informative text paragraphs that are relevant for inclusion in the case summary. An evaluation on a test set of 700 criminal cases demonstrates that the algorithms have application potential for automatic indexing, abstracting and text linking. (AU)
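
SALOMON's use of "representative objects" is in the family of medoid-based clustering, where each cluster is summarised by one of its own members. The sketch below is a generic k-medoids pass over a precomputed paragraph-distance matrix; it illustrates the general technique, not the SALOMON implementation, and the distance matrix is assumed to come from some text representation of the paragraphs.

```python
import numpy as np

def k_medoids(dist: np.ndarray, k: int, iters: int = 100, seed: int = 0) -> np.ndarray:
    """Generic k-medoids on an n-by-n distance matrix over paragraphs.
    The medoid of each cluster is the member with the smallest summed
    distance to the other members, i.e. a representative object."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        # Assign every paragraph to its nearest current medoid.
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break  # converged
        medoids = new_medoids
    return medoids  # indices of the representative paragraphs
```

In a summariser of this kind, the medoid paragraphs would be kept as candidates for the summary while their near-duplicate cluster-mates are treated as redundant.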
|
Automatic abstracting; Court cases; Clustering; Belgium; SALOMON project |
Assessment |
|
|
|
|
TEUFEL, Simone; MOENS, Marc |
|
Sentence Extraction as a Classification Task. |
|
In MANI, Inderjeet; MAYBURY, Mark T. (Eds.), Proceedings of the Workshop on Intelligent Scalable Text Summarization at the 35th Meeting of the Association for Computational Linguistics and the 8th Conference of the European Chapter of the Association for Computational Linguistics, Madrid, Spain, 1997.
|
PDF |
|
A useful first step in document summarisation is the selection of a small number of 'meaningful' sentences from a larger text. Kupiec et al. (1995) describe this as a classification task: on the basis of a corpus of technical papers with summaries written by professional abstractors, their system identifies those sentences in the text which also occur in the summary, and then acquires a model of the 'abstract-worthiness' of a sentence as a combination of a limited number of properties of that sentence. We report on a replication of this experiment with different data: summaries for our documents were not written by professional abstractors, but by the authors themselves. This produced fewer alignable sentences to train on. We use alternative 'meaningful' sentences (selected by a human judge) as training and evaluation material, because this has advantages for the subsequent automatic generation of more flexible abstracts. We quantitatively compare the two different strategies for training and evaluation (via alignment vs. human judgement); we also discuss qualitative differences and consequences for the generation of abstracts. (AU)
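
Kupiec et al.'s "combination of a limited number of properties" is a Naive Bayes classifier over discrete sentence features, with probabilities estimated from sentences that align between full texts and summaries. The toy scorer below shows that combination in Python; the three features and all the probabilities are illustrative stand-ins, not the paper's actual values.

```python
import math

def sentence_features(sent: str, index: int, n_sents: int) -> dict[str, bool]:
    """Three discrete features in the spirit of Kupiec et al. (1995):
    position in the document, presence of a cue phrase, length cut-off."""
    return {
        "lead_or_final": index < 3 or index >= n_sents - 2,
        "cue_phrase": any(c in sent.lower()
                          for c in ("in conclusion", "we show", "this paper")),
        "above_length_cutoff": len(sent.split()) > 5,
    }

def abstract_worthiness(sent: str, index: int, n_sents: int,
                        prior: float,
                        p_feat_given_worthy: dict[str, float],
                        p_feat: dict[str, float]) -> float:
    """Log-space Naive Bayes:
    log P(worthy | f1..fk) = log P(worthy) + sum_j [log P(fj | worthy) - log P(fj)]."""
    score = math.log(prior)
    for name, value in sentence_features(sent, index, n_sents).items():
        p1 = p_feat_given_worthy[name] if value else 1.0 - p_feat_given_worthy[name]
        p0 = p_feat[name] if value else 1.0 - p_feat[name]
        score += math.log(p1) - math.log(p0)
    return score

# Made-up probabilities; in the paper they are estimated from the
# alignment between document sentences and abstract sentences.
PRIOR = 0.05
P_GIVEN_WORTHY = {"lead_or_final": 0.6, "cue_phrase": 0.4, "above_length_cutoff": 0.9}
P_FEAT = {"lead_or_final": 0.2, "cue_phrase": 0.1, "above_length_cutoff": 0.7}
```

An extract is then formed by ranking all sentences on this score and keeping the top few; Teufel and Moens's replication changes where the training labels come from (author summaries vs. a human judge), not this scoring machinery.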
|
Document summarization; Technical papers
Assessment |
|
|
|
|
TINKER, A. J. |
|
An empirical evaluation of human-produced and machine-produced abstracts. |
|
Library and Information Research News, 1999, vol. 23, n. 74, pp. 33-44.
|
On line (11/05/2005) |
|
Reports the findings of an empirical user-based evaluation examining the quality of automatic and manual abstracting. Presents a brief review of automatic abstracting research, a description of the research methodology, and a summary of the results. Although the results indicated that human-produced abstracts were superior to those produced automatically in almost every respect, the difference in performance and acceptability was marginal. Concludes that the continued development of automatic abstracting is clearly warranted. Recommends an increased focus on user needs and continued innovation in automatic abstracting systems to challenge the boundaries set by the precedent of manual abstracting. (DB)
|
Abstracting; Automatic abstracting; Evaluation; Manual systems |
Assessment |
|
|
|
|