Number of found records: 80

MANI, Inderjeet; HOUSE, David; KLEIN, Gary [et al.]

TIPSTER Text Summarization Evaluation Conference (SUMMAC)

Online (11/05/2005)

Analysis of evaluation methods for automated summarization based on natural language processing techniques; these issues are intrinsic to the evaluation of summarization or of any other NLP technology

automated summarization; evaluation methods
Assessment
|
|
|
|
MARCU, Daniel |
|
Discourse structure, rhetorical parsing and text summarization |
|
Online (11/05/2005)

Compares algorithms for determining the validity of textual structure. There are two applications: an abstracting system based on discourse structure and a text-planning algorithm.

text structure
Assessment |
|
|
|
|
MAYBURY, Mark T. |
|
Automated Event Summarization Techniques
|
Online (11/05/2005)
|
Automatically summarizing events from data or knowledge bases is a desirable capability for a number of application areas including report generation from databases (e.g., weather, financial, medical) and simulations (e.g., military, manufacturing, economic). While there have been several efforts to generate narratives from underlying structures, few have focused on event summarization. This extended abstract outlines tactics for selecting and presenting summaries of events. We discuss these tactics in the context of a system which generates summaries of events from an object-oriented battle simulator. (AU)
|
Automated Summarization; Report Generation |
Assessment |
|
|
|
|
MITRA, Mandar; SINGHAL, Amit; BUCKLEY, Chris |
|
Automatic text summarization by paragraph extraction. |
|
PDF |
|
Over the years, the amount of information available electronically has grown manifold. There is an increasing demand for automatic methods for text summarization. Domain-independent techniques for automatic summarization by paragraph extraction have been proposed in [12,15]. In this study, we attempt to evaluate these methods by comparing the automatically generated extracts to ones generated by humans. In view of the fact that extracts generated by two humans for the same article are surprisingly dissimilar, the performance of the automatic methods is satisfactory. Even though this observation calls into question the feasibility of producing perfect summaries by extraction, we believe that, given the unavailability of other effective domain-independent summarization tools, this is a reasonable, though imperfect, alternative. (AU)
|
automatic methods; text summarization; evaluation; extracts |
Assessment |
|
|
|
|