Summarization Papers

Organized by Xiachong Feng.

Contributors

Yichong Huang, Haozheng Yang, Jiaan Wang

Summarization Learning Route

Summarization Learning Route (with link)

Trending

Presentations & Notes

SOTA

  1. BRIO: Bringing Order to Abstractive Summarization Yixin Liu, Pengfei Liu, Dragomir Radev, Graham Neubig ACL 2022 [pdf] [code]
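
The released BRIO checkpoints can be exercised like any BART-style seq2seq model. A minimal sketch, assuming the Hugging Face `transformers` library and the `Yale-LILY/brio-cnndm-uncased` checkpoint id (an assumption; check the paper's code release for the actual weights):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

name = "Yale-LILY/brio-cnndm-uncased"  # assumed checkpoint id
tokenizer = BartTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name)

article = "Some long news article text ..."
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=120)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```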

Benchmark

  • MuLD: The Multitask Long Document Benchmark G Thomas Hudson, Noura Al Moubayed [pdf] [data]
  • ExplainaBoard: An Explainable Leaderboard for NLP Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaichen Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Graham Neubig [pdf] [ExplainaBoard]
  • GLGE: A New General Language Generation Evaluation Benchmark Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, Nan Duan [pdf] [benchmark]

Survey

  1. An Empirical Survey on Long Document Summarization: Datasets, Models and Metrics Huan Yee Koh, Jiaxin Ju, Ming Liu, Shirui Pan ACM Computing Surveys [pdf]
    [Abs] Long documents such as academic articles and business reports have been the standard format to detail out important issues and complicated subjects that require extra attention. An automatic summarization system that can effectively condense long documents into short and concise texts to encapsulate the most important information would thus be significant in aiding the reader's comprehension. Recently, with the advent of neural architectures, significant research efforts have been made to advance automatic text summarization systems, and numerous studies on the challenges of extending these systems to the long document domain have emerged. In this survey, we provide a comprehensive overview of the research on long document summarization and a systematic evaluation across the three principal components of its research setting: benchmark datasets, summarization models, and evaluation metrics. For each component, we organize the literature within the context of long document summarization and conduct an empirical analysis to broaden the perspective on current research progress. The empirical analysis includes a study on the intrinsic characteristics of benchmark datasets, a multi-dimensional analysis of summarization models, and a review of the summarization evaluation metrics. Based on the overall findings, we conclude by proposing possible directions for future exploration in this rapidly growing field.
  2. Multi-document Summarization via Deep Learning Techniques: A Survey Congbo Ma, Wei Emma Zhang, Mingyu Guo, Hu Wang, Quan Z. Sheng [pdf]
  3. Embedding Knowledge for Document Summarization: A Survey Yutong Qu, Wei Emma Zhang, Jian Yang, Lingfei Wu, Jia Wu, Xindong Wu [pdf]
  4. A Survey on Dialogue Summarization: Recent Advances and New Frontiers Xiachong Feng, Xiaocheng Feng, Bing Qin IJCAI 2022, Survey Track [pdf]
  5. Automatic Text Summarization Methods: A Comprehensive Review Divakar Yadav, Jalpa Desai, Arun Kumar Yadav [pdf]
  6. A Survey on Cross-Lingual Summarization Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, Jie Zhou [pdf]
  7. Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods Wei Li, Wenhao Wu, Moye Chen, Jiachen Liu, Xinyan Xiao, Hua Wu [pdf]
  8. Recent Advances in Neural Text Generation: A Task-Agnostic Survey Chen Tang, Frank Guerin, Yucheng Li, Chenghua Lin [pdf]
  9. Survey of Hallucination in Natural Language Generation Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, Pascale Fung [pdf]
  10. A Survey on Retrieval-Augmented Text Generation Huayang Li, Yixuan Su, Deng Cai, Yan Wang, Lemao Liu [pdf]
  11. A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models Hanqing Zhang, Haolin Song, Shaoyu Li, Ming Zhou, Dawei Song [pdf]
  12. A Survey of Pretrained Language Models Based Text Generation Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen [pdf]
  13. A Comprehensive Review on Summarizing Financial News Using Deep Learning Saurabh Kamal, Sahil Sharma [pdf]
  14. A Survey on Multi-modal Summarization Anubhav Jangra, Adam Jatowt, Sriparna Saha, Mohammad Hasanuzzaman [pdf]
  15. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig [pdf]
  16. Pretrained Language Models for Text Generation: A Survey Junyi Li, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen IJCAI21 [pdf]
  17. A Survey of Recent Abstract Summarization Techniques Diyah Puspitaningrum ICICT21 [pdf]
  18. A Survey of the State-of-the-Art Models in Neural Abstractive Text Summarization Ayesha Ayub Syed, Ford Lumban Gaol, Tokuro Matsuo [pdf]
  19. Automatic summarization of scientific articles: A survey Nouf Ibrahim Altmami, Mohamed El Bachir Menai Journal of King Saud University - Computer and Information Sciences [pdf]
  20. Deep Learning Based Abstractive Text Summarization: Approaches, Datasets, Evaluation Measures, and Challenges Dima Suleiman, Arafat A. Awajan [pdf]
  21. A Survey of Knowledge-Enhanced Text Generation Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, Meng Jiang [pdf]
  22. From Standard Summarization to New Tasks and Beyond: Summarization with Manifold Information Shen Gao, Xiuying Chen, Zhaochun Ren, Dongyan Zhao, Rui Yan IJCAI20 [pdf]
  23. Neural Abstractive Text Summarization with Sequence-to-Sequence Models Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, Chandan K. Reddy [pdf]
  24. A Survey on Neural Network-Based Summarization Methods Yue Dong [pdf]
  25. Automated text summarisation and evidence-based medicine: A survey of two domains Abeed Sarker, Diego Molla, Cecile Paris [pdf]
  26. Automatic Keyword Extraction for Text Summarization: A Survey Santosh Kumar Bharti, Korra Sathya Babu [pdf]
  27. Text Summarization Techniques: A Brief Survey Mehdi Allahyari, Seyedamin Pouriyeh, Mehdi Assefi, Saeid Safaei, Elizabeth D. Trippe, Juan B. Gutierrez, Krys Kochut [pdf]
  28. Recent automatic text summarization techniques: a survey Mahak Gambhir, Vishal Gupta [pdf]

Toolkit

  1. iFacetSum: Coreference-based Interactive Faceted Summarization for Multi-Document Exploration Eran Hirsch, Alon Eirew, Ori Shapira, Avi Caciularu, Arie Cattan, Ori Ernst, Ramakanth Pasunuru, Hadar Ronen, Mohit Bansal, Ido Dagan EMNLP 2021 [pdf] [demo]
  2. SummerTime: Text Summarization Toolkit for Non-experts Ansong Ni, Zhangir Azerbayev, Mutethia Mutuma, Troy Feng, Yusen Zhang, Tao Yu, Ahmed Hassan Awadallah, Dragomir Radev EMNLP 2021 Demo Track [pdf] [Demo]
  3. Summary Explorer: Visualizing the State of the Art in Text Summarization Shahbaz Syed, Tariq Yousef, Khalid Al-Khatib, Stefan Jänicke, Martin Potthast [pdf] [web]
  4. fastnlp/fastSum [code]
  5. Graph4NLP [code] [summarization]
  6. CTRLsum: Towards Generic Controllable Text Summarization [pdf] [code]
  7. OpenNMT-py: Open-Source Neural Machine Translation [pdf] [code]
  8. Fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python. [code]
  9. LeafNATS: An Open-Source Toolkit and Live Demo System for Neural Abstractive Text Summarization Tian Shi, Ping Wang, Chandan K. Reddy NAACL19 [pdf] [code]
  10. TransformerSum [code]
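
For quickly sanity-checking any of these toolkits against a strong off-the-shelf baseline, here is a minimal sketch using the Hugging Face `transformers` pipeline (not itself one of the toolkits above; the checkpoint choice is illustrative):

```python
from transformers import pipeline

# BART fine-tuned on CNN/DailyMail as a generic abstractive baseline.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
document = "Long input document text goes here ..."
result = summarizer(document, max_length=130, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```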

Analysis

  1. On Decoding Strategies for Neural Text Generators Gian Wiher, Clara Meister, Ryan Cotterell [pdf]
  2. Training Dynamics for Text Summarization Models Tanya Goyal, Jiacheng Xu, Junyi Jessy Li, Greg Durrett [pdf: https://arxiv.org/abs/2110.08370]
  3. Does Summary Evaluation Survive Translation to Other Languages? Neslihan Iskender, Oleg Vasilyev, Tim Polzehl, John Bohannon, Sebastian Möller [pdf]
  4. How well do you know your summarization datasets? Priyam Tejaswin, Dhruv Naik, Pengfei Liu Findings of ACL 2021 [pdf] [code]
  5. Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution Jiacheng Xu, Greg Durrett ACL2021 [pdf] [code]
  6. To Point or Not to Point: Understanding How Abstractive Summarizers Paraphrase Text Matt Wilber, William Timkey, Marten Van Schijndel Findings of ACL 2021 [pdf] [code]
  7. What Makes a Good Summary? Reconsidering the Focus of Automatic Summarization Maartje ter Hoeve, Julia Kiseleva, Maarten de Rijke [pdf]
  8. Intrinsic Evaluation of Summarization Datasets Rishi Bommasani, Claire Cardie EMNLP20 [pdf]
  9. Metrics also Disagree in the Low Scoring Range: Revisiting Summarization Evaluation Metrics Manik Bhandari, Pranav Gour, Atabak Ashfaq, Pengfei Liu COLING20 Short [pdf] [code]
  10. At Which Level Should We Extract? An Empirical Analysis on Extractive Document Summarization Qingyu Zhou, Furu Wei, Ming Zhou COLING20 [pdf]
  11. Corpora Evaluation and System Bias detection in Multi Document Summarization Alvin Dey, Tanya Chowdhury, Yash Kumar, Tanmoy Chakraborty Findings of EMNLP [pdf]
  12. Understanding the Extent to which Summarization Evaluation Metrics Measure the Information Quality of Summaries Daniel Deutsch, Dan Roth [pdf] [code]
  13. Understanding Neural Abstractive Summarization Models via Uncertainty Jiacheng Xu, Shrey Desai, Greg Durrett EMNLP20 Short [pdf] [code]
  14. Re-evaluating Evaluation in Text Summarization Manik Bhandari, Pranav Gour, Atabak Ashfaq, Pengfei Liu, Graham Neubig EMNLP20 [pdf] [code]
  15. CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems Yiran Chen, Pengfei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang EMNLP20 [pdf] [code]
  16. What Have We Achieved on Text Summarization? Dandan Huang, Leyang Cui, Sen Yang, Guangsheng Bao, Kun Wang, Jun Xie, Yue Zhang EMNLP20 [pdf]
  17. Conditional Neural Generation using Sub-Aspect Functions for Extractive News Summarization Zhengyuan Liu, Ke Shi, Nancy F. Chen Findings of EMNLP20 [pdf]
  18. Extractive Summarization as Text Matching Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang ACL20 [pdf] [code]
  19. Neural Text Summarization: A Critical Evaluation Wojciech Kryściński, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, Richard Socher EMNLP19 [pdf]
  20. Earlier Isn’t Always Better: Sub-aspect Analysis on Corpus and System Biases in Summarization Taehee Jung, Dongyeop Kang, Lucas Mentch, Eduard Hovy EMNLP19 [pdf] [code]
  21. A Closer Look at Data Bias in Neural Extractive Summarization Models Ming Zhong, Danqing Wang, Pengfei Liu, Xipeng Qiu, Xuanjing Huang EMNLP19 Workshop [pdf]
  22. Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses Matt Grenander, Yue Dong, Jackie Chi Kit Cheung, Annie Louis EMNLP19 Short [pdf]
  23. Searching for Effective Neural Extractive Summarization: What Works and What's Next Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, Xuanjing Huang ACL19 [pdf] [code]
  24. Content Selection in Deep Learning Models of Summarization Chris Kedzie, Kathleen McKeown, Hal Daumé III EMNLP18 [pdf] [code]
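
Many of the papers above stress-test ROUGE-based evaluation. For reference, a minimal sketch of computing ROUGE with Google's `rouge-score` package (`pip install rouge-score`); this is a generic illustration, not the exact protocol of any paper listed:

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the cat sat on the mat"
candidate = "a cat was sitting on the mat"
for name, s in scorer.score(reference, candidate).items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F={s.fmeasure:.3f}")
```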

Thesis

  1. Principled Approaches to Automatic Text Summarization Maxime Peyrard [pdf]
  2. Neural Text Summarization and Generation Piji Li [pdf]

Theory

  1. Bayesian Active Summarization Alexios Gidiotis, Grigorios Tsoumakas [pdf]
  2. RefSum: Refactoring Neural Summarization Yixin Liu, Zi-Yi Dou, Pengfei Liu NAACL21 [pdf] [code]
  3. Principled Approaches to Automatic Text Summarization Maxime Peyrard [pdf]
  4. KLearn: Background Knowledge Inference from Summarization Data Maxime Peyrard, Robert West Findings of EMNLP20 [pdf] [code]
  5. A Simple Theoretical Model of Importance for Summarization Maxime Peyrard ACL19 [pdf]
  6. BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle Peter West, Ari Holtzman, Jan Buys, Yejin Choi EMNLP19 [pdf] [code]
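
Item 6 builds on the generic Information Bottleneck objective: compress a source sentence X into a summary S while retaining what predicts a relevance variable Y (BottleSum uses the next sentence as Y). In standard IB notation (the textbook objective, not a formula copied from the paper):

```latex
\min_{p(s \mid x)} \; I(S; X) - \beta\, I(S; Y), \qquad \beta > 0
```

The first term rewards compression of the source; the second preserves information relevant to Y, with the coefficient beta trading the two off.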

Dataset

ID Name Description Paper Conference
1 CNN-DailyMail News Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond SIGNLL16
2 New York Times News The New York Times Annotated Corpus
3 DUC News The Effects Of Human Variation In DUC Summarization Evaluation
4 Gigaword News A Neural Attention Model For Abstractive Sentence Summarization EMNLP15
5 Newsroom News Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies NAACL18
6 Xsum News Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization EMNLP18
7 Multi-News Multi-document News Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model ACL19
8 SAMSum Multi-party conversation SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization EMNLP19
9 AMI Meeting The AMI Meeting Corpus: A pre-announcement.
10 ICSI Meeting The ICSI Meeting Corpus
11 MSMO Multi-modal MSMO: Multimodal Summarization with Multimodal Output EMNLP18
12 How2 Multi-modal How2: A Large-scale Dataset for Multimodal Language Understanding NIPS18
13 ScisummNet Scientific paper ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks AAAI19
14 PubMed, ArXiv Scientific paper A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents NAACL18
15 TALKSUMM Scientific paper TALKSUMM: A Dataset and Scalable Annotation Method for Scientific Paper Summarization Based on Conference Talks ACL19
16 BillSum Legal BillSum: A Corpus for Automatic Summarization of US Legislation EMNLP19
17 LCSTS Chinese Weibo LCSTS: A Large Scale Chinese Short Text Summarization Dataset EMNLP15
18 WikiHow Online Knowledge Base WikiHow: A Large Scale Text Summarization Dataset
19 Concept-map-based MDS Corpus Educational Multi-document Bringing Structure into Summaries : Crowdsourcing a Benchmark Corpus of Concept Maps EMNLP17
20 WikiSum Wikipedia Multi-document Generating Wikipedia by Summarizing Long Sequences ICLR18
21 GameWikiSum Game Multi-document GameWikiSum: A Novel Large Multi-Document Summarization Dataset LREC20
22 En2Zh CLS, Zh2En CLS Cross-Lingual NCLS: Neural Cross-Lingual Summarization EMNLP19
23 Timeline Summarization Dataset Baidu timeline Learning towards Abstractive Timeline Summarization IJCAI19
24 Reddit TIFU online discussion Abstractive Summarization of Reddit Posts with Multi-level Memory Networks NAACL19
25 TripAtt Review Attribute-aware Sequence Network for Review Summarization EMNLP19
26 Reader Comments Summarization Corpus Comments-based Weibo Abstractive Text Summarization by Incorporating Reader Comments AAAI19
27 BIGPATENT Patent BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization ACL19
28 Curation Corpus News Curation Corpus for Abstractive Text Summarisation
29 MATINF Multi-task MATINF: A Jointly Labeled Large-Scale Dataset for Classification, Question Answering and Summarization ACL20
30 MLSUM Multi-Lingual Summarization Dataset MLSUM: The Multilingual Summarization Corpus EMNLP20
31 Dialogue(Debate) Argumentative Dialogue Summary Corpus Using Summarization to Discover Argument Facets in Online Idealogical Dialog NAACL15
32 WCEP News Multi-document A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal ACL20 Short
33 ArgKP Argument-to-key Point Mapping From Arguments to Key Points: Towards Automatic Argument Summarization ACL20
34 CRD3 Dialogue Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset 2020
35 Gazeta Russian news Dataset for Automatic Summarization of Russian News
36 MIND English news recommendation, Summarization, Classification, Entity MIND: A Large-scale Dataset for News Recommendation ACL20
37 public_meetings French meeting (test set) Align then Summarize: Automatic Alignment Methods for Summarization Corpus Creation LREC
38 Enron Email Building a Dataset for Summarization and Keyword Extraction from Emails 2014
39 Columbia Email Summarizing Email Threads 2004
40 BC3 Email A publicly available annotated corpus for supervised email summarization
41 WikiLingua Cross-Lingual WikiLingua- A New Benchmark Dataset for Cross-Lingual Abstractive Summarization Findings of EMNLP20
42 LcsPIRT Chinese Dialogue Global Encoding for Long Chinese Text Summarization TALLIP
43 CLTS, CLTS+ Chinese News CLTS: A New Chinese Long Text Summarization Dataset; CLTS+: A New Chinese Long Text Summarization Dataset with Abstractive Summaries NLPCC20
44 VMSMO Multi-modal VMSMO: Learning to Generate Multimodal Summary for Video-based News Articles EMNLP20
45 Multi-XScience Multi-document Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles EMNLP20 short
46 SCITLDR Scientific Document TLDR: Extreme Summarization of Scientific Documents Findings of EMNLP20
47 scisumm-corpus Scientific Document
48 QBSUM Query-Based Chinese QBSUM: a Large-Scale Query-Based Document Summarization Dataset from Real-world Applications Computer Speech & Language
49 qMDS Query-Based Multi-Document AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization
50 Liputan6 Indonesian Liputan6: A Large-scale Indonesian Dataset for Text Summarization AACL20
51 SportsSum Sports Game Generating Sports News from Live Commentary: A Chinese Dataset for Sports Game Summarization AACL20
52 WikiAsp Aspect-based WikiAsp: A Dataset for Multi-domain Aspect-based Summarization Transactions of the ACL
53 DebateSum argument DebateSum: A large-scale argument mining and summarization dataset ARGMIN 2020
54 Open4Business Business Open4Business (O4B): An Open Access Dataset for Summarizing Business Documents Workshop on Dataset Curation and Security-NeurIPS 2020
55 OrangeSum French BARThez: a Skilled Pretrained French Sequence-to-Sequence Model
56 Medical Conversation medical conversation Summarizing Medical Conversations via Identifying Important Utterances COLING20
57 SumTitles movie dialogue SumTitles: a Summarization Dataset with Low Extractiveness COLING20
58 BANS bengali news Bengali Abstractive News Summarization (BANS): A Neural Attention Approach TCCE-2020
59 e-commerce E-commerce On the Faithfulness for E-commerce Product Summarization COLING20
60 TWEETSUM Twitter TWEETSUM: Event-oriented Social Summarization Dataset COLING20
61 SPACE Opinion Extractive Opinion Summarization in Quantized Transformer Spaces TACL
62 pn-summary Persian Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization csicc2021
63 E-commerce1 (desensitized) Dialogue Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling AAAI21
64 E-commerce2 (desensitized) Dialogue Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and Context-Aware Auto-Encoders AAAI21
65 BengaliSummarization Bengali Unsupervised Abstractive Summarization of Bengali Text Documents EACL21
66 MediaSum Dialogue MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization NAACL21
67 Healthline and BreastCancer multi-document Nutri-bullets: Summarizing Health Studies by Composing Segments AAAI21
68 GOVREPORT Long Government reports Efficient Attentions for Long Document Summarization NAACL21
69 SSN Scientific Paper Enhancing Scientific Papers Summarization with Citation Graph AAAI21
70 MTSamples Medical Towards objectively evaluating the quality of generated medical summaries
71 QMSum Meeting, Query QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization NAACL21
72 MS2 Medical, Multi-Document MS2: Multi-Document Summarization of Medical Studies
73 SummScreen Television Series SummScreen: A Dataset for Abstractive Screenplay Summarization ACL 2022
74 SciDuet Scientific Papers and Slides D2S: Document-to-Slide Generation Via Query-Based Text Summarization NAACL21
75 MultiHumES Multilingual MultiHumES: Multilingual Humanitarian Dataset for Extractive Summarization EACL21
76 DialSumm Dialogue DialSumm: A Real-Life Scenario Dialogue Summarization Dataset Findings of ACL21
77 BookSum Book, Long-form BookSum: A Collection of Datasets for Long-form Narrative Summarization
78 CLES Chinese Weibo A Large-Scale Chinese Long-Text Extractive Summarization Corpus ICASSP
79 FacetSum Scientific Paper Bringing Structure into Summaries: a Faceted Summarization Dataset for Long Scientific Documents ACL2021 short
80 ConvoSumm Dialogue ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining ACL2021
81 AgreeSum Multi-document with entailment annotations AgreeSum: Agreement-Oriented Multi-Document Summarization Findings of ACL2021
82 En2De Cross-Lingual En2De Cross-Lingual Abstractive Summarization with Limited Parallel Resources ACL 2021
83 VT-SSum Spoken VT-SSum: A Benchmark Dataset for Video Transcript Segmentation and Summarization
84 AESLC Email This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation ACL 2019
85 XL-Sum Cross-lingual XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages Findings of ACL2021
86 TES 2012-2016 Tweet TSSuBERT: Tweet Stream Summarization Using BERT
87 PENS Personalized Headline PENS: A Dataset and Generic Framework for Personalized News Headline Generation ACL 2021
88 XSum Hallucination Annotations Factuality On Faithfulness and Factuality in Abstractive Summarization ACL 2020
89 factuality-datasets Factuality Annotating and Modeling Fine-grained Factuality in Summarization NAACL 2021
90 frank Factuality Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics NAACL 2021
91 TRIPOD Movie Movie Summarization via Sparse Graph Construction AAAI 2021
92 AdaptSum Low-Resource AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization NAACL 2021
93 PTS Product Multi-Source Pointer Network for Product Title Summarization CIKM 2018
94 RAMDS Reader-Aware Reader-Aware Multi-Document Summarization: An Enhanced Model and The First Dataset EMNLP 2017 Workshop
95 court judgment court judgment How to Write Summaries with Patterns? Learning towards Abstractive Summarization through Prototype Editing EMNLP 2019
96 ADEGBTS gaze behaviors A Dataset for Exploring Gaze Behaviors in Text Summarization ACM MMSys'20
97 MeQSum Medical On the Summarization of Consumer Health Questions ACL 2019
98 OpoSum Opinion Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and They Are Both Weakly Supervised EMNLP 2018
99 MM-AVS Multi-modal Multi-modal Summarization for Video-containing Documents NAACL 2021
100 WikiCatSum multi-doc Generating Summaries with Topic Templates and Structured Convolutional Decoders ACL 2019
101 SDF-TLS Timeline Summarize Dates First: A Paradigm Shift in Timeline Summarization SIGIR 2021
102 RWS-Cit Automatic generation of related work through summarizing citations 2017
103 MTLS Timeline Multi-TimeLine Summarization (MTLS): Improving Timeline Summarization by Generating Multiple Summaries ACL 2021
104 EMAILSUM Email EmailSum: Abstractive Email Thread Summarization ACL 2021
105 WikiSum WikiHow WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation ACL 2021 Short
106 SumPubMed PubMed Scientific Article SumPubMed: Summarization Dataset of PubMed Scientific Articles ACL 2021 Student Research Workshop
107 MLGSum Multi-lingual Contrastive Aligned Joint Learning for Multilingual Summarization ACL 2021 Findings
108 SMARTPHONE, COMPUTER Product CUSTOM: Aspect-Oriented Product Summarization for E-Commerce
109 CSDS Customer Service Dialogue CSDS: A Fine-grained Chinese Dataset for Customer Service Dialogue Summarization EMNLP 2021
110 persian-dataset persian ARMAN: Pre-training with Semantically Selecting and Reordering of Sentences for Persian Abstractive Summarization
111 StreamHover spoken livestream StreamHover: Livestream Transcript Summarization and Annotation EMNLP 2021
112 CNewSum News CNewSum: A Large-scale Chinese News Summarization Dataset with Human-annotated Adequacy and Deducibility Level NLPCC 2021
113 MiRANews news, factual MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization EMNLP 2021 Findings
114 HowSumm query multi-doc HowSumm: A Multi-Document Summarization Dataset Derived from WikiHow Articles
115 SportsSum2.0 Sports SportsSum2.0: Generating High-Quality Sports News from Live Text Commentary
116 CoCoSum opinion multi-ref Comparative Opinion Summarization via Collaborative Decoding
117 MReD Controllable MReD: A Meta-Review Dataset for Controllable Text Generation
118 MS^2 Multi-Document, Medical MS^2: Multi-Document Summarization of Medical Studies EMNLP 2021
119 MassiveSumm MassiveSumm: a very large-scale, very multilingual, news summarisation dataset EMNLP 2021
120 XWikis multilingual Models and Datasets for Cross-Lingual Summarisation EMNLP 2021
121 SUBSUME Intent, subjective SUBSUME: A Dataset for Subjective Summary Extraction from Wikipedia Documents EMNLP 2021 newsum
122 TLDR9+ TLDR9+: A Large Scale Resource for Extreme Summarization of Social Media Posts EMNLP 2021 newsum
123 20 Minuten German A New Dataset and Efficient Baselines for Document-level Text Simplification in German EMNLP 2021 newsum
124 WSD multi-lingual A Novel Wikipedia based Dataset for Monolingual and Cross-Lingual Summarization EMNLP 2021 newsum
125 TEDSummary Speech Attention-based Multi-hypothesis Fusion for Speech Summarization
126 SummaC Benchmark Factual, NLI SummaC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization
127 ForumSum Conversation ForumSum: A Multi-Speaker Conversation Summarization Dataset EMNLP 2021 Findings
128 K-SportsSum Sports Knowledge Enhanced Sports Game Summarization WSDM 2022
129 Test-Amazon Opinion, New test for Amazon reviews Unsupervised Opinion Summarization as Copycat-Review Generation ACL 2020
130 Test-Amazon-Yelp Opinion, New test for Amazon(180) and Yelp(300) Few-Shot Learning for Opinion Summarization EMNLP 2020
131 AmaSum Opinion Learning Opinion Summarizers by Selecting Informative Reviews EMNLP 2021
132 CrossSum Cross lingual CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs
133 HCSCL-MSDataset Multi-modal Hierarchical Cross-Modality Semantic Correlation Learning Model for Multimodal Summarization AAAI 2022
134 Klexikon German Klexikon: A German Dataset for Joint Summarization and Simplification
135 TODSum Customer Service TODSum: Task-Oriented Dialogue Summarization with State Tracking
136 TWEETSUMM Customer Service TWEETSUMM - A Dialog Summarization Dataset for Customer Service Findings of EMNLP 2021
137 PeerSum Multi-document, Scientific PeerSum: A Peer Review Dataset for Abstractive Multi-document Summarization
138 Celebrity TS, Event TS, Wiki TS Timeline, person, event Follow the Timeline! Generating Abstractive and Extractive Timeline Summary in Chronological Order TOSI 2022
139 Chart-to-Text chart Chart-to-Text: A Large-Scale Benchmark for Chart Summarization
140 GovReport-QS Long Document HIBRIDS: Attention with Hierarchical Biases for Structure-aware Long Document Summarization ACL 2022
141 EntSUM Entity EntSUM: A Data Set for Entity-Centric Extractive Summarization ACL 2022
142 ALLSIDES Framing Bias NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias ACL 2022
143 GRAPHELSUMS graph Summarization with Graphical Elements
144 Annotated-Wikilarge-Newsela Factuality Evaluating Factuality in Text Simplification ACL 2022
145 WikiMulti Cross-lingual WikiMulti: a Corpus for Cross-Lingual Summarization
146 Welsh Introducing the Welsh Text Summarisation Dataset and Baseline Systems
147 SuMe Biomedical SuMe: A Dataset Towards Summarizing Biomedical Mechanisms LREC 2022
148 CiteSum CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation
149 MSAMSum Dialogue MSAMSum: Towards Benchmarking Multi-lingual Dialogue Summarization ACL 2022 DialDoc
150 SQuALITY Long-Document SQuALITY: Building a Long-Document Summarization Dataset the Hard Way
151 X-SCITLDR X-SCITLDR: Cross-Lingual Extreme Summarization of Scholarly Documents JCDL 2022
152 NEWTS News NEWTS: A Corpus for News Topic-Focused Summarization
153 ASPECTNEWS ASPECTNEWS: Aspect-Oriented Summarization of News Documents ACL 2022
154 RNSum Commit Logs RNSum: A Large-Scale Dataset for Automatic Release Note Generation via Commit Logs Summarization ACL 2022
155 AnswerSumm query multi-doc AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization NAACL 2022
156 CHQ-Summ CHQ-Summ: A Dataset for Consumer Healthcare Question Summarization
157 Multi-LexSum multi-doc Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities
158 DACSA Catalan and Spanish DACSA: A large-scale Dataset for Automatic summarization of Catalan and Spanish newspaper Articles NAACL 2022

Dialogue

  1. A Survey on Dialogue Summarization: Recent Advances and New Frontiers Xiachong Feng, Xiaocheng Feng, Bing Qin IJCAI 2022, Survey Track [pdf] [Chinese Slides]

Dataset

  1. TODSum: Task-Oriented Dialogue Summarization with State Tracking Lulu Zhao, Fujia Zheng, Keqing He, Weihao Zeng, Yuejie Lei, Huixing Jiang, Wei Wu, Weiran Xu, Jun Guo, Fanyu Meng [pdf]
  2. TWEETSUMM - A Dialog Summarization Dataset for Customer Service Guy Feigenblat, Chulaka Gunasekara, Benjamin Sznajder, Sachindra Joshi, David Konopnicki, Ranit Aharonov Findings of EMNLP 2021 [pdf] [data]
  3. ForumSum: A Multi-Speaker Conversation Summarization Dataset Misha Khalman, Yao Zhao, Mohammad Saleh EMNLP 2021 Findings [pdf] [data]
  4. CSDS: A Fine-grained Chinese Dataset for Customer Service Dialogue Summarization Haitao Lin, Liqun Ma, Junnan Zhu, Lu Xiang, Yu Zhou, Jiajun Zhang, Chengqing Zong EMNLP 2021 [pdf] [data]
  5. EmailSum: Abstractive Email Thread Summarization Shiyue Zhang, Asli Celikyilmaz, Jianfeng Gao, Mohit Bansal ACL 2021 [pdf] [data]
  6. DialSumm: A Real-Life Scenario Dialogue Summarization Dataset Yulong Chen, Yang Liu, Liang Chen, Yue Zhang Findings of ACL21 [pdf] [data]
  7. ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining Alexander R. Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, Dragomir Radev ACL 2021 [pdf] [code]
  8. MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization Chenguang Zhu, Yang Liu, Jie Mei, Michael Zeng NAACL21 [pdf] [code]
  9. QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, Dragomir Radev NAACL21 [pdf] [data]
  10. Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset Revanth Rameshkumar, Peter Bailey ACL20 [pdf] [data]
  11. SumTitles: a Summarization Dataset with Low Extractiveness Valentin Malykh, Konstantin Chernis, Ekaterina Artemova, Irina Piontkovskaya COLING20 [pdf] [code]
  12. Summarizing Medical Conversations via Identifying Important Utterances Yan Song, Yuanhe Tian, Nan Wang, Fei Xia COLING20 [pdf] [code]
  13. GupShup: Summarizing Open-Domain Code-Switched Conversations Laiba Mehnaz, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, Rajiv Ratn Shah EMNLP 2021 [pdf][code]
  14. SummScreen: A Dataset for Abstractive Screenplay Summarization Mingda Chen, Zewei Chu, Sam Wiseman, Kevin Gimpel ACL 2022 [pdf] [data]
    [Abs] We introduce SummScreen, a summarization dataset comprised of pairs of TV series transcripts and human written recaps. The dataset provides a challenging testbed for abstractive summarization for several reasons. Plot details are often expressed indirectly in character dialogues and may be scattered across the entirety of the transcript. These details must be found and integrated to form the succinct plot descriptions in the recaps. Also, TV scripts contain content that does not directly pertain to the central plot but rather serves to develop characters or provide comic relief. This information is rarely contained in recaps. Since characters are fundamental to TV series, we also propose two entity-centric evaluation metrics. Empirically, we characterize the dataset by evaluating several methods, including neural models and those based on nearest neighbors. An oracle extractive approach outperforms all benchmarked models according to automatic metrics, showing that the neural models are unable to fully exploit the input transcripts. Human evaluation and qualitative analysis reveal that our non-oracle models are competitive with their oracle counterparts in terms of generating faithful plot events and can benefit from better content selectors. Both oracle and non-oracle models generate unfaithful facts, suggesting future research directions.
  15. SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization Bogdan Gliwa, Iwona Mochol, Maciej Biesek, Aleksander Wawer EMNLP19 [pdf] [data]
  16. Dial2Desc: End-to-end Dialogue Description Generation Haojie Pan, Junpei Zhou, Zhou Zhao, Yan Liu, Deng Cai, Min Yang [pdf]
  17. The AMI meeting corpus: A pre-announcement Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, et al. [pdf]
  18. The ICSI meeting corpus Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. [pdf]
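
The SummScreen entry above reports that an oracle extractive approach beats all benchmarked models on automatic metrics. A common way to build such an oracle is greedy sentence selection maximizing ROUGE against the reference summary; a minimal sketch assuming the `rouge-score` package (an illustration of the general technique, not the paper's exact oracle):

```python
from rouge_score import rouge_scorer

def greedy_oracle(sentences, reference, budget=3):
    """Greedily pick sentences that maximize ROUGE-1 F1 against the reference."""
    scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
    selected, best = [], 0.0
    while len(selected) < budget:
        gains = [(scorer.score(reference, " ".join(selected + [s]))["rouge1"].fmeasure, s)
                 for s in sentences if s not in selected]
        if not gains:
            break
        score, sent = max(gains)
        if score <= best:  # stop once no remaining sentence improves the score
            break
        best = score
        selected.append(sent)
    return selected
```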

Email Summarization

  1. Focus on the Action: Learning to Highlight and Summarize Jointly for Email To-Do Items Summarization Kexun Zhang, Jiaao Chen, Diyi Yang Findings of ACL 2022 [pdf]
  2. EmailSum: Abstractive Email Thread Summarization Shiyue Zhang, Asli Celikyilmaz, Jianfeng Gao, Mohit Bansal ACL 2021 [pdf] [data]
  3. Smart To-Do: Automatic Generation of To-Do Items from Emails Sudipto Mukherjee, Subhabrata Mukherjee, Marcello Hasegawa, Ahmed Hassan Awadallah, Ryen White ACL 2020 [pdf] [code] [bib]
  4. Identifying Implicit Quotes for Unsupervised Extractive Summarization of Conversations Ryuji Kano, Yasuhide Miura, Tomoki Taniguchi, Tomoko Ohkuma AACL20 [pdf]
  5. This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation Rui Zhang, Joel Tetreault ACL 2019 [pdf] [data] [bib]
  6. Building a Dataset for Summarization and Keyword Extraction from Emails Vanessa Loza, Shibamouli Lahiri, Rada Mihalcea, Po-Hsiang Lai LREC 2014 [pdf]
  7. A Publicly Available Annotated Corpus for Supervised Email Summarization Jan Ulrich, Gabriel Murray, Giuseppe Carenini AAAI 2008 [pdf]
  8. Summarizing Email Conversations with Clue Words Giuseppe Carenini, Raymond T. Ng, Xiaodong Zhou WWW 2007 [pdf]
  9. Task-focused Summarization of Email Simon H. Corston-Oliver, Eric Ringger, Michael Gamon, Richard Campbell ACL 2004 [pdf]
  10. Summarizing email threads Owen Rambow, Lokesh Shrestha, John Chen, Christy Lauridsen NAACL 2004 [pdf] [bib]
  11. Facilitating email thread access by extractive summary generation Ani Nenkova Recent advances in natural language processing III: selected papers from RANLP [pdf]
  12. Summarizing Archived Discussions: A Beginning Paula S. Newman, John C. Blitzer Proceedings of the 8th international conference on Intelligent user interfaces [pdf]
  13. Combining linguistic and machine learning techniques for email summarization Smaranda Muresan, Evelyne Tzoukermann, Judith L. Klavans Proceedings of the ACL 2001 Workshop on Computational Natural Language Learning (CoNLL) 2001 [pdf] [bib]

Meeting Summarization

  1. ALIGNMEET: A Comprehensive Tool for Meeting Annotation, Alignment, and Evaluation Peter Polák, Muskaan Singh, Anna Nedoluzhko, Ondřej Bojar LREC 2022 [pdf] [data]
  2. TANet: Thread-Aware Pretraining for Abstractive Conversational Summarization Ze Yang, Liran Wang, Zhoujin Tian, Wei Wu, Zhoujun Li Findings of NAACL 2022 [pdf]
    [Abs] Although pre-trained language models (PLMs) have achieved great success and become a milestone in NLP, abstractive conversational summarization remains a challenging but less studied task. The difficulty lies in two aspects. One is the lack of large-scale conversational summary data. Another is that applying the existing pre-trained models to this task is tricky because of the structural dependence within the conversation and its informal expression, etc. In this work, we first build a large-scale (11M) pretraining dataset called RCSum, based on the multi-person discussions in the Reddit community. We then present TANet, a thread-aware Transformer-based network. Unlike the existing pre-trained models that treat a conversation as a sequence of sentences, we argue that the inherent contextual dependency among the utterances plays an essential role in understanding the entire conversation and thus propose two new techniques to incorporate the structural information into our model. The first is thread-aware attention which is computed by taking into account the contextual dependency within utterances. Second, we apply thread prediction loss to predict the relations between utterances. We evaluate our model on four datasets of real conversations, covering types of meeting transcripts, customer-service records, and forum threads. Experimental results demonstrate that TANet achieves a new state-of-the-art in terms of both automatic evaluation and human judgment.
  3. Summ^N: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents Yusen Zhang, Ansong Ni, Ziming Mao, Chen Henry Wu, Chenguang Zhu, Budhaditya Deb, Ahmed H. Awadallah, Dragomir Radev, Rui Zhang ACL 2022 [pdf] [code]
    [Abs] Text summarization helps readers capture salient information from documents, news, interviews, and meetings. However, most state-of-the-art pretrained language models (LM) are unable to efficiently process long text for many summarization tasks. In this paper, we propose SummN, a simple, flexible, and effective multi-stage framework for input texts that are longer than the maximum context length of typical pretrained LMs. SummN first splits the data samples and generates a coarse summary in multiple stages and then produces the final fine-grained summary based on it. Our framework can process input text of arbitrary length by adjusting the number of stages while keeping the LM input size fixed. Moreover, it can deal with both single-source documents and dialogues, and it can be used on top of different backbone abstractive summarization models. To the best of our knowledge, SummN is the first multi-stage split-then-summarize framework for long input summarization. Our experiments demonstrate that SummN outperforms previous state-of-the-art methods by improving ROUGE scores on three long meeting summarization datasets AMI, ICSI, and QMSum, two long TV series datasets from SummScreen, and a long document summarization dataset GovReport. Our data and code are available at https://github.com/psunlpgroup/Summ-N.
  4. Exploring Neural Models for Query-Focused Summarization Jesse Vig, Alexander R. Fabbri, Wojciech Kryściński [pdf] [code]
  5. Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment MengNan Qi, Hao Liu, YuZhuo Fu, Ting Liu EMNLP 2021 Findings [pdf]
  6. Meeting Summarization with Pre-training and Clustering Methods Andras Huebner, Wei Ji, Xiang Xiao [pdf] [code]
  7. Context or No Context? A preliminary exploration of human-in-the-loop approach for Incremental Temporal Summarization in meetings Nicole Beckage, Shachi H Kumar, Saurav Sahay, Ramesh Manuvinakurike EMNLP 2021 | newsum [pdf]
  8. RetrievalSum: A Retrieval Enhanced Framework for Abstractive Summarization Chenxin An, Ming Zhong, Zhichao Geng, Jianqiang Yang, Xipeng Qiu [pdf]
  9. An Exploratory Study on Long Dialogue Summarization: What Works and What's Next Yusen Zhang, Ansong Ni, Tao Yu, Rui Zhang, Chenguang Zhu, Budhaditya Deb, Asli Celikyilmaz, Ahmed Hassan Awadallah, Dragomir Radev Findings of EMNLP 2021 Short [pdf]
  10. DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, Michael Zeng AAAI 2022 [pdf] [code]
  11. Dynamic Sliding Window for Meeting Summarization Zhengyuan Liu, Nancy F. Chen SummDial@SIGDial 2021 [pdf]
  12. MeetSum: Transforming Meeting Transcript Summarization using Transformers! Nima Sadri, Bohan Zhang, Bihan Liu [pdf]
  13. Incremental temporal summarization in multiparty meetings Ramesh Manuvinakurike, Saurav Sahay, Wenda Chen, Lama Nachman SIGIR 2021 [pdf]
  14. Abstractive Spoken Document Summarization using Hierarchical Model with Multi-stage Attention Diversity Optimization Potsawee Manakul, Mark J. F. Gales, Linlin Wang INTERSPEECH 2020 [pdf] [code]
  15. What are meeting summaries? An analysis of human extractive summaries in meeting corpus Fei Liu, Yang Liu SIGDIAL 2008 [pdf]
  16. Exploring Speaker Characteristics for Meeting Summarization Fei Liu, Yang Liu INTERSPEECH 2010 [pdf]
  17. Automatic meeting summarization and topic detection system Tai-Chia Huang, Chia-Hsuan Hsieh, Hei-Chia Wang [pdf]
  18. A keyphrase based approach to interactive meeting summarization Korbinian Riedhammer, Benoit Favre, Dilek Hakkani-Tür 2008 IEEE Spoken Language Technology Workshop [pdf]
  19. A global optimization framework for meeting summarization Dan Gillick, Korbinian Riedhammer, Benoit Favre, Dilek Hakkani-Tür 2009 IEEE International Conference on Acoustics, Speech and Signal Processing [pdf]
  20. Evaluating the effectiveness of features and sampling in extractive meeting summarization Shasha Xie, Yang Liu, Hui Lin SLT 2008 [pdf]
  21. Abstractive Meeting Summarization Using Dependency Graph Fusion Siddhartha Banerjee, Prasenjit Mitra, Kazunari Sugiyama WWW 2015 [pdf]
  22. Automatic Community Creation for Abstractive Spoken Conversation Summarization Karan Singla, Evgeny Stepanov, Ali Orkan Bayer, Giuseppe Carenini, Giuseppe Riccardi ACL 2017 workshop [pdf] [bib]
  23. Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Jean-Pierre Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, Jean-Pierre Lorré ACL18 [pdf] [code]
  24. Abstractive meeting summarization based on an attentional neural model Nouha Dammak, Yassine BenAyed [pdf]
  25. A Study of Text Summarization Techniques for Generating Meeting Minutes Tu My Doan, Francois Jacquenet, Christine Largeron, Marc Bernard RCIS 2020 [pdf]
  26. Meeting Summarization, A Challenge for Deep Learning Francois Jacquenet, Marc Bernard, Christine Largeron IWANN 2019 [pdf]
  27. Generating Abstractive Summaries from Meeting Transcripts Siddhartha Banerjee, Prasenjit Mitra, Kazunari Sugiyama Proceedings of the 2015 ACM Symposium on Document Engineering, DocEng' 2015 [pdf]
  28. Align then Summarize: Automatic Alignment Methods for Summarization Corpus Creation Paul Tardy, David Janiszek, Yannick Estève, Vincent Nguyen LREC 2020 [pdf] [bib]
  29. Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng IJCAI21 [pdf] [code]
  30. How Domain Terminology Affects Meeting Summarization Performance Jia Jin Koay, Alexander Roustai, Xiaojin Dai, Dillon Burns, Alec Kerrigan, Fei Liu COLING20 Short [pdf] [code]
  31. How to Interact and Change? Abstractive Dialogue Summarization with Dialogue Act Weight and Topic Change Info Jiasheng Di, Xiao Wei, Zhenyu Zhang KSEM 2020 [pdf] [code]
  32. Abstractive Dialogue Summarization with Sentence-Gated Modeling Optimized by Dialogue Acts Chih-Wen Goo, Yun-Nung Chen SLT18 [pdf] [code]
  33. A Sliding-Window Approach to Automatic Creation of Meeting Minutes Jia Jin Koay, Alexander Roustai, Xiaojin Dai, Fei Liu [pdf]
  34. Hierarchical Learning for Generation with Long Source Sequences Tobias Rohde, Xiaoxia Wu, Yinhan Liu [pdf] [code]
  35. A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining Chenguang Zhu, Ruochen Xu, Michael Zeng, Xuedong Huang Findings of EMNLP20 [pdf] [code] [unofficial-code]
  36. Abstractive Meeting Summarization via Hierarchical Adaptive Segmental Network Learning Zhou Zhao, Haojie Pan, Changjie Fan, Yan Liu, Linlin Li, Min Yang WWW19 [pdf]
  37. Restructuring Conversations using Discourse Relations for Zero-shot Abstractive Dialogue Summarization Prakhar Ganesh, Saket Dingliwal [pdf]
  38. Keep Meeting Summaries on Topic: Abstractive Multi-Modal Meeting Summarization Manling Li, Lingyu Zhang, Heng Ji, Richard J. Radke ACL19 [pdf]
  39. Automatic analysis of multiparty meetings Steve Renals [pdf]
  40. A Multimodal Meeting Browser that Implements an Important Utterance Detection Model based on Multimodal Information Fumio Nihei, Yukiko I. Nakano [pdf]
  41. Exploring Methods for Predicting Important Utterances Contributing to Meeting Summarization Fumio Nihei, Yukiko I. Nakano [pdf]
  42. Fusing Verbal and Nonverbal Information for Extractive Meeting Summarization Fumio Nihei, Yukiko I. Nakano, Yutaka Takase GIFT18 [pdf]
  43. Meeting Extracts for Discussion Summarization Based on Multimodal Nonverbal Information Fumio Nihei, Yukiko I. Nakano, Yutaka Takase ICMI16 [pdf]
  44. Extractive Summarization of Meeting Recordings Gabriel Murray, Steve Renals, Jean Carletta [pdf]
  45. Multimodal Summarization of Meeting Recordings Berna Erol, Dar-Shyang Lee, Jonathan Hull ICME 2003 [pdf]
  46. Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schütze EACL21 [pdf]
  47. Leverage Unlabeled Data for Abstractive Speech Summarization with Self-Supervised Learning and Back-Summarization SPECOM 2020 [pdf]
  48. Focused Meeting Summarization via Unsupervised Relation Extraction Lu Wang, Claire Cardie SIGDIAL 2012 [pdf]
  49. QMSum: A New Benchmark for Query-based Multi-domain Meeting Summarization Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, Dragomir Radev NAACL21 [pdf] [data]
  50. Domain-Independent Abstract Generation for Focused Meeting Summarization Lu Wang, Claire Cardie ACL 2013 [pdf]
  51. Summarizing Decisions in Spoken Meetings Lu Wang, Claire Cardie ACL 2011 [pdf]
  52. Extracting Decisions from Multi-Party Dialogue Using Directed Graphical Models and Semantic Similarity Trung Bui, Matthew Frampton, John Dowding, Stanley Peters SIGDIAL 2009 [pdf] [bib]
  53. ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining Alexander R. Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, Dragomir Radev ACL2021 [pdf] [code]
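
Several entries above tackle inputs longer than a pretrained model's context window; item 3 (Summ^N) does so with a multi-stage split-then-summarize framework. A minimal sketch of that general pattern wrapped around any short-input summarizer (the `summarize` callable and the chunk size are placeholders, not the paper's implementation):

```python
def split_then_summarize(text, summarize, chunk_words=512, target_words=200):
    """Repeatedly summarize fixed-size chunks until the text fits, then finish."""
    words = text.split()
    while len(words) > target_words:
        chunks = [" ".join(words[i:i + chunk_words])
                  for i in range(0, len(words), chunk_words)]
        coarse = " ".join(summarize(c) for c in chunks)  # one coarse stage
        new_words = coarse.split()
        if len(new_words) >= len(words):  # guard against non-compressing stages
            break
        words = new_words
    return summarize(" ".join(words))  # final fine-grained summary
```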

Chat Summarization

  1. A Finer-grain Universal Dialogue Semantic Structures based Model For Abstractive Dialogue Summarization Yuejie Lei, Fujia Zheng, Yuanmeng Yan, Keqing He, Weiran Xu EMNLP 2021 Findings [pdf] [code]
  2. Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization Dongyub Lee, Jungwoo Lim, Taesun Whang, Chanhee Lee, Seungwoo Cho, Mingun Park, Heuiseok Lim EMNLP 2021 | newsum [pdf]
  3. Who says like a style of Vitamins: Towards Syntax-Aware Dialogue Summarization using Multi-task Learning Seolhwa Lee, Kisu Yang, Chanjun Park, João Sedoc, Heuiseok Lim [pdf]
  4. Controllable Neural Dialogue Summarization with Personal Named Entity Planning Zhengyuan Liu, Nancy F. Chen EMNLP 2021 [pdf]
  5. GupShup: Summarizing Open-Domain Code-Switched Conversations Laiba Mehnaz, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle G. Lee, Anish Acharya, Rajiv Ratn Shah EMNLP 2021 [pdf][code]
  6. Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization Junpeng Liu, Yanyan Zou, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Caixia Yuan, Xiaojie Wang EMNLP 2021 Findings [pdf] [code]
  7. Give the Truth: Incorporate Semantic Slot into Abstractive Dialogue Summarization Lulu Zhao, Weihao Zeng, Weiran Xu, Jun Guo EMNLP 2021 Findings [pdf]
  8. Low-Resource Dialogue Summarization with Domain-Agnostic Multi-Source Pretraining Yicheng Zou, Bolin Zhu, Xingwu Hu, Tao Gui, Qi Zhang EMNLP 2021 [pdf] [code]
  9. Enhancing Semantic Understanding with Self-Supervised Methods for Abstractive Dialogue Summarization Hyunjae Lee, Jaewoong Yun, Hyunjin Choi, Seongho Joe, Youngjune L. Gwon Interspeech 2021 [pdf]
  10. Dialogue summarization with supporting utterance flow modeling and fact regularization Wang Chen, Piji Li, Hou Pong Chan, Irwin King Knowledge-Based Systems [pdf]
  11. Situation-Based Multiparticipant Chat Summarization: a Concept, an Exploration-Annotation Tool and an Example Collection Anna Smirnova, Evgeniy Slobodkin, George Chernishev ACL 2021 Student Research Workshop [pdf] [tool] [data]
  12. Coreference-Aware Dialogue Summarization Zhengyuan Liu, Ke Shi, Nancy F. Chen SIGDIAL 2021 [pdf]
  13. Incorporating Commonsense Knowledge into Abstractive Dialogue Summarization via Heterogeneous Graph Networks Xiachong Feng, Xiaocheng Feng, Bing Qin CCL 2021 [pdf]
  14. Hierarchical Speaker-Aware Sequence-to-Sequence Model for Dialogue Summarization Yuejie Lei, Yuanmeng Yan, Zhiyuan Zeng, Keqing He, Ximing Zhang, Weiran Xu ICASSP21 [pdf]
  15. Summary Grounded Conversation Generation Chulaka Gunasekara, Guy Feigenblat, Benjamin Sznajder, Sachindra Joshi, David Konopnicki Findings of ACL 2021 [pdf]
  16. Controllable Abstractive Dialogue Summarization with Sketch Supervision Chien-Sheng Wu, Linqing Liu, Wenhao Liu, Pontus Stenetorp, Caiming Xiong ACL-Findings 2021 [pdf] [code]
  17. Structure-Aware Abstractive Conversation Summarization via Discourse and Action Graphs Jiaao Chen, Diyi Yang NAACL21 [pdf] [code]
  18. Planning with Learned Entity Prompts for Abstractive Summarization Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simoes, Ryan McDonald TACL 2021 [pdf]
  19. Improving Abstractive Dialogue Summarization with Graph Structures and Topic Words Lulu Zhao, Weiran Xu, Jun Guo COLING20 [pdf]
  20. Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization Jiaao Chen, Diyi Yang EMNLP20 [pdf] [code]
  21. SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization Bogdan Gliwa, Iwona Mochol, Maciej Biesek, Aleksander Wawer EMNLP19 [pdf] [data]

Medical Dialogue Summarization

  1. Counseling Summarization using Mental Health Knowledge Guided Utterance Filtering Aseem Srivastava, Tharun Suresh, Sarah Peregrine (Grin) Lord, Md. Shad Akhtar, Tanmoy Chakraborty KDD 2022 ADS Track [pdf]
    [Abs] The psychotherapy intervention technique is a multifaceted conversation between a therapist and a patient. Unlike general clinical discussions, psychotherapy's core components (viz. symptoms) are hard to distinguish, thus becoming a complex problem to summarize later. A structured counseling conversation may contain discussions about symptoms, history of mental health issues, or the discovery of the patient's behavior. It may also contain discussion filler words irrelevant to a clinical summary. We refer to these elements of structured psychotherapy as counseling components. In this paper, the aim is mental health counseling summarization to build upon domain knowledge and to help clinicians quickly glean meaning. We create a new dataset after annotating 12.9K utterances of counseling components and reference summaries for each dialogue. Further, we propose ConSum, a novel counseling-component guided summarization model. ConSum undergoes three independent modules. First, to assess the presence of depressive symptoms, it filters utterances utilizing the Patient Health Questionnaire (PHQ-9), while the second and third modules aim to classify counseling components. At last, we propose a problem-specific Mental Health Information Capture (MHIC) evaluation metric for counseling summaries. Our comparative study shows that we improve on performance and generate cohesive, semantic, and coherent summaries. We comprehensively analyze the generated summaries to investigate the capturing of psychotherapy elements. Human and clinical evaluations on the summary show that ConSum generates quality summary. Further, mental health experts validate the clinical acceptability of the ConSum. Lastly, we discuss the uniqueness in mental health counseling summarization in the real world and show evidences of its deployment on an online application with the support of http://mpathic.ai/
  2. Adding more data does not always help: A study in medical conversation summarization with PEGASUS Varun Nair, Namit Katariya, Xavier Amatriain, Ilya Valmianski, Anitha Kannan [pdf]
  3. Leveraging Pretrained Models for Automatic Summarization of Doctor-Patient Conversations Longxiang Zhang, Renato Negrinho, Arindam Ghosh, Vasudevan Jagannathan, Hamid Reza Hassanzadeh, Thomas Schaaf, and Matthew R. Gormley Findings of EMNLP 2021 [pdf]
  4. Medically Aware GPT-3 as a Data Generator for Medical Dialogue Summarization Bharath Chintagunta, Namit Katariya, Xavier Amatriain, Anitha Kannan NAACL | NLPMC 2021 [pdf1] [pdf2]
  5. Generating SOAP Notes from Doctor-Patient Conversations Using Modular Summarization Techniques Kundan Krishna, Sopan Khosla, Jeffrey P. Bigham, Zachary C. Lipton ACL 2021 [pdf] [code]
  6. Summarizing Medical Conversations via Identifying Important Utterances Yan Song, Yuanhe Tian, Nan Wang, Fei Xia COLING 2020 [pdf] [code] [bib]
  7. Dr.Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures Anirudh Joshi, Namit Katariya, Xavier Amatriain, Anitha Kannan Findings of EMNLP 2020 [pdf] [bib]
  8. Medical Dialogue Summarization for Automated Reporting in Healthcare Sabine Molenaar, Lientje Maas, Verónica Burriel, Fabiano Dalpiaz, Sjaak Brinkkemper Advanced Information Systems Engineering Workshops 2020 [pdf] [bib]
  9. Generating Medical Reports from Patient-Doctor Conversations using Sequence-to-Sequence Models Seppo Enarvi, Marilisa Amoia, Miguel Del-Agua Teba, Brian Delaney, Frank Diehl, Stefan Hahn, Kristina Harris, Liam McGrath, Yue Pan, Joel Pinto, Luca Rubini, Miguel Ruiz, Gagandeep Singh, Fabian Stemmer, Weiyi Sun, Paul Vozila, Thomas Lin, Ranjani Ramamurthy ACL 2020 Short [pdf] [bib]
  10. Automatically Generating Psychiatric Case Notes From Digital Transcripts of Doctor-Patient Conversations Nazmul Kazi, Indika Kahanda NAACL 2019 [pdf] [bib]
  11. Alignment Annotation for Clinic Visit Dialogue to Clinical Note Sentence Language Generation Wen-wai Yim, Meliha Yetisgen, Jenny Huang, Micah Grossman LREC 2020 [pdf] [bib]
  12. Topic-aware Pointer-Generator Networks for Summarizing Spoken Conversations Zhengyuan Liu, Angela Ng, Sheldon Lee, Ai Ti Aw, Nancy F. Chen ASRU 2019 [pdf]
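
A recurring shape in the papers above (e.g., items 1 and 6) is filter-then-summarize: first keep only the clinically important utterances, then summarize what remains. A schematic sketch of that shape, where `is_important` stands in for the papers' learned filters (such as ConSum's PHQ-9-guided module) and is purely a placeholder:

```python
from typing import Callable, List

def filter_then_summarize(utterances: List[str],
                          is_important: Callable[[str], bool],
                          summarize: Callable[[str], str]) -> str:
    """Drop unimportant utterances, then summarize the remainder."""
    kept = [u for u in utterances if is_important(u)]
    return summarize(" ".join(kept))
```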

Customer Service Summarization

  1. Other Roles Matter! Enhancing Role-Oriented Dialogue Summarization via Role Interactions Haitao Lin, Junnan Zhu, Lu Xiang, Yu Zhou, Jiajun Zhang, Chengqing Zong ACL 2022 [pdf] [code]
    [Abs] Role-oriented dialogue summarization is to generate summaries for different roles in the dialogue, e.g., merchants and consumers. Existing methods handle this task by summarizing each role’s content separately and thus are prone to ignore the information from other roles. However, we believe that other roles’ content could benefit the quality of summaries, such as the omitted information mentioned by other roles. Therefore, we propose a novel role interaction enhanced method for role-oriented dialogue summarization. It adopts cross attention and decoder self-attention interactions to interactively acquire other roles’ critical information. The cross attention interaction aims to select other roles’ critical dialogue utterances, while the decoder self-attention interaction aims to obtain key information from other roles’ summaries. Experimental results have shown that our proposed method significantly outperforms strong baselines on two public role-oriented dialogue summarization datasets. Extensive analyses have demonstrated that other roles’ content could help generate summaries with more complete semantics and correct topic structures.
  2. An End-to-End Dialogue Summarization System for Sales Calls Abedelkadir Asi, Song Wang, Roy Eisenstadt, Dean Geckt, Yarin Kuper, Yi Mao, Royi Ronen NAACL 2022 [pdf]
  3. Heuristic-based Inter-training to Improve Few-shot Multi-perspective Dialog Summarization Benjamin Sznajder, Chulaka Gunasekara, Guy Lev, Sachin Joshi, Eyal Shnarch, Noam Slonim [pdf]
  4. Dialogue Summaries as Dialogue States (DS2), Template-Guided Summarization for Few-shot Dialogue State Tracking Jamin Shin, Hangyeol Yu, Hyeongdon Moon, Andrea Madotto, Juneyoung Park Findings of ACL 2022 [pdf] [code]
  5. TWEETSUMM - A Dialog Summarization Dataset for Customer Service Guy Feigenblat, Chulaka Gunasekara, Benjamin Sznajder, Sachindra Joshi, David Konopnicki, Ranit Aharonov [pdf] [data]
  6. Extractive Dialogue Summarization Without Annotation Based on Distantly Supervised Machine Reading Comprehension in Customer Service Bing Ma, Haifeng Sun, Jingyu Wang, Qi Qi, Jianxin Liao TASLP [pdf]
  7. TODSum: Task-Oriented Dialogue Summarization with State Tracking Lulu Zhao, Fujia Zheng, Keqing He, Weihao Zeng, Yuejie Lei, Huixing Jiang, Wei Wu, Weiran Xu, Jun Guo, Fanyu Meng [pdf]
  8. CSDS: A Fine-grained Chinese Dataset for Customer Service Dialogue Summarization Haitao Lin, Liqun Ma, Junnan Zhu, Lu Xiang, Yu Zhou, Jiajun Zhang, Chengqing Zong EMNLP 2021 [pdf] [data]
  9. Distant Supervision based Machine Reading Comprehension for Extractive Summarization in Customer Service Bing Ma, Cao Liu, Jingyu Wang, Shujie Hu, Fan Yang, Xunliang Cai, Guanglu Wan, Jiansong Chen, Jianxin Liao SIGIR 2021 [pdf]
  10. Unsupervised Abstractive Dialogue Summarization for Tete-a-Tetes Xinyuan Zhang, Ruiyi Zhang, Manzil Zaheer, Amr Ahmed AAAI21 [pdf]
  11. Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling Yicheng Zou, Lujun Zhao, Yangyang Kang, Jun Lin, Minlong Peng, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, Xiaozhong Liu AAAI21 [pdf] [code]
  12. Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and Context-Aware Auto-Encoders Yicheng Zou, Jun Lin, Lujun Zhao, Yangyang Kang, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, Xiaozhong Liu AAAI21 [pdf] [code]
  13. Abstractive Dialog Summarization with Semantic Scaffolds Lin Yuan, Zhou Yu [pdf]
  14. Automatic Dialogue Summary Generation for Customer Service Chunyi Liu, Peng Wang, Jiang Xu, Zang Li and Jieping Ye KDD19 [pdf]
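
The role-interaction idea above (Other Roles Matter!, ACL 2022) can be pictured with a toy decoder layer that cross-attends separately to each role's encoded utterances and fuses the two streams. This is an illustrative sketch only, not the authors' implementation; every module name and shape here is made up.

```python
# A minimal, hypothetical sketch of role-interaction decoding: when generating
# a summary for one role, the decoder also attends to the other role's
# encoded utterances. Shapes and names are illustrative only.
import torch
import torch.nn as nn

class RoleInteractionDecoderLayer(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.own_role_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.other_role_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = nn.Linear(2 * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, tgt, own_mem, other_mem):
        # Standard decoder self-attention over the partial summary.
        x, _ = self.self_attn(tgt, tgt, tgt)
        # Cross-attend to this role's utterances and to the other role's.
        own, _ = self.own_role_attn(x, own_mem, own_mem)
        other, _ = self.other_role_attn(x, other_mem, other_mem)
        # Fuse both streams before the (omitted) feed-forward block.
        return self.norm(x + self.fuse(torch.cat([own, other], dim=-1)))

# Toy usage: batch of 2, 10 summary tokens, 20 utterance tokens per role.
layer = RoleInteractionDecoderLayer()
out = layer(torch.randn(2, 10, 256), torch.randn(2, 20, 256), torch.randn(2, 20, 256))
print(out.shape)  # torch.Size([2, 10, 256])
```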

Domain Adaptation

  1. Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable Fine-tuning for Zero-Shot Dialogue Summarization Lulu Zhao, Fujia Zheng, Weihao Zeng, Keqing He, Weiran Xu, Huixing Jiang, Wei Wu, Yanan Wu NAACL 2022 [pdf] [code]
  2. AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization Tiezheng Yu, Zihan Liu, Pascale Fung NAACL21 [pdf] [code]
  3. Domain Adaptation to Summarize Human Conversations Oana Sandu, Giuseppe Carenini, Gabriel Murray, Raymond Ng ACL2010 Workshop [pdf]

Others

  1. Effectiveness of French Language Models on Abstractive Dialogue Summarization Task Yongxin Zhou, François Portet, Fabien Ringeval LREC 2022 [pdf]
    [Abs] Pre-trained language models have established the state-of-the-art on various natural language processing tasks, including dialogue summarization, which allows the reader to quickly access key information from long conversations in meetings, interviews or phone calls. However, such dialogues are still difficult to handle with current models because the spontaneity of the language involves expressions that are rarely present in the corpora used for pre-training the language models. Moreover, the vast majority of the work accomplished in this field has been focused on English. In this work, we present a study on the summarization of spontaneous oral dialogues in French using several language specific pre-trained models: BARThez, and BelGPT-2, as well as multilingual pre-trained models: mBART, mBARThez, and mT5. Experiments were performed on the DECODA (Call Center) dialogue corpus whose task is to generate abstractive synopses from call center conversations between a caller and one or several agents depending on the situation. Results show that the BARThez models offer the best performance far above the previous state-of-the-art on DECODA. We further discuss the limits of such pre-trained models and the challenges that must be addressed for summarizing spontaneous dialogues.
  2. Data Augmentation for Low-Resource Dialogue Summarization Yongtai Liu, Joshua Maynez, Gonçalo Simões, Shashi Narayan Findings of NAACL 2022 [pdf] (a rough sketch of the span-infilling step appears after this list)
    [Abs] We present DADS, a novel Data Augmentation technique for low-resource Dialogue Summarization. Our method generates synthetic examples by replacing sections of text from both the input dialogue and summary while preserving the augmented summary to correspond to a viable summary for the augmented dialogue. We utilize pretrained language models that produce highly likely dialogue alternatives while still being free to generate diverse alternatives. We applied our data augmentation method to the SAMSum dataset in low resource scenarios, mimicking real world problems such as chat, thread, and meeting summarization where large scale supervised datasets with human-written summaries are scarce. Through both automatic and human evaluations, we show that DADS shows strong improvements for low resource scenarios while generating topically diverse summaries without introducing additional hallucinations to the summaries.
  3. An End-to-End Dialogue Summarization System for Sales Calls Abedelkadir Asi, Song Wang, Roy Eisenstadt, Dean Geckt, Yarin Kuper, Yi Mao, Royi Ronen NAACL 2022 Industry Track [pdf]
    [Abs] Summarizing sales calls is a routine task performed manually by salespeople. We present a production system which combines generative models fine-tuned for customer-agent setting, with a human-in-the-loop user experience for an interactive summary curation process. We address challenging aspects of dialogue summarization task in a real-world setting including long input dialogues, content validation, lack of labeled data and quality evaluation. We show how GPT-3 can be leveraged as an offline data labeler to handle training data scarcity and accommodate privacy constraints in an industrial setting. Experiments show significant improvements by our models in tackling the summarization and content validation tasks on public datasets.
  4. Few-shot fine-tuning SOTA summarization models for medical dialogues David Fraile Navarro, Mark Dras, Shlomo Berkovsky NAACL 2022 Student Research Workshop [pdf] [code]
    [Abs] Abstractive summarization of medical dialogues presents a challenge for standard training approaches, given the paucity of suitable datasets. We explore the performance of state-of-the-art models with zero-shot and few-shot learning strategies and measure the impact of pretraining with general domain and dialogue-specific text on the summarization performance.
  5. DialSummEval: Revisiting Summarization Evaluation for Dialogues Mingqi Gao, Xiaojun Wan NAACL 2022 [pdf] [code]
    [Abs] Dialogue summarization is receiving increasing attention from researchers due to its extraordinary difficulty and unique application value. We observe that current dialogue summarization models have flaws that may not be well exposed by frequently used metrics such as ROUGE. In our paper, we re-evaluate 18 categories of metrics in terms of four dimensions: coherence, consistency, fluency and relevance, as well as a unified human evaluation of various models for the first time. Some noteworthy trends which are different from the conventional summarization tasks are identified. We will release DialSummEval, a multi-faceted dataset of human judgments containing the outputs of 14 models on SAMSum.
  6. Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable Fine-tuning for Zero-Shot Dialogue Summarization Lulu Zhao, Fujia Zheng, Weihao Zeng, Keqing He, Weiran Xu, Huixing Jiang, Wei Wu, Yanan Wu NAACL 2022 [pdf] [code]
    [Abs] The most advanced abstractive dialogue summarizers lack generalization ability on new domains and the existing researches for domain adaptation in summarization generally rely on large-scale pre-trainings. To explore the lightweight fine-tuning methods for domain adaptation of dialogue summarization, in this paper, we propose an efficient and generalizable Domain-Oriented Prefix-tuning model, which utilizes a domain word initialized prefix module to alleviate domain entanglement and adopts discrete prompts to guide the model to focus on key contents of dialogues and enhance model generalization. We conduct zero-shot experiments and build domain adaptation benchmarks on two multi-domain dialogue summarization datasets, TODSum and QMSum. Adequate experiments and qualitative analysis prove the effectiveness of our methods.
  7. From spoken dialogue to formal summary: An utterance rewriting for dialogue summarization Yue Fang, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Bo Long, Yanyan Lan, Yanquan Zhou NAACL 2022 [pdf]
    [Abs] Due to the dialogue characteristics of unstructured contexts and multi-parties with first-person perspective, many successful text summarization works have failed when dealing with dialogue summarization. In dialogue summarization task, the input dialogue is usually spoken style with ellipsis and co-references but the output summaries are more formal and complete. Therefore, the dialogue summarization model should be able to complete the ellipsis content and co-reference information and then produce a suitable summary accordingly. However, the current state-of-the-art models pay more attention on the topic or structure of summary, rather than the consistency of dialogue summary with its input dialogue context, which may suffer from the personal and logical inconsistency problem. In this paper, we propose a new model, named ReWriteSum, to tackle this problem. Firstly, an utterance rewriter is conducted to complete the ellipsis content of dialogue content and then obtain the rewriting utterances. Then, the co-reference data augmentation mechanism is utilized to replace the referential person name with its specific name to enhance the personal information. Finally, the rewriting utterances and the co-reference replacement data are used in the standard BART model. Experimental results on both SAMSum and DialSum datasets show that our ReWriteSum significantly outperforms baseline models, in terms of both metric-based and human evaluations. Further analysis on multi-speakers also shows that ReWriteSum can obtain relatively higher improvement with more speakers, validating the correctness and property of ReWriteSum.
  8. Unsupervised Abstractive Dialogue Summarization with Word Graphs and POV Conversion Seongmin Park, Jihwa Lee WIT Workshop @ ACL2022 [pdf] [code]
  9. MSAMSum: Towards Benchmarking Multi-lingual Dialogue Summarization Xiachong Feng, Xiaocheng Feng, Bing Qin ACL 2022 DialDoc Workshop [pdf] [data]
  10. The Cross-lingual Conversation Summarization Challenge Yulong Chen, Ming Zhong, Xuefeng Bai, Naihao Deng, Jing Li, Xianchao Zhu, Yue Zhang [pdf]
  11. Post-Training Dialogue Summarization using Pseudo-Paraphrasing Qi Jia, Yizhu Liu, Haifeng Tang, Kenny Q. Zhu Findings of NAACL 2022 [pdf] [code]
    [Abs] Previous dialogue summarization techniques adapt large language models pretrained on the narrative text by injecting dialogue-specific features into the models. These features either require additional knowledge to recognize or make the resulting models harder to tune. To bridge the format gap between dialogues and narrative summaries in dialogue summarization tasks, we propose to post-train pretrained language models (PLMs) to rephrase from dialogue to narratives. After that, the model is fine-tuned for dialogue summarization as usual. Comprehensive experiments show that our approach significantly improves vanilla PLMs on dialogue summarization and outperforms other SOTA models by the summary quality and implementation costs.
  12. ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization Jiaan Wang, Fandong Meng, Ziyao Lu, Duo Zheng, Zhixu Li, Jianfeng Qu, Jie Zhou [pdf] [code]
  13. CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning Xiangru Tang, Arjun Nair, Borui Wang, Bingyao Wang, Jai Desai, Aaron Wade, Haoran Li, Asli Celikyilmaz, Yashar Mehdad, Dragomir Radev [pdf]
  14. Are We Summarizing the Right Way? A Survey of Dialogue Summarization Data Sets Don Tuggener, Margot Mieskes, Jan Deriu, Mark Cieliebak EMNLP 2021 | NewSum [pdf]
  15. Dialogue Inspectional Summarization with Factual Inconsistency Awareness Leilei Gan, Yating Zhang, Kun Kuang, Lin Yuan, Shuo Li, Changlong Sun, Xiaozhong Liu, Fei Wu [pdf]
  16. Do Boat and Ocean Suggest Beach? Dialogue Summarization with External Knowledge Tianqing Fang, Haojie Pan, Hongming Zhang, Yangqiu Song, Kun Xu, Dong Yu AKBC 2021 [pdf] [code]
  17. Prompt scoring system for dialogue summarization using GPT3 George Prodan, Elena Pelican [pdf]
  18. Simple Conversational Data Augmentation for Semi-supervised Abstractive Dialogue Summarization Jiaao Chen, Diyi Yang EMNLP 2021 [pdf] [code]
  19. A Bag of Tricks for Dialogue Summarization Muhammad Khalifa, Miguel Ballesteros, Kathleen McKeown EMNLP 2021 Short [pdf]
  20. Hierarchical Summarization for Longform Spoken Dialog Daniel Li, Thomas Chen, Albert Tung, Lydia Chilton UIST 2021 [pdf]
  21. RepSum: Unsupervised Dialogue Summarization based on Replacement Strategy Xiyan Fu, Yating Zhang, Tianyi Wang, Xiaozhong Liu, Changlong Sun, Zhenglu Yang ACL 2021 [pdf] [code]
  22. Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu ACL 2021 [pdf] [code]
  23. A Two-Phase Approach for Abstractive Podcast Summarization Chujie Zheng, Kunpeng Zhang, Harry Jiannan Wang, Ling Fan TREC 2020 Podcasts Track [pdf]
  24. Hierarchical Learning for Generation with Long Source Sequences Tobias Rohde, Xiaoxia Wu, Yinhan Liu [pdf] [code]
  25. Improving Online Forums Summarization via Unifying Hierarchical Attention Networks with Convolutional Neural Networks Sansiri Tarnpradab, Fereshteh Jafariakinabad, Kien A. Hua [pdf] [code]
  26. Extractive Summarization of Call Transcripts Pratik K. Biswas, Aleksandr Iakubovich [pdf]
  27. Legal Summarization for Multi-role Debate Dialogue via Controversy Focus Mining and Multi-task Learning Xinyu Duan, Yating Zhang, Lin Yuan, Xin Zhou, Xiaozhong Liu, Tianyi Wang, Ruocheng Wang, Qiong Zhang, Changlong Sun, Fei Wu CIKM 2019 [pdf]
  28. Collabot: Personalized Group Chat Summarization Naama Tepper, Anat Hashavit, Maya Barnea, Inbal Ronen, Lior Leiba WSDM 2018 [pdf]
  29. Summarizing Dialogic Arguments from Social Media Amita Misra, Shereen Oraby, Shubhangi Tandon, Sharath TS, Pranav Anand, Marilyn Walker SemDial 2017 [pdf]
  30. The SENSEI Annotated Corpus: Human Summaries of Reader Comment Conversations in On-line News Emma Barker, Monica Lestari Paramita, Ahmet Aker, Emina Kurtic, Mark Hepple, Robert Gaizauskas SIGDIAL 2016 [pdf]
  31. Semantic Similarity Applied to Spoken Dialogue Summarization Iryna Gurevych, Michael Strube COLING 2004 [pdf] [bib] (Switchboard dialogues)
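
The span-replacement step behind DADS (Data Augmentation for Low-Resource Dialogue Summarization, above) can be approximated with T5's sentinel-token infilling. The model choice, masking position, and sampling settings below are assumptions for illustration; the paper's full procedure also keeps the augmented summary consistent with the augmented dialogue.

```python
# A rough sketch of LM-based span infilling for dialogue augmentation,
# using T5 sentinel masking as a stand-in for the paper's procedure.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Mask one utterance span with a sentinel and let the model fill it in.
dialogue = "Amanda: I baked cookies. Do you want some? Jerry: <extra_id_0>"
inputs = tok(dialogue, return_tensors="pt")
# Sample several alternatives to diversify the augmented training data.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True,
                         num_return_sequences=3, top_p=0.9)
for o in outputs:
    print(tok.decode(o, skip_special_tokens=True))
```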

Long Document

  1. An Empirical Survey on Long Document Summarization: Datasets, Models and Metrics Huan Yee Koh, Jiaxin Ju, Ming Liu, Shirui Pan ACM Computing Surveys [pdf]
    [Abs] Long documents such as academic articles and business reports have been the standard format to detail out important issues and complicated subjects that require extra attention. An automatic summarization system that can effectively condense long documents into short and concise texts to encapsulate the most important information would thus be significant in aiding the reader's comprehension. Recently, with the advent of neural architectures, significant research efforts have been made to advance automatic text summarization systems, and numerous studies on the challenges of extending these systems to the long document domain have emerged. In this survey, we provide a comprehensive overview of the research on long document summarization and a systematic evaluation across the three principal components of its research setting: benchmark datasets, summarization models, and evaluation metrics. For each component, we organize the literature within the context of long document summarization and conduct an empirical analysis to broaden the perspective on current research progress. The empirical analysis includes a study on the intrinsic characteristics of benchmark datasets, a multi-dimensional analysis of summarization models, and a review of the summarization evaluation metrics. Based on the overall findings, we conclude by proposing possible directions for future exploration in this rapidly growing field.
  2. MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes Nianlong Gu, Elliott Ash, Richard Hahnloser ACL 2022 [pdf] [code] (a toy sketch of this step-wise policy appears after this list)
    [Abs] We introduce MemSum (Multi-step Episodic Markov decision process extractive SUMmarizer), a reinforcement-learning-based extractive summarizer enriched at each step with information on the current extraction history. When MemSum iteratively selects sentences into the summary, it considers a broad information set that would intuitively also be used by humans in this task: 1) the text content of the sentence, 2) the global text context of the rest of the document, and 3) the extraction history consisting of the set of sentences that have already been extracted. With a lightweight architecture, MemSum obtains state-of-the-art test-set performance (ROUGE) in summarizing long documents taken from PubMed, arXiv, and GovReport. Ablation studies demonstrate the importance of local, global, and history information. A human evaluation confirms the high quality and low redundancy of the generated summaries, stemming from MemSum’s awareness of extraction history.
  3. Semantic Self-Segmentation for Abstractive Summarization of Long Legal Documents in Low-Resource Regimes Gianluca Moro, Luca Ragazzi AAAI 2022 [pdf]
  4. Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents by Sampling Summary Views Marcio Fonseca, Yftah Ziser, Shay B. Cohen [pdf]
  5. Leveraging Locality in Abstractive Text Summarization Yixin Liu, Ansong Ni, Linyong Nan, Budhaditya Deb, Chenguang Zhu, Ahmed H. Awadallah, Dragomir Radev [pdf]
  6. SNaC: Coherence Error Detection for Narrative Summarization Tanya Goyal, Junyi Jessy Li, Greg Durrett [pdf]
  7. Sequence-Based Extractive Summarisation for Scientific Articles Daniel Kershaw, Rob Koeling [pdf]
  8. LDKP: A Dataset for Identifying Keyphrases from Long Scientific Documents Debanjan Mahata, Naveen Agarwal, Dibya Gautam, Amardeep Kumar, Swapnil Parekh, Yaman Kumar Singla, Anish Acharya, Rajiv Ratn Shah [pdf] [data1] [data2]
  9. HIBRIDS: Attention with Hierarchical Biases for Structure-aware Long Document Summarization Shuyang Cao, Lu Wang ACL 2022 [pdf] [code] [data]
    [Abs] Document structure is critical for efficient information consumption. However, it is challenging to encode it efficiently into the modern Transformer architecture. In this work, we present HIBRIDS, which injects Hierarchical Biases foR Incorporating Document Structure into attention score calculation. We further present a new task, hierarchical question-summary generation, for summarizing salient content in the source document into a hierarchy of questions and summaries, where each follow-up question inquires about the content of its parent question-summary pair. We also annotate a new dataset with 6,153 question-summary hierarchies labeled on government reports. Experiment results show that our model produces better question-summary hierarchies than comparisons on both hierarchy quality and content coverage, a finding also echoed by human judges. Additionally, our model improves the generation of long-form summaries from long government reports and Wikipedia articles, as measured by ROUGE scores.
  10. HiStruct+: Improving Extractive Text Summarization with Hierarchical Structure Information Qian Ruan, Malte Ostendorff, Georg Rehm [pdf] [code]
  11. Long Document Summarization with Top-down and Bottom-up Inference Bo Pang, Erik Nijkamp, Wojciech Kryściński, Silvio Savarese, Yingbo Zhou, Caiming Xiong [pdf]
  12. Summ^N: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents Yusen Zhang, Ansong Ni, Ziming Mao, Chen Henry Wu, Chenguang Zhu, Budhaditya Deb, Ahmed H. Awadallah, Dragomir Radev, Rui Zhang ACL 2022 [pdf] (a bare-bones multi-stage sketch appears after this list)
  13. DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed H. Awadallah, Dragomir Radev ACL 2022 [pdf] [code]
    [Abs] Transformer-based models have achieved state-of-the-art performance on short-input summarization. However, they still struggle with summarizing longer text. In this paper, we present DYLE, a novel dynamic latent extraction approach for abstractive long-input summarization. DYLE jointly trains an extractor and a generator and treats the extracted text snippets as the latent variable, allowing dynamic snippet-level attention weights during decoding. To provide adequate supervision, we propose simple yet effective heuristics for oracle extraction as well as a consistency loss term, which encourages the extractor to approximate the averaged dynamic weights predicted by the generator. We evaluate our method on different long-document and long-dialogue summarization tasks: GovReport, QMSum, and arXiv. Experiment results show that DYLE outperforms all existing methods on GovReport and QMSum, with gains up to 6.1 ROUGE, while yielding strong results on arXiv. Further analysis shows that the proposed dynamic weights provide interpretability of our generation process.
  14. SciBERTSUM: Extractive Summarization for Scientific Documents Athar Sefid, C Lee Giles [pdf] [code]
  15. Neural Content Extraction for Poster Generation of Scientific Papers Sheng Xu, Xiaojun Wan [pdf]
  16. LongT5: Efficient Text-To-Text Transformer for Long Sequences Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang [pdf]
  17. The Influence of Data Pre-processing and Post-processing on Long Document Summarization Xinwei Du, Kailun Dong, Yuchen Zhang, Yongsheng Li, Ruei-Yu Tsay [pdf]
  18. End-to-End Segmentation-based News Summarization Yang Liu, Chenguang Zhu, Michael Zeng [pdf]
  19. Leveraging Information Bottleneck for Scientific Document Summarization Jiaxin Ju, Ming Liu, Huan Yee Koh, Yuan Jin, Lan Du, Shirui Pan EMNLP 2021 Findings [pdf]
  20. Generating Summaries for Scientific Paper Review Ana Sabina Uban, Cornelia Caragea [pdf]
  21. Sparsity and Sentence Structure in Encoder-Decoder Attention of Summarization Systems Potsawee Manakul, Mark J. F. Gales EMNLP 2021 short paper [pdf] [code]
  22. Bringing Structure into Summaries: a Faceted Summarization Dataset for Long Scientific Documents Rui Meng, Khushboo Thaker, Lei Zhang, Yue Dong, Xingdi Yuan, Tong Wang, Daqing He ACL 2021 short [pdf] [data]
  23. Sliding Selector Network with Dynamic Memory for Extractive Summarization of Long Documents Peng Cui, Le Hu NAACL21 [pdf] [code]
  24. Long-Span Summarization via Local Attention and Content Selection Potsawee Manakul, Mark J. F. Gales ACL 2021 [pdf]
  25. Globalizing BERT-based Transformer Architectures for Long Document Summarization Quentin Grail, Julien Perez, Eric Gaussier EACL 2021 [pdf]
  26. Discourse-Aware Unsupervised Summarization for Long Scientific Documents Yue Dong, Andrei Mircea Romascanu, Jackie Chi Kit Cheung EACL21 [pdf] [code]
  27. Enhancing Scientific Papers Summarization with Citation Graph Chenxin An, Ming Zhong, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang AAAI 2021 [pdf] [code]
  28. Efficient Attentions for Long Document Summarization Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, Lu Wang NAACL 2021 [pdf] [code] [data]
  29. Can We Automate Scientific Reviewing? Weizhe Yuan, Pengfei Liu, Graham Neubig [pdf] [code]
  30. Long Document Summarization in a Low Resource Setting using Pretrained Language Models Ahsaas Bajaj, Pavitra Dangati, Kalpesh Krishna, Pradhiksha Ashok Kumar, Rheeya Uppaal, Bradford Windsor, Eliot Brenner, Dominic Dotterrer, Rajarshi Das, Andrew McCallum ACL 2021 Student Research Workshop [pdf]
  31. Summaformers @ LaySumm 20, LongSumm 20 Sayar Ghosh Roy, Nikhil Pinnaparaju, Risubh Jain, Manish Gupta, Vasudeva Varma SDP EMNLP 2020 [pdf]
  32. On Generating Extended Summaries of Long Documents Sajad Sotudeh, Arman Cohan, Nazli Goharian SDU21 [pdf] [code]
  33. Self-Supervised Learning for Visual Summary Identification in Scientific Publications Shintaro Yamamoto, Anne Lauscher, Simone Paolo Ponzetto, Goran Glavaš, Shigeo Morishima [pdf]
  34. Systematically Exploring Redundancy Reduction in Summarizing Long Documents Wen Xiao, Giuseppe Carenini AACL20 [pdf] [code]
  35. On Extractive and Abstractive Neural Document Summarization with Transformer Language Models Sandeep Subramanian, Raymond Li, Jonathan Pilault, Christopher Pal EMNLP20 [pdf]
  36. Dimsum @LaySumm 20: BART-based Approach for Scientific Document Summarization Tiezheng Yu, Dan Su, Wenliang Dai, Pascale Fung [pdf] [code]
  37. SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline Jiaxin Ju, Ming Liu, Longxiang Gao, Shirui Pan [pdf] [code]
  38. Enhancing Extractive Text Summarization with Topic-Aware Graph Neural Networks Peng Cui, Le Hu, Yuanchao Liu COLING20 [pdf]
  39. Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles Yao Lu, Yue Dong, Laurent Charlin EMNLP20 Short [pdf] [data]
  40. A Divide-and-Conquer Approach to the Summarization of Long Documents Alexios Gidiotis, Grigorios Tsoumakas IEEE/ACM Transactions on Audio, Speech, and Language Processing [pdf]
  41. TLDR: Extreme Summarization of Scientific Documents Isabel Cachola, Kyle Lo, Arman Cohan, Daniel S. Weld Findings of EMNLP20 [pdf] [data]
  42. Extractive Summarization of Long Documents by Combining Global and Local Context Wen Xiao, Giuseppe Carenini EMNLP19 [pdf] [code]
  43. ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R. Fabbri, Irene Li, Dan Friedman, Dragomir R. Radev AAAI19 [pdf] [data]
  44. TalkSumm: A Dataset and Scalable Annotation Method for Scientific Paper Summarization Based on Conference Talks Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, David Konopnicki ACL19 [pdf] [data]
  45. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, Nazli Goharian NAACL18 [pdf] [data]
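
MemSum (above) selects sentences step by step using local content, global context, and the extraction history. Below is a toy, untrained stand-in for that policy; the paper learns it with reinforcement learning, and every name and dimension here is illustrative.

```python
# A toy sketch of extraction-history-aware sentence selection: each step
# scores the remaining sentences on (sentence, document, history) features
# and greedily adds the best one. Untrained and illustrative only.
import torch
import torch.nn as nn

d = 64
score_mlp = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, 1))

sent_embs = torch.randn(12, d)      # one embedding per document sentence
doc_emb = sent_embs.mean(dim=0)     # crude global-context vector
selected = []

for _ in range(3):                  # extract a 3-sentence summary
    history = (sent_embs[selected].mean(dim=0)
               if selected else torch.zeros(d))
    feats = torch.cat([sent_embs,
                       doc_emb.expand_as(sent_embs),
                       history.expand_as(sent_embs)], dim=-1)
    scores = score_mlp(feats).squeeze(-1).detach()
    scores[selected] = float("-inf")  # never re-pick a sentence
    selected.append(int(scores.argmax()))

print("extracted sentence indices:", selected)
```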
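
Several entries above, e.g. Summ^N and the divide-and-conquer approach, handle long inputs by summarizing chunks and then summarizing the concatenated chunk summaries. A bare-bones sketch of that recipe follows; the model checkpoint, chunk size, and length limits are assumptions for illustration.

```python
# A bare-bones multi-stage recipe for long inputs: split the source into
# chunks a pretrained model can handle, summarize each chunk, then
# summarize the concatenation of the chunk summaries.
from transformers import pipeline

summarize = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def summarize_long(text: str, chunk_words: int = 400) -> str:
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    # Stage 1: coarse summaries, one per chunk.
    stage1 = [summarize(c, max_length=80, min_length=20)[0]["summary_text"]
              for c in chunks]
    # Stage 2: compress the concatenated coarse summaries.
    return summarize(" ".join(stage1), max_length=120,
                     min_length=30)[0]["summary_text"]

# Usage: print(summarize_long(open("long_report.txt").read()))
```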

Factual Consistency

Toolkit: factsumm

The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey Yichong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin [pdf]

  1. Jointly Learning Guidance Induction and Faithful Summary Generation via Conditional Variational Autoencoders Wang Xu, Tiejun Zhao Findings of NAACL 2022 [pdf]
    [Abs] Abstractive summarization can generate high quality results with the development of the neural network. However, generating factual consistency summaries is a challenging task for abstractive summarization. Recent studies extract the additional information with off-the-shelf tools from the source document as a clue to guide the summary generation, which shows effectiveness to improve the faithfulness. Unlike these work, we present a novel framework based on conditional variational autoencoders, which induces the guidance information and generates the summary equipment with the guidance synchronously. Experiments on XSUM and CNNDM dataset show that our approach can generate relevant and fluent summaries which is more faithful than the existing state-of-the-art approaches, according to multiple factual consistency metrics.
  2. Masked Summarization to Generate Factually Inconsistent Summaries for Improved Factual Consistency Checking Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung Findings of NAACL 2022 [pdf] [code]
    [Abs] Despite the recent advances in abstractive summarization systems, it is still difficult to determine whether a generated summary is factual consistent with the source text. To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries. Luckily, the former is readily available as reference summaries in existing summarization datasets. However, generating the latter remains a challenge, as they need to be factually inconsistent, yet closely relevant to the source text to be effective. In this paper, we propose to generate factually inconsistent summaries using source texts and reference summaries with key information masked. Experiments on seven benchmark datasets demonstrate that factual consistency classifiers trained on summaries generated using our method generally outperform existing models and show a competitive correlation with human judgments. We also analyze the characteristics of the summaries generated using our method. We will release the pre-trained model and the code at https://github.com/hwanheelee1993/MFMA.
  3. Improving the Faithfulness of Abstractive Summarization via Entity Coverage Control Haopeng Zhang, Semih Yavuz, Wojciech Kryscinski, Kazuma Hashimoto, Yingbo Zhou Findings of NAACL 2022 [pdf] [code] (a loose sketch of the control-code idea appears after this list)
    [Abs] Abstractive summarization systems leveraging pre-training language models have achieved superior results on benchmark datasets. However, such models have been shown to be more prone to hallucinate facts that are unfaithful to the input context. In this paper, we propose a method to remedy entity-level extrinsic hallucinations with Entity Coverage Control (ECC). We first compute entity coverage precision and prepend the corresponding control code for each training example, which implicitly guides the model to recognize faithfulness contents in the training phase. We further extend our method via intermediate fine-tuning on large but noisy data extracted from Wikipedia to unlock zero-shot summarization. We show that the proposed method leads to more faithful and salient abstractive summarization in supervised fine-tuning and zero-shot settings according to our experimental results on three benchmark datasets XSum, Pubmed, and SAMSum of very different domains and styles.
  4. FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization David Wan, Mohit Bansal NAACL 2022 [pdf] [code]
    [Abs] We present FactPEGASUS, an abstractive summarization model that addresses the problem of factuality during pre-training and fine-tuning: (1) We augment the sentence selection strategy of PEGASUS’s (Zhang et al., 2019) pre-training objective to create pseudo-summaries that are both important and factual; (2) We introduce three complementary components for fine-tuning. The corrector removes hallucinations present in the reference summary, the contrastor uses contrastive learning to better differentiate nonfactual summaries from factual ones, and the connector bridges the gap between the pre-training and fine-tuning for better transfer of knowledge. Experiments on three downstream tasks demonstrate that FactPEGASUS substantially improves factuality evaluated by multiple automatic metrics and humans. Our thorough analysis suggests that FactPEGASUS is more factual than using the original pre-training objective in zero-shot and few-shot settings, retains factual behavior more robustly than strong baselines, and does not rely entirely on becoming more extractive to improve factuality.
  6. SummaC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization Philippe Laban, Tobias Schnabel, Paul N. Bennett, Marti A. Hearst TACL 2022 Volume 10 [pdf] [code] (a simplified zero-shot sketch appears after this list)
    [Abs] In the summarization domain, a key requirement for summaries is to be factually consistent with the input document. Previous work has found that natural language inference (NLI) models do not perform competitively when applied to inconsistency detection. In this work, we revisit the use of NLI for inconsistency detection, finding that past work suffered from a mismatch in input granularity between NLI datasets (sentence-level), and inconsistency detection (document level). We provide a highly effective and light-weight method called SummaCConv that enables NLI models to be successfully used for this task by segmenting documents into sentence units and aggregating scores between pairs of sentences. We furthermore introduce a new benchmark called SummaC (Summary Consistency) which consists of six large inconsistency detection datasets. On this dataset, SummaCConv obtains state-of-the-art results with a balanced accuracy of 74.4%, a 5% improvement compared with prior work.
  7. Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization Meng Cao, Yue Dong, Jackie Cheung ACL 2022 [pdf] [code]
    [Abs] State-of-the-art abstractive summarization systems often generate hallucinations; i.e., content that is not directly inferable from the source text. Despite being assumed to be incorrect, we find that much hallucinated content is actually consistent with world knowledge, which we call factual hallucinations. Including these factual hallucinations in a summary can be beneficial because they provide useful background information. In this work, we propose a novel detection approach that separates factual from non-factual hallucinations of entities. Our method is based on an entity’s prior and posterior probabilities according to pre-trained and finetuned masked language models, respectively. Empirical results suggest that our method vastly outperforms two baselines in both accuracy and F1 scores and has a strong correlation with human judgments on factuality classification tasks.Furthermore, we use our method as a reward signal to train a summarization system using an off-line reinforcement learning (RL) algorithm that can significantly improve the factuality of generated summaries while maintaining the level of abstractiveness.
  8. Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yavuz, Wojciech Kryściński, Justin F. Rousseau, Greg Durrett [pdf] [code]
  9. Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization Prasetya Ajie Utama, Joshua Bambrick, Nafise Sadat Moosavi, Iryna Gurevych NAACL 2022 [pdf] [code]
    [Abs] Neural abstractive summarization models are prone to generate summaries that are factually inconsistent with their source documents. Previous work has introduced the task of recognizing such factual inconsistency as a downstream application of natural language inference (NLI). However, state-of-the-art NLI models perform poorly in this context due to their inability to generalize to the target task. In this work, we show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples. We introduce Falsesum, a data generation pipeline leveraging a controllable text generation model to perturb human-annotated summaries, introducing varying types of factual inconsistencies. Unlike previously introduced document-level NLI datasets, our generated dataset contains examples that are diverse and inconsistent yet plausible. We show that models trained on a Falsesum-augmented NLI dataset improve the state-of-the-art performance across four benchmarks for detecting factual inconsistency in summarization.
  11. Faithful to the Document or to the World? Mitigating Hallucinations via Entity-linked Knowledge in Abstractive Summarization Yue Dong, John Wieting, Pat Verga [pdf]
  12. Learning to Revise References for Faithful Summarization Griffin Adams, Han-Chin Shing, Qing Sun, Christopher Winestock, Kathleen McKeown, Noémie Elhadad [pdf] [code]
  13. Factual Error Correction for Abstractive Summaries Using Entity Retrieval Hwanhee Lee, Cheoneum Park, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Juae Kim, Kyomin Jung [pdf]
  14. Evaluating Factuality in Text Simplification Ashwin Devaraj, William Sheffield, Byron C. Wallace, Junyi Jessy Li ACL 2022 [pdf] [code]
  15. FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations Leonardo F. R. Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, Mohit Bansal NAACL 2022 [pdf] [code]
    [Abs] Despite recent improvements in abstractive summarization, most current approaches generate summaries that are not factually consistent with the source document, severely restricting their trust and usage in real-world applications. Recent works have shown promising improvements in factuality error identification using text or dependency arc entailments; however, they do not consider the entire semantic graph simultaneously. To this end, we propose FactGraph, a method that decomposes the document and the summary into structured meaning representations (MR), which are more suitable for factuality evaluation. MRs describe core semantic concepts and their relations, aggregating the main content in both document and summary in a canonical form, and reducing data sparsity. FactGraph encodes such graphs using a graph encoder augmented with structure-aware adapters to capture interactions among the concepts based on the graph connectivity, along with text representations using an adapter-based text encoder. Experiments on different benchmarks for evaluating factuality show that FactGraph outperforms previous approaches by up to 15%. Furthermore, FactGraph improves performance on identifying content verifiability errors and better captures subsentence-level factual inconsistencies.
  16. Don't Say What You Don't Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search Daniel King, Zejiang Shen, Nishant Subramani, Daniel S. Weld, Iz Beltagy, Doug Downey [pdf] [code]
  17. CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning Xiangru Tang, Arjun Nair, Borui Wang, Bingyao Wang, Jai Desai, Aaron Wade, Haoran Li, Asli Celikyilmaz, Yashar Mehdad, Dragomir Radev NAACL 2022 [pdf]
    [Abs] Factual inconsistencies in generated summaries severely limit the practical applications of abstractive dialogue summarization. Although significant progress has been achieved by using pre-trained neural language models, substantial amounts of hallucinated content are found during the human evaluation. In this work, we first devised a typology of factual errors to better understand the types of hallucinations generated by current models and conducted human evaluation on popular dialog summarization dataset. We further propose a training strategy that improves the factual consistency and overall quality of summaries via a novel contrastive fine-tuning, called CONFIT. To tackle top factual errors from our annotation, we introduce additional contrastive loss with carefully designed hard negative samples and self-supervised dialogue-specific loss to capture the key information between speakers. We show that our model significantly reduces all kinds of factual errors on both SAMSum dialogue summarization and AMI meeting summarization. On both datasets, we achieve significant improvements over state-of-the-art baselines using both automatic metrics, ROUGE and BARTScore, and human evaluation.
  18. QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization Alexander R. Fabbri, Chien-Sheng Wu, Wenhao Liu, Caiming Xiong NAACL 2022 [pdf] [code]
    [Abs] Factual consistency is an essential quality of text summarization models in practical settings. Existing work in evaluating this dimension can be broadly categorized into two lines of research, entailment-based and question answering (QA)-based metrics, and different experimental setups often lead to contrasting conclusions as to which paradigm performs the best. In this work, we conduct an extensive comparison of entailment and QA-based metrics, demonstrating that carefully choosing the components of a QA-based metric, especially question generation and answerability classification, is critical to performance. Building on those insights, we propose an optimized metric, which we call QAFactEval, that leads to a 14% average improvement over previous QA-based metrics on the SummaC factual consistency benchmark, and also outperforms the best-performing entailment-based metric. Moreover, we find that QA-based and entailment-based metrics can offer complementary signals and be combined into a single metric for a further performance boost.
  19. CO2Sum: Contrastive Learning for Factual-Consistent Abstractive Summarization Wei Liu, Huanqin Wu, Wenjing Mu, Zhen Li, Tao Chen, Dan Nie [pdf]
  20. Are Factuality Checkers Reliable? Adversarial Meta-evaluation of Factuality in Summarization Yiran Chen, Pengfei Liu, Xipeng Qiu EMNLP 2021 Findings [pdf] [code]
  22. Dialogue Inspectional Summarization with Factual Inconsistency Awareness Leilei Gan, Yating Zhang, Kun Kuang, Lin Yuan, Shuo Li, Changlong Sun, Xiaozhong Liu, Fei Wu [pdf]
  23. Fine-grained Factual Consistency Assessment for Abstractive Summarization Models Sen Zhang, Jianwei Niu, Chuyuan Wei [pdf]
  24. MoFE: Mixture of Factual Experts for Controlling Hallucinations in Abstractive Summarization Prafulla Kumar Choubey, Jesse Vig, Wenhao Liu, Nazneen Fatema Rajani [pdf]
  25. Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries Xiangru Tang, Alexander R. Fabbri, Ziming Mao, Griffin Adams, Borui Wang, Haoran Li, Yashar Mehdad, Dragomir Radev NAACL 2022 [pdf]
    [Abs] Current pre-trained models applied for summarization are prone to factual inconsistencies that misrepresent the source text. Evaluating the factual consistency of summaries is thus necessary to develop better models. However, the human evaluation setup for evaluating factual consistency has not been standardized. To determine the factors that affect the reliability of the human evaluation, we crowdsource evaluations for factual consistency across state-of-the-art models on two news summarization datasets using the rating-based Likert Scale and ranking-based Best-Worst Scaling. Our analysis reveals that the ranking-based Best-Worst Scaling offers a more reliable measure of summary quality across datasets and that the reliability of Likert ratings highly depends on the target dataset and the evaluation design. To improve crowdsourcing reliability, we extend the scale of the Likert rating and present a scoring algorithm for Best-Worst Scaling that we call value learning. Our crowdsourcing guidelines will be publicly available to facilitate future work on factual consistency in summarization.
  26. MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization Xinnuo Xu, Ondřej Dušek, Shashi Narayan, Verena Rieser, Ioannis Konstas EMNLP2021 Findings [pdf] [data]
  27. Inspecting the Factuality of Hallucinated Entities in Abstractive Summarization Meng Cao, Yue Dong, Jackie Chi Kit Cheung [pdf]
  28. CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization Shuyang Cao, Lu Wang EMNLP 2021 [pdf] [code]
  29. Faithful or Extractive? On Mitigating the Faithfulness-Abstractiveness Trade-off in Abstractive Summarization Faisal Ladhak, Esin Durmus, He He, Claire Cardie, Kathleen McKeown ACL 2022 [pdf] [code]
    [Abs] Despite recent progress in abstractive summarization, systems still suffer from faithfulness errors. While prior work has proposed models that improve faithfulness, it is unclear whether the improvement comes from an increased level of extractiveness of the model outputs as one naive way to improve faithfulness is to make summarization models more extractive. In this work, we present a framework for evaluating the effective faithfulness of summarization systems, by generating a faithfulness-abstractiveness trade-off curve that serves as a control at different operating points on the abstractiveness spectrum. We then show that the Maximum Likelihood Estimation (MLE) baseline as well as recently proposed methods for improving faithfulness, fail to consistently improve over the control at the same level of abstractiveness. Finally, we learn a selector to identify the most faithful and abstractive summary for a given document, and show that this system can attain higher faithfulness scores in human evaluations while being more abstractive than the baseline system on two datasets. Moreover, we show that our system is able to achieve a better faithfulness-abstractiveness trade-off than the control at the same level of abstractiveness.
  30. Factual Consistency Evaluation for Text Summarization via Counterfactual Estimation Yuexiang Xie, Fei Sun, Yang Deng, Yaliang Li, Bolin Ding EMNLP 2021 Findings [pdf] [code]
  31. Improving Factual Consistency of Abstractive Summarization on Customer Feedback Yang Liu, Yifei Sun, Vincent Gao ACL 2021 Proceedings of The 4th Workshop on e-Commerce and NLP [pdf]
  32. AgreeSum: Agreement-Oriented Multi-Document Summarization Richard Yuanzhe Pang, Adam D. Lelkes, Vinh Q. Tran, Cong Yu Findings of ACL 2021 [pdf] [data]
  33. Focus Attention: Promoting Faithfulness and Diversity in Summarization Rahul Aralikatte, Shashi Narayan, Joshua Maynez, Sascha Rothe, Ryan McDonald ACL 2021 [pdf]
  34. Improving Factual Consistency of Abstractive Summarization via Question Answering Feng Nan, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Kathleen McKeown, Ramesh Nallapati, Dejiao Zhang, Zhiguo Wang, Andrew O. Arnold, Bing Xiang ACL 2021 [pdf] [code]
  35. Discourse Understanding and Factual Consistency in Abstractive Summarization Saadia Gabriel, Antoine Bosselut, Jeff Da, Ari Holtzman, Jan Buys, Kyle Lo, Asli Celikyilmaz, Yejin Choi EACL21 [pdf] [code]
  36. Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection Sihao Chen, Fan Zhang, Kazoo Sone and Dan Roth NAACL21 [pdf] [code]
  37. Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics Artidoro Pagnoni, Vidhisha Balachandran and Yulia Tsvetkov NAACL21 [pdf] [code]
  38. Annotating and Modeling Fine-grained Factuality in Summarization Tanya Goyal, Greg Durrett NAACL21 [pdf] [code]
  39. SAFEval: Summarization Asks for Fact-based Evaluation Thomas Scialom, Paul-Alexis Dray, Patrick Gallinari, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang [pdf] [code]
  40. Enhancing Factual Consistency of Abstractive Summarization Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, Meng Jiang NAACL21 [pdf]
  41. Entity-level Factual Consistency of Abstractive Text Summarization Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, Bing Xiang EACL21 [pdf] [code]
  42. On the Faithfulness for E-commerce Product Summarization Peng Yuan, Haoran Li, Song Xu, Youzheng Wu, Xiaodong He, Bowen Zhou COLING20 [pdf] [code]
  43. FFCI: A Framework for Interpretable Automatic Evaluation of Summarization Fajri Koto, Jey Han Lau, Timothy Baldwin [pdf] [code]
  44. GSum: A General Framework for Guided Neural Abstractive Summarization Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig NAACL21 [pdf] [code]
  45. Truth or Error? Towards systematic analysis of factual errors in abstractive summaries Klaus-Michael Lux, Maya Sappelli, Martha Larson EMNLP | Eval4NLP 20 [pdf]
  46. Detecting Hallucinated Content in Conditional Neural Sequence Generation Chunting Zhou, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer, Marjan Ghazvininejad [pdf] [code]
  47. Go Figure! A Meta Evaluation of Factuality in Summarization Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, Jianfeng Gao Findings of ACL 2021 [pdf]
  48. Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation Yuning Mao, Xiang Ren, Heng Ji, Jiawei Han [pdf]
  49. Factual Error Correction for Abstractive Summarization Models Meng Cao, Yue Dong, Jiapeng Wu, Jackie Chi Kit Cheung EMNLP20 short [pdf] [code]
  50. Multi-Fact Correction in Abstractive Text Summarization Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, Jingjing Liu EMNLP20 [pdf]
  52. Evaluating the Factual Consistency of Abstractive Text Summarization Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher EMNLP20 [pdf] [code]
  53. Reducing Quantity Hallucinations in Abstractive Summarization Zheng Zhao, Shay B. Cohen, Bonnie Webber Findings of EMNLP [pdf]
  54. On Faithfulness and Factuality in Abstractive Summarization Joshua Maynez, Shashi Narayan, Bernd Bohnet, Ryan McDonald ACL20 [pdf] [data]
  55. Improving Truthfulness of Headline Generation Kazuki Matsumaru, Sho Takase, Naoaki Okazaki ACL20 [pdf]
  56. Optimizing the Factual Correctness of a Summary: A Study of Summarizing Radiology Reports Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christopher D. Manning, Curtis P. Langlotz ACL20 [pdf]
  57. FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization Esin Durmus, He He, Mona Diab ACL20 [pdf] [code]
  58. Asking and Answering Questions to Evaluate the Factual Consistency of Summaries Alex Wang, Kyunghyun Cho, Mike Lewis ACL20 [pdf] [code]
  59. Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward Luyang Huang, Lingfei Wu, Lu Wang ACL20 [pdf]
  60. Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization Beliz Gunel, Chenguang Zhu, Michael Zeng, Xuedong Huang NIPS19 [pdf]
  61. Assessing The Factual Accuracy of Generated Text Ben Goodrich, Vinay Rao, Mohammad Saleh, Peter J Liu KDD19 [pdf]
  62. Ranking Generated Summaries by Correctness: An Interesting but Challenging Application for Natural Language Inference Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, Iryna Gurevych ACL19 [pdf] [data]
  63. Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization Haoran Li, Junnan Zhu, Jiajun Zhang, Chengqing Zong COLING18 [pdf] [code]
  64. Faithful to the Original: Fact-Aware Neural Abstractive Summarization Ziqiang Cao, Furu Wei, Wenjie Li, Sujian Li AAAI18 [pdf]
  65. FAR-ASS: Fact-aware reinforced abstractive sentence summarization Mengli Zhang, Gang Zhou, Wanting Yu, Wenfen Liu [pdf]
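
SummaC (above) reduces document-level consistency checking to sentence-pair NLI. Below is a simplified zero-shot variant that takes the max entailment probability over document sentences for each summary sentence and averages; the paper's SummaCConv instead learns a convolution over these score distributions, and the NLI checkpoint here is an assumption.

```python
# A simplified SummaC-style consistency score: run an off-the-shelf NLI
# model over every (document sentence, summary sentence) pair, take the max
# entailment probability per summary sentence, and average.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name).eval()
ENTAILMENT = nli.config.label2id.get("ENTAILMENT", 2)

def consistency_score(doc_sents, summ_sents):
    per_summary_sent = []
    for s in summ_sents:
        # Batch all (document sentence, summary sentence) premise/hypothesis pairs.
        batch = tok([(d, s) for d in doc_sents], return_tensors="pt",
                    padding=True, truncation=True)
        with torch.no_grad():
            probs = nli(**batch).logits.softmax(-1)[:, ENTAILMENT]
        per_summary_sent.append(probs.max().item())
    return sum(per_summary_sent) / len(per_summary_sent)

print(consistency_score(
    ["Alice visited Paris in 2019.", "She loved the museums."],
    ["Alice went to Paris."]))
```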
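
Entity Coverage Control (above) prepends a coverage-derived control code to each training source so the model learns to condition on it. A loose sketch follows; the spaCy pipeline, code names, and bucket thresholds are hypothetical stand-ins for the paper's own definitions.

```python
# A loose sketch of coverage-controlled training data: measure what fraction
# of the reference summary's entities appear in the source, bucket that
# precision, and prepend a control code to the input.
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def with_coverage_code(source: str, reference: str) -> str:
    ents = {e.text.lower() for e in nlp(reference).ents}
    covered = [e for e in ents if e in source.lower()]
    precision = len(covered) / len(ents) if ents else 1.0
    # Hypothetical bucketing; the paper defines its own codes and thresholds.
    code = "<cov_high>" if precision > 0.9 else (
        "<cov_mid>" if precision > 0.5 else "<cov_low>")
    return f"{code} {source}"  # the summarizer is fine-tuned on this input

print(with_coverage_code(
    "Apple opened a new office in Austin on Monday.",
    "Apple expands to Austin."))
```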

Contrastive Learning

  1. CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization Shuyang Cao, Lu Wang EMNLP 2021 [pdf] [code]
  2. Sequence Level Contrastive Learning for Text Summarization Shusheng Xu, Xingxing Zhang, Yi Wu, Furu Wei AAAI 2022 [pdf]
  3. Enhanced Seq2Seq Autoencoder via Contrastive Learning for Abstractive Text Summarization Chujie Zheng, Kunpeng Zhang, Harry Jiannan Wang, Ling Fan, Zhe Wang [pdf] [code]
  4. Constructing Contrastive samples via Summarization for Text Classification with limited annotations Yangkai Du, Tengfei Ma, Lingfei Wu, Fangli Xu, Xuhong Zhang, Shouling Ji Findings of EMNLP 2021 Short [pdf]
  5. Alleviating Exposure Bias via Contrastive Learning for Abstractive Text Summarization Shichao Sun, Wenjie Li [pdf] [code]
  6. SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization Yixin Liu, Pengfei Liu ACL 2021 short [pdf] [code] (a toy ranking-loss sketch appears after this list)
  7. Contrastive Learning with Adversarial Perturbations for Conditional Text Generation Seanie Lee, Dong Bok Lee, Sung Ju Hwang ICLR 2021 [pdf]
  8. DeepChannel: Salience Estimation by Contrastive Learning for Extractive Document Summarization Jiaxin Shi, Chen Liang, Lei Hou, Juanzi Li, Zhiyuan Liu, Hanwang Zhang AAAI 2019 [pdf] [code]
  9. Unsupervised Reference-Free Summary Quality Evaluation via Contrastive Learning Hanlu Wu, Tengfei Ma, Lingfei Wu, Tariro Manyumwa, Shouling Ji EMNLP 2020 [pdf] [code]
  10. Contrastive Attention Mechanism for Abstractive Sentence Summarization Xiangyu Duan, Hongfei Yu, Mingming Yin, Min Zhang, Weihua Luo, Yue Zhang EMNLP 2019 [pdf] [code]
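
The re-ranking flavor of contrastive learning, e.g. SimCLS above, trains a scorer so that candidate summaries with higher reference quality (e.g., ROUGE) also receive higher model scores, via a rank-scaled margin loss. The toy loss below illustrates the shape of such objectives; it is not any single paper's exact formulation.

```python
# A toy rank-scaled margin loss over candidate summaries: candidates are
# pre-sorted by a quality metric (index 0 = best), and the scorer is
# penalized whenever a worse candidate outscores a better one.
import torch

def ranking_loss(scores: torch.Tensor, margin: float = 0.01) -> torch.Tensor:
    # scores[i] is the model score of the i-th best candidate.
    loss = torch.tensor(0.0)
    n = scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # Require score[i] > score[j] by a margin that grows with rank gap.
            loss = loss + torch.clamp(scores[j] - scores[i]
                                      + (j - i) * margin, min=0)
    return loss

scores = torch.tensor([0.2, 0.5, 0.1], requires_grad=True)  # mis-ordered
print(ranking_loss(scores))  # positive: candidate 1 outscores candidate 0
```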

Evaluation

  1. SummScore: A Comprehensive Evaluation Metric for Summary Quality Based on Cross-Encoder Wuhang Lin, Shasha Li, Chen Zhang, Bin Ji, Jie Yu, Jun Ma, Zibo Yi APWeb-WAIM2022 [pdf]
    [Abs] Text summarization models are often trained to produce summaries that meet human quality requirements. However, the existing evaluation metrics for summary text are only rough proxies for summary quality, suffering from low correlation with human scoring and inhibition of summary diversity. To solve these problems, we propose SummScore, a comprehensive metric for summary quality evaluation based on CrossEncoder. Firstly, by adopting the original-summary measurement mode and comparing the semantics of the original text, SummScore gets rid of the inhibition of summary diversity. With the help of the text-matching pre-training Cross-Encoder, SummScore can effectively capture the subtle differences between the semantics of summaries. Secondly, to improve the comprehensiveness and interpretability, SummScore consists of four fine-grained submodels, which measure Coherence, Consistency, Fluency, and Relevance separately. We use semi-supervised multi-rounds of training to improve the performance of our model on extremely limited annotated data. Extensive experiments show that SummScore significantly outperforms existing evaluation metrics in the above four dimensions in correlation with human scoring. We also provide the quality evaluation results of SummScore on 16 mainstream summarization models for later research.
  2. Does Summary Evaluation Survive Translation to Other Languages? Spencer Braun, Oleg Vasilyev, Neslihan Iskender, John Bohannon NAACL 2022 [pdf] [code]
    [Abs] The creation of a quality summarization dataset is an expensive, time-consuming effort, requiring the production and evaluation of summaries by both trained humans and machines. The returns to such an effort would increase significantly if the dataset could be used in additional languages without repeating human annotations. To investigate how much we can trust machine translation of summarization datasets, we translate the English SummEval dataset to seven languages and compare performances across automatic evaluation measures. We explore equivalence testing as the appropriate statistical paradigm for evaluating correlations between human and automated scoring of summaries. We also consider the effect of translation on the relative performance between measures. We find some potential for dataset reuse in languages similar to the source and along particular dimensions of summary quality. Our code and data can be found at https://github.com/PrimerAI/primer-research/.
  3. Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics Daniel Deutsch, Rotem Dror, Dan Roth NAACL 2022 [pdf] [code] (see the correlation sketch after this list)
    [Abs] How reliably an automatic summarization evaluation metric replicates human judgments of summary quality is quantified by system-level correlations. We identify two ways in which the definition of the system-level correlation is inconsistent with how metrics are used to evaluate systems in practice and propose changes to rectify this disconnect. First, we calculate the system score for an automatic metric using the full test set instead of the subset of summaries judged by humans, which is currently standard practice. We demonstrate how this small change leads to more precise estimates of system-level correlations. Second, we propose to calculate correlations only on pairs of systems that are separated by small differences in automatic scores which are commonly observed in practice. This allows us to demonstrate that our best estimate of the correlation of ROUGE to human judgments is near 0 in realistic scenarios. The results from the analyses point to the need to collect more high-quality human judgments and to improve automatic metrics when differences in system scores are small.
  4. SueNes: A Weakly Supervised Approach to Evaluating Single-Document Summarization via Negative Sampling Forrest Bao, Ge Luo, Hebi Li, Minghui Qiu, Yinfei Yang, Youbiao He, Cen Chen NAACL 2022 [pdf] [code]
    [Abs] Canonical automatic summary evaluation metrics, such as ROUGE, focus on lexical similarity, which cannot capture semantics or linguistic quality well, and require a reference summary that is costly to obtain. Recently, there have been a growing number of efforts to alleviate either or both of these two drawbacks. In this paper, we present a proof-of-concept study of a weakly supervised summary evaluation approach without the presence of reference summaries. Massive data in existing summarization datasets are transformed for training by pairing documents with corrupted reference summaries. In cross-domain tests, our strategy outperforms baselines with promising improvements, and shows a great advantage in gauging linguistic quality over all metrics.
  5. Reference-free Summarization Evaluation via Semantic Correlation and Compression Ratio Yizhu Liu, Qi Jia, Kenny Zhu NAACL 2022 [pdf] [code]
    [Abs] A document can be summarized in a number of ways. Reference-based evaluation of summarization has been criticized for its inflexibility: the more reference summaries are available, the more accurate the evaluation results, yet sufficient reference summaries are difficult to collect. In this paper, we propose a new automatic reference-free evaluation metric that compares the semantic distributions of the source document and the summary using pretrained language models and also considers the summary compression ratio. The experiments show that this metric is more consistent with human evaluation in terms of coherence, consistency, relevance and fluency.
  6. MaskEval: Weighted MLM-Based Evaluation for Text Summarization and Simplification Yu Lu Liu, Rachel Bawden, Thomas Scialom, Benoît Sagot, Jackie Chi Kit Cheung [pdf] [code]
  7. TRUE: Re-evaluating Factual Consistency Evaluation Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, Yossi Matias NAACL 2022 [pdf]
  8. Play the Shannon Game With Language Models: A Human-Free Approach to Summary Evaluation Nicholas Egan, Oleg Vasilyev, John Bohannon AAAI 2022 [pdf] [code]
  9. Differentiable N-gram Objective on Abstractive Summarization Yunqi Zhu, Wensheng Zhang, Mingjin Zhu [pdf] [code]
  10. DiscoScore: Evaluating Text Generation with BERT and Discourse Coherence Wei Zhao, Michael Strube, Steffen Eger [pdf] [code]
  11. WIDAR -- Weighted Input Document Augmented ROUGE Raghav Jain, Vaibhav Mavi, Anubhav Jangra, Sriparna Saha ECIR 2022 [pdf] [code]
  12. InfoLM: A New Metric to Evaluate Summarization & Data2Text Generation Pierre Colombo, Chloé Clavel, Pablo Piantanida AAAI 2022 [pdf]
  13. Evaluation of Summarization Systems across Gender, Age, and Race Anna Jørgensen, Anders Søgaard EMNLP 2021 | newsum [pdf]
  14. Evaluation of Abstractive Summarisation Models with Machine Translation in Deliberative Processes M. Arana-Catania, Rob Procter, Yulan He, Maria Liakata EMNLP 2021 New Frontiers in Summarization Workshop [pdf]
  15. Finding a Balanced Degree of Automation for Summary Evaluation Shiyue Zhang, Mohit Bansal EMNLP 2021 [pdf] [code]
  16. QuestEval: Summarization Asks for Fact-based Evaluation Thomas Scialom, Paul-Alexis Dray, Patrick Gallinari, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang EMNLP 2021 [pdf] [code]
  17. BARTScore: Evaluating Generated Text as Text Generation Weizhe Yuan, Graham Neubig, Pengfei Liu [pdf] [code]
  18. A Training-free and Reference-free Summarization Evaluation Metric via Centrality-weighted Relevance and Self-referenced Redundancy Wang Chen, Piji Li, Irwin King ACL 2021 [pdf] [code]
  19. Evaluating the Efficacy of Summarization Evaluation across Languages Fajri Koto, Jey Han Lau, Timothy Baldwin Findings of ACL 2021 [pdf]
  20. Question-aware Transformer Models for Consumer Health Question Summarization Shweta Yadav, Deepak Gupta, Asma Ben Abacha, Dina Demner-Fushman [pdf]
  21. Towards Human-Free Automatic Quality Evaluation of German Summarization Neslihan Iskender, Oleg Vasilyev, Tim Polzehl, John Bohannon, Sebastian Möller [pdf]
  22. Reliability of Human Evaluation for Text Summarization: Lessons Learned and Challenges Ahead Neslihan Iskender, Tim Polzehl, Sebastian Möller EACL21 [pdf] [code]
  23. SummVis: Interactive Visual Analysis of Models, Data, and Evaluation for Text Summarization Jesse Vig, Wojciech Kryscinski, Karan Goel, Nazneen Fatema Rajani ACL 2021 demo [pdf] [data]
  24. Is human scoring the best criteria for summary evaluation? Oleg Vasilyev, John Bohannon Findings of ACL 2021 [pdf]
  25. How to Evaluate a Summarizer: Study Design and Statistical Analysis for Manual Linguistic Quality Evaluation Julius Steen, Katja Markert EACL21 [pdf] [code]
  26. HOLMS: Alternative Summary Evaluation with Large Language Models Yassine Mrabet, Dina Demner-Fushman COLING20 [pdf] [bib]
  27. FFCI: A Framework for Interpretable Automatic Evaluation of Summarization Fajri Koto, Jey Han Lau, Timothy Baldwin [pdf] [code]
  28. Unsupervised Reference-Free Summary Quality Evaluation via Contrastive Learning Hanlu Wu, Tengfei Ma, Lingfei Wu, Tariro Manyumwa, Shouling Ji EMNLP20 [pdf] [code]
  29. SacreROUGE: An Open-Source Library for Using and Developing Summarization Evaluation Metrics Daniel Deutsch, Dan Roth [pdf] [code]
  30. SummEval: Re-evaluating Summarization Evaluation Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, Dragomir Radev [pdf] [code]
  31. HIGHRES: Highlight-based Reference-less Evaluation of Summarization Hardy, Shashi Narayan, Andreas Vlachos ACL19 [pdf] [code]
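
To make the system-level correlation discussed in entry 3 (Deutsch et al.) concrete, the sketch below averages per-summary metric scores and human judgments within each system and then correlates the two system-level vectors; all numbers are made up for illustration.

```python
# Minimal sketch of a system-level correlation: one averaged score per
# system for the metric and for the human judgments, then Pearson's r.
import numpy as np
from scipy.stats import pearsonr

# Rows = systems, columns = the summaries judged by humans (toy values).
metric_scores = np.array([[0.41, 0.38, 0.44],
                          [0.35, 0.33, 0.36],
                          [0.48, 0.45, 0.47]])
human_scores = np.array([[3.9, 3.7, 4.1],
                         [3.2, 3.0, 3.4],
                         [4.3, 4.0, 4.4]])

system_metric = metric_scores.mean(axis=1)  # one score per system
system_human = human_scores.mean(axis=1)
r, _ = pearsonr(system_metric, system_human)
print(f"system-level Pearson correlation: {r:.3f}")
```

Deutsch et al.'s point is that the metric averages should be computed over the full test set (not just the human-judged subset) and that correlations should be examined on system pairs separated by the small score differences actually observed in practice.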

Multi-Document

  1. Proposition-Level Clustering for Multi-Document Summarization Ori Ernst, Avi Caciularu, Ori Shapira, Ramakanth Pasunuru, Mohit Bansal, Jacob Goldberger, Ido Dagan NAACL 2022 [pdf] [code]
    [Abs] Text clustering methods were traditionally incorporated into multi-document summarization (MDS) as a means for coping with considerable information repetition. Particularly, clusters were leveraged to indicate information saliency as well as to avoid redundancy. Such prior methods focused on clustering sentences, even though closely related sentences usually also contain non-aligned parts. In this work, we revisit the clustering approach, grouping together sub-sentential propositions, aiming at more precise information alignment. Specifically, our method detects salient propositions, clusters them into paraphrastic clusters, and generates a representative sentence for each cluster via text fusion. Our summarization method improves over the previous state-of-the-art MDS method on the DUC 2004 and TAC 2011 datasets, both in automatic ROUGE scores and human preference.
  2. Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities Zejiang Shen, Kyle Lo, Lauren Yu, Nathan Dahlberg, Margo Schlanger, Doug Downey [pdf] [data]
    [Abs] With the advent of large language models, methods for abstractive summarization have made great strides, creating potential for use in applications to aid knowledge workers processing unwieldy document collections. One such setting is the Civil Rights Litigation Clearinghouse (CRLC) (this https URL), which posts information about large-scale civil rights lawsuits, serving lawyers, scholars, and the general public. Today, summarization in the CRLC requires extensive training of lawyers and law students who spend hours per case understanding multiple relevant documents in order to produce high-quality summaries of key events and outcomes. Motivated by this ongoing real-world summarization effort, we introduce Multi-LexSum, a collection of 9,280 expert-authored summaries drawn from ongoing CRLC writing. Multi-LexSum presents a challenging multi-document summarization task given the length of the source documents, often exceeding two hundred pages per case. Furthermore, Multi-LexSum is distinct from other datasets in its multiple target summaries, each at a different granularity (ranging from one-sentence "extreme" summaries to multi-paragraph narrations of over five hundred words). We present extensive analysis demonstrating that despite the high-quality summaries in the training data (adhering to strict content and style guidelines), state-of-the-art summarization models perform poorly on this task. We release Multi-LexSum for further research in summarization methods as well as to facilitate development of applications to assist in the CRLC's mission at this https URL.
  3. AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization Alexander R. Fabbri, Xiaojian Wu, Srini Iyer, Haoran Li, Mona Diab NAACL 2022 [pdf] [code]
    [Abs] Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions. Each question thread can receive a large number of answers with different perspectives. One goal of answer summarization is to produce a summary that reflects the range of answer perspectives. A major obstacle for this task is the absence of a dataset to provide supervision for producing such summaries. Recent works propose heuristics to create such data, but these are often noisy and do not cover all answer perspectives present. This work introduces a novel dataset of 4,631 CQA threads for answer summarization curated by professional linguists. Our pipeline gathers annotations for all subtasks of answer summarization, including relevant answer sentence selection, grouping these sentences based on perspectives, summarizing each perspective, and producing an overall summary. We analyze and benchmark state-of-the-art models on these subtasks and introduce a novel unsupervised approach for multi-perspective data augmentation that boosts summarization performance according to automatic evaluation. Finally, we propose reinforcement learning rewards to improve factual consistency and answer coverage and analyze areas for improvement.
  4. The patient is more dead than alive: exploring the current state of the multi-document summarisation of the biomedical literature Yulia Otmakhova, Karin Verspoor, Timothy Baldwin, Jey Han Lau ACL 2022 [pdf]
    [Abs] Although multi-document summarisation (MDS) of the biomedical literature is a highly valuable task that has recently attracted substantial interest, evaluation of the quality of biomedical summaries lacks consistency and transparency. In this paper, we examine the summaries generated by two current models in order to understand the deficiencies of existing evaluation approaches in the context of the challenges that arise in the MDS task. Based on this analysis, we propose a new approach to human evaluation and identify several challenges that must be overcome to develop effective biomedical MDS systems.
  5. Predicting Intervention Approval in Clinical Trials through Multi-Document Summarization Georgios Katsimpras, Georgios Paliouras ACL 2022 [pdf] [code]
    [Abs] Clinical trials offer a fundamental opportunity to discover new treatments and advance medical knowledge. However, the uncertainty of the outcome of a trial can lead to unforeseen costs and setbacks. In this study, we propose a new method to predict the effectiveness of an intervention in a clinical trial. Our method relies on generating an informative summary from multiple documents available in the literature about the intervention under study. Specifically, our method first gathers all the abstracts of PubMed articles related to the intervention. Then, an evidence sentence, which conveys information about the effectiveness of the intervention, is extracted automatically from each abstract. Based on the set of evidence sentences extracted from the abstracts, a short summary about the intervention is constructed. Finally, the produced summaries are used to train a BERT-based classifier, in order to infer the effectiveness of an intervention. To evaluate our proposed method, we introduce a new dataset which is a collection of clinical trials together with their associated PubMed articles. Our experiments demonstrate the effectiveness of producing short informative summaries and using them to predict the effectiveness of an intervention.
  6. Discriminative Marginalized Probabilistic Neural Method for Multi-Document Summarization of Medical Literature Gianluca Moro, Luca Ragazzi, Lorenzo Valgimigli, Davide Freddi ACL 2022 [pdf] [code]
    [Abs] Although current state-of-the-art Transformer-based solutions succeeded in a wide range of single-document NLP tasks, they still struggle to address multi-input tasks such as multi-document summarization. Many solutions truncate the inputs, thus ignoring potential summary-relevant contents, which is unacceptable in the medical domain, where every piece of information can be vital. Others leverage linear model approximations to apply multi-input concatenation, worsening the results because all information is considered, even if it is conflicting or noisy with respect to a shared background. Despite the importance and social impact of medicine, there are no ad-hoc solutions for multi-document summarization. For this reason, we propose a novel discriminative marginalized probabilistic method (DAMEN) trained to discriminate critical information from a cluster of topic-related medical documents and generate a multi-document summary via token probability marginalization. Results prove we outperform the previous state-of-the-art on a biomedical dataset for multi-document summarization of systematic literature reviews. Moreover, we perform extensive ablation studies to motivate the design choices and prove the importance of each module of our method.
  7. ACM -- Attribute Conditioning for Abstractive Multi Document Summarization Aiswarya Sankar, Ankit Chadha [pdf]
  8. Improving Multi-Document Summarization through Referenced Flexible Extraction with Credit-Awareness Yun-Zhu Song, Yi-Syuan Chen, Hong-Han Shuai NAACL 2022 [pdf] [code]
    [Abs] A notable challenge in Multi-Document Summarization (MDS) is the extremely-long length of the input. In this paper, we present an extract-then-abstract Transformer framework to overcome the problem. Specifically, we leverage pre-trained language models to construct a hierarchical extractor for salient sentence selection across documents and an abstractor for rewriting the selected contents as summaries. However, learning such a framework is challenging since the optimal contents for the abstractor are generally unknown. Previous works typically create pseudo extraction oracle to enable the supervised learning for both the extractor and the abstractor. Nevertheless, we argue that the performance of such methods could be restricted due to the insufficient information for prediction and inconsistent objectives between training and testing. To this end, we propose a loss weighting mechanism that makes the model aware of the unequal importance for the sentences not in the pseudo extraction oracle, and leverage the fine-tuned abstractor to generate summary references as auxiliary signals for learning the extractor. Moreover, we propose a reinforcement learning method that can efficiently apply to the extractor for harmonizing the optimization between training and testing. Experiment results show that our framework substantially outperforms strong baselines with comparable model sizes and achieves the best results on the Multi-News, Multi-XScience, and WikiCatSum corpora.
  9. NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias Nayeon Lee, Yejin Bang, Tiezheng Yu, Andrea Madotto, Pascale Fung NAACL 2022 [pdf] [code]
    [Abs] Media news framing bias can increase political polarization and undermine civil society. The need for automatic mitigation methods is therefore growing. We propose a new task, neutral summary generation from multiple news articles of varying political leanings, to facilitate balanced and unbiased news reading. In this paper, we first collect a new dataset, illustrate insights about framing bias through a case study, and propose a new effective metric and model (NeuS-Title) for the task. Based on our discovery that title provides a good signal for framing bias, we present NeuS-Title that learns to neutralize news content in hierarchical order from title to article. Our hierarchical multi-task learning is achieved by formatting our hierarchical data pair (title, article) sequentially with identifier-tokens (“TITLE=>”, “ARTICLE=>”) and fine-tuning the auto-regressive decoder with the standard negative log-likelihood objective. We then analyze and point out the remaining challenges and future directions. One of the most interesting observations is that neural NLG models can hallucinate not only factually inaccurate or unverifiable content but also politically biased content.
  10. Read Top News First: A Document Reordering Approach for Multi-Document News Summarization Chao Zhao, Tenghao Huang, Somnath Basu Roy Chowdhury, Muthu Kumar Chandrasekaran, Kathleen McKeown, Snigdha Chaturvedi Findings of ACL 2022 [pdf] [code]
  11. A Multi-Document Coverage Reward for RELAXed Multi-Document Summarization Jacob Parnell, Inigo Jauregi Unanue, Massimo Piccardi ACL 2022 [pdf] [code]
    [Abs] Multi-document summarization (MDS) has made significant progress in recent years, in part facilitated by the availability of new, dedicated datasets and capacious language models. However, a standing limitation of these models is that they are trained against limited references and with plain maximum-likelihood objectives. As for many other generative tasks, reinforcement learning (RL) offers the potential to improve the training of MDS models; yet, it requires a carefully-designed reward that can ensure appropriate leverage of both the reference summaries and the input documents. For this reason, in this paper we propose fine-tuning an MDS baseline with a reward that balances a reference-based metric such as ROUGE with coverage of the input documents. To implement the approach, we utilize RELAX (Grathwohl et al., 2018), a contemporary gradient estimator which is both low-variance and unbiased, and we fine-tune the baseline in a few-shot style for both stability and computational efficiency. Experimental results over the Multi-News and WCEP MDS datasets show significant improvements of up to +0.95 pp average ROUGE score and +3.17 pp METEOR score over the baseline, and competitive results with the literature. In addition, they show that the coverage of the input documents is increased, and evenly across all documents.
  12. PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization Wen Xiao, Iz Beltagy, Giuseppe Carenini, Arman Cohan ACL 2022 [pdf] [code]
    [Abs] We introduce PRIMERA, a pre-trained model for multi-document representation with a focus on summarization that reduces the need for dataset-specific architectures and large amounts of fine-tuning labeled data. PRIMERA uses our newly proposed pre-training objective designed to teach the model to connect and aggregate information across documents. It also uses efficient encoder-decoder transformers to simplify the processing of concatenated input documents. With extensive experiments on 6 multi-document summarization datasets from 3 different domains on zero-shot, few-shot and full-supervised settings, PRIMERA outperforms current state-of-the-art dataset-specific and pre-trained models on most of these settings with large margins.
  13. PeerSum: A Peer Review Dataset for Abstractive Multi-document Summarization Miao Li, Jianzhong Qi, Jey Han Lau [pdf] [data]
  14. MS^2: Multi-Document Summarization of Medical Studies Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl, Lucy Lu Wang EMNLP 2021 [pdf] [data]
  15. SgSum: Transforming Multi-document Summarization into Sub-graph Selection Moye Chen, Wei Li, Jiachen Liu, Xinyan Xiao, Hua Wu, Haifeng Wang EMNLP 2021 [pdf] [code]
  16. Topic-Guided Abstractive Multi-Document Summarization Peng Cui, Le Hu Findings of EMNLP 2021 [pdf]
  17. Modeling Endorsement for Multi-Document Abstractive Summarization Logan Lebanoff, Bingqing Wang, Zhe Feng, Fei Liu EMNLP 2021 | newsum [pdf]
  18. Incorporating Linguistic Knowledge for Abstractive Multi-document Summarization Congbo Ma, Wei Emma Zhang, Hu Wang, Shubham Gupta, Mingyu Guo [pdf]
  19. Capturing Relations between Scientific Papers: An Abstractive Model for Related Work Section Generation Xiuying Chen, Hind Alamro, Mingzhe Li, Shen Gao, Xiangliang Zhang, Dongyan Zhao, Rui Yan ACL 2021 [pdf] [data]
  20. Highlight-Transformer: Leveraging Key Phrase Aware Attention to Improve Abstractive Multi-Document Summarization Shuaiqi Liu, Jiannong Cao, Ruosong Yang, Zhiyuan Wen ACL 2021 Findings [pdf]
  21. Entity-Aware Abstractive Multi-Document Summarization Hao Zhou, Weidong Ren, Gongshen Liu, Bo Su, Wei Lu ACL 2021 Findings [pdf] [code]
  22. TWAG: A Topic-Guided Wikipedia Abstract Generator Fangwei Zhu, Shangqing Tu, Jiaxin Shi, Juanzi Li, Lei Hou, Tong Cui ACL 2021 [pdf] [code]
  23. AgreeSum: Agreement-Oriented Multi-Document Summarization Richard Yuanzhe Pang, Adam D. Lelkes, Vinh Q. Tran, Cong Yu Findings of ACL 2021 [pdf] [data]
  24. Analysis of GraphSum's Attention Weights to Improve the Explainability of Multi-Document Summarization M. Lautaro Hickmann, Fabian Wurzberger, Megi Hoxhalli, Arne Lochner, Jessica Töllich, Ansgar Scherp [pdf]
  25. Extending Multi-Document Summarization Evaluation to the Interactive Setting Ori Shapira, Ramakanth Pasunuru, Hadar Ronen, Mohit Bansal, Yael Amsterdamer, Ido Dagan NAACL21 [pdf] [code]
  26. Efficiently Summarizing Text and Graph Encodings of Multi-Document Clusters Ramakanth Pasunuru, Mengwen Liu, Mohit Bansal, Sujith Ravi, Markus Dreyer NAACL21 [pdf] [code]
  27. Self-Supervised and Controlled Multi-Document Opinion Summarization Hady Elsahar, Maximin Coavoux, Jos Rozen, Matthias Gallé EACL 2021 [pdf]
  28. Nutri-bullets: Summarizing Health Studies by Composing Segments Darsh J Shah, Lili Yu, Tao Lei, Regina Barzilay AAAI21 [pdf] [code]
  29. Multi-document Summarization using Semantic Role Labeling and Semantic Graph for Indonesian News Article Yuly Haruka Berliana Gunawan, Masayu Leylia Khodra [pdf]
  30. Flight of the PEGASUS? Comparing Transformers on Few-Shot and Zero-Shot Multi-document Abstractive Summarization Travis Goodwin, Max Savery, Dina Demner-Fushman COLING20 [pdf]
  31. Abstractive Multi-Document Summarization via Joint Learning with Single-Document Summarization Hanqi Jin, Xiaojun Wan Findings of EMNLP20 [pdf] [code]
  32. Coarse-to-Fine Query Focused Multi-Document Summarization Yumo Xu, Mirella Lapata EMNLP20 [pdf] [code] [code]
  33. WSL-DS: Weakly Supervised Learning with Distant Supervision for Query Focused Multi-Document Abstractive Summarization Md Tahmid Rahman Laskar, Enamul Hoque, Jimmy Xiangji Huang COLING20 Short [pdf] [code]
  34. AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization Sayali Kulkarni, Sheide Chammas, Wan Zhu, Fei Sha, Eugene Ie [pdf] [data]
  35. Multi-document Summarization with Maximal Marginal Relevance-guided Reinforcement Learning Yuning Mao, Yanru Qu, Yiqing Xie, Xiang Ren, Jiawei Han EMNLP20 [pdf] [code] (see the MMR sketch after this list)
  36. Heterogeneous Graph Neural Networks for Extractive Document Summarization Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, Xuanjing Huang ACL20 [pdf] [code]
  37. Multi-Granularity Interaction Network for Extractive and Abstractive Multi-Document Summarization Hanqi Jin, Tianming Wang, Xiaojun Wan ACL20 [pdf]
  38. SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for Multi-Document Summarization Yang Gao, Wei Zhao, Steffen Eger ACL20 [pdf] [code]
  39. Leveraging Graph to Improve Abstractive Multi-Document Summarization Wei Li, Xinyan Xiao, Jiachen Liu, Hua Wu, Haifeng Wang, Junping Du ACL20 [pdf] [code]
  40. Generating Representative Headlines for News Stories Xiaotao Gu, Yuning Mao, Jiawei Han, Jialu Liu, Hongkun Yu, You Wu, Cong Yu, Daniel Finnie, Jiaqi Zhai, Nicholas Zukoski WWW20 [pdf] [code]
  41. Learning to Create Sentence Semantic Relation Graphs for Multi-Document Summarization Diego Antognini, Boi Faltings EMNLP19 [pdf]
  42. Improving the Similarity Measure of Determinantal Point Processes for Extractive Multi-Document Summarization Sangwoo Cho, Logan Lebanoff, Hassan Foroosh, Fei Liu ACL19 [pdf] [code]
  43. Hierarchical Transformers for Multi-Document Summarization Yang Liu, Mirella Lapata ACL19 [pdf] [code]
  44. MeanSum: A Neural Model for Unsupervised Multi-Document Abstractive Summarization Eric Chu, Peter J. Liu ICML19 [pdf] [code]
  45. Generating Wikipedia By Summarizing Long Sequence Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer ICLR18 [pdf] [code]
  46. Adapting the Neural Encoder-Decoder Framework from Single to Multi-Document Summarization Logan Lebanoff, Kaiqiang Song, Fei Liu EMNLP18 [pdf] [code]
  47. Graph-based Neural Multi-Document Summarization Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, Dragomir Radev CoNLL17 [pdf]
  48. Improving Multi-Document Summarization via Text Classification Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei AAAI17 [pdf]
  49. Automatic generation of related work through summarizing citations Jingqiang Chen, Hai Zhuge [pdf] [data]
  50. An Unsupervised Multi-Document Summarization Framework Based on Neural Document Model Shulei Ma, Zhi-Hong Deng, Yunlun Yang COLING16 [pdf]
  51. Event-Centric Summary Generation Lucy Vanderwende, Michele Banko, Arul Menezes ACL04 [pdf]
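
Entry 35 above (Mao et al.) guides its reinforcement-learning reward with Maximal Marginal Relevance (MMR), the classic criterion that trades a sentence's relevance to the document set against its redundancy with already-selected sentences. Below is a minimal sketch of plain greedy MMR selection; the TF-IDF sentence vectors and lambda = 0.7 are illustrative choices, not the paper's setup.

```python
# Minimal sketch of greedy Maximal Marginal Relevance (MMR) selection:
# pick the sentence maximizing lam * relevance - (1 - lam) * redundancy.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_select(sentences, k=2, lam=0.7):
    vecs = TfidfVectorizer().fit_transform(sentences)
    centroid = np.asarray(vecs.mean(axis=0))           # crude document-set representation
    relevance = cosine_similarity(vecs, centroid).ravel()
    redundancy = cosine_similarity(vecs)               # sentence-to-sentence similarity
    selected, remaining = [], list(range(len(sentences)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            overlap = max(redundancy[i][j] for j in selected) if selected else 0.0
            return lam * relevance[i] - (1 - lam) * overlap
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return [sentences[i] for i in selected]

print(mmr_select([
    "A powerful storm hit the coast on Monday.",
    "A powerful storm struck coastal towns on Monday.",
    "Officials ordered evacuations on Tuesday.",
]))
```

The near-duplicate second sentence is penalized once the first is selected, so the sketch picks one storm sentence plus the evacuation sentence.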

Cross-Lingual

  1. Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant [pdf]
  2. MSAMSum: Towards Benchmarking Multi-lingual Dialogue Summarization Xiachong Feng, Xiaocheng Feng, Bing Qin ACL 2022 DialDoc Workshop [pdf] [data]
  3. The Cross-lingual Conversation Summarization Challenge Yulong Chen, Ming Zhong, Xuefeng Bai, Naihao Deng, Jing Li, Xianchao Zhu, Yue Zhang [pdf]
  4. Neural Label Search for Zero-Shot Multi-Lingual Extractive Summarization Ruipeng Jia, Xingxing Zhang, Yanan Cao, Shi Wang, Zheng Lin, Furu Wei ACL 2022 [pdf]
    [Abs] In zero-shot multilingual extractive text summarization, a model is typically trained on an English summarization dataset and then applied to summarization datasets in other languages. Given English gold summaries and documents, sentence-level labels for extractive summarization are usually generated using heuristics. However, these monolingual labels created on English datasets may not be optimal on datasets of other languages, because of the syntactic and semantic discrepancies between languages. In this way, it is possible to translate the English dataset to other languages and obtain different sets of labels again using heuristics. To fully leverage the information of these different sets of labels, we propose NLSSum (Neural Label Search for Summarization), which jointly learns hierarchical weights for these different sets of labels together with our summarization model. We conduct multilingual zero-shot summarization experiments on MLSUM and WikiLingua datasets, and we achieve state-of-the-art results using both human and automatic evaluations across these two datasets.
  5. Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao [pdf]
  6. A Survey on Cross-Lingual Summarization Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, Jie Zhou [pdf]
  7. A Variational Hierarchical Model for Neural Cross-Lingual Summarization Yunlong Liang, Fandong Meng, Chulun Zhou, Jinan Xu, Yufeng Chen, Jinsong Su, Jie Zhou ACL 2022 [pdf] [code] (see the pipeline sketch after this list)
    [Abs] The goal of cross-lingual summarization (CLS) is to convert a document in one language (e.g., English) to a summary in another one (e.g., Chinese). The CLS task is essentially the combination of machine translation (MT) and monolingual summarization (MS), and thus there exists a hierarchical relationship between MT&MS and CLS. Existing studies on CLS mainly focus on utilizing pipeline methods or jointly training an end-to-end model through an auxiliary MT or MS objective. However, it is very challenging for the model to directly conduct CLS as it requires both the abilities to translate and summarize. To address this issue, we propose a hierarchical model for the CLS task, based on the conditional variational auto-encoder. The hierarchical model contains two kinds of latent variables at the local and global levels, respectively. At the local level, there are two latent variables, one for translation and the other for summarization. As for the global level, there is another latent variable for cross-lingual summarization conditioned on the two local-level variables. Experiments on two language directions (English-Chinese) verify the effectiveness and superiority of the proposed approach. In addition, we show that our model is able to generate better cross-lingual summaries than comparison models in the few-shot setting.
  8. CptGraphSum: Let key clues guide the cross-lingual abstractive summarization Shuyu Jiang, Dengbiao Tu, Xingshu Chen, Rui Tang, Wenxian Wang, Haizhou Wang [pdf]
  9. ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization Jiaan Wang, Fandong Meng, Ziyao Lu, Duo Zheng, Zhixu Li, Jianfeng Qu, Jie Zhou [pdf] [code]
  10. CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs Tahmid Hasan, Abhik Bhattacharjee, Wasi Uddin Ahmad, Yuan-Fang Li, Yong-Bin Kang, Rifat Shahriyar [pdf] [code]
  11. Improving Neural Cross-Lingual Summarization via Employing Optimal Transport Distance for Knowledge Distillation Thong Nguyen, Luu Anh Tuan AAAI 2022 [pdf] [code]
  12. Evaluation of Abstractive Summarisation Models with Machine Translation in Deliberative Processes Miguel Arana-Catania, Rob Procter, Yulan He, Maria Liakata EMNLP 2021 | newsum [pdf]
  13. Models and Datasets for Cross-Lingual Summarisation Laura Perez-Beltrachini, Mirella Lapata EMNLP 2021 [pdf] [data]
  14. MassiveSumm: a very large-scale, very multilingual, news summarisation dataset Daniel Varab, Natalie Schluter EMNLP 2021 [pdf] [code]
  15. Bridging the Gap: Cross-Lingual Summarization with Compression Rate Yu Bai, Heyan Huang, Kai Fan, Yang Gao, Zewen Chi, Boxing Chen [pdf]
  16. Contrastive Aligned Joint Learning for Multilingual Summarization Danqing Wang, Jiaze Chen, Hao Zhou, Xipeng Qiu, Lei Li ACL 2021 Findings [pdf] [data]
  17. XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages T. Hasan, A. Bhattacharjee, M. S. Islam, K. Samin, Y. Li, Y. Kang, M. S. Rahman, R. Shahriyar Findings of ACL 2021 [pdf] [data]
  18. ZmBART: An Unsupervised Cross-lingual Transfer Framework for Language Generation Kaushal Kumar Maurya, Maunendra Sankar Desarkar, Yoshinobu Kano, Kumari Deepshikha Findings of ACL 2021 [pdf] [code]
  19. mT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs Zewen Chi, Li Dong, Shuming Ma, Shaohan Huang, Xian-Ling Mao, Heyan Huang, Furu Wei [pdf] [code]
  20. Evaluating the Efficacy of Summarization Evaluation across Languages Fajri Koto, Jey Han Lau, Timothy Baldwin Findings of ACL 2021 [pdf]
  21. Cross-Lingual Abstractive Summarization with Limited Parallel Resources Yu Bai, Yang Gao, Heyan Huang ACL 2021 [pdf] [code]
  22. Unsupervised Approach to Multilingual User Comments Summarization Aleš Žagar, Marko Robnik-Šikonja EACL21 [pdf] [code]
  23. MultiHumES: Multilingual Humanitarian Dataset for Extractive Summarization Jenny Paola Yela-Bello, Ewan Oglethorpe, Navid Rekabsaz EACL21 [pdf] [data]
  24. Cross-lingual Approach to Abstractive Summarization Aleš Žagar, Marko Robnik-Šikonja [pdf]
  25. Mixed-Lingual Pre-training for Cross-lingual Summarization Ruochen Xu, Chenguang Zhu, Yu Shi, Michael Zeng, Xuedong Huang AACL20 [pdf]
  26. Multi-Task Learning for Cross-Lingual Abstractive Summarization Sho Takase, Naoaki Okazaki [pdf]
  27. WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization Faisal Ladhak, Esin Durmus, Claire Cardie, Kathleen McKeown Findings of EMNLP20 [pdf] [data]
  28. A Deep Reinforced Model for Zero-Shot Cross-Lingual Summarization with Bilingual Semantic Similarity Rewards Zi-Yi Dou, Sachin Kumar, Yulia Tsvetkov ACL20 workshop [pdf] [code]
  29. Jointly Learning to Align and Summarize for Neural Cross-Lingual Summarization Yue Cao, Hui Liu, Xiaojun Wan ACL20 [pdf]
  30. Attend, Translate and Summarize: An Efficient Method for Neural Cross-Lingual Summarization Junnan Zhu, Yu Zhou, Jiajun Zhang, Chengqing Zong ACL20 [pdf] [code]
  31. MultiSumm: Towards a Unified Model for Multi-Lingual Abstractive Summarization Yue Cao, Xiaojun Wan, Jinge Yao, Dian Yu AAAI20 [pdf] [code]
  32. Cross-Lingual Natural Language Generation via Pre-Training Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, Heyan Huang AAAI 2020 [pdf] [code]
  33. Global Voices: Crossing Borders in Automatic News Summarization Khanh Nguyen, Hal Daumé III EMNLP19 workshop [pdf] [data]
  34. NCLS: Neural Cross-Lingual Summarization Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, Chengqing Zong EMNLP19 [pdf] [code]
  35. Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, Weihua Luo ACL19 [pdf] [code]
  36. A Robust Abstractive System for Cross-Lingual Summarization Jessica Ouyang, Boya Song, Kathy McKeown NAACL19 [pdf]
  37. Cross-Lingual Korean Speech-to-Text Summarization HyoJeon Yoon, Dinh Tuyen Hoang, Ngoc Thanh Nguyen, Dosam Hwang ACIIDS19 [pdf]
  38. Cross-language document summarization via extraction and ranking of multiple summaries Xiaojun Wan, Fuli Luo, Xue Sun, Songfang Huang, Jin-ge Yao [pdf]
  39. Zero-Shot Cross-Lingual Neural Headline Generation Shi-qi Shen, Yun Chen, Cheng Yang, Zhi-yuan Liu, Mao-song Sun TASLP18 [pdf]
  40. Cross-Language Text Summarization using Sentence and Multi-Sentence Compression Elvys Linhares Pontes, Stéphane Huet, Juan-Manuel Torres-Moreno, Andréa Carneiro Linhares NLDB18 [pdf]
  41. Abstractive Cross-Language Summarization via Translation Model Enhanced Predicate Argument Structure Fusing Jiajun Zhang, Yu Zhou, Chengqing Zong TASLP16 [pdf]
  42. Phrase-based Compressive Cross-Language Summarization Jin-ge Yao, Xiaojun Wan, Jianguo Xiao EMNLP15 [pdf]
  43. Multilingual Single-Document Summarization with MUSE Marina Litvak, Mark Last MultiLing13 [pdf]
  44. Using bilingual information for cross-language document summarization Xiaojun Wan ACL11 [pdf]
  45. A Graph-based Approach to Cross-language Multi-document Summarization Florian Boudin, Stéphane Huet, Juan-Manuel Torres-Moreno [pdf]
  46. Cross-language document summarization based on machine translation quality prediction Xiaojun Wan, Huiying Li, Jianguo Xiao ACL10 [pdf]
  47. Evaluation of a Cross-lingual Romanian-English Multi-document Summariser Constantin Orasan, Oana Andreea Chiorean LREC08 [pdf]
  48. Cross-lingual C*ST*RD: English access to Hindi information Anton Leuski, Chin-Yew Lin, Liang Zhou, Ulrich Germann, Franz Josef Och, Eduard Hovy [pdf]
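
As the abstract of entry 7 explains, cross-lingual summarization combines monolingual summarization with machine translation, and the classic baseline that end-to-end models (e.g., NCLS, entry 34) try to beat is a summarize-then-translate pipeline. A minimal sketch with Hugging Face pipelines follows; the two checkpoints are illustrative public models, not the systems used by any paper in this list.

```python
# Minimal summarize-then-translate pipeline for English-to-Chinese
# cross-lingual summarization (a baseline sketch, not any paper's system).
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")

def summarize_then_translate(document: str) -> str:
    # Step 1: monolingual summarization in the source language.
    summary = summarizer(document, max_length=60, min_length=10)[0]["summary_text"]
    # Step 2: translate the summary into the target language.
    return translator(summary)[0]["translation_text"]
```

The weakness motivating end-to-end CLS models is visible in the structure itself: any summarization error is fed straight into the translator, so errors propagate across the two steps.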

Multi-modal

  1. MHMS: Multimodal Hierarchical Multimedia Summarization Jielin Qiu, Jiacheng Zhu, Mengdi Xu, Franck Dernoncourt, Trung Bui, Zhaowen Wang, Bo Li, Ding Zhao, Hailin Jin [pdf]
  2. Video Summarization Based on Video-text Representation Haopeng Li, Qiuhong Ke, Mingming Gong, Rui Zhang [pdf]
  3. UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation Zhengkun Zhang, Xiaojun Meng, Yasheng Wang, Xin Jiang, Qun Liu, Zhenglu Yang AAAI 2022 [pdf]
  4. Hierarchical Cross-Modality Semantic Correlation Learning Model for Multimodal Summarization Litian Zhang, Xiaoming Zhang, Junshu Pan, Feiran Huang AAAI 2022 [pdf] [data]
  5. Attention-based Multi-hypothesis Fusion for Speech Summarization Takatomo Kano, Atsunori Ogawa, Marc Delcroix, Shinji Watanabe [pdf]
  6. Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization Tiezheng Yu, Wenliang Dai, Zihan Liu, Pascale Fung EMNLP 2021 [pdf] [code]
  7. Multi-Modal Supplementary-Complementary Summarization using Multi-Objective Optimization Anubhav Jangra, Sriparna Saha, Adam Jatowt, Mohammad Hasanuzzaman SIGIR 2021 [pdf]
  8. Self-Supervised Multimodal Opinion Summarization Jinbae Im, Moonki Kim, Hoyeop Lee, Hyunsouk Cho, Sehee Chung ACL21 [pdf] [code]
  9. GPT2MVS: Generative Pre-trained Transformer-2 for Multi-modal Video Summarization Jia-Hong Huang, Luka Murn, Marta Mrak, Marcel Worring ICMR21 [pdf]
  10. Multimodal Sentence Summarization via Multimodal Selective Encoding Haoran Li, Junnan Zhu, Jiajun Zhang, Xiaodong He, Chengqing Zong COLING20 [pdf]
  11. Multistage Fusion with Forget Gate for Multimodal Summarization in Open-Domain Videos Nayu Liu, Xian Sun, Hongfeng Yu, Wenkai Zhang, Guangluan Xu EMNLP20 [pdf]
  12. MAST: Multimodal Abstractive Summarization with Trimodal Hierarchical Attention Aman Khullar, Udit Arora EMNLP20 Workshop [pdf] [code]
  13. VMSMO: Learning to Generate Multimodal Summary for Video-based News Articles Mingzhe Li, Xiuying Chen, Shen Gao, Zhangming Chan, Dongyan Zhao, Rui Yan EMNLP20 [pdf] [data]
  14. Multi-modal Summarization for Video-containing Documents Xiyan Fu, Jun Wang, Zhenglu Yang [pdf] [code]
  15. Text-Image-Video Summary Generation Using Joint Integer Linear Programming Anubhav Jangra, Adam Jatowt, Mohammad Hasanuzzaman, Sriparna Saha ECIR20 [pdf]
  16. Aspect-Aware Multimodal Summarization for Chinese E-Commerce Products Haoran Li, Peng Yuan, Song Xu, Youzheng Wu, Xiaodong He, Bowen Zhou AAAI20 [pdf] [code]
  17. Convolutional Hierarchical Attention Network for Query-Focused Video Summarization Shuwen Xiao, Zhou Zhao, Zijian Zhang, Xiaohui Yan, Min Yang AAAI20 [pdf]
  18. Multimodal Summarization with Guidance of Multimodal Reference Junnan Zhu, Yu Zhou, Jiajun Zhang, Haoran Li, Chengqing Zong, Changliang Li AAAI20 [pdf]
  19. EmotionCues: Emotion-Oriented Visual Summarization of Classroom Videos Haipeng Zeng, Xinhuan Shu, Yanbang Wang, Yong Wang, Liguo Zhang, Ting-Chuen Pong, Huamin Qu [pdf]
  20. A Survey on Automatic Summarization Using Multi-Modal Summarization System for Asynchronous Collections Shilpadevi Vasant Bhagwat, Sheetal S. Thokal [pdf]
  21. Extractive summarization of documents with images based on multi-modal RNN Jingqiang Chen, Hai Zhuge [pdf]
  22. Keep Meeting Summaries on Topic: Abstractive Multi-Modal Meeting Summarization Manling Li, Lingyu Zhang, Heng Ji, Richard J. Radke ACL19 [pdf]
  23. Multimodal Abstractive Summarization for How2 Videos Shruti Palaskar, Jindřich Libovický, Spandana Gella, Florian Metze ACL19 [pdf]
  24. MSMO: Multimodal Summarization with Multimodal Output Junnan Zhu, Haoran Li, Tianshang Liu, Yu Zhou, Jiajun Zhang, Chengqing Zong EMNLP18 [pdf] [data]
  25. Abstractive Text-Image Summarization Using Multi-Modal Attentional Hierarchical RNN Jingqiang Chen, Hai Zhuge EMNLP18 [pdf]
  26. Multi-modal Sentence Summarization with Modality Attention and Image Filtering Haoran Li, Junnan Zhu, Tianshang Liu, Jiajun Zhang, Chengqing Zong IJCAI18 [pdf]
  27. Multimodal Abstractive Summarization for Open-Domain Videos Jindrich Libovický, Shruti Palaskar, Spandana Gella, Florian Metze NIPS18 [pdf] [data]
  28. Read, Watch, Listen, and Summarize: Multi-Modal Summarization for Asynchronous Text, Image, Audio and Video Haoran Li, Junnan Zhu, Cong Ma, Jiajun Zhang, Chengqing Zong [pdf]
  29. Fusing Verbal and Nonverbal Information for Extractive Meeting Summarization Fumio Nihei, Yukiko I. Nakano, Yutaka Takase GIFT18 [pdf]
  30. Multi-modal Summarization for Asynchronous Collection of Text, Image, Audio and Video Haoran Li, Junnan Zhu, Cong Ma, Jiajun Zhang, Chengqing Zong EMNLP17 [pdf]
  31. Meeting Extracts for Discussion Summarization Based on Multimodal Nonverbal Information Fumio Nihei, Yukiko I. Nakano, Yutaka Takase ICMI16 [pdf]
  32. Summarizing a multimodal set of documents in a Smart Room Maria Fuentes, Horacio Rodríguez, Jordi Turmo LREC12 [pdf]
  33. Multi-modal summarization of key events and top players in sports tournament videos Dian Tjondronegoro, Xiaohui Tao, Johannes Sasongko, Cher Han Lau [pdf]
  34. Multimodal Summarization of Complex Sentences Naushad UzZaman, Jeffrey P. Bigham, James F. Allen [pdf]
  35. Summarization of Multimodal Information Saif Ahmad, Paulo C F de Oliveira, Khurshid Ahmad LREC04 [pdf]
  36. Multimodal Summarization of Meeting Recordings Berna Erol, Dar-Shyang Lee, and Jonathan Hull ICME03 [pdf]

Sentiment Related

  1. Making the Best Use of Review Summary for Sentiment Analysis Sen Yang, Leyang Cui, Jun Xie, Yue Zhang COLING20 [pdf] [code] [bib]
  2. A Unified Dual-view Model for Review Summarization and Sentiment Classification with Inconsistency Loss Hou Pong Chan, Wang Chen, Irwin King SIGIR20 [pdf] [code]
  3. A Hierarchical End-to-End Model for Jointly Improving Text Summarization and Sentiment Classification Shuming Ma, Xu Sun, Junyang Lin, Xuancheng Ren IJCAI18 [pdf]
  4. Two-level Text Summarization from Online News Sources with Sentiment Analysis Tarun B. Mirani, Sreela Sasi IEEE17 [pdf]
  5. Creating Video Summarization From Emotion Perspective Yijie Lan, Shikui Wei, Ruoyu Liu, Yao Zhao ICSP16 [pdf]

Pre-trained Language Model Based

  1. MVP: Multi-task Supervised Pre-training for Natural Language Generation Tianyi Tang, Junyi Li, Wayne Xin Zhao, Ji-Rong Wen [pdf] [code]
    [Abs] Pre-trained language models (PLMs) have achieved notable success in natural language generation (NLG) tasks. Up to now, most of the PLMs are pre-trained in an unsupervised manner using large-scale general corpora. Meanwhile, an increasing number of models pre-trained with less labeled data showcase superior performance compared to unsupervised models. Motivated by the success of supervised pre-training, we propose Multi-task superVised Pre-training (MVP) for natural language generation. For pre-training the text generation model MVP, we collect a labeled pre-training corpus from 45 datasets over seven generation tasks. For each task, we further pre-train specific soft prompts to stimulate the model capacity in performing a specific task. Extensive experiments have demonstrated the effectiveness of our supervised pre-training in a number of NLG tasks, and our general methods achieve state-of-the-art performance on 12 of 17 datasets.
  2. E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao [pdf]
  3. Does Pretraining for Summarization Require Knowledge Transfer? Kundan Krishna, Jeffrey Bigham, Zachary C. Lipton EMNLP 2021 Findings [pdf] [code]
  4. ARMAN: Pre-training with Semantically Selecting and Reordering of Sentences for Persian Abstractive Summarization Alireza Salemi, Emad Kebriaei, Ghazal Neisi Minaei, Azadeh Shakery EMNLP 2021 [pdf] [code]
  5. Leveraging Lead Bias for Zero-shot Abstractive News Summarization Chenguang Zhu, Ziyi Yang, Robert Gmyr, Michael Zeng, Xuedong Huang SIGIR 2021 [pdf]
  6. ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Xuan Ouyang, Dianhai Yu, Hao Tian, Hua Wu, Haifeng Wang [pdf]
  7. BANG: Bridging Autoregressive and Non-autoregressive Generation with Large Scale Pretraining Weizhen Qi, Yeyun Gong, Jian Jiao, Yu Yan, Weizhu Chen, Dayiheng Liu, Kewen Tang, Houqiang Li, Jiusheng Chen, Ruofei Zhang, Ming Zhou, Nan Duan ICML 2021 [pdf] [code]
  8. Fact-level Extractive Summarization with Hierarchical Graph Mask on BERT Ruifeng Yuan, Zili Wang, Wenjie Li COLING20 [pdf] [code]
  9. Towards Zero-Shot Conditional Summarization with Adaptive Multi-Task Fine-Tuning Travis Goodwin, Max Savery, Dina Demner-Fushman Findings of EMNLP [pdf] [code]
  10. Improving Zero and Few-Shot Abstractive Summarization with Intermediate Fine-tuning and Data Augmentation Alexander R. Fabbri, Simeng Han, Haoyuan Li, Haoran Li, Marjan Ghazvininejad, Shafiq Joty, Dragomir Radev, Yashar Mehdad [pdf]
  11. Pre-trained Summarization Distillation Sam Shleifer, Alexander M. Rush [pdf] [code]
  12. Pre-training for Abstractive Document Summarization by Reinstating Source Text Yanyan Zou, Xingxing Zhang, Wei Lu, Furu Wei, Ming Zhou EMNLP20 [pdf] [code]
  13. PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation Bin Bi, Chenliang Li, Chen Wu, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, Luo Si EMNLP20 [pdf]
  14. TED: A Pretrained Unsupervised Summarization Model with Theme Modeling and Denoising Ziyi Yang, Chenguang Zhu, Robert Gmyr, Michael Zeng, Xuedong Huang, Eric Darve Findings of EMNLP20 [pdf]
  15. QURIOUS: Question Generation Pretraining for Text Generation Shashi Narayan, Gonçalo Simoes, Ji Ma, Hannah Craighead, Ryan Mcdonald ACL20 Short [pdf]
  16. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu ICML20 [pdf] [code] (see the gap-sentence sketch after this list)
  17. Abstractive Text Summarization based on Language Model Conditioning and Locality Modeling Dmitrii Aksenov, Julián Moreno-Schneider, Peter Bourgonje, Robert Schwarzenberg, Leonhard Hennig, Georg Rehm LREC20 [pdf]
  18. Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita [pdf]
  19. Learning by Semantic Similarity Makes Abstractive Summarization Better Wonjin Yoon, Yoon Sun Yeo, Minbyul Jeong, Bong-Jun Yi, Jaewoo Kang ICML20 [pdf] [code]
  20. Text Summarization with Pretrained Encoders Yang Liu, Mirella Lapata EMNLP19 [pdf] [code]
  21. HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization Xingxing Zhang, Furu Wei, Ming Zhou ACL19 [pdf]
  22. MASS: Masked Sequence to Sequence Pre-training for Language Generation Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu ICML19 [pdf] [code]
  23. Pretraining-Based Natural Language Generation for Text Summarization Haoyu Zhang, Jianjun Xu, Ji Wang [pdf]
  24. Fine-tune BERT for Extractive Summarization Yang Liu [pdf] [code]
  25. Unified Language Model Pre-training for Natural Language Understanding and Generation Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon NIPS19 [pdf] [code]
  26. Self-Supervised Learning for Contextualized Extractive Summarization Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang ACL19 [pdf] [code]
  27. Efficient Adaptation of Pretrained Transformers for Abstractive Summarization Andrew Hoang, Antoine Bosselut, Asli Celikyilmaz, Yejin Choi [pdf] [code]
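
PEGASUS (entry 16) pre-trains by choosing important "gap sentences", masking them in the input, and generating them as the target. The sketch below is a heavily simplified version of that selection, scoring each sentence by unigram overlap with the rest of the document as a crude stand-in for the paper's ROUGE-based scoring; the mask token name is arbitrary.

```python
# Simplified sketch of PEGASUS-style gap-sentence selection: mask the
# sentences that overlap most with the rest of the document and use them
# as the generation target (unigram overlap stands in for ROUGE).
def gap_sentence_example(sentences, num_gaps=1, mask_token="<mask_1>"):
    def overlap(i):
        words = set(sentences[i].lower().split())
        rest = set(w for j, s in enumerate(sentences) if j != i
                   for w in s.lower().split())
        return len(words & rest) / max(len(words), 1)

    ranked = sorted(range(len(sentences)), key=overlap, reverse=True)
    gaps = set(ranked[:num_gaps])
    source = " ".join(mask_token if i in gaps else s
                      for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(gaps))
    return source, target

src, tgt = gap_sentence_example([
    "The committee approved the new budget.",
    "The budget increases school funding.",
    "Local reporters covered the committee vote.",
])
print(src)
print(tgt)
```

Training a sequence-to-sequence model to reconstruct the target from the masked source is what makes this self-supervised objective resemble downstream abstractive summarization.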

Controllable

  1. Topic-Aware Evaluation and Transformer Methods for Topic-Controllable Summarization Tatiana Passali, Grigorios Tsoumakas [pdf] [code]
    [Abs] Topic-controllable summarization is an emerging research area with a wide range of potential applications. However, existing approaches suffer from significant limitations. First, there is currently no established evaluation metric for this task. Furthermore, existing methods are built upon recurrent architectures, which can significantly limit their performance compared to more recent Transformer-based architectures, and they also require modifications to the model's architecture for controlling the topic. In this work, we propose a new topic-oriented evaluation measure to automatically evaluate the generated summaries based on the topic affinity between the generated summary and the desired topic. We also conducted a user study that validates the reliability of this measure. Finally, we propose simple, yet powerful methods for topic-controllable summarization either incorporating topic embeddings into the model's architecture or employing control tokens to guide the summary generation. Experimental results show that control tokens can achieve better performance compared to more complicated embedding-based approaches while being at the same time significantly faster.
  2. Length Control in Abstractive Summarization by Pretraining Information Selection Yizhu Liu, Qi Jia, Kenny Zhu ACL 2022 [pdf] [code]
    [Abs] Previous length-controllable summarization models mostly control lengths at the decoding stage, whereas the encoding or the selection of information from the source document is not sensitive to the designed length. They also tend to generate summaries as long as those in the training data. In this paper, we propose a length-aware attention mechanism (LAAM) to adapt the encoding of the source based on the desired length. Our approach works by training LAAM on a summary length balanced dataset built from the original training data, and then fine-tuning as usual. Results show that this approach is effective in generating high-quality summaries with desired lengths and even those short lengths never seen in the original training set.
  3. A Character-Level Length-Control Algorithm for Non-Autoregressive Sentence Summarization Puyuan Liu, Xiang Zhang, Lili Mou [pdf] [code]
  4. EntSUM: A Data Set for Entity-Centric Summarization Mounica Maddela, Mayank Kulkarni, Daniel Preotiuc-Pietro ACL 2022 [pdf] [code] [data]
  5. Reinforced Abstractive Summarization with Adaptive Length Controlling Mingyang Song, Yi Feng, Liping Jing [pdf]
  6. HydraSum -- Disentangling Stylistic Features in Text Summarization using Multi-Decoder Models Tanya Goyal, Nazneen Fatema Rajani, Wenhao Liu, Wojciech Kryściński [pdf]
  7. RetrievalSum: A Retrieval Enhanced Framework for Abstractive Summarization Chenxin An, Ming Zhong, Zhichao Geng, Jianqiang Yang, Xipeng Qiu [pdf]
  8. Aspect-Controllable Opinion Summarization Reinald Kim Amplayo, Stefanos Angelidis, Mirella Lapata EMNLP 2021 [pdf] [code]
  9. Extract, Denoise, and Enforce: Evaluating and Predicting Lexical Constraints for Conditional Text Generation Yuning Mao, Wenchang Ma, Deren Lei, Xiang Ren [pdf] [code]
  10. Planning with Learned Entity Prompts for Abstractive Summarization Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simoes, Ryan McDonald TACL [pdf]
  11. GSum: A General Framework for Guided Neural Abstractive Summarization Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig NAACL21 [pdf] [code]
  12. Abstractive summarization with combination of pre-trained sequence-to-sequence and saliency models Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita [pdf]
  13. Self-Supervised and Controlled Multi-Document Opinion Summarization Hady Elsahar, Maximin Coavoux, Jos Rozen, Matthias Gallé EACL 2021 [pdf]
  14. Controllable Summarization with Constrained Markov Decision Process Hou Pong Chan, Lu Wang, Irwin King TACL 2021 [pdf] [code]
  15. LenAtten: An Effective Length Controlling Unit For Text Summarization Zhongyi Yu, Zhenghao Wu, Hao Zheng, Zhe XuanYuan, Jefferson Fong, Weifeng Su Findings of ACL 2021 (short) [pdf] [code]
  16. Controllable Abstractive Dialogue Summarization with Sketch Supervision Chien-Sheng Wu, Linqing Liu, Wenhao Liu, Pontus Stenetorp, Caiming Xiong ACL-Findings 2021 [pdf] [code]
  17. Enhancing Factual Consistency of Abstractive Summarization Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, Meng Jiang NAACL21 [pdf]
  18. Inference Time Style Control for Summarization Shuyang Cao, Lu Wang NAACL21 short [pdf] [code]
  19. CTRLsum: Towards Generic Controllable Text Summarization Junxian He, Wojciech Kryściński, Bryan McCann, Nazneen Rajani, Caiming Xiong [pdf] [code]
  20. Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation Yuning Mao, Xiang Ren, Heng Ji, Jiawei Han [pdf]
  21. Keywords-Guided Abstractive Sentence Summarization Haoran Li, Junnan Zhu, Jiajun Zhang, Chengqing Zong, Xiaodong He AAAI20 [pdf]
  22. SemSUM: Semantic Dependency Guided Neural Abstractive Summarization Hanqi Jin, Tianming Wang, Xiaojun Wan AAAI2020 [pdf] [code]
  23. Interpretable Multi-Headed Attention for Abstractive Summarization at Controllable Lengths Ritesh Sarkhel, Moniba Keymanesh, Arnab Nandi, Srinivasan Parthasarathy COLING20 [pdf]
  24. Controllable Abstractive Sentence Summarization with Guiding Entities Changmeng Zheng, Yi Cai, Guanjie Zhang, Qing Li COLING20 [pdf] [code]
  25. Summarizing Text on Any Aspects: A Knowledge-Informed Weakly-Supervised Approach Bowen Tan, Lianhui Qin, Eric P. Xing, Zhiting Hu EMNLP20 Short [pdf] [code]
  26. Length-controllable Abstractive Summarization by Guiding with Summary Prototype Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Atsushi Otsuka, Hisako Asano, Junji Tomita, Hiroyuki Shindo, Yuji Matsumoto [pdf]
  27. The Summary Loop: Learning to Write Abstractive Summaries Without Examples Philippe Laban, Andrew Hsi, John Canny, Marti A. Hearst ACL20 [pdf]
  28. Hooks in the Headline: Learning to Generate Headlines with Controlled Styles Di Jin, Zhijing Jin, Joey Tianyi Zhou, Lisa Orii, Peter Szolovits ACL20 [pdf] [code]
  29. BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization Kai Wang, Xiaojun Quan, Rui Wang ACL19 [pdf] [code]
  30. Improving Abstractive Document Summarization with Salient Information Modeling Yongjian You, Weijia Jia, Tianyi Liu, Wenmian Yang ACL19 [pdf] [code]
  31. Positional Encoding to Control Output Sequence Length Sho Takase, Naoaki Okazaki NAACL19 [pdf] [code]
  32. Query Focused Abstractive Summarization: Incorporating Query Relevance, Multi-Document Coverage, and Summary Length Constraints into seq2seq Models Tal Baumel, Matan Eyal, Michael Elhadad [pdf]
  33. Guiding Generation for Abstractive Text Summarization based on Key Information Guide Network Chenliang Li, Weiran Xu, Si Li, Sheng Gao NAACL18 [pdf]
  34. Controllable Abstractive Summarization Angela Fan, David Grangier, Michael Auli ACL2018 Workshop [pdf] (see the length-token sketch after this list)
  35. Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei ACL18 [pdf]
  36. Controlling Length in Abstractive Summarization Using a Convolutional Neural Network Yizhu Liu, Zhiyi Luo, Kenny Zhu EMNLP18 [pdf] [code]
  37. Generating Wikipedia By Summarizing Long Sequence Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer ICLR18 [pdf] [code]
  38. Controlling Output Length in Neural Encoder-Decoders Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, Manabu Okumura EMNLP16 [pdf] [code]
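
Entry 34 (Fan et al.) popularized the simplest control mechanism in this list: discretize the attribute, here summary length, into buckets and prepend a special token to the source during training, so that supplying a chosen token at inference steers generation. A minimal preprocessing sketch follows; the bucket boundaries and token names are arbitrary illustrations.

```python
# Minimal sketch of control-token length conditioning: tag each training
# source with the length bucket of its reference summary.
LENGTH_BUCKETS = [
    (0, 30, "<len_short>"),
    (30, 60, "<len_medium>"),
    (60, 10_000, "<len_long>"),
]

def length_token(summary: str) -> str:
    n = len(summary.split())
    for lo, hi, tok in LENGTH_BUCKETS:
        if lo <= n < hi:
            return tok
    return LENGTH_BUCKETS[-1][2]

def make_training_pair(document: str, summary: str):
    # The model learns to associate the prepended token with the target length.
    return f"{length_token(summary)} {document}", summary

src, tgt = make_training_pair(
    "Some long news article text ...",
    "A ten word summary of the article goes right here.",
)
print(src)
```

At inference, prepending `<len_short>` (for example) asks the trained model for a summary from that bucket; the same trick extends to other attributes such as topic or style, as in the control-token method of entry 1.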

Abstractive

  1. Semantic-Preserving Abstractive Text Summarization with Siamese Generative Adversarial Net Xin Sheng, Linli Xu, Yinlong Xu, Deqiang Jiang, Bo Ren Findings of NAACL 2022 [pdf]
    [Abs] We propose a novel siamese generative adversarial net for abstractive text summarization (SSPGAN), which can preserve the main semantics of the source text. Different from previous generative adversarial net based methods, SSPGAN is equipped with a siamese semantic-preserving discriminator, which can not only be trained to discriminate the machine-generated summaries from the human-summarized ones, but also ensure the semantic consistency between the source text and target summary. As a consequence of the min-max game between the generator and the siamese semantic-preserving discriminator, the generator can generate a summary that conveys the key content of the source text more accurately. Extensive experiments on several text summarization benchmarks in different languages demonstrate that the proposed model can achieve significant improvements over the state-of-the-art methods.
  2. ExtraPhrase: Efficient Data Augmentation for Abstractive Summarization Mengsay Loem, Sho Takase, Masahiro Kaneko, Naoaki Okazaki NAACL 2022 Student Research Workshop [pdf] [code]
    [Abs] Neural models trained with large amounts of parallel data have achieved impressive performance in abstractive summarization tasks. However, large-scale parallel corpora are expensive and challenging to construct. In this work, we introduce a low-cost and effective strategy, ExtraPhrase, to augment training data for abstractive summarization tasks. ExtraPhrase constructs pseudo training data in two steps: extractive summarization and paraphrasing. We extract major parts of an input text in the extractive summarization step and obtain its diverse expressions with the paraphrasing step. Through experiments, we show that ExtraPhrase improves the performance of abstractive summarization tasks by more than 0.50 points in ROUGE scores compared to the setting without data augmentation. ExtraPhrase also outperforms existing methods such as back-translation and self-training. We also show that ExtraPhrase is significantly effective when the amount of genuine training data is remarkably small, i.e., a low-resource setting. Moreover, ExtraPhrase is more cost-efficient than the existing approaches.
  3. BRIO: Bringing Order to Abstractive Summarization Yixin Liu, Pengfei Liu, Dragomir Radev, Graham Neubig ACL 2022 [pdf] [code]
    [Abs] Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model will assign all the probability mass to the reference summary. This assumption may lead to performance degradation during inference, where the model needs to compare several system-generated (candidate) summaries that have deviated from the reference summary. To address this problem, we propose a novel training paradigm which assumes a non-deterministic distribution so that different candidate summaries are assigned probability mass according to their quality. Our method achieves a new state-of-the-art result on the CNN/DailyMail (47.78 ROUGE-1) and XSum (49.07 ROUGE-1) datasets. Further analysis also shows that our model can estimate probabilities of candidate summaries that are more correlated with their level of quality.
  4. SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization Mathieu Ravaut, Shafiq Joty, Nancy F. Chen ACL 2022 [pdf] [code]
    [Abs] Sequence-to-sequence neural networks have recently achieved great success in abstractive summarization, especially through fine-tuning large pre-trained language models on the downstream dataset. These models are typically decoded with beam search to generate a unique summary. However, the search space is very large, and with the exposure bias, such decoding is not optimal. In this paper, we show that it is possible to directly train a second-stage model performing re-ranking on a set of summary candidates. Our mixture-of-experts SummaReranker learns to select a better candidate and consistently improves the performance of the base model. With a base PEGASUS, we push ROUGE scores by 5.44% on CNN-DailyMail (47.16 ROUGE-1), 1.31% on XSum (48.12 ROUGE-1) and 9.34% on Reddit TIFU (29.83 ROUGE-1), reaching a new state-of-the-art. Our code and checkpoints will be available at https://github.com/ntunlp/SummaReranker.
  5. Adaptive Beam Search to Enhance On-device Abstractive Summarization Harichandana B S S, Sumit Kumar IEEE INDICON 2021 [pdf]
  6. PLSUM: Generating PT-BR Wikipedia by Summarizing Multiple Websites André Seidel Oliveira, Anna Helena Reali Costa ENIAC 2021 [pdf]
  7. Pointer over Attention: An Improved Bangla Text Summarization Approach Using Hybrid Pointer Generator Network Nobel Dhar, Gaurob Saha, Prithwiraj Bhattacharjee, Avi Mallick, Md Saiful Islam [pdf]
  8. Template-aware Attention Model for Earnings Call Report Generation Yangchen Huang, Prashant K. Dhingra, Seyed Danial Mohseni Taheri EMNLP 2021 | newsum [pdf]
  9. Rewards with Negative Examples for Reinforced Topic-Focused Abstractive Summarization Khalil Mrini, Can Liu, Markus Dreyer EMNLP 2021 | newsum [pdf]
  10. Knowledge and Keywords Augmented Abstractive Sentence Summarization Shuo Guan, Ping Zhu, Zhihua Wei EMNLP 2021 | newsum [pdf] [code]
  11. Sentence-level Planning for Especially Abstractive Summarization Andreas Marfurt, James Henderson EMNLP 2021 | newsum [pdf] [code]
  12. Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization Haoran Li, Song Xu, Peng Yuan, Yujia Wang, Youzheng Wu, Xiaodong He, Bowen Zhou EMNLP 2021 [pdf] [code]
  13. Enhance Long Text Understanding via Distilled Gist Detector from Abstractive Summarization Yan Liu, Yazheng Yang [pdf]
  14. VieSum: How Robust Are Transformer-based Models on Vietnamese Summarization? Hieu Nguyen, Long Phan, James Anibal, Alec Peltekian, Hieu Tran [pdf]
  15. Enriching and Controlling Global Semantics for Text Summarization Thong Nguyen, Anh Tuan Luu, Truc Lu, Tho Quan EMNLP 2021 [pdf]
  16. Augmented Abstractive Summarization With Document-Level Semantic Graph Qiwei Bi, Haoyuan Li, Kun Lu, Hanfang Yang Journal of Data Science [pdf]
  17. ARMAN: Pre-training with Semantically Selecting and Reordering of Sentences for Persian Abstractive Summarization Alireza Salemi, Emad Kebriaei, Ghazal Neisi Minaei, Azadeh Shakery [pdf] [data]
  18. Subjective Bias in Abstractive Summarization Lei Li, Wei Liu, Marina Litvak, Natalia Vanetik, Jiacheng Pei, Yinan Liu, Siya Qi [pdf] [code]
  19. Neural Abstractive Unsupervised Summarization of Online News Discussions Ignacio Tampe Palma, Marcelo Mendoza, Evangelos Milios [pdf]
  20. Attention Temperature Matters in Abstractive Summarization Distillation Shengqiang Zhang, Xingxing Zhang, Hangbo Bao, Furu Wei ACL 2022 [pdf] [code]
    [Abs] Recent progress of abstractive text summarization largely relies on large pre-trained sequence-to-sequence Transformer models, which are computationally expensive. This paper aims to distill these large models into smaller ones for faster inference and with minimal performance loss. Pseudo-labeling based methods are popular in sequence-to-sequence model distillation. In this paper, we find simply manipulating attention temperatures in Transformers can make pseudo labels easier to learn for student models. Our experiments on three summarization datasets show our proposed method consistently improves vanilla pseudo-labeling based methods. Further empirical analysis shows that both pseudo labels and summaries produced by our students are shorter and more abstractive.
  21. BASS: Boosting Abstractive Summarization with Unified Semantic Graph Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Ziqiang Cao, Sujian Li, Hua Wu, Haifeng Wang ACL21 [pdf]
  22. Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization Yichen Jiang, Asli Celikyilmaz, Paul Smolensky, Paul Soulos, Sudha Rao, Hamid Palangi, Roland Fernandez, Caitlin Smith, Mohit Bansal, Jianfeng Gao NAACL21 [pdf] [code]
  23. Uncertainty-Aware Abstractive Summarization Alexios Gidiotis, Grigorios Tsoumakas [pdf]
  24. What's in a Summary? Laying the Groundwork for Advances in Hospital-Course Summarization Griffin Adams, Emily Alsentzer, Mert Ketenci, Jason Zucker, Noémie Elhadad NAACL21 [pdf]
  25. Generating abstractive summaries of Lithuanian news articles using a transformer model Lukas Stankevičius, Mantas Lukoševičius [pdf]
  26. Summarization, Simplification, and Generation: The Case of Patents Silvia Casola, Alberto Lavelli [pdf]
  27. Quantifying Appropriateness of Summarization Data for Curriculum Learning Ryuji Kano, Takumi Takahashi, Toru Nishino, Motoki Taniguchi, Tomoki Taniguchi, Tomoko Ohkuma EACL21 [pdf]
  28. Text Summarization of Czech News Articles Using Named Entities Petr Marek, Štěpán Müller, Jakub Konrád, Petr Lorenc, Jan Pichl, Jan Šedivý Journal [pdf]
  29. Planning with Entity Chains for Abstractive Summarization Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simoes, Ryan McDonald [pdf]
  30. Attention Head Masking for Inference Time Content Selection in Abstractive Summarization Shuyang Cao, Lu Wang NAACL21 short [pdf] [code]
  31. A New Approach to Overgenerating and Scoring Abstractive Summaries Kaiqiang Song, Bingqing Wang, Zhe Feng, Fei Liu NAACL21 [pdf] [code]
  32. Exploring Explainable Selection to Control Abstractive Summarization Wang Haonan, Gao Yang, Bai Yu, Mirella Lapata, Huang Heyan AAAI21 [pdf] [code]
  33. Friendly Topic Assistant for Transformer Based Abstractive Summarization Zhengjue Wang, Zhibin Duan, Hao Zhang, Chaojie Wang, Long Tian, Bo Chen, Mingyuan Zhou EMNLP20 [pdf] [code]
  34. Neural Abstractive Text Summarizer for Telugu Language Mohan Bharath B, Aravindh Gowtham B, Akhil M ICSCSP20 [pdf]
  35. Topic-Aware Abstractive Text Summarization Chujie Zheng, Kunpeng Zhang, Harry Jiannan Wang, Ling Fan [pdf] [code]
  36. Multi-hop Inference for Question-driven Summarization Yang Deng, Wenxuan Zhang, Wai Lam EMNLP20 [pdf]
  37. Quantitative Argument Summarization and Beyond-Cross-Domain Key Point Analysis Roy Bar-Haim, Yoav Kantor, Lilach Eden, Roni Friedman, Dan Lahav, Noam Slonim EMNLP20 [pdf]
  38. Learning to Fuse Sentences with Transformers for Summarization Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang, Fei Liu EMNLP20 short [pdf] [code]
  39. A Cascade Approach to Neural Abstractive Summarization with Content Selection and Fusion Logan Lebanoff, Franck Dernoncourt, Doo Soon Kim, Walter Chang, Fei Liu AACL20 [pdf] [code]
  40. AutoSurvey: Automatic Survey Generation based on a Research Draft Hen-Hsen Huang IJCAI20 [pdf] [code]
  41. Neural Abstractive Summarization with Structural Attention Tanya Chowdhury, Sachin Kumar, Tanmoy Chakraborty IJCAI20 [pdf]
  42. A Unified Model for Financial Event Classification, Detection and Summarization Quanzhi Li, Qiong Zhang IJCAI20 Special Track on AI in FinTech [pdf]
  43. Discriminative Adversarial Search for Abstractive Summarization Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano ICML20 [pdf]
  44. Controlling the Amount of Verbatim Copying in Abstractive Summarization Kaiqiang Song, Bingqing Wang, Zhe Feng, Liu Ren, Fei Liu AAAI20 [pdf] [code]
  45. GRET: Global Representation Enhanced Transformer Rongxiang Weng, Haoran Wei, Shujian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jiajun Chen AAAI20 [pdf]
  46. Abstractive Summarization of Spoken and Written Instructions with BERT Alexandra Savelieva, Bryan Au-Yeung, Vasanth Ramani KDD Converse 2020 [pdf]
  47. Concept Pointer Network for Abstractive Summarization Wang Wenbo, Gao Yang, Huang Heyan, Zhou Yuxiang EMNLP19 [pdf] [code]
  48. Co-opNet: Cooperative Generator–Discriminator Networks for Abstractive Summarization with Narrative Flow Saadia Gabriel, Antoine Bosselut, Ari Holtzman, Kyle Lo, Asli Celikyilmaz, Yejin Choi [pdf]
  49. Contrastive Attention Mechanism for Abstractive Sentence Summarization Xiangyu Duan, Hongfei Yu, Mingming Yin, Min Zhang, Weihua Luo, Yue Zhang EMNLP19 [pdf] [code]
  50. An Entity-Driven Framework for Abstractive Summarization Eva Sharma, Luyang Huang, Zhe Hu, Lu Wang EMNLP19 [pdf] [code]
  51. Abstract Text Summarization: A Low Resource Challenge Shantipriya Parida, Petr Motlicek EMNLP19 [pdf] [code]
  52. Attention Optimization for Abstractive Document Summarization Min Gui, Junfeng Tian, Rui Wang, Zhenglu Yang EMNLP19 [pdf] [code]
  53. Scoring Sentence Singletons and Pairs for Abstractive Summarization Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, Fei Liu ACL19 [pdf] [code]
  54. Inducing Document Structure for Aspect-based Summarization Lea Frermann, Alexandre Klementiev ACL19 [pdf] [code]
  55. Generating Summaries with Topic Templates and Structured Convolutional Decoders Laura Perez-Beltrachini, Yang Liu, Mirella Lapata ACL19 [pdf] [code]
  56. Summary Refinement through Denoising Nikola I. Nikolov, Alessandro Calmanovici, Richard H.R. Hahnloser RANLP19 [pdf] [code]
  57. Closed-Book Training to Improve Summarization Encoder Memory Yichen Jiang, Mohit Bansal EMNLP18 [pdf]
  58. Improving Neural Abstractive Document Summarization with Structural Regularization Wei Li, Xinyan Xiao, Yajuan Lyu, Yuanzhuo Wang EMNLP18 [pdf]
  59. Bottom-Up Abstractive Summarization Sebastian Gehrmann, Yuntian Deng, Alexander M. Rush EMNLP18 [pdf] [code]
  60. A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, Min Sun ACL18 [pdf]
  61. Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation Han Guo, Ramakanth Pasunuru, Mohit Bansal ACL18 [pdf]
  62. Abstractive Document Summarization via Bidirectional Decoder Xin Wan, Chen Li, Ruijia Wang, Ding Xiao, Chuan Shi ADMA18 [pdf]
  63. Entity Commonsense Representation for Neural Abstractive Summarization Reinald Kim Amplayo, Seonjae Lim, Seung-won Hwang NAACL18 [pdf]
  64. Get To The Point: Summarization with Pointer-Generator Networks Abigail See, Peter J. Liu, Christopher D. Manning ACL17 [pdf] [code]
  65. Selective Encoding for Abstractive Sentence Summarization Qingyu Zhou, Nan Yang, Furu Wei, Ming Zhou ACL17 [pdf]
  66. Abstractive Document Summarization with a Graph-Based Attentional Neural Model Jiwei Tan, Xiaojun Wan, Jianguo Xiao ACL17 [pdf]
  67. Toward Abstractive Summarization Using Semantic Representations Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, Noah A. Smith NAACL15 [pdf]
  68. Abstractive Meeting Summarization with Entailment and Fusion Yashar Mehdad, Giuseppe Carenini, Frank Tompa, Raymond T. Ng ENLG13 [pdf]
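
Entry 3 above (BRIO) trains the summarizer to assign candidate summaries probability mass in order of their quality. The sketch below shows one way to write such a pairwise ranking loss, assuming the candidates arrive pre-sorted from best to worst ROUGE and that the caller supplies each candidate's total log-probability under the model; it illustrates the idea rather than reproducing the released BRIO code.

```python
import torch

def ranking_loss(cand_log_probs: torch.Tensor,
                 cand_lengths: torch.Tensor,
                 margin: float = 0.001) -> torch.Tensor:
    """Pairwise margin loss over candidates sorted from highest to lowest
    quality (a simplified sketch in the spirit of BRIO, entry 3)."""
    # Length-normalize so longer candidates are not unfairly penalized.
    scores = cand_log_probs / cand_lengths
    loss = torch.zeros((), dtype=scores.dtype)
    n = scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # A better candidate (i) should outscore a worse one (j) by a
            # margin that grows with the rank gap between them.
            loss = loss + torch.clamp(scores[j] - scores[i] + margin * (j - i), min=0)
    return loss

# Toy usage: four candidates already sorted by ROUGE.
log_probs = torch.tensor([-10.0, -11.0, -9.5, -12.0])
lengths = torch.tensor([20.0, 22.0, 18.0, 25.0])
print(ranking_loss(log_probs, lengths))
```

In the paper this ranking term is combined with the usual cross-entropy loss on the reference summary, so the model keeps its generation ability while learning to order candidates.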
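
Entry 20 (Attention Temperature Matters) distills a large summarizer by rescaling the teacher's attention with a temperature before producing pseudo-labels. The snippet below only shows where such a temperature enters scaled dot-product attention (single head, no masking, all names assumed); the distillation schedule itself follows the paper, not this sketch.

```python
import torch
import torch.nn.functional as F

def attention_with_temperature(q, k, v, tau: float = 1.0):
    """Scaled dot-product attention whose logits are additionally divided
    by a temperature tau: tau > 1 flattens the attention distribution,
    tau < 1 sharpens it. Generic sketch, not the authors' code."""
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5
    weights = F.softmax(logits / tau, dim=-1)  # temperature applied here
    return weights @ v

q, k, v = torch.randn(1, 4, 8), torch.randn(1, 6, 8), torch.randn(1, 6, 8)
print(attention_with_temperature(q, k, v, tau=2.0).shape)  # (1, 4, 8)
```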

Graph-Based

  1. Hierarchical Heterogeneous Graph Attention Network for Syntax-Aware Summarization Zixing Song, Irwin King AAAI 2022 [pdf]
  2. Summarization with Graphical Elements Maartje ter Hoeve, Julia Kiseleva, Maarten de Rijke [pdf] [code]
  3. HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization Ye Liu, Jian-Guo Zhang, Yao Wan, Congying Xia, Lifang He, Philip S. Yu EMNLP 2021 short [pdf]
  4. Centrality Meets Centroid: A Graph-based Approach for Unsupervised Document Summarization Haopeng Zhang, Jiawei Zhang [pdf]
  5. Neural Extractive Summarization with Hierarchical Attentive Heterogeneous Graph Network Ruipeng Jia, Yanan Cao, Hengzhu Tang, Fang Fang, Cong Cao, Shi Wang EMNLP20 [pdf] [code]
  6. Enhancing Extractive Text Summarization with Topic-Aware Graph Neural Networks Peng Cui, Le Hu, Yuanchao Liu COLING20 [pdf]
  7. Heterogeneous Graph Neural Networks for Extractive Document Summarization Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, Xuanjing Huang ACL20 [pdf] [code]
  8. Structured Neural Summarization Patrick Fernandes, Miltiadis Allamanis, Marc Brockschmidt ICLR19 [pdf] [code]
  9. Hierarchical Transformers for Multi-Document Summarization Yang Liu, Mirella Lapata ACL19 [pdf] [code]
  10. Learning to Create Sentence Semantic Relation Graphs for Multi-Document Summarization Diego Antognini, Boi Faltings EMNLP19 [pdf]
  11. Graph-based Neural Multi-Document Summarization Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, Dragomir Radev CoNLL17 [pdf]
  12. Abstractive Document Summarization with a Graph-Based Attentional Neural Model Jiwei Tan, Xiaojun Wan, Jianguo Xiao ACL17 [pdf]
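
Most entries in this section share one skeleton: build a graph whose nodes are text units (sentences, words, discourse units, heterogeneous mixtures), connect similar nodes, and score nodes by centrality or with a graph neural network. As a generic baseline, not any single paper's model, here is a minimal degree-centrality extractor over TF-IDF cosine similarities; the similarity threshold and summary size are arbitrary assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def centrality_extract(sentences, k=2, threshold=0.1):
    """Rank sentences by weighted degree centrality in a cosine-similarity
    graph; a toy stand-in for the graph-based models listed above."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, 0.0)                 # ignore self-similarity
    adj = np.where(sim > threshold, sim, 0.0)  # keep strong edges only
    centrality = adj.sum(axis=1)               # weighted degree per sentence
    top = np.argsort(-centrality)[:k]
    return [sentences[i] for i in sorted(top)]  # preserve document order

doc = ["The model encodes each sentence of the document.",
       "A sentence graph connects sentences that are similar.",
       "Central sentences in the graph summarize the document.",
       "Unrelated trivia rarely becomes central in the graph."]
print(centrality_extract(doc, k=2))
```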

Unsupervised

  1. Learning Non-Autoregressive Models from Search for Unsupervised Sentence Summarization Puyuan Liu, Chenyang Huang, Lili Mou ACL 2022 [pdf] [code]
    [Abs] Text summarization aims to generate a short summary for an input text. In this work, we propose a Non-Autoregressive Unsupervised Summarization (NAUS) approach, which does not require parallel data for training. Our NAUS first performs edit-based search towards a heuristically defined score, and generates a summary as pseudo-groundtruth. Then, we train an encoder-only non-autoregressive Transformer based on the search result. We also propose a dynamic programming approach for length-control decoding, which is important for the summarization task. Experiments on two datasets show that NAUS achieves state-of-the-art performance for unsupervised summarization, yet largely improving inference efficiency. Further, our algorithm is able to perform explicit length-transfer summary generation.
  2. Unsupervised Extractive Opinion Summarization Using Sparse Coding Somnath Basu Roy Chowdhury, Chao Zhao, Snigdha Chaturvedi ACL 2022 [pdf] [code]
    [Abs] Opinion summarization is the task of automatically generating summaries that encapsulate information expressed in multiple user reviews. We present Semantic Autoencoder (SemAE) to perform extractive opinion summarization in an unsupervised manner. SemAE uses dictionary learning to implicitly capture semantic information from the review text and learns a latent representation of each sentence over semantic units. Our extractive summarization algorithm leverages the representations to identify representative opinions among hundreds of reviews. SemAE is also able to perform controllable summarization to generate aspect-specific summaries using only a few samples. We report strong performance on SPACE and AMAZON datasets and perform experiments to investigate the functioning of our model.
  3. Want To Reduce Labeling Cost? GPT-3 Can Help Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, Michael Zeng Findings of EMNLP 2021 [pdf]
  4. Improving Unsupervised Extractive Summarization with Facet-Aware Modeling Xinnian Liang, Shuangzhi Wu, Mu Li, Zhoujun Li ACL 2021 Findings [pdf] [code]
  5. MRCBert: A Machine Reading Comprehension Approach for Unsupervised Summarization Saurabh Jain, Guokai Tang, Lim Sze Chi [pdf] [code]
  6. Centrality Meets Centroid: A Graph-based Approach for Unsupervised Document Summarization Haopeng Zhang, Jiawei Zhang [pdf]
  7. Unsupervised Opinion Summarization with Content Planning Reinald Kim Amplayo, Stefanos Angelidis, Mirella Lapata AAAI21 [pdf] [code]
  8. Biased TextRank: Unsupervised Graph-Based Content Extraction Ashkan Kazemi, Verónica Pérez-Rosas, Rada Mihalcea COLING20 [pdf] [code]
  9. Unsupervised Extractive Summarization by Pre-training Hierarchical Transformers Shusheng Xu, Xingxing Zhang, Yi Wu, Furu Wei, Ming Zhou [pdf] [code]
  10. Q-learning with Language Model for Edit-based Unsupervised Summarization Ryosuke Kohita, Akifumi Wachi, Yang Zhao, Ryuki Tachibana EMNLP20 [pdf] [code]
  11. Abstractive Document Summarization without Parallel Data Nikola I. Nikolov, Richard H.R. Hahnloser LREC20 [pdf] [code]
  12. Unsupervised Neural Single-Document Summarization of Reviews via Learning Latent Discourse Structure and its Ranking Masaru Isonuma, Junichiro Mori, Ichiro Sakata ACL19 [pdf] [code]
  13. Sentence Centrality Revisited for Unsupervised Summarization Hao Zheng, Mirella Lapata ACL19 [pdf] [code]
  14. Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova, Katja Markert ACL20 [pdf] [code]
  15. SummAE: Zero-Shot Abstractive Text Summarization using Length-Agnostic Auto-Encoders Peter J. Liu, Yu-An Chung, Jie Ren [pdf] [code]
  16. MeanSum : A Neural Model for Unsupervised Multi-Document Abstractive Summarization Eric Chu, Peter J. Liu ICML19 [pdf] [code]
  17. SEQ3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, Alexandros Potamianos NAACL19 [pdf] [code]
  18. Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks Yaushian Wang, Hung-Yi Lee EMNLP18 [pdf] [code]
  19. Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, Jean-Pierre Lorré ACL18 [pdf] [code]
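
Entry 1 (NAUS) first runs an edit-based search toward a heuristically defined score and treats the search output as pseudo ground truth for a non-autoregressive student. The toy sketch below hill-climbs over word-level keep/drop edits with an assumed scorer (distinct-word coverage plus a length penalty); the paper's actual objective also includes a language-model fluency term, and its search differs in detail.

```python
import random

def heuristic_score(summary_words, source_vocab, budget):
    """Assumed stand-in objective: reward coverage of distinct source
    words, penalize exceeding the length budget."""
    coverage = len(set(summary_words) & source_vocab) / len(source_vocab)
    over_budget = max(0, len(summary_words) - budget)
    return coverage - 0.2 * over_budget

def edit_search(source, budget=8, steps=300, seed=0):
    """Hill-climbing over keep/drop edits, in the spirit of the search
    stage of entry 1; a toy sketch, not the paper's algorithm."""
    rng = random.Random(seed)
    words = source.split()
    vocab = set(words)
    keep = [rng.random() < 0.5 for _ in words]
    def summary():
        return [w for w, kept in zip(words, keep) if kept]
    best = heuristic_score(summary(), vocab, budget)
    for _ in range(steps):
        i = rng.randrange(len(words))
        keep[i] = not keep[i]                 # propose a single edit
        score = heuristic_score(summary(), vocab, budget)
        if score >= best:
            best = score                      # accept improving edits
        else:
            keep[i] = not keep[i]             # revert otherwise
    return " ".join(summary())

print(edit_search("the quick brown fox jumps over the lazy dog near the quiet river"))
```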

Concept-map-based

  1. Fast Concept Mention Grouping for Concept Map–based Multi-Document Summarization Tobias Falke, Iryna Gurevych NAACL19 [pdf] [code]
  2. Bringing Structure into Summaries : Crowdsourcing a Benchmark Corpus of Concept Maps Tobias Falke, Iryna Gurevych EMNLP17 [pdf] [code]

Timeline

  1. Joint Learning-based Heterogeneous Graph Attention Network for Timeline Summarization Jingyi You, Dongyuan Li, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura NAACL 2022 [pdf] [data]
    [Abs] Previous studies on the timeline summarization (TLS) task ignored the information interaction between sentences and dates, and adopted pre-defined unlearnable representations for them. They also considered date selection and event detection as two independent tasks, which makes it impossible to integrate their advantages and obtain a globally optimal summary. In this paper, we present a joint learning-based heterogeneous graph attention network for TLS (HeterTls), in which date selection and event detection are combined into a unified framework to improve the extraction accuracy and remove redundant sentences simultaneously. Our heterogeneous graph involves multiple types of nodes, the representations of which are iteratively learned across the heterogeneous graph attention layer. We evaluated our model on four datasets, and found that it significantly outperformed the current state-of-the-art baselines with regard to ROUGE scores and date selection metrics.
  2. Updated Headline Generation: Creating Updated Summaries for Evolving News Stories Sheena Panthaplackel, Adrian Benton, Mark Dredze ACL 2022 [pdf] [code]
    [Abs] We propose the task of updated headline generation, in which a system generates a headline for an updated article, considering both the previous article and headline. The system must identify the novel information in the article update, and modify the existing headline accordingly. We create data for this task using the NewsEdits corpus by automatically identifying contiguous article versions that are likely to require a substantive headline update. We find that models conditioned on the prior headline and body revisions produce headlines judged by humans to be as factual as gold headlines while making fewer unnecessary edits compared to a standard headline generation model. Our experiments establish benchmarks for this new contextual summarization task.
  3. Abstractive summarization of hospitalisation histories with transformer networks Alexander Yalunin, Dmitriy Umerenkov, Vladimir Kokh [pdf]
  4. Follow the Timeline! Generating Abstractive and Extractive Timeline Summary in Chronological Order Xiuying Chen, Mingzhe Li, Shen Gao, Zhangming Chan, Dongyan Zhao, Xin Gao, Xiangliang Zhang, Rui Yan TOIS [pdf] [data]
  5. Multi-TimeLine Summarization (MTLS): Improving Timeline Summarization by Generating Multiple Summaries Yi Yu, Adam Jatowt, Antoine Doucet, Kazunari Sugiyama, Masatoshi Yoshikawa ACL 2021 [pdf] [data]
  6. Summarize Dates First: A Paradigm Shift in Timeline Summarization Moreno La Quatra, Luca Cagliero, Elena Baralis, Alberto Messina, Maurizio Montagnuolo SIGIR 2021 [pdf] [data]
  7. Examining the State-of-the-Art in News Timeline Summarization Demian Gholipour Ghalandari, Georgiana Ifrim ACL20 [pdf] [code]
  8. Learning towards Abstractive Timeline Summarization Xiuying Chen, Zhangming Chan, Shen Gao, Meng-Hsuan Yu, Dongyan Zhao, Rui Yan IJCAI19 [pdf] [data]

Opinion

  1. Efficient Few-Shot Fine-Tuning for Opinion Summarization Arthur Bražinskas, Ramesh Nallapati, Mohit Bansal, Markus Dreyer Findings of NAACL 2022 [pdf] [code]
    [Abs] Abstractive summarization models are typically pre-trained on large amounts of generic texts, then fine-tuned on tens or hundreds of thousands of annotated samples. However, in opinion summarization, large annotated datasets of reviews paired with reference summaries are not available and would be expensive to create. This calls for fine-tuning methods robust to overfitting on small datasets. In addition, generically pre-trained models are often not accustomed to the specifics of customer reviews and, after fine-tuning, yield summaries with disfluencies and semantic mistakes. To address these problems, we utilize an efficient few-shot method based on adapters which, as we show, can easily store in-domain knowledge. Instead of fine-tuning the entire model, we add adapters and pre-train them in a task-specific way on a large corpus of unannotated customer reviews, using held-out reviews as pseudo summaries. Then, fine-tune the adapters on the small available human-annotated dataset. We show that this self-supervised adapter pre-training improves summary quality over standard fine-tuning by 2.0 and 1.3 ROUGE-L points on the Amazon and Yelp datasets, respectively. Finally, for summary personalization, we condition on aspect keyword queries, automatically created from generic datasets. In the same vein, we pre-train the adapters in a query-based manner on customer reviews and then fine-tune them on annotated datasets. This results in better-organized summary content reflected in improved coherence and fewer redundancies.
  2. DSGPT: Domain-Specific Generative Pre-Training of Transformers for Text Generation in E-commerce Title and Review Summarization Xueying Zhang, Yunjiang Jiang, Yue Shang, Zhaomeng Cheng, Chi Zhang, Xiaochuan Fan, Yun Xiao, Bo Long SIGIR 2021 [pdf]
  3. Convex Aggregation for Opinion Summarization Hayate Iso, Xiaolan Wang, Yoshihiko Suhara, Stefanos Angelidis, Wang-Chiew Tan EMNLP 2021 Findings [pdf] [code]
  4. Measuring Similarity of Opinion-bearing Sentences Wenyi Tay, Xiuzhen Zhang, Stephen Wan, Sarvnaz Karimi EMNLP 2021 | newsum [pdf] [data]
  5. Comparative Opinion Summarization via Collaborative Decoding Hayate Iso, Xiaolan Wang, Yoshihiko Suhara [pdf] [data]
  6. Learning Opinion Summarizers by Selecting Informative Reviews Arthur Bražinskas, Mirella Lapata, Ivan Titov EMNLP 2021 [pdf] [code]
  7. Aspect-Controllable Opinion Summarization Reinald Kim Amplayo, Stefanos Angelidis, Mirella Lapata EMNLP 2021 [pdf] [code]
  8. CUSTOM: Aspect-Oriented Product Summarization for E-Commerce Jiahui Liang, Junwei Bao, Yifan Wang, Youzheng Wu, Xiaodong He, Bowen Zhou [pdf] [code]
  9. TransSum: Translating Aspect and Sentiment Embeddings for Self-Supervised Opinion Summarization Ke Wang, Xiaojun Wan ACL 2021 Findings [pdf]
  10. Unsupervised Abstractive Opinion Summarization by Generating Sentences with Tree-Structured Topic Guidance Masaru Isonuma, Junichiro Mori, Danushka Bollegala, Ichiro Sakata TACL 2021 [pdf]
  11. PASS: Perturb-and-Select Summarizer for Product Reviews Nadav Oved, Ran Levy ACL 2021 [pdf]
  12. Self-Supervised Multimodal Opinion Summarization Jinbae Im, Moonki Kim, Hoyeop Lee, Hyunsouk Cho, Sehee Chung ACL21 [pdf] [code]
  13. MRCBert: A Machine Reading Comprehension Approach for Unsupervised Summarization Saurabh Jain, Guokai Tang, Lim Sze Chi [pdf] [code]
  14. Informative and Controllable Opinion Summarization Reinald Kim Amplayo, Mirella Lapata EACL21 [pdf] [code]
  15. Self-Supervised and Controlled Multi-Document Opinion Summarization Hady Elsahar, Maximin Coavoux, Jos Rozen, Matthias Gallé EACL21 [pdf]
  16. Unsupervised Opinion Summarization with Content Planning Reinald Kim Amplayo, Stefanos Angelidis, Mirella Lapata AAAI21 [pdf] [code]
  17. Extractive Opinion Summarization in Quantized Transformer Spaces Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, Mirella Lapata TACL [pdf] [code]
  18. Few-Shot Learning for Opinion Summarization Arthur Bražinskas, Mirella Lapata, Ivan Titov EMNLP20 [pdf] [code]
  19. Unsupervised Opinion Summarization as Copycat-Review Generation Arthur Bražinskas, Mirella Lapata, Ivan Titov ACL20 [pdf] [code]
  20. Unsupervised Opinion Summarization with Noising and Denoising Reinald Kim Amplayo, Mirella Lapata ACL20 [pdf] [code]
  21. OPINIONDIGEST: A Simple Framework for Opinion Summarization Yoshihiko Suhara, Xiaolan Wang, Stefanos Angelidis, Wang-Chiew Tan ACL20 Short [pdf] [code]
  22. Weakly-Supervised Opinion Summarization by Leveraging External Information Chao Zhao, Snigdha Chaturvedi AAAI20 [pdf] [code]
  23. MeanSum: A Neural Model for Unsupervised Multi-Document Abstractive Summarization Eric Chu, Peter J. Liu ICML19 [pdf] [code]
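
Entry 1 (Efficient Few-Shot Fine-Tuning) avoids overfitting on tiny opinion-summarization datasets by training only small adapter modules inserted into a frozen pre-trained model. Below is a minimal bottleneck adapter of that general kind; the dimensions are arbitrary, and the paper's adapter placement and pre-training recipe are not reproduced here.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter for parameter-efficient fine-tuning;
    a generic sketch, not the code released with entry 1."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # The adapter learns a small residual correction while the
        # surrounding (frozen) transformer weights stay untouched.
        return hidden + self.up(self.act(self.down(hidden)))

adapter = Adapter(d_model=768)
hidden = torch.randn(2, 16, 768)     # (batch, sequence, d_model)
print(adapter(hidden).shape)         # torch.Size([2, 16, 768])
```

During fine-tuning only the adapter parameters receive gradients, which is what makes the few-shot setting tractable.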

Reinforcement Learning

  1. Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards Shweta Yadav, Deepak Gupta, Asma Ben Abacha, Dina Demner-Fushman ACL 2021 short [pdf] [code]
  2. RewardsOfSum: Exploring Reinforcement Learning Rewards for Summarisation Jacob Parnell, Inigo Jauregi Unanue, Massimo Piccardi 5th Workshop on Structured Prediction for NLP ACL-IJCNLP 2021 [pdf]
  3. Reinforced Generative Adversarial Network for Abstractive Text Summarization Tianyang Xu, Chunyun Zhang [pdf]
  4. Answers Unite! Unsupervised Metrics for Reinforced Summarization Models Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano EMNLP19 [pdf]
  5. Deep Reinforcement Learning with Distributional Semantic Rewards for Abstractive Summarization Siyao Li, Deren Lei, Pengda Qin, William Yang Wang EMNLP19 [pdf]
  6. Reinforced Extractive Summarization with Question-Focused Rewards Kristjan Arumae, Fei Liu ACL18 [pdf]
  7. Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting Yen-Chun Chen, Mohit Bansal ACL18 [pdf] [code]
  8. Multi-Reward Reinforced Summarization with Saliency and Entailment Ramakanth Pasunuru, Mohit Bansal NAACL18 [pdf]
  9. Deep Communicating Agents for Abstractive Summarization Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, Yejin Choi NAACL18 [pdf]
  10. Ranking Sentences for Extractive Summarization with Reinforcement Learning Shashi Narayan, Shay B. Cohen, Mirella Lapata NAACL18 [pdf] [code]
  11. A Deep Reinforced Model For Abstractive Summarization Romain Paulus, Caiming Xiong, Richard Socher ICLR18 [pdf]
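
The common core of this section is policy-gradient training with a sequence-level reward such as ROUGE. Entry 11 in particular uses a self-critical baseline: the reward of the model's own greedy decode. A minimal sketch of that loss, assuming rewards and token log-probabilities are computed outside the function:

```python
import torch

def self_critical_loss(sample_log_probs: torch.Tensor,
                       sample_reward: torch.Tensor,
                       greedy_reward: torch.Tensor) -> torch.Tensor:
    """REINFORCE with a self-critical baseline (in the spirit of entry 11):
    sampled summaries are reinforced only when their reward, e.g. ROUGE
    against the reference, beats the greedy decode's reward."""
    advantage = (sample_reward - greedy_reward).detach()
    # Minimizing this raises the log-probability of positive-advantage samples.
    return -(advantage * sample_log_probs.sum(dim=-1)).mean()

# Toy usage: a batch of 2 sampled summaries, 5 decoding steps each.
log_probs = torch.randn(2, 5, requires_grad=True)
loss = self_critical_loss(log_probs,
                          torch.tensor([0.42, 0.31]),   # sampled ROUGE
                          torch.tensor([0.40, 0.35]))   # greedy ROUGE
loss.backward()
print(loss.item())
```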

Reward Learning

  1. Recursively Summarizing Books with Human Feedback Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, Paul Christiano [pdf] [code]
  2. Learning to summarize from human feedback Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano [pdf] [code]
  3. Better Rewards Yield Better Summaries: Learning to Summarise Without References Florian Böhm, Yang Gao, Christian M. Meyer, Ori Shapira, Ido Dagan, Iryna Gurevych EMNLP19 [pdf] [code]
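
Entries 1 and 2 first fit a reward model on human choices between summary pairs and then optimize the summarizer against it. The pairwise loss below follows the formulation in entry 2 (maximize the log-probability that the preferred summary receives the higher reward); the reward model's architecture and the downstream RL step are out of scope.

```python
import torch
import torch.nn.functional as F

def preference_loss(r_preferred: torch.Tensor,
                    r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise reward-model loss from learning-from-human-feedback
    pipelines: -log sigmoid(r_preferred - r_rejected), averaged over a
    batch of comparisons. `r_*` are scalar reward-model outputs."""
    return -F.logsigmoid(r_preferred - r_rejected).mean()

# Toy usage: rewards for three labelled comparisons.
r_good = torch.tensor([1.2, 0.3, 0.8], requires_grad=True)
r_bad = torch.tensor([0.9, 0.5, -0.1])
print(preference_loss(r_good, r_bad))
```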

Extractive

  1. OTExtSum: Extractive Text Summarisation with Optimal Transport Peggy Tang, Kun Hu, Rui Yan, Lei Zhang, Junbin Gao, Zhiyong Wang Findings of NAACL 2022 [pdf] [code]
    [Abs] Extractive text summarisation aims to select salient sentences from a document to form a short yet informative summary. While learning-based methods have achieved promising results, they have several limitations, such as dependence on expensive training and lack of interpretability. Therefore, in this paper, we propose a novel non-learning-based method by for the first time formulating text summarisation as an Optimal Transport (OT) problem, namely Optimal Transport Extractive Summariser (OTExtSum). Optimal sentence extraction is conceptualised as obtaining an optimal summary that minimises the transportation cost to a given document regarding their semantic distributions. Such a cost is defined by the Wasserstein distance and used to measure the summary’s semantic coverage of the original document. Comprehensive experiments on four challenging and widely used datasets - MultiNews, PubMed, BillSum, and CNN/DM demonstrate that our proposed method outperforms the state-of-the-art non-learning-based methods and several recent learning-based methods in terms of the ROUGE metric.
  2. Post-Editing Extractive Summaries by Definiteness Prediction Jad Kabbara, Jackie Chi Kit Cheung EMNLP 2021 Findings [pdf]
  3. Decision-Focused Summarization Chao-Chun Hsu, Chenhao Tan EMNLP 2021 [pdf] [code]
  4. Monolingual versus Multilingual BERTology for Vietnamese Extractive Multi-Document Summarization Huy To Quoc, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen, Anh Gia-Tuan Nguyen [pdf]
  5. Multiplex Graph Neural Network for Extractive Text Summarization Baoyu Jing, Zeyu You, Tao Yang, Wei Fan, Hanghang Tong EMNLP 2021 Short [pdf]
  6. Unsupervised Extractive Summarization-Based Representations for Accurate and Explainable Collaborative Filtering Reinald Adrian Pugoy, Hung-Yu Kao ACL 2021 [pdf]
  7. Deep Differential Amplifier for Extractive Summarization Ruipeng Jia, Yanan Cao, Fang Fang, Yuchen Zhou, Zheng Fang, Yanbing Liu, Shi Wang ACL 2021 [pdf]
  8. Incorporating Domain Knowledge for Extractive Summarization of Legal Case Documents Paheli Bhattacharya, Soham Poddar, Koustav Rudra, Kripabandhu Ghosh, Saptarshi Ghosh ICAIL 2021 [pdf]
  9. Topic Modeling Based Extractive Text Summarization Kalliath Abdul Rasheed Issam, Shivam Patel, Subalalitha C. N Journal [pdf]
  10. Demoting the Lead Bias in News Summarization via Alternating Adversarial Learning Linzi Xing, Wen Xiao, Giuseppe Carenini ACL2021-short [pdf] [code]
  11. Genetic Algorithms For Extractive Summarization William Chen, Kensal Ramos, Kalyan Naidu Mullaguri [pdf]
  12. Extractive Summarization Considering Discourse and Coreference Relations based on Heterogeneous Graph Yin Jou Huang, Sadao Kurohashi EACL21 [pdf]
  13. AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization Keping Bi, Rahul Jha, Bruce Croft, Asli Celikyilmaz EACL21 [pdf]
  14. Unsupervised Extractive Summarization using Pointwise Mutual Information Vishakh Padmakumar, He He EACL21 [pdf] [code]
  15. Better Highlighting: Creating Sub-Sentence Summary Highlights Sangwoo Cho, Kaiqiang Song, Chen Li, Dong Yu, Hassan Foroosh, Fei Liu EMNLP20 [pdf] [code]
  16. Summarize, Outline, and Elaborate: Long-Text Generation via Hierarchical Supervision from Extractive Summaries Xiaofei Sun, Chun Fan, Zijun Sun, Yuxian Meng, Fei Wu, Jiwei Li [pdf] [code]
  17. SupMMD: A Sentence Importance Model for Extractive Summarization using Maximum Mean Discrepancy Umanga Bista, Alexander Patrick Mathews, Aditya Krishna Menon, Lexing Xie [pdf] [code]
  18. Stepwise Extractive Summarization and Planning with Structured Transformers Shashi Narayan, Joshua Maynez, Jakub Adamek, Daniele Pighin, Blaž Bratanič, Ryan McDonald EMNLP20 [pdf] [code]
  19. A Discourse-Aware Neural Extractive Model for Text Summarization Jiacheng Xu, Zhe Gan, Yu Cheng, Jingjing Liu ACL20 [pdf] [code]
  20. Reading Like HER: Human Reading Inspired Extractive Summarization Ling Luo, Xiang Ao, Yan Song, Feiyang Pan, Min Yang, Qing He EMNLP19 [pdf]
  21. Exploiting Discourse-Level Segmentation for Extractive Summarization Zhengyuan Liu, Nancy Chen EMNLP19 [pdf]
  22. DeepChannel: Salience Estimation by Contrastive Learning for Extractive Document Summarization Jiaxin Shi, Chen Liang, Lei Hou, Juanzi Li, Zhiyuan Liu, Hanwang Zhang AAAI19 [pdf] [code]
  23. Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks Aishwarya Jadhav, Vaibhav Rajan ACL18 [pdf]
  24. Neural Document Summarization by Jointly Learning to Score and Select Sentences Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, Tiejun Zhao ACL18 [pdf]
  25. Neural Latent Extractive Document Summarization Xingxing Zhang, Mirella Lapata, Furu Wei, Ming Zhou ACL18 [pdf]
  26. Generative Adversarial Network for Abstractive Text Summarization Linqing Liu, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, Hongyan Li AAAI18 [pdf] [code]
  27. Improving Neural Abstractive Document Summarization with Explicit Information Selection Modeling Wei Li, Xinyan Xiao, Yajuan Lyu, Yuanzhuo Wang EMNLP18 [pdf]
  28. Extractive Summarization Using Multi-Task Learning with Document Classification Masaru Isonuma, Toru Fujino, Junichiro Mori, Yutaka Matsuo, Ichiro Sakata EMNLP17 [pdf]
  29. SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents Ramesh Nallapati, Feifei Zhai, Bowen Zhou AAAI17 [pdf] [code]
  30. Text Summarization through Entailment-based Minimum Vertex Cover Anand Gupta, Manpreet Kaur, Shachar Mirkin, Adarsh Singh, Aseem Goyal ENLG13 [pdf]
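
Entry 1 (OTExtSum) scores a candidate summary by the optimal-transport cost between its token distribution and the document's. Under a 0/1 ground cost that OT distance reduces to total variation, which keeps the sketch below dependency-light; the paper itself uses semantic (embedding-based) costs and the Wasserstein distance, so treat this as a labelled simplification, not the released OTExtSum code.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def tv_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Total variation distance, i.e. the optimal-transport cost under a
    0/1 ground cost (OTExtSum uses embedding-based costs instead)."""
    return 0.5 * np.abs(p - q).sum()

def ot_extract(sentences, k=2):
    """Greedily add the sentence that moves the summary's term
    distribution closest to the document's; a simplified sketch of the
    optimal-transport view taken in entry 1."""
    counts = CountVectorizer().fit_transform(sentences).toarray().astype(float)
    doc = counts.sum(axis=0)
    doc /= doc.sum()
    chosen = []
    while len(chosen) < k:
        best, best_cost = None, float("inf")
        for i in range(len(sentences)):
            if i in chosen:
                continue
            summ = counts[chosen + [i]].sum(axis=0)
            cost = tv_distance(summ / summ.sum(), doc)
            if cost < best_cost:
                best, best_cost = i, cost
        chosen.append(best)
    return [sentences[i] for i in sorted(chosen)]

doc_sents = ["optimal transport compares two distributions",
             "good summaries cover the document distribution",
             "the cat sat on the mat"]
print(ot_extract(doc_sents, k=2))
```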

Extractive-Abstractive

  1. EASE: Extractive-Abstractive Summarization with Explanations Haoran Li, Arash Einolghozati, Srinivasan Iyer, Bhargavi Paranjape, Yashar Mehdad, Sonal Gupta, Marjan Ghazvininejad EMNLP 2021 | newsum [pdf]
  2. Semantic Extractor-Paraphraser based Abstractive Summarization Anubhav Jangra, Raghav Jain, Vaibhav Mavi, Sriparna Saha, Pushpak Bhattacharyya [pdf]
  3. Contextualized Rewriting for Text Summarization Guangsheng Bao, Yue Zhang AAAI21 [pdf]
  4. Jointly Extracting and Compressing Documents with Summary State Representations Afonso Mendes, Shashi Narayan, Sebastião Miranda, Zita Marinho, André F. T. Martins, Shay B. Cohen NAACL19 [pdf] [code]

VAE

  1. Deep Recurrent Generative Decoder for Abstractive Text Summarization Piji Li, Wai Lam, Lidong Bing, Zihao Wang EMNLP17 [pdf]
  2. Document Summarization with VHTM: Variational Hierarchical Topic-Aware Mechanism Xiyan Fu, Jun Wang, Jinghan Zhang, Jinmao Wei, Zhenglu Yang AAAI20 [pdf]

Syntactic

  1. Compressive Summarization with Plausibility and Salience Modeling Shrey Desai, Jiacheng Xu, Greg Durrett EMNLP20 [pdf] [code]
  2. StructSum: Incorporating Latent and Explicit Sentence Dependencies for Single Document Summarization Vidhisha Balachandran, Artidoro Pagnoni, Jay Yoon Lee, Dheeraj Rajagopal, Jaime Carbonell, Yulia Tsvetkov EACL21 [pdf] [code]
  3. Joint Parsing and Generation for Abstractive Summarization Kaiqiang Song, Logan Lebanoff, Qipeng Guo, Xipeng Qiu, Xiangyang Xue, Chen Li, Dong Yu, Fei Liu AAAI20 [pdf] [code]
  4. Neural Extractive Text Summarization with Syntactic Compression Jiacheng Xu, Greg Durrett EMNLP19 [pdf] [code]
  5. Single Document Summarization as Tree Induction Yang Liu, Ivan Titov, Mirella Lapata NAACL19 [pdf] [code]

QA Related

  1. Educational Question Generation of Children Storybooks via Question Type Distribution Learning and Event-centric Summarization Zhenjie Zhao, Yufang Hou, Dakuo Wang, Mo Yu, Chengzhong Liu, Xiaojuan Ma ACL 2022 [pdf] [code]
    [Abs] Generating educational questions of fairytales or storybooks is vital for improving children’s literacy ability. However, it is challenging to generate questions that capture the interesting aspects of a fairytale story with educational meaningfulness. In this paper, we propose a novel question generation method that first learns the question type distribution of an input story paragraph, and then summarizes salient events which can be used to generate high-cognitive-demand questions. To train the event-centric summarizer, we finetune a pre-trained transformer-based sequence-to-sequence model using silver samples composed by educational question-answer pairs. On a newly proposed educational question-answering dataset FairytaleQA, we show good performance of our method on both automatic and human evaluation metrics. Our work indicates the necessity of decomposing question type distribution learning and event-centric summary generation for educational question generation.
  2. Benchmarking Answer Verification Methods for Question Answering-Based Summarization Evaluation Metrics Daniel Deutsch, Dan Roth [pdf]
  3. Using Question Answering Rewards to Improve Abstractive Summarization Chulaka Gunasekara, Guy Feigenblat, Benjamin Sznajder, Ranit Aharonov, Sachindra Joshi EMNLP 2021 Findings [pdf]
  4. Question-Based Salient Span Selection for More Controllable Text Summarization Daniel Deutsch, Dan Roth [pdf]
  5. Text Summarization with Latent Queries Yumo Xu, Mirella Lapata [pdf]
  6. Summarizing Chinese Medical Answer with Graph Convolution Networks and Question-focused Dual Attention Ningyu Zhang, Shumin Deng, Juan Li, Xi Chen, Wei Zhang, Huajun Chen Findings of EMNLP [pdf]
  7. Towards Question-Answering as an Automatic Metric for Evaluating the Content Quality of a Summary Daniel Deutsch, Tania Bedrax-Weiss, Dan Roth [pdf] [code]
  8. Guiding Extractive Summarization with Question-Answering Rewards Kristjan Arumae, Fei Liu NAACL19 [pdf] [code]
  9. A Semantic QA-Based Approach for Text Summarization Evaluation Ping Chen, Fei Wu, Tong Wang, Wei Ding AAAI18 [pdf]
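
Entries 7 and 9 evaluate summary content through question answering: a summary is good if it supports the same answers as its source. Below is a minimal sketch with an off-the-shelf extractive QA model, where the hand-written questions stand in for the automatic question-generation step of a full QA metric, and the string-containment match is a crude stand-in for answer-level F1. The checkpoint choice is an illustrative assumption, not what either paper used.

```python
from transformers import pipeline

# Off-the-shelf extractive QA model (assumed checkpoint).
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

source = ("The committee approved the budget on Monday after a two-hour "
          "debate, allocating 40 percent of the funds to infrastructure.")
summary = "The committee approved the budget on Monday."
questions = ["When was the budget approved?",
             "What share of the funds goes to infrastructure?"]

matches = 0
for question in questions:
    gold = qa(question=question, context=source)["answer"].lower()
    pred = qa(question=question, context=summary)["answer"].lower()
    matches += int(gold in pred or pred in gold)
print(f"QA content score: {matches / len(questions):.2f}")
```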

Query

  1. Domain Adaptation with Pre-trained Transformers for Query Focused Abstractive Text Summarization Md Tahmid Rahman Laskar, Enamul Hoque, Jimmy Xiangji Huang [pdf] [code]
  2. Exploring Neural Models for Query-Focused Summarization Jesse Vig, Alexander R. Fabbri, Wojciech Kryściński Findings of NAACL 2022 [pdf] [code]
    [Abs] Query-focused summarization (QFS) aims to produce summaries that answer particular questions of interest, enabling greater user control and personalization. While recently released datasets, such as QMSum or AQuaMuSe, facilitate research efforts in QFS, the field lacks a comprehensive study of the broad space of applicable modeling methods. In this paper we conduct a systematic exploration of neural approaches to QFS, considering two general classes of methods: two-stage extractive-abstractive solutions and end-to-end models. Within those categories, we investigate existing models and explore strategies for transfer learning. We also present two modeling extensions that achieve state-of-the-art performance on the QMSum dataset, up to a margin of 3.38 ROUGE-1, 3.72 ROUGE-2, and 3.28 ROUGE-L when combined with transfer learning strategies. Results from human evaluation suggest that the best models produce more comprehensive and factually consistent summaries compared to a baseline model. Code and checkpoints are made publicly available: https://github.com/salesforce/query-focused-sum.
  3. Aspect-Oriented Summarization through Query-Focused Extraction Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, Greg Durrett [pdf]
  4. Query-Focused Extractive Summarisation for Finding Ideal Answers to Biomedical and COVID-19 Questions Diego Mollá (1 and 2), Urvashi Khanna (1), Dima Galat (1), Vincent Nguyen (2 and 3), Maciej Rybinski (3) ((1) Macquarie University, (2) CSIRO Data61, (3) Australian National University) [pdf]
  5. Summary-Oriented Question Generation for Informational Queries Xusen Yin, Li Zhou, Kevin Small, Jonathan May Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc 2021) [pdf]
  6. Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards Shweta Yadav, Deepak Gupta, Asma Ben Abacha, Dina Demner-Fushman ACL 2021 short [pdf] [code]
  7. Generating Query Focused Summaries from Query-Free Resources Yumo Xu, Mirella Lapata ACL 2021 [pdf] [code]
  8. Improve Query Focused Abstractive Summarization by Incorporating Answer Relevance Dan Su, Tiezheng Yu, Pascale Fung ACL21 [pdf] [code]
  9. D2S: Document-to-Slide Generation Via Query-Based Text Summarization Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy X.R. Wang NAACL21 [pdf] [code]
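
Entry 2 compares two-stage extract-then-abstract pipelines against end-to-end models for query-focused summarization. The sketch below is the cheapest possible two-stage variant: a TF-IDF retriever picks query-relevant sentences and a generic pretrained summarizer compresses them. The checkpoint name and the retrieval scoring are illustrative assumptions, not the paper's configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

def query_focused_summary(sentences, query, top_k=3):
    """Two-stage extract-then-abstract pipeline in the spirit of entry 2:
    retrieve query-relevant sentences, then summarize the extract."""
    vec = TfidfVectorizer().fit(sentences + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(sentences))[0]
    ranked = sorted(range(len(sentences)), key=lambda i: -sims[i])[:top_k]
    extract = " ".join(sentences[i] for i in sorted(ranked))  # document order
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    return summarizer(extract, max_length=60, min_length=10)[0]["summary_text"]

sents = ["The storm closed three highways on Tuesday.",
         "Officials said highway repairs would take about two weeks.",
         "A local bakery won a national award the same day."]
print(query_focused_summary(sents, "highway damage and repairs", top_k=2))
```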

EncoderFusion

  1. Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Zhaopeng Tu ICLR21 [pdf]
  2. Improving Abstractive Text Summarization with History Aggregation Pengcheng Liao, Chuang Zhang, Xiaojun Chen, Xiaofei Zhou [pdf] [code]

Discourse

  1. Discourse-Aware Unsupervised Summarization for Long Scientific Documents Yue Dong, Andrei Mircea Romascanu, Jackie Chi Kit Cheung EACL21 [pdf] [code]
  2. Discourse Understanding and Factual Consistency in Abstractive Summarization Saadia Gabriel, Antoine Bosselut, Jeff Da, Ari Holtzman, Jan Buys, Kyle Lo, Asli Celikyilmaz, Yejin Choi EACL21 [pdf] [code]
  3. Predicting Discourse Trees from Transformer-based Neural Summarizers Wen Xiao, Patrick Huber, Giuseppe Carenini NAACL21 [pdf] [code]
  4. Do We Really Need That Many Parameters In Transformer For Extractive Summarization? Discourse Can Help ! Wen Xiao, Patrick Huber, Giuseppe Carenini EMNLP20 Workshop [pdf]
  5. Dialogue Discourse-Aware Graph Convolutional Networks for Abstractive Meeting Summarization Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng, Ting Liu [pdf]
  6. Restructuring Conversations using Discourse Relations for Zero-shot Abstractive Dialogue Summarization Prakhar Ganesh, Saket Dingliwal [pdf]
  7. Unsupervised Neural Single-Document Summarization of Reviews via Learning Latent Discourse Structure and its Ranking Masaru Isonuma, Junichiro Mori, Ichiro Sakata ACL19 [pdf] [code]
  8. Exploiting Discourse-Level Segmentation for Extractive Summarization Zhengyuan Liu, Nancy Chen EMNLP19 [pdf]
  9. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, Nazli Goharian NAACL18 [pdf] [data]

Movie

  1. Movie Summarization via Sparse Graph Construction Pinelopi Papalampidi, Frank Keller, Mirella Lapata AAAI21 [pdf] [code]

Low Resource

  1. Automatic Summarization of Russian Texts: Comparison of Extractive and Abstractive Methods Valeriya Goloviznina, Evgeny Kotelnikov Dialogue-2022 [pdf]
    [Abs] The development of large and super-large language models, such as GPT-3, T5, Switch Transformer, ERNIE, etc., has significantly improved the performance of text generation. One of the important research directions in this area is the generation of texts with arguments. The solution of this problem can be used in business meetings, political debates, dialogue systems, for preparation of student essays. One of the main domains for these applications is the economic sphere. The key problem of the argument text generation for the Russian language is the lack of annotated argumentation corpora. In this paper, we use translated versions of the Argumentative Microtext, Persuasive Essays and UKP Sentential corpora to fine-tune RuBERT model. Further, this model is used to annotate the corpus of economic news by argumentation. Then the annotated corpus is employed to fine-tune the ruGPT-3 model, which generates argument texts. The results show that this approach improves the accuracy of the argument generation by more than 20 percentage points (63.2% vs. 42.5%) compared to the original ruGPT-3 model.
  2. Indian Legal Text Summarization: A Text Normalisation-based Approach Satyajit Ghosh, Mousumi Dutta, Tanaya Das [pdf]
    [Abs] In the Indian court system, pending cases have long been a problem. There are more than 4 crore cases outstanding. Manually summarising hundreds of documents is a time-consuming and tedious task for legal stakeholders. Many state-of-the-art models for text summarization have emerged as machine learning has progressed. Domain-independent models don't do well with legal texts, and fine-tuning those models for the Indian Legal System is problematic due to a lack of publicly available datasets. To improve the performance of domain-independent models, the authors have proposed a methodology for normalising legal texts in the Indian context. The authors experimented with two state-of-the-art domain-independent models for legal text summarization, namely BART and PEGASUS. BART and PEGASUS are put through their paces in terms of extractive and abstractive summarization to understand the effectiveness of the text normalisation approach. Summarised texts are evaluated by domain experts on multiple parameters and using ROUGE metrics. It shows the proposed text normalisation approach is effective in legal texts with domain-independent models.
  3. Domain Specific Fine-tuning of Denoising Sequence-to-Sequence Models for Natural Language Summarization Brydon Parker, Alik Sokolov, Mahtab Ahmed, Matt Kalebic, Sedef Akinli Kocak, Ofer Shai [pdf] [code] [data]
  4. PSP: Pre-trained Soft Prompts for Few-Shot Abstractive Summarization Xiaochen Liu, Yu Bai, Jiawei Li, Yinan Hu, Yang Gao [pdf]
  5. An Overview of Indian Language Datasets used for Text Summarization Shagun Sinha, Girish Nath Jha [pdf]
  6. AraBART: a Pretrained Arabic Sequence-to-Sequence Model for Abstractive Summarization Moussa Kamal Eddine, Nadi Tomeh, Nizar Habash, Joseph Le Roux, Michalis Vazirgiannis [pdf] [code]
  7. ExtraPhrase: Efficient Data Augmentation for Abstractive Summarization Mengsay Loem, Sho Takase, Masahiro Kaneko, Naoaki Okazaki [pdf]
  8. Mitigating Data Scarceness through Data Synthesis, Augmentation and Curriculum for Abstractive Summarization Ahmed Magooda, Diane Litman Findings of EMNLP 2021 Short [pdf]
  9. Exploring Multitask Learning for Low-Resource Abstractive Summarization Ahmed Magooda, Mohamed Elaraby, Diane Litman EMNLP 2021 short [pdf]
  10. Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schütze EACL21 [pdf]
  11. AdaptSum: Towards Low-Resource Domain Adaptation for Abstractive Summarization Tiezheng Yu, Zihan Liu, Pascale Fung NAACL21 [pdf] [code]
  12. Meta-Transfer Learning for Low-Resource Abstractive Summarization Yi-Syuan Chen, Hong-Han Shuai AAAI21 [pdf] [code]
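
Entries 2 and 6 illustrate the dominant low-resource recipe: start from a pretrained sequence-to-sequence checkpoint (BART, PEGASUS, AraBART) and adapt it with whatever in-domain data or preprocessing is available. The sketch below shows the zero-shot starting point of that recipe; the `normalise` helper is a hypothetical placeholder for the text-normalisation step of entry 2, and the checkpoint choice is illustrative.

```python
from transformers import pipeline

def normalise(text: str) -> str:
    """Hypothetical stand-in for domain normalisation (entry 2 expands
    legal abbreviations, for example); here it only collapses whitespace."""
    return " ".join(text.split())

# Zero-shot inference with a pretrained summarizer; low-resource methods
# fine-tune from such a checkpoint rather than training from scratch.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
document = normalise("The appellant filed a petition before the High Court "
                     "challenging the tribunal's order. The court held that "
                     "the tribunal had exceeded its jurisdiction, set the "
                     "order aside, and remanded the matter for fresh hearing.")
print(summarizer(document, max_length=40, min_length=8)[0]["summary_text"])
```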

Personalized

  1. Unsupervised Summarization with Customized Granularities Ming Zhong, Yang Liu, Suyu Ge, Yuning Mao, Yizhu Jiao, Xingxing Zhang, Yichong Xu, Chenguang Zhu, Michael Zeng, Jiawei Han [pdf]
  2. Transformer Reasoning Network for Personalized Review Summarization Hongyan Xu, Hongtao Liu, Pengfei Jiao, Wenjun Wang SIGIR 2021 [pdf]
  3. PENS: A Dataset and Generic Framework for Personalized News Headline Generation Xiang Ao, Xiting Wang, Ling Luo, Ying Qiao, Qing He, Xing Xie ACL 2021 [pdf] [data]
  4. Collabot: Personalized Group Chat Summarization Naama Tepper, Anat Hashavit, Maya Barnea, Inbal Ronen, Lior Leiba WSDM 2018 [pdf]
  5. Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback Avinesh P.V.S, Christian M. Meyer ACL 2017 [pdf] [code]
  6. Context Enhanced Personalized Social Summarization Po Hu, Donghong Ji, Chong Teng, Yujing Guo COLING12 [pdf]
  7. Summarize What You Are Interested In: An Optimization Framework for Interactive Personalized Summarization Rui Yan, Jian-Yun Nie, Xiaoming Li EMNLP 2011 [pdf]
  8. In-Browser Summarisation: Generating Elaborative Summaries Biased Towards the Reading Context Stephen Wan, Cécile Paris ACL 2008 [pdf]
  9. Personalized Summarization Agent Using Non-negative Matrix Factorization Sun Park PRICAI 2008 [pdf]
  10. Aspect-Based Personalized Text Summarization Shlomo Berkovsky, Timothy Baldwin, Ingrid Zukerman AH 2008 [pdf]
  11. User-model based personalized summarization Alberto Díaz, Pablo Gervás [pdf]
  12. Machine Learning of Generic and User-Focused Summarization Inderjeet Mani, Eric Bloedorn AAAI 1998 [pdf]

Interactive

  1. Make The Most of Prior Data: A Solution for Interactive Text Summarization with Preference Feedback Duy-Hung Nguyen, Nguyen Viet Dung Nghiem, Bao-Sinh Nguyen, Dung Tien Tien Le, Shahab Sabahi, Minh-Tien Nguyen, Hung Le Findings of NAACL 2022 [pdf]
    [Abs] For summarization, human preferences is critical to tame outputs of the summarizer in favor of human interests, as ground-truth summaries are scarce and ambiguous. Practical settings require dynamic exchanges between humans and AI agents wherein feedback is provided in an online manner, a few at a time. In this paper, we introduce a new framework to train summarization models with preference feedback interactively. By properly leveraging offline data and a novel reward model, we improve the performance regarding ROUGE scores and sample-efficiency. Our experiments on three various datasets confirm the benefit of the proposed framework in active, few-shot and online settings of preference learning.
  2. Interactive Query-Assisted Summarization via Deep Reinforcement Learning Ori Shapira, Ramakanth Pasunuru, Mohit Bansal, Ido Dagan, Yael Amsterdamer NAACL 2022 [pdf] [code]
    [Abs] Interactive summarization is a task that facilitates user-guided exploration of information within a document set. While one would like to employ state of the art neural models to improve the quality of interactive summarization, many such technologies cannot ingest the full document set or cannot operate at sufficient speed for interactivity. To that end, we propose two novel deep reinforcement learning models for the task that address, respectively, the subtask of summarizing salient information that adheres to user queries, and the subtask of listing suggested queries to assist users throughout their exploration. In particular, our models allow encoding the interactive session state and history to refrain from redundancy. Together, these models compose a state of the art solution that addresses all of the task requirements. We compare our solution to a recent interactive summarization system, and show through an experimental study involving real users that our models are able to improve informativeness while preserving positive user experience.
  3. Hone as You Read: A Practical Type of Interactive Summarization Tanner Bohn, Charles X. Ling [pdf]

Speech

  1. Speech Summarization using Restricted Self-Attention Roshan Sharma, Shruti Palaskar, Alan W Black, Florian Metze ICASSP 2022 [pdf]

Prompt

  1. Discourse-Aware Prompt Design for Text Generation Marjan Ghazvininejad, Vladimir Karpukhin, Asli Celikyilmaz [pdf]

Temp

  1. SETSum: Summarization and Visualization of Student Evaluations of Teaching Yinuo Hu, Shiyue Zhang, Viji Sathy, Abigail Panter, Mohit Bansal NAACL 2022 Demo [pdf] [code]
    [Abs] Student Evaluations of Teaching (SETs) are widely used in colleges and universities. Typically SET results are summarized for instructors in a static PDF report. The report often includes summary statistics for quantitative ratings and an unsorted list of open-ended student comments. The lack of organization and summarization of the raw comments hinders those interpreting the reports from fully utilizing informative feedback, making accurate inferences, and designing appropriate instructional improvements. In this work, we introduce a novel system, SETSUM, that leverages sentiment analysis, aspect extraction, summarization, and visualization techniques to provide organized illustrations of SET findings to instructors and other reviewers. Ten university professors from diverse departments serve as evaluators of the system and all agree that SETSUM help them interpret SET results more efficiently; and 6 out of 10 instructors prefer our system over the standard static PDF report (while the remaining 4 would like to have both). This demonstrates that our work holds the potential of reforming the SET reporting conventions in the future.
  2. ASPECTNEWS: Aspect-Oriented Summarization of News Documents Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, Greg Durrett ACL 2022 [pdf] [code]
    [Abs] Generic summaries try to cover an entire document and query-based summaries try to answer document-specific questions. But real users’ needs often fall in between these extremes and correspond to aspects, high-level topics discussed among similar types of documents. In this paper, we collect a dataset of realistic aspect-oriented summaries, AspectNews, which covers different subtopics about articles in news sub-domains. We annotate data across two domains of articles, earthquakes and fraud investigations, where each article is annotated with two distinct summaries focusing on different aspects for each domain. A system producing a single generic summary cannot concisely satisfy both aspects. Our focus in evaluation is how well existing techniques can generalize to these domains without seeing in-domain training data, so we turn to techniques to construct synthetic training data that have been used in query-focused summarization work. We compare several training schemes that differ in how strongly keywords are used and how oracle summaries are extracted. Our evaluation shows that our final approach yields (a) focused summaries, better than those from a generic summarization system or from keyword matching; (b) a system sensitive to the choice of keywords.
  3. The Triangle-Densest-k-Subgraph Problem: Hardness, Lovász Extension, and Application to Document Summarization Aritra Konar, Nicholas D. Sidiropoulos AAAI 2022 [pdf]
  4. Applying Automatic Text Summarization for Fake News Detection Philipp Hartl, Udo Kruschwitz [pdf] [code]
  5. Graph Enhanced Contrastive Learning for Radiology Findings Summarization Jinpeng Hu, Zhuo Li, Zhihong Chen, Zhen Li, Xiang Wan, Tsung-Hui Chang ACL 2022 [pdf] [code]
    [Abs] The impression section of a radiology report summarizes the most prominent observation from the findings section and is the most important section for radiologists to communicate to physicians. Summarizing findings is time-consuming and can be prone to error for inexperienced radiologists, and thus automatic impression generation has attracted substantial attention. Within the encoder-decoder framework, most previous studies explore incorporating extra knowledge (e.g., static pre-defined clinical ontologies or extra background information). Yet, they encode such knowledge with a separate encoder and treat it as an extra input to their models, which limits their ability to leverage its relations with the original findings. To address this limitation, we propose a unified framework for exploiting both extra knowledge and the original findings in an integrated way, so that the critical information (i.e., key words and their relations) can be extracted appropriately to facilitate impression generation. In detail, each input findings section is encoded by a text encoder, and a graph is constructed from its entities and dependency tree. Then, a graph encoder (e.g., graph neural networks (GNNs)) is adopted to model relation information in the constructed graph. Finally, to emphasize the key words in the findings, contrastive learning is introduced to map positive samples (constructed by masking non-key words) closer and push apart negative ones (constructed by masking key words). The experimental results on two datasets, OpenI and MIMIC-CXR, confirm the effectiveness of our proposed method, which achieves state-of-the-art results. [A toy sketch of this contrastive objective follows this list.]
  6. Differentiable Multi-Agent Actor-Critic for Multi-Step Radiology Report Summarization Sanjeev Kumar Karn, Ning Liu, Hinrich Schuetze, Oladimeji Farri ACL 2022 [pdf]
    [Abs] The IMPRESSIONS section of a radiology report about an imaging study is a summary of the radiologist's reasoning and conclusions, and it also aids the referring physician in confirming or excluding certain diagnoses. A cascade of tasks is required to automatically generate an abstractive summary of the typical information-rich radiology report. These tasks include acquisition of salient content from the report and generation of a concise, easily consumable IMPRESSIONS section. Prior research on radiology report summarization has focused on single-step end-to-end models, which subsume the task of salient content acquisition. To fully explore the cascade structure and explainability of radiology report summarization, we introduce two innovations. First, we design a two-step approach: extractive summarization followed by abstractive summarization. Second, we additionally break down the extractive part into two independent tasks: extraction of salient (1) sentences and (2) keywords. Experiments on English radiology reports from two clinical sites show our novel approach leads to a more precise summary compared to single-step and two-step-with-single-extractive-process baselines, with an overall improvement in F1 score of 3-4%.
  7. AUTOSUMM: Automatic Model Creation for Text Summarization Sharmila Reddy Nangi, Atharv Tyagi, Jay Mundra, Sagnik Mukherjee, Raj Snehal, Niyati Chhaya, Aparna Garimella EMNLP 2021 [pdf]
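
The contrastive objective in the radiology findings paper (item 5 above) pulls the findings representation toward a positive view built by masking non-key words, and pushes it away from a negative view built by masking key words. A minimal InfoNCE-style sketch in PyTorch, assuming precomputed sentence embeddings; the encoder, the masking procedure, and the temperature value are left abstract and are not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negative, tau=0.1):
    """InfoNCE with one positive and one negative view per example.
    anchor, positive, negative: (batch, dim) embeddings of the findings,
    its non-key-words-masked view, and its key-words-masked view."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negative, dim=-1)
    # Cosine similarities to the positive and negative views, scaled by tau.
    logits = torch.stack([(a * p).sum(-1), (a * n).sum(-1)], dim=-1) / tau
    labels = torch.zeros(a.size(0), dtype=torch.long, device=a.device)  # positive = class 0
    return F.cross_entropy(logits, labels)
```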

Extend

  1. SOM-NCSCM : An Efficient Neural Chinese Sentence Compression Model Enhanced with Self-Organizing Map Kangli Zi, Shi Wang, Yu Liu, Jicun Li, Yanan Cao, Cungen Cao EMNLP 2021 [pdf] [data]

Retrieve-augmented

  1. Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, Michael Zeng ACL 2022 [pdf] [code]
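
Roughly, the method retrieves training examples similar to the test input and concatenates their reference summaries to the input before running the seq2seq summarizer. A minimal sketch with TF-IDF retrieval standing in for the paper's retriever; the `[SEP]` joining convention and the function name are assumptions, not the paper's implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def augment_with_retrieval(test_doc, train_docs, train_summaries, top_k=1):
    """Prepend the summaries of the nearest training documents to the input."""
    vec = TfidfVectorizer().fit(train_docs)
    sims = cosine_similarity(vec.transform([test_doc]),
                             vec.transform(train_docs)).ravel()
    nearest = sims.argsort()[::-1][:top_k]
    retrieved = " ".join(train_summaries[i] for i in nearest)
    return retrieved + " [SEP] " + test_doc  # feed this string to the summarizer
```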

Chart-to-text

  1. Chart-to-Text: A Large-Scale Benchmark for Chart Summarization Shankar Kanthara, Rixie Tiffany Ko Leong, Xiang Lin, Ahmed Masry, Megh Thakkar, Enamul Hoque, Shafiq Joty ACL 2022 [pdf] [data]
    [Abs] Charts are commonly used for exploring data and communicating insights. Generating natural language summaries from charts can be very helpful for people in inferring key insights that would otherwise require a lot of cognitive and perceptual efforts. We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts covering a wide range of topics and chart types. We explain the dataset construction process and analyze the datasets. We also introduce a number of state-of-the-art neural models as baselines that utilize image captioning and data-to-text generation techniques to tackle two problem variations: one assumes the underlying data table of the chart is available while the other needs to extract data from chart images. Our analysis with automatic and human evaluation shows that while our best models usually generate fluent summaries and yield reasonable BLEU scores, they also suffer from hallucinations and factual errors as well as difficulties in correctly explaining complex patterns and trends in charts.
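
For the benchmark variation where the chart's underlying data table is available, a standard data-to-text baseline linearizes the table into a flat input sequence for a seq2seq model. A minimal linearization sketch; the exact serialization format used by the benchmark's baselines may differ:

```python
def linearize_chart(title, x_label, y_label, rows):
    """Flatten a chart's underlying data table into a text sequence for a
    seq2seq model. rows: iterable of (x_value, y_value) pairs."""
    cells = " | ".join(f"{x_label}: {x} , {y_label}: {y}" for x, y in rows)
    return f"title: {title} | {cells}"

# linearize_chart("CO2 emissions", "year", "megatons",
#                 [(2019, 36.4), (2020, 34.8)])
# -> 'title: CO2 emissions | year: 2019 , megatons: 36.4 | year: 2020 , megatons: 34.8'
```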

Podcast

  1. Towards Abstractive Grounded Summarization of Podcast Transcripts Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, Fei Liu ACL 2022 [pdf] [code]
    [Abs] Podcasts have shown a recent rise in popularity. Summarization of podcasts is of practical benefit to both content providers and consumers. It helps people quickly decide whether they will listen to a podcast and/or reduces the cognitive load of content providers to write summaries. Nevertheless, podcast summarization faces significant challenges including factual inconsistencies of summaries with respect to the inputs. The problem is exacerbated by speech disfluencies and recognition errors in transcripts of spoken language. In this paper, we explore a novel abstractive summarization method to alleviate these issues. Our approach learns to produce an abstractive summary while grounding summary segments in specific regions of the transcript to allow for full inspection of summary details. We conduct a series of analyses of the proposed approach on a large podcast dataset and show that the approach can achieve promising results. Grounded summaries bring clear benefits in locating the summary and transcript segments that contain inconsistent information, and hence improve summarization quality in terms of automatic and human evaluation.
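
Grounding here means pairing each summary segment with the transcript region that supports it. The paper learns this jointly during generation; the toy post-hoc aligner below only illustrates the idea, using word overlap as a crude similarity:

```python
def ground_summary(summary_sents, transcript_chunks):
    """Map each summary sentence to the transcript chunk with the highest
    word overlap; a crude, post-hoc stand-in for learned grounding."""
    def overlap(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / (len(wa) or 1)
    return {s: max(transcript_chunks, key=lambda c: overlap(s, c))
            for s in summary_sents}
```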

Sports

  1. Knowledge Enhanced Sports Game Summarization Jiaan Wang, Zhixu Li, Tingyi Zhang, Duo Zheng, Jianfeng Qu, An Liu, Lei Zhao, Zhigang Chen WSDM 2022 [pdf] [code]
  2. SportsSum2.0: Generating High-Quality Sports News from Live Text Commentary Jiaan Wang, Zhixu Li, Qiang Yang, Jianfeng Qu, Zhigang Chen, Qingsheng Liu, Guoping Hu CIKM 2021 short [pdf] [data]
  3. Generating Sports News from Live Commentary: A Chinese Dataset for Sports Game Summarization Kuan-Hao Huang, Chen Li, Kai-Wei Chang AACL 2020 [pdf] [data]
  4. Generate Football News from Live Webcast Scripts Based on Character-CNN with Five Strokes Xue-Qiang Lv, Xin-Dong You, Wen-Chao Wang, Jian-She Zhou [pdf]
  5. Content Selection for Real-time Sports News Construction from Commentary Texts Jin-ge Yao, Jianmin Zhang, Xiaojun Wan, Jianguo Xiao INLG 2017 [pdf]
  6. Towards Constructing Sports News from Live Text Commentary Jianmin Zhang, Jin-ge Yao, Xiaojun Wan ACL 2016 [pdf]
  7. Sports News Generation from Live Webcast Scripts Based on Rules and Templates Maofu Liu, Qiaosong Qi, Huijun Hu, Han Ren NLPCC 2016 [pdf]
  8. Research on Summary Sentences Extraction Oriented to Live Sports Text Liya Zhu, Wenchao Wang, Yujing Chen, Xueqiang Lv, Jianshe Zhou NLPCC 2016 [pdf]
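
Several entries above (items 5 through 8 in particular) build sports news by first selecting salient commentary lines and then rewriting them with rules or templates. A toy keyword-based selector in that spirit; the keyword list, weights, and threshold are purely illustrative:

```python
import re

# Illustrative event keywords and weights; a real system would use richer rules.
SALIENT = {"goal": 3.0, "penalty": 2.5, "red card": 2.5, "substitution": 1.0}

def select_commentary(lines, threshold=2.0):
    """Keep live-commentary lines whose summed keyword weight clears the threshold."""
    def score(line):
        low = line.lower()
        return sum(w for kw, w in SALIENT.items()
                   if re.search(rf"\b{re.escape(kw)}\b", low))
    return [ln for ln in lines if score(ln) >= threshold]
```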

Scientific [TBD]

  1. TSTR: Too Short to Represent, Summarize with Details! Intro-Guided Extended Summary Generation Sajad Sotudeh, Nazli Goharian NAACL 2022 [pdf] [code]
    [Abs] Many scientific papers, such as those in the arXiv and PubMed collections, have abstracts ranging from 50 to 1000 words, with an average length of approximately 200 words, where longer abstracts typically convey more information about the source paper. Until recently, scientific summarization research has typically focused on generating short, abstract-like summaries, following the existing datasets used for scientific summarization. In domains where the source text is relatively long-form, such as scientific documents, such summaries cannot go beyond a general and coarse overview and provide salient information from the source document. Recent interest in tackling this problem motivated the curation of the arXiv-Long and PubMed-Long datasets, containing human-written summaries of 400-600 words, hence providing a venue for research on generating long/extended summaries. Extended summaries facilitate a faster read while providing details beyond coarse information. In this paper, we propose TSTR, an extractive summarizer that utilizes the introductory information of documents as pointers to their salient information. Evaluations on two existing large-scale extended summarization datasets indicate statistically significant improvements in Rouge and average Rouge (F1) scores (except in one case) compared to strong baselines and the state of the art. Comprehensive human evaluations favor our generated extended summaries in terms of cohesion and completeness. [A toy sketch of this intro-guided ranking follows this list.]
  2. X-SCITLDR: Cross-Lingual Extreme Summarization of Scholarly Documents Sotaro Takeshita, Tommaso Green, Niklas Friedrich, Kai Eckert, Simone Paolo Ponzetto JCDL 2022 [pdf] [data]
  3. Target-aware Abstractive Related Work Generation with Contrastive Learning Xiuying Chen, Hind Alamro, Mingzhe Li, Shen Gao, Rui Yan, Xin Gao, Xiangliang Zhang SIGIR 2022 [pdf] [code]
  4. CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation Yuning Mao, Ming Zhong, Jiawei Han [pdf] [data]
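
TSTR's intro-as-pointer idea (item 1 above) can be caricatured as a ranking step: score each body sentence against the paper's introduction and keep the top scorers, in document order, as the extended summary. A minimal lexical-overlap sketch, standing in for the paper's learned model:

```python
def intro_guided_extract(intro_sents, body_sents, budget=15):
    """Rank body sentences by lexical overlap with the introduction and
    keep the top `budget` sentences in their original document order."""
    intro_vocab = {w for s in intro_sents for w in s.lower().split()}
    def score(sent):
        words = set(sent.lower().split())
        return len(words & intro_vocab) / (len(words) or 1)
    ranked = sorted(range(len(body_sents)),
                    key=lambda i: score(body_sents[i]), reverse=True)
    return [body_sents[i] for i in sorted(ranked[:budget])]
```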

Post-Editing

  1. An Exploration of Post-Editing Effectiveness in Text Summarization Vivian Lai, Alison Smith-Renner, Ke Zhang, Ruijia Cheng, Wenjuan Zhang, Joel Tetreault, Alejandro Jaimes NAACL 2022 [pdf] [code]
    [Abs] Automatic summarization methods are efficient but can suffer from low quality. In comparison, manual summarization is expensive but produces higher quality. Can humans and AI collaborate to improve summarization performance? In similar text generation tasks (e.g., machine translation), human-AI collaboration in the form of "post-editing" AI-generated text reduces human workload and improves the quality of AI output. Therefore, we explored whether post-editing offers advantages in text summarization. Specifically, we conducted an experiment with 72 participants, comparing post-editing of provided summaries with manual summarization for summary quality, human efficiency, and user experience on formal (XSum news) and informal (Reddit posts) text. This study offers valuable insights into when post-editing is useful for text summarization: it helped in some cases (e.g., when participants lacked domain knowledge) but not in others (e.g., when provided summaries included inaccurate information). Participants' different editing strategies and needs for assistance offer implications for future human-AI summarization systems.

Human

  1. What Makes a Good and Useful Summary? Incorporating Users in Automatic Summarization Research Maartje Ter Hoeve, Julia Kiseleva, Maarten de Rijke NAACL 2022 [pdf] [code]
    [Abs] Automatic text summarization has enjoyed great progress over the years and is used in numerous applications, impacting the lives of many. Despite this development, there is little research that meaningfully investigates how the current research focus in automatic summarization aligns with users’ needs. To bridge this gap, we propose a survey methodology that can be used to investigate the needs of users of automatically generated summaries. Importantly, these needs are dependent on the target group. Hence, we design our survey in such a way that it can be easily adjusted to investigate different user groups. In this work we focus on university students, who make extensive use of summaries during their studies. We find that the current research directions of the automatic summarization community do not fully align with students’ needs. Motivated by our findings, we present ways to mitigate this mismatch in future research on automatic summarization: we propose research directions that impact the design, the development and the evaluation of automatically generated summaries.
  2. Mapping the Design Space of Human-AI Interaction in Text Summarization Ruijia Cheng, Alison Smith-Renner, Ke Zhang, Joel Tetreault, Alejandro Jaimes-Larrarte NAACL 2022 [pdf] [code]
    [Abs] Automatic text summarization systems commonly involve humans for preparing data or evaluating model performance, yet there is little systematic understanding of humans' roles, experience, and needs when interacting with or being assisted by AI. From a human-centered perspective, we map the design opportunities and considerations for human-AI interaction in text summarization and broader text generation tasks. We first conducted a systematic literature review of 70 papers, developing a taxonomy of five interactions in AI-assisted text generation and relevant design dimensions. We designed text summarization prototypes for each interaction. We then interviewed 16 users, aided by the prototypes, to understand their expectations, experience, and needs regarding efficiency, control, and trust with AI in text summarization, and propose design considerations accordingly.

Tutorial

  1. Beyond Opinion Mining: Summarizing Opinions of Customer Reviews Reinald Kim Amplayo, Arthur Bražinskas, Yoshi Suhara, Xiaolan Wang, Bing Liu SIGIR Tutorial 2022 [pdf]
