Authors:
S. Hemadharsana, N. Madhumitha, R. Vinoth, R. Vanitha, G. Brintha, Adithi Venkatakrishnan
Addresses:
Department of Computer Science and Engineering (Cyber Security), Saveetha Engineering College, Chennai, Tamil Nadu, India.
Department of Computer Science and Engineering, Saveetha Engineering College, Chennai, Tamil Nadu, India.
Department of Artificial Intelligence and Machine Learning, SRM Institute of Science and Technology, Ramapuram, Chennai, Tamil Nadu, India.
Department of Computer Science and Engineering, KCG College of Technology, Chennai, Tamil Nadu, India.
Department of Artificial Intelligence and Data Science, Francis Xavier Engineering College, Tirunelveli, Tamil Nadu, India.
Department of Information Technology and Management, University of Texas at Dallas, Richardson, Texas, United States of America.
Abstract:
We investigate abstractive text summarisation of news articles using Long Short-Term Memory (LSTM) networks in an encoder–decoder architecture. Several deep learning configurations are trained and evaluated on the InShorts News Article Dataset, including standard LSTM and BiGRU models, LSTMs with pre-trained word embeddings such as GloVe and Word2Vec, and attention-integrated architectures. Although various performance measures were examined, BLEU scores served as the primary metric of summarisation quality. The LSTM encoder–decoder with pre-trained Word2Vec embeddings and an attention layer achieved the highest BLEU score of 0.7481, illustrating that attention mechanisms significantly affect how well the model captures contextual dependencies and prioritises salient input content, while pre-trained embeddings improve semantic understanding and summary quality. The findings indicate that encoder–decoder designs and attention-based enhancements are crucial for abstractive summarisation. In an era of abundant data, efficient text summarisation is essential for better information retrieval, faster content consumption, and improved user comprehension. Future research directions include broader neural architecture comparisons, hyperparameter optimisation, transformer-based models, and new embedding strategies to improve summarisation accuracy and robustness.
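For illustration, the best-performing configuration described above (an LSTM encoder–decoder with pre-trained Word2Vec embeddings and an attention layer) can be sketched in Keras as follows. This is a minimal sketch, not the authors' implementation: the vocabulary size, embedding and latent dimensions, and the embedding_matrix loaded from Word2Vec are assumptions for illustration.

    # Minimal sketch (assumed hyperparameters, not the authors' exact code) of an
    # LSTM encoder–decoder with additive attention for abstractive summarisation.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    VOCAB_SIZE, EMB_DIM, LATENT_DIM = 20000, 300, 256  # assumed for illustration

    # Encoder: embed the article tokens; keep per-step outputs for attention.
    enc_inputs = layers.Input(shape=(None,), name="article_tokens")
    enc_emb = layers.Embedding(VOCAB_SIZE, EMB_DIM)(enc_inputs)
    # To use pre-trained Word2Vec vectors, pass weights=[embedding_matrix],
    # trainable=False to the Embedding layer (embedding_matrix is assumed here).
    enc_outputs, state_h, state_c = layers.LSTM(
        LATENT_DIM, return_sequences=True, return_state=True)(enc_emb)

    # Decoder: generate the summary, initialised with the encoder's final states.
    dec_inputs = layers.Input(shape=(None,), name="summary_tokens")
    dec_emb = layers.Embedding(VOCAB_SIZE, EMB_DIM)(dec_inputs)
    dec_outputs, _, _ = layers.LSTM(
        LATENT_DIM, return_sequences=True, return_state=True)(
            dec_emb, initial_state=[state_h, state_c])

    # Attention: each decoder step attends over all encoder steps; the context
    # vectors are concatenated with the decoder outputs before the softmax.
    context = layers.AdditiveAttention()([dec_outputs, enc_outputs])
    concat = layers.Concatenate()([dec_outputs, context])
    probs = layers.TimeDistributed(
        layers.Dense(VOCAB_SIZE, activation="softmax"))(concat)

    model = Model([enc_inputs, dec_inputs], probs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

Only the training graph is shown; at inference time the decoder would be unrolled step by step, feeding each predicted token back as the next input.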
Keywords: Hyperparameter Optimisation; Abundant Data; Abstractive Summarisation; Attention Mechanisms; Neural Architecture; Encoder–Decoder Architecture; Transformer-Based Models.
Received: 13/12/2024, Revised: 03/04/2025, Accepted: 20/05/2025, Published: 12/12/2025
DOI: 10.64091/ATICS.2025.000211
AVE Trends in Intelligent Computing Systems, 2025, Vol. 2, No. 4, Pages: 184-195