Authors:
Surya Lokesh Bhargav Pentakota
Addresses:
Department of Research and Development, Ginger Labs, Texas, New York, United States of America.
Abstract:
The purpose of this study is to examine the creation of deepfakes and fake content using GPT models, and the corresponding need for effective detection and countermeasures. The detection approach draws on linguistic features such as sentiment polarity and syntactic patterns to discriminate genuine from generated content. It was evaluated on a curated dataset of 50,000 samples comprising original news reports, fake news derived from them, deepfake transcripts, and forged text. The technologies employed in the analysis include the GPT-4 model for generating material, adversarial noise techniques for identifying manipulations, and blockchain for ensuring authenticity. Watermarking, an effective method for attesting content provenance, is applied as an additional countermeasure. Detection outputs are validated using precision, recall, and F1-score to confirm that the mechanisms are accurate and reliable. Together, these tools and findings provide a thorough understanding of how GPT models can be used both to create and to detect artificial material, with the goal of fostering a safer cyberspace.
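The validation metrics named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the binary labels (1 = generated/fake content, 0 = authentic) and the sample arrays are hypothetical.

```python
# Precision, recall, and F1-score for a binary fake-content detector.
# Labels: 1 = generated/fake content, 0 = authentic (illustrative only).

def precision_recall_f1(y_true, y_pred):
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical detector outputs against ground truth.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(p, r, f1)  # 0.75 0.75 0.75
```

In practice such metrics would be computed over the full 50,000-sample evaluation set, e.g. via a library such as scikit-learn.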
Keywords: Deepfake Transcripts; GPT Models; Social Countermeasures; False Content; Digital Security; Artificial Content; Adversarial Noise; Detection Mechanism.
Received: 28/05/2024, Revised: 12/07/2024, Accepted: 17/08/2024, Published: 01/03/2025
DOI: 10.64091/ATICL.2025.000095
AVE Trends in Intelligent Computer Letters, 2025, Vol. 1, No. 1, Pages: 41-50