SENTIMENT AND TOXICITY DETECTION FOR BLOCKING ABUSIVE CONTENT ACROSS SOCIAL MEDIA
DOI: https://doi.org/10.62643/

Abstract
The rapid growth of social media platforms has significantly increased user interaction, but it has also led to a rise in abusive, toxic, and harmful content. This project, “Sentiment and Toxicity Detection for Blocking Abusive Content Across Social Media,” aims to develop an intelligent system that automatically identifies and filters such content in real time. The system leverages Natural Language Processing (NLP) and Machine Learning techniques to analyze user-generated text and classify it by sentiment (positive, negative, neutral) and toxicity level. Classical models such as Logistic Regression and Support Vector Machines, alongside deep learning approaches such as LSTM and Transformer-based architectures, are used to improve detection accuracy. The system is trained on labeled datasets covering various forms of abusive language, including hate speech, offensive remarks, and cyberbullying content. Based on the classification results, it can automatically block, flag, or warn users about inappropriate content. This solution helps create a safer and more respectful online environment by reducing the spread of harmful communication, and it can be integrated into multiple social media platforms to strengthen content moderation, protect users, and promote healthy digital interactions.
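As an illustration of the classification-and-moderation flow described above, the following minimal sketch trains a TF-IDF plus Logistic Regression toxicity classifier with scikit-learn and maps its predicted probability to a block/flag/allow decision. The training sentences, probability thresholds, and the moderate() helper are hypothetical and serve only to make the pipeline concrete; they are not the project's actual dataset, thresholds, or code.

# Illustrative sketch (not the project's implementation): a TF-IDF +
# Logistic Regression toxicity classifier whose predicted probability
# drives a simple block/flag/allow moderation decision.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: 1 = toxic/abusive, 0 = non-toxic.
texts = [
    "You are a wonderful person, great job!",
    "Thanks for sharing, this was really helpful.",
    "You are an idiot and nobody wants you here.",
    "Shut up, you worthless piece of garbage.",
]
labels = [0, 0, 1, 1]

# TF-IDF features feed a Logistic Regression classifier, one of the
# classical models mentioned in the abstract.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def moderate(comment: str) -> str:
    """Map the predicted toxicity probability to a moderation action
    (the 0.8 / 0.5 thresholds here are illustrative, not prescribed)."""
    toxic_prob = model.predict_proba([comment])[0][1]
    if toxic_prob >= 0.8:
        return "block"   # high confidence: remove the content
    if toxic_prob >= 0.5:
        return "flag"    # uncertain: warn the user or queue for review
    return "allow"       # likely benign

# Example usage; with a realistic labeled corpus the probabilities
# (and hence the chosen actions) become meaningful.
for sample in ["Nobody likes you, get lost.", "Congratulations on the launch!"]:
    print(sample, "->", moderate(sample))

The same block/flag/allow logic applies unchanged if the classical classifier is replaced by an LSTM or Transformer-based model, as mentioned in the abstract; only the source of the toxicity probability changes.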
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.













