CIFAKE: Explainable Image Classification and AI-Generated Synthetic Image Identification
DOI: https://doi.org/10.62643/

Keywords: ai-generated images, synthetic image detection, cnn2d, grad-cam, image classification, latent diffusion, explainable ai, cifar-10, deep learning, binary classification, fake image detection, computer vision, performance evaluation, neural networks

Abstract
Recent advances in synthetic data generation have made it harder for humans to discern real-world photographs from AI-generated images. This work addresses the critical need for data reliability and authenticity by providing a method to improve the identification of AI-generated images with computer vision algorithms. The study uses a synthetic dataset generated with latent diffusion and modeled after the CIFAR-10 dataset, paired with real photographs for comparison. The task is framed as binary classification: distinguishing real images from AI-generated ones. A CNN2D model with 32 neurons, which yields the best results, is employed; its architecture comprises two convolutional layers, two MaxPooling2D layers, and two dense layers. Additionally, the Grad-CAM (Gradient-weighted Class Activation Mapping) technique highlights the image regions that help the CNN distinguish real from fake images. A modified version, optimized without additional layers such as dropout or global average pooling, achieved 95.94% accuracy, compared with 94.98% for the proposed CNN2D. Performance is evaluated using accuracy, precision, recall, F1-score, and a confusion matrix.
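The abstract describes the CNN2D architecture only at a high level (two convolutional layers, two MaxPooling2D layers, two dense layers, 32 neurons). A minimal Keras sketch of such a binary classifier is shown below; the 3x3 kernel size, 2x2 pooling, ReLU activations, 32x32x3 input shape (CIFAR-10-sized), and the Adam optimizer are assumptions not stated in the abstract, so this should be read as an illustration of the described layer structure rather than the authors' exact model.

```python
# Illustrative sketch of a CNN2D binary classifier with the layer counts
# described in the abstract. Kernel sizes, activations, input shape, and
# optimizer are assumptions, not taken from the paper.
from tensorflow import keras
from tensorflow.keras import layers


def build_cnn2d(input_shape=(32, 32, 3)):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),   # 1st conv layer, 32 filters
        layers.MaxPooling2D((2, 2)),                    # 1st pooling layer
        layers.Conv2D(32, (3, 3), activation="relu"),   # 2nd conv layer
        layers.MaxPooling2D((2, 2)),                    # 2nd pooling layer
        layers.Flatten(),
        layers.Dense(32, activation="relu"),            # 1st dense layer, 32 neurons
        layers.Dense(1, activation="sigmoid"),          # 2nd dense layer: real vs. fake
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Note the deliberate absence of dropout and global average pooling layers, matching the optimized variant the abstract reports as reaching 95.94% accuracy.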
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.