HBKU - QCRI
Deepfake Detection 

A project for detecting and evaluating deepfake media across robustness, fairness, and security dimensions. The system incorporates multiple state-of-the-art detection models, including transformer-based architectures and Stable Diffusion variants, and supports rigorous evaluation through modular testing components: adversarial robustness analysis, resistance to natural perturbations, generalization to unseen generative models, explainable decision-making, and fine-grained manipulation localization.

Current features:

  • Support for deepfake detection in both facial imagery and natural scene contexts

  • Integration of diverse model architectures, including transformer-based and diffusion-based backbones

  • Simulation of adversarial attacks and evaluation of robustness against them (see the adversarial probe sketch after this list)

  • Assessment under both natural and synthetic perturbation scenarios (see the perturbation sketch after this list)

  • Generalization testing across unseen datasets and generative sources

  • Visual explanation through saliency maps and heatmap overlays (see the saliency sketch after this list)

  • Reasoning and interpretability modules for deepfake decisions

  • Multimodal feature analysis leveraging frequency-domain cues, visual content, and descriptive text (see the frequency-feature sketch after this list)
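
Below is a minimal sketch of the kind of adversarial robustness probe listed above, using an FGSM-style perturbation. The `detector` handle, the `epsilon` budget, and the function names are illustrative assumptions, not the project's actual API.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(detector, images, labels, epsilon=2 / 255):
    """Return adversarially perturbed copies of `images` under an L-inf budget."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(detector(images), labels)
    loss.backward()
    # Step in the direction that increases the detector's loss, then clamp to valid pixels.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def robustness_report(detector, loader, epsilon=2 / 255, device="cpu"):
    """Compare clean vs. adversarial accuracy of a detector over a dataloader."""
    detector.eval().to(device)
    clean_hits = adv_hits = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = fgsm_perturb(detector, images, labels, epsilon)
        with torch.no_grad():
            clean_hits += (detector(images).argmax(1) == labels).sum().item()
            adv_hits += (detector(adv).argmax(1) == labels).sum().item()
        total += labels.numel()
    return {"clean_acc": clean_hits / total, "adv_acc": adv_hits / total}
```

The gap between `clean_acc` and `adv_acc` is the basic robustness signal; stronger attacks such as PGD fit the same evaluation pattern.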
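
The natural-perturbation assessment can be pictured as re-running the same evaluation over degraded copies of each image. The transforms below (low-quality JPEG re-encoding and Gaussian blur) and their names are assumptions chosen to illustrate the idea, not the project's configured perturbation set.

```python
import io
from PIL import Image, ImageFilter

def jpeg_compress(img: Image.Image, quality: int = 30) -> Image.Image:
    """Re-encode at low JPEG quality to mimic social-media recompression."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()

def gaussian_blur(img: Image.Image, radius: float = 2.0) -> Image.Image:
    """Soften the high-frequency artifacts many generators leave behind."""
    return img.filter(ImageFilter.GaussianBlur(radius))

PERTURBATIONS = {
    "jpeg_q30": jpeg_compress,
    "blur_r2": gaussian_blur,
}

def perturbed_variants(img: Image.Image) -> dict:
    """One degraded copy per registered transform, keyed by transform name."""
    return {name: fn(img) for name, fn in PERTURBATIONS.items()}
```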
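
For the visual-explanation feature, a simple input-gradient saliency map already conveys the idea of highlighting the pixels that drive a decision; Grad-CAM-style variants follow the same pattern. The `detector` handle and the red-channel overlay are assumptions for illustration only.

```python
import numpy as np
import torch

def saliency_map(detector, image, target_class):
    """|d score / d pixel|, max-reduced over channels and normalised to [0, 1]."""
    image = image.clone().detach().unsqueeze(0).requires_grad_(True)  # (1, C, H, W)
    score = detector(image)[0, target_class]
    score.backward()
    sal = image.grad.abs().squeeze(0).max(dim=0).values  # (H, W)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal.cpu().numpy()

def overlay_heatmap(rgb: np.ndarray, sal: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend a red heatmap over an HxWx3 float image in [0, 1]."""
    heat = np.zeros_like(rgb)
    heat[..., 0] = sal  # red channel carries the saliency
    return np.clip((1 - alpha) * rgb + alpha * heat, 0.0, 1.0)
```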
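
Finally, the frequency-domain side of the multimodal analysis is typically a compact spectral fingerprint computed per image. The azimuthally averaged power spectrum below is one common choice; treating it as the project's exact feature is an assumption.

```python
import numpy as np

def radial_spectrum(gray: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Azimuthally averaged log power spectrum of a grayscale image.
    Upsampling and GAN artifacts tend to show up in the high-frequency bins."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.log1p(np.abs(f) ** 2)
    h, w = gray.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)          # radial distance from the spectrum centre
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    profile = np.bincount(idx, weights=power.ravel(), minlength=n_bins)[:n_bins]
    counts = np.bincount(idx, minlength=n_bins)[:n_bins]
    return profile / np.maximum(counts, 1)        # mean power per radial bin
```

The resulting vector can be concatenated with visual embeddings and text-description features before classification.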