Antideepfake Methods for Multimodal Technologies

Feb 2024 – Jun 2024

This research project investigated cutting-edge approaches to detecting and mitigating deepfake content across multiple modalities, including video, audio, and text. By analyzing the strengths and weaknesses of current detection systems and proposing new evaluation frameworks, we aimed to improve the robustness of deepfake detection in real-world scenarios.

Visualization of deepfake detection features across video frames

Comparison of detection performance across different demographic groups

Technologies Used

Deep Learning · Computer Vision · Python · PyTorch

Challenges

  • Evaluating detection models against increasingly sophisticated deepfake generation techniques
  • Addressing dataset biases that lead to unfair performance across demographic groups
  • Creating standardized benchmarks that reflect real-world deployment scenarios
  • Developing detection methods that work across multiple modalities (video, audio, text)
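One way to approach the benchmarking challenge above is to score each evaluation dataset with a threshold-free metric such as ROC-AUC, so detectors can be compared without picking an operating point. A minimal pure-Python sketch; the datasets, labels, and scores below are made up for illustration and do not come from the project:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the rank (Mann-Whitney) formulation: the probability
    that a randomly chosen fake sample outscores a randomly chosen real one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]  # fake samples
    neg = [s for y, s in zip(labels, scores) if y == 0]  # real samples
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-dataset detector scores (label 1 = fake, 0 = real).
benchmarks = {
    "dataset_a": ([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]),
    "dataset_b": ([1, 0, 1, 0], [0.6, 0.5, 0.4, 0.2]),
}
for name, (labels, scores) in benchmarks.items():
    print(f"{name}: AUC = {roc_auc(labels, scores):.2f}")
```

Reporting the metric per dataset, rather than one pooled number, makes cross-dataset generalization gaps visible, which is what a real-world benchmark needs to surface.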

Solutions

  • Conducted comprehensive analysis of state-of-the-art deepfake detection models across diverse datasets
  • Investigated new fairness metrics for deepfake detection and facial recognition tasks
  • Proposed a 'Model Card' framework for standardized benchmarking and reporting of deep learning models
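A fairness metric of the kind investigated above can be as simple as comparing error rates across demographic groups. The sketch below computes per-group true/false-positive rates and an equalized-odds-style gap; the metric choice and the example data are illustrative assumptions, not the project's actual formulation:

```python
def group_rates(labels, preds, groups):
    """Per-group (TPR, FPR) for binary detection predictions."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        tp = sum(1 for i in idx if labels[i] == 1 and preds[i] == 1)
        fp = sum(1 for i in idx if labels[i] == 0 and preds[i] == 1)
        p = sum(1 for i in idx if labels[i] == 1)   # positives in group
        n = sum(1 for i in idx if labels[i] == 0)   # negatives in group
        rates[g] = (tp / p if p else 0.0, fp / n if n else 0.0)
    return rates

def equalized_odds_gap(rates):
    """Largest TPR or FPR spread across groups (0.0 = perfectly fair)."""
    tprs = [t for t, _ in rates.values()]
    fprs = [f for _, f in rates.values()]
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Hypothetical predictions for two demographic groups.
labels = [1, 1, 0, 0, 1, 1, 0, 0]
preds  = [1, 1, 0, 0, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(equalized_odds_gap(group_rates(labels, preds, groups)))
```

A nonzero gap here flags the kind of demographic performance disparity the project set out to measure in deepfake detection and facial recognition datasets.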

Outcomes

  • Identified critical biases in current deepfake datasets and proposed remediation strategies
  • Developed a fairness-aware evaluation framework
  • Identified gaps in model benchmark reporting that lead to misrepresentation of model performance
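The benchmark-reporting gaps noted above are exactly what a model card is meant to close: a structured record of what a model was trained on, where it was evaluated, and how performance breaks down by group. A minimal sketch of such a schema; the field names and methods are hypothetical illustrations, not the proposed standard:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model-card schema for deepfake-detection benchmarks."""
    model_name: str
    training_data: str
    evaluation_datasets: list
    overall_auc: float
    per_group_auc: dict = field(default_factory=dict)   # group -> AUC
    known_limitations: list = field(default_factory=list)

    def auc_disparity(self):
        """Spread between the best- and worst-served demographic group."""
        if not self.per_group_auc:
            return 0.0
        vals = list(self.per_group_auc.values())
        return max(vals) - min(vals)

# Hypothetical card for a detector evaluated on two groups.
card = ModelCard(
    model_name="frame-detector-v1",
    training_data="dataset_a (train split)",
    evaluation_datasets=["dataset_a", "dataset_b"],
    overall_auc=0.91,
    per_group_auc={"group_1": 0.95, "group_2": 0.85},
    known_limitations=["low-light footage", "heavy compression"],
)
print(f"group disparity: {card.auc_disparity():.2f}")
```

Requiring the disaggregated fields, not just the overall score, is what prevents a single headline number from misrepresenting real performance.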