Interpretable Face Manipulation Detection via Feature Whitening

Yingying Hua, Daichi Zhang, Pengju Wang, Shiming Ge

Paper [ICML Workshop 2021]   


Why should we trust the detections of deep neural networks for manipulated faces? Understanding the reasons is important for users in improving the fairness, reliability, privacy and trust of detection models. In this work, we propose an interpretable face manipulation detection approach to achieve trustworthy and accurate inference. The approach makes the face manipulation detection process transparent by embedding a feature whitening module. This module aims to whiten the internal working mechanism of deep networks through feature decorrelation and feature constraint. The experimental results demonstrate that our proposed approach can strike a balance between detection accuracy and model interpretability.
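The feature decorrelation ingredient mentioned in the abstract can be sketched in isolation. The snippet below is a minimal ZCA-style whitening of a batch of feature vectors, assuming the module operates on flattened internal features; the paper's exact constraint term and network integration are not reproduced here.

```python
import numpy as np

def whiten_features(feats, eps=1e-5):
    """Decorrelate a batch of feature vectors (ZCA-style whitening).

    A minimal sketch of feature decorrelation only; the whitening
    module in the paper may differ in design.
    """
    mean = feats.mean(axis=0, keepdims=True)
    centered = feats - mean
    cov = centered.T @ centered / len(feats)
    # Symmetric inverse square root of the covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(cov)
    inv_sqrt = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return centered @ inv_sqrt

rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 8))   # toy batch of 8-dim features
white = whiten_features(feats)
cov_w = white.T @ white / len(white)
# After whitening, the feature covariance is close to the identity.
```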




Detecting Deepfake Videos with Temporal Dropout 3DCNN

Daichi Zhang, Chenyu Li, Fanzhao Lin, Dan Zeng, Shiming Ge

Paper [IJCAI2021]   


While the abuse of deepfake technology has had a serious impact on human society, detecting deepfake videos remains very challenging due to their highly photorealistic synthesis on each frame. To address this, the paper leverages possible inconsistency cues among video frames and proposes a Temporal Dropout 3-Dimensional Convolutional Neural Network (TD-3DCNN) to detect deepfake videos. In the approach, fixed-length frame volumes sampled from a video are fed into a 3-Dimensional Convolutional Neural Network (3DCNN) to extract features across different scales and identify whether they are real or fake. In particular, a temporal dropout operation is introduced to randomly sample frames in each batch. It serves as a simple yet effective data augmentation that enhances representation and generalization ability, avoiding model overfitting and improving detection accuracy. In this way, the resulting video-level classifier is accurate and effective in identifying deepfake videos. Extensive experiments on benchmarks including Celeb-DF(v2) and DFDC clearly demonstrate the effectiveness and generalization capacity of our approach.
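The temporal dropout idea, randomly sampling a fixed-length frame volume per batch, can be sketched as follows. `temporal_dropout` and the toy video array are illustrative names and shapes, not the paper's implementation.

```python
import numpy as np

def temporal_dropout(video_frames, volume_len, rng=None):
    """Randomly sample a fixed-length frame volume from a video.

    A hypothetical sketch of the temporal dropout idea: instead of
    taking consecutive frames, pick `volume_len` frame indices at
    random (kept in temporal order) so each batch sees a different
    subset of frames, acting as a data augmentation.
    """
    if rng is None:
        rng = np.random.default_rng()
    num_frames = len(video_frames)
    idx = np.sort(rng.choice(num_frames, size=volume_len, replace=False))
    return video_frames[idx]

# Usage: a toy "video" of 32 frames, each a 4x4 RGB image.
video = np.zeros((32, 4, 4, 3))
volume = temporal_dropout(video, volume_len=8)
```

In a training loop this would be applied per batch before the volume is fed to the 3DCNN, so the network rarely sees the same frame combination twice.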


@inproceedings{zhang2021detecting,
	    author = {Zhang, Daichi and Li, Chenyu and Lin, Fanzhao and Zeng, Dan and Ge, Shiming},
  	    title = {Detecting Deepfake Videos with Temporal Dropout 3DCNN},
  	    booktitle = {Proceedings of the 30th International Joint Conference on Artificial Intelligence},
  	    year = {2021},
}


Interpret the Predictions of Deep Networks via Re-label Distillation

Yingying Hua, Shiming Ge, Daichi Zhang

Paper [ICME 2021]   


Interpreting the predictions of a black-box deep network can facilitate the reliability of its deployment. In this work, we propose a re-label distillation approach to learn a direct map from the input to the prediction in a self-supervised manner. The image is projected into a VAE subspace to generate synthetic images by randomly perturbing its latent vector. These synthetic images are then annotated into one of two classes according to whether their labels shift. After that, using the labels annotated by the deep network as the teacher, a linear student model is trained to approximate the annotations by mapping these synthetic images to the classes. In this manner, the re-labeled synthetic images can well describe the local classification mechanism of the deep network, and the learned student provides a more intuitive explanation of the predictions. Extensive experiments verify the effectiveness of our approach qualitatively and quantitatively.
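The three steps of re-label distillation (perturb the latent, re-label with the teacher, fit a linear student) can be illustrated with a toy sketch. The encoder, decoder, and teacher below are trivial stand-ins, hypothetical names rather than the paper's VAE or deep network, shown only to make the pipeline concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):           # stand-in VAE encoder: identity latent
    return x

def decode(z):           # stand-in VAE decoder: identity reconstruction
    return z

def teacher_predict(x):  # stand-in deep network: sign of first feature
    return (x[..., 0] > 0).astype(int)

# 1. Perturb the latent vector of one input to get synthetic samples.
x = rng.normal(size=4)
z = encode(x)
synthetic = decode(z + rng.normal(scale=0.5, size=(200, 4)))

# 2. Re-label each synthetic sample into one of two classes:
#    does the teacher's label shift relative to the original input?
base_label = teacher_predict(x)
labels = (teacher_predict(synthetic) != base_label).astype(int)

# 3. Fit a linear student (least-squares with a bias column) so that a
#    simple, inspectable map approximates the teacher locally.
X = np.hstack([synthetic, np.ones((len(synthetic), 1))])
w, *_ = np.linalg.lstsq(X, labels.astype(float), rcond=None)
student_pred = (X @ w > 0.5).astype(int)
accuracy = (student_pred == labels).mean()
```

The student's weights `w` then serve as a local, directly readable explanation of which input directions drive the teacher's label shift.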


@inproceedings{hua2021interpret,
                author={Hua, Yingying and Ge, Shiming and Zhang, Daichi},
                booktitle={2021 IEEE International Conference on Multimedia and Expo (ICME)},
                title={Interpret The Predictions Of Deep Networks Via Re-Label Distillation},
                year={2021},
}