Deepfake detection
Analyze suspicious files and URLs to detect types of AI-generated visual threats. Note that your sample submissions must contain human faces, as every analysis looks for signs of manipulation and synthesis in the face area…
Source: https://platform.sensity.ai/deepfake-detection (note: this service is no longer free)
Another good source:
https://pentestit.com/open-source-deepfake-detection-tool-list/
FALdetector: This open source Python tool helps you detect Photoshopped faces. The basic premise behind it is that most malicious photo manipulations are created using standard image editing tools, such as Adobe Photoshop. It detects image-warping edits applied to human faces using a model trained entirely on fake images that were automatically generated by scripting Photoshop itself. The authors demonstrate this approach in their academic paper and the accompanying open source project. Pretty impressive, I must say.
Deepstar: I blogged about this tool about a month ago. Deepstar, aka deep*, is an open source, AI-based Python toolkit that helps you detect deepfake videos. It is also extensible enough to facilitate testing of new detection algorithms. Check out its GitHub repository.
Visual DeepFake Detection: This tool takes a different approach to detecting deepfakes. Since different people create different deepfake videos, it assumes these videos are produced using a variety of deepfake techniques, and it makes no assumptions about the type of models, their architecture, or the kind of artifacts they generate. What's more, the real and fake videos used are completely unrelated! This toolset also augments another dataset called FaceForensics++. Check out the GitHub repository of this offering by Dessa.
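Detectors like this typically score individual video frames and then aggregate the per-frame fake probabilities into a video-level verdict. A minimal sketch of that aggregation step in plain Python (the per-frame scores below are made-up placeholders, not real model outputs, and the mean-based aggregation is one common choice, not necessarily the one this tool uses):

```python
def video_score(frame_probs, threshold=0.5):
    """Aggregate per-frame fake probabilities into a video-level verdict.

    frame_probs: probabilities in [0, 1] that each frame is fake, assumed
    to come from some per-frame classifier (e.g. a CNN).
    """
    mean_prob = sum(frame_probs) / len(frame_probs)
    return mean_prob, mean_prob >= threshold

# Hypothetical per-frame scores for a suspicious clip
probs = [0.92, 0.88, 0.75, 0.95, 0.81]
score, is_fake = video_score(probs)
```

Averaging smooths out individual misclassified frames; a max-based aggregation would instead flag a video if any single frame looks manipulated.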
DeepFake Audio Detection: By now, we also have techniques and tools to detect deepfaked audio. This deepfake audio detector is a deep neural network that uses temporal convolution. First, raw audio is preprocessed and converted into a mel-frequency spectrogram, which is the input to the model. The mel-frequency cepstrum (MFC) is a representation of the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The model performs convolutions over the time dimension of the spectrogram, then uses masked pooling to prevent overfitting. Finally, the output is passed through a dense layer and a sigmoid activation function, which ultimately outputs a predicted probability between 0 (fake) and 1 (real). Check out the GitHub repository of this offering by Dessa as well.
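The pipeline above (temporal convolution → masked pooling → dense layer → sigmoid) can be sketched in plain Python with toy weights. The kernel, weight, bias, and frame values here are illustrative assumptions, not the trained model's parameters:

```python
import math

def temporal_conv(frames, kernel):
    # 1D convolution over the time axis: each frame is a vector of
    # mel-band energies, and the kernel spans len(kernel) frames.
    k = len(kernel)
    return [
        sum(kernel[i] * sum(frames[t + i]) for i in range(k))
        for t in range(len(frames) - k + 1)
    ]

def masked_mean_pool(values, mask):
    # Average only over positions the mask marks as valid
    # (real audio rather than padding).
    kept = [v for v, m in zip(values, mask) if m]
    return sum(kept) / len(kept)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(frames, mask, kernel, weight, bias):
    pooled = masked_mean_pool(temporal_conv(frames, kernel), mask)
    return sigmoid(weight * pooled + bias)  # probability in (0, 1)

# Toy input: 6 spectrogram frames of 3 mel bands, last conv output masked out
frames = [[0.1, 0.1, 0.1]] * 6
p = predict(frames, mask=[1, 1, 1, 0], kernel=[0.5, 0.3, 0.2],
            weight=1.0, bias=0.0)
```

The mask matters because clips are padded to a fixed length: pooling over padding would let the model key on silence instead of speech.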
Resemblyzer: Resemblyzer allows you to derive a high-level representation of a voice through a deep learning model. Given an audio file of speech, it creates a summary vector of 256 values that captures the characteristics of the spoken voice. It helps with fake speech detection by comparing the similarity of possibly fake speech to known real speech from the same speaker. Check this project out here.
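The comparison step boils down to measuring similarity between two 256-value voice embeddings, e.g. with cosine similarity. In the sketch below the random vectors stand in for real embeddings (which you would obtain from Resemblyzer's voice encoder on actual audio):

```python
import math
import random

def cosine_similarity(a, b):
    # Similarity of two voice embeddings: close to 1.0 means the same
    # voice characteristics, close to 0.0 means unrelated voices.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

random.seed(0)
# Stand-ins for 256-d voice embeddings; real ones come from Resemblyzer.
real_embed = [random.gauss(0, 1) for _ in range(256)]
same_voice = [x + random.gauss(0, 0.1) for x in real_embed]   # slight variation
other_voice = [random.gauss(0, 1) for _ in range(256)]        # unrelated voice

sim_same = cosine_similarity(real_embed, same_voice)
sim_other = cosine_similarity(real_embed, other_voice)
```

A suspect clip whose embedding scores much lower against the speaker's verified recordings than their genuine clips do is a candidate fake; the decision threshold has to be tuned per use case.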