
Classifying Deepfake Images using a Residual Learning Architecture


GSaiDheeraj/deepfakesdetection


Deepfakes-Detection

https://deepfakedetectionwithai.herokuapp.com/


Demo video: bandicam.2022-02-04.11-53-30-207.mp4

Team Members:

G. Sai Dheeraj : Deep Learning model / Deployment

Methodology: CRISP-DM (Cross-Industry Standard Process for Data Mining)

  1. Problem : Detect Deepfakes

  2. Data Gathering : Kaggle (https://www.kaggle.com/xhlulu/140k-real-and-fake-faces)

  3. Data Cleaning : Removed duplicate images from the data (see the deduplication sketch after this list).

  4. Data Preparation : Resized the images to the input shape required by the transfer-learning models (see the preprocessing sketch after this list).

  5. Modelling : Used DenseNet, VGGFace, and a custom-designed architecture (a DenseNet transfer-learning sketch appears after this list).

    - Why these three models?
    Answer) We usually start with a simple architecture and work up to more complex ones. From here the web development cycle starts:

    1. Requirements Gathering : Understand the requirements and the data source
    2. Identifying the Problems : The hardest problems are the Heroku slug-size limit and the future scope of the project.
    3. Wireframing : Created a flow chart connecting the different pages, e.g. the About page with the disease page and the home page with the result page.
    4. Tools Gathering
    5. Content Creation : Created the content that needs to appear on the website
    6. Website Experiments : Created different website / UI designs
    7. Integration with the deep learning models
  6. Deployment in the cloud (Heroku); a minimal serving sketch appears at the end of this README.

  7. Maintenance.
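
For step 3 (data cleaning), here is a minimal deduplication sketch. It assumes byte-identical duplicates and a hypothetical `real_vs_fake/train` directory layout; it is not the exact cleaning script used in the notebooks.

```python
import hashlib
from pathlib import Path

def remove_duplicate_images(image_dir):
    """Delete byte-identical images, keeping the first copy of each file."""
    seen = set()
    for path in sorted(Path(image_dir).rglob("*.jpg")):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()          # exact duplicate of an earlier image
        else:
            seen.add(digest)

remove_duplicate_images("real_vs_fake/train")   # hypothetical dataset path
```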

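For step 4 (data preparation), a minimal preprocessing sketch using Keras' `ImageDataGenerator`. The 224x224 target size matches the default DenseNet121 / VGGFace input; the directory path, batch size, and validation split are assumptions.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values and resize every image to the 224x224 input size
# expected by DenseNet- and VGGFace-style backbones.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.1)

train_gen = datagen.flow_from_directory(
    "real_vs_fake/train",        # hypothetical path to the Kaggle data
    target_size=(224, 224),
    batch_size=64,
    class_mode="binary",         # two classes: real vs. fake
    subset="training",
)
```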

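For step 5 (modelling), a sketch of one of the transfer-learning models: a frozen DenseNet121 backbone with a small binary-classification head. The head layers, optimizer, and hyperparameters are assumptions, not the exact architecture trained in the notebooks.

```python
import tensorflow as tf
from tensorflow.keras.applications import DenseNet121

# Frozen ImageNet-pretrained DenseNet121 backbone + binary head.
base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(fake)
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_gen, epochs=5)
```
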
The notebooks need to be run in the Kaggle environment because of library version mismatches and the size of the dataset.
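
For step 6 (deployment), the app is served on Heroku; below is a minimal sketch of the kind of Flask prediction endpoint such a deployment typically exposes. The model filename, route name, and 224x224 input size are assumptions, not the exact app code.

```python
# app.py - minimal prediction endpoint for a Heroku-style deployment.
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("deepfake_model.h5")  # hypothetical filename

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded face image, resize it, and classify it.
    img = Image.open(request.files["image"]).convert("RGB").resize((224, 224))
    x = np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)
    prob_fake = float(model.predict(x)[0][0])
    return jsonify({"fake_probability": prob_fake})

if __name__ == "__main__":
    app.run()
```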