Image Payload Creating/Injecting tools
Updated Nov 30, 2023 - Perl
A list of backdoor learning resources
A framework for cybersecurity simulation and red-team operations: Windows auditing for newer vulnerabilities, misconfigurations, and privilege-escalation attacks, replicating the tactics and techniques of an advanced adversary in a network.
For educational purposes only: samples of old and new malware builders, including screenshots.
Backdoors framework for deep learning and federated learning. A lightweight tool for conducting backdoor research.
Hide your payload in a .jpg file
An open-source Python toolbox for backdoor attacks and defenses.
TrojanZoo provides a universal PyTorch platform for conducting security research (especially on backdoor attacks/defenses) in image classification with deep learning.
Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and Privacy 2019.
Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate".
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018)
A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them
A custom-coded FUD DLL, written in C, that when loaded via a decoy web-delivery module (firing a decoy program) gives a reverse shell (PowerShell) from the victim machine to the attacker console, over LAN and WAN.
Codes for NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models"
A curated list of papers & resources on backdoor attacks and defenses in deep learning.
Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient"
[Discontinued] Transform your payload into a fake PowerPoint (.ppt) file
An implementation demo, in PyTorch, of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE).