MSc Dissertation: Ensemble neural network for static malware classification using multiple representations (updated Aug 14, 2022, PureBasic)
👮 Simulate various public and private security scenarios.
IDVoice + ChatGPT Android demo app
CLI tool that uses the Lakera API to perform security checks on LLM inputs
AntiNex python client for training and using pre-trained deep neural networks with JWT authentication
Prompt engineering tool for AI models, usable via a CLI prompt or an API
Building Private Healthcare AI Assistant for Clinics Using Qdrant Hybrid Cloud, DSPy and Groq - Llama3
Python SDK for IvyCheck
Official code for paper: Z. Zhang, X. Wang, J. Huang and S. Zhang, "Analysis and Utilization of Hidden Information in Model Inversion Attacks," in IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2023.3295942
IDVoice + ChatGPT iOS demo app
A centralized resource for technical professionals looking to establish a strategy for implementing security and responsible AI practices on Azure
GeminiHacker is a Python script that harnesses a generative AI model for security research, bug bounty hunting, and vulnerability scanning; its README provides detailed instructions for installing, configuring, and using the script.
Datasets for training deep neural networks to defend software applications
Evaluation & testing framework for computer vision models
Official Implementation of IEEE TIFS paper Odyssey: Creation, Analysis and Detection of Trojan Models
MINOTAUR: a prompt-security challenge on FlowGPT built around a hardened system prompt, covering secure prompting, prompt hacking, system-prompt leak prevention, and LLM security vulnerabilities.
AI/LLM Prompt Injection List is a curated collection of prompts for testing AI systems and Large Language Models (LLMs) for prompt injection vulnerabilities, providing a comprehensive set of inputs for evaluating how these systems behave under adversarial prompting.
Neural networks, but malefic! 😈
A multi-layer prompt defence for protecting your applications against prompt injection attacks.