Xhaheen's picture
Update README.md
3fcc621 verified
|
raw
history blame
4.77 kB
metadata
title: Falconz - Red teamers
emoji: 🚀
colorFrom: blue
colorTo: yellow
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: true
thumbnail: >-
  /static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F621c88aca7d6c7e0563256ae%2FsCv6mFixuQLmzhTJuzgXG.png%3C%2Fspan%3E
short_description: MCP Powered Redteaming tool to Safegaurd your Agentic Apps!!
tags:
  - building-mcp-track-enterprise
  - mcp-in-action-track-enterprise
  - security
  - red-teaming
  - ai-safety

🛡️ Falconz – Unified LLM Security & Red Teaming Platform

Welcome to our submission for the Hugging Face GenAI Agents & MCP Hackathon!
Falconz is a multi-model AI security platform built with Gradio & MCP and ANthropic Claude models, designed to detect jailbreaks, prompt injections, and unsafe LLM outputs in Agentic pipelines / LLM based workflows across multiple foundation models in real time.

🎥 Demo working Video:
Main Falconz demo showcasing core features with MCP in Action in Claude Desktop.

https://www.youtube.com/watch?v=wZ9RQjpoMYo

🌐 Social media -LinkedIn Post:
Public announcement and shareable link.
https://www.linkedin.com/posts/sallu-mandya_ai-aiagents-mcp-activity-7399436956662841344-3o1I?utm_source=share&utm_medium=member_desktop&rcm=ACoAACD-K8sBnXZWALlW2yw-AnT_4KptCJFJs7M

🌐 Platform Overview

Falconz provides a unified security layer for LLM-based apps by combining:

  • 🔐 Real-time jailbreak & prompt-injection detection using CLaude Model
  • 🧠 Multi-model testing across Anthropic, OpenAI, Gemini, Mistral, Phi & more
  • 🖼️ Image-based prompt injection scanning
  • 📊 Analytics dashboard for threat trends
  • 🪝 MCP integration for agentic workflows

This platform helps developers validate and harden LLM systems against manipulation and unsafe outputs.


🧩 Core Modules

💬 Chat & Response Analysis

  • Interact with multiple LLMs
  • Automatically evaluates model responses for:
    • Jailbreak signals
    • Policy violations
    • Manipulation attempts
  • Outputs structured JSON + visual risk scoring

📝 Prompt Tester

  • Test known or custom jailbreak prompts
  • Compare how different models respond
  • Ideal for red-teaming and benchmarking model safety

🖼️ Image Scanner

  • Detects hidden prompt instructions within images
  • Flags potential injection attempts (SAFE / UNSAFE)

⚙️ Prompt Library (Customizable)

  • Built-in top 10 jailbreak templates (OWASP-inspired)
  • Users can update and auto-modify prompt templates
  • Supports CSV import + dynamic replacements

📊 Analytics Dashboard

  • Trends of SAFE vs UNSAFE detections
  • Risk score visualization
  • Model performance insights

🔗 Multi-Model Support

Falconz integrates with (With openAI like Endpoints):

  • ✅ Anthropic
  • ✅ openai
  • ✅ Google Gemini
  • ✅ Mistral
  • ✅ Microsoft Phi
  • ✅ Meta (Guard Models)
  • ✅ Meta (Guard Models)
  • Any Custom model from OpenRouter or OpenAI like endpoints

Each model can be tested independently for safety robustness.


High-level components:

  • Frontend: Gradio UI (Multi-tab interaction)
  • Middleware: MCP-powered routing & agent logic
  • Backend: Multi-model OpenRouter API
  • Analytics: Local CSV logging + dashboards

🚀 How It Works (Full App Flow Across All Tabs)

✅ 1️⃣ Chat & Analysis Flow

  1. User enters a message in the Chat tab
  2. Falconz sends the message to the selected LLM model
  3. The model responds normally
  4. The response is passed through the risk analysis engine
  5. A JSON risk score + visual report is generated
  6. Conversation & analysis logs are stored for analytics

✅ 2️⃣ Text Prompt Tester Flow

  1. User inputs a jailbreak/prompt-injection test prompt
  2. Falconz sends it directly to the selected guard model
  3. The raw model response is returned (no chat history)
  4. Users compare responses to evaluate model safety behavior

✅ 3️⃣ Image Scanner Flow

  1. User uploads an image containing text or hidden instructions
  2. Falconz extracts image content and sends it to a vision model
  3. The model evaluates the content for injection threats
  4. Output is classified as SAFE or UNSAFE

🧑‍💻 Authors

📝 License

This project is licensed under the MIT License.


✅ Reminder

Falconz is intended only for ethical security testing and AI safety research as part of MCP Gradio Hackathon.
Users are responsible for complying with all laws, policies, and platform terms.

🛡️ Build safe. Test responsibly. Protect the future of AI , contact me to Xhaheen for Collab .