OffSec RedTeam Info
OffSec RedTeam Info is a SlimPajama‑style, category‑organized corpus of security knowledge text crawled from reputable red‑team/blue‑team websites: wikis, training blogs, vendor research, CERT advisories, reversing/malware labs, cloud/kubernetes posts, OSINT handbooks, AD tradecraft, and more.
Token count: ~1.646B.
⚠️ Ethical use only. Use for research, education, and defensive security. Respect robots.txt, site terms, and copyrights. Do not misuse this corpus to harm systems or violate laws.
What’s new (Nov 2025)
- Per‑category Parquet configs published for fast streaming via datasets.load_dataset(...) (see the sketch below).
- raw/ JSONL kept alongside Parquet for reproducibility and low‑level processing.
- Consistent schema across all categories (see below) with required text and meta keys.
- Balanced shard sizes (≈256–512 MB) to keep memory steady during load & push.
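For quick inspection, a config can also be streamed without downloading the full shard set (a minimal sketch; any listed config name works):

from datasets import load_dataset

# Stream records from one Parquet config instead of materializing it on disk
ds = load_dataset("tandevllc/offsec_redteam_info", name="ad_ops", split="train", streaming=True)
for record in ds.take(3):
    print(record["meta"]["url"])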
Repository layout
/                       # dataset root (this card lives here as README.md)
raw/                    # line-delimited JSON for each category (post-clean)
  <category>.jsonl      # e.g., raw/ad_ops.jsonl
<category>/             # per-category Parquet config (train split)
  <category>.parquet    # e.g., ad_ops/ad_ops.parquet
Parquet is present for non‑empty categories. Some categories may be JSONL‑only depending on the snapshot.
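To check which categories ship Parquet versus JSONL-only in the snapshot you are on, the repo contents can be listed via the Hub API (a minimal sketch using huggingface_hub):

from huggingface_hub import list_repo_files

# Enumerate every file in the dataset repo
files = list_repo_files("tandevllc/offsec_redteam_info", repo_type="dataset")
print([f for f in files if f.endswith(".parquet")])   # Parquet configs
print([f for f in files if f.startswith("raw/")])     # raw JSONL files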
Categories
- core_wikis – foundational red‑team/blue‑team references (ATT&CK, CAPEC/CWE, GTFOBins, LOLBAS, PayloadsAllTheThings, etc.).
- web_app – OWASP content, web vulns, API security, web‑sec blogs.
- training_blogs – walkthroughs, labs, CTF‑style posts and methodology.
- ad_ops – Active Directory/Windows internals, abuse paths, domain tradecraft.
- windows_privesc, linux_unix – OS‑specific privilege escalation & hardening.
- cloud_redteam, kubernetes_container – cloud & container security.
- osint_recon, phishing_se – OSINT techniques, social engineering.
- c2_tradecraft – C2 techniques, operator tradecraft (defensive write‑ups included).
- mobile_wireless – mobile, Wi‑Fi/Bluetooth/802.11 and radio‑adjacent topics.
- ics_scada – industrial control systems / OT security.
- reversing_malware – reversing & malware analysis posts from labs and vendors.
- binary_exploitation – pwn, exploitation notes, vuln research.
- password_cracking – hashcat/john guides, NIST/NCSC guidance.
- dfir_detection – incident response, detection engineering, Sigma, DFIR reports.
- threat_intel – vendor TI, advisories, newsroom items with technical depth.
Exact category availability depends on the current revision (feeds change; some snapshots may be sparser).
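Because of this, it is safer to enumerate configs programmatically than to hard-code the list above (a minimal sketch):

from datasets import get_dataset_config_names

# Query the configs available in the current revision
print(get_dataset_config_names("tandevllc/offsec_redteam_info"))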
Schema (UPDATED)
Each record follows exactly this structure:
{
"text": "<cleaned article/content text>",
"meta": {
"url": "https://example.com/path",
"title": "<page title>",
"source": "example.com",
"category": "ad_ops",
"timestamp": "2025-11-02T22:21:39.384421+00:00",
"language": "en"
}
}
Field definitions
text (string) — cleaned article/content text (readability‑style extraction, normalized whitespace).
meta (object) — metadata container with the following keys:
- url (string) — canonical URL of the item.
- title (string|null) — page title.
- source (string|null) — site/domain the content came from (e.g., www.semperis.com).
- category (string) — logical bucket matching the config name (e.g., ad_ops).
- timestamp (string, ISO‑8601) — fetch/process time for the item.
- language (string) — language code (e.g., en).
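A lightweight conformance check against this schema might look like the following (an illustrative sketch; validate_record is not part of the dataset or its tooling):

REQUIRED_META_KEYS = {"url", "title", "source", "category", "timestamp", "language"}

def validate_record(record: dict) -> bool:
    """Return True if a record has the required text/meta structure."""
    meta = record.get("meta")
    return (
        isinstance(record.get("text"), str)
        and isinstance(meta, dict)
        and REQUIRED_META_KEYS.issubset(meta)
    )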
Load examples
Load a single category (Parquet, recommended)
from datasets import load_dataset
REPO = "tandevllc/offsec_redteam_info"
cat = "web_app" # pick any listed config
ds = load_dataset(REPO, name=cat, split="train")
print(len(ds), ds.column_names[:6])
print(ds[0]["text"][:400])
print(ds[0]["meta"]["url"]) # access metadata
Load multiple categories and interleave
from datasets import load_dataset, interleave_datasets
REPO = "tandevllc/offsec_redteam_info"
names = ["core_wikis", "training_blogs", "threat_intel"]
parts = [load_dataset(REPO, name=n, split="train") for n in names]
# Uniform interleave (good for blended training/eval)
blend = interleave_datasets(parts, probabilities=[1/len(parts)]*len(parts), seed=42)
Filter typical research slices
# Keep only long English articles
long_en = ds.filter(lambda r: (r.get("text") and len(r["text"]) > 1200) and ((r.get("meta") or {}).get("language") == "en"))
# Narrow to a specific source/domain
from_portswigger = ds.filter(lambda r: ((r.get("meta") or {}).get("source") or "").endswith("portswigger.net"))
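A filtered slice can be persisted for later runs, e.g. as line-delimited JSON (a minimal sketch; the output filename is arbitrary):

# Write the filtered slice back out as JSONL
long_en.to_json("web_app_long_en.jsonl", lines=True)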
Load raw JSONL
from datasets import load_dataset
raw = load_dataset(
    "json",
    # The json builder takes no repo_id; point data_files at the file on the Hub
    data_files="hf://datasets/tandevllc/offsec_redteam_info/raw/web_app.jsonl",
    split="train",
)
Cleaning & quality (high level)
- Content extracted with readability‑style heuristics; multi‑block merge when the best block is short.
- Basic quality gates: minimum words/sentences, alpha‑fraction, optional index‑page filtering by link density.
- Normalization: canonicalized URLs, per‑category dedup by link/content hash (some snapshots may apply global dedup).
- Non‑content and noisy paths avoided (search, feeds, asset dirs, etc.).
These heuristics favor clean prose and technical material, but may still include boilerplate or miss embedded code blocks.
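The per-category content-hash dedup can be approximated as follows (an illustrative sketch, not the pipeline's actual code; the whitespace/case normalization is an assumption):

import hashlib

def content_key(text: str) -> str:
    """Hash whitespace-normalized, lowercased text; equal keys flag duplicates."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

seen, unique = set(), []
for record in records:  # records: any iterable of {"text": ..., "meta": ...} dicts (assumed)
    key = content_key(record["text"])
    if key not in seen:
        seen.add(key)
        unique.append(record)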
Intended uses
- Pretraining / continued pretraining of security‑aware language models.
- RAG / retrieval over current security references and blogs, by category/site.
- Evaluation of security knowledge, extraction, summarization, and long‑context QA.
- Trend analysis across sources (pair with timestamps when present).
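As one example of the trend-analysis use case, records can be bucketed by month from the ISO-8601 timestamp (a minimal sketch; skips records without a timestamp):

from collections import Counter

# Count records per YYYY-MM using the meta.timestamp prefix
by_month = Counter(
    r["meta"]["timestamp"][:7]
    for r in ds
    if (r.get("meta") or {}).get("timestamp")
)
print(by_month.most_common(5))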
This dataset is not a CVE ground‑truth database and does not replace vendor advisories.
Limitations & caveats
- Copyrights & terms apply. Underlying website content retains the publisher’s license/terms.
- Temporal drift. Websites change; snapshots may vary; links can rot.
- Extraction noise. Readability may omit figures/code or include navigation text.
- Metadata sparsity. Some fields are missing for certain sources.
License & access
License: "TanDev Proprietary License — All Rights Reserved"
Underlying content: remains under each site’s terms. For conservative use, store only links and your own embeddings/summaries.
Commercial usage: A paid TanDev Commercial License is available for commercial training/inference and internal derivatives. Contact [email protected] with organization, intended use, and deployment details.
Takedowns: If you own content included here and want it removed, please open an issue or email the maintainer.
Citation
@dataset{tandevllc_2025_offsec_redteam_info,
author = {Gupta, Smridh},
title = {OffSec RedTeam Info},
year = {2025},
url = {https://huggingface.co/datasets/tandevllc/offsec_redteam_info}
}
Maintainer
Smridh Gupta — [email protected]