Standardized benchmark for AI safety scanners. Run your scanner, get a score. From Authensor / 15 Research Lab.
Updated Mar 15, 2026 · TypeScript
Behavioral fingerprinting for AI agents. Build profiles, detect drift, identify compromise.
Can AI systems detect when they're being evaluated? Research paper and reference implementation exploring the Hawthorne Effect for AI.
Bidirectional mapping between MITRE ATT&CK and AI alignment failure modes. A Rosetta Stone between cybersecurity and AI safety.
Black box recorder for AI agents. Reconstruct decisions, detect anomalies, verify audit chains. Part of the Authensor safety stack.
Map the attack surface of AI agents. Enumerates tools, capabilities, and security gaps. SARIF output. From the team behind 126 responsible disclosures across NVIDIA, Microsoft, Meta, Google.
AI security payloads and wordlists. Prompt injection, jailbreaks, model exploitation. The SecLists of AI. From the team behind 350+ verified vulnerabilities across the ML ecosystem.