Tag: prompt injection

Secure Development for Generative AI: Secrets, Logging, and Red-Teaming

Tamara Weed, Mar 20, 2026

Secure generative AI development requires rethinking secrets, logging, and testing. Learn how defending against prompt injection, maintaining AI-BOMs, red-teaming, and issuing short-lived credentials protect your models from emerging threats in 2026.

Databricks AI Red Team Findings: How AI-Generated Game and Parser Code Can Be Exploited

Tamara Weed, Mar 18, 2026

The Databricks AI Red Team uncovered critical vulnerabilities in AI-generated game and parser code, revealing how prompt injection and data leakage can slip past traditional security tools. Learn how to protect your systems.

Security Risks in LLM Agents: Injection, Escalation, and Isolation

Tamara Weed, Mar 8, 2026

LLM agents are autonomous systems with dangerous security flaws: prompt injection, privilege escalation, and isolation failures are already causing real-world breaches. Learn how these threats work and what actually stops them.
