News

JavaScript injection attacks surged in 2024, hitting major brands through the compromised Polyfill.io CDN. Learn why frameworks failed.
An attacker with access to the PandasAI interface can perform prompt injection attacks, instructing the connected LLM to translate malicious natural language inputs into executable Python or SQL code.
Mozilla recently disclosed a new prompt injection attack against Google Gemini for Workspace, which can be abused to turn AI-generated summaries of Gmail messages into an effective phishing vector.