News

An attacker with access to the PandasAI interface can perform prompt injection attacks, instructing the connected LLM to translate malicious natural language inputs into executable Python or SQL code.