News
An attacker with access to the PandasAI interface can perform prompt injection attacks, instructing the connected LLM to translate malicious natural language inputs into executable Python or SQL code.
Some results have been hidden because they may be inaccessible to you
Show inaccessible results