News

By embedding malicious Python code in various ways via a prompt, attackers can exploit the vulnerability to execute arbitrary code within the context of the process running PandasAI.
Setting up a Large Language Model (LLM) like Llama on your local machine allows for private, offline inference and experimentation.