Short answer: Prompt injection is when a user (or attacker) tricks an LLM-powered chatbot into ignoring its system instructions by embedding adversarial text…