Are Copilot prompt injection flaws vulnerabilities or AI limits?

Microsoft's Copilot AI is facing scrutiny over prompt injection and sandbox bypasses that a security researcher reported as vulnerabilities, but the company dismisses them as limitations inherent to generative AI tech; essentially, "that's just how LLMs roll for now." Take the file upload workaround: users can encode risky files in base64 to slip past upload restrictions, showing that Copilot's guardrails can be sidestepped with a little creativity. Some security pros argue this exposes real risks like data poisoning and unintended disclosures; others counter that it's a known weakness of large language models, which can't reliably tell instructions apart from data, and one that's hard to fix without gutting the AI's usefulness. For SMBs and MSPs relying on tools like Copilot, the debate underscores the need to treat AI as a double-edged sword: implement strong input validation and track emerging best practices so your smart assistant doesn't become a security headache.
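The base64 trick works because an encoded file is just a blob of text to a naive filter. As a minimal sketch of the kind of input validation the article recommends, the Python below scans input for long base64 runs and decodes them to check for risky file signatures. Everything here (the screen_input function, the 40-character threshold, the signature list) is illustrative and hypothetical, not taken from Copilot or the researcher's report:

```python
import base64
import re

# Hypothetical guard: flag long base64-looking runs in user input and decode
# them to check for known risky file signatures before anything reaches the
# model. The names, threshold, and signature list are illustrative only.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

RISKY_SIGNATURES = (
    b"MZ",          # Windows PE executable
    b"%PDF",        # PDF, which can carry embedded scripts
    b"PK\x03\x04",  # ZIP archive (also Office document containers)
)

def screen_input(text: str) -> list[str]:
    """Return warnings for base64 blobs that decode to blocked file types."""
    warnings = []
    for match in BASE64_RUN.finditer(text):
        blob = match.group(0)
        try:
            decoded = base64.b64decode(blob, validate=True)
        except ValueError:  # binascii.Error subclasses ValueError
            continue        # not valid base64, leave it alone
        for sig in RISKY_SIGNATURES:
            if decoded.startswith(sig):
                warnings.append(
                    f"base64 blob decodes to a blocked file type "
                    f"(magic bytes {sig!r})"
                )
    return warnings

if __name__ == "__main__":
    # Simulate a user smuggling an executable header past an upload filter.
    payload = base64.b64encode(b"MZ" + b"\x00" * 64).decode()
    print(screen_input(f"Please summarize this file: {payload}"))
```

Checking the decoded magic bytes rather than banning base64 outright keeps false positives down, since plenty of legitimate content (keys, tokens, image data) travels as base64 too. A production guard would need to go further, but the larger point stands: the filtering has to happen on your side of the prompt, because the model itself won't reliably draw the line between instructions and data.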

Source: https://www.bleepingcomputer.com/news/security/are-copilot-prompt-injection-flaws-vulnerabilities-or-ai-limits/