Creating a Microsoft Copilot chatbot is easy, but making it safe and secure is pretty much impossible, security experts say.

General
Creating a Microsoft Copilot chatbot is easy, but making it safe and secure is pretty much impossible, security experts say.
[Microsoft's Copilot Studio is a useful tool for the less technically savvy (those who can't dream in Fortran) to create their own chatbots. The idea is to make it easy for most companies and organizations to create chatbots based on internal documents and data.

One could imagine game developers using chatbots to allow gamers to ask questions about everything from how to complete a game, to applying the best settings, to solving technical problems. Inevitably, however, there is a catch.

According to AI security specialist Zenity, Copilot Studio and the chatbots it creates are a security nightmare (via The Register) Zenity CTO Michael Bargury recently hosted a session at the Black Hat Security Conference where he delved into the horrors that can unfold if Copilot is allowed access to data to create chatbots.

Apparently, it all stems from an improper default security setting in Copilot Studio. Put another way, the danger is that the super-simple Copilot Studio tool, which creates a super-convenient tool for customers and employees to query using natural language, opens the door wide for abuse.

Burghley showed how a malicious actor could achieve malicious code injection by placing malicious code in a seemingly innocuous email and instructing the Copilot bot to "inspect" it.

In another example, Copilot sends a fake Microsoft login page to the user, where the victim's credentials are obtained.

Furthermore, Zenity claims that the average large U.S. company already has 3,000 such bots in operation. Frighteningly, 63% of them are discoverable online. If true, that means that the average Fortune 500 company has about 2,000 bots ready to spit out sensitive corporate information.

"We scanned the Internet and found tens of thousands of these bots," says Burghley. According to him, Copilot Studio's original default configuration automatically published bots to the Web without requiring authentication to access them; after Zenity reported the issue to Microsoft, the problem was fixed, but the bots created before the update bots, which is of no help to bots.

"There is a fundamental problem here," says Burghley. 'When you give an AI access to data, that data is open to prompt injection attacks.' In essence, Burghley is saying that publicly accessible chatbots are inherently insecure.

Broadly speaking, there are two problems here. On the one hand, for bots to be useful, they need a certain level of autonomy and flexibility. This is difficult to solve. The other problem seems to be a fairly obvious oversight by Microsoft.

The latter problem may not be surprising given the debacle surrounding Windows' Copilot Recall feature.

What Microsoft has to say on the matter is a bit of a salty response to the Register.

"We are grateful to Michael Burghley for identifying and responsibly reporting these techniques through coordinated disclosure. We are investigating these reports and are continually improving our systems to proactively identify and mitigate these types of threats and help protect our customers. [These methods, like other post-compromise methods, require pre-compromise or social engineering of the system. Microsoft Security offers a robust suite of protections that customers can use to address these risks.

Like many things related to AI, security seems to be another area that is a minefield of unintended consequences and collateral damage. It will be a long time before we have a safe and reliable AI that does only what we want it to do.

.

Categories