Microsoft’s AI assistant Copilot has a disclaimer buried in its terms of use that has been making the rounds on social media lately. The terms, which appear to have been last updated on October 24, 2025, describe Copilot as being “for entertainment purposes only,” warning users not to rely on it for important advice and to use it “at your own risk.”
While the language sounds surprising for a product Microsoft actively sells to corporate customers, it may reflect a calculated approach to legal self-protection.
Why Tech Companies Are Adding AI Disclaimers
Microsoft is not alone in using cautious language around its AI products. As Tom’s Hardware noted, OpenAI warns users not to treat its output as “a sole source of truth or factual information,” while xAI tells users not to rely on its output as “the truth.”
These disclaimers are becoming standard practice across the industry, and for good reason. With AI companies facing growing legal scrutiny, a clear disclaimer in the terms of use can serve as a shield in court, limiting liability if a user acts on incorrect or misleading AI-generated content.
What Microsoft Says About The ‘Legacy Language’
A Microsoft spokesperson told PCMag that the company plans to update what it described as “legacy language” in the terms. “As the product has evolved, that language is no longer reflective of how Copilot is used today and will be altered with our next update,” the spokesperson said.
The clarification suggests the “entertainment only” label was not a deliberate product positioning choice, but a leftover from an earlier version of the terms.
Still, whether intentional or not, having that language in place at a time when AI firms are navigating courtrooms and regulatory pressure puts Microsoft on safer ground than companies that make bolder claims about their AI’s reliability.