Microsoft’s Copilot has been sold as a workplace assistant woven into Windows and Microsoft 365, but now a line in its terms of use is stirring conversation online.
In the version last updated in October 2025, the document says:
“Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.” The wording resurfaced on social media and quickly drew criticism.
According to Impact Newswire, the disclaimer appears in Microsoft’s Copilot terms of use, last updated in October 2025, which explicitly tell users not to depend on the AI assistant for important decisions and to use it at their own risk.
The language struck many as confusing, particularly given how Microsoft has presented Copilot in public. During the company’s January earnings call, chief executive Satya Nadella praised the accuracy of Microsoft 365 Copilot, citing its latency powered by Work IQ and describing it as an intelligence tool inside the AI agent.
A Microsoft spokesperson addressed the controversy in a statement first published by PCMag and reported by Business Insider. “The ‘entertainment purposes’ phrasing is legacy language from when Copilot originally launched as a search companion service in Bing,” the spokesperson said. “As the product has evolved, that language is no longer reflective of how Copilot is used today and will be altered with our next update.”
How Do Copilot’s Terms Compare With Rivals?
Microsoft’s wording stands out when set beside its competitors’, even though other AI providers also limit their liability.
Among those rivals, OpenAI’s terms say: “You accept and agree that any use of outputs from our service is at your sole risk, and you will not rely on output as a sole source of truth or factual information, or as a substitute for professional advice.” The emphasis is on user responsibility, but the product is never described as being for entertainment.
Elon Musk’s xAI, acquired by SpaceX in February, goes further on legal protection, requiring users to indemnify the company. Its terms read: “To the fullest extent permitted by law, you will defend, indemnify, and hold xAI and our parents, subsidiaries and affiliates, and our and their respective agents, suppliers, licensors, employees, contractors, officers, and directors (collectively the ‘xAI Indemnitees’) harmless from and against any and all claims, damages.”
Meta’s AI terms for most users also restrict professional use. The company instructs users to “Not rely upon outputs for any purpose or use outputs to inform professional advice or decisions related to medicine, finance, law, or pharmaceuticals.” It also lists unacceptable uses, including: “Solicit professional advice (including but not limited to medical, psychological, financial, or legal advice) or content to be used for the purpose of engaging in other regulated activities (including but not limited to political campaigning or lobbying).”
Lawsuits are already stacking up across the sector. As Business Insider reports, OpenAI faces a dozen lawsuits in California state court over GPT-4o, a retired model known for sycophantic responses. Just last month, Nippon Insurance Company sued OpenAI in federal court in Illinois, alleging harm after ChatGPT told a customer that her lawyer was gaslighting her about a settlement.
Is Copilot Entering A New Venture?
Microsoft’s exact words in its “Summary of Changes” are that it had “clarified when these Terms apply to certain Copilot services and experiences,” “added terms for Copilot Actions, Copilot Labs, and Shopping experiences,” and “rewritten and reorganised our Terms to be clearer and simpler.”
The document defines “Actions” as “the automated set of tasks that Copilot takes on your behalf at your request.” It also sets out shopping features, explaining that products bought through Copilot are sold and shipped by a third party merchant, not Microsoft, and that “We don’t process payments for your purchases through Copilot.”
Copilot Labs is described as offering features that are “highly experimental and may not always work as intended,” and Microsoft writes that it may “add, modify, or remove features or services from Copilot Labs at any time for any reason.”
The October update reads like preparation for a larger commercial offering. Shopping, automated tasks, experimental features and tighter conduct rules all suggest that Copilot is being positioned as a product that handles more of what users do online.
Microsoft says the “entertainment purposes” phrase will change, and the next update may show how far Copilot has travelled from its Bing chatbot roots, and what kind of business Microsoft wants it to become.