OpenAI Reveals How People Are Using ChatGPT – And The Results May Surprise You


This week, a report by OpenAI revealed how people use ChatGPT, and the results are quite surprising.

The report, co-created by OpenAI and the National Bureau of Economic Research (NBER), analysed a large sample of usage data to find out how people are using the popular tool.

One of the biggest takeaways? Around 10% of the world’s population is now thought to use ChatGPT: that’s 814,200,000 people!
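For readers who want to sanity-check the headline figure: the 814,200,000 number is simply 10% of a world-population base of roughly 8.142 billion. That base is an assumption implied by the article’s own figure; the report itself is the source for the percentage.

```python
# Sanity check on the headline figure: 10% of the world's population.
# 8_142_000_000 is the assumed world-population base implied by the
# article's 814,200,000 figure.
world_population = 8_142_000_000
chatgpt_users = world_population // 10  # 10%, using integer division
print(f"{chatgpt_users:,} people")  # 814,200,000 people
```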


Most ChatGPT Usage Is Personal


Whilst you might think that most people use ChatGPT for work, the results say something different. According to the report, more than 70% of all usage was actually non-work related.

For the roughly 30% of usage that is work-related, the most common tasks are practical guidance, seeking information and writing.


Which Countries Use AI The Most?


According to research by GPTZero, China has the highest percentage of people using AI (83%), compared to 73% in India, 49% in Australia, 45% in the US and 29% in the UK.

Unsurprisingly, AI usage is highest among younger age groups (18-29).

In fact, Millennials and Gen Z make up 65% of all AI “super users”.


Which Sectors Use AI The Most?


According to research published last September by Hays, the top five professions where workers are using AI the most are:


Marketing

The research showed that over half (54%) of marketers use AI in their role.

Marketers use it to analyse data, write content and even brainstorm ideas.


Technology

After marketers, 39% of technology professionals have already used an AI tool at work. In this sector, AI is mainly used to identify cyber threats, although it’s likely that coders are also relying on AI to help them write and develop code.


Education

The research showed that 20% (2 in 10) of educators had used AI at work. Primarily, AI has helped educators access content, create resources and plan interesting lessons.


Accountancy and finance

16% of professionals working in accounting and finance admitted to using AI in their jobs. Whilst this is lower than in other industries, it does show openness to using these tools.


Engineering

Similar to accountants, 16% of engineering professionals admitted to using AI tools to help with their roles. AI can be particularly useful in the design process, helping engineers create models and test them quickly.


But How Do Bosses Feel About ChatGPT Usage At Work?


According to ONS data reported in Forbes, around 1 in 6 UK organisations (over 430,000) use at least one AI technology.

When it came to the split, 68% of large companies, 33% of medium-sized companies and 15% of small companies had adopted at least one AI tool into their tech suite.

But how do bosses feel about ChatGPT being used at work? To find out, we asked them directly.

Here is what they had to say…


Michael Baron, Managing Director at BWS


“Often companies don’t need policies for or against the use of Chat GPT and other AI software per se, but what they need is training and guidance on when it can be beneficial, and on the pitfalls of using it to perform tasks, analyse data, and generate information.

“Company policies against Chat GPT are often necessary, especially when handling sensitive information. However, policies against AI usage – when we live in such a rapidly-evolving digital era – can be counterintuitive. AI needs human oversight, and training employees on how to use AI ethically and use it to their advantage, can free up time for employees to perform their high-skilled tasks that require expertise and human experience.

“It’s important that employees are made aware of any GDPR or data compromises when entering information into any third party platform, but ultimately, it’s less about the adoption of AI in businesses, and more about the way AI is used. Ensuring employees are trained on when it’s appropriate and beneficial to use AI, such as categorising anonymous data, and when it’s not ethical to use – such as handling sensitive data, or emotive, human-based matters. The adoption of AI in the workplace requires a nuanced approach, where we can reap the benefits, while still being conscious and ethical.”

Zoe Cunningham, Director at Softwire


“At Softwire, we don’t take a blanket yes-or-no approach to tools like ChatGPT and generative AI. As a technology consultancy, we encourage our people to experiment where it adds value – whether that’s idea generation, refining written content, or writing code. At the same time, we apply strict limits: sensitive client or company data must never be entered into these tools, and any use for client projects must be specifically approved.

Our philosophy is that generative AI is another tool in the developer’s toolkit. The reason for this approach is that, as a software consultancy, we need to trial and adopt these technologies ourselves in order to advise clients effectively – but always with clear guardrails in place and always within a human-led process. This balance allows us to explore how AI can speed up routine tasks, unlock new ways of working, and support creativity, while ensuring that we prioritise security, trust, and quality.

We see AI as part of an ongoing evolution in technology, much like the shift to Agile software development two decades ago. By experimenting responsibly and sharing what we learn, we can help our clients harness these tools with confidence.”


Kerry Parkin, Founder at The Remarkables


“Most of our clients are engaging with AI in some form, but the picture is mixed when it comes to tools like ChatGPT. Many organisations are investing heavily in their own in-house or bespoke AI platforms, often built to integrate seamlessly with their existing data and security environments. Others are embracing AI solutions already embedded in widely adopted enterprise software, for example promoting Microsoft Copilot as part of the Office suite.

“At the other end of the spectrum we see clients actively blocking access to ChatGPT and similar tools through corporate firewalls. The rationale is simple: if leadership does not fully understand the risks and benefits, the instinct is to restrict. This is not a new phenomenon. When platforms like Facebook or online news portals first emerged in the workplace, the immediate response was often to remove or restrict access until the value became undeniable.

“We anticipate a similar trajectory here. Over time two realities will shift the landscape: first, leaders will recognise that conversational AI is vital for efficiency, creativity and competitiveness; and second, the genie is already out of the bottle. Employees and customers alike are using AI, whether sanctioned or not, and businesses will have to adapt accordingly.”


Amanda Spicer, Co-Founder at Your Eco


“Our company policy doesn’t prohibit AI tools like ChatGPT, but we’ve established clear guidelines that prioritise our strategically implemented Sintra platform for all business-critical operations.

“We’ve developed comprehensive AI usage policies that require staff to use our trained LLM brain for tendering processes and technical documentation, ensuring consistency with our ISO certifications and B Corp standards.

“Our policy framework follows Kaizen principles, providing structured AI seats to team members with defined protocols for engagement, which drives cohesive outcomes whilst maintaining the technical accuracy essential for our solar energy installations. This policy-driven approach allows us to harness AI’s benefits responsibly whilst protecting client confidentiality and upholding the professional standards expected in the renewable energy sector.”


Charlotte Stoel, Managing Director at Firefly Communications


“At Firefly Communications, we do have a policy regarding the use of AI tools like ChatGPT. But it isn’t to restrict people, rather it gives guidance to be ethical and responsible. There are occasions where it is not at all appropriate to use AI and employees must be very clear on this.

“Having a policy is fine and good to have but the real value comes from training, human insight, and sharing “moments” we encounter in our everyday work – did we use a tool and it was great, or maybe it was bad, or maybe it just morphs our initial work beyond recognition. Talking, sharing, refining means we use AI in a more powerful way.

“It’s a human impulse to look for a shortcut or an ‘easy way’ and I urge my team, and all professionals, not to fall into that trap and make sure we don’t overlook the value we bring. Critical thinking – one of the most vital skills in our industry, every industry even – can’t be skipped. If we move too fast, we risk producing work that feels shallow, or lazy.

“AI should raise the bar for the quality of our work, not lower it. And that only happens when we bring our lived expertise and judgement.”