Google has brought together its DeepMind and Brain teams to test AI tools that could serve as personal life coaches.
These technologies could potentially guide individuals in different areas of life, from giving general advice to assisting with meal planning.
Uniting DeepMind and Brain
Google’s decision to merge London-based DeepMind with Silicon Valley’s Brain team grew out of a desire to strengthen its AI technologies.
This move positions Google to face competition from rivals such as Microsoft and OpenAI.
Generative AI’s Potential
Chatbots, like OpenAI’s ChatGPT and Google’s Bard, demonstrate the allure of generative AI. Google’s ambition extends beyond chat functions, hoping to offer users a range of support from life advice to tutoring.
Documents reviewed by The New York Times state that the tools under testing aim to “give users life advice, ideas, planning instructions, and tutoring tips.”
Google’s recent direction marks a shift from its earlier caution around generative AI. The company had worried that users might form strong emotional bonds with chatbots.
Google is now eager to show that it can stand alongside tech leaders in AI advancements.
Introduced to a few users in the United States and Britain, Bard can craft ideas, write blog posts, and answer questions.
Trialling AI’s Capabilities
Google’s efforts involve rigorous trials examining AI’s ability to help with personal dilemmas. One test scenario, for example, involved a user struggling to attend a dear friend’s destination wedding because of financial constraints.
There are concerns, however. Google’s AI safety team has warned that relying solely on AI for life advice could harm users’ well-being.
And while Bard can provide answers and opinions, it explicitly declines to offer medical, financial, or legal guidance. If users express mental distress, it points them to mental health resources.
Past Attempts Serve as a Warning
Past experiments show that AI in advisory roles isn’t without its pitfalls. Tessa, an AI chatbot from the National Eating Disorders Association, was discontinued after offering damaging advice.
The Center for Countering Digital Hate (CCDH), based in the UK, has pointed out AI’s potential to spread misinformation.
The Allen Institute for AI ran into difficulties with Delphi, an AI designed to offer moral guidance. Users quickly discovered they could prompt it into endorsing dangerous beliefs.
Recognising the risks of using AI for moral guidance is vital. A piece in AI and Ethics stresses the need to confront these challenges, and the benefits of deploying AI in such roles must be weighed carefully against the possible dangers.
What’s Next for Google?
Google’s testing phase is ongoing, with no definite plans for a full roll-out of these tools. A representative from Google DeepMind highlighted the regular evaluations they conduct across various projects.
Google isn’t focusing solely on life advice tools; other areas, from assisting journalists to identifying patterns in text, are also under exploration.