Meta has announced the launch of its first standalone AI assistant app, Meta AI. The company already lets billions chat with its bot in WhatsApp, Facebook and Instagram. The new mobile tool brings that assistant into its own home, giving users a single space to ask questions, write messages or create images with help from the firm’s Llama 4 model. Meta says the app will roll out gradually on iOS and Android across the world over the coming weeks.
Mark Zuckerberg called the release “your personal AI” in a video on Instagram. Users can tap a button, speak naturally and hear replies that sound less scripted than text read aloud. Meta’s engineers trained the bot on conversational dialogue so that it can interrupt, laugh or change tone mid-sentence, much like a human. A demo version using new full-duplex speech technology gives a peek at what Meta thinks natural machine conversation should “feel” like.
From launch, the app also acts as the controller for Ray-Ban Meta glasses, letting a user start a chat while walking outside and pick it up later on the phone or the desktop website. Settings and media transfer automatically, so existing glasses owners do not have to pair the wearable again.
How Will The Assistant Personalise Answers?
For personalisation, Meta says the assistant notices topics that matter to its owner and can remember facts on request, such as a favourite football club or a partner’s birthday.
Those who link their Facebook and Instagram accounts through the Accounts Center give the bot more clues. In that case the model can draw upon past likes, saved posts and other activity to craft replies that feel personal rather than generic.
Meta AI also arrives with a Browse feed. This scrolling board shows the most creative prompts being shared around the world, and it lets users remix a prompt with one tap before posting their own version. Nothing reaches the feed unless a user chooses to share it.
Outside North America, personalisation stays limited for now. Meta says fully personalised replies launch first in the United States and Canada, while voice chat rolls out first in those two countries plus Australia and New Zealand.
Where Does This Leave ChatGPT And Other AI Platforms?
OpenAI’s ChatGPT already dominates the market, but Meta is positioning its service differently: as part of a social network rather than a separate productivity tool.
Developers gathered at Meta’s LlamaCon event heard chief product officer Chris Cox say that openness gives the system an advantage. Because Llama is open source, coders can mix different models and keep only the parts that suit their project.
Meta is also chasing hardware integration: the Ray-Ban glasses partnership could open a next phase in which the assistant lives in a wearable rather than in a phone in the pocket. That could genuinely differentiate Meta from OpenAI, which still reaches users mainly through a browser or smartphone app.
OpenAI keeps the inner workings of its models private, while Meta lets outsiders inspect and tweak the code. Zuckerberg told viewers this flexibility would let teams choose the strongest traits from each model and build exactly what they need.
Turning that promise into steady public use will depend on how well the assistant earns users’ trust. Early testers report lively chat but also the odd pause or misheard phrase. Meta has labelled the release an early stage and says updates will follow quickly as it gathers more feedback.