Meta is bringing more features to Instagram, WhatsApp and Facebook with its latest big thing: Meta AI. The company has announced that Meta AI is now available in 22 countries and supports several new languages. The tool was built to help users with questions, tips and assistance with tasks, powered by the latest Llama 405B model. Zuckerberg shared, “In addition to having significantly better cost/performance relative to closed models, the fact that the 405B model is open will make it the best choice for fine-tuning and distilling smaller models.”
Meta shared that the underlying model is open source, making it accessible to the public and letting developers modify the code as they see fit. The announcement of the new Llama 3.1 405B confirms that it can answer more complex coding and math questions in detail and offer debugging tips.
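To give a sense of what that looks like for developers, here is a minimal sketch of sending a coding question to a Llama 3.1 Instruct model with the Hugging Face transformers library. The smaller 8B checkpoint stands in for the 405B model (which needs far more hardware), and the exact repository name is an assumption based on Meta's published checkpoints, not something confirmed in the announcement.

```python
from transformers import pipeline

# Minimal sketch: ask a Llama 3.1 Instruct model a debugging question locally.
# Assumption: the 8B checkpoint stands in for the 405B model, and the
# repository id below may differ in your setup.
chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",
    device_map="auto",
)

messages = [
    {"role": "user",
     "content": "Why does `sorted(data, key=len)` raise a TypeError when "
                "`data` contains integers, and how do I fix it?"},
]

# The pipeline returns the conversation with the model's reply appended
# as the final message.
reply = chat(messages, max_new_tokens=256)
print(reply[0]["generated_text"][-1]["content"])
```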
Why Open Source?
This model is different from previous ones because of its accessibility. Earlier Llama models were largely developed for internal use and then released, with little focus on a wider ecosystem. Meta wants the Llama 3.1 model to inspire more developers to build on open source. Zuckerberg explains, “With past Llama models, Meta developed them for ourselves and then released them, but didn’t focus much on building a broader ecosystem.”
Using open source AI works for Meta’s plans because it allows the company to develop the best technology without being restricted by competitors’ closed ecosystems. That helps it build better services and maintain long-term access to the latest advancements. For developers especially, open source lets them train and modify models to fit their specific needs without sharing data with closed model providers, giving them a cost-effective and secure way to develop AI solutions.
Zuckerberg continued: “We’re taking a different approach with this release. We’re building teams internally to enable as many developers and partners as possible to use Llama, and we’re actively building partnerships so that more companies in the ecosystem can offer unique functionality to their customers as well.”
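To make the developer angle concrete, the sketch below attaches LoRA adapters to an open Llama 3.1 checkpoint with Hugging Face's peft library, so it can be fine-tuned on private data that never leaves the developer's own machines. The model id and adapter settings are illustrative assumptions, not a workflow prescribed by Meta.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative assumption: a smaller Llama 3.1 checkpoint standing in for 405B.
base_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# LoRA keeps the base weights frozen and trains small adapter matrices,
# so domain-specific data stays local and the compute cost stays modest.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train

# From here the adapted model can be trained with a standard Trainer loop on
# in-house data, then saved with model.save_pretrained("my-adapter").
```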
The ‘Imagine Me’ AI Generator
Meta’s latest generative AI feature lets users create images from text prompts and their own selfies. It works by letting users simply tell the AI what scenario or theme to place themselves in. “Simply type ‘Imagine me’ in your Meta AI chat to get started, and then you can add a prompt like ‘Imagine me as royalty’ or ‘Imagine me in a surrealist painting,'” Meta explained.
Creators can use this as a way to bring their imaginative ideas to life effortlessly. If you have ever wondered how you would look with tattoos but are indecisive, this feature would be a perfect way to trial the potential outcome before making the ink commitment.
Meta AI And Mixed Reality
Meta also announced that Meta AI will be available on Meta Quest from next month in the US and Canada. The update introduces Meta AI with Vision, which lets users interact with their physical surroundings through the headset.
Users can ask questions, get real-time information, and even receive fashion tips or restaurant recommendations. Meta AI will replace the current Voice Commands on Quest, making for a more interactive and useful experience.