MIT researchers and the MIT-IBM Watson AI Lab are finding ways to test how trustworthy an AI model's predictions are before the model is put to practical use. The work is highly relevant because AI is being used in industries where accuracy is of the utmost importance.
Critical fields such as medicine, law and engineering are using AI for analysis, diagnoses and other tasks.
Even though AI cannot and does not replace these critical roles, it does serve as a useful assistant in these industries. The only way this can happen successfully, though, is if it is used responsibly.
How Does It Work?
The researchers explain the process in a paper. To compare models, the MIT team introduced the idea of neighbourhood consistency.
This method involves setting up reliable reference points and checking how closely different models agree on these points when looking at a test data point.
The MIT news page reports, “They do this by training a set of foundation models that are slightly different from one another.
“Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable.”
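To make the idea concrete, here is a minimal sketch of what a neighbourhood-consistency check could look like in Python. This is not the authors' implementation: the `encode` function, the cosine-similarity profiles and the correlation-based score are all illustrative assumptions about how "agreement on reference points" might be measured.

```python
import numpy as np

def neighbourhood_consistency(models, anchors, x, encode):
    """Score how consistently an ensemble of models represents a test point x.

    models  -- list of trained models, assumed to differ slightly (e.g. random seeds)
    anchors -- array of reference inputs serving as shared anchor points
    x       -- a single test input (1-D array)
    encode  -- assumed API: function (model, inputs) -> embedding matrix
    """
    profiles = []
    for m in models:
        z_x = encode(m, x[None, :])[0]   # embedding of the test point
        z_a = encode(m, anchors)         # embeddings of the anchor points
        # Describe x by its cosine similarity to every anchor, giving a
        # "neighbourhood profile" that can be compared across models.
        sims = z_a @ z_x / (np.linalg.norm(z_a, axis=1) * np.linalg.norm(z_x) + 1e-12)
        profiles.append(sims)
    profiles = np.stack(profiles)
    # Consistency score: mean pairwise correlation of the profiles.
    # High agreement across models suggests the representation is reliable.
    corr = np.corrcoef(profiles)
    k = len(models)
    return (corr.sum() - k) / (k * (k - 1))

# Toy usage: random linear projections stand in for trained encoders.
rng = np.random.default_rng(0)
models = [rng.normal(size=(16, 8)) for _ in range(5)]
encode = lambda W, X: X @ W
anchors = rng.normal(size=(20, 16))
x = rng.normal(size=16)
print(neighbourhood_consistency(models, anchors, x, encode))
```

The design choice here is that each model's view of the test point is summarised relative to the same shared anchors, so embeddings from different models can be compared even though their coordinate systems differ.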
AI is known for hallucinations and inaccuracies. Different organisations are working to find solutions. Not too long ago, Oxford’s Computer Science researchers revealed an algorithm that also uses testing to determine whether AI is hallucinating.
These universities are making contributions that could help millions of people across industries.
“All models can be wrong, but models that know when they are wrong are more useful… Our method allows you to quantify how reliable a representation model is for any given input data,” says senior author and research lead, Navid Azizan.
How AI Inaccuracies Impact Us
In finance, recognising fraud is vital. Institutions need to make sure sensitive data is well handled, and any AI they use must be deployed in a way that catches errors. MIT's development helps flag unreliable outputs before harm can occur.
Health care is no different. Beyond handling sensitive data, AI is being used to examine organs such as the brain, where inaccuracies cannot be afforded.
For startups, AI may be used for predictions, data analysis and, depending on the industry, handling sensitive information as well. Ecommerce startups that sell products online also need to make sure their automation systems are producing the right results.
AI and cybersecurity are closely linked, and experts in both fields are working together to make sure that AI's development remains in the best interests of the public, over and above being accurate.
This method helps keep AI systems ethical by providing a way to assess the likelihood of a model producing false or misleading outputs.
For now, the one improvement the researchers say needs further exploration is finding a way to carry out the process with fewer models, which would cut down the computation and cost involved.