Well, that was short-lived.
Just hours after making headlines about its interesting take on the consensual selling of personal data obtained from recording phone calls, the previously praised application, Neon, has gone dark.
And, the irony isn’t lost on us.
The application caused a stir when it first launched by prompting people to think differently about data sharing – previously, the sharing of things like phone call recordings, transcripts and logs was seen as an uncontested invasion of privacy. But Neon highlighted a different way of viewing things – what if people were to hand over their own information to AI companies of their own free will? What's more, what if they were to do so both consensually and for profit?
Initially, this proposition was met with surprisingly little pushback – and why not? There's nothing inherently wrong with sharing data as long as it's done freely and with permission, so why shouldn't the process become an opportunity for profit?
Well, in theory, perhaps there really shouldn't be an issue. In fact, there was even talk of selling personal data to AI companies becoming the next big side hustle.
However, even when things were rosy and the lights were on, experts raised several concerns about the precedent Neon was setting with this business model. Would this be the start of a "data-for-cash" sector? How would companies keep the data private and secure once it had been shared with the app? How would the sector be regulated?
Unfortunately, these potential problems are no longer mere hypotheticals, because Neon has already been caught with its pants down and the lights on. So, to those who were skeptical about the company's ability to keep data safe: you were right.
The Security Breach At Neon
After a quick preview of the application yesterday afternoon, TechCrunch discovered quite quickly (shockingly quickly, in fact) that Neon was plagued by a serious flaw – it allowed users to view not only their own personal details (phone numbers, call transcripts, call recordings and more) but those of other users too. Worse still, this security flaw had seemingly slipped under the radar, with nobody else having discovered it up until that point – at least, nobody who had notified Neon.
After TechCrunch alerted Alex Kiam to the discovery, Neon's servers were promptly taken down, rendering the app unusable. Users were notified that it had been "paused", but they weren't given any further detail on why, or on what had happened.
Neon users later reported receiving the following notification: “Your data privacy is our number one priority, and we want to make sure it is fully secure even during this period of rapid growth. Because of this, we are temporarily taking the app down to add extra layers of security.”
A classic case, in my opinion, of, “I didn’t lie, I just didn’t tell you everything!” Kind of a big omission, though, wouldn’t you say?
The problem here is that data security is a big deal, and just because people give permission for their information to be used doesn't mean that due diligence and security measures are no longer necessary. All it took was a few quick checks and some basic network traffic analysis to pick up the issue – and just like that, who knows how many users' private information has been exposed?
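For the technically curious, the flaw TechCrunch describes fits a well-known class of bug: broken access control, sometimes called an insecure direct object reference (IDOR), where a backend hands back whatever record it's asked for without checking who is asking. Neon's actual code hasn't been published, so the snippet below is purely a hypothetical Python sketch of that class of flaw – the record names, IDs and functions are all invented for illustration.

```python
# Hypothetical sketch of an IDOR-style broken access control bug.
# This is NOT Neon's actual code; it only illustrates the class of flaw.

from dataclasses import dataclass


@dataclass
class CallRecord:
    record_id: int
    owner_user_id: int
    phone_number: str
    transcript: str


# Stand-in for a backend database of call recordings and transcripts.
FAKE_DB = {
    1: CallRecord(1, owner_user_id=101, phone_number="+1-555-0100", transcript="..."),
    2: CallRecord(2, owner_user_id=202, phone_number="+1-555-0200", transcript="..."),
}


def get_call_record_vulnerable(record_id: int) -> CallRecord:
    """Vulnerable pattern: returns whatever record is requested.
    Any logged-in user who guesses or enumerates IDs sees other users' data."""
    return FAKE_DB[record_id]


def get_call_record_secure(record_id: int, requesting_user_id: int) -> CallRecord:
    """Safer pattern: the server verifies ownership before returning the record."""
    record = FAKE_DB[record_id]
    if record.owner_user_id != requesting_user_id:
        raise PermissionError("This record does not belong to the requesting user.")
    return record


if __name__ == "__main__":
    # User 101 asks for record 2, which belongs to user 202.
    print(get_call_record_vulnerable(2).transcript)        # leaks another user's data
    try:
        get_call_record_secure(2, requesting_user_id=101)  # correctly refused
    except PermissionError as err:
        print("Blocked:", err)
```

In practice, this is exactly the kind of thing basic network traffic analysis turns up: watch the requests the app makes, swap in a different record ID, and see whether the server hands back someone else's data.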
The Skeptics Were Right
They may not be words anybody wants to hear (never mind say), but it seems as though the skeptical among us may have been right on this one. If the idea of selling your own personal data seemed suspiciously simple and too good to be true, it's starting to seem like that's because it is. The reality is that just because people consent to a company accessing their data and using it for AI training purposes doesn't mean that company has solved all the other issues with data privacy.
That is, are they even capable of holding this data securely? And, who’s checking?
According to Square Trade, there's a lot to lose. It's not just about call logs – data breaches put a plethora of other things at risk, including: banking apps and crypto wallets; personal data and stored passwords (now including voice data, as Neon shows); and potential identity theft through a compromised digital footprint.
So, while this model was initially seen by many as a new and exciting way to get around the rules and make passive income by treating personal data as an asset to be monetised, it isn't quite ready for public consumption.
But, that doesn’t mean that there isn’t potential. Rather, it may be more about the fact that Neon rushed the process, attempting to get a good idea onto the market before the competition managed to clock the opportunity. They simply weren’t ready, from a security perspective, to go to market. Unfortunately for Neon, security is arguably the most important part of the equation, so their rush to be the first may have sunk them before they even had the chance to swim.