
A Chat with Andrey Suzdaltsev, CEO & Co-Founder at Brightside AI


Tell us about Brightside AI


I would say that we are a company that allows businesses to assess and manage human risk in cyberattacks.

Brightside AI develops SaaS to help teams combat social engineering cyberattacks enabled by the mass adoption of genAI. Unlike other course-based products, Brightside takes a data-centric approach, showing precisely which data can be used to personalise an attack and providing step-by-step instructions on how to mitigate the risk (e.g., deep awareness, GDPR data reclaims, attack decomposition). Brightside tailors its approach to each employee based on their digital footprint. It then gives the organisation a realistic risk score for the team by using custom genAI to conduct ultra-personalised phishing drills.

Brightside AI’s mission is to develop the most effective cybersecurity solution to protect SMEs from genAI threats.

Our solution finally allows you to look beyond the company perimeter and assess risks that originate outside the organisation, risks the company would otherwise never be made aware of. It doesn’t just surface that risk; it lets you control it. Companies have always been blind to external factors, such as the data employees share in the public domain or their personal data leaks. And since phishing is essentially bad marketing (hackers use data to target and convert people with a convincing fraudulent message), this data really matters.


What makes Brightside AI stand out from its competitors?


Right now, the solution companies have is to tell people, “Don’t click on buttons, don’t click on links,” which is basically just educational content for employees in the area of cybersecurity. Our solution can both create simulations based on real, personalised data and give employees the ability to delete that data. So we are the only ones who can realistically measure this risk, clearly verify it, and mitigate it.

Before us, there were no cost-effective tools allowing this to be done consistently while benefiting both the company and its employees.

The human factor always poses a risk for companies, even if they offer training and require employees to take courses. Testing people is necessary, but the tests currently available to businesses aren’t very effective: they tend to be trivial, or the kind of thing employees get used to quite quickly.

We take a personalized approach to every employee and, thanks to our AI technology, we can scale that personalized approach to thousands of people.

Another thing that makes us different is that we don’t just hand companies a list of things to fix; we take a more holistic, problem-solving approach. First, where a risk is unavoidable, we explain to the person that it can happen and test them against it. Second, we can delete the data that could be used to personalise phishing through GDPR reclamation, specifically mitigating the risk. We don’t say, “Watch out next time;” we show exactly where the attack could come from and how it could be carried out.


How has the company evolved over the last year?


Over the last year, we developed our platform and carried out pilots with three European SMBs, revealing that 40% of their C-level executives were vulnerable to cyberattacks and helping these brands combat future phishing attempts.

We also secured a $1 million deal with Social Links, a technology investor in OSINT infrastructure.

As part of this collaboration, Brightside will have access to the Social Links platform over the next five years to train LLMs for our AI-based human data search engine, making the models more effective through the use of real-world examples.


What can we hope to see from Brightside AI in the future?


We want to become the first “psychological antivirus” in the age of deepfakes and genAI attacks – an indispensable tool that companies and employees use to mitigate the risks they face from exposing their data in public sources or falling victim to data leaks. Social engineering is a major point of vulnerability for companies of any size, and with the development of generative AI, such attacks will become virtually undetectable.

We aim to develop a full-fledged data protection and anti-phishing platform, integrated into email and messaging clients, that automatically detects personalised phishing threats and helps combat phishing and social engineering.
