Research plays an important role in the technology business. However, turning scientific work into a profitable venture requires more than focusing on a popular applied problem. Andrey Bolshakov, an expert in industrial computer vision, robotics, and edge computing, co-founder of the digital supervising company NVI Solutions and of the autonomous cargo platform companies Megawatt and Evocargo, shares his journey from science to business and how he turned his research into successful ventures.
Andrey, tell us about your transition from science to entrepreneurship—how did you come to this decision, and how did you choose your field of work?
Since childhood, I always wanted to do something applied—something that would bring visible and tangible benefits to the wider society. That’s why I became interested in technology, particularly robotics. This interest grew, and after school, I enrolled in a specialised university.
During my senior years, I began looking for a direction for my thesis and decided to pursue applied work. This led me to an internship at a research institute focused on information transmission problems, where I developed software for treating strabismus in children. I found this immensely rewarding because I could directly contribute to solving real-world human problems.
However, I faced challenges afterward—the program could have benefited millions of children around the world, but no one at the institute knew how to develop and expand the project from a business perspective. I started studying this issue myself and found an accelerator program for medical projects. I completed it, and then, together with the company’s owners, we launched a startup where I served as the CEO. This gave me valuable experience in sales, marketing, and economics. It was my first significant venture investment experience, which took my career to a new level. I clearly understood what to offer the market, how to do it, and what not to even attempt.
Armed with my newfound knowledge in marketing, project management, and economics, I returned to the institute and began focusing on technological entrepreneurship. I successfully balanced my scientific and business roles. As the number of projects grew, I started attracting specialists and shifted my focus toward management. Eventually, I began selling scientific consulting—my technologies related to image processing and perception were purchased by major American and Asian consumer electronics companies.
But eventually, I hit a ceiling: consulting is difficult to scale, because margins are largely fixed and you don't have an unlimited number of specialists at your disposal. So I started looking for a product niche with significant potential. This led to the creation of my projects: the AI-based digital supervising company NVI Solutions and the autonomous cargo platform company Evocargo. That ten-year journey also took me to the position of Deputy Director of the institute, one of the youngest people nationwide to hold such a role.
In these projects, technology remains at the core, but instead of selling consulting services, we provide solutions based on specific software, making the business much easier to scale.
As for the direction, we focus on autonomous cameras, continuing the path I started back in university. Back then, I worked with the human visual system; now, under my leadership, we have developed a comparable perception system built on cameras and AI. I am grateful for my scientific background: it allowed me to interact with scientists from various fields and helped me recognise and use connections between seemingly unrelated disciplines, such as human physiology and emerging technologies.
You have a number of scientific papers and publications. Which ones do you consider key, and have any of their ideas been implemented in your business?
Yes, several publications have contributed directly to my business. For example, I participated in projects on image compression for data transmission optimised for human perception. The idea is that in any video, certain areas are more likely to attract human attention—these are the “areas of interest.” In a movie, for example, this could be the faces of people engaged in conversation. We can ensure that the area of interest is in maximum detail while reducing the detail in the rest of the image. This significantly simplifies data storage and transmission.
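To make the "area of interest" idea concrete, here is a minimal sketch, not the actual codec behind these projects: it keeps a hand-picked region at full resolution while downsampling everything else before standard JPEG encoding. OpenCV and the fixed ROI rectangle are assumptions for illustration only.

```python
# Minimal sketch of region-of-interest compression: keep the ROI sharp
# and aggressively downsample everything else before encoding.
# The ROI rectangle is hard-coded here; a real system would detect it.
import cv2
import numpy as np

def compress_with_roi(frame: np.ndarray, roi: tuple) -> bytes:
    x, y, w, h = roi
    # Strip high-frequency detail from the whole frame...
    coarse = cv2.resize(frame, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_AREA)
    coarse = cv2.resize(coarse, (frame.shape[1], frame.shape[0]), interpolation=cv2.INTER_LINEAR)
    # ...then paste the full-resolution area of interest back on top.
    coarse[y:y + h, x:x + w] = frame[y:y + h, x:x + w]
    # A standard JPEG encoder now spends far fewer bits on the flat background.
    ok, buf = cv2.imencode(".jpg", coarse, [cv2.IMWRITE_JPEG_QUALITY, 80])
    return buf.tobytes() if ok else b""
```

The payoff is that the encoder's bit budget goes almost entirely to the region a viewer (or a downstream model) actually cares about.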
This principle is similar to how our brain works. For instance, if you look at your finger, you will see it clearly, while the surrounding space remains blurry. However, your brain doesn’t need a fully detailed picture to determine the distance of your finger or identify nearby objects. Similarly, a computer focuses only on key information while performing tasks efficiently.
When I started working on business projects related to autonomous video surveillance, I applied this principle. To reduce hardware requirements for our cameras, I trained them to recognise only the “areas of interest” instead of analysing the entire image. For example, for autonomous trucks, this could be another vehicle or a building they need to avoid. As a result, the equipment we use for our services remains compact and resource-efficient.
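A rough illustration of that approach is below; the region detector and the classifier are hypothetical placeholders rather than the production models, but the structure, cheap first pass followed by an expensive model applied only to small crops, is the point.

```python
# Minimal sketch: run a cheap ROI detector first, then apply the heavier model
# only to the cropped regions, so a compact edge device can keep up.
# `find_regions_of_interest` and `classify_crop` are hypothetical placeholders.
def process_frame(frame, find_regions_of_interest, classify_crop):
    events = []
    for (x, y, w, h) in find_regions_of_interest(frame):    # cheap, low-res pass
        crop = frame[y:y + h, x:x + w]                       # small patch only
        label, score = classify_crop(crop)                   # heavy model, tiny input
        if score > 0.8:
            events.append({"label": label, "bbox": (x, y, w, h), "score": score})
    return events    # compact metadata instead of full-frame analysis
```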
Your key projects—NVI Solutions, Evocargo, and Megawatt—are based on edge computing and are highly technological. Could you explain this architecture and how you started working with it?
Edge computing is an approach where a significant portion of computations occurs as close as possible to the data source. For example, instead of processing data in a remote cloud data center, it happens directly within the camera. This is a fundamental concept that temporarily lost ground with the rise of connectivity and cloud computing. However, we now see that the sheer volume of data generated on site (in my experience, hundreds of gigabytes per minute per site) makes cloud processing both unpredictable and inefficient, especially for real-time systems like drones or industrial automation.
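A minimal sketch of that division of labour follows; the capture function, the local detector, and the endpoint URL are all hypothetical stand-ins, but it shows the core idea: inference runs on the device, and only small, structured events ever leave the site.

```python
# Minimal sketch of the edge-computing idea: analyse each frame on the device
# and transmit only a small JSON event, never the raw video stream.
# `capture_frame`, `run_local_detector`, and the endpoint URL are hypothetical.
import json
import urllib.request

def edge_loop(capture_frame, run_local_detector, endpoint="http://example.local/events"):
    while True:
        frame = capture_frame()                  # raw pixels never leave the device
        detections = run_local_detector(frame)   # inference happens at the edge
        if detections:                           # only rare, compact events go upstream
            payload = json.dumps({"detections": detections}).encode("utf-8")
            req = urllib.request.Request(endpoint, data=payload,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=5)
```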
This principle has helped us radically optimise camera operations. Our cargo platforms operate in vast areas where network connectivity isn’t always reliable. To ensure smooth data transmission to the server, we use edge computing technology.
Edge computing also has applications in other fields—such as remote polar stations where connectivity is almost nonexistent. Many of our developments originated from my earlier research on image compression.
Are there any technologies or markets you haven’t worked with yet but see potential in?
I see rapid growth in wearable camera technologies. Today, everyone has a wearable camera capable of multiple functions—a smartphone. But other types of wearable devices are emerging. Security personnel worldwide use body cameras to record incidents and determine whether their actions were justified.
The AI market is now looking for ways to deploy its models on local devices, driving demand for so-called Local or Embedded AI. This is essentially an edge computing approach applied to AI agents. At the same time, hardware manufacturers are shifting from producing universal chips to designing chips tailored to specific neural network architectures, further reducing resource consumption and power costs.
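As a rough illustration of the kind of shrinking that local deployment relies on, here is a sketch using post-training dynamic quantization in PyTorch; the toy model is a stand-in, not any particular product's network.

```python
# Minimal sketch of shrinking a model for local / embedded deployment
# with post-training dynamic quantization; the model here is a stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

# Weights are stored as int8, cutting memory use and often CPU latency,
# which is the kind of saving embedded AI deployments depend on.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    out = quantized(torch.randn(1, 512))   # same interface, smaller footprint
```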
Many of these wearable and local-AI projects are not yet successful. In my view, for them to find customers, they must solve a specific, repetitive set of tasks efficiently. Ideally, they should automate functions such as agricultural equipment inspections, where workers could use wearable AI-enabled devices to quickly identify and document issues.
This leads to another adjacent area—RPA (Robotic Process Automation). RPA links different IT systems together. The challenge is that programming RPA has traditionally required separate development for each system version it integrates with. Now, AI agents are emerging that can identify and connect various interfaces automatically. As a result, system integrators must decide how to evolve to stay relevant.
Overall, AI agents open vast opportunities for working with unstructured data streams—videos, images, texts, and code. This significantly simplifies business processes and accelerates operations for any enterprise. I am also currently focused on automation technologies and actively developing them.
As far as I know, NVI Solutions is actively developing sensors and other systems to enhance industrial safety. Which of your solutions do you consider the most advanced, and why?
Our key innovation lies in responding to complex situations in real time. The core development is a system of smart cameras that can notify workers about potential incidents as they unfold, even under limited bandwidth.
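The general pattern, sketched very roughly below with a made-up receiver address and alert schema, is to push a few hundred bytes of structured alert rather than a video stream, which survives patchy, low-bandwidth links far better.

```python
# Minimal sketch: send a compact incident alert as a single small UDP datagram
# instead of streaming video over a constrained link.
# The receiver address and the alert schema are assumptions for illustration.
import json
import socket

def send_alert(receiver=("10.0.0.1", 9999), zone="loading-dock", incident="no_helmet"):
    payload = json.dumps({"zone": zone, "incident": incident}).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, receiver)   # a few hundred bytes instead of a video stream
    sock.close()
```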
My team had to conduct thorough engineering work to design an effective and weather-resistant system, as our cameras operate outdoors and in all manner of conditions. In some cases, we had to implement protection against vibrations, sparks, and even explosions—each of these factors required additional technical developments. Every product we create undergoes a long journey, from a prototype on the lab table to a high-tech device operating in hazardous environments.
In your opinion, how will artificial intelligence transform key industries such as logistics, transportation, or agribusiness over the next 20 years?
The scale of change we are about to witness is truly revolutionary. Millions of traditionally manual operations can be handed over to AI—not just for simple automation and replication but also for creating entirely new approaches to problem-solving.
We are standing on the threshold of a major transformation in the workforce, where a significant portion of jobs could be dramatically altered. Hiring an AI-powered android could become cheaper than training a human worker. However, this raises an important question—what will happen to employees whose jobs are replaced by robots? Right now, there is no clear understanding of how these processes should be ethically managed and structured.
What do you consider the most important lesson from your experience in this field? What advice would you give to entrepreneurs starting their journey in high-tech business?
The key lesson is to develop what the market actually needs, not just what you personally enjoy doing. This is a common mistake among startup founders—they work on technologies they are passionate about but fail to validate their market demand and scalability.
I’m often asked what books I recommend for aspiring entrepreneurs. I always suggest The Goal: A Process of Ongoing Improvement by Eliyahu Goldratt and Jeff Cox. This business book explains how to design value creation processes. The book’s core idea is that a system’s efficiency is determined not by its most efficient element but by its weakest link.
The key is to identify the “bottleneck” in your process and focus on improving it. If you develop technology without considering how it will be integrated and what business processes will surround it, it will be functionally useless.
And don’t fear mistakes. Every major company has gone through countless pivots. Entrepreneurship is all about learning from failures and adapting. Stay motivated and keep trying.