Product Engineering Approaches For Building UX in Generative AI Tools

AI-powered tools are rapidly entering business workflows, but many are met with growing user frustration. The problem isn’t the models themselves; it’s how they’re integrated into the product experience.

In AI products, UX is shaped not just by interface design but by engineering decisions: how context is collected, how uncertainty is handled, how latency is managed. These product engineering practices define the very interface between humans and AI: not just how the systems work, but how people experience them.

As the field rapidly evolves, engineers are no longer just builders; they are becoming architects of interaction, defining how users engage with intelligence itself. Ivan Liagushkin is a software engineer with over 10 years of experience and deep expertise in AI-powered product architecture. He currently leads engineering at Twain, an AI copywriting platform backed by Sequoia Capital. In this interview, he explains why user experience in AI products is broken, and what engineers are doing right now to fix it.

What Do Users Expect From AI Products Today? And How Realistic Are These Expectations Given The Current State Of The Technology?

Numerous studies confirm that users increasingly expect AI products to demonstrate human-like communication skills, deliver precise and concise outputs, operate without complex prompts or manual tuning, and show a strong understanding of contextual information, all while maintaining a high degree of creativity.

In the course of regular user interviews, I’ve observed a clear shift: this level of functionality is no longer seen as aspirational; it’s simply expected now.

Recent shifts in user demand also show a growing expectation of autonomy and proactivity. According to a Microsoft study, 70% of users want AI not just to assist but to take action: for example, by planning tasks, writing emails, and preparing reports independently. There’s a growing expectation that AI systems should not only understand highly specific problems but also come up with the right solutions and carry them out on their own.

In reality, we are not quite there yet. Despite rapid progress, today’s AI still stumbles over ambiguity, incomplete data, and the messy edge cases of real-world use, falling short of the smooth, assistant-like experience users have come to imagine.

Despite Their Popularity, Why Do Chat Interfaces Struggle With Real-World Tasks?

Text chat may feel like a natural fit for LLMs, but it breaks down fast in real-world use: it’s structurally inefficient for most users. There is no clear hierarchy and no easy way to reuse or revisit what was said. Try sharing a long thread with a teammate or finding a key point from three days ago. In collaborative or task-heavy environments, chat becomes more of a bottleneck than a bridge.

A deeper issue lies in the lack of transparency within these systems.

Users are often left in the dark: they do not know what information the model has seen, why it made a particular suggestion, or how confident it is in the result. This opacity can erode trust, especially when users cannot ascertain the reliability of the information presented. Studies have shown that users frequently experience dissatisfaction when AI systems fail to grasp their intentions, emphasising the need for clearer communication and understanding between users and AI.

UX should help bridge this gap by exposing which context was used, which factors influenced the output, and where the model might be uncertain. That’s a core requirement, especially in professional use cases. As highlighted in Frontiers in Human Dynamics, transparency and interpretability are essential foundations for trust in AI, especially in business and public sector applications.
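
As a rough illustration of what that exposure could look like in practice (the structure and field names below are hypothetical, not taken from any particular product), a generation result can carry provenance and confidence metadata alongside the output:

```python
from dataclasses import dataclass, field

@dataclass
class ContextItem:
    source: str       # e.g. "crm_record" or "email_thread"
    snippet: str      # the exact text the model saw
    relevance: float  # retrieval score, surfaced to the user

@dataclass
class GenerationResult:
    output: str
    context_used: list[ContextItem] = field(default_factory=list)
    confidence: float = 0.0                           # model- or heuristic-derived estimate
    caveats: list[str] = field(default_factory=list)  # e.g. "no pricing data found"

def render_with_provenance(result: GenerationResult) -> str:
    """Show the user what informed the answer, not just the answer itself."""
    lines = [result.output, "", "Based on:"]
    lines += [f"- {item.source}: {item.snippet[:80]}" for item in result.context_used]
    if result.caveats:
        lines += ["", "Caveats: " + "; ".join(result.caveats)]
    return "\n".join(lines)
```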

What Needs To Be In Place For Both The User And The AI System To Succeed In A Generative Workflow?

Every interaction with AI takes two inputs: the instruction and the context. Most users can articulate what they want done; the real challenge is supplying the right context.

In practice, this context is fragmented across internal docs, spreadsheets, CRM records, support tickets, and email threads. If a product can’t access or structure that information, the model either returns general answers or starts hallucinating.
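
A minimal sketch of that assembly step, with hypothetical stub connectors standing in for real integrations, might look like this:

```python
# Hypothetical connectors; real ones would call a CRM API, an inbox, etc.
def load_crm_record(prospect: str) -> str:
    return f"CRM notes for {prospect}: ..."       # stub for illustration

def load_email_thread(prospect: str) -> str:
    return f"Recent thread with {prospect}: ..."  # stub for illustration

def build_prompt(instruction: str, prospect: str) -> str:
    """Merge the user's instruction with structured context from several sources."""
    context_blocks = {
        "CRM record": load_crm_record(prospect),
        "Email history": load_email_thread(prospect),
    }
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in context_blocks.items())
    return f"Instruction: {instruction}\n\nContext:\n{context}"

print(build_prompt("Write a short follow-up email", "Jane Doe"))
```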

Collecting relevant context is a core responsibility of the product. That means closely analysing user behaviour and workflows so the product integrates seamlessly into daily routines.

For example, since we focus on generating emails and LinkedIn messages, embedding Twain’s functionality into a browser extension was a clear and strategic decision. From a data collection standpoint, this approach is highly effective: it allows the product to operate directly within the user’s inbox, parse prior messages and threads, identify individual prospects, and retrieve critical context in real time.

However, privacy considerations must be a priority. Only the data that is strictly necessary should be collected, with full transparency regarding what is being gathered and why. All practices must be compliant with data protection regulations such as GDPR.

Data enrichment is another option. If the user provides partial or unclear data, try to fill in the blanks yourself: search the web, scrape their website, or buy the data from a vendor. For instance, our pipeline works like this: we ask users to provide only what we need for the best results and collect all the missing pieces ourselves.
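
One way such a fallback chain could be structured (the lookup helpers here are stubs, not Twain’s actual pipeline) is to walk sources in order of cost and stop at the first hit:

```python
from typing import Callable, Optional

# Stubbed lookups for illustration; real ones would hit a search API,
# a scraper, and a paid data vendor respectively.
def search_web(query: str) -> Optional[str]:
    return None                        # nothing found this time

def scrape_site(url: str) -> Optional[str]:
    return "B2B software"              # pretend the site revealed the industry

def buy_from_vendor(company: str) -> Optional[str]:
    return "B2B software"              # last, most expensive resort

def enrich(user_value: Optional[str],
           fallbacks: list[Callable[[], Optional[str]]]) -> Optional[str]:
    """Prefer user-provided data; otherwise walk cheaper-to-costlier fallbacks."""
    if user_value:
        return user_value
    for source in fallbacks:
        value = source()
        if value:
            return value
    return None  # surface the gap rather than inventing data

industry = enrich(None, [
    lambda: search_web("Acme Corp industry"),
    lambda: scrape_site("https://acme.example"),
    lambda: buy_from_vendor("Acme Corp"),
])
```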

Modern AI tools must be able to retrieve, filter, enrich, and structure context automatically. This is where most of the engineering effort should go, and it’s directly linked to UX quality.

Why Do Things That Appear Simple To The User Often Require Such Significant Effort From The Engineering Team?

Minimalist AI interfaces hide enormous complexity that would be hard for many to even imagine. Every simple interaction requires orchestration: collecting data, validating input quality, isolating relevant segments, generating output, and packaging it into a usable format. In my experience, even minor improvements, like reducing latency by 15%, may require a substantial architecture redesign. This is invisible to users but critical to how the product feels.
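
To make the hidden orchestration concrete, here is a deliberately simplified sketch; every stage is stubbed, but in a real product each one hides integrations, quality gates, and model calls:

```python
def collect(raw: str) -> dict:
    return {"text": raw, "history": []}   # gather context from integrations

def validate(data: dict) -> None:
    if not data["text"].strip():          # input-quality gate
        raise ValueError("empty input")

def isolate(data: dict) -> str:
    return data["text"][:2000]            # keep only the relevant span

def generate(context: str) -> str:
    return f"Draft based on: {context}"   # the actual model call (stubbed)

def package(draft: str) -> str:
    return draft.strip()                  # format into a usable artifact

def handle_request(raw_input: str) -> str:
    """The single click the user sees is really a multi-stage pipeline."""
    data = collect(raw_input)
    validate(data)
    return package(generate(isolate(data)))
```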

When engineers can’t speed up inference, they work on perceived responsiveness: showing progress indicators, streaming partial results, or preloading likely answers. These aren’t UX hacks; they’re core parts of making AI products usable.
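
For instance, streaming partial results can be as simple as rendering tokens as they arrive instead of waiting for the full completion; the snippet below simulates a streaming model API with a stub:

```python
import sys
import time
from typing import Iterator

def fake_token_stream() -> Iterator[str]:
    """Stand-in for a streaming model API that yields tokens as they arrive."""
    for token in ["Drafting", " your", " follow-up", "..."]:
        time.sleep(0.2)        # simulated inference/network delay
        yield token

def render_streaming(tokens: Iterator[str]) -> None:
    """Print output as it is produced so the wait feels shorter."""
    for token in tokens:
        sys.stdout.write(token)
        sys.stdout.flush()     # flush so partial output appears immediately
    print()

render_streaming(fake_token_stream())
```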

Another approach to managing perceived latency is intelligent caching of slowly generated data for reuse when possible. For example, at Twain, we built a Grammarly-style interface that highlights and corrects user messages. Because generating those corrections can be time-consuming, we tracked edits at the sentence level and refreshed the cache only when the original text was meaningfully altered.

You can enhance this approach further: detecting that corrections are needed is always faster than generating the actual edit. This allows you to immediately highlight specific text sections that require attention, then provide the actual corrections only when users interact with those sections.
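
A compact sketch combining both ideas, hash-keyed sentence caching and a cheap needs-correction check ahead of the expensive generation step, could look like this (the detection and generation functions are stubs, not Twain’s implementation):

```python
import hashlib

correction_cache: dict[str, str] = {}

def needs_correction(sentence: str) -> bool:
    """Cheap detection pass (a small model or heuristic in reality); stubbed here."""
    return "  " in sentence

def generate_correction(sentence: str) -> str:
    """Expensive generation pass (an LLM call in reality); stubbed here."""
    return " ".join(sentence.split())

def corrections_for(text: str) -> dict[str, str]:
    """Regenerate corrections only for sentences whose content actually changed."""
    results: dict[str, str] = {}
    for sentence in text.split(". "):
        # Normalising before hashing approximates "meaningfully altered":
        # trivial whitespace edits reuse the cached correction.
        key = hashlib.sha256(" ".join(sentence.split()).lower().encode()).hexdigest()
        if needs_correction(sentence):
            if key not in correction_cache:   # cache miss: pay the generation cost once
                correction_cache[key] = generate_correction(sentence)
            results[sentence] = correction_cache[key]
    return results
```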

How Is The Role Of Software Engineers Changing In The Age Of AI?

AI will not make software engineers obsolete, but it is already reshaping what it means to be one. The job is no longer just about writing lines of code. It is becoming more about owning the product, understanding the business, and figuring out how to work with systems that do not behave in fully predictable ways.

The role of developers is evolving to operate at the intersection of machine learning, domain-specific logic, and user workflows. This shift demands a new set of competencies, including the ability to experiment, navigate ambiguity, and understand the limitations of AI models.

Yes, AI will handle more of the rote coding. But that frees developers to step up: to go deeper into product strategy, or to become the kind of high-level generalist who can architect, build, and ship entire systems on their own.

One thing is clear: change is here. But with it comes a new kind of creativity and a chance to rediscover what makes building software so satisfying in the first place.