As many of our customers deploy multiple GenAI products, we’ve introduced a new organizational structure to simplify managing user analytics across different projects.
Organizations
This is the top-level structure for managing your projects. An organization can represent:
- Your company name
- The environment of your chatbot (e.g., production vs. development)
- A specific team (e.g., Marketing, Product, Customer Success) looking to gain insight
Projects
Projects exist within each organization and let you dive deeper into analytics for specific chatbots or user groups. For example:
- Each project can correspond to one chatbot you’re measuring.
- Or, projects can focus on segments of your chatbot’s users, such as those in different geographical markets.
Understanding user retention is critical for growing and optimizing your GenAI products. That’s why we’ve enhanced our user retention chart with two new retention metrics to give you deeper insights into user behavior:
- Rolling Retention: Tracks users who returned consistently across all prior time periods. For example: Week 3 rolling retention includes only users active in weeks 1, 2, and 3. This is great for understanding long-term engagement trends.
- Anchor Retention: Measures user retention relative to a fixed starting point. For example: Week 3 anchor retention shows the percentage of users from week 0 still active in week 3. This metric is perfect for evaluating the overall stickiness of your product.
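To make the distinction between the two metrics concrete, here is a minimal Python sketch of how they could be computed from weekly activity data. The data and function names are illustrative assumptions, not the platform's actual implementation:

```python
# Hypothetical weekly activity data: week index -> set of active user IDs.
weekly_active = {
    0: {"a", "b", "c", "d"},
    1: {"a", "b", "c"},
    2: {"a", "b"},
    3: {"a", "b", "d"},
}

def anchor_retention(weekly_active, week):
    """Share of week-0 users still active in the given week."""
    cohort = weekly_active[0]
    return len(cohort & weekly_active[week]) / len(cohort)

def rolling_retention(weekly_active, week):
    """Share of week-0 users active in EVERY week up to the given week."""
    cohort = set(weekly_active[0])
    for w in range(1, week + 1):
        cohort &= weekly_active[w]
    return len(cohort) / len(weekly_active[0])

print(anchor_retention(weekly_active, 3))   # 0.75: a, b, and d returned in week 3
print(rolling_retention(weekly_active, 3))  # 0.5: only a and b came back every week
```

Note how user "d" counts toward week-3 anchor retention (they were in the week-0 cohort and came back) but not toward rolling retention (they skipped weeks 1 and 2).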
How to Use It
1. Navigate to the User Retention tab.
2. Select the retention type, either Anchor or Rolling.
3. Analyze the user trends and understand why users are returning.
4. Select a specific group of users, such as those returning every week, by clicking on either the point on the chart or the table below.
5. Create a cohort with only these users to gain deeper insights into their behavior.
6. Filter all user analytics on the platform for this subgroup by using the Filter button and selecting the cohort you created.
We’ve made it easier than ever to analyze interactions in your chatbot, regardless of the language your users speak. With our new one-click translation feature, you can translate all user-chatbot interactions into English. This is especially helpful for teams working in a different language than their users, ensuring you gain clear insights without language barriers.
How to Use It
1. Navigate to the Interactions tab.
2. Click the Translate button located in the top-right corner.
3. All interactions will instantly be translated into English for seamless analysis.
This feature empowers you to better understand and analyze conversations with your chatbot, no matter the original language of your users.
• Macro categories of user problems with LLMs. We’ve added macro categories that identify areas where users encounter challenges due to limitations or errors in the language model. This feature provides a clear breakdown of common problem areas, such as knowledge gaps or functional limitations, helping you pinpoint the specific pain points your end users experience and enhance the overall user experience.
These macro-categories aren’t predefined; they’re automatically tailored to your specific use case, providing insights customized to address your users’ unique challenges. Some common issues we often observe include:
• Lack of Knowledge.
Users complain because the chatbot lacks information on specific queries.
• Inability to Perform Tasks.
Users request actions that the chatbot isn’t capable of performing.
• Incorrect Answers.
Users report dissatisfaction with inaccurate responses.
For each macro-category, you can click to explore related issues in greater detail. In the example below, the macro-category “Real-time information and internet access limitations” refers to situations where the chatbot cannot provide real-time or internet-sourced information. Within this category, you can see four clusters of specific issues, offering a highly detailed view of problems to address for a better user experience.
Understand the WHY behind each conversation.
By categorizing interactions as informational, transactional, support-related, or feedback-oriented, you gain insight into why users are engaging, enabling you to tailor responses and optimize content. By default, we have added the following categories of user intent:
1. Informational. Users seeking information or answers to specific questions.
2. Navigational. Users looking to navigate to specific content or sections.
3. Transactional / Buying. Users showing buying signals or interest in making a purchase.
4. Support. Users seeking assistance or troubleshooting help.
5. Exploratory / Research. Users gathering information for broader knowledge.
6. Engagement / Feedback. Users interacting to provide feedback or engage with the content.
7. Complaint / Dissatisfaction. Users expressing dissatisfaction or negative sentiment.
This feature is incredibly powerful for quickly understanding why users are coming to the chatbot. For instance, if you’re curious about users with a purchase intent, you can simply click on the “Transactional” user intent. This will show you the topics these users are interested in buying. In the example below, users with a transactional intent are exploring the topic of literature, specifically seeking book recommendations.
We’ve renamed “User Intents” to “User Actions” to better reflect the tasks or objectives users are aiming to accomplish when interacting with the chatbot. By tracking actions like translating text, checking grammar, or writing emails, you gain a clear understanding of what users are actively doing. This allows you to identify user objectives more effectively and refine the chatbot’s functionality to meet their specific needs.
To help you gain deeper insights, we’ve added a new visualization option in the User Intelligence tab: Map View.
You can now visualize data in three different ways:
• Map (NEW).
This view is ideal for analyzing correlations between user metrics. For example, explore how user sentiment relates to retention or how thumbs-down responses correlate with engagement. Simply select two metrics (primary and secondary) to compare.
• Chart.
Track changes in key user metrics (e.g., topics, intents) over time to see trends and identify popular topics.
• Table.
Understand how user metrics connect to KPIs like retention and engagement. For example, see which topics yield the highest retention.
Many of our customers are interested in exporting the insights we generate to other platforms.
Now, you can effortlessly export any tabular data you see on the platform by downloading the related CSV file. Simply click the Export CSV button located at the top-right corner of each table.
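Once downloaded, the exported CSV can be processed with any standard tooling. As a small sketch, here is how you might load and re-rank an export with Python's standard library (the column names are hypothetical, chosen for illustration):

```python
import csv
import io

# Hypothetical contents of an exported table; real exports will have
# whatever columns the table on the platform shows.
exported = """topic,interactions,retention
Billing,120,0.42
Shipping,80,0.55
"""

with io.StringIO(exported) as f:
    rows = list(csv.DictReader(f))

# Sort topics by retention for downstream analysis.
rows.sort(key=lambda r: float(r["retention"]), reverse=True)
print([r["topic"] for r in rows])  # ['Shipping', 'Billing']
```

In practice you would pass the downloaded file path to `open()` instead of the inline `io.StringIO` used here to keep the example self-contained.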
We have made several enhancements to the reports section to improve the overall user experience.
• View details of LLM Issues.
Now, if you want to see details about a specific point on a chart, you can simply click and select the information you’re most interested in. We’ve also added the ability to view the LLM issues related to each specific point on the chart, helping you understand why users may be dissatisfied with the LLM’s responses.
• Compare with Previous Time Period and Average.
You can now add a comparison to the previous time period for each trend chart in the reports section. Additionally, you can compute and display the average over a specific time period. To activate these options, simply click the “Show Previous Period and Average” button when creating a new chart.
• Adjust Horizontal Bar Chart Text Space.
You can now adjust the size of the text area in horizontal bar charts to better accommodate long labels (such as user intents or behavioral alerts). Just click and drag the vertical blue line to resize the text space as needed.
Some of you may want a deeper understanding of how a specific user is interacting with your LLMs. We’ve added a search option in the Users section, allowing you to quickly find the user you’re looking for. Additionally, we will soon introduce a dedicated Profile Page, which will display all insights related to a specific user’s behavior.
A conversation between your users and LLMs consists of multiple interactions.
Previously, our platform analyzed user interactions with LLMs at the interaction level, meaning that we computed metrics such as topics, user intents, and user issues for each individual interaction. Now, we’ve expanded our capabilities to also provide insights at the conversation level, where multiple interactions are summarized to give a more holistic view of the entire user conversation.
• Conversation-Level Analysis (Aggregate Mode).
In this mode, we provide insights at the conversation level, offering a holistic view of entire user interactions with LLMs. By aggregating multiple interactions within a conversation, this mode highlights key metrics like overall topics, user intents, and recurring problems.
• Interaction-Level Analysis (Detailed Mode).
This mode focuses on the individual interactions within each conversation, enabling a more granular analysis. By breaking down each interaction separately, it captures specific user intents, topics, and problems at each step of the conversation.
You can choose between the two modalities using the button in the top right corner of the platform:
We’ve also added a “Conversations” page, where you can view individual conversations, similar to how you could previously view interactions on the interactions page.
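The difference between the two modes can be sketched in a few lines of Python. This is an illustrative assumption about how aggregation might work, not the platform's actual logic; the label names are hypothetical:

```python
from collections import Counter

# Hypothetical interaction-level labels for one conversation.
interactions = [
    {"topic": "shipping", "intent": "support"},
    {"topic": "shipping", "intent": "support"},
    {"topic": "refunds", "intent": "transactional"},
]

def aggregate_conversation(interactions):
    """Detailed mode keeps each interaction as-is; aggregate mode rolls
    them up into one conversation-level summary like this."""
    topics = Counter(i["topic"] for i in interactions)
    intents = Counter(i["intent"] for i in interactions)
    return {
        "dominant_topic": topics.most_common(1)[0][0],
        "dominant_intent": intents.most_common(1)[0][0],
        "n_interactions": len(interactions),
    }

print(aggregate_conversation(interactions))
# {'dominant_topic': 'shipping', 'dominant_intent': 'support', 'n_interactions': 3}
```

Detailed mode corresponds to inspecting the `interactions` list directly; aggregate mode corresponds to the summary dictionary.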
We’re excited to introduce a new overview page for LLM performance issues, along with a global metric to help you assess how your LLM is performing based on user interactions: the LLM User Error Rate. This metric provides a clear and actionable representation of your LLM’s performance from the users’ perspective. Additionally, we’ve added benchmarks to compare your performance against market data, providing better context for evaluation.
Here’s how the LLM User Error Rate is categorized:
• “Optimal” - Error rate < 10%
The LLM is performing exceptionally well, with minimal user frustration.
• “Moderate” - Error rate between 10% and 20%
The LLM’s performance is generally acceptable, though some users are encountering issues. In this case, we highly recommend reviewing the “Problems Identified” section and addressing those issues to improve the user experience and bring the metric closer to “Optimal.”
• “Critical” - Error rate > 20%
The LLM’s performance is suboptimal, with a significant number of users experiencing frustration. It’s highly recommended to immediately investigate the “Problems Identified” section and prioritize solving these issues to enhance the user experience and reduce the error rate.
This new metric will help you better understand and improve your LLM’s performance, ensuring you can take actionable steps toward achieving optimal user satisfaction.
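The three bands above amount to a simple threshold rule. As a sketch (the function name is ours, and we assume the error rate is expressed as a fraction of interactions where users hit an LLM problem):

```python
def classify_error_rate(error_rate):
    """Map an LLM User Error Rate (a fraction between 0 and 1) onto the
    three bands: Optimal (<10%), Moderate (10-20%), Critical (>20%)."""
    if error_rate < 0.10:
        return "Optimal"
    elif error_rate <= 0.20:
        return "Moderate"
    else:
        return "Critical"

print(classify_error_rate(0.05))  # Optimal
print(classify_error_rate(0.15))  # Moderate
print(classify_error_rate(0.25))  # Critical
```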
New & Improved:
• Improved the UX of chart duplication: when duplicating a chart, you can now directly select the report you want to duplicate it into.
Fixes:
• Fixed a bug that prevented scrolling directly to the new chart when duplicating a chart in the same report.
• Fixed a bug in user filters that prevented the filters from being applied correctly.
• Improved the general stability of the reports page.
• Interactive Chart Reports
You can now click directly on points of interest (e.g., spikes) within your report charts to view the associated interactions, user intents, or topics. This feature allows for a more detailed and intuitive exploration of data trends.
• Enhanced Navigation for Horizontal Charts
For horizontal charts with breakdowns, you can now seamlessly navigate to the detailed user-intelligence page, just as you would when interacting with the user-intelligence tables, providing a consistent and efficient user experience.
We’ve improved the clarity of the interaction details page by clearly distinguishing between user interactions and assistant responses, making it easier to follow conversations.
• Translation UI Improvements
We’ve added a new feature to the translation UI that allows you to switch back to the original text after a message (or conversation) has been translated, offering greater flexibility in viewing content.
Now, when you hover over a chart (whether a line chart or horizontal bar chart) with a breakdown applied, the selected breakdown value is highlighted relative to the other lines or bars, enhancing the visibility and clarity of the chart data.
We added support for the two latest models released by OpenAI:
• o1-preview
• o1-mini
New & Improved:
• On the reports page, when a new chart is created, the page now automatically scrolls to the bottom where the chart has been added, ensuring you can easily view it right away.
• We have removed the global filters from the reports, as they were causing confusion with the dedicated report filters, streamlining the filtering process for better clarity.
Fixes:
• Fixed a bug preventing the sharing of user-intelligence pages.
On the user intelligence page, you can now customize the tabs to suit your preferences.
You can activate or deactivate the data types you’re interested in and arrange them in your desired order. Simply click the edit button on the right side of the tabs component and drag and drop the tab names to reorder them.
We have also added the ability to sort charts in reports.
Simply click the “Edit order” button in the top right corner of the report page. You can then rearrange the charts by dragging and dropping their names to your desired positions. This feature gives you greater control over how your data is displayed and shared with different stakeholders.
For applications where users interact in languages other than English, we added the option to translate the entire conversation instead of just a single message. This full-conversation translation feature greatly enhances the user experience by providing a broader view of what the user discussed across multiple messages.
We have improved the clarity of the retention charts. The configurable parameters and retention descriptions now explain the chart functionality more effectively. It is also easier to understand which parameters need to be modified to obtain the desired retention chart.
New & Improved:
• Now the granularity in the overview page charts automatically switches to hour when a single day time-range is selected on the platform.
• Default granularity values have been improved across the platform.
Fixes:
• Fixed several UI bugs across the platform, resulting in a cleaner user interface and improved user experience.
• Fixed a bug on the report page that directed users to the wrong page when clicking on the breadcrumb.
• Fixed a bug in the average sentiment score rounding.
To give better visibility into the trends of quantities like topics, behavioral alerts, and user intents, we updated the user intelligence page. You can now choose between two different visualizations:
• Table
Continues to display the information in the familiar tabular format.
• Chart
Lets you select the topics, intents, or behavioral alerts you are most interested in and visualize them as a time series over the selected time range. This will help you analyze trends in the time frames that matter most to you.
For each user metric (such as a specific topic), you can choose to visualize the trend of various related “primary metrics.” For example, you might explore:
• The trend of interactions related to the topic
• The LLM error rate associated with the topic, and more
To visualize these trends, simply select the primary metric of interest using the button below:
To adjust the granularity of the x-axis, you can select from “Hour,” “Day,” “Week,” “Month,” or “Year.”
To provide a comprehensive analysis of how your users are interacting with the chatbot, we have added two more metrics you can monitor to the user intelligence page:
• User emotions
We selected a set of 27 emotions based on psychological literature.
For each interaction, we detect whether the user is expressing one or more of these emotions.
• User sentiment
Each interaction is classified into one of six sentiment categories:
• Very Negative. When the user uses highly explicit negative terms or expresses a strongly opinionated negative view.
• Negative. When the user is irritated, complaining, or mildly insulting the assistant.
• Neutral. No discernible sentiment is detected.
• Mixed. Both positive and negative sentiments are expressed in the interaction.
• Positive. When the user praises something, either implicitly or explicitly.
• Very Positive. When the user is enthusiastic about something.
Both metrics are computed at the interaction level and can be visualized on the User Intelligence page as well as in reports.
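To illustrate how interaction-level sentiment labels could feed an average sentiment score, here is a minimal sketch. The numeric mapping below is our own assumption for illustration; the platform's actual scoring may differ:

```python
# Hypothetical numeric mapping from sentiment category to score.
SENTIMENT_SCORES = {
    "Very Negative": -2,
    "Negative": -1,
    "Neutral": 0,
    "Mixed": 0,
    "Positive": 1,
    "Very Positive": 2,
}

def average_sentiment(labels):
    """Average the per-interaction sentiment labels into one score."""
    return sum(SENTIMENT_SCORES[label] for label in labels) / len(labels)

print(average_sentiment(["Positive", "Neutral", "Very Positive"]))  # 1.0
```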
For applications where users interact in languages other than English (such as Spanish, German, etc.), you can now easily translate raw interactions into English using the “Translate to English” feature.
This functionality simplifies platform use in multilingual environments, eliminating the need to manually translate what your users are saying.
To activate translation, simply click the “Translation” button within the details of the raw interaction.
To simplify using the User Retention page, you can now easily select the “Retention Frequency” from:
• “Daily”: users coming back every day
• “Weekly”: users coming back every week
• “Monthly”: users coming back every month
You can also set the “Starting Date”: the date from which you want to begin analyzing retention.
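Conceptually, the retention frequency determines how activity dates are bucketed into periods relative to the starting date. A small sketch of one way this could work (our own illustration, not the platform's implementation):

```python
from datetime import date

# Hypothetical activity dates for one user.
events = [date(2024, 5, 1), date(2024, 5, 3), date(2024, 5, 9)]

def bucket(d, frequency, start):
    """Index of the period a date falls into, relative to the starting date."""
    days = (d - start).days
    if frequency == "Daily":
        return days
    if frequency == "Weekly":
        return days // 7
    if frequency == "Monthly":
        return (d.year - start.year) * 12 + (d.month - start.month)
    raise ValueError(f"unknown frequency: {frequency}")

start = date(2024, 5, 1)
print(sorted({bucket(d, "Weekly", start) for d in events}))  # [0, 1]
```

With a "Weekly" frequency, the first two events land in week 0 and the third in week 1, so this user counts as retained in week 1.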
Click the copy link button to copy the link to your clipboard.