More efficiency. More loyalty.
With natural language processing and machine learning tools, reach the balance between automation and personalized experience.
Ease your workload
Let the chatbot take up to 45% of incoming inquiries right out of the box.
Consistent user experience
Web. Mobile app. SMS. Messaging app. Enjoy a consistent user experience across channels.
Your bot sidekick – night or day
Boost self-service adoption. Improve operational efficiency. Increase customer satisfaction. Reinvent the in-person experience online.
Frequent industry questions
Implement a bot equipped with answers to the most frequently asked questions in the financial services industry.
Easy-train bot for business users
Let the bot handle up to 70%–80% of incoming inquiries when monitored and trained by business users.
The smart bot just got smarter with our GPT Extension for financial services
Unblu Bot is a highly configurable, no-code solution that allows for an efficient integration process. Integration can be completed in as little as one week, and personalizing the Bot to match your brand and colors takes only a couple of days.
Creating the intents is the most extensive task, which can take from 4 to 8 weeks depending on your organization’s familiarity with the top 30 customer queries. Liaising with your contact center is key to understanding the topics discussed over phone, alongside analyzing emails, forms, or Live Chat channels if enabled. Our service team will assist in training and co-building the necessary intents for a successful Go-Live.
In summary, the Bot implementation and Go-Live featuring a substantial number of intents could take from one to three months, while delivering value from day one.
Starting Too Small: Launching the Bot on a limited section of your website with minimal capabilities can hinder engagement and successful interactions. Avoid this by presenting the Bot with a comprehensive scope, showcasing its true value from the start.
Going Live Without a Fallback Solution: Launching a Bot without Live Chat capabilities or a fallback solution can impede customer adoption. It’s crucial to provide an option for customers to connect with a live agent if the Bot is unable to deliver a satisfactory answer.
Perfectionism Delay: Spending excessive time, like 12 months, striving for the "perfect" Bot can be counterproductive. Customer needs define the Bot’s knowledge, so deploying it sooner allows for real-time learning and optimization. Once the initial Go-Live intents are in place, learning from live engagements is key to achieving organizational goals swiftly.
There are two primary methods to gauge a Bot’s impact on an organization. The empirical approach assesses how many requests the Bot can handle without involving a live agent. Comparing the cost of a Bot session with the cost of a phone call typically reveals a 60–80% cost reduction per interaction. The Service Rate, or Deflection Rate, then becomes a key indicator of the Bot’s value, showing its ability to handle inquiries without agent intervention.
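The economics above can be sketched with simple arithmetic. All figures here (inquiry volume, per-call and per-session costs) are illustrative assumptions, not Unblu pricing:

```python
# Sketch of the deflection-rate economics described above.
# All cost figures are illustrative assumptions.

def bot_savings(total_inquiries: int,
                deflection_rate: float,
                cost_per_call: float,
                cost_per_bot_session: float) -> dict:
    """Estimate savings when a share of inquiries is deflected to the Bot."""
    deflected = int(total_inquiries * deflection_rate)
    cost_without_bot = total_inquiries * cost_per_call
    cost_with_bot = (deflected * cost_per_bot_session
                     + (total_inquiries - deflected) * cost_per_call)
    return {
        "deflected": deflected,
        "savings": cost_without_bot - cost_with_bot,
        # cost reduction per interaction handled by the Bot
        "reduction_per_deflected_interaction":
            1 - cost_per_bot_session / cost_per_call,
    }

# 10,000 inquiries/month, 45% deflection out of the box,
# CHF 8 per phone call vs. CHF 2 per Bot session (assumed).
print(bot_savings(10_000, 0.45, 8.0, 2.0))
```

With these assumed costs, each deflected interaction is 75% cheaper, in line with the 60–80% range cited above.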
Beyond quantitative analysis, Bots offer two distinct advantages. They operate 24/7 to provide a continuous service experience, and can encourage customers who are hesitant to make a phone call to engage with the organization, enhancing customer satisfaction as a result. Organizations implementing text channels often witness a 5 to 10% increase in new customer engagements, underscoring the broader positive impact of Bot deployment.
The TCO of your Bot is influenced by factors like your contact center’s agent capacity, service growth, and the Bot’s target service rate. Ongoing maintenance and optimization requirements vary among organizations, involving tasks such as performance monitoring and intent improvement.
Our Bot packages include Service Days/year in the first year, ensuring continuous improvement led by Unblu experts. This serves as a bridge for a seamless transition to a dedicated resource in your organization. We collaborate with you to tailor ongoing maintenance to your needs, enabling internal resource training for sustained Bot performance. For example, we provide an experienced call center agent with training to enable them to train the Bot going forward. This approach ensures your Bot evolves with your organization, aligning with growth and requirements.
If the Bot cannot match the next intent, the knowledge Bot kicks in and formulates a response using an LLM, leveraging the content defined in the knowledge section of the Bot builder. If no relevant content can be found, the customer is offered a chat with a live agent.
The escalation point is fully configurable. You can add a live-agent handover within an intent without going through the knowledge Bot or push the handover to the right agent team.
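The escalation chain described above can be pictured as a simple fallback sequence: configured intents first, then the knowledge Bot, then a live agent. This is an illustrative sketch; the function names, the in-memory intent table, and the stubbed knowledge Bot are assumptions, not the Unblu API:

```python
# Hypothetical sketch of the escalation chain:
# intent Bot -> knowledge (LLM) Bot -> live agent.
from typing import Optional

def match_intent(message: str) -> Optional[str]:
    """Stand-in for intent recognition against configured intents."""
    intents = {"opening hours": "We are open Mon-Fri, 8:00-18:00."}
    for trigger, answer in intents.items():
        if trigger in message.lower():
            return answer
    return None

def knowledge_bot(message: str) -> Optional[str]:
    """Stand-in for the LLM answering from the knowledge section."""
    return None  # pretend no relevant content was found

def handle(message: str) -> str:
    answer = match_intent(message)       # 1. try configured intents
    if answer is None:
        answer = knowledge_bot(message)  # 2. fall back to the knowledge Bot
    if answer is None:                   # 3. escalate to a live agent
        answer = "Let me connect you with a live agent."
    return answer

print(handle("What are your opening hours?"))
print(handle("Something the Bot cannot answer"))
```

In the real product, step 3 is the configurable escalation point: a handover can also be placed directly inside an intent, skipping step 2.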
Absolutely. Our Bot Persona feature empowers financial organizations to tailor the conversational tone to each customer. You can specify the brevity of answers and directly test it in the Bot Persona window within the Bot Dialog Builder. This allows for nuanced adjustments, such as using regional language specific to countries or areas, whether Scotland, Canada, etc.
For instance, you may want the Bot to adopt a formal tone when engaging customers aged 50 and over, while employing a more direct and friendly tone with a young student. Currently, you can define the Bot’s tone based on website areas, and we are actively developing the capability to leverage CRM data for authenticated customers, aligning the tone with customer segmentation. At Unblu, we refer to this as Corporate Language: translating a company’s brand and culture through the Bot’s tone of expression.
During business hours, the Bot seamlessly hands over to a live agent without disrupting the customer journey. Outside business hours, alternative options like booking an appointment can be provided.
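The time-based routing above amounts to a simple check: hand over to a live agent inside business hours, offer an appointment outside them. The hours, weekday rule, and option names below are assumptions for illustration:

```python
# Illustrative sketch of business-hours handover routing.
from datetime import datetime, time

BUSINESS_START, BUSINESS_END = time(8, 0), time(18, 0)  # assumed hours

def escalation_option(now: datetime) -> str:
    """Pick the escalation path based on the current time."""
    in_hours = BUSINESS_START <= now.time() < BUSINESS_END
    is_weekday = now.weekday() < 5  # Mon-Fri
    if in_hours and is_weekday:
        return "handover_to_live_agent"
    return "offer_appointment_booking"

print(escalation_option(datetime(2024, 3, 4, 10, 30)))  # Monday morning
print(escalation_option(datetime(2024, 3, 4, 22, 0)))   # Monday night
```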
Yes. Both the intent Bot and the LLM Bot can tolerate a certain number of mistakes, ensuring robust performance even in the presence of minor errors. Several factors influence this tolerance, such as the wording and length of the messages, as well as the Large Language Model used for the LLM Bot.
All conversations, whether successful or otherwise, can be viewed in the Bot Dialog Builder. From there, you can review the questions the Bot couldn’t answer and decide whether to attach a question to an existing intent, create a new intent to answer it, or store it in a backlog for building the intent later.
In addition, this can also help you to identify any gaps in the knowledge Bot content; you can add the relevant content (e.g., web page, PDF documents) to the knowledge section of the Bot builder.
Yes, the best practice is to input a variety of sentences for each intent to ensure the Bot recognizes different ways of asking the same question. An intent is not deployed until it has at least 20 sentences, and we recommend going up to 50 sentences for frequently asked questions. The Bot trainer assist (LLM Bot use case) can be leveraged to generate those sentences.
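The deployment rule above (at least 20 training sentences per intent, around 50 recommended for frequently asked questions) can be expressed as a small check. The thresholds come from the text; the data structure and status strings are illustrative:

```python
# Sketch of the intent-deployment rule: >= 20 training sentences
# required, ~50 recommended for frequently asked questions.
MIN_SENTENCES = 20
RECOMMENDED_FAQ_SENTENCES = 50

def intent_status(training_sentences: list[str], is_faq: bool = False) -> str:
    """Report whether an intent has enough training sentences to deploy."""
    n = len(training_sentences)
    if n < MIN_SENTENCES:
        return f"blocked: needs {MIN_SENTENCES - n} more sentences"
    if is_faq and n < RECOMMENDED_FAQ_SENTENCES:
        return "deployable, but add more variants for this FAQ"
    return "deployable"

print(intent_status(["How do I reset my password?"] * 12))
print(intent_status(["Where can I find my IBAN?"] * 25, is_faq=True))
```

In practice, the Bot trainer assist mentioned above can generate the sentence variants needed to clear these thresholds.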
The intent-based Bot can answer with text, images, and even take users through a workflow to get a better understanding of the customer’s request or organize the next steps in the customer journey (booking a meeting, hand-over to an agent, etc.). Moreover, with text answers, you can configure the Bot to provide alternative answers to the same questions.
Through an integration with Core Banking systems, the Bot can draw on personalized data from the Core Banking system to provide more tailored responses. For example, you can use the CRM to personalize the greeting if the customer is authenticated.
Furthermore, you can align intents with actions to trigger a specific flow in the Bot. It is important to understand that the integrations with your Core Banking systems are not associated with the LLM integration of our Knowledge Bot. This means that no customer data is fed to the LLM through an integration between Core Banking System and the Intent Bot.
By harnessing state-of-the-art Large Language Model (LLM) technology, training your Bot becomes as simple as a single click, eliminating the need to predict customer queries in advance. This advanced technology empowers your Bot to leverage existing knowledge stored in websites or documents to dynamically adapt and respond effectively to a wide range of user inquiries without explicit pre-training. This streamlined approach not only enhances the Bot’s responsiveness but also maximizes the utilization of your organization’s existing knowledge, ensuring a more intelligent and versatile conversational experience for your customers.
When a customer’s inquiry doesn’t align with pre-defined intents, the LLM steps in to generate answers by drawing upon the knowledge defined by your organization. Deploying LLM technology allows for the enhancement of existing intents, fortifying them to be more resilient and adaptable to a diverse array of customer queries.
The LLM-based Bot only leverages the data made available through the Knowledge Section of the Dialog Builder. This data can be fed as website URLs or PDF documents.
As we leverage Azure OpenAI, we can ensure that your prompts (inputs), completions (outputs), embeddings, and training data will not be available to OpenAI or other customers. They will also not be used to improve OpenAI models or other third party services. Your fine-tuned Azure OpenAI models are available exclusively for your use.
Moreover, we provide regional availability in key business hubs such as Switzerland, France, Canada, USA, and the UK. This ensures that your organization has full control over data residency, offering peace of mind by preventing data from being processed in third countries, and maintaining compliance with regional and national regulations in data protection and privacy.
Yes. The Bot Sidekick generates an answer based on the end customer’s question, but the agent decides whether or not the answer is good enough. The agent can also edit the answer before it is sent.
Specific infrastructure is needed, as an LLM requires a certain amount of computing power to operate. Nonetheless, a light version of an LLM can be deployed as a first phase to test the capabilities.
Today we are using Azure OpenAI and IBM Watson X. We have built our connectors to switch from one provider to another depending on our customers’ requirements. For example, for customers requesting an on-premise deployment, we are working on using other types of LLMs. We collaborate with our customers in the financial services industry and closely monitor the relevant services and regulations to identify the optimal LLM and deployment model for every scenario.
Our Knowledge Bot derives insights from two primary sources: websites and PDF documents.
Websites: No action is required, provided the URL remains unchanged. The Bot automatically retrieves the latest information from the specified website.
Documents: If a document is updated, simply remove the previous version from the builder and upload the new one with a few clicks.
It is worth noting that iterative updates are not currently supported. To address this, we recommend structuring the knowledge base with frequent updates and static documents in separate sections. This allows content managers to easily identify files requiring regular updates. Our service team is available to assist in the setup. For frequently asked questions related to documented answers, we suggest utilizing intent design to retrieve updated documents directly from the source (third-party database).
Give customers the experience they deserve
High satisfaction for long-term loyalty
No-risk doc exchange
Share or exchange text messages, documents, files, or images in a secured location.
Enhance the conversation
All the digital bells and whistles we know and love – including reactions, emojis, voice messages, and more.
Hot off the press
See Unblu Spark In Action