When choosing a platform for your next chatbot

Updated October 24, 2023
Glib Dobzhanskiy
VP of Engineering

When choosing a platform for your next chatbot, keep in mind that the bot's backend needs to scale easily. Correctly understanding user input is critically important for next-generation bots: not only handling predefined interactions such as menus and buttons, but also understanding intents expressed in natural language.

Running a complete infrastructure on your own servers is complex and expensive, but nowadays there are several ready-made solutions built specifically for bots:

Microsoft Azure Bot Service

Azure Bot Service is a combination of the Microsoft Bot Builder framework, Azure Functions, and Microsoft Cognitive Services.


There are also ready-to-use bot templates that run on Azure Bot Service, so you can easily extend them to fit your needs.
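For orientation, a Node.js bot built with the Bot Builder framework boils down to a connector plus a dialog handler. Here is a minimal, hedged sketch using the v3-style API; the echo dialog, port, and environment variable names are placeholders, not part of any Azure template:

```typescript
// Minimal echo bot on the Bot Builder v3 framework (sketch only).
// MICROSOFT_APP_ID / MICROSOFT_APP_PASSWORD come from your Azure bot registration.
import * as builder from 'botbuilder';
import * as restify from 'restify';

const connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD,
});

// Default dialog: echo whatever the user typed.
const bot = new builder.UniversalBot(connector, (session) => {
  session.send(`You said: ${session.message.text}`);
});

const server = restify.createServer();
// Channels (Skype, Facebook, web chat, ...) post activities to this endpoint.
server.post('/api/messages', connector.listen());

const port = Number(process.env.PORT) || 3978;
server.listen(port, () => {
  console.log(`Bot listening on ${server.url}`);
});
```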

Pricing models


Functions are billed based on observed resource consumption, measured in gigabyte-seconds (GB-s). Observed consumption is calculated by multiplying the average memory size in gigabytes by the time in seconds it takes the function to execute. Memory used by a function is rounded up to the nearest 128 MB, up to a maximum memory size of 1,536 MB. Functions pricing includes a monthly free grant of 400,000 GB-s.
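To make the GB-s math concrete, here is a rough back-of-the-envelope estimator. The 400,000 GB-s free grant comes from the pricing above; the per-GB-s and per-execution rates, and the free execution count, are assumptions for illustration only, so check the current Azure pricing page for real numbers:

```typescript
// Rough estimator for Azure Functions consumption-plan cost (sketch only).
const PRICE_PER_GB_SECOND = 0.000016;      // assumed $/GB-s
const PRICE_PER_MILLION_EXECUTIONS = 0.2;  // assumed $ per 1M executions
const FREE_GB_SECONDS = 400_000;           // monthly free grant (from the pricing above)
const FREE_EXECUTIONS = 1_000_000;         // assumed monthly free grant

function monthlyFunctionsCost(
  executions: number,
  avgMemoryMb: number,
  avgDurationSec: number,
): number {
  // Memory is billed rounded up to the nearest 128 MB bucket (max 1,536 MB).
  const billedGb = Math.min(Math.ceil(avgMemoryMb / 128) * 128, 1536) / 1024;
  const gbSeconds = executions * billedGb * avgDurationSec;

  const computeCost = Math.max(0, gbSeconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND;
  const executionCost =
    (Math.max(0, executions - FREE_EXECUTIONS) / 1_000_000) * PRICE_PER_MILLION_EXECUTIONS;
  return computeCost + executionCost;
}

// Example: 3M requests/month, ~256 MB used, ~0.5 s per call.
console.log(monthlyFunctionsCost(3_000_000, 256, 0.5).toFixed(2));
```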

Microsoft Cognitive Services

It is one of the most advanced publicly available machine learning solutions.
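The language-understanding piece, LUIS, exposes a plain REST endpoint that returns the top-scoring intent and entities for an utterance. A minimal sketch of querying it directly (the region, app ID, and subscription key are placeholders you get from luis.ai):

```typescript
// Query a published LUIS app over REST (v2-style endpoint); sketch only.
interface LuisResult {
  query: string;
  topScoringIntent: { intent: string; score: number };
  entities: { entity: string; type: string }[];
}

async function detectIntent(utterance: string): Promise<LuisResult> {
  const region = 'westus';                        // placeholder region
  const appId = process.env.LUIS_APP_ID;          // placeholder
  const key = process.env.LUIS_SUBSCRIPTION_KEY;  // placeholder
  const url =
    `https://${region}.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}` +
    `?subscription-key=${key}&q=${encodeURIComponent(utterance)}`;

  const res = await fetch(url);
  if (!res.ok) throw new Error(`LUIS request failed: ${res.status}`);
  return (await res.json()) as LuisResult;
}

// detectIntent('book a flight to Paris').then((r) => console.log(r.topScoringIntent));
```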

Pricing models:

[Cognitive Services pricing table]

Pros:

Pay only for used machine time (no additional service or runtime monthly charges).

Easy scaling (truly elastic infrastructure).

One backend for multiple platforms.

Easy integration of Cognitive Services tools (NLU, sentiment analysis).

LUIS platform.

Cons:

We need to use the Microsoft Bot Framework and can't access the latest platform-specific bot features.

We are limited by the Azure Functions infrastructure and can choose only between Node.js and C# as development platforms.

Google Cloud

Google doesn't have its own bot platform or bot-specific cloud plans, but its machine learning services, alongside its compute services, make the Google solution comparable with its competitors.

Google Cloud Platform delivers a microservices offering similar to AWS Lambda and Azure Functions, called Cloud Functions, but it is limited to JavaScript only. Cloud Functions is currently in alpha and available only to EAP (Early Access Program) members.
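The programming model is simple: a Cloud Function serving as a bot webhook is just an exported HTTP handler. A minimal sketch, where the handler name and the webhook payload shape are assumptions rather than a fixed contract:

```typescript
// HTTP-triggered Cloud Function acting as a bot webhook (sketch only).
import type { Request, Response } from 'express';

export function botWebhook(req: Request, res: Response): void {
  const text: string = req.body?.message?.text ?? ''; // assumed payload shape
  // A real bot would call an NLU service here; this one just echoes.
  res.status(200).json({ reply: `You said: ${text}` });
}
```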

PaaS from Google: App Engine

It is more powerful but at the same time more expensive. App Engine supports Python, Java, Go, and PHP. App Engine pricing, comparable to standard VPS solutions, is as follows:

[App Engine pricing table]

The Google Natural Language API is great for text analysis: extracting information, parsing intents, or detecting text sentiment. There is also a free tier, but overall the service looks a little more expensive than its competitor from Microsoft:

[Natural Language API pricing table; prices per 1,000 records per month]
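Calling the API is straightforward: a single REST request returns the document sentiment. A minimal sketch of the analyzeSentiment endpoint (the API key variable is a placeholder):

```typescript
// Document sentiment via the Natural Language API REST endpoint (sketch only).
async function analyzeSentiment(text: string): Promise<{ score: number; magnitude: number }> {
  const key = process.env.GOOGLE_API_KEY; // placeholder
  const res = await fetch(
    `https://language.googleapis.com/v1/documents:analyzeSentiment?key=${key}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ document: { type: 'PLAIN_TEXT', content: text } }),
    },
  );
  if (!res.ok) throw new Error(`Natural Language API request failed: ${res.status}`);
  const data = await res.json();
  return data.documentSentiment; // score in [-1, 1], magnitude >= 0
}

// analyzeSentiment('I love this bot').then(console.log);
```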

Pros:

App Engine supports many programming languages.

Easy-to-use and well-documented API.

Free-tier plans for the Natural Language API.

Cons:

There is no dedicated solution for bot hosting.

Cloud Functions is still in a closed early-access stage.

The Natural Language API is quite expensive and not deeply configurable.

Amazon Lex

Amazon Lex provides conversational interfaces for your applications, powered by the same deep learning technologies as Alexa. It was released only recently.

Amazon's great idea is that bots can be not only text-based but voice-based as well. Skills for Amazon Alexa are a good example of voice-based bots, so one backend can handle both text and voice requests.

Pricing models

With Amazon Lex, you pay only for what you use. You are charged based on the number of text or voice requests processed by your bot: $0.004 per voice request and $0.00075 per text request. For example, 1,000 speech requests would cost $4.00, and 1,000 text requests would cost $0.75. Your usage is measured in requests processed, which are added up at the end of the month to generate your monthly charges.
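In other words, a month's bill is a single multiply-and-add over your expected traffic. A quick sketch using the rates above (the traffic numbers are only an illustration):

```typescript
// Monthly Lex cost from the per-request rates quoted above.
const VOICE_RATE = 0.004;   // $ per voice request
const TEXT_RATE = 0.00075;  // $ per text request

function lexMonthlyCost(voiceRequests: number, textRequests: number): number {
  return voiceRequests * VOICE_RATE + textRequests * TEXT_RATE;
}

// Example: 50,000 voice + 200,000 text requests => $200 + $150 = $350.
console.log(lexMonthlyCost(50_000, 200_000));
```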

Lex builds on Amazon's own speech recognition and natural language understanding, and uses AWS Lambda functions for fulfillment logic.
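Fulfillment logic for a Lex intent is typically a Lambda handler that receives the recognized intent and slots and returns a dialog action. A minimal, hedged sketch using the Lex V1-style payload; the intent and slot names and the reply are placeholders:

```typescript
// Lambda fulfillment handler for a Lex intent (V1-style payload); sketch only.
interface LexEvent {
  currentIntent: { name: string; slots: Record<string, string | null> };
  inputTranscript: string;
}

export async function handler(event: LexEvent) {
  const city = event.currentIntent.slots['City'] ?? 'somewhere'; // placeholder slot
  return {
    dialogAction: {
      type: 'Close',
      fulfillmentState: 'Fulfilled',
      message: {
        contentType: 'PlainText',
        content: `OK, booking a trip to ${city}.`, // placeholder fulfillment message
      },
    },
  };
}
```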

Pros:

A unified text and voice interface for bot implementation (voice-based bots currently work only on the Amazon Alexa platform).

The Lex backend is bot-platform independent: there are ready-made recipes for running a Facebook Messenger bot on AWS Lambda, but you can build any kind of bot there.

You can choose between JavaScript, Python, and Java to write Lambda function code.

Pay as you use (no monthly service charges).

Access to full AWS stack.

Cons:

Lex is in private beta right now (we have already joined the waiting list).

Amazon vendor lock-in.

You still need to implement the low-level interaction code yourself.

Smart functions and natural language understanding are the key features of today's bots. Many cloud providers already deliver such APIs or plan to introduce them in the near future. Combining smart APIs with event-based cloud functions can help you build smart, easy-to-scale bots at relatively low cost.
