A chatbot is an automated service that uses rules and artificial intelligence to interact with users through a chat interface. Here are a few examples of chatbots:
- Weather bot: provides weather information
- Grocery bot: helps users decide on grocery items
- News bot: provides news updates
There are two types of chatbots:
- Rule-based chatbots – These are limited in function because they respond only to specific commands.
- AI-based chatbots – These are more dynamic because they respond to natural language and don't require specific commands. They learn continuously from the conversations they have with people.
Now let's look at how Facebook chatbots work.
A Facebook chatbot is made up of the following components:
- Facebook Page: The Facebook Page is used to define the chatbot, including the name and image that appears when someone chats with it inside Facebook Messenger.
- Facebook App: A Facebook App must be configured for every bot, which gives it a unique App ID. This App is required for sending messages to and receiving messages from the user.
- Bot Server: The bot server is the core of the chatbot; it understands the message, processes it, and responds accordingly. The chatbot developer can choose to use the Facebook bot server or host their own.
- Webhook Endpoint: A webhook, or web callback, is the address of the bot server; Facebook delivers event data to the bot server through it as events happen.
The architecture of a chatbot with all these components in action is shown below.
The user's chat is sent as an HTTP request to Facebook along with the user's Facebook ID, the Page ID, and the chat message. Facebook relays this to the bot server via the webhook. The bot server uses AI to transform the natural-language request into a structured JSON response, which is sent back to the user as an HTTP response. This is how a chatbot works.
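The message flow above can be sketched on the bot-server side. This is a minimal illustration, assuming the standard Messenger webhook payload shape (`entry` → `messaging` events); the function names `parse_messages` and `build_reply` are illustrative helpers, not Facebook APIs.

```python
def parse_messages(payload: dict) -> list:
    """Extract (sender_id, text) pairs from a Messenger webhook payload."""
    events = []
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            sender = event.get("sender", {}).get("id")
            text = event.get("message", {}).get("text")
            if sender and text:
                events.append((sender, text))
    return events

def build_reply(recipient_id: str, text: str) -> dict:
    """Build the JSON body the bot server would POST back via the Send API."""
    return {"recipient": {"id": recipient_id},
            "message": {"text": text}}
```

In practice the bot server receives the payload over HTTPS at the webhook endpoint, runs its language understanding on the extracted text, and POSTs the reply payload back to Facebook.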
Now that we understand how the chatbot works, let's look at its security threats.
The security threats applicable to the different components are marked in the chatbot architecture below:
- Users can be deceived by fake Facebook chatbots
To initiate a conversation with a Facebook chatbot, the user needs to search for it. Once found, the user can start sending messages to it.
Because Facebook does not restrict the creation of Pages/chatbots with similar or identical names, multiple chatbots with the same name can exist. Users can therefore be deceived by chatbots created with malicious intent, making them prone to social engineering attacks.
- Insecure Use of Facebook IDs
During a chat session through Messenger, since the user is already authenticated with Facebook, the user's Facebook ID is sent in every request to the bot server. In most cases, bot servers rely on this ID to identify the user requesting an action, so spoofing becomes a major threat: an attacker can change the Facebook ID to that of another valid user and perform actions on his/her behalf.
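One mitigation is for the bot server not to trust IDs in the request body alone, but to verify that the request genuinely came from Facebook. Facebook signs webhook deliveries with an `X-Hub-Signature` header, an HMAC of the raw request body keyed with the App Secret. A minimal verification sketch (the `APP_SECRET` value is a placeholder):

```python
import hashlib
import hmac

# Placeholder: in a real deployment this comes from secure configuration,
# never from source code.
APP_SECRET = b"app-secret-placeholder"

def is_valid_signature(raw_body: bytes, signature_header: str) -> bool:
    """Check Facebook's X-Hub-Signature header ('sha1=<hexdigest>')
    against an HMAC-SHA1 of the raw body keyed with the App Secret."""
    expected = "sha1=" + hmac.new(APP_SECRET, raw_body, hashlib.sha1).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_header)
```

Requests failing this check should be rejected before any ID in the body is acted upon.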
- Performing Unauthorized actions
Generally, chatbots accept commands from the user and perform actions accordingly. Each command is mapped to a use case on the server. However, as observed in some chatbot instances, default commands/use cases such as "edit", "delete", or "modify" may inadvertently remain enabled on the bot server.
This could allow an attacker to brute-force commands through the chat interface and invoke such unauthorized actions on the server.
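A defensive pattern here is an explicit allowlist on the bot server, so that only deliberately enabled commands can ever be dispatched. A minimal sketch, with hypothetical handlers:

```python
# Only commands listed here can be invoked; anything else, including
# leftover defaults like "edit" or "delete", is rejected outright.
HANDLERS = {
    "weather": lambda: "It is sunny today.",
    "news": lambda: "Here are today's headlines.",
    "help": lambda: "Available commands: weather, news, help",
}

def dispatch(command: str) -> str:
    handler = HANDLERS.get(command.strip().lower())
    if handler is None:
        # Fail closed: unknown input never reaches a server-side use case.
        return "Unknown command."
    return handler()
```

The key design choice is failing closed: the server enumerates what is allowed rather than trying to block what is dangerous.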
- Second order Injection attacks
The chat messages sent by the user are usually stored on the server for monitoring purposes. If these un-sanitized chat messages are later displayed or processed by other applications, they can lead to second-order injection attacks.
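The standard defense when storing chat messages is to treat them strictly as data, for example with parameterized queries. A minimal sketch using SQLite (the `chat_log` schema is illustrative); output encoding would similarly be needed wherever the messages are later displayed:

```python
import sqlite3

# Illustrative in-memory store for logged chat messages.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chat_log (sender_id TEXT, message TEXT)")

def store_message(sender_id: str, message: str) -> None:
    # The ? placeholders ensure the message is bound as data, so quotes
    # or SQL keywords inside it are never interpreted as SQL.
    conn.execute("INSERT INTO chat_log (sender_id, message) VALUES (?, ?)",
                 (sender_id, message))
    conn.commit()
```

Note that parameterization protects the insert itself; any application that later reads and re-uses these messages must apply the same discipline, which is exactly where second-order injection tends to occur.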
- Sensitive information can be stolen from user’s mobile device
The Facebook Messenger application may store some sensitive information in cleartext on the device. Such details can help an attacker gather information about the user. Some of the locations within the Facebook Messenger app folder where such information can be found are:
- Shared Preferences
These are some of the security areas that must be considered for a chatbot. However, given the nature of chatbots and their underlying technologies, more threats may surface. It is therefore imperative to understand the chatbot's context and build a robust threat model for it to enumerate all its security implications.