In the past, chatbots mainly provided canned answers to simple questions, but the increased sophistication of artificial intelligence (AI) has allowed chatbots to fulfill a useful role within enterprise customer support departments. However, as chatbots become more sophisticated and useful, they collect more valuable and sensitive data, making chatbot security an important priority.
Enterprise chatbots are designed to fulfill a customer service role, and a crucial part of this job is data collection.
Like a human customer service representative, a chatbot often needs to collect personal data in order to help with an issue or complaint. At a minimum, it will request a name and some other piece of identifying information (account number, email address, phone number, etc.) in order to locate the relevant account in the company's system.
The security issues with chatbots arise after this data is collected. The data needs to be transmitted from the client's browser to the enterprise's server. If the data is not secured in transit, a hacker could eavesdrop on the transmissions; and a failure to properly secure and discard data at rest could leave a large repository of customer personal information sitting on an untrusted server.
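One way to reduce data-at-rest risk is to minimize what gets persisted in the first place. The sketch below (the helper name and masking pattern are illustrative assumptions, not any particular vendor's implementation) masks strings that look like payment card numbers before a chat transcript is written to storage:

```python
import re

# Illustrative pattern: 13-19 digits, optionally separated by spaces or
# dashes, anchored so the match starts and ends on a digit.
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def redact_card_numbers(transcript: str) -> str:
    """Replace likely payment card numbers with a fixed placeholder
    before the transcript is persisted."""
    return CARD_PATTERN.sub("[REDACTED CARD]", transcript)

print(redact_card_numbers("My card is 4111 1111 1111 1111, can you help?"))
```

A real deployment would pair redaction like this with a retention policy that discards transcripts once they are no longer needed.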
If the chatbot is running on the enterprise website, then securing communication channels and data at rest is fairly straightforward. Offering chatbot functionality only on HTTPS webpages protects the data in transit, and a sound organizational data management process protects the data at rest.
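The HTTPS-only rule can also be enforced on the sending side. A minimal sketch (the endpoint URL and function name are hypothetical) that refuses to transmit chat data over anything but verified TLS:

```python
import ssl
import urllib.request
from urllib.parse import urlparse

def send_chat_message(endpoint: str, payload: bytes) -> bytes:
    """Send a chat payload, refusing any endpoint that is not HTTPS."""
    if urlparse(endpoint).scheme != "https":
        raise ValueError("chat data must only be sent over HTTPS")
    # The default context verifies the server certificate and hostname,
    # so an eavesdropper cannot silently impersonate the server.
    context = ssl.create_default_context()
    request = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request, context=context) as response:
        return response.read()
```

The point of the check is that it fails closed: a misconfigured plain-HTTP endpoint raises immediately rather than leaking customer data in cleartext.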
However, with the increased use of social media platforms as extensions of organizations' marketing and customer service departments, many organizations also operate chatbot software on these platforms. This means that securing sensitive customer data may not be entirely within an organization's control.
Recent incidents have also demonstrated that social media platforms are willing to sell user data to third parties without authorization, meaning that data collected by your organization's chatbot could end up with buyers you never approved.
The risk of chatbot-caused breaches is not a theoretical one. In recent years, several enterprises including Delta, Sears, and Ticketmaster had data breaches caused by a failure to secure chatbot systems.
Delta and Sears were breached in September 2017 by a hack of a third-party chatbot provider that was discovered the following month. Both companies relied on a chatbot provided by [24]7.ai as part of their customer service process. The chatbot provider's servers were infected with malware in September, leaking the payment card details of customers of both companies. Sears estimated that the personal data of fewer than 100,000 customers was exposed, while Delta believes the breach compromised the payment card information of hundreds of thousands of users.
In June 2018, Ticketmaster discovered that it had suffered a similar breach due to chatbot software provided by a third party. Ticketmaster used Inbenta’s JavaScript chatbot software as part of its payments page. The attacker modified JavaScript code used in the chatbot to steal the payment card information of an estimated 40,000 users in the UK. While the breach was discovered in June 2018, infiltration may have stretched back as far as September 2017.
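Attacks of this style, where a third-party script is silently modified, can be mitigated with Subresource Integrity (SRI): the page pins a hash of the exact script it expects, and the browser refuses to run a tampered copy. A sketch of computing the integrity value (the script URL in the comment is a placeholder):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384) for a script,
    suitable for the integrity attribute of a <script> tag."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The resulting value pins the exact chatbot script the page loads, e.g.:
# <script src="https://thirdparty.example/chat.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
print(sri_hash(b"console.log('chat widget');"))
```

If the provider's servers are compromised and the script changes, the hash no longer matches and the browser blocks it, which would have limited the blast radius of a Ticketmaster-style injection.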
The impact of these breaches is significant due to the type of data exposed and the users affected. Since the breached data was payment card information, the PCI-DSS standard applies in all cases. The affected Ticketmaster users were UK citizens, who are protected under GDPR as well.
Artificial intelligence in general, and chatbots in particular, are powerful tools for helping an organization operate at scale. Even simple chatbots can collect data, answer basic questions, and route inquiries, decreasing the load on human customer service representatives and improving service by reducing wait times and letting agents spend more time with each customer.
The volume and value of the data collected by chatbots make securing them a priority. The data collected, transmitted, and stored by chatbots needs to be protected at the level required by any applicable regulations (GDPR, HIPAA, PCI-DSS, etc.).
If you have a chatbot system or plan to implement one, reach out for a consultation to learn the best ways to ensure that your system is secure.