AI and Cybersecurity: Is There a Balance Between AI and Privacy?

Written by Marketing | Aug 15, 2023 6:49:24 PM

Executive Summary

Artificial Intelligence (AI) is everywhere, powering virtual assistants like Siri and even self-driving vehicles. However, this technology has raised significant concerns regarding the protection of personal and corporate information. AI can predict our preferences and track our activities, offering convenience, but it also poses a serious privacy risk. Many are left wondering if it's even worth incorporating AI into their daily lives. It’s true that AI is convenient and enhances productivity, but are we truly safe?

Perhaps the missing link lies in creating clear and robust protocols for managing personal data within the realm of AI. Striking the right balance between pushing technological boundaries and protecting individual privacy is tricky, but is it impossible?

If strong security measures are in place, it’s possible to successfully utilize AI while safeguarding personal and corporate data. It's a challenging task, but it's vital for a peaceful coexistence between AI innovation and privacy protection. Let’s explore AI in cybersecurity and whether or not a true balance can exist.

AI AND CYBERSECURITY

ChatGPT, launched to the public in late November 2022, enables users to generate essays, stories, and song lyrics through simple prompts. Both Google and Microsoft have introduced similar AI tools, functioning in the same manner and powered by large language models (LLMs) trained on extensive online data. When users provide information to these tools, they have little visibility into how that information will later be used. This creates considerable concerns, particularly for companies.

As online data sharing grows, safeguarding privacy becomes vital. We must carefully assess AI's impact on personal and corporate data, as well as privacy in this evolving digital world. Additionally, we should consider where cybersecurity fits in with these concerns.

OPENAI

In the early days of widespread availability for ChatGPT and similar chatbots, the primary concern within cybersecurity revolved around the potential use of AI technology for launching cyberattacks. In fact, Avertium published a Threat Intelligence Report addressing this very concern. It didn't take long for threat actors to discover ways to bypass safety checks, using ChatGPT to craft malicious code.

However, the focus has since shifted: instead of attackers exploiting ChatGPT to launch cyberattacks, attention has turned to the security of the technology itself. In March 2023, OpenAI, the company behind the chatbot, confirmed a data breach within the system, attributed to a bug in an open-source library used by the service. Due to this flaw, some ChatGPT users ended up seeing chat data that belonged to other people.

Upon investigation, OpenAI determined that the data breach exposed the titles of active users' chat history, along with the initial message of newly created conversations. Additionally, payment-related details of 1.2% of ChatGPT Plus subscribers, including their first and last names, email addresses, home addresses, credit card expiration dates, and the last four digits of their card numbers, were also exposed.

Also, the leaked data may have been present in subscription confirmation emails sent in March 2023 and may have been viewable on the subscription management page in ChatGPT accounts. OpenAI confirmed the exposure occurred during a nine-hour period on March 20 but acknowledged the possibility that information may have been leaked even before that date. The breach resulted in the temporary shutdown of the service until the issue was resolved.

The data leak in ChatGPT was resolved quickly and appeared to cause minimal harm, affecting fewer than 1% of paying users. However, the incident could serve as a warning about risks that might affect users down the line.

CVE-2023-28432

In March 2023, GreyNoise noticed that OpenAI provided code examples to customers trying to integrate plugins with the chatbot's new plugin feature, including a Docker image for the MinIO distributed object storage system.

The MinIO release used in OpenAI's example, the 2022-03-17 build, is affected by a vulnerability tracked as CVE-2023-28432, an information disclosure flaw that can be exploited to obtain secret keys and root passwords. At the time, GreyNoise observed attempts to exploit the vulnerability in the wild.
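
To make the exposure concrete, below is a minimal sketch of how a defender might check their own MinIO deployment for this flaw. The endpoint probed is the cluster bootstrap verification handler that vulnerable builds use to echo environment variables; the target URL is a hypothetical placeholder, and this should only be run against systems you are authorized to test.

    # Minimal sketch: test whether a MinIO deployment you are authorized to assess
    # discloses environment variables via its bootstrap endpoint (CVE-2023-28432).
    # The target URL is a hypothetical placeholder.
    import requests

    TARGET = "http://minio.example.internal:9000"

    def leaks_environment(base_url: str) -> bool:
        """Return True if the bootstrap verify endpoint echoes sensitive variables."""
        resp = requests.post(f"{base_url}/minio/bootstrap/v1/verify", timeout=5)
        # Vulnerable cluster deployments return their environment, which can include
        # MINIO_ROOT_PASSWORD and MINIO_SECRET_KEY; patched builds do not.
        return resp.status_code == 200 and "MINIO_ROOT_PASSWORD" in resp.text

    if leaks_environment(TARGET):
        print("Likely vulnerable - upgrade MinIO and rotate credentials.")
    else:
        print("No disclosure observed (not conclusive proof of safety).")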

GreyNoise cautioned that when attackers embark on mass identification and mass exploitation of vulnerable services, everything is at risk, including any deployed ChatGPT plugins that rely on the outdated version of MinIO.

privacy concerns

Certain companies, such as JPMorgan Chase (JPM), have now placed restrictions on their employees' use of ChatGPT, Jasper Chat, Microsoft Bing AI, and other generative chatbots. The restrictions stem from compliance concerns over employees haphazardly feeding company information into third-party software.

Additionally, Italian regulators announced a sudden, temporary ban on ChatGPT days after the software's March 2023 breach. The regulators highlighted the absence of sufficient information about how data is collected and raised additional concerns, particularly the lack of age verification for ChatGPT users. The regulator argued that the absence of verification "puts children at risk of receiving responses that are entirely inappropriate for their age and level of understanding." Notably, the platform is intended for users aged 13 and above.

the risk

With the growing trend of employees incorporating these tools into routine tasks like work emails or meeting notes, the likelihood of unintentionally sharing company data with AI bots will only rise. While using AI to summarize meeting notes or write company emails may seem like a productivity boost, users often fail to ensure that sensitive company or personal information isn't inadvertently disclosed to the bot. This could result in the accidental exposure of a considerable volume of sensitive data. Here are five significant security concerns associated with generative AI models:

  • Data Breaches: The exposure of sensitive information due to a breach in data security.
  • Poor Software Development: The use of subpar coding practices leading to vulnerabilities.
  • Poor Security: Insufficient measures to protect the AI model and its associated data.
  • Data Leaks: Accidental or intentional disclosure of confidential data.
  • Deepfakes: Voice and facial recognition are increasingly used to unlock devices and access sensitive data, and threat actors can now create deepfakes capable of bypassing those security controls.

Although there is cause for major concern, these risks seem to be addressed in AI security policies. For example, in April 2023, OpenAI released a blog post explaining their safety and privacy measures. It's unclear whether these measures were implemented before or after the ChatGPT data breach in March 2023.

REGARDING PRIVACY

“While some of our training data includes personal information that is available on the public internet, we want our models to learn about the world, not private individuals. So, we work to remove personal information from the training dataset where feasible, fine-tune models to reject requests for personal information of private individuals and respond to requests from individuals to delete their personal information from our systems. These steps minimize the possibility that our models might generate responses that include the personal information of private individuals.” – OpenAI.com

REGARDING CHILDREN AND HATE

“One critical focus of our safety efforts is protecting children. We require that people must be 18 or older—or 13 or older with parental approval—to use our AI tools and are looking into verification options.

We do not permit our technology to be used to generate hateful, harassing, violent or adult content, among other categories. Our latest model, GPT-4 is 82% less likely to respond to requests for disallowed content compared to GPT-3.5 and we have established a robust system to monitor for abuse. GPT-4 is now available to ChatGPT Plus subscribers and we hope to make it available to even more people over time.” – OpenAI.com

is there a balance?

While concerns about AI's potential cybersecurity threats have been extensively reported, there are ways to mitigate certain risks. Most organizations establish proper social media usage policies for employees, along with cybersecurity training. However, with the widespread adoption of generative AI tools, it's time to integrate new policies and training modules. These can include:

  • Guidelines on Information Sharing with Generative AI: Clearly defining what employees can and cannot share with generative AI (one way to enforce this programmatically is sketched after this list).
  • Basic Understanding of Large Language Models (LLMs): Providing an overview of how LLMs function and outlining potential risks associated with their use.
  • Approval Mechanism for Company Devices: Restricting the use of AI and AI apps to those approved by the company, ensuring a controlled environment for AI tool integration.
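
As a concrete illustration of the first guideline, here is a minimal sketch of a pre-submission filter that redacts obviously sensitive patterns before text is sent to any generative AI tool. The pattern list is an illustrative assumption, not a complete data loss prevention rule set.

    # Minimal sketch: redact obvious sensitive patterns from a prompt before it
    # leaves the organization. The pattern list is illustrative, not exhaustive.
    import re

    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace anything matching a sensitive pattern with a labeled placeholder."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label}]", text)
        return text

    prompt = "Summarize: contact jane.doe@corp.com, card 4111 1111 1111 1111."
    print(redact(prompt))
    # Summarize: contact [REDACTED-EMAIL], card [REDACTED-CARD].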

With the ongoing development of generative AI tools, we can expect a rise in specialized cybersecurity solutions designed to address their unique vulnerabilities. Two examples are LLM Shield and Cyberhaven, both aimed at preventing employees from sharing sensitive or proprietary data with generative AI chatbots. Additionally, you can use a network auditing tool to keep track of the AI apps currently connecting to your network and stay aware of the AI tools in use within your organization.
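
In the absence of a dedicated product, even a simple script over DNS logs approximates this kind of auditing. The sketch below assumes a plain-text resolver log and a hand-maintained domain list; both are placeholders to adapt to your own environment.

    # Minimal sketch: count DNS lookups of known generative AI services in a
    # resolver log. The log path, its format, and the domain list are assumptions.
    AI_DOMAINS = ("openai.com", "bard.google.com", "anthropic.com", "jasper.ai")

    def audit_dns_log(path: str) -> dict:
        """Tally log lines that mention a known generative AI domain."""
        hits = {}
        with open(path, encoding="utf-8") as log:
            for line in log:
                for domain in AI_DOMAINS:
                    if domain in line:
                        hits[domain] = hits.get(domain, 0) + 1
        return hits

    for domain, count in audit_dns_log("dns_queries.log").items():
        print(f"{domain}: {count} queries")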

So, yes, there is a balance between using AI in a productive manner while being aware of security risks. On one hand, there are legitimate concerns about the potential risks and vulnerabilities introduced by AI tools, particularly when it comes to data breaches, data leaks, and other security challenges. On the other hand, there are efforts being made to address these concerns, such as the development of cybersecurity tools to keep sensitive information safe.

Additionally, if organizations incorporate AI into their existing cybersecurity strategies, create clear rules, and train their employees, they can actively handle these concerns. Being aware of potential privacy problems and recognizing the need for extra safeguards shows a balanced approach that considers both the benefits of AI and the importance of reducing its risks.

avertium's recommendations

We don't know where generative AI chatbots are going or how long it'll take to figure that out. In the meantime, organizations can try to balance the benefits of chatbots with the safety of company and employee data by considering the following:

  • Robust Policies: Develop clear and comprehensive policies specifically addressing the use of chatbots. These policies should cover the sharing of sensitive information, data handling, and appropriate use of AI tools.
  • Training: Provide training to employees on the responsible and secure use of chatbots. Make sure they understand the potential risks, how to interact safely, and what information should not be shared.
  • Regular Auditing: Conduct regular audits to monitor chatbot interactions and ensure compliance with data privacy policies. This helps identify any potential breaches or misuse.
  • Encryption: Ensure that data transmitted between employees and chatbots is encrypted, protecting it from unauthorized access in transit (a minimal client-side example follows this list).
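
On the last point, client-side tooling can refuse to talk to a chatbot endpoint over anything weaker than verified, modern TLS. The sketch below uses only the Python standard library; the endpoint URL is a hypothetical placeholder.

    # Minimal sketch: require certificate-verified TLS 1.2+ for chatbot traffic
    # and refuse plaintext HTTP outright. The URL is a hypothetical placeholder.
    import ssl
    import urllib.request

    def open_chatbot_endpoint(url: str):
        if not url.startswith("https://"):
            raise ValueError(f"Refusing unencrypted transport: {url}")
        context = ssl.create_default_context()            # verifies certificates
        context.minimum_version = ssl.TLSVersion.TLSv1_2  # disallow legacy TLS
        return urllib.request.urlopen(url, context=context, timeout=10)

    response = open_chatbot_endpoint("https://api.example-chatbot.com/v1/health")
    print(response.status)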

How Avertium is Protecting Our Customers

  • Avertium simplifies Governance, Risk, and Compliance (GRC) by providing contextual understanding instead of unnecessary complexity. With our cross-data, cross-industry, and cross-functional expertise, we enable you to meet regulatory requirements and demonstrate a robust security posture. Our GRC services include:
     
    • Cyber Maturity
    • Compliance Assessments and Consulting
    • Managed GRC

  • Avertium aligns your Cybersecurity Strategy with your business strategy, ensuring that your investment in security is also an investment in your business. Our Cybersecurity Strategy service includes:
     
    • Strategic Security Assessments - Strengthening your security posture begins with knowing where your current program stands (NIST CSF, Security Architecture, Business Impact Analysis, Sensitive Data Inventory, Network Virtualization and Cloud Assessment).
    • Threat Mapping – Leverage Avertium’s Cyber Threat Intelligence, getting a more informed view of your most likely attack scenarios (Threat Assessment and MITRE ATT&CK).
    • Cyber Maturity Roadmap - Embrace a comprehensive, quantifiable, and well-organized approach to establishing and continuously enhancing your cybersecurity resilience (Policy + Procedure Development, Virtual CISO (VCISO), Training + Enablement, Tabletop Exercises, and Business Continuity + Disaster Recovery Plan)

Supporting Documentation

ChatGPT Data Breach Confirmed as Security Firm Warns of Vulnerable Component Exploitation - SecurityWeek

OpenAI Clarifies its Data Privacy Practices for API Users (maginative.com)

Cybersecurity Challenges and Opportunities With AI Chatbots (bankinfosecurity.com)

Our approach to AI safety (openai.com)    

5 security risks of generative AI and how to prepare for them (zapier.com)

67a7081c-c770-4f05-a39e-9d02117e50e8.pdf (washingtonpost.com)

Italy blocks ChatGPT over privacy concerns | CNN Business

JPMorgan restricts employee use of ChatGPT | CNN Business

Study warns deepfakes can fool facial recognition | VentureBeat

OpenAI is massively expanding ChatGPT’s capabilities to let it browse the web and more - The Verge

Can Someone With No Programming Experience Write Ransomware Using ChatGPT? (avertium.com)

ChatGPT Confirms Data Breach, Raising Security Concerns (securityintelligence.com)

When you're talking to a chatbot, who's listening? | CNN Business

ChatGPT and Open AI Security: Protecting Your Privacy in the World of Advanced Language Models | by Rohit Vincent | Version 1 | Medium

12 Best ChatGPT Alternatives in 2023 (Free and Paid) | Beebom

Generative AI Data Privacy with Skyflow LLM Privacy Vault - Skyflow

Generative AI and Its Impact on Privacy Issues | DataGrail

ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications - SecurityWeek

The New Risks ChatGPT Poses to Cybersecurity (hbr.org)

Privacy in the Age of AI: Risks, Challenges and Solutions (thedigitalspeaker.com)

Ethical Considerations in AI-Powered Cybersecurity | by Besnik Limaj, MBA | Medium

AI and automation for cybersecurity (ibm.com)

AI and Privacy: The privacy concerns surrounding AI and its potential impact on personal data - The Economic Times (indiatimes.com)

APPENDIX II: Disclaimer

This document and its contents do not constitute, and are not a substitute for, legal advice. The outcome of a Security Risk Assessment should be utilized to ensure that diligent measures are taken to lower the risk of potential weaknesses being exploited to compromise data.

Although the Services and this report may provide data that Client can use in its compliance efforts, Client (not Avertium) is ultimately responsible for assessing and meeting Client's own compliance responsibilities. This report does not constitute a guarantee or assurance of Client's compliance with any law, regulation or standard.