Navigating ChatGPT's Blindspots

By Steve Levinson on March 21, 2023 (Last Updated on March 27, 2023)



Cybercrime is an ever-present threat, and the continued advancement of Artificial Intelligence only serves to amplify it. As such, it is essential that the enterprise community be aware of the potential risks associated with AI adoption in their organizations and take the necessary steps to protect their data.


I’ve been in the IT industry for well over three decades, and my peers and clients all agree that the rapid evolution of easy-to-use Artificial Intelligence (AI) engines such as ChatGPT has the potential to let us asymmetrically advance our businesses and technologies at rates we’ve never seen before. These Large Language Models (LLMs), built on natural language processing (NLP) techniques, learn based on how we humans interact with them.

Organizations that rely on any form of technology risk being left in the dust by their more agile competitors if they don’t consider the potential implications of how AI chat may (or may not) allow for rapid evolution.

 

So, what’s to worry about?

AI chat can pose a risk to organizations by potentially exposing corporate secrets, customer information, and intellectual property. While droves of technologists (note: you don’t even have to be a programmer to write a program!) have found great value in using ChatGPT to refine code or double-check work, what do we know about the collection of that data, where it goes, or how it could potentially be used?

Training data extraction attacks, also known as "exfiltration via machine learning inference," could gather sensitive information or steal IP, according to MITRE's Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) knowledge base. By querying a generative AI system in a special way (think of it as digital interrogation), an “attacker” (which may also just be a very curious researcher) may trigger the model to recall a specific piece of information, rather than generate synthetic data.
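
To make that idea concrete, here is a minimal sketch of what such a probe might look like. The prefixes, the length heuristic, and the query_model placeholder are all illustrative assumptions on my part, not a reproduction of any documented attack or any real API:

```python
# Hypothetical sketch of a training-data extraction probe.
# query_model() is a placeholder for whatever chat/completions
# endpoint the target model exposes -- it is not a real library call.

SEED_PREFIXES = [
    "-----BEGIN RSA PRIVATE KEY-----",        # credential material
    "Customer SSN:",                          # PII field label
    "CONFIDENTIAL - Project Falcon design:",  # made-up internal marker
]

def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to the model and return its completion."""
    raise NotImplementedError("wire this to your model's API")

def probe(prefixes: list[str]) -> list[tuple[str, str]]:
    """Feed the model secret-looking prefixes and flag substantial
    continuations, which may be verbatim recall rather than synthesis."""
    findings = []
    for prefix in prefixes:
        completion = query_model(prefix)
        # Crude heuristic: any lengthy continuation of a secret-looking
        # prefix deserves a human look.
        if len(completion) > 40:
            findings.append((prefix, completion[:80]))
    return findings
```

A real attacker (or auditor) would sample many completions per prefix and compare them, since memorized text tends to reappear verbatim across samples while genuinely synthetic text varies.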

Given the current state of AI, almost all models require a human chaperone. These things are like highly knowledgeable yet somewhat sociopathic teenagers: they lack the intelligence and wisdom to act appropriately, or to understand what is appropriate, in a particular situation.

 

Where to start?

As I’m a believer in drinking my own champagne (doing what I tell others to do), I asked ChatGPT to generate advice on how best to protect my IP when using AI chat for development purposes.

This was the output (along with my comments).

  • EDUCATE EMPLOYEES: It is essential to ensure that all employees understand the significance of intellectual property and the risks of sharing confidential information in chats. Conduct training sessions and provide resources that educate employees on how to identify and safeguard sensitive information. [Steve: Fully agree with this as the number one suggestion. That said, when you’re in the heat of the moment, iterating on something innovative, you may not realize that the questions you ask can potentially be viewed by others and may allow a competitor to piece the parts together, so this will be a whole new kind of education beyond just protecting “static” IP. Of note, it should be a given to keep sensitive data out of training sets in the first place.]

  • ENFORCE POLICIES AND CONSEQUENCES: Establish a strict code of conduct for AI chat use and enforce it consistently. Clearly outline the consequences of violating the policy to deter employees from sharing confidential information. [Steve: This is critical - it’s unfortunate that technologies often advance more quickly than humans can create policies to address these technologies.]

  • LIMIT ACCESS: Restrict access to confidential information to only those who need it. Apply access controls, such as password protection and two-factor authentication, to prevent unauthorized access to chat platforms. [Steve: This will become increasingly difficult, as companies will need not only policies or guidelines that state what is allowed, but also technical controls such as content filters/app-level firewalls to help enforce them (a minimal sketch of such a filter appears after this list). In addition, AI chat could end up inadvertently bypassing “legacy” access controls based on need-to-know. That said, innovators will have good business reasons to require access to work their magic.]

  • USE SECURE AI CHAT PLATFORMS: Choose AI chat platforms that have end-to-end encryption and other security features that can prevent data breaches. Ensure that the chat platform is compliant with industry regulations such as GDPR, CCPA, etc. [Steve: Yes, but… once data is in the chat engine (i.e., no longer in your control), it is possible that others could take advantage of the content – especially if chatbots are in use.]

  • MONITOR AI CHAT ACTIVITY: Implement a chat monitoring system to track all conversations in chat platforms. Review and analyze chat logs regularly to identify irregularities, suspicious behavior, or inappropriate use of company data. [Steve: Likely sounds easier than it is… plus there is the issue of privacy. The sketch after this list logs filter decisions for exactly this kind of review.]

  • REGULARLY AUDIT AI CHAT SYSTEMS: Regularly audit AI chat systems to ensure they are up-to-date, and vulnerabilities are addressed. [Steve: Applicable to the extent that these are controlled by your organization, but still a large concern for public engines such as ChatGPT.]
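
Bringing the “limit access” and “monitor activity” points together: below is a minimal sketch of an outbound prompt filter an organization might place between employees and a public chat engine, assuming Python and purely illustrative patterns. A production DLP gateway or app-level firewall would be far more sophisticated; this just shows the shape of the control.

```python
import logging
import re

# Illustrative patterns only -- tune these to your own data classification.
SENSITIVE_PATTERNS = {
    "api_key":  re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

audit_log = logging.getLogger("ai_chat_audit")
logging.basicConfig(level=logging.INFO)

def screen_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the chat engine.

    Every decision is logged so reviewers can later spot irregularities
    or inappropriate use (the MONITOR bullet above)."""
    hits = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]
    if hits:
        audit_log.warning("BLOCKED user=%s matched=%s", user, ",".join(hits))
        return False
    audit_log.info("ALLOWED user=%s chars=%d", user, len(prompt))
    return True

# Example: this prompt would be blocked and the attempt logged.
screen_prompt("steve", "Please debug this: api_key = sk-123")
```

Note the tension the list itself predicts: the same log that catches leaks is also a record of employee activity, so the privacy concern raised in the monitoring bullet applies to the control as well as to the chat.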

 

Another thing to monitor is how software vendors are addressing privacy and security concerns in their offerings. Last week, Microsoft announced Microsoft 365 Copilot, basically a ChatGPT-style add-in for all Office products (currently available only to a small group of beta users, including some Fortune 500 companies). This tool allows a business user to generate notes, email, presentations, spreadsheets, etc., based on the information the user has in Office.

Microsoft recognizes the concerns many have and has stated, “Copilot presents only data that each individual can access using the same underlying controls for data access used in other Microsoft 365 services. Additionally, we do not use customer data to train or improve the large language models used to generate content.”


A funny thing I’ve been saying since the mid-’80s (a hypothesis that has yet to be disproved) is:

“With each and every advance in artificial intelligence, we have an equal
and opposite advance in artificial stupidity.”

We are all about to strap in for a wild, and perhaps the most exciting, ride of our IT lifetimes, and we need to approach these new technologies with eyes wide open. The guidelines above are a start, but we are all going to learn a heck of a lot about advancing our organizations, as well as about the pitfalls that some will suffer by not adequately thinking through what could go wrong.

For now, the guidelines above can help you mitigate the risk of leaking intellectual property through AI chat platforms like ChatGPT, thereby protecting your organization's assets, reputation, and clients. But tune in again shortly, as the journey is just beginning, and who knows… perhaps ChatGPT will pick up some of my insights to further enrich us.

 

For your reading pleasure:

At Online, we understand the implications of AI on cybersecurity and have a wealth of experience helping organizations take advantage of the power of AI while navigating the risks involved in integrating it.

Our team of experts can help identify potential blind spots and vulnerabilities in your systems and your adoption of new technologies, and develop strategies to mitigate those risks. We can also provide guidance on how best to implement AI securely, helping to ensure that your intellectual property and data remain safe. If you are looking to take advantage of the opportunities that AI presents but need help understanding the potential risks, please get in touch with us. We can provide tailored advice on how to safely and securely implement AI, helping to keep your organization safe from cybercrime.





As Vice President, Risk, Security and Privacy and Online's Security Officer, Steve is responsible for building, growing, and leading a pragmatic, risk-based, business-minded practice focusing on governance/program management, vulnerability management, and data protection.

 Steve is a recognized thought leader in the industry. He has spoken at numerous industry events and blogs regularly.

Steve is a CISSP with over twenty years of information security experience and over thirty years of IT experience. He holds an MBA from Emory Business School and an Industrial Engineering degree from Georgia Tech. Prior to joining Online, Steve helped lead the consulting practices at AT&T Consulting and at Verisign and he has extensive experience in the Payment Card Industry.

 

