This week, researchers at Group-IB discovered that credentials for upwards of 100,000 ChatGPT user accounts had been put up for sale on dark web marketplaces over the past year. The credentials were harvested by a class of malware known as information-stealing malware.
What is Information-Stealing Malware?
Information-stealing malware, otherwise known as info stealers, are Trojans explicitly created to harvest data from a compromised system. The most common info stealers collect login data, such as usernames and passwords, and pass it to the attackers' servers in the form of a log.
Thus far, OpenAI has denied any reports alluding to a ChatGPT breach and has attributed the stolen accounts for sale to “commodity malware” present on users’ devices.
“The findings from Group-IB’s Threat Intelligence report result from commodity malware on people’s devices and not an OpenAI breach. We are currently investigating the accounts that have been exposed,” an OpenAI spokesperson explained.
“OpenAI maintains industry best practices for authenticating and authorizing users to services including ChatGPT, and we encourage our users to use strong passwords and install only verified and trusted software to personal computers.”
The Security Risks of ChatGPT
A popular feature of ChatGPT is the option for users to save their conversations with the AI bot. If malicious actors gain access to a user account with saved conversations, they can obtain a trove of sensitive business information, such as business strategies, software code, and other confidential data.
As the risks of feeding sensitive business information into cloud-based language models become more widely recognized, Samsung has joined a growing list of companies with strict policies against using ChatGPT on work devices.
How to Keep Your Information Safe on ChatGPT
One practical piece of advice for those who handle sensitive data in ChatGPT is to disable the feature that saves chat logs, or to manually delete any conversations that are better kept private.
But these tips are not foolproof. Many info stealers capture screenshots of the compromised system or install keyloggers that record conversations as they are typed on the victim’s machine.
Readers are urged to change their login passwords periodically and to enable two-factor authentication (2FA) or multifactor authentication (MFA) to reduce the risks associated with compromised ChatGPT accounts.
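To illustrate why 2FA helps even when a password has been stolen, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 that most authenticator apps implement, using only the Python standard library. The function name and parameters are illustrative and unrelated to any ChatGPT or OpenAI API; this is a sketch of the standard mechanism, not a production implementation.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1).

    secret_b32: the shared secret as a base32 string (what the QR code encodes).
    at:         Unix timestamp to compute the code for (defaults to "now").
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)           # counter as 8-byte big-endian
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" in base32.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, at=59, digits=8))  # prints "94287082" per the RFC test vectors
```

Because the code changes every 30 seconds and is derived from a secret that never leaves the authenticator device, a stolen password alone is not enough to log in, which is exactly the scenario info stealers create.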