What are the Implications of ChatGPT for InfoSec?

What does ChatGPT have in store for information security? 

Will the artificial intelligence-powered chatbot empower the information security field or tip the balance of power into the hands of cybercriminals?

Still in its infancy, the ChatGPT bot has piqued the world's interest, gained millions of users in mere days, and even had Elon Musk tweeting about its capabilities. "ChatGPT is scary good. We are not far from dangerously strong AI," he wrote.

With the advent of ChatGPT, some in the infosec world feel the industry has been endowed with a new set of powers, one that opens up both new opportunities and new challenges for security professionals. Let's analyze the potential implications of ChatGPT and infosec joining forces.

Two Sides of the Same Weapon

Cybersecurity solution developers have been predicting for years that AI will revolutionize cybersecurity and make computing far safer. But one-sided statements of this sort have a bad history of not delivering. They tend to overlook the fact that a new weapon landing in both adversaries' hands never stops the fight; it may shift the battle lines, change offense and defense strategies, and, sadly, raise the number of casualties and victims.

It was widely believed that Hiram Maxim’s invention of the machine gun in 1884 would end all battles. I mean, with that kind of armament at its disposal, what army could conceivably attack a defensive position? But as John Ellis wrote in his historical work The Social History of the Machine Gun, “without Hiram Maxim, much of subsequent world history might have been different.”

AI, as revolutionary as it may be, isn't going to eradicate the threat landscape, but it will in all probability change it drastically.

For example, ChatGPT will undoubtedly be used to generate code. This is arguably the most obvious way ChatGPT will be leveraged for cyberattacks. Writing computer code from a straightforward text request is what ChatGPT excels at, and hackers have already shown that ChatGPT can create new malware strains. But wait! Security professionals have the same tool at their disposal to reverse engineer new strains of malware, study how the hackers built them, and create signatures that catch infected files. Used for information security, ChatGPT will jumpstart the reverse-engineering process, giving security teams a fair chance.

Increase in Targeted Attacks

There will undoubtedly be an increase in targeted attacks in the near future. With ChatGPT at the disposal of malicious actors, we can expect to see an uptick in attacks that reach out to each potential victim in a customized way. Until now, crafting an attack message tailored to each victim has been a labor-intensive operation: hackers either had to research each target's interests and adapt their messaging to the data they collected, or opt for the simpler method of sending a single generic message to everyone.

If a trusty AI bot can research likely triggers and craft a personalized attack vector for each target from the data the actors already hold, attacks will have a significantly higher likelihood of success with little to no personal effort on the attackers' part. It's like giving every attacker a built-in social-engineering advisor.

Evil Chatbots

Another highly probable development of the ChatGPT revolution is the rise of evil chatbots. Malicious actors could use tools like ChatGPT to build chatbots that mimic real humans in order to spread misinformation or coerce individuals. Guess what? Online crooks are already offering malicious ChatGPT bots for sale.

Closing the Workforce Hole

Do you want to hear the good news or the bad news? The good news is that there are now more cybersecurity professionals than ever before. According to (ISC)2's annual Cybersecurity Workforce Study, 4.7 million people now hold a security-related job.

The bad news is that the same study found a global shortage of 3.4 million cybersecurity workers. Seventy percent of those polled believe their company's security team is understaffed, which reduces its efficacy.

This is really all old news, and industry leaders have been looking to new methods of accessing talent, such as leveraging next-gen technology like automation, to help close the gap before a critical shortage sets in. ChatGPT might be what they were waiting for.

ChatGPT has the potential to close the biggest vulnerability the infosec community faces: the shortage of skilled security manpower and talent. But as with every tool, there is a flip side. On one hand, ChatGPT can grow the workforce by enabling less experienced employees with the right soft skills to enter the security field. Additionally, ChatGPT can be extremely beneficial as a force multiplier that lets a limited number of analysts do the job of many more people.

On the other hand, actors with very little technical expertise can now create attack execution tools for practically no expense, whereas this ability was previously only available to a small number of highly professional hackers. Recent examples include programmers creating malware that can retrieve useful documents from infected devices and upload them to a remote server or download more aggressive payloads like crypto lockers.

Examples of How ChatGPT May be Leveraged:

Triage:

SOC analysts can feed alerts and logs to the chatbot and ask ChatGPT to interpret the data and suggest the next steps in the triage workflow, as in the sketch below.
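
To make this concrete, here is a minimal sketch of what that could look like in practice. It assumes the official OpenAI Python SDK; the model name, prompt wording, and sample alert are all hypothetical placeholders rather than a prescribed workflow.

```python
# A hypothetical triage helper. Assumes the official OpenAI Python SDK
# (`pip install openai`) and an OPENAI_API_KEY environment variable;
# the model name, prompt wording, and sample alert are illustrative only.
from openai import OpenAI

client = OpenAI()

alert = """\
2023-02-14T03:12:44Z ALERT powershell.exe spawned by winword.exe
CommandLine: powershell -enc JABjAGwAaQBlAG4AdAAg...
Host: FIN-WS-042  User: jdoe
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your team has access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a SOC triage assistant. Classify the alert's "
                "severity and suggest the next steps in the triage workflow."
            ),
        },
        {"role": "user", "content": alert},
    ],
)

print(response.choices[0].message.content)
```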

Deobfuscation:

Malware researchers can use the tool to deobfuscate long, winding code that was purposely written to obscure its meaning and malicious agenda. Deobfuscation, which typically takes hours of work to untangle and rewrite the code "beautifully," can now be performed in seconds, thanks to some really smart bots; a hedged example follows below.
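
Again as a rough sketch, under the same SDK assumption, a researcher's first pass might look like this. The obfuscated JavaScript fragment and the prompt are invented for illustration.

```python
# A hypothetical deobfuscation pass. Same assumptions as the triage sketch:
# OpenAI Python SDK, OPENAI_API_KEY set, placeholder model name. The
# obfuscated JavaScript fragment below is invented, and you should think
# twice before pasting real malware samples into any third-party API.
from openai import OpenAI

client = OpenAI()

obfuscated = (
    "var _0x4f2a=['charCodeAt','fromCharCode','split'];"
    "(function(s){/* ...deliberately unreadable... */})(_0x4f2a);"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Deobfuscate this JavaScript, give the variables meaningful "
                "names, and explain what it does:\n\n" + obfuscated
            ),
        },
    ],
)

print(response.choices[0].message.content)
```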

Incident Response:

A team can feed all available data about an incident into the existing model, using natural language to describe the rationale for a potential response. ChatGPT could then immediately prove or disprove a theory about a compromise. Today, fully resolving an incident involves several days of work by an incident response lead, an engineer, and several analysts. I can foresee a future where the process doesn't need an analyst at all; the sketch below hints at what that might look like.
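
As a thought experiment, a hypothesis check might be as simple as the following sketch; the evidence timeline, the theory being tested, and the model name are all made up.

```python
# A hypothetical hypothesis check for incident response. Assumes the OpenAI
# Python SDK; the evidence timeline, the theory being tested, and the model
# name are all invented for illustration.
from openai import OpenAI

client = OpenAI()

evidence = """\
- 02:10 UTC: VPN login for user jdoe from an unfamiliar ASN
- 02:14 UTC: jdoe added to the 'Domain Admins' group
- 02:31 UTC: 4 GB outbound transfer to an unknown IP over port 443
"""

theory = "jdoe's credentials were stolen and used to exfiltrate data."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                f"Given this incident timeline:\n{evidence}\n"
                f"Does the evidence support or contradict this theory: "
                f"'{theory}'? Cite the events that drive your reasoning and "
                f"list what additional data would confirm or refute it."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The analyst still owns the verdict, but the first read of the evidence arrives in seconds rather than days.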

ChatGPT and Data Privacy

One of the main concerns with the new language model is ChatGPT's privacy issues. The model uses any data it is fed, including personal information and social media content, without obtaining permission from the owners, making that data difficult to control. ChatGPT's own privacy policy allows the company to access any information fed into it.

If someone were to try to delete their personal data from ChatGPT, they would find it nearly impossible, which makes exercising the "right to be forgotten" virtually impossible as well. To date, there is no practical way to remove personal data from a machine learning model once the model has processed that information.

“People are furious that data is being used without their permission,” Sadia Afroz, AI researcher with Avast, says. “Sometimes, some people have deleted the data but since the language model has already used them, the data is there forever. They don’t know how to delete the data.” 

To combat ChatGPT privacy concerns, efforts are being made to let users delete their personal information from the model, but there is no timeframe yet for when this service will be available, or whether it will work on a technical level.

Long-Term Impact

The expectation is a shift in privacy and data regulation to govern the role ChatGPT will play in the years ahead. Additionally, as the technology is here to stay, new safeguarding technologies, like detection tools, will be built around it.

There is the inevitable concern about whether it will replace humans and their jobs, but we don't believe ChatGPT will make this happen. On the contrary, it will likely escalate the battle on all levels.

The potential of ChatGPT is almost endless. As individuals and businesses around the world leverage ChatGPT for their objectives, we'll begin to see the effects take shape. At Centraleyes, we share the security sector's enthusiasm for ChatGPT's potential to fundamentally alter how information security is practiced in the future.
