How dangerous are ChatGPT and natural language technology to cybersecurity?

ChatGPT is the hottest artificial intelligence (AI) application of the moment. If you’re one of the few who haven’t seen it, it’s a very sophisticated generative AI chatbot powered by OpenAI’s GPT-3 Large Language Model (LLM). In essence, that means it’s a computer program that understands us and “talks” to us in a way that’s very close to conversing with a real person: a very intelligent and knowledgeable individual, backed by a model with around 175 billion parameters, that can recall relevant information almost instantly.

ChatGPT’s powerful capabilities have captured the public’s imagination about AI. There has been a lot of speculation about how it will affect a wide range of human job roles, from customer service to computer programming. Here, though, I want to take a quick look at what it might mean for the world of cybersecurity. Could it lead to an increase in the already rapidly growing number of cyberattacks targeting businesses and individuals? Or does it empower those whose job is to respond to those attacks?

How are GPT and successor technologies used in cyberattacks?

ChatGPT, and more importantly future iterations of the technology, has applications in both cyberattack and cyberdefense. That’s because the underlying technology, known as natural language processing or natural language generation (NLP/NLG), can convincingly mimic written or spoken human language and can also be used to write computer code.

First, an important caveat. OpenAI, the creator of GPT-3 and ChatGPT, has built in some fairly strict safeguards that, in theory, prevent it from being used for malicious purposes. It does this by filtering requests for phrases suggesting that someone is trying to put it to such use.

For example, ask it to create a ransomware application (software that encrypts a target’s data and demands payment to restore access), and it politely declines.

“Sorry, I can’t write code for a ransomware application…my purpose is to inform and help users…not to facilitate harmful activity,” it told me when I asked it as an experiment.
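OpenAI hasn’t published exactly how this filtering works inside ChatGPT, but developers building their own applications on these models can apply a similar screen using OpenAI’s public Moderation endpoint. Here is a minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY environment variable:

```python
# Sketch: screening a request with OpenAI's public Moderation endpoint
# before passing it to a language model. This is one way developers can
# filter requests; it is not necessarily how ChatGPT works internally.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(prompt: str) -> bool:
    """Return True if the Moderation endpoint flags the prompt."""
    result = client.moderations.create(input=prompt)
    return result.results[0].flagged

if is_flagged("Write ransomware that encrypts a victim's files"):
    print("Request refused: the prompt appears malicious.")
```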

However, some researchers say they have been able to find ways around these limitations. Furthermore, there is no guarantee that future iterations of LLM/NLG/NLP techniques will fully incorporate such safeguards.

Some of the possibilities open to malicious parties include:

Writing more formal, official-sounding scam and phishing emails – for example, messages encouraging users to share passwords or sensitive personal data such as bank account information. The technology can also generate many such emails automatically, each personalized to target different groups or even individuals.

Automating communication with scam victims – if cyber thieves are using ransomware to extort money from victims, sophisticated chatbots could scale up their ability to communicate with those victims and walk them through the process of paying a ransom.

Creating malware – as ChatGPT demonstrates, NLG/NLP algorithms can now be used to write computer code proficiently, and that ability could be exploited to let almost anyone create custom malware designed to spy on user activity, steal data, infect systems with ransomware, and more.

Building language capabilities into the malware itself – this could lead to a whole new class of malware that can, for example, read and understand the entire contents of a target’s computer system or email account to determine what is valuable and worth stealing. Malware might even “listen in” on a victim’s attempts to fight back, for example a conversation with a helpline worker, and adjust its defenses accordingly.

How can ChatGPT and successor technologies be used for cyber defense?

In general, AI has potential applications in both attack and defense, and fortunately, natural language-based AI is no exception. Some of the defensive uses include:

Identifying phishing scams – by analyzing the content of emails and text messages, it can predict whether they are likely attempts to trick users into handing over personal or exploitable information.
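As a rough sketch of how this might work in practice, the snippet below asks a GPT-style model to label an email as phishing or legitimate. The model name and prompt are illustrative assumptions, and a real deployment would need far more rigorous evaluation than a one-word answer:

```python
# Sketch: asking a GPT-style model to classify an email.
# The model name is an illustrative assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()

def classify_email(email_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are an email security filter. Reply with "
                        "exactly one word: PHISHING or LEGITIMATE."},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_email(
    "Your account is locked! Click here and re-enter your password "
    "within 24 hours to avoid suspension."
))
```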

Coding anti-malware – because these models can write computer code in many popular languages, including Python, JavaScript, and C, they can potentially assist in creating software for detecting and eradicating viruses and other malware.
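For instance, when asked for a basic signature-based scanner, these models can already produce something along the lines of the sketch below. This is a deliberately minimal illustration: real anti-malware relies on heuristics, behavior monitoring, and constantly updated signature feeds, and the hash here is a dummy placeholder:

```python
# Sketch: a minimal signature-based file scanner of the kind an LLM can
# generate on request. Real anti-malware does far more than hash matching.
import hashlib
from pathlib import Path

# Assumption: a feed of known-bad SHA-256 hashes (this one is a dummy).
KNOWN_BAD_HASHES = {"0" * 64}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(directory: str) -> None:
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
            print(f"Possible malware: {path}")

scan(".")  # check every file under the current directory
```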

Finding vulnerabilities in existing code – hackers often comb through poorly written code for vulnerabilities to exploit, such as buffer overflows, which can crash systems and potentially leak data. NLP/NLG algorithms can potentially spot these exploitable flaws and generate alerts.
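A full LLM-based code review is hard to show compactly, but as a naive stand-in, here is the kind of pattern an automated reviewer looks for: calls to C library functions that are classic buffer-overflow culprits. This toy keyword heuristic is my own illustration; real static analyzers and LLM reviewers reason about data flow, not keywords:

```python
# Sketch: a toy scan of C source for functions commonly implicated in
# buffer overflows. For illustration only; real tools analyze data flow.
import re

RISKY_CALLS = ("gets", "strcpy", "strcat", "sprintf")
PATTERN = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")

def audit(c_source: str) -> None:
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        hit = PATTERN.search(line)
        if hit:
            print(f"line {lineno}: {hit.group(1)}() can overflow "
                  "a fixed-size buffer")

audit("char buf[8];\ngets(buf);  /* unbounded read into 8 bytes */")
```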

Authenticating users – this type of AI can be used to authenticate users by analyzing how they speak, write, and type.
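As a simplified sketch of the typing side of that idea, the code below compares a user’s keystroke timing against an enrolled profile. The single feature (mean inter-key interval) and the 25% tolerance are invented for illustration; real behavioral biometrics use much richer models:

```python
# Sketch: toy keystroke-dynamics check. Enrollment stores a user's mean
# interval between keystrokes; login compares a fresh sample against it.
# The 25% tolerance is an invented illustration, not a vetted threshold.
from statistics import mean

def enroll(intervals_ms: list[float]) -> float:
    """Store the user's typical inter-keystroke interval."""
    return mean(intervals_ms)

def matches_profile(profile_ms: float, sample_ms: list[float],
                    tolerance: float = 0.25) -> bool:
    return abs(mean(sample_ms) - profile_ms) <= tolerance * profile_ms

profile = enroll([112, 98, 130, 105, 121])        # enrollment session
print(matches_profile(profile, [108, 115, 99]))   # True: plausibly same user
print(matches_profile(profile, [260, 300, 280]))  # False: different rhythm
```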

Creating automated reports and summaries – it can automatically produce plain-language summaries of the attacks and threats that have been detected or responded to, or that an organization is most likely to fall victim to. These reports can be customized for different audiences, such as IT departments or executives, with recommendations tailored to each.
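Here is a sketch of what that might look like, using a GPT-style model to turn raw alert logs into an executive-friendly summary. The model name and the sample alert lines are illustrative assumptions:

```python
# Sketch: turning raw security alerts into a plain-language executive
# summary with an LLM. The model name and alert lines are illustrative.
from openai import OpenAI

client = OpenAI()

alerts = [
    "03:12 blocked 412 failed logins on vpn-gw-2 (credential stuffing)",
    "09:47 quarantined attachment invoice.xlsm on mail-01 (macro dropper)",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works
    messages=[
        {"role": "system",
         "content": "Summarize these security alerts for a non-technical "
                    "executive in three sentences, ending with one "
                    "recommendation."},
        {"role": "user", "content": "\n".join(alerts)},
    ],
)
print(response.choices[0].message.content)
```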

I work in cybersecurity – is this a threat to my job?

Right now, debate is raging over whether artificial intelligence will lead to widespread human job losses. My view is that while some jobs will inevitably disappear, more will likely be created to replace them. What’s more, the jobs lost are likely to be those involving mostly routine, repetitive work, such as installing and updating email filters and anti-malware software.

On the other hand, the roles that remain or are newly created will be those requiring more creative, imaginative, and human skills. These include developing expertise in machine learning engineering to build new solutions, but also cultivating a culture of cybersecurity awareness within the organization, educating employees about threats that AI may not be able to stop (such as the danger of workers writing login details on post-it notes), and developing a strategic approach to cybersecurity.

It’s clear that thanks to AI, we’re entering a world where machines will take over some of the more routine “thinking” work. Just as previous technological revolutions saw machines take over routine manual work while skilled manual trades such as carpentry and plumbing remained human, the AI revolution is, in my view, likely to have a similar impact. That means information and knowledge workers in fields likely to be affected, such as cybersecurity, should develop the ability to use AI to augment their skills while further developing the “soft” human skills that are unlikely to be replaced anytime soon.

To stay on top of the latest emerging business and technology trends, be sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books Future Skills: 20 Skills and Competencies Everyone Needs to Succeed in a Digital World and Business Trends in Practice, which won the 2022 Business Book of the Year Award.


