
    Exploring ChatGPT, Part 2: Nemesis

    By Ronak D Jain
    Published on March 22, 2023

    The first part of the Exploring ChatGPT series, Genesis, discussed the technology behind OpenAI's ChatGPT, how ChatGPT differentiates itself from other chatbots, and how it is disrupting the cybersecurity industry. This second part discusses ChatGPT's position in the AI-driven cybercrime landscape and, subsequently, cybercrime cases in which ChatGPT served as an accomplice.

    ChatGPT in relation to AI-driven cybercrime

    Like every other technology, AI can be used for both positive and negative purposes; its impact depends on who uses it and what they use it for. As you might guess, ChatGPT is already on the radar of cybercriminals, who are drawn to its ease of use. Machine learning (ML) and deep learning (DL) are likewise being leveraged by attackers to devise new and complicated infiltration techniques. It is therefore important to understand the landscape of AI-based cyberthreats.

    In 2020, the European police agency Europol and the security provider Trend Micro carried out a joint study that identified the ways cybercriminals are seizing on the latest developments in AI to improve their chances of success, as well as how AI is likely to bolster cybercrime in the future.

    Martin Roesler, head of forward-looking threat research at Trend Micro, said at the report's publication, "Cybercriminals have always been early adopters of the latest technology and AI is no different." The report also stated that 37% of businesses have already integrated AI with other security disciplines in one way or another.

    The following are some of the most prevalent and notable ways AI is being used to carry out sophisticated cyberattacks:

    • Crafting malicious emails with the capability to circumvent spam filters
    • Cracking CAPTCHAs used by a majority of websites online
    • Integrating AI with social engineering attacks
    • Creating deepfakes
    • Automating and optimizing cyberattack operations

    It's important to realize that without the power of AI, carrying out such operations at such a large scale is close to impossible, and ChatGPT is no exception. As mentioned before, ChatGPT is built on AI and ML, which makes it a powerful tool for cybercriminals; in fact, many see it as an assistant, even more so since its commercialization. With respect to the misuse of AI, ChatGPT can be used for the following:

    • Data theft, such as hacking and breaching systems by creating an infostealer
    • Malware creation, such as code and scripts for ransomware and encryption tools
    • Website denial-of-service (DoS) attacks, which may subsequently lead to extortion
    • Sophisticated phishing, i.e., the creation of believable but fake emails requesting personal or sensitive business information

    The following are two cybercrime cases relating to ChatGPT that have been brought to light by Check Point Research.

    Cybercrime case 1: Creation of an infostealer

    On December 29, 2022, a thread titled "ChatGPT – Benefits of Malware" appeared on an underground hacking forum. Its publisher revealed that he was using ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. He posted an image of the code he developed: a Python-based information stealer. The malware hunts for 12 common file types (such as Microsoft Office documents, PDFs, and images) on a system, copies them into a randomly named folder inside the Temp folder, ZIPs them, and finally uploads them to a hard-coded FTP server.

    Check Point analysts and researchers affirmed that this script is, in fact, a rudimentary infostealer, confirming the publisher's claims. Notably, the files uploaded by the cybercriminal were neither encrypted nor otherwise secured in transit, meaning third parties could also access them.

    The second example posted by the same cybercriminal was a Java snippet, also developed via ChatGPT. It downloads PuTTY (a common SSH and Telnet client) and then runs it covertly on the system using PowerShell. The script can be modified to run any program, including malware that launches DoS attacks, deploys ransomware, or creates a backdoor account.

    Looking at the cybercriminal's participation history on the hacking forum, it is clear that the whole point of his thread was to establish that anyone without deep technical capabilities can use ChatGPT to develop malicious scripts. Check Point Research also revealed that he had been sharing numerous scripts for carrying out phishing attempts and automating post-exploitation techniques.

    This case implies that via ChatGPT, even modestly skilled cybercriminals can polish their skills further, and may even create an entire, separate cybercrime market by leveraging AI.

    Cybercrime case 2: Creation of an encryption tool

    On December 21, 2022, a threat actor going by the username USDoD posted his first-ever Python script, created with ChatGPT, that performs cryptographic operations; analysis by Check Point Research confirmed this. Interestingly, when another cybercriminal saw the script and commented that it resembled OpenAI-generated code, the publisher replied that OpenAI gave him a "nice [helping] hand to finish the script with a nice scope."

    It might not be obvious at first, but the script can be used to carry out a varied set of functions, including the following:

    • The first segment of the script generates a cryptographic key for signing files (specifically, it uses elliptic-curve cryptography with the Ed25519 curve).
    • The next segment encrypts all the files in a given directory. It uses the Blowfish and Twofish algorithms, with keys derived from a hard-coded password.
    • The script also uses RSA keys, certificates stored in PEM format, MAC signing, and the BLAKE2 hash function for hash comparison, among other operations.
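    For reference, the BLAKE2 hash comparison named above is available directly in Python's standard library. The following benign sketch (the file contents are purely illustrative, not taken from the actor's script) shows how two payloads can be checked for integrity:

    ```python
    import hashlib

    def blake2_digest(data: bytes) -> str:
        # Compute a 256-bit BLAKE2b digest of the given bytes.
        return hashlib.blake2b(data, digest_size=32).hexdigest()

    # Comparing digests is a quick way to confirm two payloads are identical.
    original = b"example file contents"
    received = b"example file contents"
    print(blake2_digest(original) == blake2_digest(received))  # True
    ```

    Note that a bare hash only detects accidental corruption; detecting deliberate tampering requires a keyed construction such as a MAC.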

    A notable aspect of the script is that, alongside the encryption functions, decryption is also implemented. The two primary functions in the script shared by the cybercriminal are:

    1. Single-file encryption, and appending a message authentication code to the end of the file.
    2. Encryption of a hard-coded path, and decryption of the list of files received as an argument.
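    Appending and verifying a message authentication code, as the first function reportedly does, can be illustrated benignly with Python's standard hmac module (the key and payload here are hypothetical, not drawn from the actor's script):

    ```python
    import hashlib
    import hmac

    TAG_LEN = 64  # BLAKE2b's default digest size in bytes

    def append_mac(data: bytes, key: bytes) -> bytes:
        # Append an HMAC-BLAKE2b tag so the recipient can detect tampering.
        tag = hmac.new(key, data, hashlib.blake2b).digest()
        return data + tag

    def verify_mac(blob: bytes, key: bytes) -> bool:
        # Split the payload from the trailing tag and recompute the HMAC.
        data, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
        expected = hmac.new(key, data, hashlib.blake2b).digest()
        # compare_digest avoids timing side channels during comparison.
        return hmac.compare_digest(tag, expected)

    key = b"demo-key"
    blob = append_mac(b"payload", key)
    print(verify_mac(blob, key))         # True
    print(verify_mac(blob + b"x", key))  # False
    ```

    Any modification to the payload or the tag causes verification to fail, which is precisely why MACs are standard in legitimate file-transfer and encryption tooling.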

    The implications of these functions might seem unclear at first, but with modifications the script can be used to encrypt someone's entire system. Most unnervingly, a system can be fully encrypted without any interaction from the user; for instance, once the script's syntax problems are fixed, the code can be transformed into ransomware. Although the cybercriminal USDoD does not seem to possess strong technical skills, and it appears he is not a developer, he is popular and well-respected on the underground forum, and he was able to create an encryption tool, which ordinarily requires considerable knowledge, using ChatGPT.

    In conclusion, having accumulated around 100 million users in two months, OpenAI's ChatGPT is one of the fastest-growing platforms for content generation. In an interview with The Wall Street Journal, Microsoft CEO Satya Nadella said he fully intends to incorporate ChatGPT and its functions into Microsoft products. Microsoft, which invested $1 billion in OpenAI in 2019, has announced a further investment reported at $10 billion following ChatGPT's release.

    Although it is backed by tech giants like Microsoft, AI's questionable effects are already reverberating across the cybercrime landscape. The global AI market was valued at $93.5 billion in 2021 and is projected to expand at a compound annual growth rate of 38.1% from 2022 to 2030. This suggests that AI-based cybercrime will rise at a similar pace, as cybercriminals increasingly adopt AI-based technology as a primary attack vector. While industries have to worry about AI's implications for cybersecurity and cybercrime, users and governments have to worry about the ethical implications and standards of incorporating AI, whether as a stand-alone function or bundled with other platforms.

    All things considered, there is no denying that ChatGPT's capabilities can reduce the manual effort required to complete specific tasks. However, ChatGPT is still in a nascent stage, and functions that require high levels of permissions and industry-specific contextual expertise can leave organizations exposed to legal risks and new cyberthreats.

    It would not be wrong to say that ChatGPT's advantages are easily on par with its disadvantages. That is why the next installment in this series will discuss the nature of ChatGPT further, specifically the benefit-to-cost ratio and the trade-off involved in its dual utility in cybercrime and cybersecurity.

    © 2023 Zoho Corporation Pvt. Ltd. All rights reserved.