Ever since its release in November 2022, the large language model ChatGPT has taken the world by storm with applications ranging from writing essays and programming code to making jokes and polishing resumes. It has become so popular that several universities in Hong Kong have either banned or regulated its use in academic work.

Experts warn of vulnerabilities

However, according to a report released by Europol, ChatGPT can be exploited to carry out illegal activities. “The impact these types of models might have on the work of law enforcement can already be anticipated. Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT,” the report stated.

While ChatGPT is programmed to refuse requests that may cause harm, there are ways to get around its content filters. Users who have done so have tricked the model into giving them step-by-step instructions on how to shoplift and build bombs.

This ready-to-use information can be used by people with no prior knowledge of a particular crime, warns the Europol report: “ChatGPT excels at providing the user with ready-to-use information in response to a wide range of prompts. If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps.”


Europol and top executives express concerns about growing risks

Europol’s experts identified the following three crime areas in which ChatGPT could be exploited that are of most concern to law enforcement:

  • Fraud and social engineering: The language model’s ability to draft highly realistic text can be abused to reproduce the language patterns and speech style of specific individuals or groups, making it a useful tool for phishing and other impersonation offences.
  • Disinformation: ChatGPT’s ability to generate authentic-sounding text at speed and scale makes it ideal for propaganda and disinformation purposes.
  • Cybercrime: The model can generate code in various programming languages, which could help a potential criminal with little technical knowledge produce malicious code.

In March 2023, OpenAI, the company behind ChatGPT, released GPT-4, which is said to solve more advanced problems more accurately and to be less likely to respond to requests for ‘disallowed content’. However, Europol’s experts found that the potential criminal uses identified for GPT-3.5 still work on GPT-4.

Last week, Elon Musk and leading AI researchers called for a pause on large-scale AI experiments and for the establishment of independent regulators to ensure the safety of future AI systems. “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium”, the open letter says.




From the Middle East to the Far East and a couple of places in between, Anjali has lived in no fewer than seven cities in Asia, and has travelled extensively in the region. She worked as a lifestyle journalist in India before coming to Hong Kong, where her favourite thing to do is island-hopping with her daughter. You can check out her musings on motherhood, courtesy her Instagram profile.
