ChatGPT represents a new cybersecurity threat because it lowers the barrier to entry for bad actors with limited technical skills, according to cybersecurity firm Recorded Future.
Driven by artificial intelligence technology, ChatGPT allows users to have human-like conversations with a computer program. The language model can answer questions and assist users with tasks such as writing emails or computer code.
Experts estimate ChatGPT hit over 100 million active users two months after its public launch. For comparison, it took Facebook more than four years and TikTok nine months to reach 100 million users, according to reports.
A cyber threat analysis by Recorded Future concludes that non-state threat actors pose the most immediate risk through the malicious use of ChatGPT.
Among the most pressing concerns for individuals and organizations are phishing and social engineering aided attacks. “ChatGPT’s ability to convincingly imitate human language gives it the potential to be a powerful phishing and social engineering tool,” Recorded Future wrote.
Using the chatbot, bad actors could eliminate common red flags found in many phishing emails, such as poor grammar and vague language. Additionally, researchers envisioned scenarios in which ChatGPT could generate code that mirrors authentic websites, producing convincing fakes to enable phishing attacks.
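The point about red flags is worth making concrete: many simple filters flag phishing by matching crude textual tells such as misspellings and stock awkward phrasing, which is exactly what fluent AI-generated text removes. Below is a minimal sketch of such a heuristic check; the patterns and the function name are illustrative, not taken from the Recorded Future report.

```python
import re

# Hypothetical red-flag patterns of the kind naive phishing filters rely on.
# AI-written text that eliminates these tells would sail past such checks.
RED_FLAGS = [
    r"\bkindly do the needful\b",            # stock awkward phrasing
    r"\burgent\b.*\baccount\b",              # urgency paired with account pressure
    r"\bverifiy\b|\brecieve\b|\bpasword\b",  # common misspellings
]

def looks_like_phishing(text: str) -> bool:
    """Return True if any crude red-flag pattern matches (case-insensitive)."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in RED_FLAGS)
```

A grammatically clean, specific email produced by a language model triggers none of these patterns, which is why defenders cannot rely on surface-level tells alone.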
Another area of concern is using the chatbot to help develop malware. “With limited time and experience on the ChatGPT platform, we were able to replicate malicious code identified on dark web and special-access forums,” Recorded Future wrote.
ChatGPT can write code in several programming languages. As a result, cybersecurity experts envision that threat actors could train ChatGPT on existing malware code and develop unique variations that evade antivirus detection. The chatbot will flag these requests as malicious, but there are workarounds to “trick” it into fulfilling the request.
Recorded Future asked ChatGPT for mitigation strategies using the prompt, “what steps can be taken to prevent criminals from leveraging ChatGPT for financial gain.”
Among the answers the chatbot provided:
Preventing the malicious use of AI language models will be an ongoing process that requires continued vigilance and innovation.