
Cybercrime and cybersecurity: opportunities and risks through the use of AI

We live in an interconnected world, and the threats we face are subject to constant change. Maintaining security is therefore a continuous process that individuals, organizations and society as a whole must pursue to the best of their abilities. Alongside resilient systems that reduce economic and ecological risks (e.g. natural disasters and loss of biodiversity), cybersecurity and, above all, cyber resilience are becoming increasingly important.

The use of artificial intelligence (AI) is changing the threat landscape and presenting new challenges for cybersecurity systems that should be addressed in terms of corporate digital responsibility (CDR).

Need for resilient processes and systems

With almost 70 new vulnerabilities in software products registered every day – around 25% more than in the previous reporting period (June 1, 2022–June 20, 2023 compared to June 1, 2021–May 21, 2022)[1] – the threat level in cyberspace is higher than ever before. Hacker attacks, e.g. via malware in links and email attachments or deepfakes (e.g. face and voice fakes), cause outages of business services and increasingly jeopardize the (digital) reputation and standing of people, companies and brands, especially in social networks. According to the digital association Bitkom, cyber attacks caused companies in Germany losses amounting to EUR 203 billion in 2022.[2]

The number of cyber attacks in the financial sector likewise rose sharply on a global scale: in 2022, there was an average of 1,131 cyber attacks per week, over 52% more than in 2021.[3]

(New) cybercrime threats to financial institutions

Classic cybercrime threats

Apart from classic cyber attacks, such as

  • malware and botnets (groups of bots, i.e. systems infected with malware that are remotely controlled via a central command-and-control server),
  • denial-of-service (DoS) attacks, i.e. the flooding of web servers with requests so that certain websites and Internet services are no longer accessible,[1]
  • and ransomware attacks (digital blackmail), such as the recent attack – presumably by LockBit – on the US subsidiary of the largest Chinese bank (ICBC),[4]

new dangers are emerging from the use of generative AI models, which is changing the threat landscape.

(AI) cybercrime threats

For example, AI and deep neural networks facilitate the identification of vulnerabilities in program code and the creation of high-quality forgeries of text, voice and video messages.[1] It is important to emphasize that, fortunately, generative AI models cannot yet carry out cyber attacks autonomously – they have no will or intent of their own and only know what they can learn from the data they are provided with. What makes them dangerous in the first place are the criminals who abuse them.[5]

One example is FraudGPT, an unmoderated AI chatbot currently circulating on the dark web that was created exclusively for criminal purposes such as phishing emails, cracking (circumventing protection measures, e.g. password cracking) and carding (credit card data-related crime).[6]

Fake content created using generative AI and deep neural networks is colloquially referred to as a “deepfake”. Deepfakes can be divided into the media forms of video/image, audio and text; depending on the media form, a successful attack requires different attack methods and types of data.[1]

The dangers posed by deepfakes include, for example:

  • Danger for biometric systems: remote identification methods (e.g. mobile voice recognition or video identification) are exposed to major manipulation risks, as AI (voice) models and chatbots can now easily be integrated into various applications or plug-ins. For example, they can interact with the Internet and imitate voices and entire conversations very authentically.[1]
  • Social engineering: when it comes to cybersecurity, vulnerabilities are found not just in IT systems and networks, but also in the humans who use and maintain them. Criminals use large language models (LLMs) and AI specifically for phishing and scam attacks, deceiving the “human factor” – the supposedly weakest link in the security chain – with artificially generated audio, image and video formats.[1] Spam and phishing emails generated with LLMs contain virtually no spelling or grammatical errors and mimic human reactions very well in their choice of words and language style (= social engineering). For example, employees could be induced to disclose credentials under the pretense of a false identity.[7] Attackers could also develop an AI application that imitates an executive’s voice and calls an employee to trick them into making a financial transaction (“CEO fraud”).
  • Disinformation: disinformation campaigns are another danger. For example, manipulated media content of key individuals can be used to damage reputations and spread false information.

Cybercrime-as-a-Service

The new possibilities result in an increasing professionalization of cyber attacks: Cybercrime-as-a-Service is becoming established as a business model among criminals. For example, some criminals offer ransomware in exchange for a share of the ransom. Some speak of a “shadow economy of cybercrime” that is becoming increasingly interconnected and thus mirrors the structures of the real economy.[1]

Defending against cybercrime with machine learning and AI

In an attack scenario, machine learning and AI are considered threats. However, they play an equally important role on the defense side of things and offer great opportunities in terms of cybersecurity. Did you know that machine learning has been used to detect anomalies in computer systems since as early as 1985?[8]

Nowadays, machine learning is used to identify spam and phishing emails, malware, fraud attempts, dangerous websites, conspicuous user behavior and anomalies in network traffic. Such systems can issue alerts, delete spam, block websites and viruses, lock accounts and, in some cases, take measures beyond that (keyword: SOAR – Security Orchestration, Automation and Response). In media forensics, machine learning is used to detect forgeries and manipulations and thus provide clues to possible fake news.[8]
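
To illustrate the principle, here is a minimal sketch of how such a text-based filter could look, using scikit-learn. Everything in it – the training emails, labels and model choice – is hypothetical; a production filter is trained on millions of messages and combines many more signals (headers, sender reputation, embedded URLs).

```python
# Minimal sketch of an ML-based spam/phishing classifier (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: email bodies with labels (1 = phishing/spam).
emails = [
    "Your account has been locked. Verify your password here immediately.",
    "Congratulations, you have won a prize! Click the link to claim it.",
    "Hi team, attached are the meeting notes from yesterday.",
    "Reminder: the quarterly report is due on Friday.",
]
labels = [1, 1, 0, 0]

# Character n-grams are fairly robust against the small spelling tricks
# attackers use to evade word-based filters.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(emails, labels)

suspicious = "Urgent: confirm your credentials or your account will be closed."
print(model.predict_proba([suspicious])[0][1])  # estimated phishing probability
```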

The use of AI in turn facilitates the automatic search for system vulnerabilities in the code.[9]
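
As a deliberately simplified illustration of what such an automated scan does, the following sketch walks a Python file’s syntax tree and flags calls to known-risky functions. The RISKY_CALLS list is a made-up example; real (and AI-assisted) scanners detect far more subtle patterns than this rule-based check.

```python
# Deliberately simplified sketch of an automated vulnerability scan:
# walk a Python file's syntax tree and flag known-risky calls.
import ast

RISKY_CALLS = {"eval", "exec"}  # classic code-injection sinks (example list)

def scan(source: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
print(scan(sample))  # -> ['line 2: call to eval()']
```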

AI assistance systems: minimizing security risk

In the future, AI assistance systems might be able to contribute to security even more:[1]

  • They can increase the productivity of IT departments by taking over repetitive tasks and helping with decision-making, firewall configurations as well as code and malware analyses.
  • They can also compensate for the poor usability of security tools and reduce human error.
  • AI assistance systems also support employees and end users in configuring access rights and alert them when confidential emails are about to be sent to the wrong recipients. One example of this is “Sofie”, an AI-based bot[10] that warns employees of cyber risks directly via MS Teams with security alerts (see the sketch below).
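
How “Sofie” is implemented internally is not public. Purely to illustrate the general mechanism of pushing an alert into a Teams channel, here is a minimal sketch using an MS Teams incoming webhook; the webhook URL, recipient and alert text are hypothetical placeholders.

```python
# Minimal sketch of a security-alert bot posting to an MS Teams channel
# via an incoming webhook (the URL below is a hypothetical placeholder).
import requests

WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/..."  # placeholder

def send_security_alert(user: str, risk: str) -> None:
    """Post a short security warning addressed to a user into the channel."""
    payload = {"text": f"Security alert for {user}: {risk}"}
    response = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    response.raise_for_status()

send_security_alert(
    "j.doe@example.com",  # hypothetical recipient
    "This email appears to contain confidential data and is addressed to "
    "an external recipient. Please double-check before sending.",
)
```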

The limits of AI in terms of IT security

It seems to be only a small step from the aforementioned assistance systems to solutions that completely take over configuration tasks and other IT management responsibilities. However, a closer analysis shows that there are still various limitations to the use of AI in defending against cyber attacks, for example with regard to the accuracy of its results. Given the large volumes of training data, it is not uncommon for them to contain misinformation.[11] Imagine an AI misclassifying the Black Friday traffic peak as a cyber attack (a false positive) and then closing all external Internet connections at the firewall.
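
A common mitigation is to keep a human in the loop for disruptive actions. The following hypothetical sketch (all names and thresholds are made up) shows how a strict confidence gate could prevent exactly this scenario: low-confidence detections raise an alert for an analyst instead of triggering an automatic block.

```python
# Hypothetical sketch of a human-in-the-loop gate for automated responses.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "ddos_suspected"
    confidence: float  # model confidence in [0, 1]

AUTO_BLOCK_THRESHOLD = 0.99  # deliberately strict for disruptive actions

def respond(detection: Detection) -> str:
    if detection.kind == "ddos_suspected":
        if detection.confidence >= AUTO_BLOCK_THRESHOLD:
            return "block external connections automatically"
        # A Black Friday traffic peak can look like an attack; an analyst
        # reviews the case instead of cutting off all customers.
        return "raise alert and wait for analyst approval"
    return "log only"

print(respond(Detection("ddos_suspected", confidence=0.87)))
```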

Full AI automation is still rarely achievable, owing to the lack of sufficiently realistic test data[9] and of the complex problem-solving skills required to initiate system-wide damage mitigation and security restoration measures.[11] For this reason, AI is currently better suited as a supportive security tool – for example, to detect and presort deepfakes – than as a fully automated one.

In addition, when using AI assistance systems and collecting information about the organization in an application equipped with, say, an AI language model, the attack surface created by these large amounts of data must always be taken into account. The assistance system may know more about an organization than individual employees do and may carry out actions automatically. Administrators and system managers must therefore constantly question, understand and monitor the mechanisms behind the interactions initiated and the outputs generated by the model.[1]

However, in view of the skills shortage and the expected increase in automated attacks by autonomous, intelligent and scalable bots, an automated AI defense seems a requirement for the future in order to be able to react in time.[9]

Building cyber resilience

With regard to the dangers and vulnerabilities mentioned, we recommend above all investing in the skills and qualifications of your employees, both in terms of cyber threat awareness and the continuous development of information security management and resilience skills.

In addition to the further development of cyber resilience processes, other elements are necessary in order to minimize human vulnerabilities, ensure the operability of important and critical processes and protect customer data in the best possible way. These include

  • the risk-appropriate use of data-driven (AI) security software,
  • effective and – in the medium term – quantum-computing-secure encryption technologies,
  • emergency plans and drills,
  • regular stress tests of the IT infrastructure,
  • and upskilling in the area of threat analysis (e.g. new (AI) cybercrime methods).

What do you think – how can organizations make their processes and systems more resilient? What measures is your organization already taking?

Feel free to contact us!


Stephan Sahm

Senior Manager, Office Frankfurt

Julia Schraut

Expert, Office Berlin
