Artificial intelligence has brought significant benefits to humanity over the last decade, and as AI gradually makes its way into everyday digital services, this trend will likely persist for the foreseeable future. Many governments around the world are considering AI systems and applications to support their operations; in particular, such software makes crime detection and crime prediction easier. Intelligence and national security organizations have also recognized the potential of AI technology to help accomplish national and public security objectives. Yet alongside its many benefits, artificial intelligence has serious drawbacks, because it can be extremely destructive in the hands of criminals. Criminal justice systems should therefore address artificial intelligence and the effects of its use in the legal system.
There is ongoing discussion in international political and legislative circles about revising and refining the scope and thresholds of liability for AI systems and technologies. However, given the complexity of the subject and the disparate legal frameworks for defining civil liability worldwide, agreement on a practical and universal solution is unlikely, at least in the near term. At the same time, AI and machine learning can strengthen cybersecurity solutions, helping to reduce security risks and to identify and respond to cyberattacks that target critical infrastructure sectors such as water, power, and energy. Nevertheless, many difficult obstacles remain, particularly for small and medium-sized businesses (SMBs), which must rely on limited funding to strengthen their cybersecurity capabilities.
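To make the idea of AI-assisted cyberdefense concrete, the sketch below shows the simplest possible form of the anomaly detection such tools automate at scale: flagging traffic volumes that deviate sharply from a statistical baseline. This is a minimal illustration, not a production technique; the traffic figures and the z-score threshold are invented for the example.

```python
# Minimal sketch of statistical anomaly detection over network traffic,
# the kind of monitoring AI-driven security tools automate at scale.
# All numbers and the threshold below are hypothetical illustrations.
from statistics import mean, stdev

def flag_anomalies(requests_per_minute, z_threshold=2.5):
    """Return indices of minutes whose request volume deviates sharply
    from the baseline (z-score above the threshold)."""
    mu = mean(requests_per_minute)
    sigma = stdev(requests_per_minute)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, v in enumerate(requests_per_minute)
            if abs(v - mu) / sigma > z_threshold]

# A sudden spike (index 5) stands out against an otherwise quiet baseline.
traffic = [12, 14, 11, 13, 12, 250, 13, 12, 11, 14]
print(flag_anomalies(traffic))  # → [5]
```

Real systems replace this hand-written statistic with trained models over many features, but the principle of learning a baseline and flagging deviations is the same.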
A significant portion of the world's connected population was locked down as a result of the COVID-19 pandemic. This situation made businesses and individuals more reliant on AI-based systems, technologies, and applications for tasks such as remote work, distance learning, online payments, and entertainment options such as streaming and video-on-demand services. Unfortunately, it also presented an opportunity for organized criminal groups, which reconfigured their illicit activities to hit a variety of targets, including international organizations, health-sector organizations, supply chain businesses, and individuals. We have seen how organized criminal groups have honed their crime-as-a-service (CaaS) capabilities in particular, using their operations to generate greater financial gains with little chance of being detected by law enforcement and prosecuted.
Cybercriminals have discovered in AI technology a powerful way to further their illegal operations, including fresh ways to plan and execute attacks against organizations, governments, and individuals. Even though there is little proof that criminal organizations possess the technical know-how needed to manage and manipulate AI and machine learning systems for illicit ends, these organizations are clearly aware of the technologies' great potential for illicit and disruptive uses. Additionally, professional hackers are recruited into organized criminal groups' ranks to misuse and exploit computer systems, launch attacks, and carry out unlawful actions around the clock from anywhere in the world.
According to current trends and data, hackers increasingly use the Internet of Things (IoT) to create and disseminate malware and launch ransomware attacks, efforts that are largely boosted by AI technology. This trend is predicted to continue: within the next five years, more than 2.5 million devices, including industrial machinery and equipment run by critical infrastructure operators, are expected to be fully connected online, increasing the vulnerability of businesses and consumers to cyberattacks.
In many international and governmental circles, bias and discrimination are also pertinent topics for AI policy debate. Although some governments may find facial recognition appealing for enhancing public security and safety and for prioritizing national security activities, such as countering terrorism, the technology raises pertinent and contentious issues relating to the protection of fundamental rights, including privacy and data protection. As a result, the widespread use of facial recognition systems deserves more attention in international policy circles.
There is also a persistent global trend of spreading false information with AI tools called “bots.” Bots are mainly employed to propagate false information over the Internet and social media in order to disinform and mislead the public, particularly younger generations who find it difficult to distinguish reliable sources of information from fake news. The deployment of bots can also undermine confidence, cast doubt on the media's objectivity, and weaken democratic and governmental institutions.
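To make the moderation challenge concrete, here is a toy sketch of automated screening over a stream of posts. Real platforms rely on trained language models rather than keyword lists; every marker phrase and threshold below is invented purely for illustration.

```python
# Toy sketch of automated content screening of the kind AI systems
# perform over large volumes of social-media posts.
# The marker phrases and scoring rule are invented for illustration;
# real systems use trained language models, not hand-written lists.

SENSATIONAL_MARKERS = {
    "shocking", "miracle", "exposed", "they don't want you to know",
}

def suspicion_score(post: str) -> int:
    """Count sensationalist markers in a post (a crude disinformation signal)."""
    text = post.lower()
    return sum(1 for marker in SENSATIONAL_MARKERS if marker in text)

posts = [
    "Local council publishes annual water-quality report",
    "SHOCKING miracle cure EXPOSED - they don't want you to know!",
]
# Flag posts with two or more markers for human review.
flagged = [p for p in posts if suspicion_score(p) >= 2]
print(len(flagged))  # → 1
```

Note that even this crude rule only prioritizes posts for human review; as the next paragraph observes, verifying the reliability of a source still falls to human moderators.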
Although AI offers the potential to improve the processing of large volumes of data to curb the spread of false information on social networks (Velasco, 2022), humans still struggle with the difficulty of examining and confirming the reliability of sources. Content moderators at media outlets and technology corporations without direct ties to government typically carry out this work. This situation has prompted key policy-making organizations, such as the European Commission, to implement extensive actions to combat the growth and effects of online disinformation. Deepfakes are a further trend and technology employed extensively across various sectors.
The exploitation and misuse of deepfakes have risen to the top of national political and law enforcement priorities. Deepfakes, which may be combined with social engineering strategies and system automation to carry out fraud, other illegal activity, and cyberattacks, have been used to imitate politicians, celebrities, and corporate CEOs (Velasco, 2022). Cybercriminals worldwide are already using deepfake technology for harmful ends, and the trend continues to grow. In 2019, for instance, fraudsters used AI speech-generating software to impersonate the voice of the chief executive of a UK-based energy firm.
Criminal justice systems should develop an approach to artificial intelligence and the effects of its use in the legal system. We have observed how organized criminal organizations have honed their skills and increased the financial gains from their operations with little chance of being detected by law enforcement and held accountable. Even if the proof remains limited, criminal organizations have recognized the enormous potential that artificial intelligence and machine learning systems hold for illicit and subversive activities.
The current situation around AI is worrisome in its potential consequences for the whole world. Disinformation is increasingly transmitted throughout the globe via “bots,” or artificial intelligence tools, which are mainly used to propagate false information on the Internet and social media. By deceiving the public, particularly the younger generation, who find it difficult to distinguish reliable sources of information from false ones, they have a damaging impact. Deepfake technology is likewise being abused for nefarious purposes by hackers worldwide. All of this underscores the pressing need to improve criminal justice systems operating under the auspices of international institutions.
Velasco, C. (2022). Cybercrime and artificial intelligence: An overview of the work of international organizations on criminal justice and the international applicable instruments. ERA Forum, 23(1), 109–126.