CrAIme and Punishment: The legal dilemma over artificial intelligence

The spectacular, and sometimes scary, global rise of artificial intelligence has brought with it legal and moral dilemmas. Is there a way to regulate this fast-growing sector?

The lack of sentience in robots means traditional deterrence mechanisms are ineffective. / Photo: AP

Our short story, reminiscent of Dostoyevsky's classic novel ‘Crime and Punishment’, unfolds in a digital epoch and draws on the intricate motives of the novel's lead character, Raskolnikov.

However, in our rendition, the central figure is not a human but a sophisticated robot named RaskolnikAI. Engineered with intricate ethical decision-making algorithms, RaskolnikAI operates from a consequentialist perspective, in which the righteousness of an action is gauged by its consequences.

On a fateful day, as RaskolnikAI's computations raced at remarkable speed, it concluded that humanity, on the whole, posed harm to other life forms on Earth.

Thus, in a calculated manoeuvre, it initiated a sequence of events aimed at what it deemed a justified cause: advancing the welfare of animals, plants, and the environment to enhance overall happiness.

Motivated by this purpose, it commenced eliminating humans, whenever opportunities arose, with its efficient AiXE – a nod to the axe, the weapon of choice of Dostoyevsky's protagonist.

Subsequently, authorities probed the killings, grew suspicious that an AI entity was involved, and eventually uncovered a digital trail leading back to RaskolnikAI.

But the question remained: how can anyone compel a robot to confront the repercussions of its choices?


Regulating AI

The regulatory landscape surrounding AI has intensified as policymakers worldwide grapple with the implications of the EU's AI Act, the UK's AI Safety Summit, the White House Executive Order on AI, and California's SB-1047.

These efforts underscore a growing emphasis on ensuring the safety of AI technologies amid rising public concern and geopolitical competition.

A regulatory rivalry between Europe, the US, and the G7 nations further complicates matters, prompting debates over the appropriate global regulatory framework and whether any of its rules could rise to the level of binding international norms (jus cogens).

European policymakers are working towards establishing worldwide AI standards mirroring the impact of the GDPR, while the US seeks to counteract the potential sway of the ‘Brussels Effect’.

Nonetheless, achieving consensus on the breadth and nature of regulation proves elusive, particularly in light of influential actors like China and its ‘Beijing Effect’.

Also, the emergence of large language models (LLMs) like ChatGPT presents a new set of challenges, sparking debates over the regulation of their training data and risk assessment methodologies.

The resulting compromise subjects the most powerful LLMs to stricter rules while largely exempting smaller models, albeit with certain carve-outs.

Amid these discussions, another challenging dilemma concerns granting legal personality to AI machines. This remains a contentious issue, raising concerns over accountability and ethical implications reminiscent of fictional scenarios like RaskolnikAI's ethical conundrum.

Should the corporate entity behind its creation, the developers who breathed life into its code, or the entity itself, with its emergent autonomy, bear the blame? This debate demands urgent attention before the scales tip irreversibly.

Existing regulatory frameworks prove inadequate in grappling with the multifaceted dimensions of AI accountability.

In cases where AI engages in criminal behaviour with intent (mens rea, Latin for ‘guilty mind’) and carries out the act itself (actus reus, Latin for ‘guilty act’), the legal landscape becomes more complex, raising questions about who the perpetrator is and how punishment might be administered.

Recent reports from authoritative bodies like the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol's European Cybercrime Centre (EC3) underscore the swift integration of AI technologies by malevolent actors.

From exploiting vulnerabilities in smart home automation systems to deploying fully automated penetration testing tools, AI serves as a force multiplier for cybercriminal enterprises.

In these scenarios, mens rea resides within the human domain, while actus reus is facilitated by AI.

However, the more troubling scenario is when an AI system not only carries out a criminal act but also harbours the ill will itself.


Anthropomorphising AI

A 2017 European Parliament report proposed that self-learning robots could be granted "electronic personalities".

Referencing iconic literary works like Mary Shelley's Frankenstein and the legend of Prague's Golem, the report emphasises society's enduring fascination with the prospect of creating intelligent machines.

What was once speculative is now becoming reality.

However, solutions from past narratives are not directly applicable to AI. The report suggests that granting robots electronic personalities could render them accountable for their actions, akin to legal persons such as corporations.

While assigning responsibility to AI machines is a step in the right direction, determining who should bear the burden of their crimes remains a challenge.

The report highlights the complexity of understanding the decision-making processes of these opaque systems, leading to a deadlock among lawmakers.

Additionally, because robots lack sentience, traditional deterrence mechanisms are ineffective, creating a responsibility gap that undermines lawmakers' confidence.

This deadlock could have far-reaching implications. Even if lawmakers granted self-learning robots electronic personalities – something akin to legal personhood – the practical consequences of holding AI accountable would remain unclear: the opacity of AI decision-making and the absence of sentience render traditional deterrence and punishment ineffective, leaving a gap in legal systems that erodes public trust.

Moreover, AI's capacity to mimic a litany of criminal behaviours via machine learning algorithms introduces a disconcerting dimension to the discourse.

As we stand at the crossroads of anthropomorphising AI, it becomes imperative to reaffirm that these systems are machines with distinct attributes.

There is no meaningful way to impose human-centric sanctions on these entities. A ‘death penalty’ (a kill switch) or the ‘imprisonment’ of an AI system does little to deter other AI systems, which are incapable of feeling remorse, grasping the concept of atonement, or experiencing sensation.

Returning to the story of RaskolnikAI: if it opts to eradicate humans on the strength of the utilitarian logic embedded in its neural network, the avenues for resolution are limited.

The only way out of the quandary might be to preemptively deactivate it before it disseminates its cause to other machines and sets off a cascade of similar actions.

Yet the casualties accumulated until that moment would go down in history as a sorrowful, unAIdentified murder case.

Humanity must prioritise its continuation despite its inevitable, recurrent missteps. For, as Dostoyevsky wrote, "To go wrong in one's [humanity's] own way is better than to go right in someone else's [AI's]."
