AI has no idea what it’s doing, but it’s threatening us all
Artificial intelligence is reshaping law, ethics, and society at a speed that threatens fundamental human dignity. Dr. Maria Randazzo of Charles Darwin University warns that current regulation fails to protect rights such as privacy, autonomy, and anti-discrimination. The “black box problem” leaves people unable to trace or challenge AI decisions that may harm them.
The age of artificial intelligence (AI) has transformed our interactions, but threatens human dignity on a worldwide scale, according to a study led by Charles Darwin University (CDU).
Study lead author Dr Maria Randazzo, an academic from CDU's School of Law, found the technology was reshaping Western legal and ethical landscapes at unprecedented speed but was undermining democratic values and deepening systemic biases.
Dr Randazzo said current regulation failed to prioritize fundamental human rights and freedoms such as privacy, anti-discrimination, user autonomy, and intellectual property rights, largely because of the untraceable nature of many algorithmic models.
Calling this lack of transparency the "black box problem," Dr Randazzo said decisions made by deep-learning or machine-learning processes were impossible for humans to trace, making it difficult for users to determine whether and why an AI model has violated their rights and dignity, and to seek justice where necessary.
"This is a very significant issue that is only going to get worse without adequate regulation," Dr Randazzo said.
"AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behavior.
"It has no clue what it's doing or why - there's no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom."
Currently, the world's three dominant digital powers - the United States, China, and the European Union - are taking markedly different approaches to AI, leaning on market-centric, state-centric, and human-centric models respectively.
Dr Randazzo said the EU's human-centric approach is the preferred path to protect human dignity but without a global commitment to this goal, even that approach falls short.
"Globally, if we don't anchor AI development to what makes us human - our capacity to choose, to feel, to reason with care, empathy, and compassion - we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition," she said.
"Humankind must not be treated as a means to an end."
"Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes" was published in the Australian Journal of Human Rights.
The paper is the first in a trilogy Dr Randazzo will produce on the topic.