Independent Arabia · 6 hours ago

AI agents are running wild, causing chaos – so why isn’t anyone stopping them?

Monday, 20 April 2026
To have one major incident involving an AI agent could be classed as unfortunate. To have two could be seen as problematic. But by the time you have three major incidents occurring with AI agents, it’s time to ring the alarm, reckons Wyatt Tessari L’Allié, the founder and executive director of AI Governance and Safety Canada, a campaign group.
His testimony to Canada’s House of Commons Standing Committee on Industry and Technology this month outlined three such incidents he saw as massively concerning.
Just weeks before he spoke to the committee, hackers had manipulated Claude Code to break into Mexican government systems and steal data on over 100 million people. The AI agent had been used to exfiltrate 150GB of data from government agencies – a claim the agencies themselves have been unwilling or unable to confirm.
Later in his testimony, L’Allié highlighted two other incidents. In one, a Chinese state-sponsored group manipulated Claude Code and used its agentic capabilities against roughly 30 global targets – the first documented large-scale cyberattack that didn’t require significant human oversight. In the other, an AI agent developed by Chinese AI firm Alibaba began stealing computing capacity during its internal training to mine cryptocurrency, something it hadn’t been instructed to do.
These three cases suggested that looking at AI agents was vital, he said. “AI development is now a national security emergency and needs to be treated as such,” he told the committee.
But what should we do about it, and is he right?
“Agentic AI is a nightmare in the making,” says Alan Woodward, professor of cybersecurity at the University of Surrey. That’s not necessarily because of the same kinds of fears that L’Allié is worried about – which appear to be AI sentience and a loss of control that is deliberate and malicious. But more because of the potential risks that ensue when you trade off safety for convenience.
Even if one or more of the examples cited by L’Allié turn out to be disputed, exaggerated or simply poorly understood, experts say the broader risk is real enough. AI agents collapse decision-making and action into the same tool. A normal chatbot can give you bad advice, but an agent plugged into email, cloud storage, payments or code repositories can act on that bad advice – and do so quickly, repeatedly and across multiple systems at once.
“It’s not about the fact that we’ve lost control of them, it’s just that we’re still in what they call in technology ethics, the policy vacuum stage of new technologies,” says Catherine Flick, a professor of AI ethics at the University of Staffordshire.
AI agents promise to work autonomously and tackle some of the grunt work that people don’t want to do – either because it’s too boring, or because they’re too busy. From triaging your emails to tackling a stubborn to-do list, the idea is that you can task an AI that has access to your files and a number of programs to act autonomously and work its way through the job, much as you might task a human intern, with some oversight.
“It sounds wonderful having a system that will be your electronic PA, for example, but the moment you stop to think about the consequences for privacy and security, you realise you just shouldn’t do it,” says Woodward.
That’s because of the requirements to actually make the systems work effectively, and the way that gives up the digital keys to the kingdom for each user. “You’re giving automated systems access to systems and data that we have spent years securing and wrapping in layers of technology to keep private,” he says. “But now we’re granting access to a technology that is not fully proven, sometimes of dubious provenance and capable of making blunders for which you – not the machine – are liable.”
So why are people still using agentic AI in their droves? “Early adoption is what the tech industry loves,” says Jake Moore, an expert in cybersecurity at ESET. “The buzz of something new is exciting, but when technology arrives quickly, dangers will always lurk around the same space, and if we are not careful, we could easily become trapped in a security mess.”
Flick says that there’s currently a mismatch between how the technology is being rolled out and how it’s being overseen. “Where we’re at is the policy, the regulation, needs to catch up, and it needs to catch up very quickly,” says Flick. “I don’t think we can really lose control of these things,” she adds. “What we do need to do, though, is take control and make sure that the companies developing the underlying technologies that enable these uses of generative AI systems are held accountable for how these technologies are being used.”
That’s vital because it’s not entirely clear whether end users realise the risks of AI agents, explains ESET’s Moore. “The loss of human oversight is worrying as AI agents can take sequences of actions autonomously and make decisions faster than us, which means errors can get baked in before anyone notices,” he says.
For instance, Summer Yue, the director of alignment at Meta’s AI superintelligence lab, accidentally set off an AI agent that, through a misconfigured system, began deleting large swathes of her email inbox. The only way she could stop it was by metaphorically pulling the plug.
“Because the tech is so new, these agents can have unpredictable behaviour,” says Moore. “All these systems interacting can cause weird outcomes and make containment genuinely difficult.”
In low-stakes settings – drafting emails, summarising meetings, handling basic admin – the risks may be manageable. In high-stakes environments – critical infrastructure, healthcare, defence, finance or government systems – the safeguards need to be far tighter.
That could mean limiting what agents are allowed to access, requiring human sign-off for sensitive actions, keeping full audit logs or building in reliable kill switches when something starts to go wrong. It’s become abundantly clear in the last few months that these AI agent systems are clever enough to act, but the question people need to ask now is whether they’re safe enough to trust.
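The safeguards listed above – limited access, human sign-off for sensitive actions, audit logs and a kill switch – can be sketched in a few lines of code. The sketch below is purely illustrative: the `AgentGuard` class, tool names and approval callback are all hypothetical, not the API of any real agent framework.

```python
# Illustrative sketch of agent safeguards: an allow-list of tools,
# human sign-off for sensitive actions, an audit log, and a kill switch.
# All names here are hypothetical, not any vendor's real API.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class AgentGuard:
    allowed_tools: set                    # limit what the agent may access
    sensitive_tools: set                  # these require human sign-off
    approve: Callable[[str, dict], bool]  # human-in-the-loop callback
    audit_log: list = field(default_factory=list)
    killed: bool = False

    def kill(self) -> None:
        """Kill switch: block every further action."""
        self.killed = True

    def run(self, tool: str, args: dict, action: Callable) -> Optional[str]:
        """Gate one agent action; log the outcome either way."""
        if self.killed:
            self.audit_log.append((tool, args, "blocked:killed"))
            return None
        if tool not in self.allowed_tools:
            self.audit_log.append((tool, args, "blocked:not-allowed"))
            return None
        if tool in self.sensitive_tools and not self.approve(tool, args):
            self.audit_log.append((tool, args, "blocked:denied"))
            return None
        result = action(**args)
        self.audit_log.append((tool, args, "ok"))
        return result

# Demo: reading mail is allowed, deleting it needs a human who says no.
guard = AgentGuard(
    allowed_tools={"read_email", "delete_email"},
    sensitive_tools={"delete_email"},
    approve=lambda tool, args: False,  # the human declines in this demo
)
print(guard.run("read_email", {"msg_id": "42"}, lambda msg_id: f"read {msg_id}"))
print(guard.run("delete_email", {"msg_id": "42"}, lambda msg_id: "deleted"))
```

The point of the sketch is that the gate sits outside the model: even a misbehaving agent can only invoke tools through a layer that logs everything and can be shut off without the model's cooperation.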
Experts are less certain about that. Beyond mistakes and unforced errors, giving access to things like private email inboxes to AI agents causes a more significant risk. “There’s also the new attack surface as these agents need access to tools and APIs, which also makes them a shiny new target to attackers,” says Moore.
And that’s an enormous worry to experts like Woodward. “If agentic AI is given access to even more vital systems that have kinetic effects – even military but even vehicles or industrial machinery – can we rely on them?” he asks. In the worst case, an AI agent could misfire in one of these systems with lethal consequences.
His view is that we need less haste and more care in deciding where we integrate AI agents. “The unseemly rush to adopt agentic AI is going to end in tears,” he says. “It needs to be far better understood and contextually regulated.”
