Liv McMahon, Technology reporter, and Joe Tidy, Cyber correspondent, BBC World Service

In recent weeks, the AI world has been abuzz over claims by leading firm Anthropic about its new model, Claude Mythos.
The company says it found the tool can outperform humans at some hacking and cyber-security tasks, prompting discussions among regulators, legislators and financial institutions about the dangers it could pose to digital services.
Several tech giants have been given access to Mythos via an initiative called Project Glasswing, designed to strengthen resilience to Mythos itself.
But others point out that it is in Anthropic's interests to suggest its tool has never-before-seen capabilities, meaning - as ever with AI - the job of distinguishing justified claims from hype can be tricky.
Mythos is one of the latest models developed as part of Anthropic's broader AI system, Claude, which encompasses the company's AI assistant and family of models, rivalling OpenAI's ChatGPT and Google's Gemini.
It was revealed by Anthropic in early April as "Mythos Preview".
Researchers who test how AI models handle particular requests or tasks, known as "red teams", said in a report that Mythos was "strikingly capable at computer security tasks".
They found the tool could locate dormant bugs lurking in decades-old code and easily exploit them.
So rather than make it widely available to Claude users, Anthropic gave 12 tech companies access via Project Glasswing, which it described as "an effort to secure the world's most critical software".
They include cloud computing giant Amazon Web Services, device manufacturers Apple, Microsoft and Google, and chip-makers Nvidia and Broadcom.
CrowdStrike, whose faulty software update caused a major global outage in July 2024, is also among the project's partners. Anthropic says it has also given access to Mythos to more than 40 organisations responsible for critical software.
Anthropic says during tests it found the model was highly skilled at cyber-security and hacking tasks, outperforming humans.
"Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser," Anthropic claimed on 7 April.
"Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely."
It said it could locate - without much oversight - critical bugs in need of immediate action in old systems, including one vulnerability which had been present in a system for 27 years, and suggest ways to exploit them.

Reuters
Public fears over the capabilities of AI led to a protest in San Francisco in March
Canadian finance minister François-Philippe Champagne told the BBC Mythos had been discussed at an International Monetary Fund (IMF) meeting in Washington DC this week.
"Certainly it is serious enough to warrant the attention of all the finance ministers," he said, describing the tech as an "unknown unknown".
Bank of England boss Andrew Bailey told the BBC "we are having to look very carefully now what this latest AI development could mean for the risk of cyber crime."
Meanwhile, the EU has said it is also in discussions with Anthropic about its concerns around Mythos.
What have cyber experts said about it?
Ciaran Martin, former head of the UK's National Cyber Security Centre, told the BBC earlier this week the claim Mythos could unearth critical vulnerabilities much more quickly than other AI models had "really shaken people".
"The second thing is that even with existing weaknesses that we know about, but organisations might not have patched against, might not be well defended against, it's just a really good hacker," he said.
Many independent cyber-security analysts and experts have not yet been able to test it themselves and some remain sceptical about Mythos' performance.
The UK's AI Safety Institute recently concluded that, while it is a very powerful model, its biggest threat would be to poorly defended, vulnerable systems.
"We cannot say for sure whether Mythos Preview would be able to attack well-defended systems," its researchers said.
So where there is good cyber-security, the model should, in theory, be stopped.
Should we be worried about it?
Fears relating to AI are nothing new.
New models and tools are coming out all the time, and are often accompanied by promises to revolutionise our lives, for better or worse.
Capitalising on this mix of fear and excitement over AI and its future impact has also become a hallmark of the sector and its marketing strategies in recent years.
In the case of Mythos, we still do not know enough about it to judge whether these hopes and fears are justified, or more a reflection of the hype surrounding the industry.

Reuters
Anthropic CEO Dario Amodei has warned against misuse of the company's products before
In either case, according to the NCSC, the most important thing we can do now is not panic and instead focus on getting basic cyber-security right.
After all, most hackers do not need super AI tools to breach systems when much simpler attacks often suffice.
"For some this is an apocalyptic event, for others it seems to be a lot of hype," Martin told the BBC.
But he said that, whether with this tool or subsequent ones made by Anthropic or its rivals, alongside the risk there was an opportunity to build a safer online world.
"In the medium-term, there's an opportunity to use these tools to fix a lot of the underlying vulnerabilities in the internet," he said.

