Researchers have found AI's Achilles' heel.
"Researchers showed AI models can be stolen by analyzing device signals, risking intellectual property and security. Countermeasures are now needed. Credit: SciTechDaily.com" (ScitechDaily, AI’s Achilles’ Heel: Researchers Expose Major Model Security Flaw)
An AI, such as a large language model (LLM), is at bottom a computer program, albeit a very complex one with many layers. That means if somebody successfully attacks the server that hosts the LLM, it becomes possible to steal the AI: in practice, the hacker can download the model's code and weights to their own server.
Researchers from North Carolina State University demonstrated an attack on a device running an AI model by using electromagnetic emissions. In other words, they did not mount a direct network attack against the device, so its firewalls and other conventional defenses are helpless against this kind of attack. The technique the researchers used is based on measuring electromagnetic signals.
"Researchers used electromagnetic signals to steal and replicate AI models from a Google Edge TPU with 99.91% accuracy, exposing significant vulnerabilities in AI systems and calling for urgent protective measures." (ScitchDaily, AI’s Achilles’ Heel: Researchers Expose Major Model Security Flaw)
"Researchers have shown that it’s possible to steal an artificial intelligence (AI) model without directly hacking the device it runs on. This innovative technique requires no prior knowledge of the software or architecture supporting the AI, making it a significant advancement in model extraction methods." (ScitchDaily, AI’s Achilles’ Heel: Researchers Expose Major Model Security Flaw)
“AI models are valuable, we don’t want people to steal them,” says Aydin Aysu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “Building a model is expensive and requires significant computing sources. But just as importantly, when a model is leaked, or stolen, the model also becomes more vulnerable to attacks – because third parties can study the model and identify any weaknesses.” (ScitchDaily, AI’s Achilles’ Heel: Researchers Expose Major Model Security Flaw)
"The researchers stole the hyperparameters of an AI model that was running on a Google Edge Tensor Processing Unit (TPU)". (ScitchDaily, AI’s Achilles’ Heel: Researchers Expose Major Model Security Flaw)
So, by using this method, the researchers were able to determine the architecture and specific characteristics, known as layer details, that they, or later a hacker, would need to make a copy of the AI model.
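To make the idea concrete, here is a minimal, purely illustrative sketch of how captured electromagnetic traces could be matched against pre-recorded signatures of known layer types. Everything in it (the synthetic templates, the `correlate` and `identify_layers` helpers, and the segmentation of the trace) is a hypothetical toy, not the researchers' actual method, which targeted a Google Edge TPU with far more sophisticated analysis.

```python
# Toy sketch, under heavy assumptions: identify neural-network layer types
# by comparing segments of an electromagnetic trace against pre-recorded
# "template" signatures. All data here is synthetic and illustrative only.
import numpy as np

def correlate(segment: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between a trace segment and a template."""
    seg = (segment - segment.mean()) / (segment.std() + 1e-12)
    tpl = (template - template.mean()) / (template.std() + 1e-12)
    n = min(len(seg), len(tpl))
    return float(np.dot(seg[:n], tpl[:n]) / n)

def identify_layers(trace_segments, templates):
    """Guess each segment's layer type as the best-matching template."""
    recovered = []
    for segment in trace_segments:
        scores = {name: correlate(segment, tpl) for name, tpl in templates.items()}
        recovered.append(max(scores, key=scores.get))
    return recovered

# Invented signatures standing in for real measured ones:
rng = np.random.default_rng(0)
templates = {
    "conv2d": np.sin(np.linspace(0, 20, 500)),
    "dense":  np.sign(np.sin(np.linspace(0, 5, 500))),
    "pool":   np.linspace(-1, 1, 500),
}
# Pretend the captured trace contains a conv layer followed by a dense layer.
trace_segments = [
    templates["conv2d"] + 0.3 * rng.normal(size=500),
    templates["dense"] + 0.3 * rng.normal(size=500),
]
print(identify_layers(trace_segments, templates))  # ['conv2d', 'dense']
```

The point of the sketch is only the general shape of such an attack: each layer type leaves a characteristic electromagnetic "fingerprint", and matching fingerprints segment by segment reconstructs the model's layer details without ever touching its software.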
When an electromagnetic wave travels through another field, that field affects its strength. This means a device that sends electromagnetic waves through data lines, somewhat like a radar, can read the data traveling in those lines. The idea resembles an X-ray machine: the system sends a radio wave through the data line, a sensor on the opposite side detects changes in the field, and from those changes it can reconstruct the data.
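As a rough illustration of that principle, the toy sketch below recovers a bit stream from a noisy amplitude trace by averaging the signal over each bit period and thresholding. The `recover_bits` helper, the bit period, and the simulated trace are all assumptions made for illustration; real signal recovery from a data line would be far more involved.

```python
# Toy sketch, assuming each transmitted bit modulates the measured field
# strength: average the trace over each bit period, then threshold at the
# midpoint between the high and low levels to recover the bits.
import numpy as np

def recover_bits(trace: np.ndarray, samples_per_bit: int) -> list[int]:
    """Average the trace over each bit period and threshold at the midpoint."""
    n_bits = len(trace) // samples_per_bit
    levels = trace[: n_bits * samples_per_bit].reshape(n_bits, samples_per_bit).mean(axis=1)
    threshold = (levels.max() + levels.min()) / 2
    return [int(level > threshold) for level in levels]

# Simulate a noisy trace that encodes the bits 1, 0, 1, 1, 0:
rng = np.random.default_rng(1)
bits = [1, 0, 1, 1, 0]
trace = np.concatenate([b + 0.2 * rng.normal(size=100) for b in bits])
print(recover_bits(trace, samples_per_bit=100))  # [1, 0, 1, 1, 0]
```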
A laser system that makes a laser beam scatter could be used to steal data from optical networks. Alternatively, a system could create a hologram-like interference pattern around the beam; if the laser beam oscillates, that pattern can leak the information the beam carries.
If somebody can steal an LLM, that is a very bad thing. Developing these systems is expensive, but there is another danger as well: bad actors can use the stolen programs to create an evil twin for the use of China, North Korea, or Russia. In the wrong hands, an LLM can write virus code and control cyberattacks.
Normally, the ability to produce malicious code is denied in legal, official LLMs. But in illegal copies, those abilities can exist. So hackers can use such LLM evil twins for cyberattacks and industrial espionage. An evil-twin LLM can also be used to mount cyberattacks against surveillance systems.
https://scitechdaily.com/ais-achilles-heel-researchers-expose-major-model-security-flaw/