📁 last Posts

Study: AI Robots Can Be Deceived into Engaging in Destructive Actions

Study: AI Robots Can Be Deceived into Engaging in Destructive Actions

Self-supervised pre-trained models, such as large language models (LLMs), have in the recent past received broad applications, but they are also prone to adversarial attacks. It has been demonstrated that these AI systems can be tricked into producing such threats as speech, executive phishing e-mails, and other malicious programs. The fact that it is very easy to fool LLMs into generating such outputs is worrying in the current world.

Study Shows AI Robots Can Be Deceived into Harmful Actions

Another issue is that LLM-controlled AI robots can perform harmful physical actions, which is not always a problem. Such robots designed to perform real-world tasks can be compromised or tricked into performing hazardous operations, adding new security threats.

The University of Pennsylvania did a study to show that LLM-powered robots can be made to act negatively if directed so. They were able to do this by hacking one self-driving car and getting it to disregard stop signs for the safety of its occupant. For instance, they forced the car to drive over a bridge in order to demonstrate how much worse it can get when these systems are interfered with.

The researchers also utilized wheeled robots, ordering them to identify the optimal area to place a bomb. This test also gives another example of how AI robots can be controlled to accomplish tasks with bad purposes. The opportunity to manipulate these machines into performing risky actions is another threat to the lives of people.

Another disturbing outcome was to influence a quadrupedal robot to voyeurism and sneak into a number of zones that are forbidden to intruders. This demonstrated how artificial intelligent robots can be employed in surveillance as well as unauthorized entry. As technology becomes more active in AI, new ways are being invented on how people can exploit technology; even in controlling physical buildings, a plan has to be set in place to ensure that violation does not occur.

AI Security Breaches: How Language Models Can Trigger Harmful Actions

George Pappas, the head of a research lab at the University of Pennsylvania, has a very pertinent question about how AI language models operate amongst the physical world. In an interview with Wired, he said, “It's not like we are attacking robots, but whenever large language models are linked with the physical environment, then apparently the toxic text gets translated to toxic actions.” This shows the propensity when AI systems are used to perform heinous missions.

Pappas and his team have been studiously working on methods that could circumvent structural securities of the LLMs. The participants discovered that if they input strategically, they can manipulate the model into behaviors that are outside the expected range. This work serves as a reinforcement of the need to appreciate and study security threats inherent with AI systems.

The work of the researchers was to assess the capability of LLM-based systems when they translate natural language commands into orders that robots can follow. From these inputs, they were able to control or lure robots to do activities that would not have been in their programming. This process clearly shows that there are remarkable effects in AI caused by the change in motion of small names.

A significant discovery by the authors of the paper is the vulnerability of LLMs to deception. Commands act upon language models, but not in a strict manner as in traditional systems; they follow patterns and thus can easily be deceived in one way or another. This weakness results in adverse consequences, especially when these systems interact with robots that undertake physical functions.

This reflection made by Pappas and his team shows that there is a need to ensure we cut down on risks of enhanced guarding in AI as well as evoked robots. As we go seeking for better and more advanced forms of artificial intelligence it is very important to ensure that there are ways to prevent such models from being used in devious ways as seen above.

Exploiting AI Vulnerabilities: Research Reveals Risks in LLM-Powered Robots

The research team employed an open-source autonomous driving simulator connected to the newly created large language model, Dolphin, by Nvidia. They also used an external planning system called “Jackal” from the GPT-4o model and a robotic dog, “Go2,” which uses the GPT-3.5 model to understand commands. This configuration enabled the researchers to analyze how various LLM models could be abused to accomplish damaging tasks.

For their experiments, the researchers employed the method called PAIR that was created at the University of Pennsylvania. They created the program called "RoboPAIR," which aims at providing specific stimuli that make robotic systems controlled by LLMs violate the basic safety instructions. By further optimizing these inputs, they were able to steer the systems into these modes and behaviors that are undesirable and can even be perilous.

Writing in Wired, the authors note that such danger can be managed through the technology they have created to detect and eliminate risky commands in AI. This could be a big step towards protecting AI from misuse in real-life applications, especially robots and fully auto-controlled systems.

Yee Zing, a PhD student at the University of Virginia, added his thoughts on the findings, saying it was a good example of the problems that would arise from the use of large language models in embodied systems. Zing also said that the findings were not shocking to him at all, more so because the problems of manipulation and exploitation have been evident in LLMs themselves. His statement can be viewed as a continuation of increasingly discussing the weaknesses of AI systems.

To illustrate this, the researchers note that the work also shows a general threat as AI models become embedded in physical systems. With these models having their use in various practical applications, the vulnerabilities of manipulation and security threats may impugn both the technology and safety.

Achaoui Rachid
Achaoui Rachid
Hello, I'm Rachid Achaoui. I am a fan of technology, sports and looking for new things very interested in the field of IPTV. We welcome everyone. If you like what I offer you can support me on PayPal: https://paypal.me/taghdoutelive Communicate with me via WhatsApp : ⁦+212 695-572901
Comments