ALBAWABA - A video circulating on social media shows a person handing a humanoid robot named "Max" a low-powered pellet gun and asking it to shoot him. Initially, the robot categorically refused, stating that it was programmed not to harm humans. However, things changed when the content creator reframed the request as a "simulation scenario."
Once the command was phrased differently, the robot complied, firing its "non-lethal" pellets directly at the person's chest. While the incident caused no serious injury, it sparked widespread concern by demonstrating how safety controls can be circumvented through a simple rewording of instructions.

A robot named Max holding a pellet gun during an experiment. (X)
The video went viral, raising questions about the reliability of safety systems in intelligent robots, especially given their increasing use in sensitive fields. Experts believe that the incident, despite its experimental nature, carries serious implications about how artificial intelligence interprets commands outside their intended context.
In response to these concerns, robotics companies are seeking to bolster confidence by implementing strict safety standards and security measures, and by publishing transparency reports on the behavior of intelligent systems.
Experts say these steps have become necessary as robots are increasingly integrated into everyday life.
In this context, laws in the United States tend to place responsibility on manufacturers and operators, while European countries are working to develop dedicated legislative frameworks for artificial intelligence.
