Written by Javier Antonio Nisa Avila - Lexnos. Category: Legal Technology

[Image: DARPA's AlphaDog robot]

The four laws of robotics

      1. Zeroth Law: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
      2. A human may not deploy a robot without the human-robot operating system meeting the legal and professional standards of safety and ethics.
      3. A robot must respond to humans as appropriate for their roles.
      4. A robot must be endowed with sufficient autonomy to protect its own existence, as long as this does not conflict with the previous laws.

These four laws, revised a few years ago, will within a few years become among the most read, discussed and debated texts in many public and private bodies.

Let's analyze the four laws from a legal standpoint:

 

Zeroth Law: A robot may not injure humanity or, through inaction, allow humanity to come to harm.


To begin, we will treat robots as artificial intelligence (AI) entities. "AI" is the better term because it is a broader concept than the graphic image that comes to mind when we talk about robots. AI entities are entities that, thanks to their advanced knowledge, are capable of processing complex orders in an autonomous or non-autonomous manner, performing in a way similar or identical to a human, and may even have their own decision-making capabilities.

 

The Zeroth Law implies that no AI system may perform any type of physical or mental action that harms any person, whether actively or passively. What does that mean? That neither the use of autonomous nor non-autonomous entities may lead to actions or decisions that cause harm to a human being.

 

Before continuing this breakdown of the Zeroth Law, let us define autonomous and non-autonomous AI entities. An autonomous AI entity is one that, without prior intervention by any human, is capable of taking decisions based on a reasoning capacity equal to or greater than a human's. A non-autonomous AI entity is one with capabilities similar to a human's, but which has been programmed and manufactured to perform specific tasks: it requires a human to tell it what to perform, to assist it, or to switch it on so that it runs its task. Non-autonomous AI entities have little capacity for free will, because they are largely mechanically programmed for one or more pre-assigned and unalterable tasks.

 

Having said that, we can begin to break down the Zeroth Law further. To start, any development or manufacture of an AI entity must be subject to a legal regulation covering all of its aspects, because it will be a system that interacts with human beings with similar capabilities and is therefore potentially dangerous. Depending on the sector, there will certainly be specifications that other development or manufacturing sectors do not contain, but there will always be a number of basic standards of manufacturing, programming and interaction common to autonomous and non-autonomous AI. These are the basic rules of coexistence: not only the four laws above, but a series of norms such as non-aggression, mutual respect and non-violence. Such rules, whether connected to a pre-assigned task or to full free will, would be defined by law and injected into the primary ROM by a governmental or certification entity, so that under no circumstances could they ever be modified except by law, and so that no one other than the manufacturer would have access to the specific platforms for modifying the primary system.
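The injection model described above can be sketched in code. The following is a minimal, hypothetical illustration (the class, the certifier name and the rule strings are all invented for the example, not taken from any real standard): the base rules live in an immutable structure fixed at certification time, and a fingerprint lets a supervising agency detect any tampering.

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass(frozen=True)  # frozen: attributes cannot be changed after creation
class PrimaryROM:
    """Immutable base rules, injected once by a certifying entity."""
    certifier: str
    rules: tuple  # a tuple, not a list, so the rule set cannot be mutated

    def fingerprint(self) -> str:
        # Digest the supervising agency could compare against its registry
        # to verify the base rules have not been altered.
        payload = self.certifier + "|" + "|".join(self.rules)
        return sha256(payload.encode("utf-8")).hexdigest()

rom = PrimaryROM(
    certifier="National AI Certification Agency",  # hypothetical body
    rules=("non-aggression", "mutual respect", "non-violence"),
)

print(rom.fingerprint())  # stable as long as certifier and rules are untouched
```

Any attempt to reassign `rom.rules` raises an error, mirroring the requirement that the primary rules be unmodifiable except through the legally sanctioned channel.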

 

These tasks will give content to the Zeroth Law and thus generate the basis of coexistence that the law calls for.

 

This should generate a number of specific laws on advanced artificial intelligence, constantly updated: legal requirements for manufacturing, requirements for secure software development, quality of parts, system upgrades... all under the oversight of government agencies.

 

In the future, the Zeroth Law will be a compendium of legislation regulating every aspect of AI entities: not only machine-level quality systems and the prevention of potential effects on human health arising from their manufacture, but also, at the level of internal development, the programming of the AI system's capabilities, for which the legislature will have to analyze thoroughly every type of interaction so that it can be regulated generically. It must also establish a common programming structure for every manufacturer, developed by the state and its governmental entities and supervised by a specific security agency, so that potential security flaws in these artificial entities can be found and corrected.

 

This common operating system would be built on the specific technological legislation generated by the state.

 

Furthermore, the main issue to legislate will be the potential criminal or civil liability that artificial intelligence entities may incur. As a legal framework, the laws to develop and the basic measures to implement, mandatorily, in the ROM of AI systems are:

 

Legislation to develop

  • Regulation of manufacturing systems for autonomous and non-autonomous AI
  • Quality standards
  • Risks to human health
  • Civil and criminal regulations covering defects in manufacturing or construction, and willful misconduct or negligence by manufacturers or developers
  • Civil and criminal law for AI systems
  • Limitations on construction and/or design
  • Operating limitations for AI entities in certain sectors (military, ...)
  • ...

ROM:

  • Criminal rules
  • Civil rules
  • Prohibitions
  • Behavior systems
  • Impossibility of aggressiveness
  • Priority rules for survival and for human-robot-human support
  • Non-aggressive defense systems
  • Non-aggressive attack systems
  • Impossibility of reprogramming
  • Impossibility of injuring humans
  • Semi-cognitive learning
  • Inability to generate hatred
  • Inability to generate empathy
  • Strict compliance with priority rules when executing orders
  • Automatic shutdown on non-compliant requests
  • Inability to use any weapon or force against things, humans or other AI entities
  • Impossibility of self-reprogramming
  • Impossibility of building other physical or virtualized AI entities
  • ...


All these measures, cited here as examples, should be regulated by law and monitored and updated by government bodies as technology advances.
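Two of the ROM measures listed above lend themselves to a small sketch: strict compliance with priority rules, and automatic shutdown on non-compliant requests. The rule names, the string-matching check and the priority ordering below are all hypothetical, invented purely to illustrate the mechanism:

```python
# Rules are checked in strict priority order: lower index = higher priority,
# mirroring the ordering of the laws themselves.
PRIORITY_RULES = [
    ("no_harm_to_humans", lambda req: "harm_human" not in req),
    ("no_weapon_use",     lambda req: "use_weapon" not in req),
    ("no_self_reprogram", lambda req: "reprogram_self" not in req),
]

def handle(request: str) -> str:
    """Execute a request, or shut down on the first rule it violates."""
    for name, allows in PRIORITY_RULES:
        if not allows(request):
            return f"SHUTDOWN ({name})"  # the automatic-shutdown measure
    return f"EXECUTED ({request})"

print(handle("carry_supplies"))        # EXECUTED (carry_supplies)
print(handle("use_weapon_on_target"))  # SHUTDOWN (no_weapon_use)
```

The point of the ordering is that when a request violates several rules at once, the highest-priority rule is the one that explains the shutdown, which is exactly what a legally mandated hierarchy of laws requires.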

In future articles we will develop the Second Law, linking it with the Zeroth Law.


 




