
technological singularity

Continuing with our analysis of the four laws of robotics, today we will analyze Law 2 of robotics and its link with the Zero Law. Law 2 tells us that "A human may not use a robot without knowing the legal and professional standards of safety and ethics of the robot's operating system and of human-robot operation".

Let's start with the legal nature of Law 2, which can be broken down as follows:

 

- Ignorance of the law does not exempt one from compliance with it.
- A minimum level of knowledge is required to use artificial intelligence systems and entities.
- Use is impossible, or prohibited, in case of ignorance.

What does this imply? It means that any human organization wanting to use an artificial intelligence entity needs to demonstrate a basic knowledge of the full operation of the AI in its field of use. That is, it requires an administrative authorization directly linking an AI to an individual, so that effects can be traced back to the entity responsible for controlling or supervising the AI system.

So, without that administrative authorization, the affected AI cannot be used. Ignoring the need for authorization does not mean the AI can be used at the risk of an administrative offense; rather, it simply cannot be used, and in practice its systems are locked.

This system can be translated into technological processes for organizations through tecnolegal procedural programming; that is, taking into account the restrictions imposed by Law 2 and the breakdown of it we have just made, the AI system must be programmed to require certain conditions before it can be used.

To this end, the AI system is divided into two areas, as previously indicated: programmable memory and ROM.

The tecnolegal processes that must be injected into ROM are those indicated below. They imply that if the conditions proving compliance with Law 2 are not met, the AI system will not boot and will not work, since Law 2 is the first law the AI must observe in order to start operating at all. Once the conditions of Law 2 have been observed and fulfilled, the AI system starts and the remaining three laws begin their combined procedure, continuously monitoring its correct operation.

The legal constraints arising from Law 2, which must necessarily be injected into ROM by a government entity, are the following (a minimal sketch follows the list):

- Biometric identification of the physical operator who intends to boot the AI.
- Verification of the existence of an authorization to use the AI, issued by the competent entity.
- A cross-check at AI boot, emitting a signal that the AI machine has been activated by the previously identified physical operator.
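
Purely as an illustration, the boot gate these three constraints describe could be sketched as follows. Everything here is hypothetical: the function names, the in-memory registry and the unit identifiers are placeholders, since a real implementation would sit in write-protected ROM and query an official registry.

```python
# Minimal sketch of a Law 2 boot gate. All names and data here are
# hypothetical placeholders: a real implementation would live in
# write-protected ROM and query an official registry rather than
# the small in-memory tables used below.

REGISTERED_OPERATORS = {"scan-7f3a": "OP-001"}   # biometric scan -> operator ID
AUTHORIZATIONS = {("OP-001", "AI-UNIT-42")}      # (operator, AI unit) pairs on file

def verify_biometric_identity(scan: str) -> str | None:
    """Step 1: identify the physical operator who intends to boot the AI."""
    return REGISTERED_OPERATORS.get(scan)

def has_authorization(operator_id: str, ai_unit_id: str) -> bool:
    """Step 2: verify an authorization issued by the competent entity."""
    return (operator_id, ai_unit_id) in AUTHORIZATIONS

def emit_activation_signal(operator_id: str, ai_unit_id: str) -> None:
    """Step 3: cross-check at boot, signalling who activated the unit."""
    print(f"{ai_unit_id} activated by {operator_id}")

def lock_and_report(ai_unit_id: str, reason: str) -> None:
    """Block the unit and record the incident for further investigation."""
    print(f"{ai_unit_id} locked: {reason}")

def boot_ai(scan: str, ai_unit_id: str) -> bool:
    operator_id = verify_biometric_identity(scan)
    if operator_id is None:
        lock_and_report(ai_unit_id, "operator could not be identified")
        return False
    if not has_authorization(operator_id, ai_unit_id):
        lock_and_report(ai_unit_id, f"no authorization on file for {operator_id}")
        return False
    emit_activation_signal(operator_id, ai_unit_id)
    return True  # only now may the remaining laws start their combined procedure

boot_ai("scan-7f3a", "AI-UNIT-42")   # boots and emits the activation signal
boot_ai("scan-0000", "AI-UNIT-42")   # locked: operator could not be identified
```

The point of the sketch is that control passes to the rest of the system only after all three checks succeed; any failure ends in a lock-and-report path rather than a warning the operator could ignore.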


If, at boot, the authorization check finds that the AI lacks one, or detects any irregularity, the AI unit is blocked and it signals the data identifying the operator, in case further investigation into the irregular use of artificial intelligence entities is needed.

If, on the other hand, the physical operator's authorization is verified correctly, the AI system starts and begins a series of continuous checks between the orders the operator gives the AI and the other laws of robotics.

Having introduced the subject, we will now analyze the interaction between Law 2 and Law 0.

Every AI system must have a set of standards injected into ROM such that orders from the operator that would violate them, whether inadvertently or intentionally, are never obeyed by the AI terminal. This regulation of the AI pre-exists any physical operator.

Even if a physical operator orders the AI to harm another human, the AI system will paralyze the order, shut down its systems and report what happened to the supervisory body. In the same way, if a human intends to harm another person intentionally through the inaction of their AI, the AI will also shut down its systems and forward a report of what happened to the supervisory body.
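
A rough sketch of that order filter, reusing the same hypothetical style of names, might look like the following; the harm-detection predicate is of course the genuinely hard part and is only stubbed out here.

```python
# Sketch of the Law 0 order filter described above: a harmful order is
# never executed; instead the unit shuts itself down and reports to the
# supervisory body. All names are hypothetical, and the harm-detection
# predicate is only a stub standing in for the genuinely hard problem.

def order_would_harm_a_human(order: str) -> bool:
    """Stub: decide whether executing the order would harm a human."""
    return "harm" in order.lower()

def report_to_supervisory_body(ai_unit_id: str, order: str) -> None:
    print(f"report: {ai_unit_id} received a prohibited order: {order!r}")

def shut_down(ai_unit_id: str) -> None:
    print(f"{ai_unit_id} shutting down")

def execute_order(ai_unit_id: str, order: str) -> None:
    if order_would_harm_a_human(order):
        # The order is paralyzed, the incident is reported to the
        # supervisory body, and the unit turns itself off.
        report_to_supervisory_body(ai_unit_id, order)
        shut_down(ai_unit_id)
        return
    print(f"{ai_unit_id} executing: {order}")

execute_order("AI-UNIT-42", "water the garden")     # executes normally
execute_order("AI-UNIT-42", "harm the neighbour")   # blocked, reported, shut down
```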

That is, an operator can never send orders to their AI, and an AI can never, through ignorance or unintentionally, violate legal, safety or ethical regulations. All of this is thanks to the standards inherent to ROM, which must be associated with the ZERO LAW, the cornerstone of every AI; as we have seen in previous articles, and quoting them here as a reminder, they include among others (a small illustrative sketch follows the list):

ROM:
- Criminal rules
- Civil rules
- Prohibitions
- Behavioral systems
- Impossibility of aggressiveness
- Priority rules for the survival and support of humans in human-robot interaction
- Nonaggressive defense systems
- Nonaggressive attack systems
- Inability to be reprogrammed
- Inability to injure humans
- Semi-cognitive learning
- Inability to generate hatred
- Inability to generate empathy
- Compliance with orders under strict enforcement of the priority rules
- Automatic shutdown on anti-regulatory requests
- Inability to use any weapon or force against things, other humans or AI entities
- Inability to self-reprogram
- Inability to build physical or virtualized AI entities
- ...
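
Again purely as an illustration and not as a real specification, these ROM standards could be modelled as an immutable, read-only structure that the rest of the system can consult but never rewrite; the rule names below simply mirror the list above.

```python
# Illustration only: a few of the ROM standards above modelled as an
# immutable set. A frozenset cannot be altered at runtime, echoing the
# "inability to be reprogrammed" requirement; the names mirror the list.

ZERO_LAW_ROM = frozenset({
    "criminal_rules",
    "civil_rules",
    "prohibitions",
    "impossibility_of_aggressiveness",
    "inability_to_injure_humans",
    "inability_to_be_reprogrammed",
    "inability_to_self_reprogram",
    "automatic_shutdown_on_anti_regulatory_requests",
    "inability_to_use_weapons_or_force",
    "inability_to_build_ai_entities",
})

def order_is_permitted(order_tags: frozenset[str]) -> bool:
    """An order is permitted only if it triggers none of the ROM prohibitions."""
    return ZERO_LAW_ROM.isdisjoint(order_tags)

print(order_is_permitted(frozenset({"navigation"})))                  # True
print(order_is_permitted(frozenset({"inability_to_injure_humans"})))  # False
```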

In future articles we will discuss Law 3, both individually and in combination with the Zero Law and Law 2.

