Research Article | Volume 6 Issue 2 (July-December, 2025) | Pages 1 - 9
Mistakes That Give Rise to Civil Liability for the Use of Intelligent Applications (A Comparative Applied Legal Study)
Department of Law, Al-Idrisi University College, Ramadi, Anbar, Iraq
Under a Creative Commons license
Open Access
Received: June 16, 2025
Revised: July 20, 2025
Accepted: Aug. 5, 2025
Published: Aug. 21, 2025
Abstract

Determining the rules of civil liability for damages resulting from intelligent applications turns on identifying the person responsible for those damages, that is, the person at fault who must compensate the injured party. The difficulty lies in pinpointing that person: the manufacturer, distributor, programmer, owner, or custodian may all be candidates. The study's problem stems from the difficulty of determining the legal nature of intelligent applications, which have come to be characterized by a special nature and decision-making autonomy, to the point of straining the traditional theories of liability, such as the defective-product theory and the human agent theory, when applied to damages caused by applications that utilize artificial intelligence. This calls for a search for solutions and ideas that can contribute to constructing a regime of civil liability for errors of intelligent applications.

INTRODUCTION

Artificial intelligence applications enjoy a degree of autonomy in their operations, dictated by their purely technical programming on the one hand and by the requirements of their surrounding environment on the other.

 

They make their decisions independently of their users and without consulting them. This feature raises questions about the legal liability that results from errors associated with the actions of these applications. If a self-driving vehicle causes significant damage to others as a result of factors that cannot be predicted or controlled, even by humans, this could engage the liability of the owner, programmer, user, manufacturer, or operator of the intelligent application, among others.

 

How errors caused by artificial intelligence applications are to be dealt with, given their variety and multiplicity, is a matter of concern. Such an error is not always due to negligence or fault on the part of the operator or beneficiary of these applications. Civil liability may therefore arise from errors in programming and development, or from problems in use and guidance. Sometimes the error is linked to the technical nature of the program and the digital environment, or to other factors that are difficult to determine accurately. It is therefore necessary to examine the specific forms of harmful conduct that give rise to liability in the context of artificial intelligence applications.

 

These vary and multiply according to technological development and the person causing the harm. The importance of this study is linked to the presence of a set of challenges posed by artificial intelligence application technology.

 

These technologies are not yet perfected, and their software remains vulnerable to technical, computing, and digital risks, as well as technical malfunctions. These risks can cause them to operate in unexpected or unauthorized ways, causing significant harm to others.

 

In light of this problem, we must answer several directly related questions, the most important of which are:

 

  • Can the civil liability of the responsible party be determined according to modern theories of error in intelligent applications?

  • Can the error be attributed to the designer or programmer of the artificial intelligence application?

  • Can civil liability for damages from intelligent applications be determined according to the theory of guardianship or the theory of defective products?

 

How can we envision the occurrence of personal error in the context of artificial intelligence applications? We will attempt to answer all these questions and others, employing a descriptive scientific approach at times and an analytical approach at others. We will compare the jurisprudential and legislative positions on these topics, according to a plan divided broadly into three sections. The first section will address programming errors occurring in the development or operation of artificial intelligence applications.

 

The second section will address the definition of civil liability for damages from intelligent applications under the guardianship theory.

 

The third section will address the definition of civil liability for those responsible according to modern theories of error in intelligent applications.

 

We will conclude the research with a conclusion that includes the most prominent findings and recommendations, which we believe will be scientifically and practically beneficial to adopt.

 

The First Section

Programming Errors Occurring in the Development or Operation of Artificial Intelligence Applications: We can imagine this type of error occurring in practice at the hands of the designer or professional responsible for programming and designing artificial intelligence applications, as well as errors arising from the pure programming of artificial intelligence applications themselves. This will be detailed as follows:

 

Errors Made by the Designer or Programmer

The artificial intelligence designer is responsible for designing or programming an artificial intelligence system, regardless of the design method, whether through algorithms, self-learning, or expert learning. In other words, he is the one to whom the very idea of the existence of the artificial intelligence is attributed [1].

 

What distinguishes an error committed by the designer or programmer of artificial intelligence applications from the error of the general rules is that it represents a deviation from the usual behavior of a careful professional (programmer), leading to harm to others. The person's professional status must be taken into account, and the harmful act must result from an unlawful activity or one that violates the rules of legality. To establish civil liability, a breach must be committed by the careful professional (programmer) that leads to harm to others. This is one aspect.

 

On the other hand, errors that give rise to liability may be programming errors. In the world of computing, a programming error refers to a mistake made during the design phase of a computer program or while writing in a programming language, resulting in poor or unexpected performance. In English and other languages, the word "bug" is used when referring to a programming error [2]. The effects of such an error mostly concern software, but they also affect devices and applications whose operation is controlled by software. Programming errors are usually committed by the professionals responsible for building the application. They may occur in the design of the program, such as a manufacturer's error in designing an intelligent application [3]. A programmer may make a mistake in writing programming instructions or algorithms, or any other error that prevents the computer program from reading the data or executing the instructions correctly.
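As an illustration only (none of this code appears in the cited sources), a single mistyped operator is enough to produce the kind of "bug" described above: the program runs without complaint, but its output silently deviates from what the programmer intended.

```python
def average_intended(readings):
    """Correct implementation: the arithmetic mean of the readings."""
    return sum(readings) / len(readings)

def average_buggy(readings):
    """Buggy implementation: '*' was typed where '/' was intended."""
    return sum(readings) * len(readings)  # the bug: wrong operator

readings = [2.0, 4.0, 6.0]
print(average_intended(readings))  # 4.0  -- the intended behavior
print(average_buggy(readings))     # 36.0 -- the defect surfaces only at run time
```

Such a defect raises no warning when the program is written or loaded, which illustrates why, as the study notes, programming errors often come to light only after the application is in use.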

 

In addition, errors may occur in the data entered, which requires continuous updating of the data and accuracy in entering it. Errors may also occur during operation of the application by users, or as a result of programming errors made when adapting the application: from a technical standpoint, compatibility with particular computer hardware may require modifying the application to make it work on specific devices, or adding other software to ensure it operates properly and achieves its intended purpose [4].

 

Here, it is important to note that most software errors are not discovered until after the application has been used and the damage has occurred; only after use do the technical loopholes and errors become apparent. Furthermore, in most cases there is no direct relationship between the programmer and the injured party. All of this makes it impossible to establish a general rule as the basis for the programmer's liability for the harmful act committed by him; each case must be examined individually. From a technical standpoint, it is almost impossible to find a program free of technical errors. French courts have therefore treated an error rate of about 2% as the normal baseline, and have considered an error rate ranging between 2% and 10% not to be abnormal, meaning that an error occurring within these technically recognized percentages makes it difficult for the injured party to prove the fault of the perpetrator of the harmful act [5].
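The thresholds the study attributes to the French courts can be restated as a simple decision rule. The sketch below is purely illustrative; the function name and the category labels are ours, not the courts':

```python
def classify_error_rate(rate_percent):
    """Classify a software error rate against the thresholds described above:
    ~2% is treated as the normal baseline, and rates between 2% and 10%
    are not considered abnormal."""
    if rate_percent <= 2.0:
        return "within the normal baseline"
    if rate_percent <= 10.0:
        return "not abnormal; the injured party will struggle to prove fault"
    return "abnormal; may support a finding of fault"

print(classify_error_rate(1.5))
print(classify_error_rate(7.0))
print(classify_error_rate(12.0))
```

The point of the sketch is only that, below the recognized percentages, the existence of errors alone tells the injured party nothing about fault.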

 

These errors can have extremely serious consequences. Errors in the control software of the Therac-25 radiotherapy device were directly responsible for the deaths of several patients in the 1980s. In 1996, a European Space Agency Ariane 5 rocket was lost less than a minute after launch due to a fault in the onboard guidance software. In June 1994, a British Royal Air Force Chinook crashed, killing 29 people. The accident was initially attributed to pilot error, but a Computer Weekly investigation uncovered sufficient evidence to convince the House of Lords that the cause was a software error in the aircraft's computer [6].

 

Errors Caused by Pure Programming of AI Applications

AI applications have a high degree of autonomy and the ability to negotiate and conclude deals based on their acquired experience and self-modified instructions, without any human knowledge or intervention in their operations [7]. 

 

Unlike traditional software, which operates only within the framework of instructions predetermined by the programmer, or which is controlled in a predictable, stereotyped manner, these applications operate autonomously and unpredictably, as dictated by their surrounding environment. They make decisions without consulting their users, which raises the question of who is responsible for the harmful actions caused by these programs [8].

 

What if a surgical robot causes serious harm to a patient? What of a failure in drug production that harms individuals, a botched robotic surgery, or an incorrect treatment [9]?

 

What if a self-driving vehicle causes severe damage, as a result of factors that cannot be predicted or controlled [10]?

 

Some weak artificial intelligence systems, or systems that were not fully tested and whose introduction was rushed, have caused damage ranging from minor to catastrophic, even though such damage was not intended by their programmers.

 

In 2015, a worker at a German Volkswagen factory died after being crushed by a factory robot that mistook him for a car part.

 

In 2016, Microsoft launched a chatbot, Tay, that learned to use racist and sexist language. Noel Sharkey of the University of Sheffield stated that the ideal solution to avoid harm would be for an AI program to be able to detect when something goes wrong and to have the ability to stop itself. However, many AI experts warn that solving this problem in general will be a truly significant scientific challenge [11].
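Sharkey's "detect when something goes wrong and stop itself" idea can be sketched, very roughly, as a guardrail wrapped around a model. Everything in this sketch is hypothetical (the class names, the keyword check, the stand-in model); real detection of harmful output is, as the experts quoted above warn, far harder than a keyword match.

```python
class SafetyHalt(Exception):
    """Raised when the guardrail decides the system must stop itself."""

class GuardedModel:
    def __init__(self, model_fn, banned_terms):
        self.model_fn = model_fn          # the underlying AI system (any callable)
        self.banned_terms = banned_terms  # crude proxy for "something went wrong"
        self.halted = False

    def respond(self, prompt):
        if self.halted:
            raise SafetyHalt("system already stopped")
        output = self.model_fn(prompt)
        if any(term in output.lower() for term in self.banned_terms):
            self.halted = True            # the system stops itself
            raise SafetyHalt("harmful output detected; halting")
        return output

# Usage with a stand-in model function:
guarded = GuardedModel(lambda p: "echo: " + p, banned_terms=["badword"])
print(guarded.respond("hello"))  # normal output passes through
```

The design choice the sketch illustrates is that the stopping mechanism sits outside the model itself, so a halt does not depend on the model recognizing its own failure.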

 

The Second Section

Defining Civil Liability for Damages from Intelligent Applications According to the Guardianship Theory: The concept of guardianship over things plays a significant role in defining civil liability for damages resulting from things in general and from intelligent applications in particular. To assess its effectiveness in determining civil liability for damages from intelligent applications, it is necessary to examine the theory of guardianship within the scope of civil liability in general and of civil liability for damages from intelligent applications in particular, as follows:

 

The Theory of Guardianship Within the Scope of Civil Liability for Damages from Intelligent Applications

Within the scope of civil liability in general, guardianship denotes the justification on which the legislator relies in placing the burden of compensation for damages on the shoulders of the person who has control over an inanimate object. The theories of fault and damage have been the reference for claiming compensation in most civil legislation in this area. In the field of artificial intelligence and the damages resulting from its use, however, and given the difficulty of attributing the error caused by these applications to a specific person, the matter has become more difficult because of the significant degree of autonomy these machines possess in decision-making.

 

The theory of guardianship has played a significant role in determining civil liability for the use of intelligent applications and, subsequently, for the damages resulting from that use. The established approach to determining liability for harmful acts caused by intelligent applications, or by any other kind of object, attributes this liability to the concept of guardianship: the guardian is responsible for the damage these objects cause to others.

 

The guardian is defined as "the natural or legal person who has actual authority over the object to direct and monitor its activity; when this actual authority exists, guardianship exists" [12].

 

The concept of guardianship is based on liability for objects, which is regulated by numerous pieces of legislation. These laws make clear that liability for damage caused by objects and mechanical applications falls on the person responsible for them who possesses actual authority over these applications at the relevant time; his liability is presumed by law unless he proves that he took the precautions and care needed to prevent the harm [13].

 

Custody is based on two main elements, the material and the moral. The material element is manifested in three powers. The first is the power of use: the ability to use the thing to achieve a specific purpose, according to its nature or the wishes of the person using it. The person controlling the thing need not actually use it himself; he remains the user of the thing even if the use is carried out by someone else.

 

The second is the power of direction. The person holding this power is able to take control of the thing, and thus to control and manage it and to issue the orders that regulate its use, even when the thing is in the possession of someone else. Direction, then, is the commanding power that governs the use of the thing [14].

 

The third is the power of oversight: the power to inspect, maintain, and repair the thing, and to replace any partially or totally damaged parts with new, sound ones. Oversight ensures the suitability of the thing for use and prevents any harm that may result from its use or direction. The moral element of custody means using the thing for the benefit of the person who has actual authority over it; a person who controls the material element of custody cannot be considered a custodian unless his use of the thing is linked to achieving a benefit of his own [15].

 

Comparative legislation explicitly adopts this approach. Article 231 of the Iraqi Civil Code in force provides: "Anyone who has at his disposal mechanical equipment or other items that require special care to prevent harm shall be liable for any damage caused by them unless he proves that he took sufficient precautions to prevent such damage, without prejudice to the special provisions contained therein." Liability here thus rests on the person who has actual control over the particular item.

 

Every person is liable for damage arising from things in his custody if it is proven that the cause of the damage lies in those things themselves, unless he proves that he did everything necessary to prevent the damage, or that the damage arose from an emergency, force majeure, or the act of a third party.

 

American law, by contrast, regulates liability for things in a very broad manner, almost unmatched elsewhere, through the law of torts, a very broad and detailed body of law [16]. It builds on the concept of a duty of care, which has expanded greatly to include liability for industrial products and their damages, as well as vicarious liability.

 

It is noteworthy that the custodian's fault here is a presumed fault open to proof of the contrary, as shown by the fact that his obligation is an obligation to achieve a result, not merely to exercise care. He therefore has no way to rebut this fault unless he proves that he took sufficient precautions to prevent the damage. This is the position of the Iraqi legislature, as well as of the Egyptian, Emirati, and French legislators, who likewise admit proof of a foreign cause as a defense [17].

 

As for civil liability for intelligent applications in particular, legal jurisprudence has specifically addressed how to apply guardianship provisions to intelligent applications. The aim was to identify the basis of the guardian's liability for harmful acts of intelligent applications, given that robots possess neither legal personality nor an independent estate that could be relied upon to compensate the injured party. Scholars then turned to the scope and meaning of guardianship: does it include the person who created these applications? Does it include the person who uses them? Or does it depend on who holds actual authority at the time the harm occurs?

 

To answer these questions, legal jurisprudence divided guardianship of intelligent machines into two categories: a guardian of creation and a guardian of use. This division produces differences in liability for harm caused by intelligent applications.

 

The guardian of creation is the manufacturer or programmer of the intelligent machine. He is considered the guarantor who directly controls the configuration, as he manufactured the robot and exercises technical oversight over the content of its internal configuration and programming. The guardian of creation bears responsibility for the damage: if it is proven that the damage was due to an internal defect, whether in the robot's manufacture or its programming, he must compensate the injured party, as he holds actual authority over the robot in that respect [18].

 

The guardian of use is the person who has actual control over the robot and uses it for various purposes in his own interest, whether as a tenant or an investor. He is responsible for the damage the robot's actions cause to others and is obligated to compensate, so long as he holds actual authority at the time of the damage. The guardian of use may transfer custody to others by a legal act, and responsibility passes with the transfer of custody. The guardian of creation cannot do this, because the internal defect inheres in the existence of the thing: as long as the thing exists, the defect is not negated by its transfer from one person to another [19].

 

Legal jurisprudence considers that applying liability for the custody of objects to robots seems appropriate, because an object requires special care when it is dangerous by its nature, composition, or structure. The standard of special care depends on the hazardous nature of the object in custody. The principle is that a machine should receive special attention owing to its composition, unlike non-dangerous objects that do not require this level of care. The concept of custody, in both its forms, creation and use, is thus applied to robots [20].

 

Therefore, in identifying the basis of the guardian's liability for harmful acts of intelligent applications, given that they have neither legal personality nor an independent estate that could be relied upon for compensation, the guardian is either the manufacturer or the programmer. He is considered the guarantor who directly controls the configuration, as he manufactured the robot and exercises technical oversight over the content of its internal configuration and programming.

 

If it is proven that the damage occurred due to an internal defect, whether in manufacturing or programming, the guardian is responsible for compensating the injured party, as he has actual authority over the robot [21].

 

The Ineffectiveness of the Guardianship Theory as a Regulator of Civil Liability for Intelligent Applications

The ineffectiveness of the guardianship theory in determining civil liability for intelligent applications is confirmed because damages caused by intelligent applications in the field of private international relations require determining the resulting civil liability. The traditional rule in legislation, as previously stated, is that the guardian of the intelligent application bears responsibility, and the court that hears the claim for compensation is the court of the nationality of the debtor of the personal obligation, or of the foreigner residing in the country, based on Articles 14 and 15 of the Iraqi Civil Code [22]. The scope of civil liability here falls within the personal obligation the guardian bears for the damages caused by intelligent applications [23].

 

Perhaps the reason for this ineffectiveness lies in the inappropriateness of the presumed fault. Legal jurisprudence agrees that liability for the guardianship of objects rests on a presumed fault, namely the guardian's failure to exercise due care and caution over the objects under his actual control; but jurisprudence differs over this presumption: is it a rebuttable presumption, capable of being proven otherwise, or a conclusive presumption that cannot be [24]?

 

Based on the above, the presumption of the guardian's fault is ineffective for intelligent machines because the concepts of caution and care are futile in this context. The requirements of diligence, caution, and care that the law imposes on the guardian to prevent damage from objects under his control are incompatible with the nature of intelligent applications. Operating by artificial intelligence, they demand a special kind of actual control, unlike the traditional method of control and monitoring. This is especially true where these applications are characterized by autonomous decision-making and discretion, independent of the authority of their operator, as with self-driving cars. Actual control therefore cannot be achieved in the field of artificial intelligence, because these applications perform most of their actions independently of their operator or of the person actually controlling them, who may be an owner, a tenant, a borrower, or even a thief or a usurper [25].

 

It appears that the ineffectiveness of relying on the theory of presumed fault in determining liability arising from damages caused by intelligent applications in private international relations stems from three arguments. 

 

The first is that the ineffectiveness is due to the special nature of applications incorporating artificial intelligence, given that they are programmed and possess superior intelligence. The liability stipulated in civil laws and legislation concerns inanimate, static objects; robots fall entirely outside that perspective, and existing legal texts therefore cannot be applied to them.

 

The second argument is that, even assuming the concept of guardianship is accepted as a basis for liability for damages from intelligent applications, it can be envisioned only for applications with limited or weak intelligence [26] operating within a specific area, such as a factory, which a guardian can monitor and control. For machines with strong or super intelligence, such as self-driving cars and autonomous robots with independent decision-making and sensing capabilities, the concepts of presumed fault and guardianship cannot succeed in determining personal civil liability, because the error emanating from such a machine is independent, or semi-independent, of its guardian or supervisor.

 

The third argument is that the elements of fault, such as perception and discernment, which constitute its components as previously explained and on which a person's liability for proven or presumed fault rests, cannot be imagined in highly intelligent applications that manage themselves independently of their users and can make decisions to some extent. Basing civil liability on the presumed fault of the guardian of intelligent applications is therefore ineffective, and somewhat unjust, in this regard.

 

The Third Section

Determining the Civil Liability of the Responsible Person According to Modern Theories of Error in Intelligent Applications

Although the guardianship theory has fallen short in determining civil liability for damages from intelligent applications, and legislation has created other, modern theories as alternatives to it, such as the theory of liability for defective products and the human agent theory, these too remain ineffective in determining civil liability for the use of intelligent applications. To explore this topic, we will devote a point to each of the newly developed theories and demonstrate its ineffectiveness in determining civil liability for intelligent applications.

 

To assess the effectiveness of determining civil liability for damages from intelligent applications, we must rely on the theory of liability for defective products: first understanding the concept of liability for defective products, and then the ineffectiveness of defining civil liability rules for damages from intelligent applications under product liability.

 

Accordingly, we will first address intelligent technology errors according to the theory of liability for defective products. We will then examine the effectiveness of defining civil liability for damages from intelligent applications on the basis of the human agent theory, as follows:

 

Intelligent Technology Errors According to the Theory of Liability for Defective Products

Civil liability for damages from intelligent applications can be defined on the basis of liability for defective products. We will demonstrate the ineffectiveness of subsuming intelligent-application liability under product liability, as follows:

 

Defining Civil Liability for Damages from Intelligent Applications Based on Liability for Defective Products

The American legislature adopted this type of liability [27]. At the level of American law, the landmark New York case MacPherson v. Buick Motor Co. (1916) is considered the first step toward modern product liability law, leading to the collapse of the privity barrier that had prevented recovery in negligence actions by those not in privity of contract. By 1955, James, citing MacPherson, could state that the citadel of privity had fallen, although Maine was the last state to follow, and did so by statute only in 1982. MacPherson grounded liability established by law on the inadequate safety and security of defective products: the manufacturer of a product is liable for damage resulting from a defect in it, whether or not he contracted with the injured party [28].

 

Liability under American jurisprudence is objective in nature; it does not take the element of fault into account. The injured party is not required to prove fault, only the existence of a defect in the product, that is, the failure of the product offered for sale to meet safety and security specifications [29].

 

At the European Union level, Article 1 of the European Directive on liability for defective products provides: "The producer shall be liable for damage caused by a defect in his product" [30].

 

European and American legislators have adopted a new standard of liability, the consumer-expectation standard, in place of the old standard of caution and prudence. Under the old standard, a person escaped civil liability for unmet safety specifications if he had taken all necessary precautions and measures and exerted every possible effort; under the new standard, the defect that grounds liability is determined on an objective basis, by reference to the legitimate expectations of consumers [31].

 

This liability is characterized by its special legal nature, as it establishes a special system of civil liability that applies to all those harmed by product defects, regardless of the nature of their relationship with the product or the degree of danger of the products. Therefore, it is neither contractual liability nor tort liability. The purpose of this liability is to achieve equality among those harmed and eliminate the inequality that arises depending on whether there is a contractual relationship between the harmed party and the product. This liability is also characterized by its mandatory rules, meaning that it is a matter of public order, such that any agreement or condition that results in exemption from liability is void.

 

As for the relationship between defective products and intelligent applications, Arab and Western jurisprudence has addressed the establishment of civil liability for damages caused by robots based on defective products. This jurisprudence stipulates that for liability to be established, such that the manufacturer is liable for damages caused by the robot, three elements must be met:

 

  • First: the presence of a defect in the robot. A defect exists when the product lacks safety and security.

  • Second: damage, which is among the most important elements of liability; its occurrence is sufficient ground for liability and a claim for compensation, while if it is proven that no damage occurred there is no basis for liability, since liability stands and falls with the damage.

  • Third: causation; it falls upon the injured party to prove the relationship between the damage and the fault [32].

 

Some jurisprudence suggests the need to further establish and clarify some aspects of European regulations regarding intelligent applications, including their inclusion under the legal concept of the term "product," and to examine consumer expectations and non-pecuniary damages, as these aspects will generate controversy in the future [33].

 

Another jurisprudential view holds that the current European approach to regulating damage caused by robots is insufficient and that it conflicts with robots capable of self-learning, which require special regulation. This prompted the European Parliament to issue, in 2017, the document known as the Civil Law Rules on Robotics [34].

 

The Ineffectiveness of Claiming Liability for Intelligent Applications Under Product Liability

Despite the jurisprudential disagreement over applying product liability provisions to intelligent applications, what concerns us here is the adequacy of determining civil liability resulting from the use of intelligent applications under product liability. In other words, can product liability rules be sufficient and effective in determining the competent court to consider damages caused by robots or intelligent applications within the scope of private international relations?

 

Although legislation has not answered this question, legal logic dictates the answer, and that answer is naturally no: the rules of liability for defective products established in European and American legislation remain insufficient to determine civil liability for the use of intelligent applications. The difficulties in applying them stem from several arguments.

 

The first argument is the absence of any reference, in defective product liability legislation, to the competent court for damages resulting from the use of intelligent applications within the scope of private international relations. Rather, these laws regulate the elements of product liability, the responsibility of the manufacturer or supplier for damages caused by these products, and the assessment of compensation, all within the framework of national internal relations between individuals rather than international ones [35].

 

The second argument is that jurisprudence has not agreed on attributing civil liability for damages caused by intelligent applications to product liability. Civil liability for the use of intelligent applications therefore cannot be confined to this type of legislation, and the provisions of defective product liability legislation are ineffective in determining the competent court to hear intelligent application damages.

 

The third argument is that, under American law, the most significant obstacle to determining civil liability for the use of intelligent applications is the difficulty of determining the court that will hear the compensation claim based on the nationality of the responsible person. American law recognizes multiple parties responsible for defective products, as is clear from Section 3 of the Defective Product Liability Act, titled "Responsibility of the Seller or Commercial Distributor for Damage Resulting from Defective Products" [36]. The difficulty arises when those responsible for the defective product (producer, manufacturer, distributor, seller) hold different nationalities, making it difficult to determine the competent court to hear the compensation claim.

 

The fourth argument is that liability for a defective product does not depart from the traditional relationship between the person and the thing on which the guardianship theory is based. All the problems raised in determining civil liability for damages from intelligent applications under the guardianship theory, in terms of presumed fault and actual control, arise here as well.

 

The Effectiveness of Determining Civil Liability for Damages from Intelligent Applications Based on the Human Representative Theory

Clarifying first the origin of the human representative theory will make clear its ineffectiveness in determining civil liability for the use of intelligent applications.

 

Defining Civil Liability According to the Human Representative Theory

As previously explained, the European legislator addressed robotics in the Civil Law Rules on Robotics issued on February 16, 2017, devising a new theory in this field known as the "human representative responsible for robots" [37]. The human representative theory is based on proven fault on the part of the manufacturer, operator, owner, or user [38].

 

Thus, European law did not treat robots as part of the concept of things, but granted them a special legal status. The European Union's position rests on the philosophy that artificial intelligence is harnessed to serve humans: robots are creations of the intelligence attributed to machines, obedient servants of humans, but not mere inanimate objects lacking reason. The evidence lies in describing the humans answerable for them as representatives, not guardians or supervisors. Robots, on this view, operate with human-like logic and are capable of development and rationalization as a result of imitating the human mind through technology.

 

This theory differs from the theory of guardianship over mechanical applications. Although both theories rest on the presumed fault of the responsible person [39], they differ in the nature of the objects requiring special care, because the description of a representative differs from that of a guardian. Artificial intelligence cannot be considered a legal subordinate of a human being: a subordinate is bound to the principal by a relationship of subordination, not of representation, and the principal exercises full oversight over a fully competent subordinate and can sue the subordinate. European law instead imposes civil liability on the representative, because it is impossible to impose it on the robot itself, not as a human subordinate, but as a machine with a special legal status that currently serves humans [40].

 

Under this theory, the following question arises: Who is the human representative supposed to be responsible for the robot? 

 

To answer this question, legal jurisprudence has put forward several hypotheses to define who the human representative is [41]. The first hypothesis concerns the manufacturer: the factory owner is liable for defects in the applications attributable to a manufacturing defect that led the application to adopt abnormal behavior causing harm, as when a self-driving car deviates from its normal path and causes a traffic accident. In addition to manufacturing defects, the manufacturer is liable for negligence in maintenance, such as a company's failure to update the GPS system.

 

The second hypothesis concerns the operator: a professional who operates intelligent applications, such as a virtual central bank that runs an intelligent application relying on robots to manage part of its banking operations. Errors may occur in customer bank accounts, such as deleting figures from some customers' accounts.

 

The third hypothesis concerns the owner: a person who operates robots for their own services, such as someone who uses self-driving cars to transport their customers.

 

The fourth hypothesis concerns the user: a person who uses the robot independently and who is responsible for the robot's behavior that causes harm to others. A person may thus be the representative of an intelligent device and be liable for its damages once the harmful act and the causal relationship are proven. Since legal frameworks do not allow the robot, as the actual perpetrator of the error, to be held personally accountable, liability rests with the representative, whom French jurisprudence calls the robot's counterpart.

 

According to European law, civil liability may be established against the human representative for damages caused by intelligent applications to the customers of the company that owns or operates the robot. If an intelligent application fails to perform the task assigned to it for the benefit of the person contracting with the company, or performs it in a manner contrary to the agreement, the aggrieved contracting party is entitled to compensation.

 

Ineffectiveness of Determining Civil Liability for the Use of Intelligent Applications Under the Human Representative Theory

Assessing the adequacy of determining civil liability resulting from the use of intelligent applications under the human representative theory raises a question similar to that raised for defective product liability: is the human representative theory effective in determining the competent court to hear damages caused by robots or intelligent applications within the scope of private international relations?

 

This question can be answered in the negative. The difficulties in applying this principle are numerous and can be presented in the form of arguments.

 

The first argument is that the human representative theory has increased the difficulty of applying the traditional rules of international jurisdiction to civil liability for intelligent applications. The Civil Law Rules on Robotics of 2017 did not set out specific provisions for determining the court that hears robot damages, a gap some European jurisprudence has noted. We believe this is because the European Union does not struggle with determining civil liability for robot damages [42]: the Union relies on legal integration, or rather a legal bloc, which allows its courts to apply EU law to every incident or private international relationship occurring within the Union, while this is not the case for other countries.

 

The second argument is that the human representative theory, like liability for defective products under the guardianship theory, does not depart from the traditional relationship between the person and the thing. All the problems that arose in determining civil liability for damages from intelligent applications under defective product liability and the guardianship theory arise here as well, in terms of presumed fault and actual control.

 

The third argument is that the notion of presumed fault, upon which a person's liability for a robot rests, has become more difficult and vaguer to apply in determining civil liability. The human representative theory has drawn the person and the robot closer together, elevating the robot above the status of other inanimate objects. This makes it difficult to locate the fault: should it be attributed to the representative or to the robot? What would the ruling be if the robot were highly intelligent and there was no negligence on the part of the representative? These problems are naturally reflected in determining civil liability for damages from intelligent applications.

 

The fourth argument is that, although this theory originated in the European Civil Law Rules on Robotics, those rules remain insufficient to determine civil liability for the use of intelligent applications and have not provided a comprehensive framework for the subject.

CONCLUSION
  • Errors that cause harm occur within the context of using and operating artificial intelligence applications. They take the form of programming errors or software design errors that lead to incorrect execution of instructions, in addition to errors in input data. Errors may also occur during operation, when attempting to adapt the application to the technical tasks required of it.

  • Error, within the framework of civil liability for damages from artificial intelligence applications, refers to any personal act committed by the person responsible for operating the application or benefiting from it that causes harm to others. This error must be proven, as it is not presumed: the injured party must prove it, whether the user's error in use or the producer's error in programming the robot or other artificial intelligence applications.

  • Due to the ineffectiveness of traditional rules of civil liability for the use of intelligent applications, modern legal approaches have been proposed to replace the traditional theories with alternatives compatible with the nature of intelligent applications. This approach avoids the guardianship and defective product theories and focuses on redressing the harm resulting from the use of these applications, as this is more logical and just in the eyes of the court considering civil liability.

 

Recommendations

 

  • The study proposes restructuring the philosophy of civil liability rules by issuing provisions specific to damages resulting from the use of intelligent applications and regulating them in a separate, independent law that addresses the specific issues of civil liability arising from their use.

  • The study proposes a legislative amendment that eases the burden on the injured party by limiting their burden of proof to proving the occurrence of the harm before the court, notwithstanding the due diligence exercised by the responsible party to avoid the risks and harms resulting from the use of artificial intelligence applications. The amendment should explicitly stipulate that this obligation is an obligation to achieve a result, the breach of which constitutes a civil fault entailing civil liability for damages caused by the use or operation of artificial intelligence applications.

 

Funding Information

The author received no funding from any party for the research and publication of this article.

REFERENCES
  1. Othman, Ahmed Ali Hassan. "Implications of artificial intelligence on civil law: a comparative study." Journal of Legal and Economic Research, Faculty of Law, Mansoura University, no. 76, 2021, p. 1580.

  2. "Programming error." Wikipedia (Arabic), 1 Mar. 2022, https://ar.m.wikipedia.org/wiki. Accessed 3 Mar. 2022.

  3. "Types of programming errors." E3arabi, http://www.e3arabi.com. Accessed 3 Mar. 2022.

  4. Al-Mohammadi, Nawal Mohammed Nayel. Legal protection of open source digital works: a comparative study. Master’s thesis, University of Fallujah, 2022, p. 122.

  5. Al-Fazie, Anwar Ahmed. "Liability of computer software designers for tort: a study in Kuwaiti and comparative law." Journal of Law, Kuwait University, Academic Publication Council, 1995, p. 145.

  6. Rogerson, Simon. "The Chinook helicopter disaster conferences." IMIS Journal, vol. 12, no. 2, Apr. 2002, https://web.archive.org/web/20160410091529/http://www.ccsr.cse.dmu.ac.uk/resources/general/ethicol/Ecv12no2.html.

  7. Al-Dahiyat, Imad Abdul Rahim. "Towards a legal regulation of artificial intelligence in our lives: the problematic relationship between humans and machines." Journal of Ijtihad for Legal and Economic Studies, vol. 8, no. 5, 2019, p. 17.

  8. Al-Duwairi, Mohammed. "Challenges facing artificial intelligence practices within the framework of legal and ethical responsibility." Al-Ruwayya Newspaper, https://www.alroeya.com. Accessed 17 Feb. 2025.

  9. Guerra, Giorgia. "Evolving artificial intelligence and robotics in medicine, evolving European law: comparative remarks based on the surgery litigation." Maastricht Journal of European and Comparative Law, vol. 28, 2021, p. 807.

  10. "Self-driving car accidents." Al Jazeera, https://www.aljazeera.net/news. Accessed 23 Mar. 2024.

  11. "DeepMind has simple tests that might prevent Elon Musk's AI apocalypse." Bloomberg, https://www.bloomberg.com. Accessed 18 Apr. 2024.

  12. Al-Hakim, Abdul Majeed and Abdul Baqi Al-Bakri. The theory of obligations: sources of obligations. Dar Al-Sanhouri, 2012, p. 173.

  13. Abdel Nassar, Enas Makki. "Legal loopholes in civil liability arising from damage to electronic devices: a comparative study." Journal of Law for Legal Studies and Research, University of Dhi Qar, no. 22, 2021, p. 169.

  14. Tanago, Samir. Sources of obligation. Al-Wafa Legal Publishing House, 2009, p. 182.

  15. Abdel Nassar, Enas Makki. "Legal loopholes in civil liability arising from damage to electronic devices: a comparative study." Journal of Law for Legal Studies and Research, University of Dhi Qar, no. 22, 2021, p. 169.

  16. "American tort law." General overview of tort law in the United States.

  17. Sultan, Anwar. General legal principles. Dar Al-Jami’a Al-Jadida, 2005, p. 209.

  18. Al-Hamrawi, Hassan Muhammad Omar. "The basis of civil liability for robots: between traditional rules and modern trends." Journal of the Faculty of Legislation and Law, vol. 23, no. 2, 2021, p. 82.

  19. Abdel Nassar, Enas Makki. "Legal loopholes in civil liability arising from damage to electronic devices: a comparative study." Journal of Law for Legal Studies and Research, University of Dhi Qar, no. 22, 2021, p. 166.

  20. Amer, Hussein and Abdul Rahim Amer. Civil liability for tort and contract. Dar Al-Maaref, 1979, p. 599.

  21. Youssef, Christian. Civil liability for the act of artificial intelligence. Al-Halabi Legal Publications, 2022, p. 38.

  22. Iraqi Civil Code, arts. 14–15.

  23. Abdel Nassar, Enas Makki. "Legal loopholes in civil liability arising from damage to electronic devices: a comparative study." Journal of Law for Legal Studies and Research, University of Dhi Qar, no. 22, 2021, p. 183.

  24. Abdullah, Hoda. Prospects of civil liability in light of legal texts and jurisprudential and ijtihad opinions: a comparative study. Al-Halabi Legal Publications, 2020, p. 361.

  25. Wahba, Abdul Razzaq Ahmed Sayed Ahmed Mohammed. "Civil liability for damages due to artificial intelligence: an analytical study." Journal of Generation of In-Depth Legal Research, no. 43, 2020, p. 23.

  26. Sadiq, Ahmed Tariq. Fundamentals of artificial intelligence. Dar Al-Dhakira for Publishing and Distribution, 2016, p. 16.

  27. Howell, Geraint. Comparative product liability. Dartmouth Publishing, 1993, p. 34.

  28. Navas, Susana. Robot machines and civil liability. Harvard University Press, 2020, p. 32.

  29. Navas, Susana. Robot machines and civil liability. Harvard University Press, 2020, p. 33.

  30. Craig, Paul and Gráinne De Búrca. EU law: text, cases and materials. Oxford University Press, 2012, p. 28.

  31. Muhammad, Ali Fawzi. Liability for defective products: a comparative study. Alexandria Press, 2010, p. 63.

  32. Craig, Paul and Gráinne De Búrca. EU law: text, cases and materials. Oxford University Press, 2012, p. 31.

  33. Craig, Paul and Gráinne De Búrca. EU law: text, cases and materials. Oxford University Press, 2012, p. 37.

  34. Al-Khatib, Muhammad Irfan. Artificial intelligence and the law: a critical comparative study of French and Qatari civil legislation in light of European rules in the civil code for robots of 2017 and the European industrial policy for artificial intelligence and robots of 2019. Dar Al-Nahda Al-Arabiya, 2020, p. 78.

  35. Product Liability Directive (85/374/EEC), Council of the European Union, 1985.

  36. US Defective Product Liability Act, 2015.

  37. Al Kaakour, Nour. Artificial intelligence and civil liability. Detectable Civility, 2017, pp. 116–131.

  38. Al Muhairi, Neela Khamis Mohammed Kharour. Civil liability for robot damages: an analytical study. PhD thesis, UAE University, 2020, pp. 53 ff.

  39. Al-Hamrawi, Hassan Mohammed Omar. "The basis of civil liability for robots: between traditional rules and modern trends." Journal of the Faculty of Sharia and Law, Dakahlia University, vol. 23, no. 2, 2021, p. 1087.

  40. Al Muhairi, Neela Khamis Mohammed Kharour. Civil liability for robot damages: an analytical study. PhD thesis, UAE University, 2020, p. 72.

  41. Al-Qousi, Hammam. "The problem of the person responsible for operating a robot: the impact of the human deputy theory on the future effectiveness of law." Al-Jeel Journal for In-Depth Legal Research, no. 25, 2018, p. 77.

  42. European civil law on damages caused by intelligent robots.