Intellectual Property, Information Technology & Cybersecurity

Artificial Intelligence and Professional Responsibility

The legal profession stands at a crossroads between tradition and technology. The rapid emergence of generative artificial intelligence (GenAI) tools, such as ChatGPT and Google Gemini, has introduced unprecedented opportunities for efficiency and innovation. Yet, these same tools have also created new ethical and professional challenges. Recent judgments across the United States and the United Kingdom provide a stark warning that while technology can radically transform legal practices, it can never absolve a legal practitioner from his or her professional duties.

This growing awareness is increasingly reflected even within AI developers themselves. An example of this can be seen in OpenAI’s latest update to its ChatGPT usage policies, announced on 29 October 2025, explicitly restricting the use of its services for “automation of high-stakes decisions in sensitive areas without human review,” including legal matters. It further clarifies that users “cannot use our services for the provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” While this policy does not prohibit the provision of general legal information, it underscores an important shift: a recognition that GenAI must remain a tool under professional oversight, not a replacement for it. The ramifications of this move extend beyond mere compliance; they signal a broader recalibration in how legal professionals engage with AI.

Understanding the Technology

For lawyers, understanding the technology on which they rely is no longer a matter of convenience but one of professional competence. In recent years, courts in the U.S. and the U.K. have consistently stressed that legal practitioners must approach GenAI with caution and verify its output against authoritative sources. Blind reliance on these systems, which can produce fabricated yet convincing material, referred to as “hallucinations”, has led to sanctions, reputational harm, and disciplinary proceedings.

U.S. Case Law

The first and most widely publicised case arose in Mata v. Avianca, Inc. (S.D.N.Y. 2023). In this case, counsel for the plaintiff relied on ChatGPT to draft legal submissions and generate case citations. When opposing counsel and the court questioned the existence of these authorities, the lawyers persisted, submitting further materials containing wholly fictitious judgments and citations. The Judge imposed a sanction of USD 5,000 upon the lawyers and their firm. Whilst the court acknowledged that there was nothing inherently improper in using AI tools, it condemned their deployment as a substitute for competent research.

In Shahid v. Esaam, the Georgia Court of Appeals uncovered numerous fictitious case citations within a trial court’s order, prepared by counsel using AI. The appellate court vacated the decision, expressing concern that the lawyer had continued to rely on false citations even after their authenticity had been challenged. Counsel was fined USD 2,500, with the court echoing U.S. Chief Justice John Roberts’s 2023 Year-End Report on the Federal Judiciary, which cautioned that “any use of AI requires caution and humility”.

In Lacey v. State Farm (2025), a California judge imposed USD 31,000 in sanctions on two law firms after receiving a brief containing “false, inaccurate, and misleading legal citations and quotations.” The Judge remarked that “no reasonably competent attorney should outsource research and writing” to AI, criticising the use of tools such as Google Gemini and Westlaw’s CoCounsel without proper verification.

U.K. Case Law

British courts have taken an equally firm stance. In Ayinde v. London Borough of Haringey [2025] EWHC 1383 (Admin) and Hamad Al-Haroun v. Qatar National Bank QPSC and QNB Capital LLC, the High Court issued an unambiguous warning: lawyers remain fully accountable for any material submitted under their name, whether produced by a human or by AI.

In Ayinde, a barrister submitted pleadings containing five fictitious case citations and other stylistic indicators of AI generation. In Al-Haroun, a solicitor filed a witness statement citing forty-five authorities, eighteen of which were found to be non-existent. Both practitioners were fined £2,000 and referred to their professional regulators. The Court observed that GenAI cannot replace professional judgment and that these obligations should apply “from the earliest stages of training.”

The same warning was reiterated in ANPV & SAPV v. Secretary of State for the Home Department [2025], where an immigration barrister was discovered to have used AI to prepare submissions containing “entirely fictitious” or “wholly irrelevant” authorities. The Upper Tribunal Judge criticised the barrister’s attempt to conceal his reliance on AI and referred the matter to the Bar Standards Board, noting that such conduct “wasted the tribunal’s time” and undermined the administration of justice.

Implications for Maltese Legal Practice

Although no Maltese case law has yet addressed the professional use of generative AI, the direction of foreign courts provides valuable guidance. The emerging principle, consistent across both the U.S. and the U.K., is that while AI may assist, it does not excuse. The duty to verify, to act diligently, and to maintain professional integrity remains squarely with the lawyer.

Within the Maltese context, the Code of Ethics and Conduct for Advocates imposes duties of competence, diligence, integrity and honesty. These obligations extend to any technological tool employed in practice. Thus, if a lawyer incorporates AI-generated material into pleadings, advice, or correspondence, they bear full responsibility for its accuracy and reliability.

Although the EU Artificial Intelligence Act does not specifically regulate the legal profession, its framework makes clear that deployers (users) of AI systems, and not only providers, have defined obligations under the law. Nonetheless, it must be emphasised that the Act does not yet establish a comprehensive, uniform civil-liability regime for users in all contexts. For Maltese legal practitioners, this means that the use of AI tools does not reduce their responsibility or professional accountability. Lawyers must treat AI-generated content with appropriate scepticism, verify its outputs, maintain oversight, and document the controls they apply. Failure to do so could expose them to civil liability, disciplinary sanction, or reputational harm, as foreign precedents demonstrate.

Conclusion

Legal pronouncements in both the United States and the United Kingdom make it abundantly clear that the prevailing judicial approach is one of caution: generative artificial intelligence cannot be relied upon without verification, and any lawyer who fails to confirm the accuracy of AI-generated material will bear full responsibility for any resulting inaccuracies. While it is not unlawful for a lawyer to employ AI tools in the course of their work, the case law of both jurisdictions delivers a consistent and unequivocal warning against relying on their output unchecked.

For Maltese legal practitioners, this serves as a cautionary tale. Although local case law on the matter has yet to emerge, the direction of foreign judgments provides valuable foresight. The use of AI does not diminish accountability. No technology, however advanced, can absolve a lawyer or law firm from their professional duties of competence, integrity, and diligence. Courts in both the U.S. and the U.K. have demonstrated, through the imposition of fines and sanctions, that professional responsibility remains personal and non-delegable, even when technology is involved.

Automated Decision-Making, the Right to an Explanation and Trade Secrets

On 27 February 2025, the First Chamber of the Court of Justice of the European Union (CJEU) delivered a judgment on a preliminary reference, balancing a data subject’s right to an explanation of automated decision-making against the protection of trade secrets.

Facts:

The case concerned CK, whose creditworthiness was assessed via automated decision-making (“ADM”) by the third-party company Dun & Bradstreet Austria GmbH (“D&B”), resulting in the rejection of an application for a mobile phone contract. CK challenged the outcome before the Austrian Data Protection Authority, which ruled in his favour on account of D&B’s failure to explain the logic behind the ADM. D&B claimed that further disclosure would endanger its trade secrets, and although the Federal Administrative Court upheld the authority’s decision, the Vienna City Council declined to enforce it, stating that D&B had met its legal obligations. The case was ultimately referred to the CJEU.

Opinion of the CJEU:

The CJEU addressed two main issues:

  1. the scope of the data subject’s right of access under Article 15(1)(h) of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (the “GDPR”) in cases involving ADM; and
  2. the potential conflict between this right and the protection of trade secrets.

The Court held that data subjects are entitled to clear, concise, and accessible explanations of the procedures and principles behind ADM, including profiling, to ensure a meaningful understanding of how personal data is processed. This supports the GDPR’s broader transparency obligations, allowing individuals to verify the lawfulness of such processing.

However, access rights are not absolute. When the requested information is considered a trade secret, the controller must submit all protected material to the competent supervisory authority or court, which will then balance the data subject’s right of access with the legitimate interests of protecting intellectual property or trade secrets. If necessary, safeguards must be implemented to prevent disclosure that would harm competing rights or freedoms, while still ensuring an appropriate level of transparency for the data subject.

Conclusion:

The CJEU, in its decision, confirmed that the mere existence of a trade secret and its potential conflict with data access rights cannot in itself justify denying access, particularly to the logic which underlies the ADM. Any disclosure of such trade secrets remains, however, subject to confidentiality safeguards, thereby striking a balance between the two rights. This is especially relevant considering the rise in the use of ADM, particularly through the prolific use of AI models.

The information provided in this Insight does not, and is not intended to, constitute legal advice. All information, content, and materials available are for general informational purposes only. This Insight may not constitute the most up-to-date legal information and you are advised to seek updated advice.
