The Sedona Conference's AI-Related Activities

Since generative artificial intelligence (AI) burst onto the scene at the end of 2022, several Sedona Conference Working Group Series members have asked, "What is Sedona doing in AI?"

Some Working Groups have already started projects centered on AI and the law, and others will soon. To assist members in deciding how they would like to allocate their valuable volunteer time and to help coordinate activities across the Working Groups, below is a short description of each active Working Group's current AI-related activities. These brief descriptions are subject to change, and we realize that there are some overlaps and significant gaps. But AI is a moving target, to say the least, and we will keep members apprised of developments.

For now, be on the lookout for "Calls for Volunteers" for specific projects and circle April 4-5 on your 2024 calendar to meet in Reston, Virginia for The Sedona Conference on AI and the Law.

WG1 (eDiscovery and electronic document management): Technologies and processes that may expedite – and perhaps enhance – search and retrieval are critical underpinnings of our modern discovery processes. Full text indexing, Boolean searching, analytics, and technology-assisted review (TAR), for example, have all had significant impact on how we process, review, and produce information in legal proceedings and investigations. Considering the recent and increasing embrace of generative artificial intelligence, especially in the legal world, WG1 is forming a brainstorming group to consider whether guidance may be beneficial on the use of machine learning (ML) and artificial intelligence (AI), including generative AI, in discovery. Specifically, the new brainstorming group will consider topics including:

  • When, and how, may AI and ML be appropriate and useful in the discovery process? When might they be less appropriate or useful?
  • To what extent – now and potentially in the future – do AI and ML overlap with TAR or other technology-based tools and processes, and with our prior guidance on those tools and processes?
  • Are there recommendations around AI and ML that may help encourage their use and acceptance by courts and practitioners, if and when appropriate? For example, a code of conduct, guidelines, or a similar framework for use of AI and ML. Would it be helpful to look back at our experience with the development of law and practice around TAR?
  • AI and ML in relation to discovery topics, including those that may have been subjects of prior Working Group 1 guidance, such as Sedona Principle 6, FRCP required disclosures, and evidentiary issues.
  • Ethical requirements of lawyers relating to AI and ML, including certification under FRCP 26(g) and ABA model code (and state bar) requirements on supervision and technology competence, and the use of AI and ML tools with client information.
  • What format and type of guidance from Working Group 1 would be helpful to courts and practitioners? For example, a document providing guidelines, a primer, and/or educational outreach and resources? And how would WG1 maintain the relevance of such guidance in light of the rapidly evolving AI landscape?

WG6 (cross-border discovery and privacy): WG6 has nothing AI-specific in the works but will be incorporating AI issues in the context of other matters, specifically with regard to the Brainstorming Group on cross-border information governance and updates to the International Litigation Principles.

WG7 (Sedona Canada): A WG7 drafting team has been working for the past six months on a Primer on AI. We anticipate a 15-to-20-page paper that addresses AI at the 50,000-foot level and can be handed to a client or partner who doesn’t know what AI is or how it’s being used in the legal field. It will cover many different areas, but not in great depth, being more definitional and introductory in focus. It will include a glossary of terms and will cross-reference other Working Groups’ work as appropriate. The Primer will not be Canada-specific, so it could serve as a foundational document for other Working Groups. When completed, it will be translated into French.

WG9 (patent damages/remedies): As the use of AI tools continues to expand across society, it remains unsettled whether the unauthorized use of third-party IP rights in the course of such uses constitutes infringement, and if so, how liability should be determined and imposed. More specifically, with respect to patent infringement and damages (the focus of WG9), questions include 1) whether training an AI tool on patented subject matter without the patentee’s authorization constitutes infringement; 2) whether an AI tool’s conduct (such as performing a patented process) or output that is covered by an existing patent constitutes infringement; and 3) if so, who is liable (e.g., the AI designer, the AI user, or the AI owner)? While the WG cannot alone answer these questions, it might consider advancing certain principles, or advocating for certain theories of liability over others. At minimum, it might consider monitoring court developments in these areas as cases begin to arise.

WG10 (patent litigation best practices): In recent months, there have been several high-profile cases of courts sanctioning attorneys who used AI to generate briefs and legal arguments without independently verifying the accuracy of the citations. In one such case, the citations were to fictional opinions and inaccurate statements of law. Following these cases, several courts have issued standing orders and local rules requiring attorneys to disclose their uses of AI in court practice to varying degrees. Some orders appear to require disclosure of any and all uses of AI, while others are limited to uses that implicate confidential client information. To date, there are no consistent standards or “best practices” in this area. WG10 could consider recommending a set of best practices for courts to employ, particularly in view of attorneys’ existing ethical obligations under the ABA model rules.

WG11 (data security and privacy): Artificial intelligence (AI) has the potential to solve some of the world’s most complex problems and to bring about a sea change of innovation, but it also raises myriad challenges for the future of work, privacy, and data governance. AI, in all of its forms, requires data, and often the more data the better. This raises questions about where such data is collected from, how it is used and processed, how it is secured, the ways in which AI can be harnessed for cyberattacks or used to bypass security measures or exploit security and privacy vulnerabilities, and the possible need for consent, along with the form and mechanism for obtaining it. While many jurisdictions are beginning to consider and enact laws to address these questions, this body of law remains in a nascent state and much remains to be discussed and decided. WG11 is forming a brainstorming group to consider these topics and to analyze whether any legal issues related to privacy, consent, and cybersecurity impacting the development and use of AI warrant a drafting team effort to prepare a Commentary.

WG12 (trade secrets): AI, its constituent parts (including algorithms and inputs), and its outputs can be tools, protectible intellectual property (IP), or both. AI, its constituent parts, and its outputs can also be trade secrets. Importantly, if those outputs are generated by AI alone – that is, produced by generative AI without any human involvement – then patent and copyright protection are not available under current U.S. law. AI, IP law and practices, and privacy law and practices may need to be considered together. In some circumstances, competing interests, such as protective measures, data minimization, and data access, may need to be balanced. With respect to trade secret law and practices, AI implicates multiple areas, including trade secret identification, trade secret management, the duration of trade secret status, and the employee life cycle.

Technology Resource Panel (TRP): The TRP will work with the Working Groups, especially WG7 and its AI Primer drafting team, to make sure terminology is being used consistently across all Sedona publications and reflects standard use in the legal technology community. This will likely require significant updates to the Glossary, which are already underway. TRP members will also be available to review Working Group drafts or to answer technical questions from Working Group drafting teams.

Cooperation Proclamation Resources for the Judiciary: The co-editors of the judicial resources are collecting standing orders, local rules, court decisions, and scholarly articles addressing the various case management issues raised by AI as part of an overall update leading to a projected Fourth Edition in 2024. Submissions are actively encouraged and will be vetted by a panel of state and federal trial judges for their value as guidance to the judiciary.

The Sedona Conference Journal: Going back to the roots of The Sedona Conference, we will be holding a conference on AI and the Law, April 4-5, 2024, in Reston, Virginia. This conference will feature six to eight panels on the impact of AI in several areas of the law, culminating in a panel on state, federal, and international regulatory initiatives. The conference will be open to the public, and we hope to draw participants, dialogue leaders, and sponsors from outside the “Sedona Bubble.” Each panel will be responsible for drafting a law-review-quality paper (independently of any Working Group project) in advance that will be subjected to live “peer review” as part of the conference and subsequently published in a special symposium edition of The Sedona Conference Journal in Summer 2024.

Announcement Date: 
Thursday, August 31, 2023