Chatbots: the legal marketing device that could get you into trouble

Chatbots are now being used in legal marketing to help lawyers find valuable clients. The technology is essentially a computer program, powered by artificial intelligence, that simulates conversation with people. Potential clients who visit a firm’s site can type questions and comments into a chat box and, when doing so, they think they are speaking with a real person. The bot collects contact information and other details about the potential client’s case, asks the potential client follow-up questions, analyzes the data, and passes that information along to the lawyer. The chatbot companies say their AI sifts out the tire kickers and identifies valuable prospects for the firm, thereby improving conversion rates.

The chatbots are provided by tech vendors. A lawyer contracts with a vendor that offers the chatbot software, the vendor provides a bit of code that is inserted into the lawyer’s website, and a chat box becomes part of the site. Someone visiting the website wouldn’t know that a vendor is involved at all; it simply looks like a chat box built into the lawyer’s own site.
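To see what that “bit of code” typically looks like, here is a hedged sketch of the kind of embed snippet a vendor might supply. The vendor URL, data attribute, and account ID are invented for illustration; any real vendor’s snippet will differ.

```typescript
// Hypothetical embed snippet. The vendor URL, dataset field, and firm ID
// below are invented for illustration, not a real product's API.
const vendorScript = document.createElement("script");
vendorScript.src = "https://widget.example-chat-vendor.com/loader.js"; // vendor-hosted code
vendorScript.async = true;
vendorScript.dataset.firmId = "your-firm-account-id"; // ties chats to the firm's account
document.body.appendChild(vendorScript);
// Once loader.js runs, the vendor's code renders a chat box styled to match
// the site, so a visitor sees nothing suggesting a third party operates it.
```

The detail that matters for everything below: the vendor’s servers control what the bot says, yet to the visitor it all appears to come from the firm.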

Using a chatbot isn’t necessarily off limits. What you need to be concerned about is the nature of the exchange between the bot and the potential client. It’s a problem, for instance, if a chatbot engages in conversation with a potential client and actually dispenses legal advice. But chatbots aren’t likely to be programmed to give advice. They are, however, programmed to engage in conversation. They talk to the potential client to learn about their case so they can weed out the garbage contacts from the good prospects. And it’s that conversation that could create problems.

During that conversation, the prospect will provide information about their case. What we need to worry about is that people who visit the lawyer’s site and chat with the bot may end up being considered “prospective clients” under Rule 1.18. If they attain that status, the lawyer could have conflict problems. To see what I mean, first understand how the rule works.

How Rule 1.18 Works

Rule 1.18 says that if a person “consults with a lawyer about the possibility of forming a client-lawyer relationship,” that person could be a prospective client. Consulting about that possibility is all it takes. But why should a lawyer care whether someone is technically considered a “prospective client”?

First, you can’t tell anyone the information that the prospective client gave you. Rule 1.18(b) explains that “Even when no client-lawyer relationship ensues, a lawyer who has learned information from a prospective client shall not use or reveal that information…” Second, you might be conflicted out of representing someone else in the future. Even if you never take the prospective client and never work on their matter, subsection (c) says that if you received information from the prospective client that could be significantly harmful to that person, and a different person later asks you to represent them against the prospective client in the same or a substantially related matter, you might not be permitted to do so. You would be conflicted out of the representation.

That could be devastating. Think about it: if you have a consultation with someone about a lucrative matter and decide not to take their case, but later you are approached by someone who wants you to represent them in that very case, you can’t take that other client. You could be forced to forgo a lot of money in fees.

The problem with chatbots

When it comes to Rule 1.18, what matters is the trigger for becoming a prospective client. As the rule above shows, that trigger is a consultation. The key question, of course, is when an interaction rises to the level of a consultation. The answer depends on the circumstances, and the circumstances to focus on are your website text and the content of the chatbot’s communications.

If your website just lists your contact information, you’re going to be okay. If you simply put your information out there and someone sends you information about a case, that’s not going to create a prospective-client relationship. Comment [2] confirms that: “…a consultation does not occur if a person provides information to a lawyer in response to advertising that merely describes the lawyer’s education, experience, areas of practice, and contact information, or provides legal information of general interest.” Basically, that comment says that if you simply tell someone you exist and are qualified, it’s not a “consultation.” Someone who replies in that situation “communicates information unilaterally to a lawyer, without any reasonable expectation that the lawyer is willing to discuss the possibility of forming a client-lawyer relationship.” That person, therefore, is not a prospective client.

However, you’re going to have a problem if your website encourages people to submit information and your chatbot follows up by engaging with them. The comment explains that “…a consultation is likely to have occurred if a lawyer…through the lawyer’s advertising in any medium, specifically requests or invites the submission of information about a potential representation without clear and reasonably understandable warnings and cautionary statements that limit the lawyer’s obligations, and a person provides information in response.”

If your site specifically requests or invites a person to submit information about a potential representation, and the person provides information in response, then you are risking the creation of a prospective-client relationship. The chatbot heightens the danger because its questions and replies are themselves a specific request for information about the representation. The more lengthy, probing, and detailed the bot’s side of the conversation, the more the exchange looks like an invitation that the visitor answered, and the more likely there will be a problem.

Oh, and don’t get hung up on the fact that it’s software, rather than a lawyer, doing the asking. If the bot requests and gathers information, I think a tribunal will see the software as an extension of the lawyer. Plus, if the AI is doing its job correctly, the potential client should believe they are actually communicating with a real person. For those reasons, I wouldn’t be surprised if a tribunal treated the chatbot’s side of the conversation as the functional equivalent of the lawyer’s own for purposes of the rule.

Of course, there is a huge get-out-of-trouble card. All you have to do is include the disclaimers described in the comment. If your site has “clear and reasonably understandable warnings and cautionary statements that limit the lawyer’s obligations,” as stated in Comment [2], you’re probably okay. This, however, is a situation where you can win the ethical battle but lose the overall war. Here’s what I mean: what if this issue isn’t raised in the context of an ethics grievance? What if it is, instead, raised in a disqualification motion? Consider this hypothetical…

Win the ethical battle, but lose the disqualification war

Let’s say you’re in a medium-sized firm that handles a variety of matters. Your firm represents Business X and has been its counsel on nearly all of its legal matters for years. The firm’s website uses a chatbot to evaluate the strength of potential new clients, and the site includes language that properly disclaims Rule 1.18 obligations. Someone visits the site and explains that they have a workplace discrimination claim. They provide details of the case to the chatbot. The bot inquires further and the prospect provides more information; in fact, the prospect wants to make sure that the lawyer they believe they are chatting with has a complete understanding of the case, so they provide a lot of details.

The chatbot sends the information to the attorney at the firm responsible for reviewing prospect data, and that lawyer thinks the prospect has a great case. After reviewing the information, the attorney contacts the prospect and learns that the adverse party is Business X. The lawyer figures that the firm will probably be representing Business X in that matter because the firm does all of its work. As a result, the firm doesn’t take the potential client.

The prospect finds another lawyer and files suit against Business X. As the lawyer anticipated, your firm is representing Business X. The prospect’s lawyer files a motion to disqualify you as counsel, and you oppose it. You claim there is no violation of the rule: the prospect never became a “prospective client” under Rule 1.18 because you had the proper disclaimer. And you’re probably right. But there is a good chance that a judge will disqualify you anyway.

That’s because the judge isn’t deciding whether discipline should be imposed; the judge is deciding whether you should be disqualified. Judges don’t necessarily care about the technicalities of the rules. They care about the two things at the core of every conflict: loyalty and confidential information.

The critical question the judge will ask is this: during the firm’s interaction with the prospect, did you learn confidential information from the other side? When the judge realizes that your chatbot gathered information that would ordinarily be considered confidential, and that it was passed on to a lawyer in your firm for review, the judge is going to say you have a conflict and kick you out of the case. The disclaimers won’t save you, because they only helped you avoid discipline under Rule 1.18. In the disqualification context, the court cares about loyalty and confidential information, and when it finds out you were privy to a slew of details about the potential client’s case, it will disqualify you.

How to make chatbots safer

This doesn’t mean that chatbots are forbidden; they just need to be used carefully. What can you do to make the chatbot safer? Here are five ideas:

  1. Use disclaimers that comply with the rules.
  2. Make sure the bot is just gathering information, not giving it. If it does give information, keep it extremely limited. Keep Comment [4] to Rule 1.18 in mind, which states, “In order to avoid acquiring disqualifying information from a prospective client, a lawyer considering whether or not to undertake a new matter should limit the initial consultation to only such information as reasonably appears necessary for that purpose.” (For one way to bake these limits into the bot itself, see the sketch after this list.)
  3. Go over Rule 1.18 with the vendor supplying your chatbot. Make sure they understand it, and explain the disqualification issue as well. Remember, most tech vendors have no idea about the details of rules like Rule 1.18.
  4. Train the staff/lawyers in your office who are responsible for following up on the leads developed by the bot. Let them know about Rule 1.18 and the issue of disqualification. 
  5. Create a process that limits the exposure of the lawyers who review the information provided by the chatbot. It is possible to screen those attorneys under Rule 1.18(d)(2). Here’s what that section states, in part:
    • (d) When the lawyer has received disqualifying information…representation is permissible if…(2) the lawyer who received the information took reasonable measures to avoid exposure to more disqualifying information than was reasonably necessary to determine whether to represent the prospective client; and (i) the disqualified lawyer is timely screened from any participation in the matter and is apportioned no part of the fee therefrom; and (ii) written notice is promptly given to the prospective client.
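To make ideas 1 and 2 concrete, here is a minimal sketch of the kind of guardrails a firm might ask its vendor to configure. Everything in it (the object shape, the field names, the disclaimer and prompt wording) is hypothetical; real vendors expose different settings, and actual disclaimer language should be drafted to satisfy Comment [2], not copied from a sketch.

```typescript
// Hypothetical intake-only configuration for a law-firm chatbot.
// Every field name and all wording below are invented for illustration.
const intakeBotConfig = {
  // Idea 1: show a disclaimer before any exchange begins and require the
  // visitor to acknowledge it. Draft the real language to meet Comment [2]'s
  // "clear and reasonably understandable" standard.
  preChatDisclaimer:
    "This chat is automated and is for intake purposes only. Submitting " +
    "information does not create a lawyer-client relationship, and the " +
    "firm owes you no duties unless it agrees to represent you.",
  requireDisclaimerAcknowledgment: true,

  // Idea 2: gather, don't give. Per Comment [4], limit the exchange to what
  // is reasonably necessary to decide whether to take the matter.
  systemPrompt:
    "Collect the visitor's name, contact details, and a brief summary of " +
    "the matter. Do not answer legal questions, assess the merits, or " +
    "suggest strategy. If asked for advice, say that a lawyer will follow up.",
  maxFollowUpQuestions: 3, // a hard cap keeps the conversation shallow
};
```

The cap on follow-up questions and the “gather, don’t give” prompt both serve the same goal: keeping the exchange short of a consultation, and keeping any information received short of “significantly harmful” under Rule 1.18(c).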