Category Archives: Attorney Ethics & Technology

The lawyer’s emerging duty not to share

As technology morphs, the lawyer’s obligation to protect client data becomes more difficult to fulfill. With each new advancement come new ways that client information could be revealed, and our duty to take reasonable steps to protect that data changes. It gets tougher and tougher to figure out what actually constitutes “reasonable steps” to protect the data as required by Rule 1.6(c). Today I believe that the question of whether a lawyer is taking “reasonable steps” to protect client data is being redefined once again. I believe that there is an evolving duty not to share information.

I’m not talking about the stupid kind of sharing, like posting a comment on social media about a client matter. I’m talking about lawyers’ less obvious ways of sharing information, and two of them concern me in particular: sharing access to our contacts and sharing access to our location.

Here’s how those two concerns appear in the practice. Some platforms we use ask us if we want to import our contacts, or provide that site “access” to our contact list. It’s most likely done because it allows the software we’re using to make communication more efficient. Or consider other instances where you share your location— not by checking in somewhere on Facebook— maybe it’s a fitness app that runs constantly in the background and tracks your location. Other times, the sharing is not so obvious. In fact, most of us probably don’t even realize that certain apps are sharing our location. When I started looking into this issue I learned that I was allowing the app that updates the firmware on my headphones to track my location. I also inadvertently gave location-tracking permission to the app that helps me organize my reimbursable expenses. I even remember reading somewhere that crossword puzzle apps sometimes track your location.

My concern is that these sources of information can be put together by the bad guys to find out lots of stuff about our practices and our clients. That’s why I see contact lists and location information as puzzle pieces.  We are revealing bits and pieces of our practice that, when put together, could end up revealing client relationships, the status of client matters, etc. 

A bad guy with access to this information could learn a slew of things: When were you at your adversary’s office? How often did you go there? Does that mean a deal is imminent? Did you accompany your client to a meeting with a bankruptcy attorney, or a white collar criminal lawyer? What if bad guys want to target a particular corporation, and they have focused on a particular corporate officer? They realize you’re the lawyer, so they hunt through your contact lists to see if you’re connected with that individual. You become the hard target…and you gave them another step toward the client. Now maybe the bad guys can find your client’s mobile number and track the client. Or maybe they learn a personal email address which allows them to send phishing emails, malware, or ransomware to the client. Plus, there’s other information people can get from our contact lists — what if the contact-sharing that you authorized also imports the notes that you keep in a contact’s entry? There could be information covered by the attorney-client privilege in those notes.

It’s true that we can’t say how this danger will actually manifest itself. We can’t say, “this is the specific app to watch out for” or “stop using this particular platform.” We don’t know how or when the bad guys will put the puzzle together. But here’s something that we do know for sure— they are trying. That is a given. Yet, despite accepting that undeniable fact, we continue to voluntarily provide the puzzle pieces, and that doesn’t seem reasonable.

Think about it — if we know that they are constantly and consistently trying to put these pieces together, is it reasonable to think, “Hey, these bad guys are scouring the internet trying to put these puzzle pieces together…so I’ll just keep giving them those pieces”? No. It’s not. Our duty is to stop helping the bad guys. Lawyers need to reconsider whether we should continue sharing this type of data.

My concern gained a bit more credibility after I recently read an article in the Wall Street Journal. (Note that we live in a politically charged environment, and I am not giving you this quote because the Trump Administration was involved, nor because it’s about immigration issues. This is about a tech concern.) According to the Wall Street Journal,

“The Trump administration has bought access to a commercial database that maps the movements of millions of cellphones in America and is using it for immigration and border enforcement…The location data is drawn from ordinary cellphone apps, including those for games, weather and e-commerce, for which the user has granted permission to log the phone’s location. The Department of Homeland Security has used the information to detect undocumented immigrants and others who may be entering the U.S. unlawfully, according to these people and documents.”

See what I mean? It’s not about the fact that the Trump Administration bought access to this commercial database. It’s about the fact that the commercial database exists at all and that anyone can purchase access to such a database. It’s a problem for lawyers because it means that people are collecting data that could reveal information about our practices and our client matters. Oh, and the kicker is that the information is being delivered by us— we are sharing it voluntarily and gifting it to the company that’s collecting it for its database.

Maybe, today, part of taking “reasonable measures” to protect confidential client information includes putting up a barrier…making it more difficult for people to gather our information…making it tougher for them to put the pieces together. Given what we know about the relentless efforts that people are making to gather and use that information, maybe we have a duty to take appropriate evasive tactics.

I think a good analogy is proper password selection. We would all agree, I’m sure, that it’s not reasonable to have a password that is your birthday, or something common like the word “Password.” Everyone would agree that it is not reasonable to use easily discoverable passwords, and that doing so is not taking “reasonable steps” to protect client information. But it wasn’t always like that. There was a time when no one considered the need for uncommon passwords. That was, of course, until people started getting hacked because of their weak passwords. Once the infiltration started, the standard changed. Today it’s simply expected that lawyers will have proper passwords. And that’s where we are headed with the duty not to share.

Up to now, every lawyer has shared their contacts and location, and we’ve never batted an eye. But times are changing. The danger of doing so is becoming apparent, and it might be time that we stop giving away the puzzle pieces. That’s why I think we are witnessing the evolution of a lawyer’s duty not to share.


Chatbots: the legal marketing device that could get you into trouble

Chatbots are now being used in legal marketing to help lawyers find valuable clients. The technology is basically a computer program, powered by artificial intelligence, that simulates conversation with people. Potential clients who visit a firm’s site can type questions and comments into a chatbox and, when doing so, they think they are speaking with a real person. The bot collects contact info as well as other details about the potential client’s case, asks the potential client questions, analyzes the data, and gives that information to the lawyer. The chatbot companies say that their AI sifts out the tire kickers and identifies valuable prospects for the firm, thereby improving conversion rates.

The chatbots are provided by tech vendors. A lawyer contracts with a vendor that offers the chatbot software, the vendor provides a bit of code that is inserted into the lawyer’s website, and a chat box becomes a part of the lawyer’s site. Someone coming to the website wouldn’t know that another vendor is involved at all— it simply looks like a chat box that is part of the lawyer’s website. 
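To make that concrete, the “bit of code” the vendor provides is typically just a small script tag that loads the widget from the vendor’s servers. Here is a minimal sketch of how such a snippet might be built; the vendor domain, file name, and attribute names are all hypothetical, invented purely for illustration:

```javascript
// Hypothetical sketch of a chatbot embed snippet. A real vendor supplies
// its own tag; the domain and attributes below are invented.
function buildEmbedSnippet(firmId) {
  return (
    '<script async src="https://widget.example-chatbot.com/loader.js" ' +
    `data-firm-id="${firmId}"></script>`
  );
}

// The firm pastes the returned tag into its site template. The loader
// script then draws the chat box so it appears native to the firm's site,
// which is why visitors can't tell a third-party vendor is involved.
const snippet = buildEmbedSnippet("smith-law-123");
```

The design point is that everything the visitor types flows through the vendor’s servers, even though the chat box looks like part of the lawyer’s own website.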

Using a chatbot isn’t necessarily off limits. What you need to be concerned about is the nature of the exchange between the bot and the potential client. It’s a problem, for instance, if a chatbot engages in conversation with a potential client and actually dispenses legal advice. But chatbots aren’t likely to be programmed to give advice. They are, however, programmed to engage in conversation. They talk to the potential client to learn about their case so they can weed out the garbage contacts from the good prospects. But it’s that conversation that could create problems.

During the conversation between the bot and the prospect, the prospect will be providing information about their case. What we need to worry about is the potential that people who visit the lawyer’s site and engage in a conversation with the chatbot end up being considered “prospective clients” under Rule 1.18. If they do attain that status, the lawyer could have conflict problems. To see what I mean, first understand how the rule works.

How Rule 1.18 Works

Rule 1.18 says that if a person “consults with a lawyer about the possibility of forming a client-lawyer relationship” they could be a prospective client. All they need to do is consult about the possibility of forming the lawyer client relationship.  But why should a lawyer care if someone is technically considered a “prospective client?” 

First, you can’t tell anyone about the information that the prospective client gave you. Rule 1.18(b) explains that “Even when no client-lawyer relationship ensues, a lawyer who has learned information from a prospective client shall not use or reveal that information…” Second, you might be conflicted out of representing someone else in the future. Even if you don’t take the prospective client and you never work on their matter, subsection (c) says that if you received information from the prospective client that could be significantly harmful to that person, and sometime in the future a different person asks you to represent them against the prospective client in the same matter, you might not be permitted to do so. You would be conflicted out of the representation.

That could be devastating. Think about it— if you have a consultation with someone about a lucrative matter and you decide not to take their case…but later you are approached by someone who wants you to represent them in that very case, you can’t take that other client. You could be forced to forego a lot of money in fees.

The problem with chatbots

When it comes to Rule 1.18, what’s important is the trigger for becoming a prospective client. As you saw in the rule above, that trigger is a consultation.  The key question, of course, is, when does an interaction rise to the level of a consultation?  The answer is that it depends on the circumstances. But the key circumstances to focus on are your website text and the content of the chatbot’s communications.

If your website just lists your contact information you’re going to be okay. If you simply put your information out there and someone sends you information about a case, that’s not going to create a prospective client relationship. Comment [2] confirms that: “…a consultation does not occur if a person provides information to a lawyer in response to advertising that merely describes the lawyer’s education, experience, areas of practice, and contact information, or provides legal information of general interest.” Basically, that comment is saying that if you simply tell someone that you exist and that you are qualified, it’s not a “consultation.” If someone replies in that situation, the person “communicates information unilaterally to a lawyer, without any reasonable expectation that the lawyer is willing to discuss the possibility of forming a client-lawyer relationship.” That person, therefore, is not a prospective client.

However, you’re going to have a problem if your website encourages people to offer information and your chatbot follows up by engaging with that person. The comment explains that “…a consultation is likely to have occurred if a lawyer…through the lawyer’s advertising in any medium, specifically requests or invites the submission of information about a potential representation without clear and reasonably understandable warnings and cautionary statements that limit the lawyer’s obligations, and a person provides information in response.”

If your site specifically requests or invites a person to submit information about a potential representation, and the person provides information in response to your chatbot’s prompts, then you are risking the creation of a prospective client relationship. And the ethical danger grows with the responsiveness of the chatbot: the more lengthy, intense, and detailed the chatbot’s exchanges with the person, the more the interaction looks like a consultation, and the more likely there will be a problem.
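Comment [2]’s “warnings and cautionary statements” language suggests one practical design: make the widget refuse to accept any case details until the visitor acknowledges a notice limiting the lawyer’s obligations. Here is a minimal sketch, assuming a simple acknowledgment flow; the class design and notice wording are my own, not any vendor’s feature or model language:

```javascript
// Hypothetical sketch: a chat widget that withholds intake until the
// visitor acknowledges a Comment [2]-style cautionary notice.
// The notice text is illustrative only, not model disclaimer language.
class GatedChat {
  constructor() {
    this.acknowledged = false;
    this.transcript = []; // case details stored only after acknowledgment
  }

  start() {
    return (
      "Before we chat: submitting information here does not create an " +
      "attorney-client relationship, and the firm assumes no obligation " +
      "to you by receiving it. Type AGREE to continue."
    );
  }

  receive(message) {
    if (!this.acknowledged) {
      if (message.trim().toUpperCase() === "AGREE") {
        this.acknowledged = true;
        return "Thanks. How can we help?";
      }
      // Refuse to record or respond to substance before acknowledgment.
      return "Please type AGREE to acknowledge the notice first.";
    }
    this.transcript.push(message);
    return "Got it. A member of the firm may follow up.";
  }
}
```

The deliberate design choice here is that nothing the visitor types pre-acknowledgment is stored at all, so the firm never receives information outside the scope of the warning.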

Oh, and don’t get hung up on the fact that your chatbot is not a “person” under the rules. If the bot provides information I think a tribunal will see the software as an extension of the lawyer.  Plus, if the AI software is doing its job correctly, the potential client should believe that they are actually communicating with a real person. For those reasons I wouldn’t be surprised if a tribunal concluded that the AI in the chatbot is the functional equivalent of a “person” for the purposes of the rule. 

Of course, there is a huge get-out-of-trouble card. All you have to do is include the disclaimers set forth in the rule.  If your site has “clear and reasonably understandable warnings and cautionary statements that limit the lawyer’s obligations” as stated in Comment [2], you’re probably ok. This, however, is a situation where you can win the ethical battle, but lose the overall war. Here’s what I mean: what if this issue isn’t raised in the context of an ethics grievance? What if it is, instead, raised in a disqualification motion? Consider this hypothetical…

Win the ethical battle, but lose the disqualification war

Let’s say you’re in a medium-sized firm that handles a variety of different types of matters. Your firm represents Business X and you’ve been their counsel on nearly all of their legal matters for years. Your firm has a website that utilizes a chatbot to evaluate the strength of new, potential clients. You have language on the website that properly disclaims Rule 1.18. Someone visits your site and explains that they have a workplace discrimination claim. They provide details of the case to the chatbot. The bot inquires further and the prospect provides more information. In fact, the prospect wants to make sure that the lawyer with whom they think they are chatting has a complete understanding of the case, so they provide a lot of details.

The chatbot sends the info to the attorney at the firm responsible for reviewing prospect data, and that lawyer thinks that the prospect has a great case. After reviewing the information, the attorney contacts the prospect and learns that the adverse party is Business X. However, the lawyer figures that the firm will probably be representing Business X in that matter because the firm does all of their work. As a result the firm doesn’t take the potential client.

The prospect finds another lawyer, and they file suit against Business X. As the lawyer anticipated, your firm is representing Business X. The prospect’s lawyer files a motion to disqualify you as counsel and you oppose it. You claim that there is no violation of the rule— the prospect never became a “prospective client” under Rule 1.18 because you had the proper disclaimer. And you’re probably right. But there is a good chance that a judge will disqualify you anyway.

That’s because the judge isn’t deciding whether discipline should be imposed — the judge is deciding whether you should be disqualified. They don’t necessarily care about the technicalities of the rules, they care about two things — the two things that are at the core of every conflict— loyalty and confidential information.  

The critical question that the judge will ask is: during the interaction the firm had with the prospect, did the firm learn confidential information from that person? And when the judge realizes that your chatbot gathered information that would ordinarily be considered confidential and passed it on to a lawyer in your firm for review, they’re going to say you have a conflict and kick you out of the case. You’re not going to be saved by the disclaimers, because those disclaimers only helped you avoid discipline under Rule 1.18. In the disqualification context the court cares about loyalty and confidential information. And when it finds out that you were privy to a slew of details from the potential client’s case, it will disqualify you.

How to make chatbots safer

This doesn’t mean that chatbots are forbidden, they just need to be used carefully. What can you do to make the chatbot safer? Here are five ideas:

  1. Use disclaimers that comply with the rules.
  2. Make sure the bot is just gathering information and not giving any information. And if it does give information, make sure it’s super limited. Keep Comment [4] to Rule 1.18 in mind which states, “In order to avoid acquiring disqualifying information from a prospective client, a lawyer considering whether or not to undertake a new matter should limit the initial consultation to only such information as reasonably appears necessary for that purpose.”
  3. Go over Rule 1.18 with the vendor supplying your chatbot. Make sure they understand it. Also explain the disqualification issue. Remember, most tech vendors have no idea about the details of rules like 1.18.
  4. Train the staff/lawyers in your office who are responsible for following up on the leads developed by the bot. Let them know about Rule 1.18 and the issue of disqualification. 
  5. Create a process that limits the exposure of the lawyers who review the information provided by the chatbots. It is possible to screen those attorneys per 1.18(d)(2). Here’s what that section states, in part:
    • (d) When the lawyer has received disqualifying information…representation is permissible if…(2) the lawyer who received the information took reasonable measures to avoid exposure to more disqualifying information than was reasonably necessary to determine whether to represent the prospective client; and (i) the disqualified lawyer is timely screened from any participation in the matter and is apportioned no part of the fee therefrom; and (ii) written notice is promptly given to the prospective client.
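Ideas 2 and 5 above can be sketched in code: cap what the bot collects to the fields reasonably necessary for an intake-and-conflicts decision, and drop anything more before a reviewing lawyer ever sees it. The field names and character cap below are my own assumptions, purely for illustration:

```javascript
// Hypothetical sketch of an intake-only policy in the spirit of
// Comment [4] to Rule 1.18: collect just enough to decide whether to
// take the matter, and reject excess that could become disqualifying.
const ALLOWED_FIELDS = ["name", "contact", "adverseParty", "matterType"];
const MAX_DETAIL_CHARS = 200; // assumed cap on narrative detail

function screenIntake(submission) {
  const accepted = {};
  const rejected = [];
  for (const [field, value] of Object.entries(submission)) {
    if (!ALLOWED_FIELDS.includes(field)) {
      rejected.push(field); // drop anything beyond the minimum fields
    } else if (String(value).length > MAX_DETAIL_CHARS) {
      rejected.push(field); // too much detail risks disqualification
    } else {
      accepted[field] = value;
    }
  }
  return { accepted, rejected };
}
```

In this sketch the reviewing lawyer would see only `accepted`; the `rejected` list tells the firm what the bot was offered but deliberately did not pass along.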


A short while ago I told lawyers that we had to stop using Gmail. I said that because Google is allowing its contractors to read through users’ messages for the purpose of software improvement. Under the reasoning of a 2008 ethics opinion out of New York, that meant lawyers no longer had a reasonable expectation of privacy in the Gmail system. The same problem now applies to Amazon Alexa.

Recently Bloomberg reported that Amazon is recording some people’s use of Alexa-powered devices and it’s providing those recordings to employees and contractors. Those personnel are then reviewing the recordings for the purposes of improving the algorithms and correcting software errors. But if lawyers are now aware that human beings are listening to recordings from these devices, then it follows that we no longer have a reasonable expectation of privacy in the product.

Watch the video for the full explanation. And when you’re on YouTube, subscribe to my channel if you want to see more of these videos. Click the “bell” icon to get notifications when they’re posted!





A little advice to avoid phishing scams


There’s only so much that virus scanning/blocking software can do to protect lawyers against cyber threats. That’s because one of the primary ways the bad guys gain access to our computer systems is through human error: when someone in our office clicks on an attachment or link and lets the bad guys in the door. Toward that end, here’s some advice about avoiding a common trap: If it’s scary, be wary. The bad guys send emails that are designed to be scary in order to motivate you to click on their evil link. If you see something super scary, pause and take steps to verify its validity.


The ABA is late to the tech party….again

Tech gurus around the country have been tweeting about the new ABA opinion like it’s some sort of revelation that was brought down from a mountain on stone tablets. I don’t know why everyone is up in arms about this. Here’s what I think. The ABA is (a) on point (as usual), and (b) 7 years too late (as usual). The opinion is 11 pages of stuff that ethics professionals and various states have been shouting for almost a decade. If you’re a lawyer and you didn’t know the contents of Opinion 477 already, you should be embarrassed.

After all 11 pages, it comes down to the last two sentences of the opinion.  They basically say that lawyers need to take special security precautions to protect  client information if you’re required to do so by agreement (really, you didn’t know that?), by law (someone needed to issue an opinion to tell you that you need to abide by the law?), or when the nature of the information requires a higher degree of security (teachers like me have been preaching that for YEARS). Opinion 477 at 11.

It takes everything in my being not to say, “…duh.”

Of course you need to consider the sensitivity of the information when determining how you communicate that information to your client.  The State of California told us that….in 2010 (go look at Formal Opinion 2010-179. And California did it in only 7 pages).  The ABA even told us that in their revised rules…in 2012.  But now, in 2017, they finally get around to writing this opinion?

All of the information in this opinion is important.  But it should have been issued years ago. “But wait,” you might protest, “Opinion 477 gives some factors to consider.”  Listen— if the seven precautionary recommendations that they list in this opinion are new to you, then here’s a newsflash: You haven’t been meeting your duty of competence for years.  Maybe in their next opinion they’ll give us some more useful tech advice like, “To rename a file, type the following command after the C:\…”  Seriously, this is all coming to us a bit late.

Here’s another helpful nugget from Op. 477:  It reminds us that the rules “may require a lawyer to discuss security safeguards with clients.” Opinion 477 at 5.  People, technology issues like that should be a part of every lawyer’s initial conversation with their client…and it should have been that way already for years.  If you haven’t been talking about it, then you’re in borderline malpractice territory. It also means that you haven’t been listening because every respectable ethics teacher has been shouting about that for almost a decade.

Here’s what I would have tweeted about this opinion (if I had more than 140 characters):

To the lawyers: If any of this is new to you, stop what you’re doing and (a) chastise yourself for being 10 years behind the curve and (b) read the opinion. My gut tells me that there will be a total of 3 lawyers who are surprised by the contents of Opinion 477.

To the ABA: Move quicker and talk less.  You’ll serve all lawyers better.


Open Source Software Could be Off Limits to Lawyers

I think it’s unethical for lawyers to use open source software for client work.

I want you to read that again.  I said that I THINK it’s unethical for lawyers to use open source software.  Truth is, I’m not so sure. That, however, is how I’m leaning after doing a bit of research.  Permit me to explain how I arrived at that conclusion….and please let me know if you agree.  I’d love to hear what the lawyer-universe thinks.

First, my disclaimer. I am not scared of technology, and I don’t want to discourage lawyers from using it. The question I’m grappling with is not, “Should lawyers be making use of cutting edge technology like open source software?” The question is, “Given the actual opinions and standards that exist, are lawyers violating the ethics rules by using open source software?” So don’t attack me for trying to be anti-technology, because I’m not.

What is open source software? A program is considered open source if, “its source code is freely available to its users. Its users – and anyone else – have the ability to take this source code, modify it, and distribute their own versions of the program. The users also have the ability to distribute as many copies of the original program as they want. Anyone can use the program for any purpose; there are no licensing fees or other restrictions on the software….The opposite of open-source software is closed-source software, which has a license that restricts users and keeps the source code from them.” (last checked by the author on January 25, 2017). In order to understand the ethical issue, you’ll need a brief understanding of a key ethical concern with email. I’m sorry to bore you with the history lesson, but trust me, it’s necessary.

Go back to the 90s when email first became popular. For those of us who are old enough to recall, lawyers couldn’t use email in their practice because it was unencrypted. Our duty to safeguard client confidences per Rules 1.1 and 1.6 prohibited us from using the tool. The ABA and state bars across the country deemed that unencrypted email was too insecure and that lawyers who used it weren’t taking the necessary steps to fulfill their duty of protecting clients’ confidential information. So what changed? Today email is generally still unencrypted, but lawyers use it every day. Here’s the change— Congress criminalized the interception of email.

Once Congress made the interception of email a crime, the powers that be agreed that this change, when combined with other factors, meant that lawyers now had a reasonable expectation of privacy in using the medium. The key phrase is “a reasonable expectation of privacy.” The ABA issued a formal opinion in 1999 confirming that idea:

“The Committee believes that e-mail communications, including those sent unencrypted over the Internet, pose no greater risk of interception or disclosure than other modes of communication commonly relied upon as having a reasonable expectation of privacy. The level of legal protection accorded e-mail transmissions, like that accorded other modes of electronic communication, also supports the reasonableness of an expectation of privacy for unencrypted e-mail transmissions. The risk of unauthorized interception and disclosure exists in every medium of communication, including e-mail. It is not, however, reasonable to require that a mode of communicating information must be avoided simply because interception is technologically possible, especially when unauthorized interception or dissemination of the information is a violation of law. The Committee concludes, based upon current technology and law as we are informed of it, that a lawyer sending confidential client information by unencrypted e-mail does not violate Model Rule 1.6(a) in choosing that mode to communicate. This is principally because there is a reasonable expectation of privacy in its use.” ABA Standing Committee on Ethics and Professional Responsibility, Formal Opinion 99-413.

States have since followed suit and permitted the use of unencrypted email in the practice of law. What’s key here is that we see the standard clearly— the reasonable expectation of privacy.  It’s important to understand that rationale for permitting such email communications, because it continues to be relevant today.  As new technologies are developed, the authorities apply the same reasoning.  Consider the furor over gmail and other free email services back in 2008.

In its Opinion 820, the New York State Bar Association opined about those free email systems. New York State Bar Association Committee on Professional Ethics Opinion 820 – 2/8/08. The systems were a concern because of the business model that the systems use to keep the service free. Here’s how they work: in return for providing the email service, “the provider’s computers scan e-mails and send or display targeted advertising to the user of the service. The e-mail provider identifies the presumed interests of the service’s user by scanning for keywords in e-mails opened by the user. The provider’s computers then send advertising that reflects the keywords in the e-mail.” NYSBA Op. 820 at 2. The obvious problem is that if we’re using the email system for client work, then we’re allowing the provider to scan confidential information.

When considering whether these new email systems would be permitted, the NY authorities first considered the rationale for permitting email back in the 90s. Email was allowed because, “there is a reasonable expectation that e-mails will be as private as other forms of telecommunication and…therefore…a lawyer ordinarily may utilize unencrypted e-mail to transmit confidential information.” NYSBA Op. 820 at 1. They applied that same reasoning to the question of free email services.

Even though the email messages in the current systems are scanned, the opinion noted that humans don’t actually do the scanning.  Rather, it’s computers that take care of that task.  Thus, they stated that “Merely scanning the content of e-mails by computer to generate computer advertising…does not pose a threat to client confidentiality, because the practice does not increase the risk of others obtaining knowledge of the e-mails or access to the e-mails’ content.”  NYSBA Op. 820 at 2.

What the opinion is basically saying is that there continues to be a reasonable expectation of privacy in these email systems.  Maybe the better way to phrase it is a reasonable expectation of “confidentiality,” but the idea is the same. What’s important to note is that the technology developed, but the standard that was applied remained the same.

If we take that standard and apply it to open source software, then…Houston, we have a problem.  Earlier I noted that the characteristic that makes open source software “open” is that any programmer could change the source code.  That’s the whole point of open source software.  But that ability to change the source code is what worries me.

If any programmer could change the code to an open source program, then isn’t it possible that some version of that software could contain a virus or other nefarious element? What if the programmer installed a hidden web bug or other software device that allows the programmer to view or copy your confidential client information? Such a devious act isn’t out of the realm of possibility. In fact, it seems realistic, and such tactics are being debated in real-life practice today. Take the recent opinion out of Alaska.

In 2016 the state of Alaska issued an opinion that dealt with the ethical propriety of lawyers using web bugs to obtain information from their adversaries/opposing parties.  The Alaska authorities reviewed a case where an attorney actually utilized a bug and the Bar opined that using such tools would be an ethical violation because it “impermissibly infringes on the lawyer’s ability to preserve a client’s confidences as required by Rule 1.6.” Alaska Bar Association Ethics Opinion 2016-1.  I realize that the opinion isn’t really on point— in the open source question we’re not talking about a lawyer installing a bug.  I brought it up, however, because it shows that the use of those software devices is very much a reality in today’s practice.

What if a programmer installs a similar type of software device in a piece of open source software, and that device allows the programmer to view, copy, and disseminate your confidential client information? Getting hacked or taken advantage of doesn’t give rise to ethical liability, per se.  But there are opinions stating that you have a duty to avoid the obvious scams. See New York City Bar Association Formal Opinion 2015-3, April 22, 2015 (“In our view, the duty of competence includes a duty to exercise reasonable diligence in identifying and avoiding common Internet-based scams, particularly where those scams can harm other existing clients.”).  Being infected with a virus or web bug certainly seems like an obvious concern, given the realities of the world today.  The question is, should we have expected it to happen?

Should a reasonable lawyer have known that there is a realistic probability that some dangerous device could be installed in open source software?  Should a reasonable lawyer have considered the open source software platform to be off limits because our client’s information is too vulnerable in that way?  Given the open nature of the software and given the real potential of having web bugs inserted into code, do lawyers have a reasonable expectation of privacy in open source software?

My answer is no.

It seems easy for a programmer to secretly install some bug or other information-viewing device.  There are no controls or procedures that stop them from doing so. It is an open opportunity for any bad actor to wreak havoc, and there is little to no protection against it.

A critical counterargument needs to be addressed. It is true that a programmer could still install some bug-like device even in a closed software environment.  A programmer at Microsoft or Apple could do it, and we might never be the wiser.  But I don’t think the question is whether it could happen; the question is whether it is likely.  One would expect a corporate software developer to have quality control measures that would ferret that out, and supervisory procedures to prevent that type of thing from happening.  Given those measures, I think it’s reasonable for lawyers to assume that a web bug would not be installed in corporate-purchased software.  Even if it did occur, it would have to be the work of some employee/programmer gone rogue. That sort of extraordinary circumstance could be detrimental to the client, but it wouldn’t necessarily mean that the lawyer was derelict in their ethical duties by trusting the software.  It could probably still be said that the lawyer had a reasonable expectation of privacy in that corporate, closed-source software.

One could argue that there are informal quality control measures in the open source environment. There are apparently very strong ethical underpinnings to the open source movement.  Behaving unethically is looked down upon in the open source community, and there is a decent amount of peer pressure on programmers to uphold those unwritten ethical standards.  My concern is that there is no actual mechanism to enforce them.  The only thing stopping open source programmers from installing such devices is the communal sense of morality that discourages that behavior.  The lack of any formal mechanism is problematic.

It’s the ability of almost any programmer, at almost any time, to manipulate the code that makes me believe that lawyers do not have a reasonable expectation of privacy when using open source software.  Now, I realize that that is a blanket statement.  There are likely to be a variety of factors that could alter the equation.  For instance, maybe a major open source project has excellent quality control.  That’s fine, but what about the plug-ins you may download to use in connection with that tool?  Maybe some open source systems will be inherently more secure than others because the cooperative that developed them adopted some quality control.  Okay, so then maybe we don’t have to avoid all open source software, just the sketchy projects.  I confess to not having an expert understanding of the programming world, so there are surely plenty of other considerations that I haven’t accounted for.  But these types of factors would simply make otherwise ethically impermissible systems permissible in particular cases.  They wouldn’t change my overall analysis.

Here, however, is why you should take my opinion seriously…even if you think it comes from a place of relative ignorance.  I have a decent understanding of technology. I also have a decent understanding of the ethics rules.  Truth is, I probably have as much knowledge in both areas as any ethics investigator who would be evaluating a grievance.  And if I’m leaning toward believing that open source software is an ethics violation, then that ethics investigator might be too.

Now…tell me why I’m wrong. But please be polite.