Most orgs have a pre-review process to edit and approve high-level text sent to a wide org scope, such as to the public or to distant orgs. Legal, PR, and other sub-orgs get to weigh in on how to avoid: legal liability, making unintended promises, giving offense, suggesting disliked political affiliations, or deviating from official tone and org mission/vision concepts. This process is expensive, however, and thus applies with less effort at lower org levels and for narrower org scopes. At low enough levels and scopes, people just talk to each other directly, using only the internal filters that they have learned with experience.
It seems likely to me that large language models (LLMs) will soon greatly lower the cost of such text pre-review. I envision each org training an LLM with its priorities re legal liability, tone, offense, mission, etc. Then when person X wants to send text at a high enough level to a distant enough org Y, the following process ensues (a code sketch follows the list):
1. Person X writes out text T1, giving their simple meaning in plaintext, perhaps adding meta comments re tone, what to obfuscate, etc.
2. T1 is submitted to X's LLM, which expands it into T2, a candidate outgoing message that meets the org's text criteria.
3. That same LLM also summarizes T2 into T3, a simple plaintext summary of T2.
4. Person X and their org associates compare T3 to T1, and, if the two are close enough, approve the sending of T2 to the recipient in Y. Else X restarts with a new T1.
5. Recipients of T2 use org Y's LLM to summarize T2 into T4, which is what they then read.
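To make the loop concrete, here is a minimal Python sketch of steps 2-5. Everything specific in it is an assumption for illustration: the `org_llm()` helper is a hypothetical stand-in for a call to an org's fine-tuned model, the prompts are invented, and a plain string-similarity ratio stands in for the human "close enough" comparison in step 4.

```python
# Minimal sketch of the LLM pre-review pipeline described above.
# org_llm(), its prompts, and the similarity threshold are all
# illustrative assumptions, not a real API.
from difflib import SequenceMatcher


def org_llm(instruction: str, text: str) -> str:
    """Hypothetical call to an org's fine-tuned LLM."""
    raise NotImplementedError("wire up your org's model here")


def pre_review(t1: str, threshold: float = 0.8) -> str | None:
    """Steps 2-4: expand T1 to T2, summarize back to T3, compare to T1.

    Returns the approved outgoing message T2, or None if T3 has
    drifted too far from T1 and person X should restart with a new T1.
    """
    t2 = org_llm("Expand into a message meeting org text criteria:", t1)
    t3 = org_llm("Summarize into simple plaintext:", t2)
    # Crude stand-in for "X and associates compare T3 to T1":
    # here, a raw string-similarity ratio.
    if SequenceMatcher(None, t1, t3).ratio() >= threshold:
        return t2  # approved: send T2 to the recipient in org Y
    return None    # too much drift: restart with a new T1


def receive(t2: str) -> str:
    """Step 5: org Y's LLM summarizes T2 into T4, which is what gets read.
    Since Y's model differs from X's, T4 need not equal T3."""
    return org_llm("Summarize into simple plaintext:", t2)
```

In practice the step-4 check would be X and their associates actually reading T3, not a string ratio; the point of the sketch is only that the expand-summarize-compare loop is cheap to automate.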
Note that since X and Y use different LLMs, often T3 ≠ T4, which makes T4 deniable by X. “No, I only said T2; I can’t be held responsible if they interpreted that as T4”. Without such deniability, recipients could complain that X said T4, defeating many of the purposes of this whole process.
Step #1 seems to be a problem. For an LLM to know what you need to obfuscate and what you are most vulnerable to, presumably you would need to be explicit. But now you have a written trace that will be stored in records; in finance especially, it will be legally required to be retained indefinitely. Your chats will no longer show up in Matt Levine's newsletter, but your T1 LLM prompts... And if you switch to something like voice input, regulators may just require you to record *that*, because they now can! (As Levine points out, the move from in-person meetings and telephone calls to text-based chat and email has resulted in a staggering increase in legibility to prosecutors.) Nor would this be theoretical: as soon as the first T1 shows up and guarantees a loss in a high-profile trial or lawsuit, *every* prosecutor or lawyer worth their salt will prioritize looking for T1s in discovery. The only really secure T1 remains in places where it would be impossible to demand: inside your head, or in-person discussion.
Whoa, wait a minute. Are we entering a golden age of plausible deniability?
Will lawyers as a bloc allow this to happen?