Discover more from Overcoming Bias
LLMs For On-Brand Org Text
Most orgs have a pre-review process to edit and approve high-level text sent to a wide org scope, such as to the public or to distant orgs. Legal, PR, and other sub-orgs get to weigh in on how to avoid: legal liability, making unintended promises, giving offense, suggesting disliked political affiliations, or deviating from official tone and org mission/vision concepts. This process is expensive, however, and thus applies with less effort at lower org levels and for narrower org scopes. At low enough levels and scopes, people just talk to each other directly, using only the internal filters that they have learned with experience.
It seems likely to me that large language models (LLMs) will soon greatly lower the cost of such text pre-review. I envision each org training an LLM with its priorities re legal liability, tone, offense, mission, etc. Then when person X wants to send text at a high enough level to a distant enough org Y, the following process ensues:
Person X writes out text T1, giving their simple meaning in plaintext, perhaps adding meta comments re tone, what to obfuscate, etc.
T1 is submitted to X’s LLM, which expands it into T2, a candidate message to be sent out that meets the org’s text criteria.
That same LLM also summarizes T2 into T3, a simple plaintext summary of T2.
Person X and their org associates compare T3 to T1, and, if the two are close enough, approve the sending of T2 to a recipient in Y. Else X restarts with a new T1.
Recipients of T2 use org Y’s LLM to summarize T2 into T4, which is what they then read.
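The steps above can be sketched as a simple control loop. This is a hypothetical illustration, not anything from the post: `expand`, `summarize`, and `close_enough` are stand-ins for calls to org-tuned LLMs and for person X's judgment, stubbed out here so the flow runs on its own.

```python
# Sketch of the T1 -> T2 -> T3 -> T4 pre-review flow. All functions are
# hypothetical stubs standing in for org-tuned LLM calls.

def expand(t1: str, org_style: str) -> str:
    """Stub for org X's LLM expanding plaintext T1 into on-brand T2."""
    return f"[{org_style}] {t1}"

def summarize(t2: str) -> str:
    """Stub for an LLM summarizing an on-brand message back into plaintext.
    A real summarizer would differ between orgs X and Y (hence T3 != T4)."""
    return t2.split("] ", 1)[-1]

def close_enough(t1: str, t3: str) -> bool:
    """Stand-in for X (and associates) judging that T3 preserves T1's meaning."""
    return t1 == t3

def try_send(t1: str, org_style: str):
    """One pass of the approval loop: expand T1 to T2, summarize to T3,
    and release T2 to org Y only if T3 is close enough to T1."""
    t2 = expand(t1, org_style)
    t3 = summarize(t2)
    if close_enough(t1, t3):
        return t2      # approved: T2 goes out to org Y
    return None        # rejected: X must restart with a new T1

def receive(t2: str) -> str:
    """Recipient side: org Y's own LLM summarizes T2 into T4 for reading."""
    return summarize(t2)
```

In this toy version the two orgs share one `summarize` stub, so T3 and T4 coincide; in the envisioned deployment X and Y run different LLMs, which is exactly what makes T4 deniable by X.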
Note that the fact that X and Y use different LLMs often makes T3 ≠ T4, and thus makes T4 deniable by X. “No, I only said T2; I can’t be held responsible if they interpreted that as T4”. Without such deniability, recipients could complain that X said T4, defeating many of the purposes of this whole process.