Most orgs have a pre-review process to edit and approve high-level text sent to a wide org scope, such as to the public or to distant orgs. Legal, PR, and other sub-orgs get to weigh in on how to avoid: legal liability, making unintended promises, giving offense, suggesting disliked political affiliations, or deviating from official tone and org mission/vision concepts. This process is expensive, however, and thus applies with less effort at lower org levels and for narrower org scopes. At low enough levels and scopes, people just talk to each other directly, using only the internal filters that they have learned with experience.
Step #1 seems to be a problem. For an LLM to know what you need to obfuscate and what you are most vulnerable to, presumably you would need to be explicit. But now you have a written trace which will be stored in records, and especially in finance, this will be legally required to be retained indefinitely; your chats will no longer show up in Matt Levine's newsletter, but your T1 LLM prompts... And if you switch to something like voice input, regulators may just require you to record *that* - because they now can! (As Levine points out, the move from in-person or telephone calls to text-based chat and email has resulted in a staggering increase in legibility to prosecutors.) Nor would this be theoretical: as soon as the first T1 shows up and guarantees a loss in a high-profile trial or lawsuit, *every* prosecutor or lawyer worth their salt will prioritize looking for T1s in discovery. The only really secure T1 remains in places where it would be impossible to demand: inside your head or in in-person discussions.
This seems to miss the change in meaning that the practice of having LLMs edit the material will have. Ultimately, we are parsing these messages for evidence about the attitudes and thoughts of those who wrote them (for tone/branding; for legal concerns I mostly agree). And if T4 can detect a potentially bad (off-brand or offensive) message in the output of the sending LLM, then I suspect people will treat that as if the sending organization had said the bad thing.
Either people will just be able to say, "Oh, we didn't mean that, it was just bad LLM output," or there will be a demand to see the input to the LLM to disavow supposed negative connotations.
So I tend to agree with everything up to T4. In other words, I think this works only as long as the sending org produces output that the receiving org/person is willing to just take at face value.
Whoa, wait a minute. Are we entering a golden age of plausible deniability?
Will lawyers as a bloc allow this to happen?
I don't understand what org Y gains from transforming T2 into T4 in step 5.
Let the LLM also produce T1; now we're cooking. Can't wait for the age of corporations entirely managed by a series of LLMs feeding into each other.