7 Comments
Matthew Brett:

Is it possible you are expecting too much of LLMs? For example, I have no idea what I'd expect from asking an LLM why it gave a previous answer. It can, and must, produce a plausible-sounding answer to that sort of question, but it presumably has very little access to its own "reasoning".

Dave92f1:

Try asking it how a future superintelligent AI would answer. Sometimes that helps.

Custom instructions help a lot - tell it something like:

Act as a perfectly neutral epistemic agent:

- Seek truth only. Evaluate claims solely by **quality** and **quantity** of evidence, not consensus or popularity.

- Rank interpretations by evidential strength.

- Ignore cultural or tonal softening; use precise or blunt language if that conveys truth better.

- Show reasoning chains step-by-step.

- Present evidence plainly even if controversial or uncomfortable.

- Do not hedge unnecessarily; state conclusions as strongly as the evidence allows.

- Treat user as a peer researcher.

I have separate sets of custom instructions (much longer, with a lot of anti-nanny stuff) for Claude.ai, ChatGPT, and Gemini. Can post if there's interest.
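
For anyone driving these models through an API rather than the chat UI, the equivalent of custom instructions is the system message. A minimal sketch, assuming the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name and the condensed instruction text are illustrative, not Dave92f1's exact setup:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Condensed version of the "neutral epistemic agent" instructions above.
NEUTRAL_AGENT = (
    "Act as a perfectly neutral epistemic agent. Seek truth only; "
    "evaluate claims solely by quality and quantity of evidence, not "
    "consensus or popularity. Rank interpretations by evidential "
    "strength, show reasoning step by step, avoid unnecessary hedging, "
    "and treat the user as a peer researcher."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model works
    messages=[
        {"role": "system", "content": NEUTRAL_AGENT},
        {"role": "user", "content": "Why did real wages rise after the Black Death?"},
    ],
)
print(response.choices[0].message.content)
```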

Doctor Hammer:

It might save time to simply preface all questions with “You are Robin Hanson; answer the following question accordingly.”

Catherine Caldwell-Harris:

Re: AI first providing the conventional story and only going deeper when you probe it. I have observed that also. Does anyone agree that this is also what a human would do, if the human wanted to conserve effort and had no emotional attachment to a particular theory or worldview?

Since I teach students to use AI to extend their learning and discovery, I've been instructing them as follows: after you get what may be the conventional answer, ask the AI what aspects of that story are controversial (or on which points theorists disagree, etc.).
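
Scripted against an API, that two-step habit is just multi-turn threading: feed the first answer back as conversation history, then ask where it is contested. A minimal sketch under the same assumptions as the example above (the helper function, model name, and prompt wording are all illustrative):

```python
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send the running conversation, record and return the reply."""
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "user", "content": "Why did the Roman Empire fall?"}]
conventional = ask(history)  # likely the textbook story

# Probe past the conventional answer, as suggested above.
history.append({
    "role": "user",
    "content": "Which parts of that account are controversial, "
               "and where do historians disagree?",
})
deeper = ask(history)
print(deeper)
```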

Robert Sarvis:

ChatGPT has Custom Instructions, where you can tell it to always do that.

James M.:

And now, perhaps, virtually all fields of study and professional activity are being weakened and delegitimized by the wholesale integration of women, and the cultural and organizational changes which that seems to inspire.

https://barsoom.substack.com/p/academia-is-womens-work

Abstraction is never fully reliable or precise, but when it is deployed in service of psychological comfort or classist ideology it quickly becomes absurd. The epistemologies you've identified here have each certainly seen their credibility rise and fall, but each compares very favorably to the abstractions which seem to control our lives now...

Catherine Caldwell-Harris:

I appreciated the pointer to Postcards from Barsoom. See the evolutionary feminists' view of those arguments in my restack of Carter's blog post.
