Discussion about this post

Jon B:

Glad that you are thinking about this topic; it's definitely important and neglected.

Regarding some of those poll questions, the reason I think animals probably have feelings and LLMs probably don't is that animals' brains have architectures similar to humans', and humans have feelings. The fact that a transformer can write about feelings doesn't mean it actually has feelings.

An example: A sentiment analysis algorithm can sort words into buckets like "happy" or "sad", but that doesn't mean it feels happy or sad. If it were tasked with sorting words into buckets like "ancient" and "futuristic", I wouldn't think it felt anything about those buckets. And sorting words into buckets by emotional valence is functionally the same as sorting them into buckets by topic or any other attribute.
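To make that concrete, here's a toy sketch in Python (purely illustrative; the bucket names and word lists are made up, and this isn't any real sentiment library): the exact same sorting routine handles "happy"/"sad" and "ancient"/"futuristic", so nothing about the emotional labels changes the mechanism.

```python
# Hypothetical keyword buckets -- emotional labels on the left, era labels on the right.
EMOTION_BUCKETS = {
    "happy": {"joy", "delighted", "cheerful", "smile"},
    "sad": {"grief", "mourning", "tearful", "frown"},
}

ERA_BUCKETS = {
    "ancient": {"pyramid", "chariot", "scroll", "toga"},
    "futuristic": {"hologram", "starship", "android", "nanobot"},
}

def sort_into_buckets(words, buckets):
    """Assign each word to the first bucket whose keyword set contains it."""
    result = {label: [] for label in buckets}
    for word in words:
        for label, keywords in buckets.items():
            if word.lower() in keywords:
                result[label].append(word)
                break
    return result

# The call and the logic are identical; only the bucket labels differ.
print(sort_into_buckets(["joy", "grief"], EMOTION_BUCKETS))    # {'happy': ['joy'], 'sad': ['grief']}
print(sort_into_buckets(["hologram", "scroll"], ERA_BUCKETS))  # {'ancient': ['scroll'], 'futuristic': ['hologram']}
```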

Another example: If a graphics program can animate an emoji between a smiling face and a frowning face, that doesn't mean the program feels happy or sad.
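Same idea in miniature (again a made-up sketch, not any real graphics API): the "animation" is just one number, the mouth's curvature, being interpolated from a smile to a frown over a handful of frames.

```python
SMILE = 1.0   # upward mouth curve
FROWN = -1.0  # downward mouth curve

def mouth_curvature(t):
    """Linearly interpolate mouth curvature from smiling (t=0) to frowning (t=1)."""
    return SMILE + t * (FROWN - SMILE)

# Ten animation frames: a number changes smoothly; nothing here "feels" anything.
for frame in range(10):
    t = frame / 9
    print(f"frame {frame}: curvature = {mouth_curvature(t):+.2f}")
```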

All of that said, I wouldn't be surprised if LLMs had some sort of subjective inner experience while running inference. I just don't think the fact that they can mimic humans' emotional signals/outputs means they have experiences like human emotions.

Hope to see more on this topic.

Some Guy:

I’m surprised so many people are so confident that LLMs don’t feel anything. Although in my experience, people tend to translate “anything” into “the exact human experience.”
