Of the other motives you mention, trying to impress is the one we should most want others *not* to attribute to us. Having a pre-existing position is not so bad—maybe we have already figured things out. Being loyal and supporting the team is good—it shows that you can be counted on.
As for trying to figure stuff out: being seen as doing this *on your own* seems disrespectful of others, whom you *could* invite to help you. And even being seen as doing it cooperatively may have an annoying aspect, if others thought the relevant issues were settled (well enough), and regard you as needlessly stirring up trouble.
The famous (apocryphal or not) Adlai Stevenson story would apply in most such discussions. You'd win every thinking person's vote. But that's not enough, if you need a majority.
One example: It's not surprising to me that, of all people, Bernie Sanders is the Senator most seriously engaging with AI risk. He's in a safe seat, and old enough that he really doesn't have to care about what anyone else thinks of him. So he can ask the hard questions and not be embarrassed to explore such ideas. Most people who regularly engage in public discussions are too afraid to step that far outside the norm.
FWIW (I may be unusual) I have a lot of conversations like that with AI. I seek truth more than "winning" and convos with AI are off the record so there's little ego involved.
AI (mostly Claude) rarely disagrees substantially with me directionally, but probably that's partly sycophancy (despite my earnest attempts to suppress it; I notice those attempts usually produce meta-sycophancy at a higher level rather than actually reducing it).
Still, AI does push back on a lot of stuff and the result is usually better than where I started.
Ostensibly that's what the academic setting was for, until it was replaced with ideological echo chambers, credentialism and "publish or perish".