On Institutional Review Boards (IRBs), I wrote in March: It makes little sense to have extra regulation on researchers just because they are researchers. That mainly gets in the way of innovation, of which we already have too little.
Seems pretty reasonable to me, particularly for government or university funded research:
The study does not produce any real goods by paying the subjects; it only changes who has the dollars. So no matter how many dollars you pay the subjects, you can't turn a net harmful study into a net beneficial one. This tacitly makes the reasonable assumption that the money would otherwise be used for something of comparable utility to handing it over to the subjects.
It's only in the weird case where some crazy billionaire comes in with an obsessive interest in seeing people undergo some painful, only slightly useful experiment, and would otherwise spend that money in wasteful ways, that compensating subjects could turn a net harmful study into a net beneficial one.
The real issue is asking random members of the public who will just express whatever makes them appear the most upstanding/good/normal.
Organic processes of morality are the only source of morality. Why should generating hierarchy be regarded as an alternative means, without so much as a hypothesis?
You're asking for IRBs to be regarded as inherently more ethical than researchers, with a positive reputation that seems to go without saying. The problem is, reputation and trust have to be earned; they cannot be established by fiat.
I don't understand the nature of your objection. Is "committee" a term of evil in your ontology? If the organic processes of morality fail, then some institution needs to step into the role of rule enforcer. Whether it's a committee or something else.
Of course reputation exists, and IRBs are one of the tools that reputable research institutions use to preserve their reputation. That's why you might feel safer signing up for an experiment at Harvard than with, say, some self-proclaimed doctor advertising on Craigslist.
So norms and morality are to be determined and enforced by committees? Who, in turn, will keep a good eye on the committees?
...a paid research subject is almost always going to have much poorer information about risks and benefits than the payer, making exploitation inevitable.

Only if you assume trust and reputation don't exist, and if they don't, then why the faith in IRBs?
Yet another post written as if the author was an alien who had just landed on Earth yesterday and knew nothing of its history. IRBs exist because of actual abuses committed by researchers in the past.
It makes little sense to have extra regulation on researchers just because they are researchers.
Well, it does, because by definition researchers are trying to do things that are beyond the bounds of normal practice. Everyday activity has norms and morality built in, and to go beyond that is to enter a realm where right and wrong are harder to discern, for researcher and subject alike. Given that, and the competitive nature of research, it is absolutely unsurprising that abuses will occur. IRBs as currently constituted are a blunt instrument with many irrationalities, to be sure, but they didn't just spring into existence for no good reason.
As to why it might make sense to exclude monetary payments, I suggest reading Satz's Why Some Things Should Not Be For Sale. I don't know if she treats this particular case, but her general arguments might apply. In particular, a paid research subject is almost always going to have much poorer information about risks and benefits than the payer, making exploitation inevitable.
Although the standard view has become a virtual mantra in research ethics, no [official] document contains an argument in its defense. … The scholarly literature also contains little defense of that view.

Robin, I'm surprised you didn't make more of the fact that the standard view is essentially implicit, regardless of its content. Isn't this even more interesting than the attitude toward money: that the rules and standards applying to a high-status group are rarely made explicit? Is this a general rule, that the higher the status of a group or individual, the less we speak of the social rules that apply to them?
This sounds a lot like what is going on in market failure/government fix advocacy, where the imperfections of various markets and biases of individuals will be remedied by hyper-rational, super-competent, highly informed and disinterested politicians and government officials. Government failure is rarely recognized, let alone formally modeled. The implicit belief here being that, for all intents and purposes, governments are perfect. It's the implicitness of the belief that fascinates me even more than the belief itself.
Only one of the prisoners (and two of the guards) gave their perspective in that link. They seemed to have turned out all right.
...how does one compensate someone for being changed like this?
One of the reasons that researchers have tightened safety requirements for experimental subjects is because researchers learn from their experiments, and they have the attitude of treating experimental subjects as human beings and not as objects to be used, used up, and replaced if broken or destroyed.
This “experiment” isn't any different from situations that people are put into every day. Prisoners are put into prisons; prison guards are assigned to guard them.
When situations can have such effects on people, effects that are irreversible and unpredictable, how does one compensate someone for being changed like this?
A possible argument that I don't necessarily think is in play: If someone is fishing or coal-mining, at least they can make a somewhat rational estimate of the risks. This is much more difficult for experimental subjects.
Perhaps they don't want to be in the position of having to pay the market rate?
While I agree with Robin that there is an idealistic element in the above reasoning relating to "research as a high pure far ideal thing", latent in the reasons above is also a practical justification.
If considering justifications from the perspective of the self-interest of the research institution as a whole, reasons #6 and #7 do strike at the heart of the matter.
A research institution faces a constant risk that its autonomy will be undermined based on allegations of exploitation of research subjects in respect of some controversial research method. (I take it as self-evident that, from the institution's perspective, autonomy is a good that warrants protection.)
Irrespective of whether you agree with the (to use the pejorative term) paternalistic reasons set out at #1 to #5, there is likely to be a significant segment (not necessarily a majority) of the population who hold those views. This will include some of those making the allegations, and some of those who assess the legitimacy of those allegations and have the power to undermine the autonomy of the institution.
In light of the above, without controls on research, including a review process that excludes considering financial compensation as a relevant factor, a research institution exposes itself to a real risk (perhaps a likelihood) that its autonomy will be undermined as a reaction to the allegations of exploitation. There is further a real risk that autonomy would be undermined to an extent greater than that which is self-imposed in the form of the IRBs.
In sum, while I do not disagree with Robin's gloss on reason #7 that anchors it in idealistic thinking, there also exists an independent justification that, while reflecting in part the idealistic thinking of others, is firmly rooted in the practical object of an institution pursuing its own self-interest.