Tag Archives: Academia

Response To Hossenfelder

In my last post I said:

In her new book Lost in Math, theoretical physicist Sabine Hossenfelder describes just how bad things have become. … To fix these problems, Hossenfelder proposes that theoretical physicists learn about and prevent biases, promote criticism, have clearer rules, prefer longer job tenures, allow more specialization and changes of fields, and pay peer reviewers. Alas, as noted in a Science review, Hossenfelder’s proposed solutions, even if good ideas, don’t seem remotely up to the task of fixing the problems she identifies.

In the comments she took issue:

I am quite disappointed that you, too, repeat the clearly false assertion that I don’t have solutions to offer. … I originally meant to write a book about what’s going wrong with academia in general, but both my agent and my editor strongly advised me to stick with physics and avoid the sociology. That’s why I kept my elaborations about academia to an absolute minimum. You are right in complaining that it’s sketchy, but that was as much as I could reasonably fit in.

But I have on my blog discussed what I think should be done, eg here. Which is a project I have partly realized, see here. And in case that isn’t enough, I have a 15 page proposal here. On the proposal I should add that, due to space limitations, it does not contain an explanation for why I think that’s the right thing to do. But I guess you’ll figure it out yourself, as we spoke about the “prestige optimization” last week.

I admitted my error:

I hadn’t seen any of those 3 links, and your book did list some concrete proposals, so I incorrectly assumed that if you had more proposals then you’d mention them in your book. I’m happy to support your proposed research project. … I don’t see our two proposals as competing, since both could be adopted.

She agreed:

I don’t see them as competing either. Indeed, I think they fit well.

Then she wrote a whole blog post elaborating! Continue reading "Response To Hossenfelder" »

Can Foundational Physics Be Saved?

Thirty-four years ago I left physics with a master’s degree, to start a nine-year stint doing AI/CS at Lockheed and NASA, followed by 25 years in economics. I loved physics theory, and given how far physics had advanced over the two previous 34-year periods, I expected to be giving up many chances for glory. But though I didn’t entirely leave (I’ve since published two physics journal articles), I’ve felt like I dodged a bullet overall; physics theory has progressed far less in the last 34 years, mainly because data dried up:

One experiment after the other is returning null results: No new particles, no new dimensions, no new symmetries. Sure, there are some anomalies in the data here and there, and maybe one of them will turn out to be real news. But experimentalists are just poking in the dark. They have no clue where new physics may be to find. And their colleagues in theory development are of no help.

In her new book Lost in Math, theoretical physicist Sabine Hossenfelder describes just how bad things have become. Previously, physics foundations theorists were disciplined by a strong norm of respecting the theories that best fit the data. But with less data, theorists have turned to mainly judging proposed theories via various standards of “beauty” which advocates claim to have inferred from past patterns of success with data. Except that these standards (and their inferences) are mostly informal, change over time, differ greatly between individuals and schools of thought, and tend to label as “ugly” our actual best theories so far.

Yes, when data is truly scarce, theory must suggest where to look, and so we must choose somehow among as-yet-untested theories. The worry is that we may be choosing badly:

During experiments, the LHC creates about a billion proton-proton collisions per second. … The events are filtered in real time and discarded unless an algorithm marks them as interesting. From a billion events, this “trigger mechanism” keeps only one hundred to two hundred selected ones. … That CERN has spent the last ten years deleting data that hold the key to new fundamental physics is what I would call the nightmare scenario.

One bad sign is that physicists have consistently, confidently, and falsely told each other and the public that big basic progress was coming soon: Continue reading "Can Foundational Physics Be Saved?" »

How To Fund Prestige Science

How can we best promote scientific research? (I’ll use “science” broadly in this post.) In the usual formulation of the problem, we (potential patrons) have money and status that we could distribute, and they (potential researchers) have time and ability that they might apply. They know more than we do, but we aren’t sure who is how good, and they may care more about money and status than about achieving useful research. So we can’t just give things to anyone who claims they would use them to do useful science. What can we do? We actually have many options. Continue reading "How To Fund Prestige Science" »

Intellectual Status Isn’t That Different

In our world, we use many standard markers of status. These include personal connections with high status people and institutions, power, wealth, popularity, charisma, intelligence, eloquence, courage, athleticism, beauty, distinctive memorable personal styles, and participation in difficult achievements. We also use these same status markers for intellectuals, though specific fields favor specific variations. For example, in economics we favor complex game theory proofs and statistical analyses of expensive data as types of difficult achievements.

When the respected intellectuals for topic X tell the intellectual history of topic X, they usually talk about a sequence over time of positions, arguments, and insights. Particular people took positions and offered arguments (including about evidence), which taken together often resulted in insight that moved a field forward. Even if such histories do not say so directly, they give the strong impression that the people, positions, and arguments mentioned were selected for inclusion in the story because they were central to causing the field to move forward with insight. And since these mentioned people are usually the high status people in these fields, this gives the impression that the main way to gain status in these fields is to offer insight that produces progress; the implication is that correlations with other status markers are mainly due to other markers indicating who has an inclination and ability to create insight.

Long ago when I studied the history of science, I learned that these standard histories given by insiders are typically quite misleading. When historians carefully study the history of a topic area, and try to explain how opinions changed over time, they tend to credit different people, positions, and arguments. While standard histories tend to correctly describe the long term changes in overall positions, and the insights which contributed to those changes, they are more often wrong about which people and arguments caused such changes. Such histories tend to be especially wrong when they claim that a prominent figure was the first to take a position or make an argument. One can usually find lower status people who said basically the same things before. And high status accomplishments tend to be given more credit than they deserve in causing opinion change.

The obvious explanation for these errors is that we are hypocritical about what counts for status among intellectuals. We pretend that the point of intellectual fields is to produce intellectual progress, and to retain past progress in people who understand it. And as a result, we pretend that we assign status mainly based on such contributions. But in fact we mostly evaluate the status of intellectuals in the same way we evaluate most everyone, not changing our markers nearly as much as we pretend in each intellectual context. And since most of the things that contribute to status don’t strongly influence who actually offers positions and arguments that result in intellectual insight and progress, we can’t reasonably expect the people we tend to pick as high status to typically have been very central to such processes. But there’s enough complexity and ambiguity in intellectual histories to allow us to pretend that these people were very central.

What if we could make the real intellectual histories more visible, so that it became clearer who caused what changes via their positions, arguments, and insight? Well then fields would have the two usual choices for how to respond to exposed hypocrisy: raise their behaviors to meet their ideals, or lower their ideals to meet their behaviors. In the first case, the desire for status would drive much stronger efforts to actually produce insights that drive progress, making plausible much faster rates of progress. In this case it could well be worth spending half of all research budgets on historians to carefully track who contributed how much. The factor of two lost in all that spending on historians might be more than compensated for by intellectuals focused much more strongly on producing real insight, instead of on the usual high-status-giving imitations.

Alas I don’t expect many actual funders of intellectual activity today to be tempted by this alternative, as they also care much more about achieving status, via affiliation with high status intellectuals, than they do about producing intellectual insight and progress.

Bets As Signals of Article Quality

On October 15, I talked at the Rutgers Foundations of Probability Seminar on Uncommon Priors Require Origin Disputes. While visiting that day, I talked to seminar host Harry Crane about how the academic replication crisis might be addressed by prediction markets, and by his related proposal to have authors offer bets supporting their papers. I mentioned to him that I’m now part of a project that will induce a great many replication attempts and set up prediction markets about them beforehand, and that we would love to get journals to include our market prices in their review process. (I’ll say more about this when I can.)

When the scheduled speaker for the seminar’s next weekly slot cancelled, Crane took the opening to give a talk comparing our two approaches (video & links here). He focused on papers for which it is possible to make a replication attempt, and said “We don’t need journals anymore.” That is, he argued that we should not use which journal is willing to publish a paper as a signal of paper quality, but should instead use the signal of what bet authors offer in support of their paper.

That author betting offer would specify what counts as a replication attempt and as a successful replication, and would include an escrowed amount of cash and betting odds, which together set the amount a challenger must put up to try to win that escrowed amount. If the replication fails, the challenger wins these two amounts, minus the cost of doing a replication attempt; if it succeeds, the authors win the challenger’s stake.
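
To make the mechanics concrete, here is a minimal sketch of the payoff structure of such an offer. The function name and all dollar amounts are hypothetical illustrations of the structure described above, not details from Crane’s actual proposal:

```python
# A sketch of the payoff structure of an author replication bet.
# Parameter names and numbers are hypothetical illustrations.

def bet_payoffs(escrow, odds, replication_cost):
    """Net outcomes for each side of an author bet offer: the author
    escrows `escrow` at `odds`-to-1, so a challenger must put up
    escrow / odds to try to win the escrowed amount."""
    challenger_stake = escrow / odds
    return {
        # Replication fails: the challenger wins both amounts, but had
        # to pay the cost of actually running the replication attempt.
        "replication fails": {
            "author": -escrow,
            "challenger": escrow - replication_cost,
        },
        # Replication succeeds: the author wins the challenger's stake.
        "replication succeeds": {
            "author": challenger_stake,
            "challenger": -challenger_stake - replication_cost,
        },
    }

# Example: a $10,000 escrow at 4-to-1 odds, against a $2,000 replication
# cost, leaves a challenger a net $8,000 gain if the replication fails.
print(bet_payoffs(escrow=10_000, odds=4, replication_cost=2_000))
```

Note that a challenger only profits from a failed replication when the escrow exceeds the replication cost, so the escrowed amount effectively sets how cheap a challenge is worth mounting.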

In his talk, Crane contrasted his approach with an alternative in which the quality signal would be the odds, in an open prediction market, of replication conditional on a replication attempt. In comparing the two, Crane seems to think that authors would not usually participate in setting market odds. He lists three advantages of author bets over betting market odds: 1) Author bets give authors better incentives to produce non-misleading papers. 2) Market odds are less informed, because market participants know less than paper authors about their paper. 3) Relying on market odds allows a mistaken consensus to suppress surprising new results. In the rest of this post, I’ll respond.

I am agnostic on whether journal quality should remain as a signal of article quality. If that signal goes away, then the question becomes which other signals would be useful, and how useful. And if that signal remains, then we can discuss other signals that journals might use in making their decisions, and that other observers might also use to evaluate article quality. But whatever signals are used, I’m pretty sure that most observers will demand that a few simple easy-to-interpret signals be distilled from the many complex signals available. Tenure review committees, for example, will need signals nearly as simple as journal prestige.

Let me also point out that these two approaches of market odds or author bets can also be applied to non-academic articles, such as news articles, and also to many other kinds of quality signals. For example, we could have author or market bets on how many future citations or how much news coverage an article will get, whether any contained math proofs will be shown to be in error, whether any names or dates will be shown to have been misreported in the article, or whether coding errors will be found in supporting statistical analysis. Judges or committees might also evaluate overall article quality at some distant future date. Bets on any of these could be conditional on whether serious attempts were made in that category.

Now, on the comparison between author and market bets, an obvious alternative is to offer both author bets and market odds as signals, either to ultimate readers or to journals reviewing articles. After all, it is hard to justify suppressing any potentially useful signal. If a market exists, authors could easily make betting offers via that market, and those offers could easily be flagged for market observers to take as signals.

I see market odds as easier for observers to interpret than author bet offers. First, author bets are more easily corrupted via authors arranging for a collaborating shill to accept their bet. Second, it can be hard for observers to judge how author risk-aversion influences author odds, and how replication costs and author wealth influence author bet amounts. For market odds, in contrast, amounts take care of themselves via opposing bets, and observers need only judge any overall differences in wealth and risk-aversion between the two sides, differences that tend to be smaller, vary less, and matter less for market odds.

Also, authors would usually participate in any open market on their paper, giving those authors bet incentives and making market odds include their info. The reason authors will bet is that other participants will expect authors to bet to puff up their odds, and so those participants will push the odds down to compensate. So if authors don’t in fact participate, the odds will tend to look bad for them. Yes, market odds will be influenced by views other than those of authors, but when evaluating papers we want our quality signals to be based on the views of people other than paper authors. That is why we use peer review, after all.

When there are many possible quality metrics on which bets could be offered, article authors are unlikely to offer bets on all of them. But in an open market, anyone could offer to bet on any of those metrics. So an open market could show estimates regarding any metric for which anyone made an offer to bet. This allows a much larger range of quality metrics to be available under the market odds approach.

While the simple market approach merely bets conditional on someone making a replication attempt, an audit lottery variation that I’ve proposed would instead use a small fixed percentage of amounts bet to pay for replication attempts. If the amount collected is insufficient, then it and all betting amounts are gambled, so that either a sufficient amount is created, or all these assets disappear.
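
Under my reading of that proposal, the mechanism might look like the following sketch; the fee rate and dollar amounts are hypothetical:

```python
import random

# A sketch of the audit lottery variation, under my reading of it.

def audit_lottery(total_bets, fee_rate, replication_cost, rng=random.random):
    """A small fixed percentage of amounts bet funds replication attempts.
    If the fund falls short, the fund and all betting amounts are gambled
    at fair odds: either they grow into enough to pay for the replication
    and settle the bets, or all of these assets disappear."""
    fund = fee_rate * total_bets
    if fund >= replication_cost:
        return "fund covers replication; bets settle normally"
    assets = fund + total_bets               # everything staked in the gamble
    needed = replication_cost + total_bets   # replication cost plus bet settlement
    if rng() < assets / needed:              # fair odds preserve expected value
        return "gamble won: replication funded, bets settle"
    return "gamble lost: all assets disappear"

# Example: $100,000 in bets at a 2% fee yields only a $2,000 fund, short
# of a $50,000 replication cost, so the $102,000 in assets is gambled
# with a 102/150 = 68% chance of becoming the $150,000 needed.
print(audit_lottery(100_000, 0.02, 50_000))
```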

Just as 5% statistical significance is treated as a threshold today for publication evaluation, I can imagine particular bet reliability thresholds becoming important for evaluating article quality. News articles might even be filtered, or shown with simple icons, based on a reliability category. In this case the betting offer and market options would tend to merge.

For example, an article might be considered “good enough” if it had no more than a 5% chance of being wrong, if checked. The standard for checking this might be if anyone was currently offering to bet at 19-1 odds in favor of reliability. For as long as the author or anyone else maintained such offers, the article would qualify as at least that reliable, and so could be shown via filters or icons as meeting that standard. For this approach we don’t need to support a market with varying prices; we only need to keep track of how much has been offered and accepted on either side of this fixed odds bet.
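
To spell out the arithmetic behind that standard: 19-1 odds in favor of reliability are exactly the odds at which such an offer breaks even when the chance of being wrong is 5%. A quick sketch:

```python
# Why 19-1 odds correspond to a 5% threshold: a bettor offering
# odds_for-to-odds_against in favor of reliability stakes odds_for
# units to win odds_against, which breaks even only if the probability
# of the article being wrong is at most
# odds_against / (odds_for + odds_against).

def max_implied_p_wrong(odds_for, odds_against=1):
    return odds_against / (odds_for + odds_against)

assert max_implied_p_wrong(19) == 0.05  # 19-1 odds <=> at most 5% chance of being wrong
```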

Maps of Meaning

Like many folks recently, I decided to learn more about Jordan Peterson. Not being eager for self-help or political discussion, I went to his best-known academic book, Maps of Meaning. Here is Peterson’s summary:

I came to realize that ideologies had a narrative structure – that they were stories, in a word – and that the emotional stability of individuals depended upon the integrity of their stories. I came to realize that stories had a religious substructure (or, to put it another way, that well-constructed stories had a nature so compelling that they gathered religious behaviors and attitudes around them, as a matter of course). I understood, finally, that the world that stories describe is not the objective world, but the world of value – and that it is in this world that we live, first and foremost. … I have come to understand what it is that our stories protect us from, and why we will do anything to maintain their stability. I now realize how it can be that our religious mythologies are true, and why that truth places a virtually intolerable burden of responsibility on the individual. I know now why rejection of such responsibility ensures that the unknown will manifest a demonic face, and why those who shrink from their potential seek revenge wherever they can find it. (more)

In his book, Peterson mainly offers his best-guess description of common conceptual structures underlying many familiar cultural elements, such as myths, stories, histories, rituals, dreams, and language. He connects these structures to cultural examples, to a few psychology patterns, and to rationales of why such structures would make sense. 

But while he can be abstract at times, Peterson doesn’t go meta. He doesn’t offer readers any degree of certainty in his claims, nor distinguish in which claims he’s more confident. He doesn’t say how widely others agree with him, he doesn’t mention any competing accounts to his own, and he doesn’t consider examples that might go against his account. He seems to presume that the common underlying structures of past cultures embody great wisdom for human behavior today, yet he doesn’t argue for that explicitly, he doesn’t consider any other forces that might shape such structures, and he doesn’t consider how fast their relevance declines as the world changes. The book isn’t easy to read, with overly long and obscure words, and way too much repetition. He shouldn’t have used his own voice for his audiobook. 

In sum, Peterson comes across as pompous, self-absorbed, and not very self-aware. But on the one key criterion by which such a book should most be judged, I have to give it to him: the book offers insight. The first third of the book felt solid, almost self-evident: yes, such structures make sense and do underlie many cultural patterns. From then on the book slowly became more speculative, until at the end I was less nodding and more rolling my eyes. Not that most things he said even then were obviously wrong; it just felt too hard to tell if they were right. (And alas, I have no idea how original this book’s insight is.)

Let me finish by offering a small insight I had while reading the book, one I haven’t heard elsewhere. A few weeks ago I talked about how biological evolution avoids local maxima via highly redundant genotypes:

There are of course far more types of reactions between molecules than there are types of molecules. So using Wagner’s definitions, the set of genotypes is vastly larger than the set of phenotypes. Thus a great many genotypes result in exactly the same phenotype, and in fact each genotype has many neighboring genotypes with that same exact phenotype. And if we lump all the connected genotypes that have the same phenotype together into a unit (a unit Wagner calls a “genotype network”), and then look at the network of one-neighbor connections between such units, we will find that this network is highly connected.

That is, if one presumes that evolution (using a large population of variants) finds it easy to make “neutral” moves between genotypes with exactly the same phenotype, and hence the same fitness, then large networks connecting genotypes with the same phenotype imply that it only takes a few non-neutral moves between neighbors to get to most other phenotypes. There are no wide deep valleys to cross. Evolution can search large spaces of big possible changes, and doesn’t have a problem finding innovations with big differences. (more) 

It occurs to me that this is also an advantage of traditional ways of encoding cultural values. An explicit formal encoding of values, such as found in modern legal codes, is far less redundant. Most random changes to such an abstract formal encoding create big bad changes to behavior. But when values are encoded in many stories, histories, rituals, etc., a change to any one of them needn’t much change overall behavior. So the genotype can drift until it is near a one-step change to a better phenotype. This allows culture to evolve more incrementally, and avoid local maxima. 
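
As a toy illustration of that point (my own construction, not from the book or from Wagner’s work), here is a sketch in which a highly redundant genotype-to-phenotype map lets a purely neutral random walk expose many alternative phenotypes as single-mutation neighbors:

```python
import random

# Toy model: genotypes are bit strings, and a fixed random many-to-one
# map assigns each genotype one of a few phenotypes, so each phenotype
# is highly redundant. A walk that drifts only between genotypes sharing
# one phenotype still encounters many other phenotypes one mutation away.
random.seed(0)
N_BITS = 12          # genotype = 12-bit string (4096 genotypes)
N_PHENOTYPES = 8     # far fewer phenotypes than genotypes

phenotype = {g: random.randrange(N_PHENOTYPES) for g in range(2 ** N_BITS)}

def neighbors(g):
    """All genotypes one bit flip (one mutation) away from g."""
    return [g ^ (1 << i) for i in range(N_BITS)]

g = 0
home = phenotype[g]
seen = set()
for _ in range(2000):
    seen.update(phenotype[n] for n in neighbors(g))  # phenotypes one step away
    neutral = [n for n in neighbors(g) if phenotype[n] == home]
    if neutral:                     # drift only along same-phenotype moves
        g = random.choice(neutral)

print(f"phenotypes reachable in one non-neutral step: {len(seen)} of {N_PHENOTYPES}")
```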

Implicit culture seems more evolvable, at least to the extent slow evolution is acceptable. We today are changing culture quite rapidly, and often based on pretty abstract and explicit arguments. We should worry more about getting stuck in local maxima.  

Sloppy Interior Vs. Careful Border Travel

Imagine that you are floating weightless in space, and holding on to one corner of a large cube-shaped structure. This cube has only corners and struts between adjacent corners; the interior and faces are empty. Now imagine that you want to travel to the opposite corner of this cube. The safe thing to do would be to pull yourself along a strut to an adjacent corner, always keeping at least one hand on a strut, and then repeat that process two more times. If you are in a hurry you might be tempted to just launch yourself through the middle of the cube. But if you don’t get the direction right, you risk sailing past the opposite corner on into open space.

Now let’s make the problem harder. You are still weightless, holding on to a cube of struts, but now you live in 1000-dimensional space, in a fog, and subject to random winds. Each corner connects to 1000 struts. Now it would take 1000 single-strut moves to reach the opposite corner, while the direct distance across is only about 32 times the length of one strut. You have only a limited ability to tell if you are near a corner or a strut, and now there are over 10^300 corners, which look a lot alike. In this case you should be a lot more reluctant to leave sight of your nearest strut, or to risk forgetting your current orientation. Slow and steady wins this race.
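
The numbers in that example check out; a quick sketch:

```python
import math

# Checking the 1000-dimensional cube numbers quoted above.
n = 1000
corners = 2 ** n         # each of the n coordinates is 0 or 1
diagonal = math.sqrt(n)  # corner-to-opposite-corner distance, in strut lengths

print(f"corners: about 10^{len(str(corners)) - 1}")  # about 10^301, i.e. over 10^300
print(f"diagonal: {diagonal:.1f} strut lengths")     # about 31.6, i.e. roughly 32
```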

If you were part of a group of dozens of people tethered together, it might make more sense to jump across the middle, at least in the case of the ordinary three-dimensional cube. If any one of you grabs a corner or strut, they could pull the rest of you in. However, this strategy looks a lot more risky in a thousand dimensions with fog and wind, where there are so many more ways to go wrong. Even more so in a million dimensions.

Let me offer these problems as metaphors for the choice between careful and sloppy thinking. In general, you start with what you know now, and seek to learn more, in part to help you make key decisions. You have some degree of confidence in every relevant claim, and these confidences combine to specify a vector in a high-dimensional cube of possible beliefs. Your key choice: how to move within this belief cube.

In a “sloppy interior” approach, you throw together weak tentative beliefs on everything relevant, using any basis available, and then try to crudely adjust them via considerations of consistency, evidence, elegance, rhetoric, and social conformity. You think intuitively, on your feet, and respond to social pressures. That is, a big group of you throw yourselves toward the middle of the cube, and pull on the tethers when you think that could help others get to a strut or corner you see. Sometimes a big group splits into two main groups who have a tug-o-war contest along one main tether axis, because that’s what humans do.

In a “careful border” approach, you try to move methodically along, or at least within sight of, struts. You make sure to carefully identify enough struts at your current corner to check your orientation and learn which strut to take next. Sometimes you “cut a corner”, jumping past more than one corner at a time, but only via carefully chosen and controlled moves. It is great when you can move with a large group who work together, as individuals can specialize in particular strut directions, etc. But as there are more different paths to reach the same destination on the border, groups there more naturally split up. If your group seems inclined toward overly risky jumps, you can split off and move more methodically along the struts. Conversely, you might try to cut a corner to jump ahead when others nearby seem excessively careful.

Today public conversations tend more to take a sloppy interior approach, while expert conversations tend more to take a careful border approach. Academics often claim to believe nothing unless it has been demonstrated to the rigorous standards of their discipline, and they are fine with splitting into differing non-interacting groups that take different paths. Outsiders often see academics as moving excessively slowly; surely more corners could be cut with little risk. Public conversations, in contrast, are centered in much larger groups of socially-focused discussants who use more emotional, elegant, and less precise and expert language and reasoning tools.

Yes, this metaphor isn’t exactly right; for example, there is a sense in which we start more naturally from the middle of a belief space. But I think it gets some important things right. It can feel more emotionally “relevant” to jump to where everyone else is talking, pick a position like others do there, use the kind of arguments and language they use, and then pull on your side of the nearest tug-o-war rope. That way you are “making a difference.” People who instead step slowly and carefully, laying foundations solid enough to build on, may seem to others as “lost” and “out of touch”, too “chicken” to engage the important issues.

And yes, in the short term sloppy interior fights have the most influence on politics, culture, and mob rule enforcement. But if you want to play the long game, careful border work is where most of the action is. In the long run, most of what we know results from many small careful moves of relatively high confidence. Yes, academics are often overly careful, as most are more eager to seem impressive than useful. And there are many kinds of non-academic experts. Even so, real progress is mostly in collecting relevant things one can say with high enough confidence, and slowly connecting them together into reliable structures that can reach high, not only into political relevance, but eventually into the stars of significance.

Social Innovation Disinterest Puzzle

Back in 1977, I started out college in engineering, then switched to physics, where I got a BS and MS. After that I spent nine years in computer research, at Lockheed and NASA. In physics, engineering, and software, I saw that people are quite eager to find better designs, and that the world often pays a lot for them. As a result, it is usually quite hard to find even modestly better designs, at least for devices and mechanisms with modest switching costs.

Over time, I came to notice that many of our most important problems had core causes in social arrangements. So I started to study economics, and found many simple proposed social innovations that could plausibly lead to large gains. And trying my own hand at looking for innovations, I found more apparently plausible gains. So in 1993 I switched to social science, and started a PhD program at the late age of 34, then having two kids, ages 0 and 2. (For over a decade after, I didn’t have much free time.)

I naively assumed that the world was just as eager for better social designs. But in fact, the world shows far less interest in better designs for social arrangements. Which, I should have realized, is a better explanation than my unusual genius for why it seemed so easy to find better social designs. But that raises a fundamental puzzle: why does the world seem so much less interested in social innovation, relative to innovation in physical and software devices and systems?

I’ve proposed the thesis of our new book as one explanation. But as many other explanations often come to people’s minds, I thought I might go over why I find them insufficient. Here goes: Continue reading "Social Innovation Disinterest Puzzle" »

When Disciplines Disagree

Our new book, The Elephant in the Brain, can be seen as taking one side in a disagreement between disciplines. On one side are psychologists (among others) who say of course people try to spin their motives as being higher than they are, especially in public forums. People on this side find our basic book thesis, and our many specific examples, so plausible that they fear our book may be too derivative and unoriginal.

On the other side, however, are most experts in concrete policy analysis. They spend their time studying ways that schools could help people to learn more material, hospitals could help people get healthier, charities could better assist people in need, and so on. They thus implicitly accept the usual claims people make about what they are trying to achieve via schools, hospitals, charities, etc. And so the practice of policy experts disagrees a lot with our claims that people actually care more about other ends, and that this is why most people show so little interest in reforms proposed by policy experts. (The world shows great interest in new kinds of physical devices and software, but far less interest in most proposed social reforms.)

My first book The Age of Em can also be seen as expressing disagreement between disciplines. In that book I try to straightforwardly apply standard economics to the scenario where brain emulations are the first kind of AI to displace most all human workers. While the assumption of brain-emulation-based-AI seems completely standard and reasonable among large communities of futurists and technologists, it is seen as radical and doubtful in many other intellectual communities (including economics). And many in disciplines outside of economics are quite skeptical that economists know much of anything that can generalize outside of our particular social world.

Now if you are going to make claims with which whole disciplines of experts disagree, you should probably feel most comfortable doing so when you have at least a whole discipline supporting you. Then it isn’t just you the crazy outlier against a world of experts. Even so, this sort of situation is problematic, in part because disagreements usually don’t become debates. A book on one side of a disagreement between disciplines is usually ignored by the disciplines who disagree. And the disciplines that agree may also ignore it, if the result seems too obvious to them to be worth discussing within their discipline.

This sort of situation seems to me one of the worse failings of our intellectual world. We fail to generate a consistent consensus across the widest scope of topics. Smaller communities of experts often generate a temporary consistent consensus within each community, but these communities often disagree a lot at larger scopes. And then they mostly just ignore each other. Apparently experts and their patrons have little incentive to debate those from other disciplines who disagree.

When two disciplines disagree, you might think they would both turn especially to the people who have become experts in both disciplines. But in fact those people are usually ignored relative to the people who have the highest status within each discipline. If we generated our consensus via prediction markets, it would automatically be consistent across the widest scope of topics. But of course we don’t, and there’s little interest in moving in that direction.

Automatic Norms in Academia

In my career as a researcher and professor, I’ve come across many decisions where my intuition told me that some actions are prohibited by norms. I’ve usually just obeyed these intuitions, and assumed that everyone agrees. However, I only rarely observe what others think regarding the same situations. In these rare cases, I’m often surprised to see that others don’t agree with me.

I illustrate with the following set of questions on which I’ve noticed divergent opinions. Most academic institutions have no official rules to answer them, nor even an official person whom one can ask. Professors are just supposed to judge for themselves, which they usually do without consulting anyone. And yet many people treat these decisions as if they are governed by norms.

  1. What excuses are acceptable for students missing an assignment or exam?
  2. If a teacher will be out of town on a class day, must a substitute teacher always be found or can classes sometimes be cancelled? How often can this be done?
  3. Is there any limit on how much extra help or extra credit assignments teachers can offer only to particular students?
  4. Should students be excused for misunderstanding questions due to poor understanding of English?
  5. Is it okay in college to teach students to just remember and then spit back relatively dogmatic statements, instead of trying to teach them how to think about more complex problems?
  6. Is it okay to assign a final exam, but then toss the exams and give out final grades based on all prior assignments?
  7. Is it okay to give all grad students A grades, and to praise all their papers as brilliant, as a way to compete to get students to pick you as their PhD advisor?
  8. Is it okay to lecture while stumbling drunk?
  9. Must you cite the work that actually influenced your work if it is lowbrow, like blogs, Wikipedia, or working papers, or if it is outside your discipline?
  10. Can you cite prestigious papers that look good in your references if they did not influence your work?
  11. Is it okay to write as if the first work of any consequence on a topic was the first to appear in a top prestige venue, in effect presuming that lower prestige prior work was inadequate?
  12. Should you cite papers requested by journal referees if you don’t think them relevant?
  13. How much searching is okay, whether over theory assumptions or over statistical model specifications, in order to find the kind of result you wanted? Must you disclose such searching?
  14. Is it okay to publish roughly the same idea in several places as long as you don’t use the exact same words?

I expect the same holds in most areas of life. Most detailed decisions that people treat as norm-governed have no official rules or judges. Most people decide for themselves without much thought or discussion, assuming incorrectly that relevant norms are obvious enough that everyone else agrees.
