Tag Archives: Inference

Report UFOs as Physical Likelihood Ratios

It is probably too late to influence the upcoming US government UFO report, and even if it isn’t, I’m probably not high enough status to do so. But if I had any influence, I’d offer one main recommendation to its authors, and to authors of similar future reports: express results in terms of likelihood ratios for simple physical hypotheses. Let me explain.

On a topic like UFOs, we must make a chain of inferences between data and theory. At one end is the data itself, expressed at its lowest and most primitive levels: pictures, videos, physical remnants, testimony, biographies of testifiers, etc. At the other extreme are the main abstract hypotheses of interest, such as: error/delusion, hoax/lies, hidden Earth orgs, hidden non-Earth orgs.

Such a report should probably not give posterior probabilities for the abstract hypotheses. Making good judgments about those requires kinds of expertise these authors lack, and consideration of data well beyond what they are tasked with reporting on. And government agencies are famously risk-averse; I’m pretty sure they want to limit what they say, and to avoid controversial topics where they’d be more open to criticism.

But because these report authors have access to sensitive data that they’d rather not share, it also doesn’t make sense for them to just release all of their relevant detailed data. Yes, we’d like them to reveal what they can, so we can make independent analyses, and they can probably safely reveal more than they have. But we also want them to usefully summarize the data that they can’t share.

This means that their report needs to be expressed in terms more abstract than the pixels in a picture, even if less abstract than the main hypotheses of interest. So what is a good level of abstraction for a UFO report?

It seems to me that the obvious choice here is in terms of physical objects. Report UFOs in terms of what objects seemed to have what shapes, medium (air, water, space), position/speed/acceleration histories, brightness, reflectivity, sounds, fluid disturbances, shadows, and apparent reactions to humans. Speaking to these sorts of physical abstractions seems within the range of their expertise and data. And it avoids venturing into other, harder areas.

Note I used the word “seems”. They shouldn’t be trying to judge how plausible various combinations of shapes, accelerations, sounds, etc. are, beyond applying basic physics (i.e., beyond using simple physical priors over relative angles, distances, etc.). They should just ask which physical hypotheses would make the data seen most likely. The plausibilities of theories are “priors”, and how likely each theory makes the data seem are “likelihoods”; Bayesian analysis famously recommends estimating these separately, a recommendation that I’m echoing here.
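
To make this prior/likelihood split concrete, here is a minimal sketch in Python, with all numbers invented purely for illustration. The report would supply only the likelihoods; readers would bring their own priors, and so reach their own posteriors:

```python
# Minimal sketch of Bayes' rule with priors and likelihoods kept separate.
# All numbers here are invented purely for illustration.

hypotheses = ["error/delusion", "hoax/lies", "hidden Earth org", "hidden non-Earth org"]

# Priors: how plausible each theory is before seeing this data.
# These are the readers' job, not the report's.
priors = [0.90, 0.05, 0.04, 0.01]

# Likelihoods: how probable the observed data is under each theory.
# This is what the report should estimate (here, made-up values).
likelihoods = [1e-6, 1e-5, 1e-4, 1e-3]

# Posterior is proportional to prior times likelihood, then normalized.
unnormalized = [p * l for p, l in zip(priors, likelihoods)]
total = sum(unnormalized)
for h, u in zip(hypotheses, unnormalized):
    print(f"{h}: posterior = {u / total:.3f}")
```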

For example, testimony from someone on drugs can be discounted, as might radar data that goes away after the radar system is rebooted. But the more that independent sources seem to show an object with a given place, size, speed, etc., the stronger the likelihood evidence for that event, ignoring the question of what organizations might want to, or be able to, induce such an event.

Now while likelihoods can be expressed in absolute terms, I think it makes more sense here to express them in relative terms: both relative to other physical parameter values, and relative to simple error/delusion theories.

For example, when estimating the speed of a particular object at a particular point during some event, report a max likelihood speed, and also say how much relative likelihoods fall as speed moves away from it. The max likelihood speed might be 2000 mph, say, with the likelihood falling to 10% of that max value at speeds of 1000 and 3000 mph.
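
As a toy sketch of how compactly such error bars can be stated, suppose (purely as an assumption for illustration) that the likelihood curve over speed is roughly Gaussian; the stated 10%-of-max points at 1000 and 3000 mph then pin down its width:

```python
import math

# Toy sketch: a Gaussian-shaped relative likelihood curve over object speed,
# calibrated so likelihood falls to 10% of max at 1000 mph from the peak.
# The Gaussian shape and all numbers are assumptions for illustration.

v_peak = 2000.0                               # max likelihood speed (mph)
sigma = 1000.0 / math.sqrt(2 * math.log(10))  # about 466 mph, from the 10% points

def relative_likelihood(v_mph):
    """Likelihood of speed v_mph, relative to the max likelihood speed."""
    return math.exp(-((v_mph - v_peak) ** 2) / (2 * sigma ** 2))

for v in (1000, 1500, 2000, 2500, 3000):
    print(f"{v} mph: {relative_likelihood(v):.3f}")
# 1000 and 3000 mph both print 0.100, recovering the stated error bars.
```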

For an entire event, consider the most plausible sources of error or delusion: drunk observers, Venus reflected on a windshield, bits of fluff floating close to the camera, ball lightning, swamp gas, etc. Then give us a relative likelihood for the event really involving the physical object sizes, speeds, etc. that it seems to, relative to the best error theories they could find wherein these are mistakes or illusions. That is: how often would people report seeing something like this, given that this is actually what was physically happening there, relative to how often they would report seeing something this strange due to the sources of mistakes and delusions most likely to appear this way?
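
As a rough sketch of how such an event-level ratio might be assembled, if (a big if) the sources err independently, then per-source likelihood ratios simply multiply. The sources and ratios below are invented for illustration:

```python
# Sketch: combining independent evidence sources into one event-level
# likelihood ratio of "real physical object" vs. "best error/delusion theory".
# The sources and per-source ratios are invented for illustration.

source_ratios = {
    "pilot testimony": 20.0,   # P(data | real object) / P(data | error theory)
    "cockpit video":   50.0,
    "ship radar":       5.0,
}

# If the sources err independently, their likelihood ratios multiply.
event_ratio = 1.0
for ratio in source_ratios.values():
    event_ratio *= ratio

print(f"Event-level likelihood ratio: {event_ratio:,.0f}")  # 5,000 here
```

If errors across sources were instead correlated, say a common atmospheric effect fooling both eyes and radar, the ratios would not simply multiply; this is one reason the best error theories deserve a careful search.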

I’m not an expert on UFO report details, and I will defer to experts when available, but my impression is that for the ten “hardest to explain” UFO events, this last likelihood ratio will be huge: well over a thousand, and maybe over a million. Which doesn’t answer the question of what they are if not illusions. For that we need to also consider the priors and relative likelihoods of the other theory categories, a task that goes beyond the limited expertise and data tasked to these report authors. A fact for which they are probably grateful.

So please, UFO report authors: you don’t need to discuss the main big theory categories, including aliens. And you can keep your military, etc. secrets. Just tell us what physical objects were seen where, when, and with what event features. Tell us how much less likely we would be to see that under the best error theories you can find, and tell us how steeply your parameter estimates fall away from your max likelihood estimates. (That is, give error bars.)

With a report like that, the rest of us can struggle to interpret this more abstract physical data in terms of the big explanations of interest, with gratitude to you for your central contributions.

 


A LONG review of Elephant in the Brain

Artir Kel has posted a 21K-word review of our book, over 1/6 as long as the book itself! He has a few nice things to say:

What the book does is to offer deeper (ultimate) explanations for the reasons (proximate) behind behaviours that shine new light on everyday life. … It is a good book in that it offers a run through lots of theories and ways of looking at things, some of which I have noted down for further investigation. It is because of this thought-provokingness and summarisation of dozens of books into a single one that I ultimately recommend the book for purchase.

And he claims to agree with this (his) book summary:

There exist evolutionary explanations for many commonplace behaviours, and that most people are not aware of these reasons. … We suffer from all sorts of self-serving biases. Some of these biases are behind large scale social problems like the inflated costs of education and healthcare, and the inefficiencies of scientific research and charity.

But Kel also says:

Isn’t it true that education is – to a large degree – about signaling? Isn’t it true that politics is not just about making policy? Isn’t it true that charity is not just about helping others in the most efficient way? Yes, those things are true, but that’s not my point. The object-level claims of the book, the claims about how things are are largely correct. It is the interpretation I take issue with.

If you recall, our book mainly considers behavior in ten big areas of life. In each area, people usually give a particular explanation for the main purposes they achieve there, especially when they talk very publicly. For each area, our book identifies several puzzles not well explained by this main purpose, and offers another main purpose that we suggest better explains these puzzles.

In brief, Kel’s “interpretation” issues are:

  1. Other explanations can account for each of the puzzling patterns we consider.
  2. We shouldn’t call hidden purposes “motives”, nor purposeful ignorance of them “self-deception”.

