The typical modern credential (i.e., a standard sign of worker quality with widely understood significance) is based on a narrow written test of knowledge, given early in one's career, on a pre-announced date, at a quiet location.
It doesn't resonate with me. In most of my courses the final exam made up a decent chunk (often ~50%, as in your case), but many courses instead had final papers, and in the ones that did have a final exam, it was never a multiple-choice standardized test, as Robin claims. The exams were typically long essays (philosophy, social sciences), free response (derivations in math courses), or computer-graded (program submissions run against test suites in CS). This was across three different programs at three colleges, two public and one private.
Sounds like an interview followed by a probationary appointment and on-the-job assessments. Are you suggesting something new here, or seeing if anyone recognises the status quo when described in unfamiliar terms?
Combining this with another idea found in your recent posts, what if judges were to express their predictions by speculating in a futures market on testees' future performance? Apart from the obvious problem of testees bribing judges, I mean.
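To make the speculation concrete, here is a minimal sketch (in Python, with all names hypothetical and purely illustrative) of one way such a market could settle, assuming a testee's performance can eventually be scored in [0, 1]: a judge who thinks the market underrates a testee buys shares at the current price, and each share later pays out the realized score, so the judge profits only when the prediction beats the price.

```python
# Minimal sketch of a futures market on a testee's future performance.
# All names are hypothetical; assumes performance is later scored in [0, 1].

class PerformanceFuture:
    def __init__(self, testee, price):
        self.testee = testee    # whose future performance is being traded
        self.price = price      # current market estimate of the score, in [0, 1]
        self.positions = {}     # judge -> number of shares held

    def buy(self, judge, shares):
        """A judge who thinks the market underrates the testee buys shares."""
        self.positions[judge] = self.positions.get(judge, 0) + shares
        return shares * self.price  # cost paid now

    def settle(self, realized_score):
        """Years later, each share pays out the realized performance score."""
        return {judge: shares * realized_score
                for judge, shares in self.positions.items()}

# Usage: the judge pays 0.6 per share, so the bet is profitable
# only if the testee later scores above 0.6.
market = PerformanceFuture("testee_A", price=0.6)
cost = market.buy("judge_1", shares=100)      # pays 60.0 now
payouts = market.settle(realized_score=0.8)   # judge_1 receives 80.0
print(cost, payouts)
```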
I second this as a worthy case to evaluate. It has the advantage of a variety of different credentialing mechanisms, and the bonus of controversy about neglecting fundamental ones, as in this piece: http://www.theatlantic.com/...
Many of the ideas you propose involve giving the judge a great deal of flexibility to decide the terms of the test. That is arbitrary: at the whim of the arbiter.
To me, the exemplar of an improved credentialing mechanism is the blind orchestra audition, which succeeds because it removes a factor from the judge's control rather than granting the judge more control. The more control the judge has, the more the successful applicant will resemble the judge in ways that have nothing to do with competence at the task.
Is the objective to arrive at a single accurate standardized measure of worth? That sounds like a fever dream for a central planner, not an efficient market.
There is a reason a range of different tests exists, with different perceived values: people disagree on their fundamental value. There's no magical valuation formula for a corporation; why should a skill set be any different?
There are too many opportunities for corruption and discrimination when the tests are arbitrary. There's a reason that governments use formal, written tests for the civil service.
As there are as many credentials as judges, the problem becomes one of finding the best judges of performance and then learning their ratings of particular people.
Something much like what you have described here has been implemented in engineering schools as a "senior design project"...
The job assessments would have to be standardized so they could be meaningful to people in very different contexts.
I didn't read Robin as saying college examinations were usually multiple-choice standardized exams.
Would you prefer Kafkaesque?
Who said anything about arbitrary?
The idea is to substitute for existing credentials, which are based on the standard sort of tests.
I don't think society is dominated by people who excel at exams. In fact, qualifications don't appear very relevant at all.
I think so. Until I, the decision maker and receiver of the signal, am familiar with it, the signal is 100% noise.
In high school, I always resented surprise exams as paternalistic. [Where I went to college, class attendance wasn't even mandatory.]
Sounds a lot like networking to me...