Showing posts with label measurement. Show all posts

Sunday, December 30, 2012

Celsius' temperature scale (and other delights at Uppsala University)

While in Sweden I had a delightful visit to Uppsala University.

It was there, in the 1700s (as the University was already nearing its 300th anniversary), that Anders Celsius undertook the meteorological research that required his invention of the temperature scale that bears his name. Except not quite in the form we know it today. Here's a picture I took of one of his original thermometers, through the display case in a University museum: If you can click through and look at the enlargement, you'll see that the top of the scale (far left), which marks the boiling point of water, is marked 0 degrees, while the middle of the scale, which marks water's freezing point, is marked 100. (Celsius apparently felt that solids were 'more' than gases...). The reversal to what we know today as the conventional Celsius scale came in 1745, shortly after his death.
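Put another way, a reading on Celsius's original inverted scale maps to the modern scale by a simple flip: modern = 100 − original. A minimal sketch (my own illustration, not from the post):

```python
def original_to_modern(original_deg):
    """Convert a reading on Celsius's original inverted scale
    (0 = boiling point of water, 100 = freezing point)
    to the modern Celsius scale."""
    return 100 - original_deg

print(original_to_modern(0))    # boiling: 0 original -> 100 modern
print(original_to_modern(100))  # freezing: 100 original -> 0 modern
```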

***********

It turns out that Uppsala University has other interesting museums, and in one of them I took a picture of what is said to be an example of the first circulating bank note...

Wednesday, August 3, 2011

Incentives for cheating and for cheating-detection

A chilling and surprisingly complicated story from Panos Ipeirotis at NYU Stern (who blogs under the title "A Computer Scientist in a Business School"): Why I will never pursue cheating again (since deleted, but that URL now has related links). Apparently he got a cease and desist letter, and the story has since attracted some press.

Here's one story about it: NYU Professor Catches 20% Of His Students Cheating, And He's The One Who Pays For It,

and here's another: NYU Prof Vows Never to Probe Cheating Again—and Faces a Backlash

Ultimately he draws a market design conclusion:
"In Mr. Ipeirotis’ view, if there’s one big lesson from his semester in the cheating trenches, it’s this: Rather than police plagiarism, professors should design assignments that cannot be plagiarized.

"How? He suggested several options. You could require that projects be made public, which would risk embarrassment for someone who wanted to copy from a past semester. You could assign homework where students give class presentations and then are graded by their peers, ratcheting up the social pressure to perform well. And you could create an incentive to do good work by turning homework into a competition, like asking students to build Web sites and rewarding those that get the most clicks."

Saturday, July 4, 2009

Getting what you measure: college rankings version

As the rankings of universities conducted by the magazine US News and World Report have become more influential, there is a growing number of reports of the ways, fair and not so fair, that universities respond to what USNWR tries to measure.

Clemson University has been in the news in connection with its stated efforts to rise higher in the US News and World Report rankings of colleges.
They and their critics agree that they want to do this; the question is whether they are doing it in the right way for the right reasons.

Here's a critic who says no: Researcher Offers Unusually Candid Description of University's Effort to Rise in Rankings:
"Clemson University is run in an almost single-minded direction, with nearly all policies driven by how they will help the land-grant institution rise in U.S. News & World Report’s rankings, according to a university official whose candid comments stirred debate among conference-goers here on Tuesday."

and the reply:
Clemson Assails Allegations That It Manipulates 'U.S. News' Rankings
"Clemson University, stung by charges by one of its own researchers that it willfully manipulates the U.S. News & World Report rankings, fired back on Wednesday, saying the accusations are “outrageous” examples of “urban legends” that have surrounded the university’s campaign to reach the top 20 of public research universities.

“The accusation that Clemson, its staff, and administrators have engaged in unethical conduct to achieve a higher ranking is untrue and unfairly disparages the sincere, unwavering, and effective efforts of faculty and staff to improve academic quality over the past 10 years,” reads a statement issued by the university’s chief spokeswoman, Catherine T. Sams. “While we have publicly stated our goal of a top-20 ranking, we have repeatedly stressed that we use the criteria as indicators of quality improvement and view a ranking as the byproduct, not the objective.”"

Here's a summary: Clemson Explains Its Approach to U.S. News Rankings

And here's a story about alleged simple mis-counting at USC's School of Engineering: More Rankings Rigging, and a summary reflecting the relation between what is measured and what is reported: Gaming the Rankings. Here's an illuminating paragraph:

"Any performance measure is ripe to be gamed. The percentage of alumni giving is a measure worth 5 percent of a ranking in U.S. News. A few years ago, Albion College made its own stir in the higher education rankings world when it increased its percentage of alumni making donations with the stroke of a pen. As The Wall Street Journal reported, the college recorded a $30 donation from a graduating senior as a $6 alumnus gift for the next five years. Clemson, in its systematic approach to raising its rank — “no indicator, no method, no process off limits to create improvement,” as Watt stated — solicited alumni donations in such a way as to increase their giving rate: Alumni were encouraged to give as little as $5 annually."

Note incidentally that there are different ways to try to rise in the rankings, and some may be strictly gaming (e.g. soliciting and/or reporting the same $30 contribution in a different way), while others (lowering the number of classes with more than 20 students) may have a positive effect in themselves. But whenever the goal is one thing and what is or can be measured is another, there will of course be incentives to respond to what is being measured.
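The arithmetic behind the Albion trick is worth spelling out: splitting one $30 gift into five $6 annual gifts adds no new dollars, but it counts that graduate as a donor in five reporting years instead of one, nudging up the alumni-giving rate in each of them. A toy sketch (the cohort sizes here are made up for illustration, not from the article):

```python
# Hypothetical cohort: 10,000 alumni, 2,000 other donors each year.
ALUMNI, OTHER_DONORS, YEARS = 10_000, 2_000, 5

def avg_giving_rate(donor_years_for_one_grad):
    """Average alumni-giving rate over YEARS, where one graduate
    is counted as a donor in that many of the years."""
    total_donor_counts = OTHER_DONORS * YEARS + donor_years_for_one_grad
    return total_donor_counts / (ALUMNI * YEARS)

rate_lump = avg_giving_rate(1)       # $30 gift, counted once
rate_split = avg_giving_rate(YEARS)  # $30 booked as $6/year for 5 years

assert rate_split > rate_lump  # same dollars, higher reported rate
```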

Friday, July 3, 2009

Medical tourism and medical data

An Op-Ed in the NY Times reports on the difficulty of evaluating how well foreign hospitals do compared to American hospitals: Overseas, Under the Knife. A big difficulty is that appropriate data aren't collected on medical outcomes:

"There is reason to think the quality of care at some foreign hospitals may be comparable to quality in the United States. More than 200 offshore hospitals have been accredited by the Joint Commission International, an arm of the organization that accredits American hospitals. Many employ English-speaking surgeons who trained at Western medical schools and teaching hospitals.
So should offshore surgery be welcomed as a modest way to make American health care more affordable? We can’t know until we can directly compare the outcomes with those of American surgery. To begin, we must adopt a uniform way for American hospitals and surgeons to report on the frequency of short-term surgical complications.
Medicare could do this by requiring that all participating hospitals and surgeons count pre-surgical risk factors and post-surgical complications during hospitalization and for 30 days afterward, when most short-term problems become evident. The system used for many years by Veterans Affairs hospitals to reduce surgical complications is the best option for this, since it is available to all American doctors through the American College of Surgeons. So far, however, only a small minority of surgeons participate in this or any other valid national system of reporting surgical outcomes.
Patients and their surgeons also need comparable measurements of long-term success. Medicare should lead by adopting Sweden’s method of monitoring hip joint replacement outcomes. It tracks, for example, a patient’s ability to walk without pain six years after surgery.
Finally, Medicare should invite accredited offshore hospitals and their affiliated doctors to participate in all of its comparative performance reporting systems. Beyond informing Americans contemplating treatment abroad, such comparisons would allow us to learn if our care is the world’s best — and to accelerate our improvement efforts if it is not."

Agreeing on what data to collect, and collecting it, isn't easy. (And of course what data you collect can influence what outcomes you get in ways that aren't all desirable.) But the lack of outcome data is a weak link in American medicine, which makes it difficult to evaluate alternative practices and procedures. I see this in discussions about kidney exchange, and my guess is that this is a big problem in improving medicine and the medical marketplace generally.