Ratings, serendipity and selling wine

March 29th, 2010

97 points.


Today, on PalatePress.com, Ben Simons thinks out loud about the place and utility of wine ratings. I did a very light edit of this well-written piece. The day after I received the assignment, my Google alerts notified me about an article in a seemingly obscure journal in which the wine rating scale I had developed for redwinebuzz.com is compared to some pretty formidable contenders.

The article (available here) looks at my rating scale and those of Robert Parker, Stephen Tanzer, Wine Spectator, Nick Chebnikowski’s Winespider system, and the Amerine and Roessler wine rating system, as well as one proposed by Tim Elliott. The authors selected these seven scales to assess their “utility for producers, consumers, and oenologic researchers”.

The authors said of my rating system:

This carefully crafted wine scale would have appeal and be quite useful for the wine producer, the consumer, and the oenologic research scientist.

Well, when I think about it, that is why I based my scale on concrete criteria rather than on my own preference for or enjoyment of a wine.

Now, this paper by Cicchetti & Cicchetti was not intended to select a “winning” scale or rating system, one of its authors tells me. Its aim was to examine the utility of various scoring protocols. As part of this, the authors also addressed the disparity of wine ratings in the context of inter-rater reliability. Perhaps low inter-rater reliability is what leads Ben to call wine scores “empty numbers, devoid of any real meaning” in his article.

Where I part ways, philosophically, with Simons is on the issue of wine being subjective. Wine is not subjective. Enjoyment and preference are. When quality is argued to be subjective, the contention always hinges on individual preferences and enjoyment. But the quality of cars, blue jeans, apples and even wines can be put in non-subjective terms. It’s a matter of what criteria of quality a group agrees to adopt.

I was able to contact one of the study’s authors, Dr. Domenic Cicchetti, by telephone, and we discussed the phenomena underlying this variation in wine scores. Ultimately, Dr. Cicchetti and I agreed, the degree of variability in wine scores depends on the type of criteria for assigning and accumulating points as well as the degree of adherence to a methodology and those criteria. When a rating system relies more on the tasters’ preferences, there will be much disagreement on a particular wine. When the tasters have pre-determined criteria for awarding points and follow a methodology for assessing a wine (and adhere to both), then the degree of disagreement will diminish significantly.
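To make that point concrete, here is a minimal sketch, in Python, of how the spread of a panel’s scores could be compared under the two approaches. The numbers are invented purely for illustration; they are not data from the Cicchetti & Cicchetti paper or from any real tasting panel.

```python
# Hypothetical illustration only: the scores below are invented, not taken
# from the Cicchetti & Cicchetti paper or any real tasting panel.
from statistics import mean, stdev

# Panel A: tasters scoring the same wine purely on personal preference
preference_scores = [84, 96, 78, 92, 88]

# Panel B: the same wine scored against shared, pre-set criteria and methodology
criteria_scores = [89, 91, 90, 88, 92]

for label, scores in [("preference-based", preference_scores),
                      ("criteria-based", criteria_scores)]:
    print(f"{label}: mean = {mean(scores):.1f}, spread (std dev) = {stdev(scores):.1f}")

# The tighter spread of the criteria-based panel is the sense in which
# shared criteria and adherence to a methodology reduce disagreement.
```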

When one seeks to communicate about wine (and, more specifically, to rate wines), one can either base the evaluation on preference or on an unbiased description of the wine using pre-set criteria for rating quality. The most reasonable benchmarks are: varietal fidelity, regional typicity, balance, structure, food friendliness and age worthiness. This way of pre-determining what specific characteristics (which ones? how much? how little?) a wine will have to exhibit in order to achieve a particular score makes the final number mean something. Never mind that along the way to developing such a system, one becomes aware of what the individual sensory attributes mean for and about the wine: its origins, its makeup and its destiny.
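To show what such pre-determined criteria can look like in practice, here is a minimal sketch of a criteria-based tally. The benchmarks are the ones named above, but the point allocations are hypothetical and this is not the actual redwinebuzz.com scale.

```python
# A hypothetical criteria-based tally, NOT the actual redwinebuzz.com scale:
# the benchmarks come from the post; the point allocations are invented.
MAX_POINTS = {
    "varietal fidelity": 20,
    "regional typicity": 20,
    "balance":           20,
    "structure":         15,
    "food friendliness": 15,
    "age worthiness":    10,
}  # maxima sum to 100

def score_wine(awarded: dict[str, float]) -> float:
    """Sum the points awarded per criterion, capped at each criterion's maximum."""
    total = 0.0
    for criterion, maximum in MAX_POINTS.items():
        total += min(awarded.get(criterion, 0.0), maximum)
    return total

# Example: one taster's (hypothetical) assessment of a single wine
example = {
    "varietal fidelity": 18, "regional typicity": 17, "balance": 19,
    "structure": 13, "food friendliness": 14, "age worthiness": 8,
}
print(score_wine(example))  # 89.0
```

Because every point awarded must be justified against a named criterion with a fixed ceiling, the final number carries information about the wine rather than about the taster’s mood.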

Nothing indicates that this kind of approach moves wine inventory, however, as Dr. Cicchetti points out. As much as people bemoan critics “dictating” preferences, consumers find comfort and confidence in enjoyment-based scores. In 2008, this confidence amounted to US$18.5 billion.

 
