The place for all things wine, focused on serious wine discussions.

WSJ On WineScores

Moderators: Jenise, Robin Garr, David M. Bueker

User

TomHill

Rank

Here From the Very Start

Posts

8373

Joined

Wed Mar 29, 2006 12:01 pm

WSJ On WineScores

by TomHill » Sun Feb 05, 2012 8:38 pm

Interesting article in the WSJ from some time ago that I just read:

WSJ/WineScores

May be old stuff to some folks.

Tom
User

Brian Gilp

Rank

Wine guru

Posts

1440

Joined

Tue May 23, 2006 5:50 pm

Re: WSJ On WineScores

by Brian Gilp » Mon Feb 06, 2012 1:33 pm

I have probably posted this before, but here goes again. When I worked at a small winery, the winemaker sent me home with two unidentified bottles that he wanted me to drink and report back on the next week. Both were a light red in color. Definitely not rosé, but much lighter than the cab they produced. While the winery was in Indiana, the grapes were all sourced from Napa and Carneros, and they did not make any hybrids or fruit wine like most of the other wineries in the state. I honestly don't remember now what I thought of the first wine I tried. I only recall that I eventually decided it was a red vinifera. I remember that I felt the second wine was a touch light in body with a lot of cherry flavor but lacking complexity. I figured this must be a Pinot Noir.

Since I was relying on what I knew the winery currently produced and using the color as a major indicator, I missed both wines badly. The first wine was a Chardonnay that had been passed through the filter immediately after the Merlot and thus picked up the red color. The second was an experiment with cherry wine.
User

wnissen

Rank

Wine guru

Posts

1297

Joined

Wed Mar 22, 2006 1:16 pm

Location

Livermore, CA

Re: WSJ On WineScores

by wnissen » Mon Feb 06, 2012 3:12 pm

Brian, yours is an extreme example, but the studies I've seen on truly blind (no color cues) tasting show that it's not unusual.

The article's discussion of how gold medals are essentially awarded by chance certainly fits with my empirical observations. I have found that the number of medals awarded to a wine is determined almost entirely by the number of contests it is submitted to. It can't be cheap, mailing off three bottles plus an entry fee, but some wineries just love to be able to put "medal winner" on the tasting notes or the bottle.

I hope that truly flawed wines, those with excessive VA, or cork taint, are in fact judged accurately. But within the broad universe of essentially sound wines, despite the best, earnest efforts of many dedicated judges, the results might as well be produced by a random number generator.
Walter Nissen
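Walter's "random number generator" quip is easy to sanity-check with a toy simulation. Everything below is an assumption made purely for illustration, not data from the WSJ article; in particular the 30% per-competition medal rate is invented:

```python
import random

random.seed(1)
MEDAL_RATE = 0.3  # assumed chance any sound wine medals at one competition


def medals_won(n_competitions, medal_rate=MEDAL_RATE):
    """Count medals one wine wins across n competitions that judge at random."""
    return sum(random.random() < medal_rate for _ in range(n_competitions))


# Under this model, the more contests a winery enters, the more certain
# it is to come home with something to print on the bottle.
for n in (1, 5, 15):
    trials = [medals_won(n) for _ in range(10_000)]
    share = sum(t >= 1 for t in trials) / len(trials)
    print(f"{n:2d} entries -> share of wines winning at least one medal: {share:.2f}")
```

If judging really were this random, medal counts would track the number of contests entered and little else, which is exactly the pattern Walter describes.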
User

Pinchas L

Rank

Just got here

Posts

0

Joined

Fri Apr 17, 2009 2:04 pm

Location

Brooklyn, NY

Re: WSJ On WineScores

by Pinchas L » Mon Feb 06, 2012 3:50 pm

While these findings don't surprise me, I wonder if a similar study has been conducted on wine notes taken by those who drink a bottle of wine over the course of a meal. While I still expect the wine's pedigree to influence the taster, I would be very surprised if the same level of inconsistency prevailed. Granted, the particular foods paired with the wine in question will influence one's experience, but if that person were to make a conscientious effort to weigh the effect of his choice of food and wine pairing on his overall experience, the results should be revealing and consistent.

One of the arguments made against professional critics reporting on wines they have with their meals is that such results are highly subjective, yet this article points to a greater flaw inherent in their practice of reporting on wines they drink in formal tastings: that they are meaningless.

-> Pinchas
User

TomHill

Rank

Here From the Very Start

Posts

8373

Joined

Wed Mar 29, 2006 12:01 pm

Hmmmmm...

by TomHill » Mon Feb 06, 2012 3:56 pm

Pinchas L wrote:One of the arguments made against professional critics reporting on wines they have with their meals is that such results are highly subjective, yet this article points to a greater flaw inherent in their practice of reporting on wines they drink in formal tastings: that they are meaningless.
-> Pinchas


Hmmm.....not sure "meaningless" is the term I would use, Pinchas. They are only a brief snapshot in time, and "limited validity" is probably the term I would use. I think people put a precision on these scores that is unwarranted. OTOH, these professional critics assure us that, because of their vast tasting experience, their brief snapshot is all they need in order to predict the evolution of that wine over time. Yeah..sure!!
Tom
User

David M. Bueker

Rank

Childless Cat Dad

Posts

36371

Joined

Thu Mar 23, 2006 11:52 am

Location

Connecticut

Re: WSJ On WineScores

by David M. Bueker » Mon Feb 06, 2012 3:59 pm

Study all you want - scores are not going away.
Decisions are made by those who show up
User

Brian Gilp

Rank

Wine guru

Posts

1440

Joined

Tue May 23, 2006 5:50 pm

Re: WSJ On WineScores

by Brian Gilp » Mon Feb 06, 2012 8:26 pm

wnissen wrote:
But within the broad universe of essentially sound wines, despite the best, earnest efforts of many dedicated judges, the results might as well be produced by a random number generator.


One other really interesting thing I learned while working at the winery was that some wines show more variation than others. There was a 1988 Merlot that would taste vastly different week to week, while all bottles opened during a single day or weekend would taste consistent bottle to bottle, so I don't think it was bottle variation. The other wines the winery made showed nowhere near the variation of this Merlot. It got to the point that the decision to pour the 1988 was made each morning depending upon how it was showing on that given day.

I assume this is an extreme case, but I am willing to bet it's not unique.
User

Craig Winchell

Rank

Just got here

Posts

0

Joined

Tue Jan 19, 2010 1:09 pm

Re: WSJ On WineScores

by Craig Winchell » Mon Feb 06, 2012 8:30 pm

It's true that almost any wine will win a medal if it is entered into many competitions. However, to generalize and say that medals are meaningless is idiotic. The isolated or occasional medal is meaningless if the wine has been entered into many competitions. The habitual medal is, on the other hand, a representation of quality, or style, or both. Any wine entered into 10 competitions that comes away with 8 medals, 4 of which are gold, is one that is certainly worth trying, in my opinion. As would even be a wine entered into 4 competitions and winning 3 or 4 medals. The fact that, in my experience, only California Grapevine polls the wineries to determine which events they entered, and tries to accurately depict the medals won in those competitions, just means that one should always try to obtain their compendium of results. After all, only with such a compendium is there any hope of comprehending competition results. But even in the absence of such a compendium, competition results should not be pooh-poohed. While wineries know that with enough tries they're bound to get a few medals, there is still the understanding by the winery, warranted or not, that the judges know what they're doing, that they can't be fooled too badly. Thus, while we may try the shotgun approach to yield a few medals to a marginal wine, the costs associated with entering these competitions, in terms of entry fees, packaging, shipment and wine cost, are expensive, much too expensive to justify habitually entering bad wine.
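Craig's distinction between the occasional medal and the habitual one can be put in numbers with a quick binomial sketch. The 30% per-competition medal rate below is a made-up assumption, not a figure from the thread or the article:

```python
from math import comb

# Assumed probability that a random sound wine medals at any one competition.
p = 0.30


def prob_at_least(k, n, p):
    """P(at least k medals across n independent competitions)."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))


# One medal somewhere is nearly guaranteed with enough entries,
# but a habitual medal record is very unlikely to be pure luck.
print(f"P(>=1 medal in 10 entries):  {prob_at_least(1, 10, p):.3f}")
print(f"P(>=8 medals in 10 entries): {prob_at_least(8, 10, p):.5f}")
```

If that rate is anywhere near right, the isolated medal carries almost no information, while 8 medals in 10 entries would be a hundreds-to-one shot by chance alone, which is the core of Craig's argument.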
User

Mark Lipton

Rank

Oenochemist

Posts

4729

Joined

Wed Mar 22, 2006 1:18 pm

Location

Indiana

Re: WSJ On WineScores

by Mark Lipton » Tue Feb 07, 2012 1:30 am

Craig Winchell wrote:It's true that almost any wine will win a medal if it is entered into many competitions. However, to generalize and say that medals are meaningless is idiotic. The isolated or occasional medal is meaningless if the wine has been entered into many competitions. The habitual medal is, on the other hand, a representation of quality, or style, or both. Any wine entered into 10 competitions that comes away with 8 medals, 4 of which are gold, is one that is certainly worth trying, in my opinion. As would even be a wine entered into 4 competitions and winning 3 or 4 medals. The fact that, in my experience, only California Grapevine polls the wineries to determine which events they entered, and tries to accurately depict the medals won in those competitions, just means that one should always try to obtain their compendium of results. After all, only with such a compendium is there any hope of comprehending competition results. But even in the absence of such a compendium, competition results should not be pooh-poohed. While wineries know that with enough tries they're bound to get a few medals, there is still the understanding by the winery, warranted or not, that the judges know what they're doing, that they can't be fooled too badly. Thus, while we may try the shotgun approach to yield a few medals to a marginal wine, the costs associated with entering these competitions, in terms of entry fees, packaging, shipment and wine cost, are expensive, much too expensive to justify habitually entering bad wine.


Craig,
I am not as sanguine as you about the significance of medals, even when large numbers of them have been garnered. My first criticism goes to the heart of judging at wine competitions. Tasters there often have to taste many dozens of wines, perhaps as many as 100, and are not given time to linger over any one wine. Even trained tasters fatigue, and many tasters at lesser-known competitions aren't necessarily that well trained. Under ideal circumstances, there may be a fair amount of unanimity between competitions about the merit of a given wine, but does that mean it will be the best choice for dinner tonight? Most people I know claim that big, fruity and forward wines tend to win competitions, as they make the greatest impression on fatigued palates... but most of us would also agree that those same wines can be difficult to pair well with food. When we recently visited McWilliams' Mt. Pleasant winery in the Hunter Valley, the very informative man pouring for us mentioned (in response to my observation that Aussie wineries tend to place more emphasis on medals received than their US counterparts do) that the emphasis on medals was for their customers from E. Asia, who place great store in those medals. He also added that a gold medal from an unknown competition out in the boondocks may not be a great index of the wine's quality.

Mark Lipton
User

Craig Winchell

Rank

Just got here

Posts

0

Joined

Tue Jan 19, 2010 1:09 pm

Re: WSJ On WineScores

by Craig Winchell » Tue Feb 07, 2012 1:47 am

Mark, of course the ripe, rich, big, fruity, forward wines can make the biggest impression upon fatigued judges. And Robert Parker, and his ilk. What of it? The best ones are still worth a shot. Notice that I said that habitual medals could be an indication of style in addition to quality. Just because a wine deserves a taste does not mean that it will be the best wine for a given purpose. I never said that a medal was the equivalent of tasting notes by someone with whom one's tastes jibe; I simply said medals should not be pooh-poohed.
User

Howie Hart

Rank

The Hart of Buffalo

Posts

6389

Joined

Thu Mar 23, 2006 4:13 pm

Location

Niagara Falls, NY

Re: WSJ On WineScores

by Howie Hart » Tue Feb 07, 2012 9:11 am

While I agree with Mark regarding medals, they can still have value. They can help a new, small winery get established. From a consumer's viewpoint, if a wine has been awarded a medal, it probably doesn't suck. It's interesting that there are now about 15 wineries in Niagara County and only about 3 of them have ever entered any of their wines in competitions.
Chico - Hey! This Bottle is empty!
Groucho - That's because it's dry Champagne.
User

David Creighton

Rank

Wine guru

Posts

1217

Joined

Wed May 24, 2006 10:07 am

Location

ann arbor, michigan

Re: WSJ On WineScores

by David Creighton » Tue Feb 07, 2012 10:51 am

i'm tempted to agree with the conclusions here but certainly not with the supposed science.

i'm reminded that early greek philosophers were so struck by the changeability of even the most basic things that one said you can't step into the same river twice - to which another added that you can't even step into the same river once, since new waters constantly flow around you.

this seems to have been forgotten in the experiment where the 'same' wine was served to the same panel of experts on three occasions from the 'same' bottle. we aren't told how far apart these experiences happened, but i would hope it was at least the same day. so, how long did the wine breathe after the bottle was opened, and how long did it breathe in the glass after it was poured? let's guess that both were negligible. so a couple of hours of breathing later the 'same' wine is served again; and then again a couple of hours later. would any of us expect the wines to taste the same? the judges can only judge the wine in front of them at that moment.

different competitions: competitions are spread out over the year, usually from january to november. so, wineries in various climates send wines to competitions in various climates by the usual shippers, and the wines are delivered at the ambient temperature of that day plus the heat built up in the truck. sometimes the wines sent have been recently bottled; sometimes they have been in the bottle for 8 months or more. so, the labels all say the same thing, but are bottles sent from or to really warm places the same when they arrive as ones sent from and to cooler climes? are recently bottled wines really the same in the appropriate sense as ones that are not?

add to all this, in either case, the vagaries of serving temperature, glass washing technique (esp. rinse agent), and the room itself, and what can one expect?

so, judges deal with the wines in front of them - and without any scientific justification, the author claims that all the wines are the 'same'. hogwash. i know that if the cork is just drawn at a tasting and i am the first pour, i need to go back later to get a real sense of the wine's quality. so2, h2s, ferm odors and more are likely in a just-opened bottle.

so, one problem that competitions have NOT faced is that of trying to find a protocol for serving wines. so far as i know there is no agreed-upon procedure for:
1. how long the bottle should be opened before it is served
2. how long the wine should be in the glass before it is presented to the judges
3. serving temperature
4. glass cleanliness procedures
5. shipping
6. minimum time in bottle
etc.

in addition to all this, a consumer may well purchase a gold medal wine a year after the competition. still the same wine?
david creighton
User

Craig Winchell

Rank

Just got here

Posts

0

Joined

Tue Jan 19, 2010 1:09 pm

Re: WSJ On WineScores

by Craig Winchell » Tue Feb 07, 2012 12:24 pm

In the old days, there was a fellow by the name of Craig Goldwyn (still around, but not big in wine these days, as far as I know). He developed a series of protocols for wine tastings based upon a 13-point hedonic scale (like/dislike), had large tasting panels, tested the judges by having duplicate wines in each flight (rejecting the judges who failed from the results of that flight), subjected the results to statistical analysis, and published the results in a publication called "International Wine Review" (now defunct). His company, Beverage Testing Institute, is still around, though he is not currently associated with them (as far as I know), and I don't know whether or not they still use his protocols. But in its prime, the competitions it held were top notch, probably the best run in the history of wine competitions. But they were asking a question which few ask these days. Now, the question asked by everyone seems to be, "Which is the best wine?" His competition tried to answer, "Which wine is enjoyed the most under these conditions, at this moment in time?" And that really is the only question that can accurately be answered by such a panel. We can quantify taste analysis or even descriptive analysis of various components, but we are doing it at a moment in time. We can project how a wine will develop over any particular time period, but that projection will rarely be accurate. Craig understood this, and attempted simply to quantify likes and dislikes, noting that different people liked different things differently, so a large panel was needed.
Taking it one step further, Analysis of Variance could be used to discover which tasters liked the wines differently than others, the tasters could be grouped in terms of similarities of taste, one could discover which types of wines appealed most strongly to the largest groups of tasters (assuming tasters are chosen randomly), and therefore one could conceivably craft wines to those specifications, if one desired. Craig was way ahead of his time in the use of scientific methodologies, which are unfortunately not practiced by competitions at the present time.
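The duplicate-bottle screen described above can be sketched in a few lines. The data structure, the judge and pour names, and the 2-point threshold are all hypothetical; only the 13-point scale and the reject-inconsistent-judges idea come from the post:

```python
# Hypothetical flight: pours "A1" and "A2" are secretly the same wine,
# scored by each judge on a 13-point hedonic scale.
scores = {
    "judge_1": {"A1": 10, "A2": 9, "B": 6},
    "judge_2": {"A1": 4, "A2": 11, "B": 8},  # wildly inconsistent on the duplicate
    "judge_3": {"A1": 7, "A2": 7, "B": 12},
}

CONSISTENCY_THRESHOLD = 2  # assumed maximum allowed gap between duplicate pours


def consistent_judges(scores, dup=("A1", "A2"), threshold=CONSISTENCY_THRESHOLD):
    """Keep only judges who scored the duplicate pair within the threshold."""
    a, b = dup
    return {judge: s for judge, s in scores.items() if abs(s[a] - s[b]) <= threshold}


kept = consistent_judges(scores)
print(sorted(kept))  # judge_2's scores are dropped from this flight's results
```

The same per-judge score table is also the natural input for the ANOVA-style grouping of tasters that the post goes on to describe.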
User

David Creighton

Rank

Wine guru

Posts

1217

Joined

Wed May 24, 2006 10:07 am

Location

ann arbor, michigan

Re: WSJ On WineScores

by David Creighton » Tue Feb 07, 2012 1:55 pm

i've met him - he was in charge at the finger lakes competition one year, i believe. i like his approach to the tasting situation. many of my points about protocol focused on the 'back room' - the period before the tasting experience begins. thanks for reminding me of him and his ideas.
david creighton
User

Kelly Young

Rank

Ultra geek

Posts

473

Joined

Wed Feb 17, 2010 3:37 pm

Location

Washington, DC

Re: WSJ On WineScores

by Kelly Young » Tue Feb 07, 2012 2:56 pm

I had forgotten about the BTI. I knew about them and their tasting regimen from the world of beer; I didn't even realize he did this for wine, though it makes perfect sense.

As I've said before, I personally find sites like this far more useful for making wine buying decisions than any medal/point system.
User

Victorwine

Rank

Wine guru

Posts

2031

Joined

Thu May 18, 2006 9:51 pm

Re: WSJ On WineScores

by Victorwine » Wed Feb 08, 2012 8:23 pm

Commercial wine competitions (awarding wines “Best of Show”, “Double Gold”, etc.) and just scoring a wine to “evaluate” it are not exactly the same thing. A competition is just that, a “contest”, and as with all contests you will have “winners” and “losers”. A commercial wine competition that doesn’t award “Best of Show”, “Best of Class”, “gold medals”, etc. IMHO would not be very “successful”. (Who would spend the money on entry fees and shipping for a competition that never has any “winners”? If a commercial wine competition had a “Jug Wine” class, surely a wine in that class could be awarded (or deemed) a “Gold medal” wine in that class.) I agree with Howie: commercial wine competitions, whether they are local, regional, or international, are great for marketing the wines. For the small boutique wineries that don’t have large sums of money set aside for advertising (and getting the “word out”), commercial wine competitions are options.
After saying all this though, I don’t think a “competition setting” is appropriate for a fair “evaluation” of an individual wine.

Salute
User

Hoke

Rank

Achieving Wine Immortality

Posts

11420

Joined

Sat Apr 15, 2006 1:07 am

Location

Portland, OR

Re: WSJ On WineScores

by Hoke » Wed Feb 08, 2012 9:01 pm

In the old days, there was a fellow by the name of Craig Goldwyn (still around, but not big in wine these days, as far as I know). He developed a series of protocols for wine tastings based upon a 13-point hedonic scale (like/dislike), had large tasting panels, tested the judges by having duplicate wines in each flight (rejecting the judges who failed from the results of that flight), subjected the results to statistical analysis, and published the results in a publication called "International Wine Review" (now defunct). His company, Beverage Testing Institute, is still around, though he is not currently associated with them (as far as I know) and I don't know whether or not they still use his protocols. But in its prime, the competitions it held were top notch,


I had the distinct pleasure of being a judge on Craig's original panels. Not only were they stringently run comps, the judges walked away with a very good sense that they had done good work in getting the right wines to the right places in terms of accolades. Craig ran a tight ship, and everyone understood and complied with his rules. I'm proud to say that I had few disqualifications, but that's the case with most of us who got to participate in the "grand finals", or we would not have been there.

I thought at the time Craig's American Wine Review comps were the best there was, period. And as a young guy in the biz I was honored (and amazed) to be in the august selection of wine luminaries that participated.

The BTI on the other hand....well, let's say that it has been a very long time since Craig has been associated with them. They have become quite widely rumored ITB for being willing to say good things about wines that have been sent them with hefty "entry fees" attached. While I might...might...have paid some attention as a buyer back in the day, nowadays and for some time I pay no attention whatsoever to the BTI. It's a 'closed shop' approach: you pay your fee, you send in the wine, and they come back with some sort of score/description. That's it.

As to the question "are medals valid and meaningful?": damn right they're valid and meaningful for the trade and for those customers (quite a few) who pay attention to them. They help get writeups on a wine, and visually stimulate someone to pick up a bottle to look at it. And in the REAL wine biz, the money/volume biz, as opposed to the 1% here who actually investigate what they're drinking, medals are meaningful. If they weren't, the wineries/producers wouldn't be paying those entry fees and shipping costs. (And yes, competitions are very expensive... fees, staff, PR people, product, etc. A major part of a PR budget for a winery.)
User

wnissen

Rank

Wine guru

Posts

1297

Joined

Wed Mar 22, 2006 1:16 pm

Location

Livermore, CA

Re: WSJ On WineScores

by wnissen » Wed Feb 08, 2012 9:20 pm

So, Hoke, (or anyone familiar), how would you characterize the strength of the California State Fair protocol and judges that were studied? I was under the impression that the State Fair was among the stronger competitions in terms of the requirements for judges, but I don't really know.
Walter Nissen
User

Hoke

Rank

Achieving Wine Immortality

Posts

11420

Joined

Sat Apr 15, 2006 1:07 am

Location

Portland, OR

Re: WSJ On WineScores

by Hoke » Wed Feb 08, 2012 9:38 pm

wnissen wrote:So, Hoke, (or anyone familiar), how would you characterize the strength of the California State Fair protocol and judges that were studied? I was under the impression that the State Fair was among the stronger competitions in terms of the requirements for judges, but I don't really know.



As things go, the rules in the judging are strict and well controlled. That doesn't necessarily speak to the overall quality of the judges though. You can be consistent (which is what the rules insist upon) without being correct, if you see what I mean.

One of the problems in competitions that weighs heavily (again, in large part depending upon the judges) is when discussion and negotiation are allowed. I'm not necessarily proud of the fact, but I can nonetheless say that I have frequently swayed other judges to moderate their positions up or down to influence the level of award of a wine. And yes, the converse is true as well.

Put a young judge on a short panel with someone who is quite prestigious, or with someone who is loud even, and you'll have a skewed score. Might be skewed good, or skewed bad, but it will be skewed. And just to throw another wrench in there: winemakers often make really poor judges. Often.
User

Tom N.

Rank

Wine guru

Posts

797

Joined

Thu Mar 23, 2006 10:17 pm

Location

Soo, Ont.

Re: WSJ On WineScores

by Tom N. » Wed Feb 08, 2012 10:27 pm

Interesting discussion on wine ratings. I have been aware of this phenomenon for some time. I have compared the ratings that experienced wine tasters give the same wine and often found significant (5-10 point) discrepancies in scores, though the variance seems to be in the 3-5 point range for most wines. I learned my lesson one time with a highly rated wine, which I bought because the taster loved its mocha/coffee notes (I do not like or drink coffee) and which I really did not like at all. Now I read the tasting note first and then look at the score, if I consider it at all. I also think that if you use a specific taster's ratings, you should calibrate your palate against theirs.

But it is not all variable perception. I recently had a 2010 Michigan Pinot Noir (Brys Estate) that I really enjoyed at a restaurant in late December. I visited the winery in late January, only one month later, to taste the same wine with the idea that I would like to buy some. I did not even recognize it as the same wine. It had changed significantly in one month, from a nice medium-bodied fruity Pinot with refreshing acidity to an oak-dominated wine with bacon notes overriding the cherry fruit. When I said this to our tasting room attendant, she agreed and said that this young wine had changed character in the last two to three weeks and taken on a much more oaky tone. She also confirmed that the winery had aged the wine in light and medium toast barrels. The fruit was still there, but more in the background. Needless to say, I did not buy any even though it was still a tasty wine, just not my style. This type of wine evolution could explain at least some of the different ratings, although they are most likely due to the variability of human perception.
Tom Noland
Good sense is not common.
User

Hoke

Rank

Achieving Wine Immortality

Posts

11420

Joined

Sat Apr 15, 2006 1:07 am

Location

Portland, OR

Re: WSJ On WineScores

by Hoke » Wed Feb 08, 2012 11:11 pm

Um, Tom? Have you considered that there was one more variable you should be taking into account? You were in entirely different circumstances the two times you tasted this wine. Thus, the condition of the wine and your perception of it were both affected.

Wine in a tasting room and wine in a restaurant... or the same wine in a tasting room as opposed to a restaurant, I guess I should say... are two entirely different wines. And you are in entirely different modes of perception.

Taste that same wine in an airplane and you'd have yet a third wine.

And imagine the variability of even a professional taster when assigning points and writing tasting notes.
User

Victorwine

Rank

Wine guru

Posts

2031

Joined

Thu May 18, 2006 9:51 pm

Re: WSJ On WineScores

by Victorwine » Wed Feb 08, 2012 11:36 pm

In a commercial wine competition, who is responsible for entering a wine in the correct class or category? The entrant (winery)?
Could a wine produced from, say, 75% Zinfandel and 25% Alicante (being marketed under a “fantasy” name) be entered in the “Other Red Vinifera” or “Red Vinifera Blend” class or category if the winery staff feels that this is where the wine would “show best”, or does it have to be entered in the “Zinfandel” class or category? Would this be considered “cheating”?

Salute
User

Hoke

Rank

Achieving Wine Immortality

Posts

11420

Joined

Sat Apr 15, 2006 1:07 am

Location

Portland, OR

Re: WSJ On WineScores

by Hoke » Wed Feb 08, 2012 11:44 pm

Victorwine wrote:In a commercial wine competition, who is responsible for entering a wine in the correct class or category? The entrant (winery)?
Could a wine produced from say 75% Zinfandel and 25% Alicante (being marketed with a “fantasy” name) be entered in the “Other Red Vinifera” or “Red Vinifera blend” class or category if the winery staff feels that this is where the wine would “show best” or does it have to be entered in the “Zinfandel” class or category? Would this be considered “cheating”?

Salute


The competition organization/administrator sets the rules, Victor, and has the privilege of deciding if the category submission is correct. It depends on how the categories are stated; they can be broken down by variety with sub-categories of price or residual sugar, for instance.

In the case you cited, it would depend on how the wine was labeled. If it was varietally labeled (correctly) as Zinfandel, it would go into Zinfandel in every competition I know of. If it was the same exact wine labeled as Fantasy Name "Red Wine", then it would go into a blend or 'other red' category.

Also, judges in most competitions have the right to question an entry and its classification. The wine then has to be checked and a decision made by the administrator (without informing the judges what the wine actually is, of course).
User

Mark Lipton

Rank

Oenochemist

Posts

4729

Joined

Wed Mar 22, 2006 1:18 pm

Location

Indiana

Re: WSJ On WineScores

by Mark Lipton » Thu Feb 09, 2012 12:38 am

wnissen wrote:So, Hoke, (or anyone familiar), how would you characterize the strength of the California State Fair protocol and judges that were studied? I was under the impression that the State Fair was among the stronger competitions in terms of the requirements for judges, but I don't really know.


Years ago, the information I got was that the best-run and most reliable wine judging competition was that of the Orange County Fair, specifically in comparison to the California State Fair and the Napa and Sonoma county fairs. I believe that it was Jerry Mead who made that statement, but that was 30 years ago if it was a day.

Mark Lipton