Still Images from the Wheaton Warrenville South Choral Classic 2018 (Email me for video links - [email protected]):
Mixed Group Performances – Prelims
Waconia – The Current
Stills - Waconia HS – The Current:
http://www.knollwebmedia.com/Timeline/2018/2018-03-10-STILLS-Waconia-Current-Choral-Classic/n-gRWXzw
Loveland – By Request
Stills - Loveland HS – By Request:
http://www.knollwebmedia.com/Timeline/2018/2018-03-10-STILLS-Loveland-By-Request-Choral-Classic/n-6brfqF
Waconia – Power Company
Stills - Waconia HS – Power Company:
http://www.knollwebmedia.com/Timeline/2018/2018-03-10-STILLS-Waconia-Power-Co-Choral-Classic/n-553PVN
Rolling Meadows – Leading Ladies
Stills - Rolling Meadows HS - Leading Ladies:
http://www.knollwebmedia.com/Timeline/2018/2018-03-10-STILLS-Rolling-Meadows-LLadies-Choral-Classic/n-cggPBf
Waubonsie Valley – Girls in Heels
Stills - Waubonsie HS - Girls in Heels:
http://www.knollwebmedia.com/Timeline/2018/2018-03-10-STILLS-Waubonsie-GIH-Choral-Classic/n-F3WVWz
Setup/Teardown
Video – Timelapse Setup/Teardown:
https://www.knollwebmedia.com/Timeline/2018/20180310-VIDEO-Choral-Classic-Timelapse/n-v7vLHT
IMO, we shouldn’t. Part of this larger discussion of scoring systems should include judging. I think all scores from all competitions should be made public. Judges would be held more accountable that way, and competition directors could be better informed about their future hiring decisions.
To be fair, I'm with Sam. Contests hire competent judges to make decisions about the event. If a judge performs outside the scope of their duties, then it should be noted - especially by the contest organizers - and other events should do their research on who they hire before holding their own competitions. But just because we don't like one judge's opinion doesn't mean that as an angry crowd we get to overthrow it or call shenanigans. You're right, having Waubonsie in last place isn't very consistent with their history, though they do have a newer director, and you can see from their performance record over the season that it's nowhere near where it was even a couple of years ago. Could it be possible that someone just doesn't happen to agree with you that they're the best thing since sliced bread?
I do agree that contests should have transparency, but I also think we need to be careful about burning people at the stake for having opinions. There is far too much politics in show choir as it is, but if we don't accept any results other than "Group X should never lose" then I'm not sure why we bother even having judges in the first place.
I wasn't at this competition, I haven't seen Waubonsie this year, and it's entirely possible there was some sort of error. Wheaton hasn't made any statement about this, however, so it's equally possible there were things about the group the judge didn't fancy. Heaven forbid we have a judge with fresh eyes come in and adjudicate based on what is put before him/her and not what the community thinks he/she is "supposed" to do.
Good points. So what are the criteria for a judge performing “outside the scope of their duties”? Is that a judgment call left up to the contest director? Could that ever be perceived as political?
To be fair, I have never once publicly talked about these results being wrong, nor have I spoken negatively about any of these judges, so I did not form my opinion based on this competition. This is a belief I’ve held onto for a long time, which is why you will tend to see me post the scores/ranks at the competitions I attend. I think we’ve all seen results that some, but obviously not all, of us could find shady. There have been MAJOR situations in the past that needed to be brought to light in order to rectify the mistakes made, and make sure something is changed for the future.
On the flip side, publishing scores could also show the positive side to judging. “Oh wow, this judge seems to consistently get the rankings right. They seem to really know what they are doing.” “I really thought Group B should’ve beaten Group A, but it looks like the judges were unanimous, so there were no issues.” “I’ve never heard of this judge, but I agreed with their rankings, and want to know more about them.”
It’s not about a witch hunt, it’s about making sure our competitions are run fairly and with integrity. If judges get to choose whether or not their scores are made public, that seems shady to me. Just my opinion.
I don't know if the competition should necessarily make them public as much as it is up to the directors that are receiving those scores. Having been involved with tabulations quite extensively, I can say we guard them with our lives while they are happening. I think a competition should let the results stand for themselves and let the competitors feel free to release them if they so choose.
What I firmly believe is that all of the directors should be given complete transparency of the scores. There are competitions that only give composite scores. Every group should know what score each judge gave each group and any penalties that may have been assessed.
Long story short, as long as the directors are getting the information, I don't feel that, as a spectator, I am entitled to or even should have that information.
I’ve always thought it’s crazy we don’t get to know the scores of the groups. I mean.... they’re called competitions. Marching band comps show scores. Lots of a cappella competitions show scores. Every Olympic event with judges shows each judge’s score (I think?). The coach of the sportsball team (all of them) doesn’t keep the score hidden from the players at halftime like it seems most show choir directors do. In show choir we declare who the winner is, but sharing scores or placements outside of prelims is done in hushed tones and in secret.
It just doesn’t make any sense to me. Have it be non-competitive or show the scores (shrug emoji)
Yes!!! Agree 100%. And as a spectator you pay to watch the competition, so I feel like you should be allowed to know the full results, since spectators are the competition’s main source of profit.
I do agree that scores should be known to all. I'm guessing that the reason they aren't in a lot of situations, however, is fear of retaliation not dissimilar to the discussion that is happening here. Jay said, "publishing scores could also show the positive side to judging; 'Oh wow, this judge seems to consistently get the rankings right. They seem to really know what they are doing.'" But what is "getting the rankings right?" Rankings that most people agree with? Rankings that give you a warm fuzzy feeling in your tummy? Again, this is a subjective activity. I think most of us can agree on a lot of things, but a lot of it does come down to personal taste as well.
I do think we need better training and better standards for judges, and I absolutely don't think anyone connected to a school or program should be judging an event where one of their groups is attending... whether you're retired or not. That's just my opinion. But until we start respecting decisions, I can see why people don't want to make point totals freely available. We most certainly don't have to agree with a judge, but there's a difference between not sharing an opinion and calling someone's opinion "wrong." By definition, an opinion can't be wrong.
Scores should be backed up by thorough explanation (notes, recorded comments, etc.) so there is a basis and understanding for them. Judges should be able to have their own opinions but they should also be able to clearly articulate them. That's what I meant by "outside the scope of their duties." A judge who simply puts groups in the order they want and cannot provide any context for doing so doesn't have any business judging as far as I am concerned. Just because a group that "normally wins" doesn't place where lots of people want them to, however, doesn't mean the judge is not credible. Show choir is an artistic, diverse activity with countless shades and styles. There are objective components for sure, but even those are scored somewhat arbitrarily. One person's arms weren't quite straight on that last hit, so you get an "8" in precision. What does that even mean?
We hire educated judges to make the best determinations they can about the groups they see... and hopefully they have the moral fortitude to leave the politics at the front door. The rest is in their hands. I think probably the best way to make the playing field as fair as possible is to have as many judges as possible... but obviously there are reasonable limits to that as well.
At a lot of competitions, the fifth judge rates overall effect, while there are two judges each for vocals and choreo. Waubonsie’s show, while technically excellent, might not have told a story to the judge.
On the reverse side of this, I’ve seen Waconia from their premiere through their competition in their auditorium, with spotlighting and just an incredible atmosphere, and while they have technical flaws, it was a hell of a lot of fun to watch. This overall effect might have led them to get a second place from the same judge who ranked Waubonsie 11th. I would personally prefer to watch Waconia over some choirs they have lost to this year that didn’t draw me in.
I will admit, though, that ranking Waubonsie of all groups last, and lower than a prep group, is incomprehensible to me.
I understand the point you’re trying to make, but to my understanding, neither Glenwood nor Loveland have show concepts/show themes, and they were in first and third respectively with that judge, so I’m doubtful that he was just a Show Concept judge.
You’re confusing Show Concept with Overall Show. The scoresheet was 70/50/30, with Vocals, Choreo, and Overall Show/Band respectively as categories. You are correct, however, that the judge was not judging one category specifically. Every judge has the same sheet.
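To make that 70/50/30 split concrete, here's a minimal sketch of how one judge's sheet would tally under it. The caption names follow the split above; the point values are made up for illustration, not actual Classic scores.

```python
# Minimal sketch of tallying one judge's sheet under the 70/50/30 split
# described above (Vocals 70, Choreo 50, Overall Show/Band 30).
# The raw numbers below are hypothetical, not actual scores.

CAPTION_MAX = {"vocals": 70, "choreo": 50, "overall_show_band": 30}

def judge_total(captions):
    """Sum one judge's caption scores, checking each stays within its cap."""
    for caption, score in captions.items():
        assert 0 <= score <= CAPTION_MAX[caption], f"{caption} out of range"
    return sum(captions.values())

# Hypothetical sheet from one judge, out of a possible 150 points.
sheet = {"vocals": 61, "choreo": 44, "overall_show_band": 25}
print(judge_total(sheet))  # -> 130 of 150
```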
It's always kinda sad when one judge tanks a group so they don't make finals, since most competitions don't use something like "Fehr Fair" until finals.
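I don't know the exact "Fehr Fair" mechanics, but the general appeal of rank-based systems is that one judge's influence is capped at their ordering. Here's a sketch with made-up numbers, contrasting raw point totals with a simple sum-of-ranks:

```python
# Sketch of why rank-based aggregation blunts an outlier judge. The actual
# "Fehr Fair" mechanics may differ; this just contrasts raw point totals
# with a simple sum-of-ranks, using made-up scores.

scores = {            # judge -> {group: points}
    "J1": {"A": 140, "B": 135, "C": 128},
    "J2": {"A": 138, "B": 136, "C": 130},
    "J3": {"A":  95, "B": 137, "C": 129},  # outlier: tanks group A
}

def raw_totals(scores):
    totals = {}
    for sheet in scores.values():
        for group, pts in sheet.items():
            totals[group] = totals.get(group, 0) + pts
    return totals

def rank_sums(scores):
    """Each judge contributes a rank (1 = best); lower sum wins."""
    sums = {}
    for sheet in scores.values():
        ordered = sorted(sheet, key=sheet.get, reverse=True)
        for rank, group in enumerate(ordered, start=1):
            sums[group] = sums.get(group, 0) + rank
    return sums

print(raw_totals(scores))  # A drops to 373 vs B's 408: the outlier decides it
print(rank_sums(scores))   # A: 1+1+3 = 5, B: 2+2+1 = 5: damage capped at one rank
```

Under raw points the outlier judge drags group A from first to last; under ranks, A still ties for first, because the most any one judge can do is rank them last.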
I really wish more competitions did Olympic-style judging (throw out the highest and lowest score). There might be a reason why they don’t, but I don’t know what that reason is, lol. Also, whenever there is an outlier score at an Olympic event, the judges will talk to each other about why they placed someone where they did. That might be harder to do with show choir, and show choir isn’t the Olympics, but I think it would be interesting to have a competition try it.
Olympic scoring usually throws out the same judges every time, because some judges have a tendency to score higher or lower as a whole, not just per group. I’ve lost a few times under Olympic scoring when the other group was behind in both ranking and total points. Although it would’ve saved Waubonsie today, it is a flawed system overall.
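For what it's worth, here's roughly what that trimming looks like, and why a judge who runs low across the board is the one thrown out for every group (hypothetical numbers, not anything from this weekend):

```python
# Sketch of Olympic-style trimming: per group, drop the single highest and
# lowest judge score and total the rest. Numbers are hypothetical; note the
# habitually low judge (J4) is the one discarded for every group, which is
# the objection raised above.

panel = {             # group -> scores from judges J1..J5
    "A": [140, 138, 136, 110, 139],
    "B": [135, 136, 137, 108, 134],
}

def trimmed_total(judge_scores):
    s = sorted(judge_scores)
    return sum(s[1:-1])  # drop lowest and highest

for group, js in panel.items():
    print(group, trimmed_total(js))
# J4 (the 110 and 108) is dropped both times, so trimming never actually
# moderates a judge who runs low as a whole rather than on one group.
```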
After nearly 30 years in Disney entertainment, Rich Taylor became Dean of Oklahoma University's School of Musical Theater. He's been on judging panels longer than some judges have been alive. He's seen it all since the very first competitions. It's absolutely fair to wonder why he had WV so low, but I would be equally interested in why the others had WV so high.
It should be noted that even if he had given Waubonsie the same score as Wheaton North or Broken Arrow, they still would not have made finals. >>>>> I actually suspect that his scores for Batavia and Waubonsie got switched. <<<<<
He also had Wheaton North in 8th. And I'm sorry, but anyone who was there on Saturday can attest that that right there is a head-scratcher as well. One judge had them in first, for goodness' sake.