I haven't been able to find a video of Studio 2019 yet, but there is a "drum cam" video for 2019 that is pretty entertaining. I'll link it here: https://www.youtube.com/watch?v=KhFibpBLAj0
I too have crunched the numbers (assuming these rankings were authentic) using the consensus ordinal rating system as shown in a previous post, and I too came up with Johnston Prep 6th, Decatur 7th, and AC ER 8th. Unfortunately for Decatur, they saw the 8th-place choir leapfrog them into finals.

Urbandale ran a great competition, but should not have deviated from their own rules. Using just the prep group rankings with the consensus ordinal system, it is clear that AC ER won the prep division. However, the rules state that all the choirs and their rankings will be used to determine the 6 finalists, and when the field is expanded to every choir, Johnston Prep was the clear 6th-place finisher. In my opinion, Urbandale should have lived within the rules as published, spent time explaining to AC ER why in this rare instance Johnston Prep leapfrogged them into finals, and lived with only 6 finalists instead of adding a 7th. It might seem a little weird that the winner of the prep division doesn't rank as high as the second-place prep finisher when calculated against all participating choirs, but so be it; those were the published rules. Making an exception to allow AC ER into the finals, but not Decatur, created a bit of drama that would have been avoided by simply living with the rules as published. If you want to add a new rule to cover this rare and unique case, it should be adopted after this competition, not applied mid-event.

Congrats to Urbandale for being gracious and warm hosts. It was fantastic watching these great choirs in such a fine auditorium. After running the numbers and examining the results, I am now convinced that the consensus ordinal ranking system is probably the fairest system for judging show choir competitions, even if it occasionally generates a surprise or two.
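For anyone who wants to check the math, here is roughly the procedure I followed, sketched in Python. The most-first-place-votes step and the sum-of-ranks tie-break reflect my reading of the published rules and the explanations later in this thread, so treat this as an illustration rather than the official tabulation:

```python
# Sketch of the consensus ordinal ("Fair Fehr"/majority rule) process as I
# understand it -- NOT the official implementation. Each judge's raw scores
# become an ordinal ranking (1 = best); places are then awarded one at a
# time to whichever remaining choir holds the most first-place ordinals,
# and that choir comes off the board so everyone below moves up.

def judge_ordinals(raw_scores):
    """Convert one judge's raw scores {choir: points} to ordinals (1 = best)."""
    order = sorted(raw_scores, key=raw_scores.get, reverse=True)
    return {choir: place for place, choir in enumerate(order, start=1)}

def consensus_ranking(ballots):
    """ballots: one {choir: ordinal} dict per judge."""
    remaining = set(ballots[0])
    final_order = []
    while remaining:
        # Count how many judges rank each remaining choir first.
        firsts = {c: 0 for c in remaining}
        for ballot in ballots:
            firsts[min(remaining, key=lambda c: ballot[c])] += 1
        # Most first-place votes takes the next place; ties fall to the
        # lowest sum of ranks (the first published tie-breaker).
        winner = max(remaining,
                     key=lambda c: (firsts[c], -sum(b[c] for b in ballots)))
        final_order.append(winner)
        remaining.remove(winner)
    return final_order

# Toy data (made up) for three judges and three choirs:
sheets = [{"A": 95, "B": 90, "C": 85},
          {"A": 88, "B": 92, "C": 80},
          {"A": 91, "B": 89, "C": 93}]
print(consensus_ranking([judge_ordinals(s) for s in sheets]))  # ['A', 'B', 'C']
```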
This was truly an interesting exercise after a wonderfully talented competition. My opinion on scoring is this: we can come up with worst-case scenarios no matter what the system. The flaw I see in the consensus system is that once 3 judges pick a choir 1st, the opinions of the other two judges are tossed out. Since each judge rates 20% of a show's content, that would mean 40% of a show doesn't matter. IMO, in that situation the raw scores are the accurate reflection of a program and what it should be rated on.
On the other hand, if the 5 judges worked from the same scorecard, the consensus system works well, as it's an apples-to-apples comparison.
The consensus system was created to take away the ability of one judge to single-handedly change the outcome of the competition, and this particular competition is a great example of that. After the daytime performances, 4 judges had LM ranked higher than JO on their individual raw score sheets, as indicated by the ordinal rankings, and two judges' raw scores showed JO 4th in visuals to LM's 1st. How in the world, then, if 4 out of 5 judges thought LM did better, did the raw scores have JO 5 points higher than LM? Common sense would tell you that LM should have had higher raw scores by a healthy margin. Obviously one judge must have scored JO very high and LM very low compared to the other 4 judges. So what you said above is exactly right: with 5 judges, each should only get a 20% vote in the matter. Yet clearly, from the day's raw score tally, one judge had JO so high compared to LM that, in raw score, this judge single-handedly negated the other 4 judges' opinions. If raw score determined the winner, one of the judges became 100% of the vote, not 20%. For show choir to thrive, we must have faith that competition organizers try very hard to get a cross section of judges who, if each given an equal vote, will correctly rank the participants.
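To make that concrete, here is a toy example with completely made-up raw scores (not the actual sheets) showing how one judge's point spread can hand a choir a 5-point raw lead even when four of five judges preferred the other group:

```python
# Made-up numbers: four judges score LM a point higher, one outlier judge
# scores JO nine points higher, and the raw total flips to JO by 5 even
# though LM "wins" the ordinal comparison 4-1.

raw = {"LM": [92, 91, 90, 93, 70],   # hypothetical scores from judges 1-5
       "JO": [91, 90, 89, 92, 79]}

for choir, scores in raw.items():
    print(choir, "raw total:", sum(scores))   # LM: 436, JO: 441

lm_wins = sum(lm > jo for lm, jo in zip(raw["LM"], raw["JO"]))
print("judges preferring LM:", lm_wins, "of 5")  # 4 of 5
```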
My point being that each judge actually judges a different aspect of the competition (per the judge assignments noted below: 3 vocal (effects, execution, and...?) and 2 visual). And if no two judges judge the same thing, then you discount aspects of the show. Worst-case scenario: I sweep the vocals by a small margin over you, while my visuals are average at best and yours are first. The consensus system just discounted visuals when determining 1st and 2nd. On Saturday, when determining 6th, one school had 2 of the 5 votes over 3 schools that had one each.
I’m just throwing this out there to debate the merits and flaws of the systems involved.
Hate to burst your bubble, but that's not what happened. It was not only one judge who skewed the raw scores so that Johnston's were higher: at least 2 judges wanted Johnston in 1st place and 3 judges wanted LM. The judging consensus system may work more efficiently and equitably, but it skews the results as well. Johnston was outstanding with their traditional show, and since the two were so close in raw scores, it really all came down to each judge's preference.
My last and final thought on this competition and how everyone got ranked: if you are going to use the same 5 judges all day and they rank all 20 choirs (in this instance), then maybe you generate a single 1-20 ranking and pull both the winners of the separate categories (open, mixed prep, single gender, etc.) and the finalists from that same list. In other words, don't rank the prep groups separately. Since the same five judges see all the choirs, there is no need to create separate rankings for each division, and you avoid the rare result described in my previous post.
First, I was at Urbandale this past weekend, and as many others have expressed, the level of talent displayed by many, many of the groups was truly amazing. Kudos to all you high school students who work so hard at this activity!
Second, I applaud Urbandale for their transparency in the scoring. Sometimes transparency can create tension, which isn't necessarily a bad thing...it can just expose oddities that can arise out of mechanical calculations, thus leading to difficult explanations.
As explained in another post, the reason for 7 finalists was the application of ordinal ranking consensus separately for overall and prep. I re-worked the rankings, and could indeed see how Johnston Synergy finished 2nd in prep, and yet placed ahead of AC-ER overall.
However, there is slightly more to the story. In the case of scoring for this event, once you remove all groups (and their rankings) through Johnston Synergy (6 groups total), the rankings for Decatur were 2,1,1,2,2 and the rankings for AC-ER were 3,3,2,1,1. With each group having two "1" rankings, my understanding is that the first tie-breaker is the sum of ranks (lowest sum wins). The sum of ranks for Decatur at this point was 8, and for AC-ER it was 10. I even checked the sum of ranks along the way as each school "came off the board", and while the difference fluctuated between 2 and 4, Decatur was always less than AC-ER at each step. Thus, technically, Decatur finished 7th overall and AC-ER finished 8th (see caveat at end of message).
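For anyone following along at home, the tie-break check itself is only a couple of lines of Python (using the rankings quoted above):

```python
# Ordinals from the five judges once the top six groups are removed:
ranks = {"Decatur": [2, 1, 1, 2, 2],
         "AC-ER":   [3, 3, 2, 1, 1]}

for choir, r in ranks.items():
    print(choir, "firsts:", r.count(1), "sum of ranks:", sum(r))
# Decatur: firsts 2, sum of ranks 8
# AC-ER:   firsts 2, sum of ranks 10  -> lowest sum wins, so Decatur is 7th
```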
At this point, Urbandale was faced with a terrible dilemma: either stick with 6 finalists and watch the second-place prep team leapfrog the first-place prep team into finals, or expand finals to seven teams to "fix" the problem, accepting that you then leapfrog the 8th-place finisher overall (AC-ER) past the 7th-place finisher (Decatur). A classic case of "darned if you do, darned if you don't".
Caveat: I have never worked with scoring before. I am merely a curious person who loves numbers and logic, so I used the descriptions of ordinal ranking consensus and tie-breakers as described within this post, and created an Excel model to see how this all worked. If I am missing some nuance here, my results would obviously be flawed.
Well, it's a funny story. Scores got converted from raw points to ordinal rankings, then converted again to a consensus ranking. That is what caused both ER and Synergy to be bumped up to finals, because the final consensus rankings had Synergy over ER even though Synergy got second in their respective division. It is also why we missed finals; although both the raw scores and the rankings had us 6th, the overall consensus judging put us below both groups.
Saturday Divisions
Placings in each division are determined by ordinal ranking consensus converted from raw scores with three judges scoring on the Vocal Emphasis Sheet and two judges scoring on the Visual Emphasis Sheet. The Band scores will be used to determine the Best Band caption and will not affect the outcome.
Saturday Evening Finals
Placings in Finals are determined by ordinal ranking consensus converted from raw scores on the Finals Adjudication Sheet.
Tiebreakers
Ordinal ranking ties (while exceptionally rare) are broken as follows:
The choir with the lowest Sum of Ranks score wins
If still tied, highest Total score wins
If still tied, highest “Vocal” category score wins
If still tied, highest “Visual” category score wins
If still tied, highest “Show” category score wins
If still tied, highest “Instrumental” category score wins
If tie is not yet broken, the groups remain tied and trophies are awarded
Please note the Ordinal Ranking Consensus/Fair Fehr/Majority Rule system will be used. A detailed explanation will be provided upon request.
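For readers who think in code, that tie-breaker cascade amounts to a lexicographic sort. Here is a sketch; the dictionary keys are my own invention, since the rules don't specify a data format:

```python
# Order a set of tied choirs by the published cascade. Negation turns the
# "highest wins" criteria into an ascending sort key.
def tiebreak_key(choir):
    return (choir["sum_of_ranks"],   # 1. lowest sum of ranks wins
            -choir["total"],         # 2. then highest total score
            -choir["vocal"],         # 3. then highest "Vocal" score
            -choir["visual"],        # 4. then highest "Visual" score
            -choir["show"],          # 5. then highest "Show" score
            -choir["instrumental"])  # 6. then highest "Instrumental" score

def order_tied(tied_choirs):
    """Choirs still equal on every key remain tied (and trophies are awarded)."""
    return sorted(tied_choirs, key=tiebreak_key)
```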
@itsbreyer Everyone in Studio was absolutely moved by your show. We were rooting for you and you brought a show like nothing we’ve ever seen around suburban Iowa. So much respect for your program and holy crap you guys can dance! We all only wish we could’ve seen you one last time before you hit the road. - Much love, Studio 2019
Thank you so much! It means a lot knowing that people appreciate what message we’re trying to share with them. I can also say with confidence that everyone here LOVED your show tonight. It was genuinely so pure and fun (I screamed when I saw the buffet trays), and y’all just absolutely rocked it! You guys are phenomenal and we wish you the best of luck for the rest of your season <3
Which I still don't quite understand, as Decatur was ranked ahead of the next two groups head to head by 3 of the 5 judges. The only thing I could think of was that Waukee Nova was favored over all 3 groups on one of the cards...
Why can't people just accept that a certain group didn't make finals because they didn't deserve to? If they were in fact better than the groups that made the final rounds, they would have made it. End of discussion.
I actually did read the relevant comments that explained multiple times how the scoring worked and that raw scores didn't matter. Be a good sport and don't take it as such a shock that other groups are better than yours. If your group had this mindset, you could have made finals.
WM,
All the groups Saturday did a wonderful job. However, there are a lot of people who still aren't quite sure how one group missed finals. I tried applying the rules myself and my results don't match. Now, I am 100% sure I am missing something (probably obvious), but can you see the confusion? Plus, 3 of 5 judges had them over group 1, and versus group 2, 3 of 5 had them better.
So, I worked with Ordinal Consensus yesterday at MoShow. The truth is that the system is not comparing each group head to head; it is looking for who has the most 1st-place votes.
After you have chosen the top 5 groups, which are non-controversial, the group with the most first-place votes is Johnston Synergy, with, I believe, 2 first-place votes. Decatur, ER, and Nova would each have one. That is why Synergy gets 6th when you compare all of the groups.
Personally, I think this is a flaw of the system. For the record, a similar thing happened at MoShow this weekend.
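To spell that out with the vote counts above (a toy illustration, not the official tabulation):

```python
# First-place votes among the remaining groups, per my count above:
firsts = {"Johnston Synergy": 2, "Decatur": 1, "AC-ER": 1, "Waukee Nova": 1}
print(max(firsts, key=firsts.get))  # Johnston Synergy -> takes 6th
# Even if 3 of 5 judges had Decatur ahead of Synergy head to head, those
# judges' first-place votes went to other groups, so that head-to-head
# margin never enters the calculation.
```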
Ok, that explains it. I was thinking that you needed to be ahead on 3 of the ballots, so by the time I got to 5th and 6th place it didn't add up. Thanks for clearing that up.
Nobody from Decatur has anything to apologize for. It's completely normal and natural to be disappointed if you didn't achieve the result you hoped you would. Anybody who has participated in show choir for any length of time has had an outcome that was confusing, upsetting, or (in the opinion of your choir and its members) downright wrong. It's happened to my groups in the past and I'm sure it will happen again in the future.
These forums were created by our friend Haakon specifically for these types of discussions and as long as people keep the discussion respectful and constructive then there's no problem.
So far I've only seen one person who said anything rude and who may owe an apology to the members of Elite Energy, not the other way around. That is whoever is behind the "wmnewparent" account that was created today specifically to make those comments and hide behind the anonymity of the internet without listing their associated choir. Telling people to get better if they don't like the result? As if all of the participating choirs don't work hard with the goal of putting the best product they can on the stage? You, sir or madam, need to grow up and learn to be a kinder human who stops and considers the feelings of others. And if you happen to be a member or supporter of one of the choirs that did qualify for finals instead of Decatur, you need to learn to be a more humble winner.
I for one have no problem debating the merits of one scoring system over another. Obviously we here at Urbandale feel the consensus ranking system is the fairest way to decide the outcome of our event, or we wouldn't employ it. As was stated before, raw score isn't, and never was intended to be, the method used to determine the result. It exists only as a tie-breaking mechanism; otherwise we wouldn't even provide that data point.