Monday, January 27, 2014

Dissecting Raw Ratings by Person

We'll take a break from the Meltzer Star Analysis (parts 1/2/3) for a quick diversion into the world of Raw ratings. There's been a lot of talk about whether WWE has a defensible reason for not strapping a rocket to Daniel Bryan in light of the outspoken crowd reactions at the Royal Rumble. I decided to resurrect a project I had started months ago, namely dissecting the Raw quarter-hour viewership gains & losses by person.

Methodology:
  1. Take all of the detailed Raw quarter-hour reports from Dave Meltzer’s Wrestling Observer newsletter (subscription; available online at wrestlingobserver.com) 
  2. For each “segment”, tally which people were involved. (More difficult than it sounds.) 
  3. By person, take the “average” of all their segments (noting whether we’re including the notoriously hot overrun segment) 

This data covers the following (though not every week has every quarter-hour; sometimes only the overrun is available):
2011: 52 weeks
2012: 53 weeks
2013: 42 weeks (weekly ratings only through 10/7)

This is clearly an imperfect science. It covers weeks with strong television competition and weeks with little competition. It covers time periods where a wrestler may be pushed as a main-event title contender one month and as a comedy tag-team goofball the next. Why people do and do not tune in is not driven simply by who is on the screen. However, the hope is that by looking at the information over long periods of time we can draw some conclusions about the trends we're seeing. Lastly, the data gets spotty after early October. This isn't some fiendish plot by me to thwart people from evaluating Cena/Orton/Punk/Bryan in the post-Battleground world; Dave just hasn't been providing the weekly Raw segment data in the newsletter. That's all.

“The rule of thumb is not to overreact to one week’s rating” – Dave Meltzer
I decided to run the data essentially four different ways: with and without the overrun segment (the final period where Raw spills past the 11 PM EST hour, when viewers for the next USA program tune in along with fans who have been programmed to check out how Raw closes) and using either show averages or show maximums. Since I was trying to tally all the people involved in a segment, a wrestler can often appear more than once in a show (interview, match, video package, etc.). To give the benefit of the doubt, I ran the numbers using both the "average viewership change" by wrestler in that show as well as the "maximum viewership change" by wrestler in that show.
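For concreteness, here's a minimal sketch of how that per-wrestler tally could work (Python; the segment records, names and numbers below are purely hypothetical, and this is my own illustration rather than the exact process I used):

```python
from collections import defaultdict

# Hypothetical segment records: (episode_date, is_overrun, viewership_change, people_involved)
segments = [
    ("2013-01-07", False, 150_000, ["John Cena", "Dolph Ziggler"]),
    ("2013-01-07", True, 650_000, ["John Cena", "CM Punk"]),
    ("2013-01-14", False, -90_000, ["Dolph Ziggler"]),
]

def per_show_changes(segments, include_overrun=True):
    """Collect each wrestler's segment-level viewership changes, grouped by episode."""
    by_wrestler = defaultdict(lambda: defaultdict(list))
    for date, is_overrun, change, people in segments:
        if is_overrun and not include_overrun:
            continue
        for person in people:
            by_wrestler[person][date].append(change)
    return by_wrestler

def summarize(by_wrestler, use_max=False, min_shows=1):
    """Average each wrestler's per-show figure (their average or maximum change that night)."""
    results = {}
    for person, shows in by_wrestler.items():
        if len(shows) < min_shows:  # the article used a 5-show minimum
            continue
        per_show = [max(v) if use_max else sum(v) / len(v) for v in shows.values()]
        results[person] = sum(per_show) / len(per_show)
    return results

print(summarize(per_show_changes(segments), use_max=False))
```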

You did have to appear in a minimum of 5 shows to be included in the calculation. This was to prevent any freak circumstances from being overly influential and to leave off guest hosts (though I think we can all agree that Wayne Brady is the key to a WWE renaissance).

2011 – 2013 Viewership Gains/Losses for Raw

Biggest Viewership Gainers 2011

1 Jim Ross 344,893
2 John Cena 331,171
3 HHH 246,231
4 The Miz 241,065
5 CM Punk 165,536
6 Michael Cole 164,008
7 John Laurinaitis 104,608
8 R-Truth 101,238
9 Christian 95,217
10 Alex Riley 93,311
11 The Big Show 86,191
12 Jerry Lawler 83,522
13 Rey Mysterio 23,013

Biggest Viewership Losers 2011

1 Mike McGillicutty (336,938)
2 Zack Ryder (323,981)
3 Santino Marella (308,125)
4 David Otunga (245,324)
5 Kofi Kingston (233,407)
6 Evan Bourne (227,944)
7 Beth Phoenix (222,444)
8 Mason Ryan (218,875)
9 John Morrison (209,629)
10 Nikki Bella (196,341)
11 Kelly Kelly (194,762)
12 Cody Rhodes (191,909)
13 Natalya (184,154)
14 Jack Swagger (169,717)
15 Sheamus (150,990)
16 Dolph Ziggler (136,509)
17 Eve Torres (108,318)
18 Brie Bella (97,567)
19 Vickie Guerrero (76,179)
20 Daniel Bryan (69,000)

Biggest Viewership Gainers 2012
1 Undertaker 572,657
2 HHH 380,624
3 Vince McMahon 342,781
4 The Rock 310,149
5 Shawn Michaels 303,315
6 Brock Lesnar 290,827
7 Vickie Guerrero 223,403
8 John Cena 218,700
9 Paul Heyman 192,610
10 CM Punk 179,546
11 John Laurinaitis 179,165
12 The Big Show 153,624
13 Chris Jericho 124,365
14 AJ Lee 121,523
15 Kane 68,418
16 The Great Khali 58,000
17 Mark Henry 56,738
18 Wade Barrett 50,308
19 Randy Orton 36,935
20 Ryback 30,146

Biggest Viewership Losers 2012
1 Kelly Kelly (251,571)
2 Jinder Mahal (245,750)
3 Santino (216,836)
4 Cesaro (202,467)
5 Primo (191,933)
6 Kofi Kingston (189,067)
7 Epico (187,592)
8 Christian (186,100)
9 Zack Ryder (156,023)
10 Layla (153,273)
11 Kaitlyn (150,700)
12 Titus O’Neil (138,000)
13 Damien Sandow (134,454)
14 Darren Young (131,929)
15 Rey Mysterio (131,580)
16 Cody Rhodes (121,731)
17 Tyson Kidd (121,571)
18 Justin Gabriel (121,167)
19 Jack Swagger (105,278)
20 The Miz (95,891)

Biggest Viewership Gainers Jan-Oct 2013
1 The Rock 335,698
2 Brock Lesnar 330,068
3 John Cena 328,401
4 Paul Heyman 317,531
5 HHH 259,420
6 CM Punk 246,534
7 Stephanie McMahon 242,841
8 Vickie Guerrero 234,359
9 Vince McMahon 203,475
10 Brad Maddox 168,819
11 Seth Rollins 167,388
12 Roman Reigns 166,292
13 The Big Show 161,143
14 Daniel Bryan 155,341
15 Dean Ambrose 135,652
16 Ryback 91,294
17 Curtis Axel 59,699
18 Kane 48,198
19 Sheamus 40,064
20 Randy Orton 36,306

Biggest Viewership Losers Jan-Oct 2013
1 Cameron (496,714)
2 Layla (486,500)
3 Aksana (397,000)
4 Naomi (389,417)
5 R-Truth (322,143)
6 Brie Bella (322,133)
7 Nikki Bella (311,375)
8 Natalya (299,300)
9 Alicia Fox (234,875)
10 Christian (208,000)
11 The Uso’s (205,417)
12 Kofi Kingston (202,158)
13 Great Khali (201,200)
14 Damien Sandow (184,140)
15 Fandango (179,640)
16 Santino Marella (179,438)
17 Bray Wyatt (171,225)
18 Zack Ryder (153,643)
19 AJ Lee (152,838)
20 Erick Rowan (142,250)

The viewership number calculated here is an amalgamation of the four numbers I previously mentioned. It's the average of the with- and without-overrun figures, split between the average method (75% weighting) and the maximum method (25% weighting).
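Read literally, the blend works roughly like the sketch below (Python; the inputs are invented and the formula is my reconstruction of my own description, not a published calculation):

```python
def blended_score(avg_with, avg_without, max_with, max_without):
    """Average the with/without-overrun figures for each method, then weight the
    'average' method at 75% and the 'maximum' method at 25% (one reading of the blend)."""
    avg_component = (avg_with + avg_without) / 2
    max_component = (max_with + max_without) / 2
    return 0.75 * avg_component + 0.25 * max_component

# e.g. a wrestler at +120k/+80k under the average method and +400k/+250k under the maximum method
print(blended_score(120_000, 80_000, 400_000, 250_000))  # -> 156250.0
```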
Again, this is quite imperfect, but I must say the results do seem to align with general WWE-think. That is to say, when you look at who they push on television and which segments they put those people in, there is an intent to promote certain people. While I find the results quite interesting, I do want to emphasize several points:
  • This looks at quarter-hour viewership changes. That's how many people tuned in or tuned out during the fifteen minutes measured. There's a host of reasons that viewers tune in and tune out through a show. Some of it has to do with specific time periods (top of the hour, the end-of-show overrun). Some of it has to do with television competition – specifically major sports events like football games. Some of it has to do with who is on the screen. Some of it has to do with who was on the screen (i.e. big drop-offs following major viewership gains). Some of it just appears to be related to the unexplained, fickle variations that you get from Nielsen household reporting. Also, people in the first segment can be short-changed. Essentially, there isn't a "delta" to compare them against, so usually the participants for that entire segment don't get credited with anything even though they were on Raw. (In fact, we know that the night after a PPV usually experiences a major first-hour boost as people tune in to see what transpired the night before.) A possible improvement would be to add a secondary variable looking at hourly Raw viewership so we could account for the people that appeared throughout all four quarters (and smooth it out a bit). 
  • This pretends everyone in each “segment” was equally responsible for driving the viewership change. If JTG is destroyed by Brock Lesnar and a half million people tune in, both JTG and Brock would get a +500,000 for that segment. Clearly, there’s room for improvement because a thoughtful analysis would consider what acts appear to be driving the quarter-hour rating and what acts just happen to have a little cameo during that time. My workaround was to try and focus on wrestlers that appeared on several shows (not just several segments, but many different episodes of Raw) as well as to look across large swaths of time for the average. 
  • This (mostly) ignores normal Raw ratings patterns. There are quarter-hours when Raw viewership normally picks up and there are quarter-hours where Raw viewership normally drops. After more than two decades, WWE has trained and re-trained their fans about when the important stuff normally happens. Interestingly, the dawn of the weekly 3-hour Raw has generated another set of viewership habits where Raw often loses viewers from the start of the show to the end of the show. WWE is hardly ignorant of the trends, and therefore it's not surprising that they often program similar material and similar people (at least on a status basis) in the same slots week-over-week. In some ways this can become a self-fulfilling prophecy – treat someone like a goof in a blow-off timeslot and the audience will view them that way for a long time. That doesn't "prove" they can't draw; it just shows that the company doesn't think enough of them to protect them. However, without a fully functioning model of who draws and repulses viewers, all we have is our scattered data points. The caveat to this was that I did throw in some safeguards around the "overrun" segment (the "big angle" before WWE goes off the air each week). The overrun segment can see a million people tune in. Now, it says something about your placement in the company when you're in the overrun segment, but on the other hand, it's going to greatly boost your numbers the more often you're slotted in there, and that doesn't necessarily imply you're the driving force behind why those viewership numbers explode. That's why I find it necessary to look at people's numbers with and without overrun included. 
A lot of people have asked to see the data since Daniel Bryan’s start-stop push in August hoping to prove/disprove that he is/is not a draw/failure. I’ve looked at the numbers and honestly feel that our dataset is too constrained to really pull meaningful conclusions.




In this comparison, Bryan has gone from losing viewers (-101,792) in the first quarter, to gaining viewers (+186,331) in the second quarter, to strong gains (+491,967) in the third quarter and beyond. Sounds compelling, right? But consider Big Show: from September to November, he was averaging a viewership gain of +647,167. However, there hasn't been a groundswell of Paul Wight supporters trying to prove he was cheated out of a championship run. That's because simply using skewed viewership numbers without context is just yelling into a vacuum. You can prove or disprove whatever you want based on whatever narrative you have. If you don't control for who is in the overrun segment (which is key because it's such a disproportionate viewership swell), you are just going to prove that whoever was in the big angle that week is the big draw. It's a self-fulfilling prophecy. (And if people are wondering why authority figures draw well – that's because when they're on television, important people like the champion are usually also on TV.)

The reality is that WWE carefully crafts who they put in each segment. Consider how they doubled the length of the ADR/Kofi match this past week so it would fill a quarter hour originally set aside for a CM Punk interview; they didn't want to throw off the rest of the schedule. Women's wrestling doesn't usually draw big numbers. Even the days of Sable driving viewership through the roof are gone. However, WWE routinely books the women in the same slot and they routinely lose viewers. Hell, even great wrestling matches like the Shield versus Dolph Ziggler/Usos from the 10/14/13 Raw lost over a million viewers because of football competition. In 2011, Daniel Bryan wrestled Sheamus on the 3/14 Raw and lost 1.1 million viewers. There are people like Rock & Brock who pop ratings, and John Cena has been a proven ratings mover. Beyond that, people tune in at the beginning of Raw after a major PPV to see what happened, and sometimes they tune in for big returns (like Big Dave Batista a few weeks ago).

- Chris Harrington (@mookieghana)

Saturday, January 25, 2014

What I learned from Meltzer's 280+ WWF PPVs Ratings (Part Three)

We continue...

1/22/2014 Part One: We ask some questions; we take a first look at annual average ratings.
1/23/2014 Part Two: We test some options to little avail; we get distracted; we learn little.

Our goal is to establish a "fair" method of weighting individual matches so that we can aggregate our information (in this case, WWF PPV Star Ratings from Dave Meltzer).  To begin, we've been comparing annual PPV averages.  We've looked at weighting all matches the same (a/b), weighting matches that occurred closer to the end of the card (i.e. the "main event") more heavily (c/d), and weighting matches according to their length in minutes (e/f).  In each case, we still ended up with the same subset of years on top as "best years" (chronologically 2001, 2008, 2009, 2011, 2013) and "worst years" (chronologically 1988, 1989, 1990, 1992, 1999).

This isn't to say there's no variation, especially in the middle - for instance, 1986 could be as high as 10th place (option D - positive ratings only, card placement: 2.55 avg score) or as low as 27th place (option E - all ratings, match length weighted: 1.66 avg score).  Yet overall, we're not getting drastically different results.

Should we continue? Yes.  I am not satisfied with these previous methods.  Every wrestling match is not identical. Matches are given different amounts of time to work with, and there are constraints on the ability of the wrestlers.  The place in the storyline arc these matches occupy will vary (i.e. beginning/middle/end of a feud).  Both match length (in minutes) and match placement (on card) are artificial proxies for estimating how "important" a match is. Furthermore, there can be a separation between which matches "sold the PPV" and which matches are the most "important" -- for example, how the title match can be overshadowed by the Royal Rumble.  Simply weighting toward the longest matches can be vexing -- is a 30-minute match really FIVE times "more important" than a 6-minute frenzy?  (Perhaps we could adjust for time logarithmically, so that the difference in my example would be more like 211% instead of 500%.)
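Here's a rough sketch of what a logarithmic time adjustment could look like (Python; the base and offset are my own assumptions, so the ratio it prints won't exactly match the 211% figure above, which depends on the transform chosen):

```python
import math

def time_weight(minutes, log_base=math.e, offset=1.0):
    """A generic logarithmic time weight. Linear weighting makes a 30-minute match
    5x a 6-minute match; a log transform compresses that gap."""
    return math.log(offset + minutes, log_base)

linear_ratio = 30 / 6                         # 5.0, i.e. 500%
log_ratio = time_weight(30) / time_weight(6)  # ~1.76 with these particular defaults
print(f"linear: {linear_ratio:.0%}, logarithmic: {log_ratio:.0%}")
```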

On the subject of Card placement, let's look at some statistics:

As you can see, when PPVs became a monthly affair, the average PPV card has been hovering between 7 and 8 matches (often including the pre-show); the 1996-2013 average was 7.55.  What's interesting is how steady that number has been for the last fifteen years.

Let's discuss an "average" card. Since 1996, the average hovers around 95 minutes, but with some significant variation.  (The standard deviation is 5:24, which suggests that with a normal distribution we could be looking at roughly +/- 11 minutes around the mean: annual PPV averages should fall between 84 minutes and 106 minutes, which they do.)


 As you'll see, it's not quite a normal curve.  But it's not terrible either.

However, it's terribly hard to simply break out how the 95-minute card would split among 8 matches:
  • Match #1: Match E: 11:53
  • Match #2: Match A: 4:20
  • Match #3: Match B: 6:11
  • Match #4: Match C: 7:48
  • Match #5: Match D: 9:47
  • Match #6: Match F: 14:17
  • Match #7: Match G: 17:54
  • Match #8: Match H: 22:51
This is the average time per match (A is shortest; H is longest).  In fact, if you just try to average the time per match, you end up with lots of 9-12 minute undercard matches - but that doesn't match up to reality.  (The standard deviation in my "hypothetical card" for matches #1 through #7 was 1.31 minutes while the "real" standard deviation in terms of time was 5.61 minutes.  That told me that I needed more "wild swings".) The final match (#8) was the longest in about half of the instances (and second longest in another quarter of the sample).

We can see how, hypothetically, card position (#1-#8) relates to match length (A-H), though it hardly maps as a 1:1 function. What's far more compelling is what happens when we look at average time versus average star rating:



This is really a surprising result.  The black box spans from the 25th percentile (quartile 1) to the 75th percentile (quartile 3), with the bottom line marking the median and the top line the average.  Essentially, positive star ratings appear to have a linear relationship with time -- on average, the longer the match, the higher the star rating.

Let's flip the relationship - given a match length, look at the average star rating...
Importantly, the linear relationship between time and average star rating peaks around 21:30.  (The chart was created by breaking matches into quarter-minute segments and plotting the average star rating of the matches in each time slice.)


Stars vs Time (average)

Keep in mind these are the average times.  If you look at the earlier chart with the quartiles, you'll notice that a 10-minute match could easily land anywhere from * (one star) to **3/4 (2.75 stars); and that's just the "average results" - covering the 25th percentile to the 75th percentile.

But, it provides us an interesting comparison point: every 2 minutes 45 seconds, the match moves up by about half a star until around the twenty minute mark.  But after twenty minutes, other factors seem to come into play which drive the star rating variance; the linear relationship really crumbles.
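If you want that rule of thumb as a formula, here's a rough sketch (Python; the 6-minute baseline and the hard 20-minute cap are my own simplifying assumptions, since the text above only gives the slope and the plateau):

```python
def expected_star_gain(minutes, baseline_minutes=6.0):
    """Rule of thumb: roughly +0.5 stars per extra 2:45 of match time,
    flattening out past ~20 minutes."""
    capped = min(minutes, 20.0)
    base = min(baseline_minutes, 20.0)
    return 0.5 * (capped - base) / 2.75

print(round(expected_star_gain(20.0), 2))   # ~2.55 stars of expected gain over a 6-minute match
print(round(expected_star_gain(30.0), 2))   # same as 20 minutes: the relationship flattens out
```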

We're seeing that card placement is related to match length, and match length is related to star rating.  We want to introduce a new variable. Let's call it "importance".


How can we quantify how "important" the matches are?

I decided we could start with a variation of applying the OCELOT (Overly Complicated ELO Theorem) wrestler algorithm:
Named for a physics professor, the Elo Rating System was created as a system for rating chess players.  Each player is assigned a numerical ranking where higher rankings correspond to better performing players. When two people of unequal ability compete several times, the better performing player is "expected" to win a certain number of the battles. The expected outcome can be calculated as a formula based on the Elo rankings of each competitor.  Whenever a competitor performs better than their expected result, their score increases  while their competitor loses points.
While the expected outcome of each competition is calculated by the relative scores of each competitor, the magnitude of the points transfer is dictated by a concept known as the k-value. In Chess, the k-factor is typically a uniform number, with the possibility that players with limited history may have a higher k-factor in order to reach their "true" rating quicker. In my pro-wrestling model, the k-value was essentially the "importance" of each competition.  Therefore, losing a world championship match was rated as far more important than simply losing the preliminary match at an untelevised event.
What I did was calculate the "average ELO" rating for each PPV match based on the simplified ELO system where Title Changes and TV tapings were assigned higher k-values.  (I didn't include card placement because that variable is being evaluated separately.)  People's scores go up and down depending on who they beat in matches. The idea was that the important matches would involve the active wrestlers with the highest average ELO scores.  (Note on a limitation: this model did not add any kind of time lapsing function or alternative Federation scoring, so when Roddy Piper returned to WWE in the 2000s, he carried over his high ELO score from the 80s.)
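For readers who want the mechanics, here's a minimal Elo update of the kind described above (Python; the k-values and ratings are illustrative placeholders, not the actual OCELOT constants):

```python
def elo_expected(rating_a, rating_b):
    """Standard Elo expected score for competitor A against competitor B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, a_won, k):
    """One bout's rating transfer; in an OCELOT-style model, k encodes match 'importance'."""
    expected_a = elo_expected(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

K_HOUSE_SHOW, K_TV_TAPING, K_TITLE_CHANGE = 16, 32, 64   # hypothetical importance tiers
champ, challenger = 1650, 1480
champ, challenger = elo_update(champ, challenger, a_won=False, k=K_TITLE_CHANGE)
print(round(champ), round(challenger))   # the upset title change moves both ratings sharply
```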

Here's the highest-rated ELO match for each PPV:








As you can see, with the lack of time-aging, a wrestler like Hogan (or experienced hands like Undertaker or Bret Hart) is going to do quite well.


The good news is that we have a new approach we can use for weighting the events on a PPV. And we move ahead...

Thursday, January 23, 2014

What I learned from Meltzer's 280+ WWF PPVs Ratings (Part Two)

What I learned from Meltzer's 280+ WWF PPVs Ratings (Part Two)
by Chris Harrington (@mookieghana)

Continuing from yesterday's workstream....

Again, our datasource (WWF/WWE PPV 1985-2013 Star Ratings).

We're still trying to tackle the quandary of how one should distill a myriad of individual match ratings into a single score for each event and/or each year.

Let's review some options:

EQUAL
(a) Weight every single rated match equally
(b) Weight every single rated match that received a rating above DUD equally

CARD PLACEMENT
(c) Weight every single rated match according to their placement on the card
(d) Weight every single rated match that received a rating above DUD according to their placement on the card.

LENGTH OF MATCH
(e) Weight every single rated match according to their length in minutes
(f) Weight every single rated match that received a rating above DUD according to their length in minutes

IMPORTANCE
(g) Weight every single rated match according to importance of the competitors and titles involved
(h) Weight every single rated match that received a rating above DUD according to importance of the competitors and titles involved

COMPLEX COMBINATION
(i) Weight every single rated match according to several factors
(j) Weight every single rated match that received a rating above DUD according to several factors

DISCUSSION
[If you've followed my work previously, you know we're going to settle on the very last choice; still, it may be comforting to know that options a-h don't really provide vastly different pictures.]

In yesterday's post, we explored options a&b (equal weighting, or as I called it, "unweighted all ratings" and "unweighted positive ratings").

OPTION A / OPTION B: EQUAL WEIGHTING

As noted before, 1985 and 1986 improve from the bottom third to the middle third when you eliminate those pesky "negative" star ratings such as Dave's infamous disdain for the Hogan/Andre main event at Wrestlemania III which he awarded [bonus points for reading in voice of Bryan Alvarez shouting] MINUS FOUR STARS.

SIDEBAR ON NEGATIVE STAR RATINGS
While Hogan and Andre have both received negative star scores on multiple occasions, the real grand champions of negative scores are the Brothers of Destruction - Undertaker and Kane. They've done it all.  They earned negative scores for wrestling big stiffs (Mabel, Giant Gonzalez, Big Show, Khali, Kamala), wrestling each other (WWF In Your House 25: Judgment Day), teaming together (against KroniK) and even wrestling themselves (Undertaker vs Undertaker: Summerslam 1994). Dennis Knight (Phineas I. Godwinn/Mideon) and Santino Marella have nothing on those two.

OPTION C / OPTION D: CARD PLACEMENT

Under this methodology, we number all of the matches on the card and weight each match heavier than the preceding one. In this fashion, we're emphasizing the importance of the "main event" -- or at least whatever match went on last.  There are some obvious drawbacks to applying this method, not the least of which is that wrestling cards aren't always structured cleanly from least important to most important match.  In fact, we're all familiar with the "bathroom break match"; they're often used as buffers between hot main events.  (For further exploration, I encourage you to check out the mini-study about the "Viscera Slot" utilizing 1993-2013 Raw data.)  Simply assuming the important stuff goes on late in the show and the minor stuff in the beginning is certainly a fallacy.

In fact, even over-weighting the main events (on a ten-match card, half of the score for the card would be driven by the last three matches) doesn't dramatically alter the annual rankings.
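Here's one plausible reading of that card-placement weighting (a Python sketch with a made-up ten-match card; the linear position weights are my assumption, chosen because they reproduce the "last three matches carry about half the weight" property noted above):

```python
def card_placement_average(star_ratings):
    """Weight match i (1-indexed card order) by its position, so later matches count more."""
    weights = range(1, len(star_ratings) + 1)
    total_weight = sum(weights)
    return sum(w * s for w, s in zip(weights, star_ratings)) / total_weight

card = [1.0, 2.0, 0.0, 2.5, 1.5, 3.0, 2.75, 3.5, 2.0, 4.0]   # hypothetical ten-match card
print(round(card_placement_average(card), 2))
print(sum(range(8, 11)) / sum(range(1, 11)))   # ~0.49: the last three slots carry about half the weight
```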

Including All Ratings



The biggest change is that 1999 fares better when you overweight "main events", climbing from 26th place to 22nd place.  Meanwhile, 1990 slips from 24th to 26th.  Otherwise, you're just shuffling the candidates around.

Including Only Positive Ratings (ignore DUD and negative star ratings)


As before, it's largely the same years just slightly shuffled around.  The main difference in this example is that 2011 dropped from 3rd place to 6th place and 2005 jumped up from 6th to 5th.
So, as with many intricate plans, simplicity prevails.  Moving on...

OPTION E / OPTION F: LENGTH OF MATCH
@steenalized put it nicely, "My gut reaction is to include time. Five minutes of -** (minus two stars) is less damaging than 20 minutes of * (one star)".  What happens if we just evaluate the card using a weighting based on the match lengths?


First thing you may notice is that the "space" between the lines is much smaller than in the previous examples.  Whereas previously there was a 0.37 to 0.41 star difference (the delta) from eliminating the DUD/negative stars, in our time-weighted example that "delta" drops to only 0.30.

Essentially, the implication is that while some matches are terrible, they're usually short. And this method underweights short matches (regardless of where those short matches took place on the card).  Now, there's a lot to be said about this and it will be further explored in a later installment when I get to the complex relationship between match lengths and star ratings. (If you want a sneak preview, look at the piece I wrote in October with special attention to the WWF Star Ratings vs. Time graph.)  Your average negative star match is 6 minutes 54 seconds.  Your average "DUD" is 6:34.  Your average positive star rating match is 13:16.  Essentially, your average positive star match (i.e. "good") is going to be more than twice as heavily weighted as a stinker (DUD) or downright terrible match (negative stars).
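For completeness, a small sketch of the length-weighting (Python, with a hypothetical card; this is my illustration of options e/f rather than the exact calculation behind the charts):

```python
def length_weighted_average(matches, positive_only=False):
    """Weight each star rating by the match's length in minutes."""
    if positive_only:
        matches = [(mins, stars) for mins, stars in matches if stars > 0]
    total_minutes = sum(mins for mins, _ in matches)
    return sum(mins * stars for mins, stars in matches) / total_minutes

# hypothetical card: (length in minutes, star rating)
card = [(6.5, 0.0), (7.0, -2.0), (13.0, 3.0), (22.0, 4.0)]
print(round(length_weighted_average(card), 2))                     # all ratings
print(round(length_weighted_average(card, positive_only=True), 2)) # DUD/negatives removed
```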

Including All Ratings



Top five years get reshuffled, but no big surprises. On the other end, 1992 gets a bit of a break as it climbs up out of the bottom five pit as 1997 spills downward into 26th place.

Including Only Positive Ratings

Again, all that work, and not a lot to show for it. The most dramatic move is 2012 jumping up from 9th place to 4th place while 2008 slips to 7th.  At the bottom, the list is dreadfully similar.

That brings us to the next options which will be explored in the next installment: 

OPTION G / OPTION H: IMPORTANCE
OPTION I / OPTION J: SOME CRAZY COMBINATION OF VARIABLES

As usual, a simple idea (weight the matches according to their importance) will introduce a lot more questions: How do we identify the "most important" matches?  Is it tied to the type of match or titles involved or the people involved?  How does card placement influence the ranking of this importance?  And finally, can we combine all of this stuff and produce any kind of an answer that's defensible yet materially different than the same five years on top and the same five years on the bottom which we keep getting?  Tune in next time...


Wednesday, January 22, 2014

What I learned from Meltzer's 280+ WWF PPVs Ratings (Part One)

What I learned from Meltzer's 280+ WWF PPVs Ratings
by Chris Harrington (chris.harrington@gmail.com)

PREAMBLE AND RAMBLE
Last week, I posted a Five Year summary of Wrestling Observer Star Ratings from 2008-2013 covering WWE, TNA and other federations (Dragon Gate/Dragon Gate USA, New Japan Pro Wrestling, Ring of Honor, CMLL and so forth).  I was pretty proud of myself for scraping all of that data out of WO issues available online and extracting the interesting bits.  However, I was quickly humbled (thankfully not in the old-country way) when I received an email from a mysterious (and benevolent) benefactor with all Wrestling Observer Star ratings from 1985-2013 for WWF/WWE PPVs.

This isn't the first time I dove into the wonderful world of snowflakes aka WO Star Ratings.  A few months ago (right before I got caught up in WWE Network hysteria), I published a few pieces looking at Star Ratings including:

These two pieces were built on the backs of two sites (WWE Observer Star List 1986-Present, compiled by Michael Stamcoff and The History of WWE, administrated by Graham Cawthon).  By contrast, the latest work is the result of information supplied by Alex Sarti and match data extracted from Cagematch. (Since many of these Observers are not available online, using so many different sources should ideally give us an opportunity to triangulate and validate the data.)

DATASET AND QUESTIONS
Covers more than 2,000 matches and more than 280 PPV events

I leave the match/PPV counts somewhat vague because there are a few events that have incomplete coverage (NYR 05, IYH 18), and there are some matches which are only listed as either 0 (DUD) or N/A, for which I'd like to verify that no rating was given.

Regardless of any small blemishes, this information provides a very rich and complete dataset covering nearly three decades of pay-per-view.  (Supplementing with SNME ratings would nicely fill out the early years for a more complete picture.)

With the debut of the WWE Network looming and all past WWF/WWE, WCW and ECW PPVs about to be immediately available for on-demand viewing, this archive of results provides an important gateway to tackle questions such as:

  1. What were some of the best PPVs and best years in WWF/WWE history?  How should we weight the "importance" of various matches on the card against their performance?
  2. Who are some of the best PPV performers in WWF/WWE history?  How did they evolve over time by age and by opponent?
  3. What are some of the most predictable trends for PPV quality in WWF/WWE history?  
And lastly...

     4. What are the characteristics that define the match ratings?  (For example, what characteristics seem to be common to a **** match - competitor age/history/card placement/length/etc.?)

ANALYSIS

PART ONE: Simply the Best

We'll start with one of those questions that seems deviously simple: "What year was the best for WWF/WWE PPVs?"

Clearly, as with any query involving subjective ratings, we're not going to be able to "prove" anything (yet I'm still going to try). Please note that for the purposes of this analysis, I'm going to use Dave's WO ratings as a starting point for defining the "best".  (This is certainly not without controversy, but one of the primary reasons that I use Meltzer's snowflakes is the consistency it delivers: the same person watching the events as they happened and rating them in the context of that time.  Inevitably, as a critic he's prone to biases, both fairly and unfairly driven.  But as the leading pro-wrestling journalist, his voice is a powerful and resounding one worth considering.)

As you start poking around, you realize that there's more than one way that you can synthesize the individual match ratings into event ratings into annual ratings.

The simplest is the unweighted (straight-line) average.  Every match that has a WO rating is assigned a numerical value.  We add up all of the values and divide by the number of matches.

Figure 1: Unweighted Average Star Rating by Year for WWF/WWE PPVs
year  unwgt avg  matches | year  unwgt avg  matches | year  unwgt avg  matches
1985  1.63        23     | 1995  1.98        65     | 2005  2.21       104
1986  1.49        23     | 1996  2.12        67     | 2006  2.08       115
1987  2.00        16     | 1997  1.75        75     | 2007  2.11       113
1988  1.33        34     | 1998  1.61       100     | 2008  2.43        95
1989  1.23        32     | 1999  1.44        96     | 2009  2.59        94
1990  1.49        34     | 2000  2.05       100     | 2010  2.36        91
1991  2.01        38     | 2001  2.50        99     | 2011  2.60        89
1992  1.43        30     | 2002  2.30        97     | 2012  2.34        90
1993  1.95        37     | 2003  2.21        91     | 2013  2.53        99
1994  1.90        36     | 2004  1.97       107     |

Figure  2: Star Rating Distribution by Year for WWF/WWE PPVs

Initial Observations:
  • The move to monthly PPVs (May 1995) greatly inflated the number of PPV matches per year.  (This is another reason why it would make sense to add in SNME results to the 80's numbers.)
  • The number of matches expanded again in the Attitude Era (circa '98-'02) as PPVs were longer and some had single-night multi-match tournaments.
  • The percentage of negative stars as a proportion of total matches has generally been falling each decade.  Meanwhile, the percentage of matches rated above 2.5 stars has grown in that timeframe.
This last point is particularly important when you're evaluating the unweighted annual PPV averages.

Q: How should we deal with negative star ratings? 
A: Let's consider when the Mega-Powers exploded..

April 2, 1989: WRESTLEMANIA V (results courtesy of Cagematch)
  • Hercules defeats King Haku (w/Bobby Heenan) (6:57) = 1/2*
  • The Twin Towers (Akeem & The Big Boss Man) (w/Slick) defeat The Rockers (Marty Jannetty & Shawn Michaels) (8:02) = *3/4
  • Brutus Beefcake vs. Ted DiBiase (w/Virgil) - Double Count Out (10:01) = *3/4
  • The Bushwhackers (Butch & Luke) defeat The Fabulous Rougeaus (Jacques Rougeau & Raymond Rougeau) (9:10) = -****
  • Mr. Perfect defeats The Blue Blazer (5:38) = **1/4
  • WWF World Tag Team Title Three On Two Handicap: Demolition (Ax & Smash) (c) defeat Mr. Fuji & The Powers Of Pain (The Barbarian & The Warlord) (8:20) = DUD
  • Dino Bravo (w/Frenchy Martin) defeats Ronnie Garvin (3:06) = DUD
  • The Brain Busters (Arn Anderson & Tully Blanchard) (w/Bobby Heenan) defeat Strike Force (Rick Martel & Tito Santana) (9:17) = **1/2
  • Singles Match (Special Referee: Big John Studd): Jake Roberts defeats Andre The Giant (w/Bobby Heenan) by DQ (9:44) = -***
  • The Hart Foundation (Bret Hart & Jim Neidhart) defeat Greg Valentine & The Honky Tonk Man (w/Jimmy Hart) (7:40) = **1/4
  • WWF Intercontinental Title: Rick Rude (w/Bobby Heenan) defeats The Ultimate Warrior (c) (9:36) = **1/2
  • Bad News Brown vs. Jim Duggan - Double DQ (3:49) = DUD
  • The Red Rooster defeats Bobby Heenan (w/The Brooklyn Brawler) (0:32) = DUD
  • WWF World Heavyweight Title: Hulk Hogan defeats Randy Savage (c) (17:54) = **3/4
14 matches. Ratings were all over the map: two received negative stars, four were DUDs (zero stars), five were positive but below two-and-a-half stars and three were at/above two-and-a-half stars.  

The straight-line average for WM5 would be 0.66. That's pretty abysmal for a PPV, let alone the major pay-per-view event of the year! 

However, does it really make sense that a four star classic can be completely cancelled by a negative four star travesty?  Furthermore, when your negative matches are on the undercard and your positive matches are the main events, shouldn't that matter?  How should we weight factors such as card importance, length of match, placement of match/star power in match, etc?  How should we deal with shows that do not have ratings for all matches?  Should a "DUD" rating count as zero stars, or just be ignored?  Likewise, how should we deal with the negative ratings (which were more prevalent as a total percentage of matches in WWF 1980s)?

Alternatively, we could look at just the matches that at least had a positive star rating.  Now, eight matches at Trump Plaza qualify and our unweighted positive star average for WM5 is 2.03, which may be low, but is at least a passable score for a major PPV.
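To make both averages concrete, here's a quick check using the star ratings listed above (DUDs counted as zero, negative snowflakes as negative numbers):

```python
# WrestleMania V star ratings in card order, as listed above.
wm5 = [0.5, 1.75, 1.75, -4.0, 2.25, 0.0, 0.0, 2.5, -3.0, 2.25, 2.5, 0.0, 0.0, 2.75]

straight_line = sum(wm5) / len(wm5)
positives = [s for s in wm5 if s > 0]
positive_only = sum(positives) / len(positives)

print(round(straight_line, 2))   # 0.66 -- the "abysmal" all-matches average
print(round(positive_only, 2))   # 2.03 -- the passable positive-ratings-only average
```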

Figure 3: Unweighted Positive Star Ratings (removes DUD and below)
year  pos avg  matches | year  pos avg  matches | year  pos avg  matches
1985  2.47      18     | 1995  2.22      60     | 2005  2.55      91
1986  2.45      16     | 1996  2.49      59     | 2006  2.50      99
1987  1.81      15     | 1997  2.33      59     | 2007  2.46     100
1988  1.74      27     | 1998  1.97      87     | 2008  2.56      90
1989  2.02      24     | 1999  1.91      81     | 2009  2.83      87
1990  2.02      28     | 2000  2.34      90     | 2010  2.45      87
1991  2.35      33     | 2001  2.72      93     | 2011  2.65      87
1992  2.06      24     | 2002  2.51      90     | 2012  2.51      85
1993  2.32      32     | 2003  2.34      86     | 2013  2.56      97
1994  2.24      32     | 2004  2.28      96     |

Illustrated another way, when we plot Figure 1 (unweighted star ratings by year) against Figure 3 (positive star ratings by year), we see that while both methods yield generally similar results, the delta between the two lines has continued to diminish over time (as would be suggested by Figure 2).

Figure 4: Unweighted Star Ratings (all) vs Unweighted Star Ratings (positive)


The three years most severely affected by negative star ratings are 1985 (-0.84), 1986 (-0.96) and 1989 (-0.79). Both 1985 and 1986 move from the bottom of the pack (22nd and 25th respectively) to the middle of the pack (12th and 14th respectively) when you compare their rankings under the unweighted all-matches methodology (red) and the unweighted positive-star-ratings-only methodology (green).  (On the flipside, 2003 takes a tumble from 9th place to 18th place when you exclude negative ratings; this suggests there are fewer great matches at the very top that year as compared to other years.)

Interestingly, we have the same top five highest rated years under either methodology:

Top Five Years for WWF/WWE PPVs by Unweighted Star Ratings
  1. 2009: 2.59 unweighted (all matches), 2.84 unweighted (positive star ratings)
  2. 2011: 2.60 unweighted (all matches), 2.66 unweighted (positive star ratings)
  3. 2001: 2.50 unweighted (all matches), 2.72 unweighted (positive star ratings)
  4. 2013: 2.53 unweighted (all matches), 2.58 unweighted (positive star ratings)
  5. 2008: 2.43 unweighted (all matches), 2.58 unweighted (positive star ratings)
There has been a remarkable cluster of years (2008-2013) with excellent per-match quality on WWE PPVs as compared to the vast history of the product.  (2010 and 2012 are at 6th/7th with 2.36/2.34 unweighted all-match averages.)  This result is even more amazing when you consider that during this era there's been an explosion of TV Rights Fees for WWE while PPV revenues have been stagnant and dropping. (With the recent huffing and puffing about the death of PPV quality due to the WWE Network, three-hour Raws and the meteoric rise of future TV Rights, might this represent a counterargument that "PPV" quality isn't necessarily going to plummet?)

Bottom Five Years for WWF/WWE PPVs by Unweighted Star Ratings
  1. 1988: 1.33 unweighted (all matches), 1.81 unweighted (positive star ratings)
  2. 1999: 1.44 unweighted (all matches), 1.91 unweighted (positive star ratings)
  3. 1989: 1.23 unweighted (all matches), 2.02 unweighted (positive star ratings)
  4. 1992: 1.43 unweighted (all matches), 2.06 unweighted (positive star ratings)
  5. 1998: 1.61 unweighted (all matches), 1.97 unweighted (positive star ratings)
(Honorable mention 1990 with 1.49/2.02 scores.)

It's not surprising to see late-1980s WWF poorly received by Dave Meltzer (especially in light of the stellar work in WCW at that time).  Similarly, 1990 and 1992 remind me of the malaise that emerged in the early 1990s in American professional wrestling (as has been noted in other places, such as house show attendance).

But listing 1999 among the worst years of WWF/WWE quality may surprise some, especially since that year was in the heart of the boom period for the WWF and the middle of the Monday Night War(s).  This datapoint brings us to the next question: should other factors - such as card position - play into how we evaluate the totality of a PPV?

For instance, the positive star rating average for 1999 PPVs (which excludes the tragic Over the Edge 1999 event) is a less than impressive 1.91.  However, if you split the matches into those on the first half of the card (36 of 81 matches with positive ratings, 1.49 positive star average) and the second half of the card (45 of 81 matches with positive ratings, 2.24 positive star average), there is a distinct quality difference.  (For this purpose I am using the actual card order as it aired on the PPV. This isn't factoring in the "strength" of the matches in terms of importance.)
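A sketch of that first-half/second-half split (Python; the example cards are invented, and the last line simply re-derives the combined 1.91 positive-star average from the two halves quoted above):

```python
def half_card_averages(cards):
    """Split each card's positively-rated matches into first half vs second half
    of the running order and average each bucket."""
    first, second = [], []
    for ratings in cards:                      # ratings listed in card order, 0 = DUD
        midpoint = len(ratings) / 2
        for slot, stars in enumerate(ratings, start=1):
            if stars <= 0:
                continue
            (first if slot <= midpoint else second).append(stars)
    return sum(first) / len(first), sum(second) / len(second)

cards = [
    [1.0, 0.0, 2.0, 3.5],
    [1.5, 2.0, 2.5, 4.0],
]
print(half_card_averages(cards))   # first-half vs second-half positive averages

# Sanity check on the 1999 split: 36 matches at 1.49 and 45 at 2.24 combine to 1.91.
print(round((36 * 1.49 + 45 * 2.24) / 81, 2))   # 1.91
```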

Figure 5: 1985-2013 PPV Positive Star Ratings Unweighted Average (first half vs second half of Card)
What does this radar chart show (besides that I am currently fooling around with Excel in a vain attempt to spice things up)?  The green line represents the second half of the card - where the "main events" traditionally occur.  The maroon line represents the first half of the card (often including the pre-show when that was rated), which would traditionally include the undercard.

Where the lines cross (1991, 1994), there was a distinct dynamic where the later matches had lower average ratings than the earlier matches.  When the lines meet (1993, 2010), there was essentially parity among the matches scattered across the card.

In 1991, there were only three WWF PPV matches that were rated ****+:
  • Rockers/Orient Express (Royal Rumble 91) ****
  • Ultimate Warrior vs Randy Savage (Wrestlemania) ****1/4
  • Bret Hart vs Mr Perfect (SummerSlam 91) ****
All three of these matches were considered "first half" matches.  (Even the Warrior/Savage match, because there were so many matches on that card.)  Conversely, the matches that cracked *** in the "top" half of 1991 PPV cards (Beverly/Nasties vs Bushwhackers/Rockers at SS and Repo/DiBiase vs El Matador/Virgil at Tues in TX) topped out at ***1/2.

1994 is more stunning when you consider there were TWO five-star matches (Bret/Owen Cage and HBK/Razor Ladder), both of which were in the second half of the card.  And YET, dismal matches involving Mabel, Razor/IRS and Yokozuna vs Lex Luger were rough enough to be a giant albatross.  (This also says something about the quirks of looking at unweighted numbers.)

So, should we weight the main events over the undercard?  And if so, by what system?  Also, are there other weightings we could use - based on time, importance of championships, OCELOT values of wrestlers involved, number of wrestlers involved, etc?

Tuesday, January 14, 2014

WWE Article Round-Up

Some reading material you may want to devour if you haven't read it yet:


Jonathan Snowden: "WWE Network: How the Hardcore Fan Just Won the Wrestling War" (Bleacher Report)

"Wrestling television will no longer build to a mega-event, one designed to attract casual fans. It will be a unified product, one designed to get wrestling fanatics to sign up for six months of a service."
I don't think WWE will end up being driven by hardcore WWE fans. At best, there will be a bifurcated focus where the Network may cater more to them, but overall the dynamics aren't changing -- TV contracts reign supreme (as they have for years) and the collapse of WWE 24/7 demonstrates that the few hundred thousand dedicated fans aren't going to represent the majority of the million subscribers that WWE is aiming for with the Network.

Lawrence Lewitinn: "WWE is going to belly flop: portfolio manager" (CNBC)

"Valuation isn't compelling at all, trading at 109 times forward earnings versus an average of 19.4 over the last three years," says Stephenson
Trolling opening paragraph aside, I think there are some serious questions about what buzz is driving WWE stock up, and whether good news on the 2015 TV Rights negotiations will defray the massive Network start-up costs and revenue cannibalization that will become evident over the rest of the fiscal year.

David Bixenspan: "Breaking Down Latest Buzz Surrounding Negotiations for WWE's TV Rights" (Bleacher Report)

"A&E itself has never aired anything like pro wrestling, but in 2014, there's no real reason why they can't, as their bread and butter is a wide swath of reality shows with no branding that would get in the way of something like WWE shows. As the home of Duck Dynasty, the most-watched show on cable, the addition of Raw andSmackDown would give them a very nice prime time average."
In the end, I still think that NBCU will retain both Raw and Smackdown. While I continue to lament that Smackdown is on cable and not "free" TV (though there are Hulu replays available), I wouldn't be surprised if WWE strengthens their ties to Comcast/NBCUniversal from 2015 onwards.

Grey Owl Capital: "Will World Wrestling Entertainment Pin NBC?" (Seeking Alpha)

"Thus, we were willing to overlook their shareholder-unfriendly dual share class structure, an operating history that includes questionable capital allocation decisions, and a dividend that is not covered by current cash flow. The immediacy and magnitude of the event present an incredible risk reward dynamic."
Most of this article is just regurgitating WWE conference call and press release talking points, but this excerpt did have me smiling. It underscores the fact that while WWE is a publicly traded stock, it's controlled by the McMahon family, and whatever they want to do (start a film division, buy a restaurant, start a network), they do. However, it does also remind us that the investment community is intrigued and excited about the new Network (there were tons of callers today for the Network conference call), even if it doesn't immediately generate profit.

They're all interesting and well-written pieces to consider.
@mookieghana

Right model, Right formula, Right time -- WWE Network Conference Call

I listed a lot of the questions I wanted to ask about the WWE Network last week.

Today, WWE held a conference call to add some additional context and hype for their new product.

OPENING
Vince McMahon thanks investors for "bearing with us through the years".
Vince acknowledges they had explored other models (traditional channel, premium channel) before landing on the over-the-top model.  He says that it's the "right model, right formula, right time."
He promises that everything is "synergistic".  (He views that growing WWE is good for EVERYONE in the aggregate including USA or whomever they're on television with.)

George Barrios takes over.

Things I'm learning:

WWE is especially counting on tablets and growth in Smart TVs/Blu-ray players to fuel adoption of "streaming content".  (It's interesting to see how PC usage is essentially flat across the years.)

WWE does acknowledge the demographic gap between OTT and TV networks.


WWE throws out both Netflix growth numbers and, more interestingly, MLB subscribership (about 3 million). That's an interesting service to compare against.


One claim from the Network launch which I was curious about was "WWE Fans consume more online video than others".  Here's the internal research that "backs up" that claim.  Considering the nutty 52M households with an "affinity for WWE", I don't know how much I'd bank on this.

I was curious how WWE was going to handle customer service, and as expected they did outsource it to Harte-Hanks - a marketing company which provides call center services and support for other brands.

WWE is using the 52M number as the baseline (instead of something like the 4M domestic Raw viewers or the "15M weekly WWE viewers") so they can cite terribly low adoption rates (2%-6%) that look possible, instead of the 25%-90% numbers you'd need otherwise.  The cannibalization number is interesting: "up to $60M".  I had guessed $55M on 12/6.
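The arithmetic behind that is simple enough to show (Python; the 1,000,000-subscriber target is the million-subscriber goal mentioned earlier, and the baselines are the figures cited above):

```python
# Required adoption rate for a 1,000,000-subscriber target against different baselines.
target = 1_000_000
baselines = {
    "52M 'affinity' households": 52_000_000,
    "15M weekly WWE viewers": 15_000_000,
    "4M domestic Raw viewers": 4_000_000,
}
for label, base in baselines.items():
    print(f"{label}: {target / base:.1%} adoption needed")
# 52M baseline -> ~1.9%; 4M baseline -> 25%: the same target looks very different.
```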

International subscribership for phase 1 (UK, Canada, New Zealand, Singapore, Hong Kong, Nordics) is estimated at only between 250,000 and 1,500,000 subscribers.  (A $10 price point is also being advertised for the international service.)

Q&A Section

They did insist they did a lot of testing on the elasticity of the pricing to reach $9.99. "We did a lot of testing; at the end we want to drive value for the audience. How do we deliver more value to our customers?  That's what we were focused on."  

Q: Why aren't consumers able to sign up now?
A: There are a lot of "timing issues around discussions with current providers and PPV", but they didn't want to miss the 6-7 week run-up to Wrestlemania. There are "contractual gymnastics". I think this is code for "we're trying to still sell Royal Rumble and Elimination Chamber".

Responding to DirecTV already threatening to drop PPVs, WWE is going to "continue to work with them" and they want to "ensure their fans have as much choice as possible.  We'll work with them to make that happen."

Q: Why are you comfortable with up to $60M in Cannibalization?
A: "I've got confidence in our ability to execute.  I can't believe that once someone gets a taste of it they are going to back out."

Revenue Sharing Agreements - Roku, Apple: "There's a general standard that's set that goes from whenever the subscription comes from -- inside the platform (70/30 split), outside the platform (no split)."

Expecting to spend about $20M in Programming (OpEx) in 2014.  This doesn't include the PPV cost.
(They're going to amortize the programming they've already created -- will appear on the P&L.)

Q: Will there be a pre-paid annual subscription (at discount)?
A: Yes, there will be pre-paid annual and you can give it as a gift.


Questions I submitted (but were not answered):

  • Will the Network revenue and expenses be split into a separate revenue division for reporting?
  • Will the Network subscription numbers be included in the monthly KPI numbers?
  • How robust of a marketing campaign outside of the "normal" WWE channels will there be to advertise the WWE Network service?  How much has WWE committed to spend for marketing this service?
  • What is the annual investment for the Network going to look like in terms of recurring and one-time costs?

Monday, January 13, 2014

Mid South Wrestling 1979-1986 Results & Stats

    Despite Mid South Wrestling's reputation for stellar action, excellent television production and innovative character development, clean and concise results for the promotion (later the Universal Wrestling Federation) aren't as easy to come by as I would have thought.

    Data Sources Used for this Analysis

    You can see the quick comparison between the different data sources.

    For the initial pass, I used the ClawMaster results (1979-1986, missing 1982) combined with the CrazyMax TV episodes (Dec 1981-Dec 1983).  This certainly double-counts many of the results.  However, we also have lots of result holes where many of the same matchups indubitably took place, so hopefully the two balance each other out (somewhere).

    Notes
    Many of the cards from the "results" section had winners & losers; however, most of the TV results did not.  So, keep that in mind when you're looking at win-loss percentages -- essentially, I have match counts but little else for 1982 at this moment.
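One note on the percentages in the lists that follow: they appear to be computed as wins divided by decisive results (wins plus losses), ignoring draws and the "other" matches where no winner was recorded. That's my inference from the figures, sketched below in Python.

```python
def win_percentage(wins, losses, draws=0, other=0):
    """Win percentage as wins over decisive results; draws and 'other' matches
    (no recorded winner) are deliberately ignored, matching the listed figures."""
    return wins / (wins + losses)

# Junkyard Dog: 169-36-14 plus 142 other matches -> 82% win, matching the table below.
print(f"{win_percentage(169, 36, draws=14, other=142):.0%}")
```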

    MID SOUTH WRESTLING (1979-1986)

    Table of Annual Matches


    Records

    Keep in mind that 1982 is all TV matches without "winners/losers" specified.

    More than 300 matches
    Junkyard Dog: 361 matches, 169-36-14 and 142 other matches (82% win)
    Ted Dibiase: 341 matches, 133-81-14 and 113 other matches (62% win)

    More than 200 matches
    Jim Duggan: 292 matches, 151-53-19 and 69 other matches (74% win)

    More than 150 matches
    Steve Williams: 190 matches, 96-64-13 and 17 other matches (60% win)
    Butch Reed: 181 matches, 83-55-8 and 35 other matches (60% win)
    Mr Wrestling II: 165 matches, 46-47-6 and 66 other matches (49% win)
    Terry Gordy: 158 matches, 47-72-17 and 22 other matches (39% win)
    Paul Orndorff: 158 matches, 70-17-8 and 63 other matches (80% win)
    Terry Taylor: 155 matches, 103-38-12 and 2 other matches (73% win)

    More than 100 matches

    Kamala: 139 matches, 37-65-10 and 27 other matches (36% win)
    Buck Robley: 137 matches, 61-16-7 and 53 other matches (79% win)
    Jake Roberts: 136 matches, 66-40-5 and 25 other matches (62% win)
    Len Denton: 127 matches, 23-44-4 and 56 other matches (34% win)
    Michael Hayes: 125 matches, 27-76-8 and 14 other matches (26% win)
    Chavo Guerrero: 120 matches, 78-23-5 and 14 other matches (77% win)
    Buddy Landell: 118 matches, 27-51-2 and 38 other matches (35% win)
    Tim Horner: 115 matches, 27-40-4 and 44 other matches (40% win)
    Ernie Ladd: 112 matches, 30-41-5 and 36 other matches (42% win)
    Buddy Roberts: 107 matches, 40-45-7 and 15 other matches (47% win)
    Mike Sharpe: 105 matches, 30-25-5 and 45 other matches (55% win)

    More than 50 matches

    Dick Murdoch: 100 matches, 39-24-4 and 33 other matches (62% win)
    Art Crews: 99 matches, 19-42-5 and 33 other matches (31% win)
    Mike George: 95 matches, 21-39-1 and 34 other matches (35% win)
    Magnum TA: 90 matches, 42-21-2 and 25 other matches (67% win)
    Mr Olympia: 88 matches, 15-14 and 59 other matches (52% win)
    Super Destroyer: 87 matches, 20-33-2 and 32 other matches (38% win)
    Robert Gibson: 86 matches, 61-19-5 and 1 other match (76% win)
    Ricky Morton: 84 matches, 61-17-5 and 1 other match (78% win)
    Jack Victory: 82 matches, 17-61-4 (22% win)
    Bob Sweetan: 82 matches, 14-44-5 and 19 other matches (24% win)
    Bobby Fulton: 81 matches, 58-19-4 (75% win)
    Killer Khan: 81 matches, 23-22-6 and 30 other matches (51% win)
    Terry Latham: 80 matches, 27-30-6 and 17 other matches (47% win)
    Tommy Rogers: 78 matches, 58-16-4 (78% win)
    Ken Mantell: 77 matches, 17-30-5 and 25 other matches (36% win)
    Jim Garvin: 77 matches, 28-16 and 33 other matches (64% win)
    Terry Orndorff: 76 matches, 27-25-1 and 23 other matches (52% win)
    King Cobra: 76 matches, 25-24-3 and 24 other matches (51% win)
    One Man Gang: 69 matches, 17-25-2 and 25 other matches (40% win)
    Missing Link: 69 matches, 35-15-3 and 16 other matches (70% win)
    King Kong Bundy: 67 matches, 19-18 and 30 other matches (51% win)
    Bill Watts: 61 matches, 42-5-2 and 12 other matches (89% win)
    Steven Little Bear: 60 matches, 37-5-4 and 14 other matches (88% win)
    Eddie Gilbert: 60 matches, 21-36-3 (37% win)
    Afa: 59 matches, 14-10 and 35 other matches (58% win)
    Dennis Condrey: 58 matches, 23-29-1 and 5 other matches (44% win)
    Sika: 58 matches, 14-9 and 35 other matches (61% win)
    Bobby Eaton: 58 matches, 23-29-1 and 5 other matches (44% win)
    Hector Guerrero: 56 matches, 33-20-2 and 1 other match (62% win)
    Kelly Kiniski: 55 matches, 9-26-1 and 19 other matches (26% win)
    Bob Roop: 54 matches, 3-1 and 50 other matches (75% win)
    Krusher Darsow: 53 matches, 14-23 and 16 other matches (38% win)
    Matt Borne: 52 matches, 8-19-3 and 22 other matches (30% win)

    More than 25 matches

    Tommy Wright: 50 matches, 6-32-1 and 11 other matches (16% win)
    Hercules Hernandez: 50 matches, 17-29-4 (37% win)
    Rick Steiner: 49 matches, 15-32-2 (32% win)
    Tank Patton: 49 matches, 18-17-2 and 12 other matches (51% win)
    Brett Wayne Sawyer: 48 matches, 18-26-1 and 3 other matches (41% win)
    Bull Ramos: 48 matches, 28-18 and 2 other matches (61% win)
    Randy Colley: 47 matches, 21-26 (45% win)
    Kerry Von Erich: 47 matches, 27-4-6 and 10 other matches (87% win)
    Buzz Sawyer: 45 matches, 14-24-5 and 2 other matches (37% win)
    Dusty Rhodes: 45 matches, 28-6-1 and 10 other matches (82% win)
    Leroy Brown: 44 matches, 11-20-4 and 9 other matches (35% win)
    Tony Charles: 44 matches, 9-12-2 and 21 other matches (43% win)
    Gino Hernandez: 44 matches, 7-13-3 and 21 other matches (35% win)
    Cocoa Samoa: 43 matches, 11-9-1 and 22 other matches (55% win)
    Gustavo Mendoza: 42 matches, 2-39 and 1 other match (5% win)
    Tony Torres: 42 matches, 9-12-3 and 18 other matches (43% win)
    Sting: 42 matches, 14-26-2 (35% win)
    Koko Ware: 41 matches, 29-5-4 and 3 other matches (85% win)
    Ben Alexander: 41 matches, 15-20-4 and 2 other matches (43% win)
    King Parsons: 41 matches, 30-4-2 and 5 other matches (88% win)
    Kendo Nagasaki: 41 matches, 12-12 and 17 other matches (50% win)
    Brad Armstrong: 40 matches, 21-15-4 (58% win)
    Nikolai Volkoff: 40 matches, 10-17-1 and 12 other matches (37% win)
    Arn Anderson: 39 matches, 0-17 and 22 other matches (0% win)
    Johnny Rich: 39 matches, 9-9-2 and 19 other matches (50% win)
    Carl Fergie: 38 matches, 4-17 and 17 other matches (19% win)
    George Weingeroff: 38 matches, 13-8 and 17 other matches (62% win)
    Rip Rogers: 38 matches, 0-18-4 and 16 other matches (0% win)
    Don Diamond: 37 matches, 9-12-1 and 15 other matches (43% win)
    Frank Dusek: 37 matches, 6-13-2 and 16 other matches (32% win)
    Tony Atlas: 36 matches, 14-4-1 and 17 other matches (78% win)
    Sheepherders: 36 matches, 12-21-3 (36% win)
    Mike Bowyer: 36 matches, 7-23-1 and 5 other matches (23% win)
    Brickhouse Brown: 36 matches, 17-15-4 (53% win)
    Dick Slater: 35 matches, 17-15-3 (53% win)
    Mike Miller: 34 matches, 4-15-1 and 14 other matches (21% win)
    Terry Daniels: 34 matches, 11-19-2 and 2 other matches (37% win)
    Jesse Barr: 33 matches, 1-1 and 31 other matches (50% win)
    Al Perez: 31 matches, 17-13-1 (57% win)
    Jose Lothario: 31 matches, 10-12-4 and 5 other matches (45% win)
    Mike Bond: 31 matches, 0-7 and 24 other matches (0% win)
    Bill Dundee: 31 matches, 11-20 (35% win)
    Vinnie Romeo: 31 matches, 1-5-1 and 24 other matches (17% win)
    Igor Putski: 30 matches, 5-21 and 4 other matches (19% win)
    Jim Neidhart: 30 matches, 7-7-1 and 15 other matches (50% win)
    Bob Orton Jr: 29 matches, 2-2 and 25 other matches (50% win)
    Lanny Poffo: 29 matches, 6-14 and 9 other matches (30% win)
    Tiger Conway Jr: 28 matches, 11-4-1 and 12 other matches (73% win)
    Boris Zurkov: 28 matches, 4-7-1 and 16 other matches (36% win)
    Tony Anthony: 28 matches, 6-15-1 and 6 other matches (29% win)
    Charlie Cook: 28 matches, 8-7-1 and 12 other matches (53% win)
    Stan Stasiak: 28 matches, 16-8 and 4 other matches (67% win)
    John Tatum: 27 matches, 8-16-2 and 1 other match (33% win)
    Bill Eadie: 26 matches, 8-17 and 1 other match (32% win)

    Winners & Losers

    The Winners

    • Junkyard Dog: 361 matches, 169-36-14 and 142 other matches (82% win)
    • Paul Orndorff: 158 matches, 70-17-8 and 63 other matches (80% win)
    • Buck Robley: 137 matches, 61-16-7 and 53 other matches (79% win)
    • Chavo Guerrero: 120 matches, 78-23-5 and 14 other matches (77% win)
    • Robert Gibson: 86 matches, 61-19-5 and 1 other match (76% win)
    • Ricky Morton: 84 matches, 61-17-5 and 1 other match (78% win)
    • Bobby Fulton: 81 matches, 58-19-4 (75% win)
    • Tommy Rogers: 78 matches, 58-16-4 (78% win)
    • Bill Watts: 61 matches, 42-5-2 and 12 other matches (89% win)
    • Steven Little Bear: 60 matches, 37-5-4 and 14 other matches (88% win)
    • Kerry Von Erich: 47 matches, 27-4-6 and 10 other matches (87% win)
    • Dusty Rhodes: 45 matches, 28-6-1 and 10 other matches (82% win)
    • Koko Ware: 41 matches, 29-5-4 and 3 other matches (85% win)
    • King Parsons: 41 matches, 30-4-2 and 5 other matches (88% win)
    • Tony Atlas: 36 matches, 14-4-1 and 17 other matches (78% win)
    • Andre the Giant: 25 matches, 14-1-1 and 9 other matches (93% win)
    • Chris Adams: 22 matches, 14-4-4 (78% win)
    • Ed Wiskowski: 21 matches, 1-0 and 20 other matches (100% win)
    • Frank Monte: 19 matches, 1-0 and 18 other matches (100% win)
    • Ric Flair: 17 matches, 11-3-2 and 1 other match (79% win)
    • Kevin Von Erich: 15 matches, 10-2 and 3 other matches (83% win)
    • Tommy Rich: 15 matches, 10-1 and 4 other matches (91% win)

    The Losers

    • Jack Victory: 82 matches, 17-61-4 (22% win)
    • Bob Sweetan: 82 matches, 14-44-5 and 19 other matches (24% win)
    • Tommy Wright: 50 matches, 6-32-1 and 11 other matches (16% win)
    • Gustavo Mendoza: 42 matches, 2-39 and 1 other match (5% win)
    • Arn Anderson: 39 matches, 0-17 and 22 other matches (0% win)
    • Carl Fergie: 38 matches, 4-17 and 17 other matches (19% win)
    • Rip Rogers: 38 matches, 0-18-4 and 16 other matches (0% win)
    • Mike Bowyer: 36 matches, 7-23-1 and 5 other matches (23% win)
    • Mike Miller: 34 matches, 4-15-1 and 14 other matches (21% win)
    • Mike Bond: 31 matches, 0-7 and 24 other matches (0% win)
    • Vinnie Romeo: 31 matches, 1-5-1 and 24 other matches (17% win)
    • Igor Putski: 30 matches, 5-21 and 4 other matches (19% win)
    • Doug Vines: 25 matches, 0-8-1 and 16 other matches (0% win)
    • Tom Renesto Jr: 23 matches, 1-8 and 14 other matches (11% win)
    • Bill Irwin: 23 matches, 3-14-1 and 5 other matches (18% win)
    • Ricky Fields: 22 matches, 4-15-1 and 2 other matches (21% win)
    • Steve Hall: 21 matches, 2-15 and 4 other matches (12% win)
    • Libyan: 20 matches, 1-19 (5% win)
    • Porkchop Cash: 20 matches, 0-10-1 and 9 other matches (0% win)
    • Billy Starr: 19 matches, 0-4 and 15 other matches (0% win)
    • Tony Zane: 18 matches, 0-5-1 and 12 other matches (0% win)
    • Larry Higgins: 17 matches, 0-8 and 9 other matches (0% win)

    Additional detail at: https://sites.google.com/site/chrisharrington/mookieghana-prowrestlingstatistics/midsouth_wrestling