As you probably know, Inside The Rock Era is the only organization that has developed a formula to rank songs. While we do occasionally adjust the rankings slightly, the mathematical formula considers all of the factors which should go into a ranking of songs, artists, or albums--radio airplay, sales, chart performance, popularity, longevity, awards won, and the competition faced at the time. We want to talk about the "error rate" in this special, which we will get to after a little background.
There should be little disagreement that the above factors are the correct ones to utilize in analyzing the artists of the 70's to come up with The Top 100*. Where people can logically disagree (and we acknowledge this) is in the exact weight you give each of the above factors. Ask 10 people and you're likely to get very different answers as to which factors are the most important and which are the least. Some might say you should go only by chart ranking. We strongly disagree, and believe most people will back us up on that. For instance, if you went only by chart performance, "Macarena" would be the #2 song of all time. Gulp.
Others say you go only by sales, for that is the ultimate goal of artists and record companies--to get the product sold. That makes a lot of sense, but to concentrate wholly on sales is wrong. The reason is that not everyone buys music, or certainly not to the level of others. The purchase of a song or album, while great for an artist's bottom line, doesn't mean the person who bought the music likes it any more than the person who didn't. Thus, to base rankings only on sales is to say that the person who buys a lot of music should have more influence than the millions who don't. Put that way, sole emphasis or overemphasis on sales is intuitively incorrect.
Still others argue you should look at radio airplay--if a song or artist is never heard, how can you say they are better than someone who is regularly played? After all, radio stations too are in business to make money, and they want to keep their customers happy. But there are problems with this line of thinking as well. Radio stations play music that they know advertisers will support. If the businesses in the community, for example, can't stand acid rock, a radio station that plays it won't last long. This is mitigated somewhat by Arbitron, which rates radio stations quarterly, semiannually, or annually, depending on how large the market is. Arbitron prints a rather lengthy report showing how each radio station in the market is doing. Obviously, the more listeners a radio station has, the easier it is to sell radio airtime (or commercials, or "spots", as those in the business call them).
And while Arbitron helps keep radio competitive to some degree, basing rankings solely on radio airplay has inherent disadvantages similar to placing too many eggs in the "sales basket". I knew several people who listened to the radio constantly--they would call DJ's all the time to request songs and participate in all the contests. Again, that doesn't mean they should be the most important people in deciding song or artist rankings. Yet if you put a strong emphasis on radio airplay, those people who listen to the radio 10-12 hours a day would get to determine what the top songs are. Do you see the problem with that?
We at Inside The Rock Era believe that all of the above factors are important in producing music specials, with one key caveat. We believe that competition is the main factor that other organizations don't think of, forget about, or don't know how to apply to songs or artists. We came up with a copyrighted mathematical formula* that includes competition in these rankings. No one else has it. Only a song or artist ranking that considers competition is worth its salt and can be considered legitimate. It is the only way to properly evaluate whether "Macarena", with its 14 weeks at #1, is a better song than "Stairway To Heaven", with its zero weeks at #1.
So we combine all of the factors mentioned above. We tried thousands of different weighting formulas, and believe we arrived at the most accurate one. It bears repeating, however, that to the extent we place different weights on the factors, there is a bias in the results. It is a professional bias, one calibrated through over 40 years of doing the rankings, but it is a bias nonetheless. I call this bias the "error rate" in a song or artist ranking. And I point it out so viewers can appreciate the logical difference between putting more emphasis on sales, or airplay, or awards won, or chart performance.
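To give you a feel for how a weighted formula works in general (without, of course, giving away the copyrighted formula itself), here is a simplified sketch in Python. The factor names, weights, and scores below are made up for illustration only; they are not the actual values we use.

    # A simplified illustration of a weighted composite score.
    # The factor names and weights are hypothetical examples only,
    # NOT the actual copyrighted Inside The Rock Era formula.
    HYPOTHETICAL_WEIGHTS = {
        "airplay": 0.20,      # radio airplay
        "sales": 0.20,        # records sold
        "chart": 0.20,        # chart performance (peak position, weeks charted)
        "longevity": 0.15,    # staying power over the decade
        "awards": 0.10,       # awards won
        "competition": 0.15,  # strength of the field faced at the time
    }

    def composite_score(factors):
        # 'factors' maps each factor name to a normalized 0-100 score;
        # the competition score is higher when success came against a
        # strong field of rival songs and artists.
        return sum(HYPOTHETICAL_WEIGHTS[name] * factors[name]
                   for name in HYPOTHETICAL_WEIGHTS)

    # A song that topped a weak field versus one that endured against
    # a strong field: the chart champion does not automatically win.
    weak_field = {"airplay": 90, "sales": 85, "chart": 99,
                  "longevity": 40, "awards": 20, "competition": 25}
    strong_field = {"airplay": 95, "sales": 80, "chart": 70,
                    "longevity": 95, "awards": 60, "competition": 95}
    print(composite_score(weak_field))    # about 66.6
    print(composite_score(strong_field))  # about 83.5

Notice how the song that faced a strong field comes out ahead of the song that merely out-charted a weak one--that is exactly the "Macarena" versus "Stairway To Heaven" point made above.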
As it pertains to The Top 100 Artists of the Seventies*, the error rate in this area of the music special, #85 to about #60, is approximately plus or minus 8, meaning an artist that we have ranked at #85 could logically be ranked as high as #77 or as low as #93, depending on the weight you give each factor. As we get closer to #1, the error rate drops significantly. This is to say that no matter how you look at it, the artists in the Top 15 or Top 20 are pretty much the way it is. In fact, the top two artists of the 70's were so successful that they are miles ahead of everyone else--only a fool would look at all of the accomplishments of each artist in the decade and say that someone else was better than those two. The error rate when we get to those two artists is "one", meaning it's either 1-2 or 2-1. No one else is close to them.
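For readers who want the arithmetic spelled out, the error rate is simply a band around the published position. A quick sketch, again just for illustration:

    # Plausible range of positions for a given rank and error rate.
    def plausible_range(rank, error_rate):
        # Clamp the high end at #1; a ranking can't go above first place.
        return (max(1, rank - error_rate), rank + error_rate)

    print(plausible_range(85, 8))  # (77, 93), as described above
    print(plausible_range(1, 1))   # (1, 2) -- it's either 1-2 or 2-1 at the top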
So while we want to disclose the "error rate", the above statistics should also tell you that the formula is pretty close to accurate. Just keep the error rate in mind, and you'll have a true picture of where an artist could rank. This will help you enjoy the music and the accomplishments of each artist, rather than fret about each individual ranking!