No doubt you’ve heard that Pitchfork has published a list of “not canon” album score adjustments. In their own words, they serve as conversation starters for the opinionated reader: mere hypotheses offered to relieve the growing weight of constant, slavish ‘litigations of fickle feelings’ under the shackles of context, culture and generation.
It’s about time one of the internet’s most popular music criticism blogs, famed for giving scores out of 10, got in on the idea; You Know Who set the precedent nearly four years ago. His first two ‘redux’ revision-reviews also carry a lot more credibility than Pitchfork’s, given that their subjects are landmark internet-age albums that were themselves revised: Kanye West – The Life of Pablo, and Frank Ocean – Endless.
Both were released months apart in 2016, and both were part of a bigger, rapidly accelerating shift towards the internet’s influence on music publishing. Anthony Fantano’s initial impression is one thing, but both albums have quite literally changed since they were first heard by public ears. Kanye used TLOP’s Tidal-exclusive period to adjust mixes, add background elements, change features and even append the song Saint Pablo to cohesively conclude the album; the history of Endless needs its own article to fully unpack, but it has seen similar (if lesser) mix and tracklist adjustments.
Pitchfork acknowledge that their score changes are justified more by a shift in the culture surrounding these albums than by any change within the albums themselves, so the question switches from why, to why should I care? They also state that “writing is rewriting”, but only in the sense that revisions should be made before publication; apparently, like any good novel, reviews of course require their proofs and editing, and they even stress that clicking ‘publish’ on the internet is just like sending a final copy to the printing press. I’m not sure what internet they occupy, where “there is finally no more futzing and no takes-backsies. And then there it is, forever, a good and righteous piece of criticism”, but on my end I could update a review more easily than I can a Tweet. Indeed, the previous example still serves: in a digital age of rapid culture shifts, where albums can be updated as easily as they can be published, why should the reviews remain concrete? At least Anthony’s format genuinely can’t be changed, necessitating the publication of his follow-up videos.
If PF have ever achieved anything valuable, it was taking music criticism to the internet, leaving behind the dying platform of print media and becoming possibly the biggest music-based publication of the past two decades. So why take the traditional-media high road of rigidly standing by a piece once it’s published? In the article’s own ‘futzing’ justification, they imply that silently editing a published piece would impair its integrity, which is true. But were they to admit that they could simply have edited their scores without promoting the fact, it would expose the core problem: revising album ratings is just as valueless as awarding objective scores in the first place.
Of course the major response was predictable, essentially being, ‘if they changed the score for this album, why wouldn’t they change the score for *my favourite* album as well?’. The question is valid: why only these nineteen albums? If review scores from twenty years ago can change now, what importance could we possibly place on the scores currently being published?
Here they are, by the way:
| Artist | Album | Release Year | Original Score | New Score |
| --- | --- | --- | --- | --- |
| Rilo Kiley | Take Offs and Landings | 2001 | 4 | 8 |
| PJ Harvey | Stories From the City, Stories From the Sea | 2000 | 5 | 8.4 |
| Wilco | Sky Blue Sky | 2007 | 5.2 | 8.5 |
| Chief Keef | Back From the Dead | 2012 | 7.9 | 9.1 |
| Jeffrey Lewis | It’s the Ones Who’ve Cracked That the Light Shines Through | 2003 | 3.9 | 7.6 |
| Chairlift | Moth | 2016 | 7.6 | 8.5 |
| Prince | Musicology | 2004 | 5.8 | 7.8 |
| Foxygen | We Are the 21st Century Ambassadors of Peace & Magic | 2013 | 8.4 | 6.3 |
| Grimes | Miss Anthropocene | 2020 | 8.2 | 6.9 |
| Big Boi | Sir Lucious Left Foot: The Son of Chico Dusty | 2010 | 9.2 | 7.7 |
| Lana Del Rey | Born to Die | 2012 | 5.5 | 7.8 |
| Daft Punk | Discovery | 2001 | 6.4 | 10 |
| Daft Punk | Random Access Memories | 2013 | 8.8 | 6.8 |
| Interpol | Turn on the Bright Lights | 2002 | 9.5 | 7 |
| Liz Phair | Liz Phair | 2003 | 0 | 6 |
| The Strokes | Room on Fire | 2003 | 8 | 9.2 |
| Regina Spektor | Begin to Hope | 2006 | 7.5 | 8.5 |
| Charli XCX | Vroom Vroom EP | 2016 | 4.5 | 7.8 |
| Knxwledge | Hud Dreems | 2015 | 7.2 | 8.4 |
Nobody needs a fully detailed analysis of these changes, but just mentally plotting the music-culture context, genre, release date and whether the revision raised or lowered the score does reveal some key points. For example:
We can see the large generational shift within Pitchfork from early-2000s indie-rock-experimental-folk proclaimers to late-2010s popheads. The blog’s original success came from their attachment to the ‘indie’ wave, establishing themselves as taste-makers and bringing the ‘underground’ attention in a way not seen again until Anthony Fantano did the same for his brand of ‘Dark Prog’ experimental Hip-Hop and industrial/noise aesthetics. The kind of music they regularly praised even came to be known as ‘Pitchfolk’. But a whole generation has grown up since then, and in recent years Pitchfork have really come around to Pop music, batting now for the likes of Taylor Swift, Beyoncé and Grimes.
This is exemplified by the PJ Harvey and Liz Phair adjustments: both were originally given low scores for ‘going commercial’ (broadly speaking), and PF now recognise they let their bias and sexism get in the way of predicting how we would come to view these career changes. Similarly, while Melon was immediately comfortable naming Charli XCX – Vroom Vroom as his favourite EP of the year, Pitchfork were still ‘questioning the motives’ of Hyperpop; now that their Twitter followers have told them it’s cool, they have given it a significant bump.
Conversely, some old favourites dropped. Interpol and Foxygen both started their Pitchfork careers as high-scoring darlings, but that wave has rolled back, and both bands received deductions as we come to grips with the fact that their scene maybe didn’t have the long-lasting importance some of us thought it would.
Daft Punk’s inclusions here are guaranteed to cause upset. Back in 2001, Discovery got a 6.4; then, at about the halfway point between then and now, it came time to compile the best albums of the 2000s, and there it was at number three, alongside Arcade Fire’s 9.7 and Radiohead’s perfect 10. Along came Random Access Memories; we were all rightfully excited, and Pitchfork awarded it an 8.8, clearly noting Daft Punk’s already legendary status. It seems like everyone blinked all at once, because it’s somehow been nearly 10 years again, and the only news from the Daft Punk front is that RAM is to be their last effort, cementing their status for anyone who hadn’t yet realised. Yet Pitchfork have backpedalled: apparently we aren’t all listening to RAM quite as much as they thought we would be. My perspective on this: of the 250 most-collected records on Discogs (~600,000 users), RAM is the only release from the 2010s. Even the 2000s are vastly underrepresented; the only 2000s release more collected than RAM is Radiohead’s perfect-10 Kid A, and one of the small handful of others is Discovery. Consider all of this, and briefly ponder RAM’s deduction of two points and Discovery’s addition of nearly four.

Of course, everyone is more offended when an album they enjoy gets a low score than they are pleased when an album they don’t care about gets a high one.
There is a plethora of scores that should have been changed but weren’t. Perhaps the most glaring omission from the list is Frank Ocean’s Blonde. Recently named by Pitchfork as the best album of the 2010s, it was originally awarded a 9.0; so what’s with all the other albums from the 2010s that still have higher scores? Maybe that one just goes without saying and is therefore not worthy of inclusion in the list of ‘conversation starters’, but then again, what is?
If awarding scores is futile, then so is discussing them; so I would like to acknowledge a genuinely valuable quality that Pitchfork has always had, arguably their main purpose: a music discovery tool. I still use Pitchfork to find new music, and some of their written content is quite good too. Any music criticism institution, once you familiarise yourself with its ins and outs, becomes transparent: you see past the rating and learn to use it only as a rough gauge of how worth your time an album may be. Here’s a tip: find some of your favourite albums with good Pitchfork scores, look up who wrote each review, and seek out the other albums they wrote positively about.
The thing with numerical ratings is that they are too easy to share, and subsequently they get lots of clicks. I really don’t know how many people out there wait for a high Pitchfork score to come along and choose to listen to an album based on it, but it doesn’t look like many. The scores now seem to concern the Stan Twitter culture of established fan bases, who pit the scores of their artists against those of rivals. Low scores give rivals fuel for the fire, while high scores often contribute only to the comparatively silent supportive side.
The Best New Music tag is a much better feature that helps circumvent these issues. Rather than permanently judging an album for generations to come, the assessment is confined to the bubble of the week in which the album was released, so the BNM tag culturally expires by the time the next exciting album arrives, placing much less emphasis on its lasting importance. Their best-of-the-year and best-of-the-decade lists are also very easy to sift through, and the most rewarding way to get at the good stuff without the rare patience required to actually read the whole written portion of a review.
With all that in mind, the verdict on the rescoring stunt is this: the merit of music criticism is too easily damaged by the attempted objectivity of album ratings. This effort to stir up discussion of how good these albums actually are might appear virtuous if it weren’t such a shallow campaign to catch more impressions and draw in some of those sweet engagement stats. Does anyone really care if Discovery is now a 10 while Blonde remains not? Not only have these score adjustments contributed nothing of value to the reviews to which they are attached, they suggest a worsening of the problem at hand: Pitchfork’s growing emphasis on the meta-culture surrounding the scores themselves. The more popular the numerical ratings get, the lower the quality of review required to succeed by the publication’s shifting goalposts of success.