I finally got around yesterday to spending some quality time with Eli Pariser’s The Filter Bubble, and I’m sure that I’ll be using at least parts of it in the digital literacies course that I’m scheduled to teach in the fall. There are plenty of reviews out there, so I don’t feel any real need to offer an extended read of the book. If you’re like me, though, you tend to flag tech books, and add them to the “when I have time” pile. I’m glad that Pariser’s book made it to the top of that pile for me; it’s a book that really speaks to issues of privacy, personalization, and the “next wave” on the Internet. It’s unusual for a pop book to have a significant impact on the academic work that I do, but this book might, depending on how I eventually take it up.
I mention it here because I was reading it today at the same time that a piece over at Gawker came across my radar, “Just because you don’t like a study doesn’t mean it is wrong,” an unfortunately titled essay about a problem that I think more and more academics will begin to encounter, especially as access to academic work is opened up. The study, “Women (Not) Watching Women: Leisure Time, Television, and Implications for Televised Coverage of Women’s Sports,” appeared in the journal Communication, Culture and Critique, and was picked up by a number of mainstream outlets, who proceeded to give their coverage splashy titles to drive their pageviews. Of course. And this snowballed into the typical calm, careful, and measured discussion about the merits of the study, which is to say a race amongst online outlets to see who could misunderstand, mischaracterize, and abuse the study the most, by responding to each other’s headlines rather than to the coverage or (God forbid!) reading the actual 20 pages for themselves. Hamilton Nolan notes:
Not only were the purposes and conclusions of this study mischaracterized, but that mischaracterization led to widespread derision from feminist blogs over methodologies that were explicitly feminist in nature.
Nolan’s piece does what it can to undo this, by reading the actual study and following up with an interview with one of the authors, but really, I can’t imagine that those folks who dismissed the study as stupid bullshit are likely to follow up on it, nor is the emotional damage of those attacks going to disappear with the wave of a magic wand, no matter how well-intentioned. To say nothing of the Google rankings.
Part of what Pariser talks about in Filter Bubble is the way that outlets like Gawker and others measure success, drive traffic, and conduct business. I’m not personally interested in demonizing them for it, because they’re simply figuring out (faster than most) how to play the cards they’ve (and we’ve) been dealt. Nor am I going to launch into a rant about the misappropriation of academic work, or suggest that the almost-certain-albeit-unintended outcome of open access will be more of this kind of “coverage.”
What’s chilling for me is how there is no longer much of a threshold that divides browsing from linking or choosing or endorsing. Maybe what I want to say is that Pariser’s tracking a phenomenon that erases the boundaries between Daniel Kahneman’s two different systems of thinking–we tend to believe that we are the sum of our conscious choices, but online, we’re not. Every action, every link followed, every status update we like, every extra minute we stay on this page rather than moving on (regardless of the reasons for it) is being tracked, analyzed, collated, and used to shape our experiences and the information we have access to, and that’s spooky to me.
The revolutionary thing about Google’s PageRank was that the threshold necessary to link to a page meant that a link signified some sort of endorsement (even a negative link was still an endorsement of its importance), but the optimizers broke it because that threshold vanished with the ease of automated blog/wiki spam assaults. PageRank struggled increasingly to sift the information from the noise. If I read Pariser right, what’s happening now is that search personalization simply ignores the distinction–everything we do online is information. Everything we do signifies equally about our preferences, our tastes, our affinities–of course, the flip side of “everything is information” is “everything is noise.” Our noise happens to be a little softer and perhaps more patterned than the spambots’.
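If it helps to see the mechanics, here is a quick toy sketch in Python of the link-as-endorsement logic. It’s mine, not Pariser’s and certainly not Google’s, and everything in it (the function, the damping factor, the little made-up web of pages) is wildly simplified for illustration: each page’s score gets passed along its outgoing links, so a link functions as a vote, and a pile of automated spam links inflates a page’s score with “endorsements” that cost nothing to make.

    def pagerank(links, damping=0.85, iterations=50):
        # links: dict mapping each page to the list of pages it links to
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                targets = outgoing or pages   # a page with no links spreads its weight evenly
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share  # each link passes along a share of the linker's score
            rank = new_rank
        return rank

    # An "honest" web: two pages independently endorse page c.
    print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))

    # A spam assault: twenty automated pages all point at "spam",
    # inflating its score with links that cost nothing to produce.
    farm = {"a": ["c"], "b": ["c"], "c": ["a"], "spam": []}
    farm.update({f"bot{i}": ["spam"] for i in range(20)})
    print(pagerank(farm))

Run both and, in this toy setup at least, “spam” ends up outranking the page that earned its links from actual readers, which is the collapse of that threshold in miniature.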
For what it’s worth, Pariser does offer some tips for popping the bubble. Guess what I’ve been doing today?
Laura
July 17, 2012 8:24 am
I liked The Filter Bubble, though I ruthlessly don’t follow its advice. I have used parts of it though to talk to students about what they’re “really” sharing. Hint: it’s not just what they’re watching right now. Another good book along these lines is Blown to Bits, which is a free download. I also like Program or be Programmed, which is less about programming and more about hidden data. If you want to go back a bit on this issue, Neal Stephenson’s In the Beginning, There was the Command Line is a rant about how GUI interfaces prevent us from knowing what our computers are doing. Sound familiar?
Had no idea this was going to turn into a book list. It’s a class I want to teach.
Collin
July 17, 2012 6:12 pm
Still, thanks for the list 🙂 I read Stephenson’s book when it first came out, and PoBP is another in my pile…I’ll check out Blown to Bits as well…
c