Jessen, Johan and Anker Helms Jorgensen. “Aggregated Trustworthiness: Redefining Online Credibility through Social Validation.” First Monday: Peer-Reviewed Journal on the Internet 17.1 (2012): n. pag. Web.
While this article was not what I was originally hoping it would be, it actually has more resonance with my work than those first pangs of disappointment would have led me to believe. When I read the phrase “aggregated trustworthiness,” I had (foolishly) guessed that it would be referring to the way people establish authority within an online community—through regular, insightful, traceable participation. What they are actually talking about, though, once you get past the abstract, is something different.
Jessen and Jorgensen aren’t talking about community; they’re talking about facts. Specifically, they are urging a move away from Fogg’s conception of source credibility, which is based on a top-down model:
to this one, which takes into account what they call aggregated and what I might call distributed authority:
Here, “social validation” is the “large-scale” feedback from other users, like Facebook likes or recipe ratings. “Profile” is linked, I think, to ethos: the verifiable identity of the people who endorse the information comes into play via their Twitter feed, personal site, etc. Finally, “authority” is the most traditional form of credibility, encompassing both the source of the information itself (name recognition) and the identity of other authority figures (or “trustees”) who vouch for its veracity.
The key question that Jessen and Jorgensen seek to answer seems to be: if sites like Twitter and Wikipedia fail to conform to Fogg’s theory of online credibility, then why are they viewed as being credible? There are some interesting points in the middle about how the very things that make Wikipedia such a failure under Fogg’s model ensure its credibility under theirs—collaboration and vetting by the collective. I talk about this a lot in one of my posts for Gail Hawisher’s class, come to think of it.
And this is all very interesting, of course, and I suppose it can still apply to establishing credibility within a community—certainly there are commenting platforms that allow for an aggregation of ratings when a user makes a comment. Like this, for instance:
In this comment (on a WordPress blog, but through software called Intense Debate), the individual comment has been given twelve upvotes (as you can see in the left-hand corner). The user herself, though, also has a rating (123, which is very high), showing how well regarded she is based on the up- and down-votes given to all of her past comments. This helps to take the concept of aggregated trustworthiness and literally codify it.
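The mechanism the screenshot illustrates can be sketched in a few lines of code: each comment carries its own vote tally, and the user's reputation is an aggregate over every comment they have made. This is a minimal sketch under stated assumptions; Intense Debate's actual scoring formula isn't public, so the rule here (reputation = sum of net votes across all comments) and all the names are illustrative, not the platform's real implementation.

```python
# Hypothetical model of "aggregated trustworthiness" as a commenting
# platform might codify it. The scoring rule is an assumption for
# illustration, not Intense Debate's actual algorithm.

from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def net_votes(self) -> int:
        # A single comment's standing: upvotes minus downvotes.
        return self.upvotes - self.downvotes

@dataclass
class User:
    name: str
    comments: list = field(default_factory=list)

    @property
    def reputation(self) -> int:
        # Aggregated trustworthiness: the sum of net votes on every
        # comment the user has ever posted.
        return sum(c.net_votes for c in self.comments)

user = User("example_commenter")
user.comments.append(Comment("Recent comment", upvotes=12))
user.comments.append(Comment("Older comment", upvotes=115, downvotes=4))
print(user.reputation)  # prints 123
```

Under this toy rule, the twelve upvotes on the visible comment and the net votes on earlier comments roll up into the single reputation number displayed beside the user's name.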
Where the stakes get high in the discussion, however, is not within the comments but on blogs as a whole. Jessen and Jorgensen touch on this briefly, but I’d like to tease it out a bit: although blog posts may not be open to collective editing in the same way that Wikipedia is, the existence of—and persistence of—comments is a vital part of the aggregated trustworthiness of each post and of the blog itself. Allowing dissent, then, becomes intrinsic to establishing credibility. That is, when Jezebel demotes or removes comments that critique the site or the author, saying that “if you don’t like it you don’t have to read it,” what they are really doing is attempting to draw on Fogg’s model of credibility and authority. Unfortunately for both Fogg and Jezebel, as Jessen and Jorgensen point out, that ship has sailed.