Matzat, Uwe. “A Theory of Relational Signals in Online Groups.” New Media & Society 11.2 (2009): 375–394. Print.
This is an article that seemed like it was going to be really useful and interesting and instead sort of fizzled out for my purposes. It looked at first as though it might have something to say about why problems arise in communication within online communities, and I suppose in some ways it did. The unusual thing, though, was that it is written specifically as advice for online group administrators—how do they overcome the typical problems that plague online communities and create a harmonious and productive group?
What follows juggles a lot of factors at once, so I’m going to summarize in numbered points and then do a little more discussion at the end.
- In a community, individuals have both common goals and individual goals. These may or may not be in conflict with one another.
- These goals can sometimes be achieved independently (as, say, on an auction site), and sometimes they depend on others for fulfillment (as in a support group).
- There are three major interaction problems that arise in communities that can threaten the group’s ability to function: (1) opportunity/“free-rider” problems (not enough people are contributing, so the common goal cannot be reached), (2) trust problems (the worry that people will use disclosed information in unanticipated ways), and (3) loyalty problems (people aren’t committed enough to the group to stick around).
- For a group to successfully achieve its goals, these problems must be overcome, but here’s the thesis: different kinds of communities require different kinds of responses in order to combat these problems.
- One part of avoiding these problems is making sure that people prioritize the common group goal as their decision frame rather than individual goals.
- There are three kinds of relational signals that let us know what goals a person is prioritizing: (1) bilateral interaction (relationships between two people), (2) participation in common group activities, and (3) relationship with the administrator (Matzat points out here that the administrators’ actions signal to the group what kinds of interactions are expected).
- The more “relational interests” a group has, the more its members pay attention to these signals and the more smoothly the group runs.
- Group administrators can change how the group views relational interests in two ways. In the short term, they can make policies that exert social control, and in the long term, they can change the level of interdependency necessary to achieve group goals.
- There are three types of social control that the group administrators can use: (1) frame stabilizing tools, like group meetings or the appeal to norms and things that separate this group from other groups, (2) indirect monitoring, like encouraging self-policing among the community, which lets people show their willingness to be a part of the norms by enforcing them on other people, and (3) direct control, through tangible (as opposed to perceived) rewards for “right” participation.
- Frame stabilization and indirect monitoring are much more effective in high-relationship communities, like support groups, while direct control works best in less relational contexts like auction sites.
- The more interdependent the group’s goals are, the more interested the participants are in relationship-building, the more attention they pay to the signals they send, and the less direct control the administrator needs to exert to keep things running smoothly.
- Communities embedded in the real world are more relational.
- Making a group fulfill multiple goals at once helps resolve the loyalty and trust problems.
- Short-term strategies for administrators should use the appropriate social control for their kind of group. In the long term, though, they can actually change what kind of group they have by increasing either multifunctionality or social embeddedness.
Throughout my reading I kept struggling with some of the statements that Matzat makes about how communities work, mostly because I was comparing what he was saying to the community at Jezebel. But then, in some ways that helps me isolate why Jezebel is problematic. Do people really have a shared goal that can be reinforced? I would say no, unless it is to provide additional content for Gawker Media. But of course, that isn’t the conscious goal for anyone. In fact, as I wrote in a letter to editor-in-chief Jessica Coen (which maybe I will post one of these days), it is precisely NOT having an articulated goal that causes many of the admin-user kerfuffles on the site.
It’s just interesting to see some of these social controls being levied on the site: allowing users to nominate for movement or actually move posts off of the main page when they don’t conform to expectations, occasional (but not consistent or easily found) commenting guides, the editors’ active interference in certain comment threads. Would the starring system count as indirect in that it is based on perceived value, or as direct in that it is given concrete form? Or would it be what Kim (2000) calls a symbol, in a reference I was going to read until I realized it is a book and not an article? Matzat argues that administrators largely set the tone for acceptable behavior and norms, but on Jezebel the messages have been wildly inconsistent. It stands to reason, though, I suppose—Jezebel was never “supposed to” be a community; the community grew up on its own, and now the editors are the reluctant administrators of a group they never asked to take charge of.
Maybe they should read the article.