I found the article on the African-American blog community's response to the ABC News report on AIDS in Black America really interesting. Overall, I thought it was excellent, but here I want to talk about a few things that confused and troubled me.
One thing I wanted to learn more about in this article was the methodology. While the authors explain that they coded the blog, they give an unclear indication of what served as their unit of analysis. They explain that they coded a number of comments, but they also say they analyzed direct quotations and threads of conversation. I'm wondering how they segmented the data into chunks, and how they determined what the parameters of those chunks were. Also, the authors explain rather vaguely that "quotations were coded and analyzed according to themes that they represented" (577). How were these themes chosen and deciphered? How did the authors negotiate differences and similarities among the "candidate codes" they came up with when they coded separately? What was their inter-rater reliability?
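Just to make concrete what I mean by inter-rater reliability: something like the sketch below is the sort of check I'd have liked to see reported. The coder labels and category names here are entirely made up for illustration, and this assumes a simple two-coder, one-code-per-comment setup rather than anything from the actual study.

```python
# Hypothetical illustration only: two coders' labels for the same set of
# blog comments, using made-up category names (not the study's actual codes).
from collections import Counter

coder_a = ["resistance", "resistance", "individual behavior", "ineffective leadership", "resistance", "support"]
coder_b = ["resistance", "support", "individual behavior", "ineffective leadership", "resistance", "support"]

n = len(coder_a)

# Simple percent agreement: the share of comments both coders labeled the same way.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected agreement by chance, from each coder's marginal category frequencies.
freq_a = Counter(coder_a)
freq_b = Counter(coder_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))

# Cohen's kappa corrects the observed agreement for agreement expected by chance.
kappa = (observed - expected) / (1 - expected)

print(f"Percent agreement: {observed:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
```

Even a single number like this, reported alongside the coding scheme, would tell readers how consistently the two authors were applying their categories before they reconciled their lists.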
More importantly, it seems like the authors were expressly looking for signs of "resistance" in these blogs. They say that one of their research questions was to find if and how the "community provided an oppositional interpretation" of the ABC report (576). If you're looking for oppositional interpretations of the news report, you're likely to find and code for them. So it's not a surprise that the discussion portion of the article emphasizes these signs of resistance, while perhaps understating many of the other categories that emerged from the analysis.
I'm not saying that the authors were merely finding what they were looking for. But I am saying that their description of the methodology makes this an important question to ask. One problem I see is that the authors don't indicate the percentage of comments or quotations or threads (or whatever unit of analysis they were coding) that fell under each coding category. In their discussion, they make it appear that bloggers' comments resisting ABC News were overwhelmingly present in the data: "They [the bloggers] questioned the statistics, provided explanations for why the figures overstate the proportion of HIV infections among African Americans compared to other racial/ethnic groups, and critically analyzed how science is often misused to legitimize negative portrayals of Black people....The agency to resist these ascribed identities is situated in and often in opposition to the institutional power structure of existing AIDS discourse" (588). However, the authors provide no way of knowing whether these oppositional interpretations of the ABC report were the most salient feature of the blog that was analyzed. They don't compare the number of such comments to the number of comments that support and uphold the news report. For instance, how many comments gave "props" to ABC for airing the report and shining light on this issue? How many comments neither questioned nor challenged the findings of the report? There's also no way of knowing whether the "resistance" category was more or less salient than the "ineffective leadership," "Black cultural practices," or "individual behavior" categories.
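To make the point about reporting frequencies concrete, here is the kind of simple breakdown I'm asking for. The counts and category names below are invented for illustration; they are not figures from the study.

```python
# Hypothetical illustration only: invented counts of coded comments per category,
# not data from Kvasny and Igwe's article.
code_counts = {
    "resistance to ABC report": 34,
    "support for ABC report": 22,
    "ineffective leadership": 18,
    "Black cultural practices": 15,
    "individual behavior": 27,
}

total = sum(code_counts.values())

# Report each category as a share of all coded comments, so readers can judge
# how salient "resistance" actually was relative to the other categories.
for category, count in sorted(code_counts.items(), key=lambda item: -item[1]):
    print(f"{category:30s} {count:3d}  ({count / total:.0%})")
```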
Finally, let me take small issue with one potential category that the authors did not code for, but which they mention in passing. The authors notice "clear linguistic markers such as 'they' and 'us', ABC and BET to demarcate the outgroup and the ingroup, which serves as additional evidence of a shared group identity" (582). I realize that this was not one of the researchers' primary questions, but it would have been interesting if they had coded for demarcations of an "us/them" binary, and tried to understand where and when these demarcations occur. And I think it's important to ask whether, by making such demarcations, this blog community sustains a kind of unnecessary separation between Black and White, us and them, BET and ABC. To me the creation of this demarcation is double-edged. On the one hand, it serves to promote a kind of shared group identity, as the authors note. On the other hand, it serves to promote a kind of Otherizing of White people, White news sources, and White culture. This is certainly understandable given the history and persistence of racism in our country; moreover, it is understandable given the real bias in White-controlled news reports (see Teun van Dijk's Elite Discourse and Racism). Still, part of me cringes when I see us/them linguistic markers, and I think researchers have a responsibility to expose these binaries and comment on their potentially divisive implications.
2 comments:
I agree with wanting to know more about the methodology, though more from a selfish point of view. What Kvasny and Igwe did seems to be similar to what I am attempting to do with the birth control discussion boards (discussion boards vs. blogs would be something interesting in and of itself to research... or comments on blogs vs. comments on discussion boards... hmmm). Anyway, I found two of their citations through OhioLINK and am going to read them this upcoming week, so if anything jumps out in them that explains their method more, I will let you know.
John, as always, raised some really good questions.
I went back to the article again to see what I might find, and I still have a lot of the same questions as you. But, here are a few thoughts:
Unit of analysis concern: "A total of 128 usable comments groups – 62 posted before the report aired and 66 posted afterwards – were posted, and each comment served as a unit of analysis" (p. 577). I read this as meaning that an entire post (which seems to be what they mean by 'comment') by a person would be the unit of analysis. But I agree with you, it does leave a little too much room for reader interpretation.
Kvasny and Igwe also say, "Our approach proceeded in a top-down fashion by first analyzing direct quotations, and purposefully analyzing threads of conversation" (p. 577). The phrase "top-down" was a new one for me in this context, and honestly, I'm still not entirely sure what they did from reading this sentence. My best guess at this point is that they started from the individual quotations (another problem here: why are they switching between "quotations" and "comments"? Are these the same thing?) and then moved into looking at larger themes across the posts, i.e., the threads. I need some greater clarification here too, though...
You mentioned inter-rater reliability. Their explanation was, "Both authors independently constructed a list of candidate codes based on data analysis. The authors then compared lists, and cocreated a common set of codes and associated definitions based on list overlap and joint sense making of the data." In this case, it seems that inter-rater reliability is not an issue, because of their invocation of "joint sense making." The way I read that is that when there were disagreements, those were all resolved one by one. This is what Smagorinsky (2008) called "collaborative coding." He characterizes it as a Vygotskian move in that "agreement" in regard to traditional measures of inter-rater reliability does not necessarily equate to reliability... Smagorinsky explains, "we reach agreement on each code through collaborative discussion rather than independent corroboration" (p. 401). While this approach is a break from traditionally accepted social science research, it does seem to be becoming more common.