Nice results

I have not been blogging much (I don’t just mean writing, I also mean reading), because I am really close to sending off an article for review. Any moment I have at my machine that is not taken up by immediate email correspondence, I want to spend on my paper. But I know that some people stop by the blog regularly, and I don’t want to disappoint them by having nothing new here for days on end when they visit. There are all sorts of things on my mind for blogging, but I just don’t want to take the time away from my piece right now.

I’ve figured out a middle ground: I’ll briefly blog about the paper I am writing. 🙂 This may get a bit technical, but fellow geeks who stop by here may appreciate it.

One of the goals of my dissertation project was to figure out survey measures of people’s actual online skills. In most of the existing literature, when people include measures of computer skills (the existing lit is mostly about computer-use skill, not online skills), they rely on people’s self-perceived abilities. That is, researchers simply ask users to rate their skill. As you can imagine, this measure may not be very good. However, collecting data on actual skill is quite time-consuming, labor-intensive and expensive, so we often have no choice but to rely on survey measures. The question, then, is whether we could come up with better survey measures.

In my project, I measured people’s (one hundred randomly selected adult Internet users’) ability to find various types of information online and their efficiency (speed) in doing so. I also asked participants to rate their skills (as per the traditional skill measures) and to rate their understanding of a few dozen computer and Internet-related items. (There’s more on what I did to see whether perceived understanding is a good proxy for actual knowledge, but for that you’ll have to read the paper. 😉)

I then checked how well the various survey measures correlate with actual skill and constructed an index measure of skill based on the most highly correlated survey questions. Next, I compared how much of the variance in actual skill is explained by the self-perceived skill measure versus by my index measure based on the knowledge items. I am happy to report that my index measure is a better predictor of skill than people’s self-perceived abilities.
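For the fellow geeks, the comparison boils down to something like the sketch below. This is not my actual analysis code; it is a minimal Python illustration with made-up column names (actual_skill, self_rated_skill, know_*) and an arbitrary cutoff for how many knowledge items go into the index.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data: one row per participant, with measured ("actual")
    # skill, a self-rated skill item, and a set of knowledge items.
    df = pd.read_csv("skill_survey.csv")
    knowledge_items = [c for c in df.columns if c.startswith("know_")]

    # 1. Correlate each knowledge item with actual (measured) skill.
    corrs = df[knowledge_items].corrwith(df["actual_skill"])

    # 2. Build an index from the most highly correlated items
    #    (here: a simple mean of the top five, purely for illustration).
    top_items = corrs.sort_values(ascending=False).head(5).index
    df["knowledge_index"] = df[top_items].mean(axis=1)

    # 3. Compare the variance in actual skill explained (R^2) by each predictor.
    def r_squared(predictor):
        X = sm.add_constant(df[[predictor]])
        return sm.OLS(df["actual_skill"], X).fit().rsquared

    print("Self-perceived skill R^2:", r_squared("self_rated_skill"))
    print("Knowledge-index R^2:     ", r_squared("knowledge_index"))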

An additional exciting bonus is that some of my survey measures were replicated on a national data set (the General Social Survey 2000 & 2002 Internet modules) so others can use these better measures as well.

I’m excited. The study I did was pretty risky in some ways. There was no guarantee that I would even find any variance on the most crucial variables (such as skill). But I did. And now these findings comparing the new and traditional survey measures of skill suggest that there is something generalizable there, which is exciting.

Yes, I’m a data geek.

4 Responses to “Nice results”

  1. brayden Says:

    And the geeks in the gallery applaud!

  2. Mick Says:

    Interesting. You wouldn’t care to share the results or what kind of skills you measured, would you? Just a taste….

  3. eszter Says:

    Mick, thanks for your interest. One of the reasons I don’t blog too many details about my projects is discussed in this blog post. Off-blog, I’m open to describing the project in a bit more detail.

  4. Luke Says:

    Very nice! I’m glad that the research is proving fruitful. I think your area of research is very interesting.