Explaining Google’s popularity
I should be prepping for class, but I want to add an alternative perspective to a question raised about Google’s popularity. The Freakonomics blog features an interesting Q&A with Hal Varian today; I recommend heading over to check out how Google’s chief economist answers some questions submitted by readers last week.
The Official Google Blog takes one of the questions and posts an expanded response to it. The question:
How can we explain the fairly entrenched position of Google, even though the differences in search algorithms are now only recognizable at the margins?
Varian addresses three possible explanations: supply-side economies of scale, lock-in, and network effects. He dismisses all of these (see the post for details) and then goes on to argue that it is Google’s superior search quality that makes it as popular as it is.
I don’t buy it, especially the dismissal of the lock-in factor. While I realize that it seems as though another search engine is just a simple click away (and sure, technically it is), I have observed too many Internet users in my research to know that in reality it is not that simple at all. First, there is the lock-in that comes from having Google as the default search engine in some browsers (e.g., Firefox). Of course, related issues apply to other search engines as well. Why, one might ask, does Yahoo! still enjoy a sizeable market share in search, at least in the U.S.? It is probably related to the fact that more people seem to have a personalized version of Yahoo! as their start page in their browsers than any other customized starting page. Or maybe it is because Yahoo! also offers sufficiently good search results.
This then leads us to another issue: the assumption that users carefully consider or even realize that there are differences in what search engines return in response to their queries. There is room for much more research here (some of which one of my students may pursue soon), but based on what we know so far, some people tend to have a tremendous amount of trust in results presented by Google. One could say this is due to Google’s superior quality, but research has found that even when results are manipulated and the less relevant ones are offered up on top, some users will click on them, presumably because they believe them to be the most relevant. (I’d really like to see that study replicated on users of other search engines to see how this compares across services. Additional tweaks to that study design could also help us learn more about these issues.)
We still have a lot to learn about the extent to which users actually consider the quality of search engines when using them. Presumably, as long as they find (or think they have found!) what they are looking for, they will be satisfied. However, again, research (e.g., here, with more in the works) suggests that some users are very bad at assessing the quality of the material that shows up on pages linked from search engine results, which then puts into question their ability to evaluate the quality of search engine results.
I am not suggesting that Google is not a good search engine, nor am I suggesting that it is necessarily not the best search engine (although how one defines quality in this domain is tricky). I would love to see some really careful studies on this, actually. What I am suggesting is that market share in searches should not be confused with quality of search results. I know that there are some very talented folks at Google working on search quality, some of whom I know and with whom I have had very interesting and helpful conversations. I’m grateful for the work that they do. Nonetheless, that’s a different issue. My point here is that I would not dismiss lock-in and other factors in explaining the service’s popularity, based on what my research has taught me about how people use search engines.
I have to add one more note here, as it is related and it is something I have been trying to insert into discussions of this sort for years. It may be helpful to remember that most search engine market share data look at the proportion of searches, not the proportion of searchers. Since power users are more likely to be Google users (various data sets I work with supply evidence for this), I suspect that if we were to look at market share based on user figures, Google’s share would be smaller than it seems based on figures about the proportion of searches. I’ve been commenting on this for years, but the statistics that continue to be discussed concern searches, not searchers. Of course, both figures may be relevant, but which one is more relevant depends on the particular questions asked. When discussing quality, it seems that the proportion of users would be just as important to consider (if not more so) than the proportion of searches, since presumably all users would want to use the highest quality search engine. Point being: if Google is so superior, and that explains its popularity, then why doesn’t it have a much larger market share, especially regarding proportion of users?
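The searches-vs-searchers distinction can be illustrated with a quick back-of-the-envelope calculation. All numbers below are made up purely for illustration; the point is only that when one engine’s users search more heavily, its share of searches can far exceed its share of searchers:

```python
# Hypothetical figures: Google has fewer users in this toy example,
# but they are "power users" who search much more often per day.
engines = {
    # engine: (number of users, average searches per user per day)
    "Google": (40, 20),
    "Other": (60, 5),
}

total_users = sum(n for n, _ in engines.values())          # 100 users
total_searches = sum(n * rate for n, rate in engines.values())  # 1100 searches

for engine, (n, rate) in engines.items():
    user_share = n / total_users
    search_share = (n * rate) / total_searches
    print(f"{engine}: {user_share:.0%} of searchers, {search_share:.0%} of searches")
# Google: 40% of searchers, 73% of searches
# Other: 60% of searchers, 27% of searches
```

With these invented numbers, Google serves a minority of searchers yet a large majority of searches, which is exactly why the two market-share statistics can tell very different stories.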
UPDATE: Before trying to explain Google’s popularity today by pointing to why people turned to it in the earlier part of this decade, I think it’s worth noting that the Web of 2008 is very different from the Web of 2001/02, when people started migrating over to Google en masse. Explaining that trend doesn’t necessarily say much about why people may stick with it today and what, if anything, would inspire them to try a new service now.
UPDATE 2: Perhaps worth noting here is that I do not think of “lock-in” in the completely restrictive sense of the term. Of course, I know that there is no technical lock-out from other options; my point was that, given how people use the Internet for information seeking, something similar is going on nonetheless.
February 25th, 2008 at 6:14 pm
It is indeed an interesting idea, and I think there are enough examples throughout history. I believe the arrangement of the modern keyboard and its alternative (i.e., QWERTY vs. Dvorak) is a very good example of lock-in and transition costs. Recent Hitwise data show that Google’s other services lag far behind its search market share, which indeed strengthens the point you are making (I wish they had some comparison with alternatives to Google’s online services).
But mostly, I agree with you that this would be a fascinating topic to look at. For example, it would actually be really interesting to see if the tech-savviness of users (or their digital literacy) plays a role in their search engine mobility. And as to the claim of Google’s search quality, I wonder if the fact that Google has so much information about our online behavior makes its results more relevant/good/accurate for a specific user.
Just a few thoughts… 🙂
February 26th, 2008 at 3:16 pm
As I said over at CT, I came over to Google from a mix of Yahoo, Webcrawler, and Excite after receiving a recommendation from quite a savvy moderator of a discussion forum. He thought it was more sensitive than other engines and endorsed it on that basis. Since then I’ve stuck with it – not out of brand loyalty, but convenience and habit I guess.
February 27th, 2008 at 1:04 am
[…] Update: Not surprisingly, Eszter says it better. […]
March 3rd, 2008 at 10:22 pm
Don’t forget: Varian is the economist who wrote a book about the virtues of DRM, and how such a nifty technology will help the media industry profit in the digital era.
I reckon habit, that great predictor that is what you did at time t-1, is the reason people stick with Google. They made the shift, and the choice for doing a search holds steady until something happens to shock behavior.
Human behavior is wonderfully predictable as long as we know what you did last time: