In Google We Trust

Question: How do you want to find Open Educational Resources?


[Image: OER Google search box]

As mentioned in my last post, our expert workshop included user testing several OER sites including JORUM, Merlot, Xpert and Connexions. To this list must be added Google, whose presence hovered constantly throughout the workshop.

Participants were asked to find teaching resources on qualitative interviewing and to evaluate the user interface more generally. We divided into two groups for the session, and all the sites reviewed were new to the group I was with.

Speed of Judgement

This sort of user testing always creates two impressions: firstly, how valuable it is to see a familiar website through someone else’s eyes, and secondly, how difficult web design is because of the speed with which users make decisions and judgements online. I had expected to run into time difficulties getting through all the websites. In practice, judgements were made in the space of a few minutes at most. The value of having experts review the sites was the quality of the general conversation the tasks engendered. The strongest theme to emerge from the user testing was one of frustration. The most frequently expressed sources included:

  • Results judged not to be relevant
  • Not enough information provided about resources before selection
  • Resources were slow to load (particularly JORUM)
  • Navigation and presentation of results caused confusion

Using Google to Search

The session took an unexpected turn towards the end when one of the participants suggested comparing results with a Google search. All agreed the Google search produced the best results. What was interesting about this consensus was the speed and certainty of the judgement without moving away from the results page; further questioning showed that within this incredibly short amount of time a number of complex evaluations had been made. The success of the Google search was described in terms of:

  • Relevance of results
  • Clarity of results – knowing what the resource is from title and text
  • Transparency of authorship (through academic URLs)
  • Ability to distinguish file format (such as PDFs, PowerPoint)

Just how relevant the results are is a moot point; Google’s ability to present information on the results page with search terms highlighted (rather than the first few sentences or an abstract) may contribute to making results seem more relevant than they actually are. However, what matters is that Google provides enough information for people to quickly establish personal relevance, which in turn leads to trusting the search.
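The difference between the two presentation styles can be sketched in a few lines of Python. This is a minimal keyword-in-context illustration, not Google’s actual snippet algorithm; the example document and function names are invented for the demonstration:

```python
import re

def first_sentences(text, n=1):
    """Baseline presentation: show the opening of the document,
    regardless of where the query terms appear."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return " ".join(sentences[:n])

def kwic_snippet(text, query, window=40):
    """Keyword-in-context presentation: centre the snippet on the
    query hits and mark the matched terms, roughly in the style of
    a search-engine results page."""
    terms = query.lower().split()
    lower = text.lower()
    positions = [lower.find(t) for t in terms if t in lower]
    if not positions:
        return first_sentences(text)
    start = max(0, min(positions) - window)
    end = min(len(text), max(positions) + window)
    snippet = text[start:end]
    # Mark each query term so the reader's eye lands on it first.
    for t in terms:
        snippet = re.sub(re.escape(t), lambda m: f"**{m.group(0)}**",
                         snippet, flags=re.IGNORECASE)
    return ("…" if start > 0 else "") + snippet + ("…" if end < len(text) else "")

doc = ("This module introduces research methods for the social sciences. "
       "Week 4 covers qualitative interviewing techniques, including "
       "semi-structured interviews and transcription.")

print(first_sentences(doc))                          # opening sentence only
print(kwic_snippet(doc, "qualitative interviewing"))  # centred on the query
```

With the opening-sentence style, a searcher sees nothing about interviewing at all and has to click through to judge relevance; with the keyword-in-context style, the query terms are visibly present in the snippet, which may be exactly the mechanism that makes Google’s results *feel* more relevant at a glance.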


The trust invested in Google in our user testing meant that irrelevant results (such as commercial ones) were simply not noticed. People recognise that it is a general-purpose search engine and so seem to have adopted strategies to focus only on relevant results. Without this trust and familiarity, equivalent irrelevant results in OER searches acted to make participants mistrust the site. This mistrust was compounded by the initial expectation that the OER sites should be more relevant than Google because of their academic focus. It must also be added that the percentage of ‘irrelevant’ results was much higher with the OER sites – participants had difficulty identifying many resources for interviewing.

What the user testing demonstrated was the pervasiveness of Google in determining how people make sense of searching online. Even though not part of the design, both groups automatically compared all the sites to Google; my group simply took it further in doing a comparison search.

Google has invested a huge amount of money in search, and no OER search is ever going to compete. But maybe understanding how Google generates trust could help in exploring alternative means of developing trust amongst OER site users. The awareness of how people interact with the resources, gained through user testing, is also a good reality check for all those involved with promoting the use of OERs.

This entry was posted in C-SAP OER Collections project, OER discovery, OER Phase II, Research methods. Bookmark the permalink.

8 Responses to In Google We Trust

  1. Pat says:

    I’d agree that Google is the answer in a lot of ways – but also that the trust is partially based on familiarity?

    I wish I had the link to hand, but I read somewhere that people often decide if a website is useful within the first 15 seconds, and after that you’re fighting a losing battle / opinions will never change. User experience is a key issue – Google wins slightly by being so dominant that no one can really find a decent alternative.

    Most sites fail miserably on user experience.

    I’d be keen to hear any of the Xpert feedback?

    • Isabelle Brent says:

      I’d agree that trust and familiarity go hand in hand and I am sure much of Google’s popularity lies in its familiarity and lack of alternatives. However, I do think Google gained its dominance through its emphasis on usability. Its first core value is ‘focus on the user and all else will follow’ and this approach has paid dividends.

      I would love to see some references on how long people spend on websites. The references I have found seem to refer to debatable psychological assumptions, like the claim that it takes four seconds to form a lasting impression. As far as I know that idea refers to the concept of ‘adaptive unconsciousness’; an interesting idea, but not the most helpful in judging the effectiveness of websites since it relates to what is inaccessible to conscious awareness. If the judgements people make about websites are below consciousness, then we may as well all go home. I am more interested in the secondary level of user experience reflected in our user tests, where people are able to quite clearly describe what it is they like and dislike about websites. As I mentioned earlier, what is so interesting about user testing is how articulate people are in describing what they do and don’t like – the only difficulty is the artificiality of getting people to verbalise what are very quick mental assumptions.

      And I really think the whole field could do with more user testing. There are all sorts of complicated ways of going about it but the most basic form with just a few participants is so straightforward but generates a huge amount of useful information.

      I think I covered the main part of the Xpert feedback in my last comment but I’m in the process of writing up the Report for the workshop and will extract the relevant sections.

      • Pat says:

        I’d agree every site could do with more user testing – but we also have two distinct use cases, and I think I commented on this on one of Anna’s recipe blogs – people coming to find OERs to adapt are a different group from people coming to learn.

        Most of the requests I get at Xpert are for new features to address technical problems.

        This is a recent blog by me about how most of Xpert’s searches now happen via our APIs – so we are a search engine a lot of people don’t even know they are using. So we can use other people’s trust (most of the API searches come from subject centres) almost as a proxy – which extends the Google trust issue into interesting areas, I think.

  2. Pat says:

    Oh and you should have swapped “i’m feeling lucky” for “i’m feeling learny” 🙂

  3. Andy says:

    Really interesting post. In terms of relevance of items found in Google, were these judged relevant because of subject matter, openness or both? Did users even care?

    • Isabelle Brent says:

      Openness didn’t come into it; our participants had no background in OERs, and some expressed reservations about making work open for a variety of reasons. The underlying assumption was that if the resource was for teaching then the licensing didn’t matter (which turns out to be another reason why participants were reluctant to make their own work open, in case they had used copyrighted images). One further interesting perspective that all of my group expressed was that the production of teaching resources was their own responsibility, and they would only use resources from the web for their own preparation. It almost felt like a form of cheating to take an exercise from the web – whether the source was open or not. If that is the primary purpose of using online materials, then the licensing becomes even less important to the user.

      Maybe more has to be done to persuade people of the advantages of the CC license. I know attempts have been made by JISC and others to do this, but if there is a structural imperative (lack of funding and increased competition between universities) operating against it, then it is going to be a long haul.
