As a land-grant university, we ensure our content is research-based and grounded in scientific principles. To fulfill our mission effectively, the methods we use to deliver that content need to be research-based as well. In this post, we’ll cover a few different methods of website research and offer ideas for experiments you can run to test the effectiveness of your own web content.

As a team that works with websites professionally, we draw on existing academic and private research from many fields, including computer science, psychology, marketing/advertising/media studies, and library science. We also use publicly available datasets, such as search term data from Google, to make decisions.

To ensure that we are applying this research and data effectively in our specific context, we also conduct our own research. These are the main methods we use.

Page experiments (A/B and multivariate testing)

This is when you set up a system on the website that delivers one version of a page (version A) to half of the visitors and another version (version B) to the other half, which is why it’s commonly called “A/B testing.” (You can also test more than two versions at once; when the versions vary several page elements in combination, this is called “multivariate” testing.) Analytics are recorded separately for each version, so after some time you can check which performed better. Then you start delivering the winning version to all visitors.

For example, let’s say you are setting up a page that includes a button labeled “Download the publication.” As you work, it occurs to you that more people might click the button if it said “Download the free publication.” To test this, you could set up an A/B test where each version of the page is shown to half of the visitors. After a week or so, check which version has a higher percentage of visitors clicking the button.
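Dedicated testing tools handle the mechanics for you, but the underlying logic is simple: bucket each visitor consistently into one version, then compare click-through rates. A rough sketch in Python (the function names and visitor IDs here are hypothetical illustrations, not part of any analytics product):

```python
import hashlib

def assign_version(visitor_id, versions=("A", "B")):
    """Deterministically bucket a visitor so they always see the same version."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return versions[int(digest, 16) % len(versions)]

def click_rate(clicks, views):
    """Fraction of page views where the button was clicked."""
    return clicks / views if views else 0.0
```

Hashing the visitor ID, rather than choosing randomly on every visit, means a returning visitor always sees the same version, which keeps the two groups cleanly separated.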

A/B and multivariate testing are available through Google Analytics, which runs the analytics on the Extension website. If you are interested in potentially trying out a page experiment, contact the Extension Communications Web Team.

User testing

This method involves recruiting one or more participants and giving them a list of tasks, which you then watch them complete (or attempt to complete).

For example, let’s say you work with a program that is putting on an event. Attendees are required to download, print, and fill out a couple of forms in order to register, which you are worried will be confusing to people. Before you announce the event, you could perform user testing with a few of your program participants to make sure people are able to figure it out. You could meet with them over Zoom, have them share their screen, and ask them to find the forms they need for the event while you watch. If they get confused or lost anywhere, you could then update your content to help others avoid that pitfall.

The following are some ideas for other kinds of user tests that you might consider doing for your own content.

Cloze test for comprehension

Use this test when you have written a piece of content and want to ensure that it is understandable to your target audience, particularly when that audience is specific enough (for example, English language learners or young children) that scores from automatic readability tools (such as Hemingway App) should be verified with real readers.

Procedure:

  1. Choose a piece of content (or a sample of text from it) to test. Remove every fifth word, replacing each with a blank space to fill in.
  2. Recruit participants.
  3. Ask each participant to fill in each blank with the word they think was removed.
  4. If participants fill in roughly 60% or more of the blanks correctly, the text can usually be considered readable for that audience. Otherwise, the text probably needs to be reworked.
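The mechanical parts of steps 1 and 4 are easy to script. A minimal sketch in Python (the function names are illustrative; scoring here requires an exact word match, though in practice you may want to accept reasonable synonyms):

```python
def make_cloze(text, n=5):
    """Blank out every nth word; return the cloze text and an answer key."""
    words = text.split()
    answers = {}
    for i in range(n - 1, len(words), n):
        answers[i] = words[i]
        words[i] = "_____"
    return " ".join(words), answers

def cloze_score(answers, responses):
    """Percentage of blanks a participant filled with the original word."""
    correct = sum(
        1 for i, word in answers.items()
        if responses.get(i, "").strip().lower() == word.lower()
    )
    return 100 * correct / len(answers)
```

A participant whose `cloze_score` is 60 or above would pass the rule-of-thumb threshold in step 4.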

Highlighter test for reader impressions

Use this test when you have written content and want to test how effective it is at its intended purpose. This test involves asking participants to identify sections of text that they consider effective in whatever way you specify (e.g., “clear/understandable,” “inspires confidence”) as well as sections that are not effective in that way (e.g., “confusing,” “makes you feel less confident”).

Procedure:

  1. Print out the content and get two different colors of highlighter.
  2. Recruit participants and ask each to read the printed content. Have them mark words or phrases they find confusing in one color and words or phrases they find especially clear in the other.

Learn more about the highlighter test from GOV.UK

Tree testing

Use this test when you have a set of links or categories that you are using (or plan to use) as navigation and want to ensure that they are understandable to your target audience.

Procedure:

  1. Create a list of your links/categories, including any nested links/categories.
  2. Identify several (~3-4) important or representative pieces of content that are (or will be) filed under one of the links/categories.
  3. Recruit participants, who will be tested one at a time (we recommend doing several rounds with 2-3 participants rather than one big round with many participants).
  4. For each participant, and for each piece of content you identified, ask in which link/category from the list they would expect to find that content. Record their answers.
  5. If several participants fail to identify the correct category/link for a piece of content, then probably either that content needs to be moved or you need to rethink the wording and/or organization of your links/categories.
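Tallying the answers from steps 4 and 5 is easy to do by hand, but if you collect many responses, a small script can summarize them. A sketch in Python (the data shapes here are assumptions for illustration, not any particular tool’s format):

```python
from collections import Counter

def tree_test_report(intended, responses):
    """Summarize tree-test results.

    intended:  {task: the correct link/category}
    responses: one {task: chosen link/category} dict per participant
    """
    report = {}
    for task, correct in intended.items():
        chosen = [r[task] for r in responses if task in r]
        hits = sum(1 for c in chosen if c == correct)
        report[task] = {
            "success_rate": hits / len(chosen) if chosen else 0.0,
            "top_choice": Counter(chosen).most_common(1)[0][0] if chosen else None,
        }
    return report
```

A low success rate whose top choice is some other category suggests, per step 5, that the content belongs under that other category or that your labels need rewording.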

Read more about tree testing from Optimal Workshop

Card sorting

Use this test when you have a lot of content that needs to be sorted into categories, but you haven’t come up with those categories yet or are having trouble doing so.

Procedure:

  1. For each piece of content that needs to be sorted, create a “card.” The “low-tech” version of this is just the title of the content on a sticky note, but there are also a number of services (such as OptimalSort) that can allow you to do this digitally.
  2. Recruit participants, who can be tested one at a time or in groups.
  3. Ask participants to group the cards into categories that make sense to them and, optionally, name the groups.
  4. When you have developed a set of categories, use tree testing (above) to double check that your final list still makes sense to your users.
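If you run the sort digitally (or transcribe the sticky notes afterward), a common first analysis is a co-occurrence count: how often each pair of cards landed in the same group. Pairs grouped together by most participants likely belong in the same category. A Python sketch, assuming each participant’s sort is recorded as a dict mapping group names to card titles (an assumed format, not any specific service’s export):

```python
from collections import Counter
from itertools import combinations

def card_cooccurrence(sorts):
    """Count how often each pair of cards was placed in the same group."""
    pairs = Counter()
    for sort in sorts:
        for cards in sort.values():
            # Sort each pair so (a, b) and (b, a) count as the same pair.
            for a, b in combinations(sorted(cards), 2):
                pairs[(a, b)] += 1
    return pairs
```

Pairs with counts near the number of participants are strong candidates to share a category; pairs that never co-occur probably belong apart.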

Read more about card sorting from Usability.gov

Content questions test

Use this test when you have written content and want to ensure that its title and/or short description gives readers an accurate idea of what it includes.

Procedure:

  1. Select one or more pieces of content to test and collect the title and short description of each.
  2. Recruit participants.
  3. Give each participant a copy of a content item’s title and short description. Ask them to come up with three questions they would expect the content to answer, based on the title and short description alone.
  4. Compare their questions to the actual content. If the content answers most of the questions, the title and description are setting accurate expectations; if not, consider revising them.

Web updates

Help text has been added to all content fields, explaining the function and purpose of each field and best practices for filling them out. Please contact the Extension Communications web team with any questions.
