While attending a panel presentation by students pursuing degrees online, I heard one of the student panelists share something to the effect of, “Oh, I don’t do Office Hours. However, instead of Office Hours, one of my instructors had these ‘Afternoon Tea’ sessions on Zoom that I loved to attend when it worked with my schedule. She answered my questions, and I feel like she got to know me better. She was also available to meet by appointment.” What wasn’t revealed was why this student wouldn’t attend something called “Office Hours” but did attend these other sessions. Did “Office Hours” sound too formal? Was she unsure of what would happen during office hours, or of what their purpose was? Did she think office hours were something students only went to if they were failing the course? The student didn’t say.

There is some mystery around why this student wouldn’t attend office hours, and her comment reminded me of what I had read in Small Teaching Online: Applying Learning Science in Online Classes, by Flower Darby and James Lang (available digitally through the Valley Library if you are part of the OSU community). In Small Teaching Online, under the section titled “Get Creative with Virtual Office Hours,” several tips are highlighted for how to enhance participation in office hours. Here is a summary of a few of those tips presented in the book, which draw on a 2017 study by Lowenthal, Dunlap, and Snelson (Darby & Lang, 2019, pp. 119-121):

  • Rename office hours to sound more welcoming: “Afternoon Tea,” “Consultations,” or “Coffee Breaks” are some ideas to consider (Lowenthal et al., 2017, p. 188).
  • To enhance participation, plan just 3-4 well-timed sessions instead of weekly office hours, and announce them early in the term. For timing, think about holding a session before or after a major assessment or project milestone is due, for example.
  • Collect questions ahead of time, and make office hours optional.

Additionally, outside of office hours, remind students that you are available to meet with them individually by appointment since students’ schedules vary so widely. 

Putting these tips into practice, here is what the redesigned office hours can look like in an asynchronous online course, where this “Coffee Break” happens three times in the term and is presented in the LMS using the discussion board tool or the announcements feature as needed:

[Image: A Canvas page shows a banner titled “Coffee Break” with the message, “Join me for a chat. I hope to get to know each of you in this course, so I would like to invite you to virtual coffee breaks.” The description on the page details expectations, tasks, and how to join the Coffee Break.]

What I like about this design is that the purpose and expectations of the session are explained, and it is flexible for both students and faculty. The “Coffee Break” is presented in an asynchronous discussion board so that students’ questions can be collected ahead of time and at their convenience. Further, if something comes up with the faculty and the live “Coffee Break” is canceled, the faculty can answer questions asynchronously in the discussion board. There is also a reminder that students are invited to make a separate appointment with their instructor at a time that works for them.
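
If you happen to use Canvas and like to script your course setup, here is a minimal sketch of how such a “Coffee Break” discussion could be created through the Canvas REST API’s discussion topics endpoint. It is an illustration only, not the setup pictured above: the instance URL, course ID, token, title, and message text are all placeholders you would replace with your own.

```python
# A minimal sketch (assumptions noted): creating a "Coffee Break" discussion
# topic via the Canvas REST API. The base URL, course ID, and token below are
# placeholders, and the message text is illustrative.
import requests

CANVAS_BASE = "https://canvas.example.edu"  # placeholder: your institution's Canvas URL
COURSE_ID = "12345"                         # placeholder: your course ID
TOKEN = "YOUR_API_TOKEN"                    # generated under Account > Settings in Canvas

payload = {
    "title": "Coffee Break #1",
    "message": (
        "Join me for a chat! Post your questions here ahead of time, then "
        "drop in to the live session if your schedule allows. If you can't "
        "make it, I'll answer all posted questions in this thread."
    ),
    "discussion_type": "threaded",  # lets students reply to one another's questions
    "published": True,
}

resp = requests.post(
    f"{CANVAS_BASE}/api/v1/courses/{COURSE_ID}/discussion_topics",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Created discussion topic:", resp.json()["id"])
```

Most instructors will simply build the page and discussion in the Canvas interface; the point of the sketch is that the three-per-term “Coffee Break” structure is easy to reproduce or copy between courses.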

Have you tried rebranding your office hours? How did it go?

References

Darby, F., & Lang, J. M. (2019). Small teaching online: Applying learning science in online classes (1st ed.). Jossey-Bass.

Lowenthal, P. R., Dunlap, J. C., & Snelson, C. (2017). Live synchronous web meetings in asynchronous online courses: Reconceptualizing virtual office hours. Online Learning, 21(4), 177. https://doi.org/10.24059/olj.v21i4.1285

by Greta Underhill

Are you interested in qualitative research? Are you currently working on a qualitative project? Some researchers find it helpful to use a computer-assisted qualitative data analysis software (CAQDAS) program to help them organize their data through the analysis process. Although some programs can perform basic categorization for researchers, most software programs simply help researchers stay organized while they conduct the deep analysis needed to produce scientific work. You may find a good CAQDAS program especially helpful when multiple researchers work with the same data set at different times and in different ways. Choosing the right CAQDAS for your project or team can take some time and research but is well worth the investment. You may need to consider multiple factors, such as cost, operating system requirements, and data security, before settling on a software program.

For the Ecampus Research Unit, issues with our existing CAQDAS prompted our team to search for another program that would fit our specific needs. Here’s what we were looking for:

  • General qualitative analysis: We needed a program for general analysis across multiple types of projects; other programs are designed for specific forms of analysis, such as Leximancer for content analysis.
  • Compatibility across computer operating systems (OS): Our team used both Macs and PCs.
  • Adherence to our institution’s IRB security requirements: Like many others, our institution and our team adhere to strict data security and privacy requirements, necessitating a close look at how a program would manage our data.
  • Basic coding capabilities: Although many programs offer robust coding capabilities, our team needed basic options such as coding one passage multiple times and visually representing coding through highlights.
  • Export of codes into tables or Excel workbooks: This function is helpful for advanced analysis and for reporting themes in multiple file formats for various audiences.
  • A low learning curve: We regularly bring in temporary team members on various projects for mentorship and research experience, making an easy-to-learn program a priority.
  • A one-time purchase: A one-time purchase was the best fit for managing multiple and temporary team members on various projects.

Testing a CAQDAS

I began systematically researching different CAQDAS options for the team. I searched “computer-assisted qualitative data analysis software” and “qualitative data analysis” in Google and Google Scholar. I also consulted various qualitative research textbooks and articles, as well as blogs, personal websites, and social media handles of qualitative researchers to identify software programs. Over the course of several months, I generated a list of programs to examine and test. Several programs were immediately removed from consideration because they are designed for different types of analysis: DiscoverText, Leximancer, MAXQDA, QDA Miner. These programs are powerful but best suited for specific analysis, such as text mining. With the remaining programs, I signed up for software trials, attended several product demonstrations, participated in training sessions, borrowed training manuals from the library, studied how-to videos online, and contacted other scholars to gather information about the programs. Additionally, I tested whether programs would work across different operating systems. I kept detailed records about each of the programs tested, including how they handled data, the learning curve for each, their data security, whether they worked across operating systems, how they would manage the export of codes, and whether they required a one-time or subscription-based payment. I started with three of the most popular programs: NVivo, Dedoose, and ATLAS.ti. The table below summarizes which of these programs fit our criteria.

[Table: Whether three programs (NVivo, Dedoose, and ATLAS.ti) meet each of the team’s requirements: general qualitative analysis, cross-OS collaboration, data security, basic coding capabilities, export of codes, low learning curve, and one-time purchase. Details are discussed in the text of the blog below.]
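
If you prefer to keep this kind of comparison in a machine-readable form, here is a hypothetical sketch of an evaluation log. The program names and criteria come from this post, but the structure (and the Python itself) is purely illustrative; only findings described below are filled in, and anything the post does not report is left unknown rather than guessed.

```python
# Hypothetical evaluation log for comparing CAQDAS programs against criteria.
# Only findings described in this post are recorded; missing keys mean
# "not reported here," not "fails the criterion."
CRITERIA = [
    "general_analysis", "cross_os", "data_security",
    "basic_coding", "code_export", "low_learning_curve", "one_time_purchase",
]

evaluations = {
    "NVivo":    {"cross_os": False, "data_security": True,
                 "low_learning_curve": True, "one_time_purchase": True},
    "Dedoose":  {"data_security": False},
    "ATLAS.ti": {"data_security": False},
}

def summarize(evals, criteria):
    """Print one row per program: yes / no / ? for each criterion."""
    symbols = {True: "yes", False: "no", None: "?"}
    for program, results in evals.items():
        row = "  ".join(f"{c}={symbols[results.get(c)]}" for c in criteria)
        print(f"{program:<9} {row}")

summarize(evaluations, CRITERIA)
```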

NVivo

I began by evaluating NVivo, a program I had used previously. NVivo is a powerful program that adeptly handles large projects and is relatively easy to learn. The individual license was available for a one-time purchase and allowed the user to maintain their data on their own machine or institutional servers. However, it had no capabilities for cross-OS collaboration, even when clients purchased a cloud-based subscription. Our team members could download and begin using the program, but we would not be able to collaborate across operating systems.

Dedoose

I had no prior experience with Dedoose, so I signed up for a trial of the software. I was impressed with the product demonstration, which significantly helped in figuring out how to use the program. This program excelled at data visualization and allowed a research team to blind code the same files for interrater reliability if that suited the project. Additionally, I appreciated the options to view code density (how much of the text was coded) as well as what codes were present across transcripts. I was hopeful this cloud-based program would solve our cross-OS collaboration problem, but it did not pass the test for our institution’s IRB data security requirements because it housed our data on Dedoose servers.

ATLAS.ti

ATLAS.ti was also a new program for me, so I signed up for a trial of this software. It is a well-established program with powerful analysis functions, such as helpful hierarchical coding capabilities and intuitive links among codes, quotations, and comments. However, cross-OS collaboration, while possible via the web, proved cumbersome, and this option also did not meet the data security threshold for our institution’s IRB. Furthermore, the price point meant we would need to rethink our potential collaborations with other organizational members.

Data Security

Many programs are now cloud-based, which offers powerful analysis options but unfortunately did not meet our IRB data security requirements. Ultimately, we had to cut Delve, MAXQDA, Taguette, Transana, and webQDA. All of these programs would have been low-learning-curve options with basic coding functionality and cross-OS collaboration; however, for our team to collaborate, we would need to purchase a cloud-based subscription, which can quickly become prohibitively expensive, and house our data on company servers, which would not pass our institutional threshold for data security.

Note-taking programs

After testing multiple programs, I started looking beyond qualitative software programs and into note-taking programs such as DevonThink, Obsidian, Roam Research, and Scrintal. I had hoped these might provide a workaround by organizing data on collaborative teams in ways that would facilitate analysis. However, most of them either lacked functionality that could be used for coding or had learning curves too high for our team to adopt them.

It seemed like I had exhausted all options, and I still did not have a program to bring back to the Research Unit. I had no idea that a low-cost option was just a YouTube video away. Stay tuned for the follow-up post where we dive into the solution that worked best for our team.

 

Photo by Sarah Kilian on Unsplash.

This is the paradox of failure in games. It can be stated like this:

  1. We generally avoid failure.
  2. We experience failure when playing games.
  3. We seek out games, although we will experience something that we normally avoid. (Juul, p. 2)

As a continuation from my last blog post considering grades and Self-Determination Theory, I wanted to take a brief side-quest into considering what it means to experience failure. Jesper Juul’s The Art of Failure: An Essay on the Pain of Playing Video Games will provide the main outline and material for this post, while I add what lessons we might learn about feedback and course design in online settings.

Dealing with Failure

Juul outlines how games communicate through feedback using the theory of Learned Helplessness. Specifically, he focuses on Weiner’s attribution theory, which has three dimensions:

  1. Internal vs. External Failure
    1. Internal: The failure is the fault of the player. “I don’t have the skills to defeat this enemy right now.”
    2. External: The failure is the fault of the game. “The camera moved in a way that I couldn’t see or control and resulted in a game over.”
  2. Stable vs. Unstable Failure
    1. Stable: The failure will be consistent. No recognition of experience gained or improvement. “No matter what I do, I can’t get past this challenge.”
    2. Unstable: The failure is temporary. There is a possibility for future success. “I can improve and try again.”
  3. Global vs. Specific Failure
    1. Global: There is a general inability preventing success. “I am not good at playing video games.”
    2. Specific: Poor performance does not reflect on our general abilities or intelligence. “I’m not good at flight simulators, but that doesn’t mean I’m bad at all video games.”

In general, a combination of Internal+Stable+Global failure feedback would contribute most strongly toward a player adopting a learned helplessness mindset. There is a potential parallel here with course design: when a student does not do well on an assessment, what kind of feedback are they receiving? In particular, are they receiving signals that there is no opportunity for improvement (stable failure) and that it shows a general inability at the given task (global failure)? Designing assessments so that setbacks are unstable (offer multiple attempts and a way for students to observe their own improvement over time) and communicating specific skills to improve (make sure feedback pinpoints how a student could improve) would help students bounce back from a “game over” scenario. But what about internal vs. external failure? For Juul, “this marks another return of the paradox of failure: it is only through feeling responsible for failure (which we dislike) that we can feel responsible for escaping failure (which we like)” (p. 54). This importance of internal failure aligns with what we know about metacognition (Berthoff, “Dialectical notebooks and the audit of meaning”) and the numerous benefits of reflection in learning.
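
As a purely illustrative aside, here is a small sketch of what “unstable + specific” feedback could look like if it were templated in code. The AttemptResult fields and the wording are hypothetical assumptions of mine, not drawn from Juul or from any particular rubric or gradebook.

```python
# A hypothetical sketch: composing feedback so failure reads as unstable
# ("this score isn't final, you can improve") and specific ("here is the
# exact skill to work on"), rather than stable or global.
from dataclasses import dataclass

@dataclass
class AttemptResult:
    student: str
    skill_gaps: list          # specific rubric items to improve (hypothetical field)
    attempts_remaining: int
    prior_score: float
    current_score: float

def compose_feedback(result: AttemptResult) -> str:
    lines = [f"Hi {result.student},"]
    # Unstable framing: name the trajectory and the attempts still available.
    if result.current_score > result.prior_score:
        lines.append(
            f"Your score rose from {result.prior_score:.0f} to "
            f"{result.current_score:.0f}, so the revisions are working."
        )
    if result.attempts_remaining > 0:
        lines.append(
            f"You have {result.attempts_remaining} attempt(s) left; "
            "this grade is not final."
        )
    # Specific framing: point at skills, not at the person.
    for skill in result.skill_gaps:
        lines.append(f"For your next attempt, focus on: {skill}.")
    return "\n".join(lines)

print(compose_feedback(AttemptResult(
    student="Sam",
    skill_gaps=["citing evidence for each claim"],
    attempts_remaining=2,
    prior_score=68,
    current_score=74,
)))
```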

Succeeding from Failure

Now that we have an idea of how we deal with failure, let’s consider how we can turn that failure into success! “Games then promise players the possibility of success through three different kinds of fairness or three different paths: skill, chance, and labor” (Juul, p. 74):

  1. Skill: Learning through failure, emphasis on improvement with each attempt. (This is also very motivating by being competence-supportive!)
  2. Chance: We try again to see if we get lucky.
  3. Labor: Incremental progress on small tasks accumulates more abilities and items that persist through time and multiple play sessions. Emphasis here is on incremental growth over time through repetition. (Animal Crossing is a great example.) (This path is also supported by Dweck’s growth mindset.)

Many games reward players for all three of these paths to success. In an online course, allowing flexibility in assignment strategies can help students explore different routes to success. For example, a final project could allow for numerous formats, such as a paper, podcast, video tutorial, or interactive poster, that students choose strategically based on their own skills and interests. Recognizing improvement will help students build their skills, and helping students establish a routine of smaller, simpler tasks that build over an entire course can help them succeed through labor. Chance is an interesting thing to think about in terms of courses, but I like to think of it as it relates to content. Maybe a student “gets lucky” by having a discussion topic align with their final project topic, for example. For that student, the discussion would come easier by chance. Diversifying content and assignment types can help different individuals and groups of students feel like they have “lucky” moments in a course.

Reflecting on Failure

Finally, how do games give us the opportunity to reflect on our successes and failures during gameplay? Juul outlines three types of goals that “make failure personal in a different way and integrates a game into our life in its own way” (pp. 86–87):

  1. Completable Goal: Often the result of a linear path and has a definite end.
    1. These can be game- or player-created. (e.g., Game-Driven: Defeat the ghost haunting the castle. Player-Driven: I want to defeat the ghost without using magic.)
  2. Transient Goal: Specific, one-time game sessions with no defined end, but played in rounds. (e.g., winning or losing a single round of Mario Kart.)
  3. Improvement Goal: Completing a personal best score, where a new high score sets a new goal.

For Juul, each of these goal types has different “existential implications: while working toward a completable goal, we are permanently inscribed with a deficiency, and reaching the goal removes that deficiency, perhaps also removing the desire to play again. On the other hand, we can never make up for failure against a transient goal (since a lost match will always be lost), whereas an improvement goal is a continued process of personal progress” (pp. 86–87). When thinking about your courses, what kinds of goals do you design for? Many courses have single-attempt assignments (transient goals), but what if those were designed as improvement goals, where students worked toward improving on their previous work in a more iterative way, with new and improved scores replacing old ones? Are there opportunities for students to create their own challenging completable goals?
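
To make the improvement-goal idea concrete, here is a minimal, LMS-agnostic sketch of a grading rule that keeps each student’s best attempt, so a stronger later submission replaces an earlier score instead of locking in an early failure. The data shapes are illustrative assumptions rather than any particular gradebook’s API.

```python
# A minimal sketch of "improvement goal" grading: keep the best score across
# attempts so early failures stay unstable rather than permanent.
from collections import defaultdict

def best_attempt_grades(attempts):
    """attempts: iterable of (student_id, score) pairs in submission order."""
    best = defaultdict(float)
    for student, score in attempts:
        best[student] = max(best[student], score)
    return dict(best)

attempts = [("ana", 55.0), ("ana", 72.0), ("ben", 90.0), ("ana", 86.0)]
print(best_attempt_grades(attempts))   # {'ana': 86.0, 'ben': 90.0}
```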

I hope this post shines a light on some different ways of thinking about assessment design, feedback types, and making opportunities for students to “fail safely” based on how these designs are achieved in gaming. To sum everything up, “skill, labor, and chance make us feel deficient in different ways when we fail. Transient, improvement, and completable goals distribute our flaws, our failures, and successes in different ways across our lifetimes” (Juul, p. 90).