Seeking Solutions @Oregon State University

Core Education at Oregon State University launched in summer 2025 and is designed to deepen how students think about problem-solving in ways that transcend discipline-specific approaches. It aims to prepare students to be adaptive, proactive members of society who are ready to take on any challenge, solve any problem, advance in their chosen careers, and help build a better world (Oregon State University Core Education, 2025).


Designing Seeking Solutions Signature Core category courses presents several challenges, including the nature of wicked problems, cross-disciplinary teamwork, and the global impact of those problems. In the past eight months, instructional designers at Oregon State University Ecampus have worked intensively to identify design challenges, brainstorm course design approaches, discuss research on teamwork and related topics, and draft guidelines and recommendations in preparation for the upcoming Seeking Solutions course development projects. Here are the key topics we reviewed:
1. Wicked Problems
2. Team Conflict and Teamwork
3. Problem Solving
4. Large Enrollment Online Courses

Next, I will share summaries of the research articles we reviewed and their implications for instructional design work on each of these topics.

Wicked Problems

A wicked problem, also known as an ill-structured problem or grand challenge, is a problem that is difficult or impossible to solve due to its complex and ever-changing nature. Research suggests that wicked problems are characterized by high levels of three dimensions: complexity, uncertainty, and value divergence. Complexity can take many forms but often involves the need for interdisciplinary reasoning and systems with multiple interacting variables. Uncertainty typically refers to how difficult it is to predict the outcome of attempts to address wicked problems. Value divergence refers particularly to wicked problems having stakeholders with fundamentally incompatible worldviews. It is the presence of multiple stakeholders with incompatible viewpoints that marks the shift from complex to super-complex (Veltman, Van Keulen, & Voogt, 2019; Head, 2008).
The Seeking Solutions courses expect students to “wrestle with complex, multifaceted problems, and work to solve them and/or evaluate potential solutions from multiple points of view”. Supporting student learning with wicked problems involves designing activities whose core elements reflect the messiness of these types of problems. McCune et al. (2023) from the University of Edinburgh interviewed 35 instructors teaching courses covering a broad range of subject areas; 20 of the instructors had teaching practices focused on wicked problems, while the other 15 taught courses that did not relate to wicked problems. The research goal was to understand how higher education teachers prepare students to engage with “wicked problems”—complex, ill-defined issues like climate change and inequality with unpredictable consequences. The research question was “Which ways of thinking and practicing foster effective student learning about wicked problems?” From their study, the authors identified four core learning aspects essential for addressing wicked problems:
1. Interdisciplinary negotiation: Students must navigate and integrate different disciplinary epistemologies and values.
2. Embracing complexity/messiness: Recognizing uncertainty and nonlinear problem boundaries as part of authentic learning.
3. Engaging diverse perspectives: Working with multiple stakeholders and value systems to develop consensus-building capacities.
4. Developing “ways of being”: Cultivating positional flexibility, uncertainty tolerance, ethical awareness, and communication across differences.


Applications for Instructional Designers

As instructional designers work very closely with course developers, instructors, and faculty, they contribute significantly to the design of Seeking Solutions courses. Here are a few recommendations regarding wicked problems from the instructional designers on our team:
• Provide models or structures such as systems thinking for handling wicked problems.
• Assign students to complete the Identity Wheel activity and reflect on how their different identities shape their views of wicked problems or shift based on contextual factors (see resources on the Identity Wheel, Social Wheel, and reflection activities).
• Provide activities early in the course that train students to work and communicate in teams and to take on different perspectives and viewpoints.
• Create collaborative activities regarding perspective taking.
• Design assessments that focus on several aspects of learning: students’ ability to participate, to solve the problem, to generate ideas, to offer different perspectives, and to collaborate; evaluate the process more than the product; and include self-reflection.

Team Conflict and Teamwork

“A central goal of this category is to have students wrestle with complex, multifaceted problems, and evaluate potential solutions from multiple points of view” (OSU Core Education, 2025). Working in teams provides an opportunity for teammates to learn from each other. However, teamwork is not always a straightforward and smooth collaboration; it can involve different opinions, disagreements, and conflict. When handled respectfully and rationally, disagreements and differences can help students understand others’ perspectives; when handled poorly, differences in perspective escalate into conflict, and conflict can negatively affect teamwork, morale, and outcomes. Central to Seeking Solutions courses is collaborative teamwork in which students will need to learn and apply skills for working with others, including perspective taking.

Aggrawal and Magana (2024) conducted a study on the effectiveness of conflict management training guided by principles of transformative learning, with conflict management practice simulated via a large language model (ChatGPT 3.5).
Fifty-six students enrolled in a systems development course participated in the conflict management intervention project. The study used the five modes of conflict management from the Thomas-Kilmann Conflict Mode Instrument (TKI): avoiding, competing, accommodating, compromising, and collaborating. The researchers used a three-phase (Learn, Practice, and Reflect) transformative learning pedagogy.

  • Learn phase: The instructor began with a short introduction; next, students watched a YouTube video (16:16) on conflict resolution. The video highlighted two key strategies for navigating conflict situations: (1) refrain from instantly perceiving personal attacks, and (2) cultivate curiosity about the dynamics of difficult situations.
  • Practice phase: Students practiced conflict management in a simulated scenario using ChatGPT 3.5 and received detailed guidance on using the tool.
  • Reflect phase: Students reflected on the session using guided questions provided by the instructor.

The findings indicate that 65% of the students significantly increased their confidence in managing conflict through the intervention. The three most frequently used strategies for managing conflict were identifying the root cause of the problem, actively listening, and being specific and objective in explaining concerns.


Applications for Instructional Designers

Providing students with opportunities to practice handling conflict is important for increasing their confidence in conflict management. Such learning activities should present relatable conflicts, such as roommate disputes or group project tension, in the form of role-plays or simulations in which students are given specific roles and goals. They should take place in a safe environment and be followed by structured reflection that guides students to process what happened and why, focusing on key conflict management skills such as I-messages, de-escalation, and reframing.

Problem Solving

Creativity, collaboration, critical thinking, and communication—commonly referred to as the 4Cs essential for the future—are widely recognized as crucial skills that college students need to develop. Creative problem solving plays a vital role in teamwork, enabling teams to move beyond routine solutions, respond effectively to complexity, and develop innovative outcomes—particularly when confronted with unfamiliar or ill-structured problems. Oppert et al. (2022) found that top-performing engineers—those with the highest levels of knowledge, skills, and appreciation for creativity—tended to work in environments that foster psychological safety, which in turn supports and sustains creative thinking. Lim et al. (2014) proposed providing students with real-world problems. Lee et al. (2009) suggested training students on fundamental concepts and principles through a design course. Hatem and Ferrara (2001) suggested using creative writing activities to boost creative thinking among medical students.

Applications for Instructional Designers

We recommend including an activity that trains students in conflict resolution as a warm-up before they work on course activities that involve teamwork and perspective taking. It will also be helpful to create guidelines and resources that students can use for managing conflict and to add these resources to teamwork activities.

Large Enrollment Online Courses

Teaching large enrollment science courses online presents a unique set of challenges that require careful planning and innovative strategies. Large online classes often struggle with maintaining student engagement, providing timely and meaningful feedback, and facilitating authentic practice. These challenges underscore the need for thoughtful course design and pedagogical approaches in designing large-scale online learning environments.

Mohammed and colleagues (2021) examined which aspects of large-enrollment online college science courses exacerbate and alleviate student anxiety by surveying 2,111 undergraduates at Arizona State University. More than 50% of students reported at least moderate anxiety in the context of online college science courses. The factors students most commonly reported as increasing their anxiety were the potential for personal technology issues (69.8%), proctored exams (68.0%), and difficulty getting to know other students; the factors most commonly reported as decreasing anxiety were being able to access content later (79.0%), attending class from wherever they want (74.2%), and not having to be on camera. The most common ways students suggested instructors could decrease anxiety were to increase test-taking flexibility (25.0%), to be understanding (23.1%), and to keep the course organized. This study provides insight into how instructors can create more inclusive online learning environments for students with anxiety.

Applications for Instructional Designers

What we can do to help reduce student anxiety in large online courses:
1. Design task reminders for instructors, making clear that the instructor and the school care about student concerns.
2. Pre-assign student groups if necessary.
3. Design warm-up activities to help students get to know their group members quickly.
4. Design a student preferences survey for week 1.
5. Design courses that make it easy for students to seek and get help from instructors.

As Ecampus moves forward with course development, these evidence-based practices will support instructional design work to create high-quality online courses that give students opportunities to develop, refine, and apply skills to navigate uncertainty, engage diverse viewpoints, and contribute meaningfully to a rapidly changing world. Ultimately, the Seeking Solutions initiative aligns with OSU’s mission to cultivate proactive global citizens, ensuring that graduates are not only career-ready but also prepared to drive positive societal change.

Conclusions

Instructional design for solution-seeking courses requires thoughtful course design that addresses perspective taking, team collaboration, team conflict, problem solving, and possibly large enrollments. Proactive conflict resolution frameworks, clear team roles, and collaborative tools help mitigate interpersonal challenges, fostering productive teamwork. Additionally, integrating structured problem-solving approaches (e.g., design thinking, systems analysis) equips students to tackle complex, ambiguous “wicked problems” while aligning course outcomes with real-world challenges. Together, these elements ensure a robust, adaptable curriculum that prepares students for dynamic problem-solving and sustains long-term program success.


References

Aggrawal, S., & Magana, A. J. (2024). Teamwork conflict management training and conflict resolution practice via large language models. Future Internet, 16(5), 177. https://doi.org/10.3390/fi16050177


Bikowski, D. (2022). Teaching large-enrollment online language courses: Faculty perspectives and an emerging curricular model. System, 105.


Head, B. (2008). Wicked problems in public policy. Public Policy, 3(2), 101–118.


McCune, V., Tauritz, R., Boyd, S., Cross, A., Higgins, P., & Scoles, J. (2023). Teaching wicked problems in higher education: ways of thinking and practising. Teaching in Higher Education, 28(7), 1518–1533. https://doi.org/10.1080/13562517.2021.1911986


Mohammed, T. F., Nadile, E. M., Busch, C. A., Brister, D., Brownell, S. E., Claiborne, C. T., Edwards, B. A., Wolf, J. G., Lunt, C., Tran, M., Vargas, C., Walker, K. M., Warkina, T. D., Witt, M. L., Zheng, Y., & Cooper, K. M. (2021). Aspects of Large-Enrollment Online College Science Courses That Exacerbate and Alleviate Student Anxiety. CBE Life Sciences Education, 20(4), ar69–ar69. https://doi.org/10.1187/cbe.21-05-0132

Oppert, M. L., Dollard, M. F., Murugavel, V. R., Reiter-Palmon, R., Reardon, A., Cropley, D. H., & O’Keeffe, V. (2022). A mixed-methods study of creative problem solving and psychosocial safety climate: Preparing engineers for the future of work. Frontiers in Psychology, 12, 759226. https://doi.org/10.3389/fpsyg.2021.759226


Veltman, M., Van Keulen, J., & Voogt, J. (2019). Design principles for addressing wicked problems through boundary crossing in higher professional education. Journal of Education and Work, 32(2), 135–155. https://doi.org/10.1080/13639080.2019.1610165

This post was written in collaboration with Mary Ellen Dello Stritto, Director of the Ecampus Research Unit.

Quality Matters standards are supported by extensive research on effective learning. Oregon State University’s own Ecampus Essentials build upon these standards, incorporating OSU-specific quality criteria for ongoing course development. But what do students themselves think about the elements that constitute a well-designed online course?

The Study

The Ecampus Research Unit took part in a national research study with Penn State and Boise State universities that sought student insight into which elements of design and course management contribute to quality in an online course. Data were collected from six universities across the US, including Oregon State, in fall 2024. Students who chose to participate completed a 73-item online survey that asked about course design elements from the updated version of the Quality Matters Rubric. Students responded to each item on the following scale: 0 = Not Important, 1 = Important, 2 = Very Important, 3 = Essential. A total of 124 students completed the survey, including 15 OSU Ecampus students. The findings reveal a remarkable alignment between research-based best practices and student preferences, validating the approach taken in OSU’s Ecampus Essentials.

See the findings in data visualization form below, followed by a detailed description.

Data visualization of the findings. See detailed description after the image.

What Students Consider Most Important

Students clearly value practical, research-backed features that make online courses easier to navigate, more accessible, and more supportive of learning. The following items received the most ratings of “Essential” + “Very Important”:

QM standards and study findings, with related Ecampus Essentials:

  • Accessibility and Usability (QM Standards 8.2, 8.3, 8.4, 8.5, 8.6): Every OSU student rated course readability and accessible text as “Very Important” or “Essential” (100%). Nationally, this was also a top priority (96% and 91%, respectively). Accessibility of multimedia—like captions and user-friendly video/audio—was also highly rated (100% OSU, 90% nationally). Related Ecampus Essentials: Text in the course site is accessible. Images in the course are accessible (e.g., alt text or long description for images). The course design facilitates readability. All video content is accurately captioned.
  • Clear Navigation and Getting Started (QM Standards 1.1, 8.1): 93% of OSU students and 94% of the national sample rated easy navigation highly, while 89% of OSU students and 96% nationally said clear instructions for how to get started and where to find things were essential. Related Ecampus Essentials: Course is structured into intuitive sections (weeks, units, etc.) with all materials for each section housed within that section (e.g., one page with that week’s learning materials rather than a long list of files in the module). Course is organized with student-centered navigation, and it is clear to students how to get started in the course.
  • Meaningful Feedback and Instructor Presence (QM Standards 3.5, 5.3): Students placed high importance on receiving detailed feedback that connects directly to course content (100% OSU, 94% nationally). The ability to ask questions of instructors was also essential (100% OSU, 96% nationally). Related Ecampus Essentials: Assessments are sequenced in a way that gives students an opportunity to build knowledge and learn from instructor feedback. The instructor’s plan for regular interaction with students in substantive ways during the course is clearly articulated. Information about student support specific to the course (e.g., links to the Writing Center in a writing course, information about TA open office hours, etc.) is provided.
  • Clear Grading Criteria (QM Standards 3.2, 3.3): 93% of OSU students and the full sample found clear, detailed grading rules to be essential. Related Ecampus Essentials: Specific and descriptive grading information for each assessment is provided (e.g., detailed grading criteria and/or rubrics).
  • Instructional Materials (QM Standard 4.1): All OSU students and 92% nationally rated high-quality materials that support learning outcomes as very important or essential. Related Ecampus Essentials: Instructional materials align with the course and weekly outcomes. A variety of instructional materials are used to appeal to many learning preferences (readings, audio, visual, multimedia, etc.). When pre-recorded lectures are utilized, content is brief and integrated into course learning activities, such as with interactive components, discussion questions, or quiz questions. Longer lectures should be broken into chunks of less than 20 minutes.

What Students Consider Less Important

The study also revealed areas where students expressed less enthusiasm:

Study findings, with related Ecampus Essentials:

  • Self-Introductions (QM Standard 1.9): Over half of OSU students (56%) and a third nationally (33%) rated opportunities to introduce themselves as “Not Important”. Related Ecampus Essentials: No specific Ecampus Essential.
  • Peer Interaction (QM Standard 5.2): Students were lukewarm about peer-to-peer learning activities. Nearly half said that working in small groups is not important (47% OSU, 46% nationally). About a quarter didn’t value sharing ideas in public forums (27% OSU, 24% nationally) or having learning activities that encourage them to interact with other students (27% OSU, 23% nationally). Related Ecampus Essentials: Three forms of interaction are present, in some form, in the course (student/content, student/instructor, student/student).
  • Technology Variety and Data Privacy Info (QM Standards 6.3, 6.4): Some students questioned the value of using a variety of tech tools (20% OSU, 23% nationally rated this as “Not Important”) or being given info about protecting personal data (20% OSU, 22% nationally). Related Ecampus Essentials: Privacy policies for any tools used outside of Canvas are provided.

Student Comments

Here are a few comments from Ecampus students that illustrate their opinions on what makes a quality course:

  • “Accessible instructional staff who will speak to students in synchronous environments. Staff who will guide students toward the answer rather than either treating it like cheating to ask for help at all or simply giving out the answer.”
  • “A lack of communication/response from teachers and no sense of community” – was seen as a barrier.
  • “Mild reliance on e-book/publisher content, out-weighed by individual faculty created content that matches student deliverables. In particular, short video content guiding through the material in short, digestible amounts (not more than 20 minutes at a go).”
  • “When there aren’t a variety of materials, it makes it hard to successfully understand the materials. For example, I prefer there to be lectures or videos associated with readings so that I understand the material to the professor’s standards. When I only have reading materials, I can sometimes misinterpret the information.”
  • “Knock it off with the discussion boards, and the ‘reply to 2 other posts’ business. This is not how effective discourse takes place, nor is it how collaborative learning/learning community is built.”

Conclusion and Recommendations

The takeaways? This research shows that students recognize and value the same quality elements emphasized in OSU’s Ecampus Essentials:

  1. Student preferences align with research-based standards – Students consistently value accessibility, clear structure, meaningful feedback, and purposeful content.
  2. Universal design benefits everyone – Students’ strong preference for accessible, well-designed courses supports the universal design principles embedded in the Ecampus Essentials.

However, there is always room for improvement, and these data provide some hints. Many students don’t immediately see value in peer interactions and collaborative activities, even though extensive educational research shows these are among the most effective learning strategies. Collaborative learning is recognized as a High Impact Practice that significantly improves student outcomes and critical thinking. This disconnect suggests we need to design these experiences more thoughtfully to help students recognize their benefits. Here are some suggestions:

  • Frame introductions purposefully: Instead of generic “tell us about yourself” posts, connect introductions to course content (“Introduce yourself and share an experience related to the topic of this course”).
  • Design meaningful group work: Create projects that genuinely require collaboration and produce something students couldn’t create alone.
  • Show the connection: Explicitly explain how peer interactions help students learn and retain information better, and the value of teamwork for their future jobs.
  • Start small: Begin with low-stakes peer activities before moving to more complex collaborations.
Chart describing the steps in the feedback process

In part one of this two-part blog series, we focused on setting the stage for a better feedback cycle by preparing students to receive feedback. In part two, we’ll discuss the remaining steps of the cycle: how to deliver feedback effectively and ensure students use it to improve.

In part one, we learned about the benefits of adding a preliminary step to your feedback system: preparing students to receive suggestions and view them as helpful and valuable rather than as criticism. If you haven’t read part one, I recommend doing so before continuing. This crucial but often overlooked first step involves fostering a growth mindset and creating an environment where students understand the value of feedback and learn to view it as a tool for improvement.

Step 2: Write Clear Learning Outcomes

The next step in the cycle is likely more familiar to teachers, as much focus in recent decades has been placed on developing and communicating clear, measurable learning outcomes when designing and delivering courses. Bloom’s Taxonomy is commonly used as a reference when determining learning outcomes and is often a starting point in backwards design strategy. Instructors and course designers must consider how a lesson, module, or course aligns with the learning objectives so that students are well-equipped to meet these outcomes via course content and activities. Sharing these expected outcomes with students, in the form of CLOs and rubrics, can help them to focus on what matters most and be better informed about the importance of each criterion. These outcomes should also inform instructors’ overall course map and lesson planning. 

Another important consideration is ensuring that learning outcomes are measurable, which requires rewriting unmeasurable ones that begin with verbs such as understand, learn, appreciate, or grasp. A plethora of resources are available online to assist instructors and course designers who want to improve the measurability of their learning outcomes. These include our own Ecampus-created Bloom’s Taxonomy Revisited and a chart of active and measurable verbs from the OSU Center for Teaching and Learning that fit each taxonomy level.

Step 3: Provide Formative Practice & Assessments

The third step reminds us that student learning is also a cycle, overlapping and informing our feedback cycle. When Ecampus instructional designers build courses, we try to ensure instructors provide active learning opportunities that engage students and teach the content and skills needed to meet our learning objectives. We need to follow that up with ample practice assignments and assessments, such as low-stakes quizzes, discussions, and other activities, to allow students to apply what they have learned. This in turn allows instructors to provide formative feedback that should ideally inform our students’ study time and guide them to correct errors or revisit content before being formally or summatively graded. Giving preliminary feedback also gives us time to adjust our teaching based on how students perform and to hone in on what to review before assessments. Providing practice tests or assignments, or using exam wrappers, exit cards, or “muddiest point” surveys to collect your students’ feedback, can also help us improve our teaching.

Step 4: Make Feedback Timely and Actionable

Step four is two-fold, as both the timeliness and quality of the feedback we give are important. The best time to give feedback is when the student can still use it to improve future performance. When planning your term schedule, it can be useful to predict when you will need to block off time to provide feedback on crucial assignments and quizzes, as a delay for the instructor equates to a delay for students. Having clear due dates, reminding students of them,  and sticking to the timetable by giving feedback promptly are important aspects of giving feedback.

To be effective, feedback must focus on moving learning forward. It should target the identified learning gap and suggest specific steps for the student to improve. For a suggestion to be actionable, it should describe actions that will help the student do better without overloading them with too much information: choose a few actionable areas to focus on each time. Comments that praise students’ abilities, attitudes, or personalities are not as helpful as ones that give them concrete ways to improve their work.

Step 5: Give Time to Use Feedback and Incentivize It

The last step in the cycle, giving students time to use the feedback provided, is often relegated to homework or ignored altogether. Feedback is most useful when students are required to view it and, preferably, do something with it; when this step is skipped, the feedback may be ignored or glanced over perfunctorily and promptly forgotten. To close the loop, students must put the feedback to use. This can be the point where your feedback cycle sputters out, so be sure to make time to prioritize this final step. Students may need assistance in applying your feedback. Guiding students through the process and providing scaffolds and models for using your feedback can be beneficial, especially during the initial attempts.

In my experience, it never hurts to incentivize this step: this can be as simple as adding points to an assignment for reflecting on the feedback given or offering extra credit opportunities for redone work. As a writing teacher, I required rewrites for work that scored below passing and offered to regrade any rewritten essays incorporating my detailed feedback. This proved to be a good solution, and while marking essays was definitely labor intensive, I was rewarded with very positive feedback from my students, who often commented that they learned a lot and improved significantly in my courses.

Considerations

A robust feedback cycle often includes opportunities for students to develop their own feedback skills by performing self-assessments and peer reviews. Self-assessment helps students in several ways, promoting metacognition and helping them learn to identify their own strengths and weaknesses. It also allows students to reflect on their study habits and motivation, manage self-directed learning, and develop transferable skills. Peer review also provides valuable practice honing their evaluative skills, using feedback techniques, and giving and receiving feedback, all skills they will find useful throughout adulthood. Both self-assessment and peer review give students a deeper understanding of the criteria teachers use to evaluate work, which can help them fine-tune their performance. 

Resources for learning more:

Feedback

Learning Outcomes

Self-assessment

Peer review

In our hyper-connected world, it’s tempting to think that technology like Google, Generative Artificial Intelligence, and our smartphones have rendered memory obsolete. But is that really true?

I recently participated in a book club offered by Oregon State University’s Center for Teaching and Learning. The book we read, Remembering and Forgetting in the Age of Technology: Teaching, Learning, and the Science of Memory in a Wired World by Michelle D. Miller, challenges misconceptions about how technology affects our memory and attention and offers valuable insights for educators. Let’s explore some key takeaways.

Memory Still Matters

There has been a growing backlash against memorization in education, with critics claiming it’s outdated and harmful to creativity and critical thinking. But here’s the kicker: memory actually supports robust, transferable thinking skills. Memory and thinking aren’t enemies – they’re complementary partners in learning.

Despite the “Google it” mentality, memory remains crucial. It’s not just about recalling facts; it’s about building a foundation for critical thinking and creativity. For one thing, it’s impossible in certain situations to stop and look things up (think emergency room doctors or lawyers during a trial). But more than that, our own memorized knowledge in a discipline allows us to consider context and practice skills fluently.

We’re all familiar with Bloom’s taxonomy and its bottom level: “Remembering”. Michelle Miller recommends that, instead of viewing memory as the “lowest” level of thinking, consider it the foundation. Higher-order thinking skills interact with and reinforce memory, creating a two-way street of learning.

The Power of Testing

Contrary to popular belief, quizzes and tests aren’t the enemy. Research shows that retrieval practice actually strengthens long-term retention, supports complex skills, and can even reduce test anxiety. It’s not about memorizing for the test; it’s about reinforcing learning.

In addition, “pre-quizzing” – that is, giving a quiz before introducing the material (ungraded or graded for participation only) – has been shown to help activate prior knowledge, integrate new information into existing schemas, and identify gaps or misconceptions that instructors can address.

Attention Spans: Not What You Think

The idea that “attention spans are shrinking” isn’t backed by solid science. In fact, in attention research there’s no such thing as “attention span”! And that “Students can only pay attention for 10 minutes at a time” idea? It’s based on outdated, poorly designed studies.

What about the idea that technology worsens our attention? There is no strong evidence that technology is affecting our ability to pay attention. While people often report this phenomenon (about themselves or others), a more likely explanation seems to be our decreased tolerance for boredom rather than a decline in our actual ability. However, smartphones can indeed be very distracting, and they can also affect memory negatively through the “I can Google it” effect – the expectation that information will be available online anytime can reduce our memory encoding.

Handwriting vs. Typing: It’s Complicated

The debate over handwritten versus typed notes isn’t as clear-cut as you might think. What matters most is your note-taking strategy. The best notes, regardless of medium, involve synthesizing ideas rather than transcribing verbatim.

Enhancing Memory in the Classroom

The good news is that there are many things an educator can do to help students remember essential content. Here are some strategies:

  1. Create meaning and structure: When we process information deeply and evaluate it for meaning, we remember it better than when we perform shallow processing. Organizational schemes like narrative structures help information stick, and active learning techniques such as project-based learning ensure a deeper level of engagement with the content.
  2. Connect to prior knowledge: Ask questions to elicit information, draw explicit connections with previous material, and use pre-quizzing to help students see the gaps and stimulate curiosity.
  3. Embrace visualization: We’re visual creatures – use this to engage your audience. Create and ask students to create mind-maps, infographics, or other visual representations.
  4. Engage emotions: Both positive and negative emotions can enhance memory, but aim for a supportive atmosphere, which has been shown to improve learning outcomes. The emotion of surprise is a powerful memory enhancer.
  5. Connect to goals: Show how information is relevant to students’ immediate objectives.
  6. Use the self-reference effect: Relating information to oneself boosts memory. Ask students to bring their own experience or interests into the learning process through personalized assignments.
  7. Implement retrieval practice: Regular quizzing with immediate feedback can significantly boost retention.
  8. Space it out: Distribute practice over time instead of cramming.

Conclusion

In this age of information overload, understanding how memory works is more crucial than ever. By debunking myths and implementing evidence-based strategies, we can help students navigate the digital landscape while building strong, adaptable minds. I’ve only touched on a few points, but this book is chock-full of interesting information that’s useful not just for educators but for everyone!

What myths about memory and technology have you encountered in your teaching? How might you incorporate these insights into your classroom? Share your thoughts in the comments below!

References

Miller, M. D. (2022). Remembering and forgetting in the age of technology: Teaching, learning, and the science of memory in a wired world (1st ed.). West Virginia University Press.

By Greta Underhill

In my last post, I outlined my search for a computer-assisted qualitative data analysis software (CAQDAS) program that would fit our Research Unit’s needs. We needed a program that would enable our team to collaborate across operating systems, easily adding in new team members as needed, while providing a user-friendly experience without a high learning curve. We also needed something that would adhere to our institution’s IRB requirements for data security and preferred a program that didn’t require a subscription. However, the programs I examined were either subscription-based, too cumbersome, or did not meet our institution’s IRB requirements for data security. It seemed that there just wasn’t a program out there to suit our team’s needs.

However, after weeks of continued searching, I found a YouTube video entitled “Coding Text Using Microsoft Word” (Harold Peach, 2014). At first, I assumed this would show me how to use Word comments to highlight certain text in a transcript, which is a handy function, but what about collating those codes into a table or Excel file? What about tracking which member of the team codes certain text? I assumed this would be an explanation of manual coding using Word, which works fine for some projects, but not for our team.

Picture of a dummy transcript using Lorem Ipsum placeholder text. Sentences are highlighted in red or blue depending upon the user. Highlighted passages have an associated “comment” where users have written codes.

Fortunately, my assumption was wrong. Dr. Harold Peach, Associate Professor of Education at Georgetown College, had developed a Word macro to identify and pull all comments from a Word document into a table (Peach, n.d.). A macro is “a series of commands and instructions that you group together as a single command to accomplish a task automatically” (Create or Run a Macro – Microsoft Support, n.d.). Once downloaded, the “Extract Comments to New Document” macro opens a template and produces a table of the coded information, as shown in the image below. The macro identifies the following properties (a short script sketch after the image illustrates this same comment data):

  • Page: the page on which the text can be found
  • Comment scope: the text that was coded
  • Comment text: the text contained in the comment; for the purpose of our projects, the code title
  • Author: which member of the team coded the information
  • Date: the date on which the text was coded

Picture of a table of dummy text that was generated from the “Extract Comments to New Document” Macro. The table features the following columns: Page, Comment Scope, Comment Text, Author, and Date.
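For teams comfortable with a little scripting, the same comment data the macro surfaces can also be pulled directly from a .docx file, since a .docx is a zip archive whose reviewer comments live in word/comments.xml. The sketch below is illustrative only (it is not Dr. Peach's macro), and the file name is a hypothetical placeholder; it uses only the Python standard library and recovers the comment text, author, and date. Recovering the "comment scope" (the highlighted passage) would additionally require parsing word/document.xml.

```python
# Illustrative sketch only -- not the "Extract Comments to New Document" macro.
# A .docx file is a zip archive; reviewer comments are stored in word/comments.xml.
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"  # Word XML namespace

def extract_comments(docx_path):
    """Return a list of (author, date, comment_text) tuples from a .docx file."""
    with zipfile.ZipFile(docx_path) as docx:
        xml_bytes = docx.read("word/comments.xml")  # raises KeyError if the file has no comments
    root = ET.fromstring(xml_bytes)
    rows = []
    for comment in root.iter(f"{W}comment"):
        author = comment.get(f"{W}author", "")
        date = comment.get(f"{W}date", "")
        # Join all text runs inside the comment -- in our workflow, the code label.
        text = "".join(t.text or "" for t in comment.iter(f"{W}t"))
        rows.append((author, date, text))
    return rows

if __name__ == "__main__":
    for author, date, text in extract_comments("coded_transcript.docx"):  # hypothetical file name
        print(f"{author}\t{date}\t{text}")
```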

You can move the data from the Word table into an Excel sheet where you can sort codes for patterns or frequencies, a function that our team was looking for in a program as shown below:

A picture of the dummy text table in an Excel sheet where codes have been sorted and grouped together by code name to establish frequencies.
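As a rough illustration of that sorting step, here is a small pandas sketch; the file name codes.xlsx and the column names "Comment Text" and "Author" are hypothetical placeholders standing in for however your team saves the macro's output.

```python
# Illustrative sketch: count how often each code label appears in the extracted table.
# Assumes the macro's output table was pasted into Excel and saved as "codes.xlsx"
# with columns including "Comment Text" (the code label) and "Author" (the coder).
import pandas as pd

codes = pd.read_excel("codes.xlsx")  # reading .xlsx requires the openpyxl package

# Frequency of each code across the whole data set
code_counts = codes["Comment Text"].str.strip().value_counts()
print(code_counts)

# Frequency of each code broken out by coder, useful when comparing team members' coding
by_author = codes.groupby(["Comment Text", "Author"]).size().unstack(fill_value=0)
print(by_author)
```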

This Word macro was a good fit for our team for many reasons. First, our members could create comments on a Word document regardless of their operating system. Second, we could continue to house our data on our institution’s servers, ensuring our projects meet strict IRB data security measures. Third, the Word macro allowed for basic coding features (coding multiple passages multiple times, highlighting coded text, etc.) and had a very low learning curve: teaching someone how to use Word comments. Lastly, our institution provides access to the complete Microsoft suite, so all team members, including students who would be working on projects, already had access to Word. We contacted our IT department to verify that the macro was safe and to help us download it.

Testing the Word Macro       

Once installed, I tested out the macro with our undergraduate research assistant on a qualitative project and found it to be intuitive and helpful. We coded independently and met multiple times to discuss our work. Eventually we ran the macro, pulled all comments from our data, and moved the macro tables into Excel where we manually merged our work. Through this process, we found some potential drawbacks that could impact certain teams.

First, researchers can view all previous comments, which might influence how teammates code or how second-cycle coding is performed; other programs let you hide previous codes so researchers can come to the text fresh.

Second, coding across paragraphs can create issues with the resulting table; cells merge in ways that make it difficult to sort and filter if moved to Excel, but a quick cleaning of the data took care of this issue.

Lastly, we manually merged our work, negotiating codes and content, as our codes were inductively generated; researchers working on deductive projects may bypass this negotiation and find the process of merging much faster.

Despite these potential drawbacks, we found this macro sufficient for our project as it was free to use, easy to learn, and a helpful way to organize our data. The following table summarizes the pros and cons of this macro.

Pros and Cons of the “Extract Comments to New Document” Word Macro

Pros

  • Easy to learn and use: simply providing comments in a Word document and running the macro
  • Program tracks team member codes which can be helpful in discussions of analysis
  • Team members can code separately by generating separate Word documents, then merge the documents to consensus code
  • Copying Word table to Excel provides a more nuanced look at the data
  • Program works across operating systems
  • Members can house their data in existing structures, not on cloud infrastructures
  • Macro is free to download

Cons

  • Previous comments are visible through the coding process which might impact other members’ coding or second round coding
  • Coding across paragraph breaks creates cell breaks in the resulting table that can make it hard to sort
  • Team members must manually merge their codes and negotiate code labels, overlapping data, etc.

Scientific work can be enhanced and advanced by the right tools; however, it can be difficult to determine which computer-assisted qualitative data analysis software program is right for a team or a project. Any of the programs mentioned in this post would be a good option for individuals who do not need to collaborate or for those working with publicly available data that require different data security protocols. However, the Word macro highlighted here is a great option for many research teams. In all, although there are many powerful computer-assisted qualitative data analysis software programs out there, our team found the simplest option was the best option for our projects and our needs.

References 

Create or run a macro—Microsoft Support. (n.d.). Retrieved July 17, 2023, from https://support.microsoft.com/en-us/office/create-or-run-a-macro-c6b99036-905c-49a6-818a-dfb98b7c3c9c

Harold Peach (Director). (2014, June 30). Coding text using Microsoft Word. https://www.youtube.com/watch?v=TbjfpEe4j5Y

Peach, H. (n.d.). Extract comments to new document – Word macros and tips – Work smarter and save time in Word. Retrieved July 17, 2023, from https://www.thedoctools.com/word-macros-tips/word-macros/extract-comments-to-new-document/

by Greta Underhill

Are you interested in qualitative research? Are you currently working on a qualitative project? Some researchers find it helpful to use a computer-assisted qualitative data analysis software (CAQDAS) program to help them organize their data throughout the analysis process. Although some programs can perform basic categorization for researchers, most software programs simply help researchers stay organized while they conduct the deep analysis needed to produce scientific work. You may find a good CAQDAS program especially helpful when multiple researchers work with the same data set at different times and in different ways. Choosing the right CAQDAS for your project or team can take some time and research but is well worth the investment. You may need to consider multiple factors before choosing a software program, such as cost, operating system requirements, data security, and more.

For the Ecampus Research Unit, issues with our existing CAQDAS prompted our team to search for another program that would fit our specific needs. Here’s what we were looking for:

Needs and reasoning:

  • General qualitative analysis: We needed a program for general analysis across multiple types of projects; other programs are designed for specific forms of analysis, such as Leximancer for content analysis.
  • Compatibility across computer operating systems (OS): Our team used both Macs and PCs.
  • Adherence to our institution’s IRB security requirements: Like many others, our institution and our team adhere to strict data security and privacy requirements, necessitating a close look at how a program would manage our data.
  • Basic coding capabilities: Although many programs offer robust coding capabilities, our team needed basic options such as coding one passage multiple times and visually representing coding through highlights.
  • Export of codes into tables or Excel workbooks: This function is helpful for advanced analysis and for reporting themes in multiple file formats for various audiences.
  • A low learning curve: We regularly bring in temporary team members on various projects for mentorship and research experience, making this a helpful feature.
  • A one-time purchase: A one-time purchase was the best fit for managing multiple and temporary team members on various projects.

Testing a CAQDAS

I began systematically researching different CAQDAS options for the team. I searched “computer-assisted qualitative data analysis software” and “qualitative data analysis” in Google and Google Scholar. I also consulted various qualitative research textbooks and articles, as well as blogs, personal websites, and social media handles of qualitative researchers, to identify software programs. Over the course of several months, I generated a list of programs to examine and test. Several programs were immediately removed from consideration as they are designed for different types of analysis: DiscoverText, Leximancer, MAXQDA, QDA Miner. These programs are powerful but best suited for specific analyses, such as text mining. With the remaining programs, I signed up for software trials, attended several product demonstrations, participated in training sessions, borrowed training manuals from the library, studied how-to videos online, and contacted other scholars to gather information about the programs. Additionally, I tested whether programs would work across different operating systems. I recorded details about each of the programs tested, including how they handled data, the learning curve for each, their data security, whether they worked across operating systems, how they would manage the export of codes, and whether they required a one-time or subscription-based payment. I started with three of the most popular programs: NVivo, Dedoose, and ATLAS.ti. The table below summarizes which of these programs fit our criteria.

Table comparing NVivo, Dedoose, and ATLAS.ti against the team’s requirements: general qualitative analysis, cross-OS collaboration, data security, basic coding capabilities, export of codes, low learning curve, and one-time purchase. Details of how each program fared against these requirements are discussed in the text below.

NVivo

I began by evaluating NVivo, a program I had used previously. NVivo is a powerful program that adeptly handled large projects and is relatively easy to learn. The individual license was available for one-time purchase and allowed the user to maintain their data on their own machine or institutional servers. However, it had no capabilities for cross-OS collaboration, even when clients purchased a cloud-based subscription. Our team members could download and begin using the program, but we would not be able to collaborate across operating systems.

Dedoose

I had no prior experience with Dedoose, so I signed up for a trial of the software. I was impressed with the product demonstration, which significantly helped in figuring out how to use the program. This program excelled at data visualization and allowed a research team to blind code the same files for interrater reliability if that suited the project. Additionally, I appreciated the options to view code density (how much of the text was coded) as well as which codes were present across transcripts. I was hopeful this cloud-based program would solve our cross-OS collaboration problem, but it did not pass our institution’s IRB data security requirements because it would house our data on Dedoose servers.

ATLAS.ti

ATLAS.ti was also a new program for me, so I signed up for a trial of this software. It is a well-established program with powerful analysis functions, such as helpful hierarchical coding capabilities and intuitive links among codes, quotations, and comments. But the cross-OS collaboration, while possible via the web, proved cumbersome, and this program too did not meet the data security threshold for our institution’s IRB. Furthermore, the price point meant we would need to rethink our potential collaborations with other organizational members.

Data Security

Many programs are now cloud-based; these offer powerful analysis options but unfortunately did not meet our IRB data security requirements. Ultimately, we had to cut Delve, MAXQDA, Taguette, Transana, and webQDA. All of these programs would have been low-learning-curve options with basic coding functionality and cross-OS collaboration; however, for our team to collaborate, we would need to purchase a cloud-based subscription, which can quickly become prohibitively expensive, and house our data on company servers, which would not pass our institutional threshold for data security.

Note-taking programs

After testing multiple programs, I started looking beyond qualitative software programs and into note-taking programs such as DevonThink, Obsidian, Roam Research, and Scrintal. I had hoped these might provide a workaround by organizing data for collaborative teams in ways that would facilitate analysis. However, most of them did not have functionalities that could be used for coding, or they had high learning curves that precluded our team from using them.

It seemed like I had exhausted all options and I still did not have a program to bring back to the Research Unit. I had no idea that a low-cost option was just a YouTube video away. Stay tuned for the follow-up post where we dive into the solution that worked best for our team.

 

For the first part of this post, please see Media Literacy in the Age of AI, Part I: “You Will Need to Check It All.”

Just how, exactly, we’re supposed to follow Ethan Mollick’s caution to “check it all” happens to be the subject of a lively, forthcoming collaboration from two education researchers who have been following the intersection of new media and misinformation for decades.

In Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online (University of Chicago Press, November 2023), Mike Caulfield and Sam Wineburg provide a kind of user’s manual to the modern internet. The authors’ central concern is that students—and, by extension, their teachers—have been going about the process of verifying online claims and sources all wrong—usually by applying the same rhetorical skills activated in reading a deep-dive on Elon Musk or Yevgeny Prigozhin, to borrow from last month’s headlines. Academic readers, that is, traditionally keep their attention fixed on the text—applying comprehension strategies such as prior knowledge, persisting through moments of confusion, and analyzing the narrative and its various claims about technological innovation or armed rebellion in discipline-specific ways.

The Problem with Checklists

Now, anyone who has tried to hold a dialogue on more than a few pages of assigned reading at the college level knows that sustained focus and critical thinking can be challenging, even for experienced readers. (A majority of high school seniors are not prepared for reading in college, according to 2019 data.) And so instructors, partnering with librarians, have long championed checklists as one antidote to passive consumption, first among them the CRAAP test, which stands for currency, relevance, authority, accuracy, and purpose. (Flashbacks to English 101, anyone?) The problem with checklists, argue Caulfield and Wineburg, is that in today’s media landscape—awash in questionable sources—they’re a waste of time. Such routines might easily keep a reader focused on critically evaluating “gameable signals of credibility” such as functional hyperlinks, a well-designed homepage, airtight prose, digital badges, and other supposedly telling markers of authority that can be manufactured with minimal effort or purchased at little expense, right down to the blue checkmark made infamous by Musk’s platform-formerly-known-as-Twitter.

Three Contexts for Lateral Reading

One of the delights in reading Verified is drawing back the curtains on a parade of little-known hoaxes, rumors, actors, and half-truths at work in the shadows of the information age—ranging from a sugar industry front group posing as a scientific think tank to headlines in mid-2022 warning that clouds of “palm-sized flying spiders” were about to descend on the East Coast. In the face of such wild ideas, Caulfield and Wineburg offer a helpful, three-point heuristic for navigating the web—and a sharp rejoinder to the source-specific checklists of the early aughts. (You will have to read the book to fact-check the spider story, or as the authors encourage, you can do it yourself after reading, say, the first chapter!) “The first task when confronted with the unfamiliar is not analysis. It is the gathering of context” (p. 10). More specifically:

  • The context of the source — What’s the reputation of the source of information that you arrive at, whether through a social feed, a shared link, or a Google search result?
  • The context of the claim — What have others said about the claim? If it’s a story, what’s the larger story? If a statistic, what’s the larger context?
  • Finally, the context of you — What is your level of expertise in the area? What is your interest in the claim? What makes such a claim or source compelling to you, and what could change that?
“The Three Contexts” from Verified (2023)

At a regional conference of librarians in May, Wineburg shared video clips from his scenario-based research, juxtaposing student sleuths with professional fact checkers. His conclusion? By simply trying to gather the necessary context, learners with supposedly low media literacy can be quickly transformed into “strong critical thinkers, without any additional training in logic or analysis” (Caulfield and Wineburg, p. 10). What does this look like in practice? Wineburg describes a shift from “vertical” to “lateral reading” or “using the web to read the web” (p. 81). To investigate a source like a pro, readers must first leave the source, often by opening new browser tabs, running nuanced searches about its contents, and pausing to reflect on the results. Again, such findings hold significant implications for how we train students in verification and, more broadly, in media literacy. Successful information gathering, in other words, depends not only on keywords and critical perspective but also on the ability to engage in metacognitive conversations with the web and its architecture. Or, channeling our eight-legged friends again: “If you wanted to understand how spiders catch their prey, you wouldn’t just look at a single strand” (p. 87).

SIFT graphic by Mike Caulfield with icons for stop, investigate the source, find better coverage, and trace claims, quotes, and media to the original context.

Image 2: Mike Caulfield’s “four moves”

Reconstructing Context

Much of Verified is devoted to unpacking how to gain such perspective while also building self-awareness of our relationships with the information we seek. As a companion to Wineburg’s research on lateral reading, Caulfield has refined a series of higher-order tasks for vetting sources called SIFT, or “The Four Moves” (see Image 2). By (1) Stopping to take a breath and get a look around, (2) Investigating the source and its reputation, (3) Finding better sources of journalism or research, and (4) Tracing surprising claims or other rhetorical artifacts back to their origins, readers can more quickly make decisions about how to manage their time online. You can learn more about the why behind “reconstructing context” at Caulfield’s blog, Hapgood, and as part of the OSU Libraries’ guide to media literacy. (Full disclosure: Mike is a former colleague from Washington State University Vancouver.)

If I have one complaint about Caulfield and Wineburg’s book, it’s that it dwells at length on the particulars of analyzing Google search results, which fill pages of accompanying figures and a whole chapter on the search engine as “the bestie you thought you knew” (p. 49). To be sure, Google still occupies a large share of the time students and faculty spend online. But as in my own quest for resources on learning norms, readers are already turning to large language model tools for help in deciding what to believe online. In that respect, I find other chapters in Verified (on scholarly sources, the rise of Wikipedia, deceptive videos, and so-called native advertising) more useful. And if you go there, don’t miss the authors’ final take on the power of emotion in finding the truth—a line that sounds counterintuitive but, in context, adds another, rather moving dimension to the case against checklists.

Given the acceleration of machine learning, will lateral reading and SIFTing hold up in the age of AI? Caulfield and Wineburg certainly think so. Building out context becomes all the more necessary, they write in a postscript on the future of verification, “when the prose on the other side is crafted by a convincing machine” (p. 221). On that note, I invite you and your students to try out some of these moves on your favorite chatbot.

Another Postscript

The other day, I gave Microsoft’s AI-powered search engine a few versions of the same prompt I had put to ChatGPT. In “balanced” mode, Bing dutifully recommended resources from Stanford, Cornell, and Harvard on introducing norms for learning in online college classes. Over in “creative” mode, Bing’s synthesis was slightly more offbeat—including an early-pandemic blog post on setting norms for middle school faculty meetings in rural Vermont. More importantly, the bot wasn’t hallucinating. Most of the sources it suggested seemed worth investigating. Pausing before each rabbit hole, I took a deep breath.

Related Resource

Oregon State Ecampus recently rolled out its own AI toolkit for faculty, based on an emerging consensus that developing capacities for using this technology will be necessary in many areas of life. Of particular relevance to this post is a section on AI literacy, conceptualized as “a broad set of skills that is not confined to technical disciplines.” As with Verified, I find the toolkit’s frameworks and recommendations on teaching AI literacy particularly helpful. For instance, if students are allowed to use ChatGPT or Bing to brainstorm and evaluate possible topics for a writing assignment, “faculty might provide an effective example of how to ask an AI tool to help, ideally situating explanation in the context of what would be appropriate and ethical in that discipline or profession.”

References

Caulfield, M., & Wineburg, S. (2023). Verified: How to think straight, get duped less, and make better decisions about what to believe online. University of Chicago Press.

Mollick, E. (2023, July 15). How to use AI to do stuff: An opinionated guide. One Useful Thing.

Oregon State Ecampus. (2023). Artificial Intelligence Tools.

Have you found yourself worried or overwhelmed in thinking about the implications of artificial intelligence for your discipline? Have you wondered whether, for example, your department’s approaches to teaching basic skills such as library research and source evaluation still hold up? You’re not alone. As we enter another school year, many educators continue to think deeply about questions of truth and misinformation, creativity, and how large language model (LLM) tools such as chatbots are reshaping higher education. Along with our students, faculty (oh, and instructional designers) must consider new paradigms for our collective media literacy.

Here’s a quick backstory for this two-part post. In late spring, shortly after the “stable release” of ChatGPT to iOS, I started chatting with the GPT-3.5 model, which innovator Ethan Mollick describes as “very fast and pretty solid at writing and coding tasks,” if a bit lacking in personality. Other, internet-connected models, such as Bing, have made headlines for their resourcefulness and darker, erratic tendencies. But so far, access to GPT-4 remains limited, and I wanted to better understand the more popular engine’s capabilities. At the time, I was preparing a workshop for a creative writing conference. So, I asked ChatGPT to write a short story in the modern style of George Saunders, based in part on historical events. The chatbot’s response, a brief burst of prose it titled “Language Unleashed,” read almost nothing like Saunders. Still, it got my participants talking about questions of authorship, originality, representation, etc. Check, check, check.

The next time I sat down with GPT-3.5, things went a little more off-script.

One faculty developer working with Ecampus had asked our team about establishing learning norms in a 200-level course dealing with sensitive subject matter. As a writing instructor, I had bookmarked a few resources in this vein, including strategies from the University of Colorado Boulder. So, I asked ChatGPT to create a bibliographic citation of Creating Collaborative Classroom Norms, which it did with the usual lightning speed. Then I got curious about what else this AI model could do, as my colleagues Philip Chambers and Nadia Jaramillo Cherrez have been exploring. Could ChatGPT point me to some good resources for faculty on setting norms for learning in online college classes?

“Certainly!” came the cheery reply, along with a summary of five sources that would provide me with “valuable information and guidance” (see Image 1). Noting OpenAI’s fine-print caveat (“ChatGPT may produce inaccurate information about people, places, or facts”), I began opening each link, expecting to be teleported to university teaching centers across the country. Except none of the tabs would load properly.

“Sorry we can’t find what you’re looking for,” reported Inside Higher Ed. “Try these resources instead,” suggested Stanford’s Teaching Commons. A closer look with Internet Archive’s Wayback Machine confirmed that the five sources in question were, like “Language Unleashed,” entirely fictitious.

An early chat with ChatGPT-3.5, asking whether the chatbot can point the author to some good resources for faculty on setting classroom norms for learning in online college classes. "Certainly," replies ChatGPT, in recommending five sources that "should provide you with valuable information and guidance."

Image 1: An early, hallucinatory chat with ChatGPT-3.5

As Mollick would explain months later: “it is very easy for the AI to ‘hallucinate’ and generate plausible facts. It can generate entirely false content that is utterly convincing. Let me emphasize that: AI lies continuously and well. Every fact or piece of information it tells you may be incorrect. You will need to check it all.”

The fabrications and limitations of chatbots lacking real-time access to the ever-expanding web have by now been well documented. But as an early adopter, I found the speed and confidence ChatGPT brought to the task of inventing and describing fake sources unnerving. And without better guideposts for verification, I expect students less familiar with the evolution of AI will continue to experience confusion, or worse. As The Washington Post recently reported, chatbots can easily say offensive things and act in culturally biased ways—”a reminder that they’ve ingested some of the ugliest material the internet has to offer, and they lack the independent judgment to filter that out.”

Just how, exactly, we’re supposed to “check it all” happens to be the subject of a lively, forthcoming collaboration from two education researchers who have been following the intersection of new media and misinformation for decades.

Stay tuned for an upcoming post with the second installment of “Media Literacy in the Age of AI,” a review of Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online by Mike Caulfield and Sam Wineburg (University of Chicago Press, November 2023).

References

Mollick, E. (2023, July 15). How to use AI to do stuff: An opinionated guide. One Useful Thing.

Wroe, T., & Volckens, J. (2022, January). Creating collaborative classroom norms. Office of Faculty Affairs, University of Colorado Boulder.

Yu Chen, S., Tenjarla, R., Oremus, W., & Harris, T. (2023, August 31). How to talk to an AI chatbot. The Washington Post.

By Cat Turk and Mary Ellen Dello Stritto

In this time of rapid change in online education, we can benefit from leveraging the expertise of faculty who have experienced the evolution of online education. At the Oregon State University (OSU) Ecampus Research Unit, we have been learning from a group of instructors who have taught online for ten years or more. A review of recent research uncovered that these instructors are an untapped resource. Their insights can provide valuable guidance for instructors who are just beginning their careers or instructors who may be preparing to teach online for the first time. Further, their perspectives can also be enlightening for online students.

In 2018-2019 we conducted interviews with 33 OSU faculty who had been teaching online for 10 years or more as a part of a larger study. Two of the questions we asked them were the following:

  1. What skills do you think are most valuable for online instructors to have?
  2. What skills do you think are most valuable for online students to have?

We will share some of the results of a qualitative analysis of these questions and highlight the similarities and differences.

When asked about the most valuable skills for online instructors, three key skills emerged: communication, organization, and time management. When asked about the most valuable skills for online students to have, the same skills were among the most frequently mentioned by these instructors.

As the table below shows, in the responses about skills for online instructors, communication emerged as the most prominent skill, with 85% of instructors in the study emphasizing its importance, while time management and organization were split evenly at 45%. In their responses about skills for students, 64% of the instructors emphasized both communication and time management, while 42% discussed organization. When discussing communication for instructors, they indicated that effective communication is essential for building rapport with students, providing clear instructions, and facilitating meaningful interactions in the online environment. Organization (such as structuring course materials or their weekly work process) and time management skills (such as scheduling availability to connect with students) were also highly valued by these instructors. Read more about the analysis of instructor skills here.

Skill              Skills for Instructors    Skills for Students
Communication      28 responses (85%)        21 responses (64%)
Time Management    15 responses (45%)        21 responses (64%)
Organization       15 responses (45%)        14 responses (42%)
Self-Motivation    —                         21 responses (64%)

Frequency of responses of skills for instructors and students.

The responses to both questions emphasized the significance of communication skills, both in written assignments and in proactive connections within the online learning environment. Instructors articulated that online students needed to be proactive communicators themselves. Examples of this include contacting their instructors with questions and requests for clarification in a timely way, interacting with their peers in a respectful manner, and turning in quality written assignments that demonstrate comprehension of the learning material. For students, clear and effective communication ensures understanding and engagement, while organization facilitates seamless navigation through course materials, and time management ensures that students are able to make the most of the asynchronous environment.

While instructors considered time management and organization just as crucial for students, their responses showed that students need these skills for different reasons than instructors do. Instructors personally valued time management and organization because of the nature of facilitating courses online. When the online classroom can travel from place to place, setting aside intentional blocks of time and structuring hours accordingly were considered essential for maintaining a work-life balance and for ensuring that tasks were not missed.

On the other hand, according to these instructors, students need time management and organization due to the asynchronous and sometimes isolating nature of online courses. One instructor stressed that:

 “[Students] do need to be more organized than on-ground students, because there’s not that weekly meeting to keep students on track.”

These instructors indicated that some online students may need to structure their study time to accommodate a different time zone, while others may need to structure their academic pursuits around careers or children. Another instructor emphasized that:

“A lot of our [online students] actually work full-time, so they have families and kids and have to be much more organized too.”

While there were overlaps with the responses to the two questions, a notable difference was the emergence of another skill for students: self-motivation. This concept of self-motivation emerged from the instructor responses about students’ capacity to persevere in online courses. This included their level of motivation, capacity to learn on their own, and comfort with self-paced learning.

One instructor said the following about students’ self-motivation:

“Some people would say it’s self-discipline, but I think it’s more of they have to have a purpose for that class.”

Self-motivation was not mentioned by the instructors as a skill for online instructors, suggesting that these instructors perceive this as more pertinent to students for success in managing their own learning process. It is worth noting that proactive communication was highlighted as an essential aspect of self-motivation, with instructors emphasizing that students who take the initiative in reaching out to them tend to be more successful. This observation suggests that self-motivated individuals are more likely to actively seek support and clarification, which can enhance their learning experience and overall success. 

Another noteworthy aspect was the need for students to be comfortable with learning in physical isolation. Instructors acknowledged that online learners must navigate the challenges of studying independently without the immediate presence of peers and instructors. For online students specifically,

“They need to be motivated because they’re not going to have peers sitting in a classroom with them, and they don’t have a place that they have to physically go every week.”

This finding underscores the importance of maintaining motivation and engagement, as students ideally possess an intrinsic drive to succeed despite the absence of a physical connection to the university and their classmates.

The findings from this study highlight the importance of certain similar skills for both online instructors and students. Effective communication, organization, and time management are vital for success in the online learning environment for both instructors and students. We found this to be an interesting connection that online students might benefit from understanding: these are key skills that students and instructors have in common.

Our findings about self-motivation may be useful for online instructors. Consider incorporating strategies that foster student self-motivation, such as goal-setting exercises, regular check-ins, and providing opportunities for self-reflection. By empowering students to take ownership of their learning, instructors might enhance student engagement and success in the online environment.

Further, students can learn from the instructors’ emphasis on communication, organization, and time management skills. They can intentionally work on improving their communication skills, seeking clarification when needed, and actively participating in online discussions. Developing effective organization and time management strategies, such as creating schedules, prioritizing tasks, and breaking them down into manageable chunks, may significantly enhance their online learning experience.

The field of online education is evolving rapidly, and here we can see how educators and students alike are adapting to these changes. The experiences of long-term online instructors provide valuable insights into the skills necessary for success in the online learning environment. In the future, what answers would we find if we asked students the same question: what do online students think are the skills needed to succeed in the online classroom? By understanding the shared and distinct perspectives of instructors and students, educators can design effective online courses and support systems that foster meaningful learning experiences and empower students to succeed.

Learning outcomes (LOs) are used in instructional design to describe the skills and knowledge that students should have at the end of a course or learning unit, and to design assessments and activities that support these goals. It is widely agreed that specific, measurable outcomes are essential for planning instruction; however, some educators question the benefits of explicitly presenting them to students. I have been asked (and wondered myself): “What is the point of listing learning outcomes in the course?” “How do they help learning?” “Do students even read them?”

So, I went on a quest for research that attempted to answer such questions. I was particularly interested in unit/module-level outcomes, as those are the ones that directly steer the content, and students see them throughout the course. Here’s a brief summary of what I found.

Note: the studies use the terms “learning outcome”, “learning objective”, or “learning goal” – they all refer to the same concept: a specific and measurable description of the skills and knowledge that students are expected to have at the end of a learning unit/period of study. At OSU we use the term “outcomes”.

What Does the Research Say?

Armbruster et al. (2009) redesigned an Introductory Biology course at Georgetown University, Washington, DC, using active learning and student-centered pedagogies, leading to increased student performance and satisfaction. One of the strategies used was to include explicit learning goals in the lecture slides and to label exam and quiz questions with the related goals. Students’ attitudes towards the course were assessed via a questionnaire and a comparison of university-administered student evaluations. Students were asked to rank lecture components in terms of helpfulness to learning, and the authors found that one of the highest-ranking elements was the inclusion of explicit learning goals.

Simon and Taylor (2009) surveyed 597 students from computer science and microbiology and immunology courses at the University of British Columbia, where instructors presented learning goals at the beginning of each lecture or topic area. The questions were open-ended, and the answers were coded into a number of categories, which helped the authors identify several values of goals. The main value was “knowing what I need to know”: students reported that the goals showed them how to focus their efforts and felt that the goals “allowed them to organize the information more effectively and be more expertlike in their approach to the class” (Simon & Taylor, 2009, p. 55). The authors did not find any difference between presenting the goals before each lecture versus at the beginning of the unit/topic area.

Brooks et al. (2014) examined students’ views of learning outcomes at the University of Leicester, UK. First, they surveyed 918 students taking Biological Sciences, English and Medicine courses. They found that 81% of participants agreed or strongly agreed that learning outcomes are useful learning aids. Additionally, 46% found LOs more useful as their courses progressed, and 49% reported that they engaged more with the LOs as the course progressed. The authors also investigated when LOs are most useful, and found that the most common answer (46%) was when reviewing the material. Moreover, 49% of students reported that LOs can only be fully understood at the end of a module. The researchers followed up on these results with a focus group, which confirmed that students use LOs in various ways and at various points during the course.

Osueke et al. (2018) looked into students’ use and perceptions of learning objectives at the University of Georgia. 185 students in an undergraduate Introduction to Biochemistry and Molecular Biology course took part in the study. The instructors included instructions in the syllabus, which they also stated on the first day of class: “Focus on the learning objectives. The exams will assess your accomplishment of the learning objectives. Use the learning objectives as a guide for what to focus on when you are completing assignments and studying for exams.” Students completed two assignments requiring them to explain their use of the LOs. The researchers found that many students (33.8%) reported they had been instructed on how to use LOs to study – these instructions ranged from a passive “look over” to using them as a study guide. The ways students used the LOs were: as questions to answer (47.4%), as a resource for studying (24.1%), as a self-assessment tool (14.3%), and passively (13.5%). When asked why they found the LOs helpful, students said that the LOs helped them narrow down the information (57.1%), organize their studying (23.3%), communicate information (5.3%), and monitor their understanding (4.5%), or that the LOs forced them to study (1.5%).

Sana et al. (2020) conducted three experiments aiming to find to what extent presenting LOs improves retention of information. Participants were asked to read five passages on a neuroscience topic, and then they were tested on comprehension and retention. The experiments took place at McMaster University, Ontario, and employed different participants, methods, materials, and procedures. They found that: interpolating LOs throughout the lesson (as opposed to presenting all LOs at the beginning) improved learning compared to not including LOs, especially when students’ attention was explicitly directed to them; converting LOs into pretest questions (that students attempted to answer) further enhanced performance; multiple-choice and short answer questions were equally effective; and withholding feedback on pretests was more effective than providing feedback – the explanation proposed by the authors for this last finding was that students may be more motivated to seek the correct answers themselves, which causes further processing of the material.

Barnard et al. (2021) investigated students’ and academics’ perspectives on the purpose of learning objectives and approaches to assessment preparation. They conducted focus groups with participants from an undergraduate Psychology course at the University of Nottingham, UK. The students reported that LOs are useful for guidance, as they “use them to create direction for some of the learning and revision strategies” (Barnard et al., 2021, p. 679).

Conclusions and Recommendations

Good news! The findings of these studies suggest that many students do appreciate clear LOs and use them to guide their learning. The LOs help them understand what they are expected to know – thus, students use them to focus their study, to review for an exam, and to self-check their knowledge.

As instructors and instructional designers, what can we do to help students take full advantage of LOs? Apart from having specific and measurable LOs, make sure that the LOs are well aligned with the activities, and make this alignment explicit. It may also be helpful to offer some guidance on how to use the LOs, for instance by prompting students to recap their learning at the end of a unit based on the LOs. Finally, we could turn the LOs into questions and use them as a pretest.

For more on creating and using LOs, check out the CBE—Life Sciences Education website, which has an informative guide, including a section on student use. 

Do you have any other ideas or resources on how to use learning outcomes to improve students’ experience and study habits? If so, we’d love to hear from you!

References

Armbruster, P., Patel, M., Johnson, E., & Weiss, M. (2009). Active learning and student-centered pedagogy improve student attitudes and performance in Introductory Biology. CBE Life Sciences Education, 8(3), 203–213. https://doi.org/10.1187/cbe.09-03-0025

Barnard, M., Whitt, E., & McDonald, S. (2021). Learning objectives and their effects on learning and assessment preparation: Insights from an undergraduate psychology course. Assessment and Evaluation in Higher Education, 46(5), 673–684. https://doi.org/10.1080/02602938.2020.1822281

Brooks, S., Dobbins, K., Scott, J. J. A., Rawlinson, M., & Norman, R. I. (2014). Learning about learning outcomes: The student perspective. Teaching in Higher Education, 19(6), 721–733. https://doi.org/10.1080/13562517.2014.901964

Osueke, B., Mekonnen, B., & Stanton, J. D. (2018). How undergraduate science students use learning objectives to study. Journal of Microbiology & Biology Education, 19(2). https://doi.org/10.1128/jmbe.v19i2.1510

Sana, F., Forrin, N. D., Sharma, M., Dubljevic, T., Ho, P., Jalil, E., & Kim, J. A. (2020). Optimizing the efficacy of learning objectives through pretests. CBE Life Sciences Education, 19(3), ar43–ar43. https://doi.org/10.1187/cbe.19-11-0257

Simon, B., & Taylor, J. (2009). What is the value of course-specific learning goals? Journal of College Science Teaching, 39(2), 52–57. Retrieved from: https://www.colorado.edu/sei/sites/default/files/attached-files/what_is_the_value_of_course-specific_learning_goals.pdf