A new Ecampus project has underscored the potential of graduate teaching assistants (GTAs) to add immense value to the process of reviewing and improving courses by thinking both as an instructor and as a student. Ecampus analyzed data and collaborated with partners in academic departments to identify five courses in which students were experiencing barriers to success that were not being addressed in Ecampus’ rigorous course development and support process. The academic departments also identified five pedagogically minded and innovative GTAs to analyze and begin addressing the barriers to student success. The result is a pilot that we believe will be beneficial for the students, the GTA Fellows, and the faculty, while also providing learning opportunities for all stakeholders about how to tackle the most challenging course design problems. While we’re still in the first term of the pilot, our collaborative investigation process and emerging creative solutions have already made us very excited about the findings to come.

Determining first steps 

At Oregon State Ecampus, we have a strong framework to help support and maintain course quality.  Courses are carefully and thoughtfully designed through a collaborative effort between a faculty course developer who has received training in online course design best practices and an Ecampus instructional designer. The development process spans from the two-term initial design and build period, where we ensure courses meet our set of Ecampus Essentials, to iterative first-term adjustments, to support for continued lifetime maintenance, to formal course “refreshes” every 3-5 years.  Finally, many of our courses are also submitted for and earn Quality Matters certification, an important indicator of quality based on research-based standards. This rigorous and supportive development process aims to ensure that each course stays relevant, accessible, and effective for learners.

But, in spite of all this careful planning, development, review, and maintenance, what is the appropriate response when courses have recently come through this rigorous and comprehensive design process, faculty have been trained in best practices for teaching online, and students are still encountering barriers to success in the course?

We recently launched a new pilot project to tackle this question head-on.  As a starting point, and using a few basic indicators of student persistence, retention, and success in our courses — such as the DFWU rate, the rate at which students receive grades of D, F, W (withdrawal), or U (unsatisfactory) — we created an initial list of courses across our online offerings where students were least successful in passing or completing. From this list, we identified which courses had been redesigned within the last five years (to rule out our standard redevelopment process as a solution for increasing student success).  The latter group of courses underwent additional review by our team to identify any stand-out issues that could be easily resolved.
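For readers who want to see the arithmetic, the DFWU rate described above is simply the share of final grades that fall in the D, F, W, or U categories. Here is a minimal sketch of that calculation; the grade list is invented for illustration.

```python
# Illustrative DFWU-rate calculation (grade data below is hypothetical).
from collections import Counter

def dfwu_rate(grades):
    """Share of students receiving D, F, W (withdrawal), or U (unsatisfactory)."""
    counts = Counter(grades)
    dfwu = sum(counts[g] for g in ("D", "F", "W", "U"))
    return dfwu / len(grades)

# One hypothetical section of ten students:
section = ["A", "B", "W", "C", "D", "F", "A", "W", "B", "U"]
print(f"DFWU rate: {dfwu_rate(section):.0%}")  # → DFWU rate: 50%
```

Computed per section or per term, this single number is enough to rank courses for the kind of short-listing described above.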

What we arrived at was a short list of courses that had higher-than-usual DFWU rates and had been redesigned recently.  In these courses, we knew that something else was going on beneath the surface; the underlying problem was neither an obvious design issue nor a facilitation problem.  Many of the courses on this short list are problematic not for a high rate of D/F/U grades at the end of the term, but rather for a high rate of W (withdrawal) grades. Our Ecampus student population is largely composed of non-traditional students who have a different set of needs than our more traditional on-campus students; namely, they need flexibility to balance their busy out-of-school lives while also meeting their educational goals. So, through this pilot, we wanted to find an effective way to determine what could be changed to better support Ecampus students in staying in (and succeeding in) these courses that were particularly challenging for reasons we could not easily identify.

Designing the pilot

With these course profiles compiled, we reached out to five department partners to assess their interest in collaborating on a project to further review and revise a course (or, in some cases, a sequence of courses).  We proposed to fund a Graduate Teaching Assistant (GTA) for three consecutive quarters to evaluate and then to propose and implement innovative interventions in these targeted courses with an eye toward increasing online student success.  In general, the pilots follow this schedule:

  • Quarter 1: the GTA is an active observer of the course(s), and reviews previous sections’ data to look for patterns in obstacles that students might face; in collaboration with the faculty course lead and Ecampus staff, the GTA then proposes a first set of interventions for quarter 2; IRB approval is secured if needed for the design of the interventions and/or for possible future publication.
  • Quarter 2: the GTA continues to be an active observer in the course(s) and helps to implement the first set of interventions; in collaboration with the faculty course lead and Ecampus staff, the GTA then proposes new or refined interventions for quarter 3.
  • Quarter 3: the GTA continues to be an active observer in the course(s) and helps the instructor to implement the new or refined interventions; data reporting is wrapped up and a campus presentation is arranged.

Note that, across the three quarters, the GTA does not undertake the traditional tasks associated with a teaching assistant in an online course, such as grading assignments, responding to student questions, or holding virtual office hours.  Modeling our pilot on fellowship programs such as Duke University’s Bass Digital Education Fellowships, all stakeholders agreed to free the GTA from these time-consuming tasks so that they could focus instead on observational work and then on planning and implementing interventions.  The instructors assigned to these courses continue their regular duties of interacting with and assessing students.

The unique advantage of GTAs

With our five unique pilots underway as of this summer, it has already become clear that the key to this pilot is the unique positioning of the GTA to tackle these student success problems from both the faculty and student perspectives.  At Oregon State, GTAs regularly serve as teaching assistants or instructors of record in on-campus, hybrid, and online courses, so our GTAs have come to these pilot projects with prior teaching experience (and, often, with some training in pedagogy and course design).  Yet, our pilot program GTAs are also still students themselves, so they are particularly attuned to the student experience as they follow and track current and upcoming groups of students working through these courses.

Our pilot will also benefit from the fact that these GTAs have a strong interest in pedagogy and in their own professional development as instructors.  With that in mind, we have worked to structure some of the individualized goals of each pilot to reflect how we can help the GTA get the most value out of this opportunity (such as through a campus presentation, a published paper when we have results, or connecting with Ecampus leaders as possible references for job applications).  The final name for our pilot – GTA Innovations for Student Success Fellowship – is crafted both to reflect the central goals of the pilot (student success) and to call out the important and unique work that GTAs are doing as fellows.

Looking forward (to sharing innovative interventions and results)

We are still in the very early stages of each of these pilots, so while we don’t yet have any results to share, the deep engagement of our stakeholders in this process has been heartening, and wonderful plans are in the works for the first sets of interventions to be implemented this fall.  We are also so pleased to see the support behind allowing this group of GTAs to inspire innovative online teaching within their home departments, and the willingness of the faculty who teach the courses under review to think collaboratively and differently about the creative ways we can support their online students.

As part of their pilot work, we will encourage these GTAs to contribute to the blog and share their insights and takeaways along the way.  What they learn about how to support student needs in these particularly challenging courses and course sequences, learning design, teaching methods that better motivate disengaged learners, etc. will no doubt be useful to Ecampus stakeholders across the university and beyond.  Stay tuned for more!

Introduction

For those who work in higher education, it may not come as a surprise that the field of instructional design has grown in tandem with the expansion of online programs and courses. Evidence of this growth abounds. Yet while the discipline has expanded rapidly in recent years, its history is not well known by those outside of the field.

This post will cover a brief history of instructional design with a particular emphasis on design: What influences design? How are design decisions made? How has the way we approached design changed over time? We’ll also consider how instructional designers actually design courses and the importance of course structure as an inclusive practice.

Instructional Design: Theory and History

Every instructional design curriculum teaches three general theories or theoretical frameworks for learning: behaviorism, cognitivism, and constructivism. While an instructional designer (ID) probably wouldn’t call herself a cognitivist or a behaviorist, for example, these theories influence instructional design and the way IDs approach the design process.

The field of instructional design is widely believed to have originated during World War II, when training videos were created to prepare soldiers with the knowledge and skills they would need in battle. This form of audio-visual instruction, although embraced by the military, was not initially embraced by schools.

Image: “B.F. Skinner” Portrait Art Print by Xiquid

In the 1950s, behaviorists, such as B.F. Skinner, dominated popular thought on how to teach and design instruction. For behaviorists, learning results in an observable change in behavior. The optimal design of a learning environment from a behaviorist perspective would be an environment that increases student motivation for learning, provides reinforcement for demonstrating learning, and removes distractions. Behaviorists are always designing for a specific response, and instruction is intended to teach discrete knowledge and skills. For behaviorists, motivation is critical, but only to the extent that it elicits the desired behavior.

Cognitivism was largely a response to behaviorism. Cognitivists emphasized the role of cognition and the mind; they acknowledged that, when designing learning environments, there is more to consider than the content to be learned. More than environmental factors and instructional components, the learners’ own readiness, or prior knowledge, along with their beliefs and attitudes, require consideration. Design, from a cognitivist approach, often emphasizes preparedness and self-awareness. Scaffolding learning and teaching study skills and time-management (metacognitive skills) are practices grounded in a cognitivist framework.

While cognitivists emphasize the learner experience, and in particular, acknowledge that learners’ existing knowledge and past histories influence their experience, the learner is still receiving information and acting on it–responding to carefully designed learning environments.

Constructivism, the most current of the three frameworks, emphasizes that the learner constructs their own understanding of the world rather than merely responding to it. Learners are actively creating knowledge as they engage with the learning environment.

All–or nearly all–modern pedagogical approaches are influenced by these theoretical frameworks for learning.

Design Approaches

A single course can be seen as a microcosm of theoretical frameworks, historical models, and value-laden judgements of pedagogical approaches

Learning theories are important because they influence our design models, but by no means are learning theories the only factor guiding design decisions. In our daily work, IDs rely on many different tools and resources. Often, IDs will use multiple tools to make decisions and overcome design challenges. So, how do we accomplish this work in practice?

  1. We look to established learning outcomes. We talk about learning goals and activities with faculty. We ask questions to guide decision making about how to meet course learning outcomes through our course design.
  2. We look to research-based frameworks and pedagogical approaches such as universal design for learning (UDL), inclusive design, active learning, student-centered design, and many other models. These models may be influenced by learning theory, but they are more practical in nature.
  3. We look to human models. We often heed the advice and follow the examples of our more experienced peers.
  4. We look to our own past experiences and solutions that have worked in similar situations, and we apply what we learned to future situations.
  5. We make professional judgements rooted in our tacit knowledge of what we believe “good design” looks like. For better or for worse, we follow our intuition. Our gut.

Over time, one can see that instructional design has evolved from an emphasis on teaching discrete knowledge and skills that can be easily measured (behaviorism) to an emphasis on guiding unique learners to actively create their own understanding (constructivism). Design approaches, however, are not as straightforward as simply taking a theory and applying it to a learning situation or some course material. Instructional design is nuanced. It is art and science. A single course can be seen as a microcosm of theoretical frameworks, historical models, and value-laden judgements of pedagogical approaches–as well as value-laden judgements of disciplinary knowledge and its importance. But. That’s another blog post.

Design Structure to Meet Diverse Needs

Meeting diverse needs, however, does not necessitate complexity in course design

If learners are unique, if learning can’t be programmed, if learning environments must be adaptable, if learners are constructing their own knowledge, how is all of this accommodated in a course design?

Designing from a modern constructivist perspective, from the viewpoint that students have vastly different backgrounds, past experiences, and world-views, requires that many diverse needs be accommodated in a single course. Meeting diverse needs, however, does not necessitate complexity in course design. Meeting diverse needs means that we need to provide support, so that it is there for those who need it, but not distracting to those who don’t need it. Design needs to be intuitive and seamless for the user.

Recent research on inclusive practices in design and teaching identifies structure as an inclusive practice. Design can be viewed as a way of applying, or ensuring, course structure. In that way, working with an instructional designer will make your course more inclusive. But, I digress. Or, do I?

Sathy and Hogan contend, in their guide, that structure benefits all students, but some, particularly those from underrepresented groups, benefit disproportionately. Conversely, not enough structure leaves too many students behind. Since many of the same students who benefit from additional course structure also succeed at lower rates, providing course structure may also help to close the achievement gap.

How are We Doing This?

The good news is that Ecampus is invested in creating courses that are designed–or structured–in a way that meets the needs of many different learners. Working with an Ecampus instructional designer will ensure that your course materials are clearly presented to your students. In fact, many of the resources we provide–course planning templates, rubrics, module outlines, consistent navigation in Canvas, course banners and other icons and visual cues–are intended to ensure that your students can navigate your course materials and find what they need, when they need it.

References

Icons made by phatplus and Freepik from www.flaticon.com are licensed by CC 3.0 BY

Boling, E., Alangari, H., Hajdu, I. M., Guo, M., Gyabak, K., Khlaif, Z., . . . Techawitthayachinda, R. (2017). Core Judgments of Instructional Designers in Practice. Performance Improvement Quarterly, 30(3), 199-219. doi:10.1002/piq.21250

Eddy, S.L. and Hogan, K. A. (2017) “Getting Under the Hood: How and for Whom Does Increasing Course Structure Work?” CBE—Life Sciences Education. Retrieved from https://www.lifescied.org/doi/10.1187/cbe.14-03-0050

Sathy, V. and Hogan, K.A. (2019). “Want to Reach All of Your Students? Here’s How to Make Your Teaching More Inclusive: Advice Guide.” Chronicle of Higher Education. Retrieved from https://www.chronicle.com/interactives/20190719_inclusive_teaching

Tanner, K.D. (2013) “Structure Matters: Twenty-One Teaching Strategies to Promote Student Engagement and Cultivate Classroom Equity,” CBE—Life Sciences Education. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3762997/


Have you ever taken a trip with a tour group? Or looked at an itinerary of places and activities to see if it meets your expectations and/or fits into your schedule? Most guided tours include an itinerary with a list of destinations, activities, and time allotted. This helps travelers manage their expectations and time.

Now, have you ever thought of an online course as a guided trip? The instructor is similar to a tour guide, leading student travelers to their destination. And, like travelers, students naturally want to know what to expect and how much time to commit to their learning. They could benefit from a detailed itinerary, or schedule of activities, that includes estimated time commitment for each week.

As an instructional designer for hybrid and online courses, I like to include a detailed schedule for each week to help students organize their time and stay on task. To determine what goes on that schedule, I begin the design process with a draft of the course syllabus that outlines where the students are headed (learning outcomes) and how the instructor will know they have arrived (assessments). This draft helps me understand the instructor’s plans for the course. Together, we look at the learning outcomes and assessments, as well as course requirements like credit hours, to determine appropriate learning activities along the way. The course credit hours inform the workload requirements for students.  For example, Oregon State University is on the quarter system, and its policy states that one credit hour is equivalent to 3-4 hours of course work each week. If a course is worth 3 credit hours, then students should expect to dedicate 9-12 hours each week to their course. I use a workload estimator created by the Center for Teaching Excellence at Rice University to help with the estimates. This tool provides a reasonable estimation of the workload expectations for students and can be used to verify whether the course meets the university’s guidelines for the assigned credit hours. (For more information on how the estimates are made, see the Rice University CTE blog post.)
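The credit-hour arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming the OSU quarter-system guideline of 3-4 hours of work per credit per week; it is not a substitute for a full workload estimator like Rice’s.

```python
# Weekly workload range from credit hours, assuming the 3-4 hours/credit
# quarter-system guideline described above.
def weekly_workload(credit_hours, hours_per_credit=(3, 4)):
    """Return the (low, high) range of expected weekly hours for a course."""
    low_rate, high_rate = hours_per_credit
    return credit_hours * low_rate, credit_hours * high_rate

low, high = weekly_workload(3)
print(f"A 3-credit course: {low}-{high} hours per week")  # → A 3-credit course: 9-12 hours per week
```

A range like this can then be divided across a module’s activities when building the weekly schedule.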

While all of this information is useful to instructors, I also encourage them to share a weekly list of activities, along with the calculations, with students. Tour guides provide detailed schedules informing travelers where they are going, the order of the activities, and the time allotted to each; why not do the same for students? Below, I’ve included a sample of how I do this in my courses. I create a weekly table on an introduction page at the beginning of each module within our LMS. This table includes a suggested order of the activities and the estimated time commitment to complete them, along with the official due dates. Anecdotally, students appreciate the schedule and use it to manage their time. I encourage you to consider using a detailed schedule in your future courses.

Example of a weekly Detailed Schedule

References


Barre, E. (2016, July 11). How much should we assign? Estimating out of class workload [Blog post]. Retrieved from http://cte.rice.edu/blogarchive/2016/07/11/workload.

Photo by Dariusz Sankowski on Unsplash

This post is the second in a three-part series that summarizes conclusions and insights from research of active, blended, and adaptive learning practices. Part one covered active learning, and today’s article focuses on the value of blended learning.

First Things First

What, exactly, is “blended” learning? Dictionary.com defines it as a “style of education in which students learn via electronic and online media as well as traditional face-to-face learning.” This is a fairly simplistic view, so Clifford Maxwell (2016), on the Blended Learning Universe website, offers a more detailed definition that clarifies three distinct parts:

  1. Any formal education program in which at least part of the learning is delivered online, wherein the student controls some element of time, place, path or pace.
  2. Some portion of the student’s learning occurs in a supervised physical location away from home, such as in a traditional on-campus classroom.
  3. The learning design is structured to ensure that both the online and in-person modalities are connected to provide a cohesive and integrated learning experience.

It’s important to note that a face-to-face class that simply uses an online component as a repository for course materials is not true blended learning. The first element in Maxwell’s definition, where the student independently controls some aspect of learning in the online environment, is key to distinguishing blended learning from the mere addition of technology.

You may also be familiar with other popular terms for blended learning, including hybrid or flipped classroom. Again, the common denominator is that the course design intentionally, and seamlessly, integrates both modalities to achieve the learning outcomes.

Let’s examine what the research says about the benefits of combining asynchronous, student-controlled learning with instructor-driven, face-to-face teaching.

Does Blended Learning Offer Benefits?

Blended Learning Icon

The short answer is yes.

The online component of blended learning can help “level the playing field.” In many face-to-face classes, students may be too shy or reluctant to speak up, ask questions, or offer an alternate idea. A blended environment combines the benefit of giving students time to compose thoughtful comments for an online discussion without the pressure and think-on-your-feet demand of live discourse, while maintaining direct peer engagement and social connections during in-classroom sessions (Hoxie, Stillman, & Chesal, 2014). Blended learning, through its asynchronous component, allows students to engage with materials at their own pace and reflect on their learning when applying new concepts and principles (Margulieux, McCracken, & Catrambone, 2015).

Since well-designed online learning produces equivalent outcomes to in-person classes, lecture and other passive information can be shifted to the online format, freeing up face-to-face class time for active learning, such as peer discussions, team projects, problem-based learning, supporting hands-on labs or walking through simulations (Bowen, Chingos, Lack, & Nygren, 2014). One research study found that combining online activities with in-person sessions also increased students’ motivation to succeed (Sithole, Chiyaka, & McCarthy, 2017).

What Makes Blended Learning So Effective?


Nearly all the research reviewed concluded that blended learning affords measurable advantages over exclusively face-to-face or fully online learning (U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, 2009). The combination of technology with well-designed in-person interaction provides fertile ground for student learning. Important behaviors and interactions such as instructor feedback, assignment scaffolding, hands-on activities, reflection, repetition and practice were enhanced, and students also gained advantages in terms of flexibility, time management, and convenience (Margulieux, McCracken, & Catrambone, 2015).

Blended learning tends to benefit disadvantaged or academically underprepared students, groups that typically struggle in fully online courses (Chingos, Griffiths, Mulhern, & Spies, 2017). Combining technology with in-person teaching helped to mitigate some challenges faced by many students in scientific disciplines, improving persistence and graduation rates. And since blended learning can be supportive for a broader range of students, it may increase retention and persistence for underrepresented groups, such as students of color (Bax, Campbell, Eabron, & Thomson, 2014–15).

Blended learning benefits instructors, too. When asked about blended learning, most university faculty and instructors believe it to be more effective (Bernard, Borokhovski, Schmid, Tamim, & Abrami, 2014). The technologies used often capture and provide important data analytics, which help instructors more quickly identify under-performing students so they can provide extra support or guidance (McDonald, 2014). Many online tools are interactive, fun, and engaging, which encourages student interaction and enhances collaboration (Hoxie, Stillman, & Chesal, 2014). Blended learning is growing in acceptance and is often seen as a favorable approach because it synthesizes the advantages of traditional instruction with the flexibility and convenience of online learning (Liu et al., 2016).

A Leap of Faith

Is blended learning right for your discipline or area of expertise? If you want to give it a try, there are many excellent internet resources available to support your transition.

Though faculty can choose to develop a blended class on their own, Oregon State instructors who develop a hybrid course through Ecampus receive full support and resources, including collaboration with an instructional designer, video creation and media development assistance. The OSU Center for Teaching and Learning offers workshops and guidance for blended, flipped, and hybrid classes. The Blended Learning Universe website, referenced earlier, also provides many resources, including a design guide, to support the transformation of a face-to-face class into a cohesive blended learning experience.

If you are ready to reap the benefits of both online and face-to-face teaching, I urge you to go for it! After all, the research shows that it’s a pretty safe leap.

For those of you already on board with blended learning, let us hear from you! Share your stories of success, lessons learned, do’s and don’ts, and anything else that would contribute to instructors still thinking about giving blended learning a try.

Susan Fein, Oregon State University Ecampus Instructional Designer
susan.fein@oregonstate.edu | 541-747-3364

References

  • Bax, P., Campbell, M., Eabron, T., & Thomson, D. (2014–15). Factors that Impede the Progress, Success, and Persistence to Pursue STEM Education for Henderson State University Students Who Are Enrolled in Honors College and in the McNair Scholars Program. Henderson State University. Arkadelphia: Academic Forum.
  • Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., & Abrami, P. C. (2014). A meta-analysis of blended learning and technology use in higher education: From the general to the applied. J Comput High Educ, 26, 87–122.
  • Bowen, W. G., Chingos, M. M., Lack, K. A., & Nygren, T. I. (2014). Interactive learning online at public universities: Evidence from a six-campus randomized trial. Journal of Policy Analysis and Management, 33(1), 94–111.
  • Chingos, M. M., Griffiths, R. J., Mulhern, C., & Spies, R. R. (2017). Interactive online learning on campus: Comparing students’ outcomes in hybrid and traditional courses in the university system of Maryland. The Journal of Higher Education, 88(2), 210-233.
  • Hoxie, A.-M., Stillman, J., & Chesal, K. (2014). Blended learning in New York City. In A. G. Picciano, & C. R. Graham (Eds.), Blended Learning Research Perspectives (Vol. 2, pp. 327-347). New York: Routledge.
  • Liu, Q., Peng, W., Zhang, F., Hu, R., Li, Y., & Yan, W. (2016). The effectiveness of blended learning in health professions: Systematic review and meta-analysis. Journal of Medical Internet Research, 18(1). doi:10.2196/jmir.4807
  • Maxwell, C. (2016, March 4). What blended learning is – and isn’t. Blog post. Retrieved from Blended Learning Universe.
  • Margulieux, L. E., McCracken, W. M., & Catrambone, R. (2015). Mixing in-class and online learning: Content meta-analysis of outcomes for hybrid, blended, and flipped courses. In O. Lindwall, P. Hakkinen, T. Koschmann, & P. Tchoun (Ed.), Exploring the Material Conditions of Learning: Computer Supported Collaborative Learning (CSCL) Conference (pp. 220-227). Gothenburg, Sweden: The International Society of the Learning Sciences.
  • McDonald, P. L. (2014). Variation in adult learners’ experience of blended learning in higher education. In Blended Learning Research Perspectives (Vol. 2, pp. 238-257). Routledge.
  • Sithole, A., Chiyaka, E. T., & McCarthy, P. (2017). Student attraction, persistence and retention in STEM programs: Successes and continuing challenges. Higher Education Studies, 7(1).
  • U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. (2009). Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. Washington, D.C.

Image Credits

  • Blended Learning Icon: Innovation Co-Lab Duke Innovation Co-Lab [CC0]
  • Leap of Faith: Photo by Denny Luan on Unsplash
  • School photo created by javi_indy – www.freepik.com

There are many benefits to using rubrics for both instructors and students, as discussed in Rubrics Markers of Quality Part 1 – Unlock the Benefits. Effective rubrics serve as a tool to foster excellence in teaching and learning, so let’s take a look at some best practices and tips to get you started.

Best Practices

Alignment

Rubrics should articulate a clear connection between how students demonstrate learning and the Course Learning Outcomes (CLOs). It can be alluring to score only gateway criteria, the minimum expectations for a task (e.g., word count, number of discussion responses). Consider designing rubrics that move past minimum expectations and assess what students should be able to do after completing a task.

Detailed, Measurable, and Observable

Clear and specific rubrics communicate how to demonstrate learning, how performance will be measured, and what marks excellence. These details give students a tool to self-assess their progress and level up their performance autonomously.

Language Use

Rubrics create the opportunity to foster an inclusive learning environment. Clear and consistent language takes into consideration a diverse student body: online students hail from around the world, speak various native languages, and may interpret the same words differently. Use simple terms with specific and detailed descriptions. Doing so creates space for students to focus on learning instead of decoding expectations. Additionally, use parallel language consistently. Similar language across performance levels (e.g., demonstrates, mostly demonstrates, doesn’t demonstrate) helps differentiate each level within a criterion.

Tips of the Trade!

Suitability

Consider the instructional aim, learning outcomes, and the purpose of a task when choosing the best rubric for your course.

  • Analytic Rubrics: The hallmark of an analytic rubric is that it evaluates each performance criterion separately. Characteristically, this rubric is structured as a grid, and performance is scored on a continuum of levels. Analytic rubrics are detailed, specific, measurable, and observable, making them an excellent tool for formative feedback and assessment of learning outcomes.
  • Holistic Rubrics: Holistic rubrics evaluate all criteria together in one general description for each performance level. This design evaluates the overall quality of a task. Consider applying a holistic rubric when an exact answer isn’t needed, when deviation or errors are allowed, and for interpretive or exploratory activities.
  • General Rubrics: Generalized rubrics can be leveraged to assess multiple tasks that have the same learning outcomes (e.g., reflection paper, journal). Performance dimensions focus solely on outcomes versus discrete task features.

Explicit Expectations

Demystifying expectations can be challenging. Consider articulating performance expectations in the task description before deploying a learning task, and refrain from using the rubric as a standalone vehicle to communicate expectations; students may miss the rubric altogether and fail to meet them. Make the implicit explicit! Be transparent, and provide students with all the information and tools they need to be successful from the outset.

Iterate

A continuous improvement process is key to developing high-quality assessment rubrics, so plan for multiple tests and revisions. There are several strategies for testing a rubric: 1) ask students, teaching assistants, or professional colleagues to score a range of work samples with it; 2) integrate opportunities for students to conduct self-assessments; 3) assess the same task with the same rubric across course sections and academic terms. After testing is complete, reflect on how effectively and accurately the rubric performed, then revise and redeploy as needed.

Customize

Save some time, and don’t reinvent the wheel. Leverage existing samples and templates. Keep in mind that existing resources weren’t designed with your course in mind. Customization will be needed to ensure the accuracy and effectiveness of the rubric.

Are you interested in learning more about rubrics and how they can enrich your course? Your Instructional Designer can help you craft effective rubrics that will be the best fit for your unique course.


One of the most common questions I get as an Instructional Designer is, “How do I prevent cheating in my online course?” Instructors are looking for detection strategies and often punitive measures to catch, report, and punish academic cheaters. Their concerns are understandable—searching Google for the phrase “take my test for me” returns pages and pages of results from services with names like “Online Class Hero” and “Noneedtostudy.com” that promise to use “American Experts” to help pass your course with “flying grades.” 1 But by focusing only on what detection measures we can implement and the means and methods by which students are cheating, we are asking the wrong questions. Instead, let’s consider what we can do to understand why students cheat, and how careful course and assessment design might reduce their motivation to do so.

A new study published in Computers & Education identified five themes in the reasons students provided when seeking help from contract cheating services (Amigud & Lancaster, 2019):

  • Academic Aptitude – “Please teach me how to write an essay.”
  • Perseverance – “I can’t look at it anymore.”
  • Personal Issues – “I have such a bad migraine.”
  • Competing Objectives – “I work so I don’t have time.”
  • Self-Discipline – “I procrastinated until today.”

Their results showed that students don’t begin a course with the intention of academic misconduct. Rather, they reach a point, often after initially attempting the work, when the perception of pressures, lack of skills, or lack of resources removes their will to complete the course themselves. Online students may be more likely to have external obligations and involvement in non-academic activities. According to a 2016 study, a significant majority of online students are often juggling other obligations, including raising children and working while earning their degrees (Clinefelter & Aslanian, 2016).

While issues with cheating are never going to be completely eliminated, several strategies have emerged in recent research that focus on reducing cheating from a lens of design rather than one of punishment. Here are ten of my favorite approaches that speak to the justifications identified by students that led to cheating:

  1. Make sure that students are aware of academic support services (Yu, Glanzer, Johnson, Sriram, & Moore, 2018). Oregon State, like many universities, offers writing help, subject-area tutors, and, for Ecampus students, a Student Success team that can help identify resources and provide coaching on academic skills. Leading up to exams or big assessment projects, encourage students to reach out during online office hours or via email if they feel they need assistance.
  2. Have students create study guides as a precursor assignment to an exam—perhaps using online tools to create mindmaps or flashcards. Students who are better prepared for assessments have a reduced incentive to cheat. Study guides can be a non-graded activity, like a game or practice quiz, or provided as a learning resource.
  3. Ensure that students understand the benefits of producing their own work and that the assessment is designed to help them develop and demonstrate subject knowledge (Lancaster & Clarke, 2015). Clarify for students the relevance of a particular assessment and how it relates to the weekly and larger course learning outcomes.
  4. Provide examples of work that meets your expectations along with specific evaluation criteria. Students need to understand how they are being graded and be able to judge the quality of their own work. A student feeling in the dark about what is expected from them may be more likely to turn to outside help.
  5. Provide students with opportunities throughout the course to participate in activities, such as discussions and assignments, that will prepare them for summative assessments (Morris, 2018).
  6. Allow students to use external sources of information while taking tests. Assessments in which students are allowed to leverage the materials they have learned from to construct a response do a better job of assessing higher-order learning. Memorizing and repeating information is rarely what we hope students will achieve at the end of instruction.
  7. Introduce alternative forms of assessment. Creative instructors can design learning activities that require students to develop a deeper understanding and take on more challenging assignments. Examples of these include recorded presentations, debates, case studies, portfolios, and research projects.
  8. Rather than a large summative exam at the end of a course, focus on more frequent smaller, formative assessments (Lancaster & Clarke, 2015). Provide students with an ongoing opportunity to demonstrate their knowledge without the pressure introduced by a final exam that accounts for a substantial portion of their grade.
  9. Create a course environment that is safe to make and learn from mistakes. Build into a course non-graded activities in which students can practice the skills they will need to demonstrate during an exam.
  10. Build a relationship with students. When instructors are responsive to student questions, provide substantive feedback throughout a course and find other ways to interact with students — they are less likely to cheat. It matters if students believe an instructor cares about them (Bluestein, 2015).

No single strategy is guaranteed to immunize your course against the possibility that a student will use some form of cheating. Almost any type of assignment can be purchased quickly online. The goal of any assessment should be to ensure that students have met the learning outcomes—not to see if we can catch them cheating. Instead, focus on understanding pressures a student might face to succeed in a course, and the obstacles they could encounter in doing so. Work hard to connect with your students during course delivery and humanize the experience of learning online. Thoughtful design strategies, those that prioritize supporting student academic progress, can alleviate the conditions that lead to academic integrity issues.


1 This search was suggested by an article on cheating in online programs published by the New England Board of Higher Education (Berkey & Halfond, 2015).

References

Amigud, A., & Lancaster, T. (2019). 246 reasons to cheat: An analysis of students’ reasons for seeking to outsource academic work. Computers & Education, 134, 98–107. https://doi.org/10.1016/j.compedu.2019.01.017

Berkey, D., & Halfond, J. (2015). Cheating, student authentication and proctoring in online programs. New England Journal of Higher Education.

Bluestein, S. A. (2015). Connecting Student-Faculty Interaction to Academic Dishonesty. Community College Journal of Research and Practice, 39(2), 179–191. https://doi.org/10.1080/10668926.2013.848176

Clinefelter, D. L., & Aslanian, C. B. (2016). Online college students 2016: Comprehensive data on demands and preferences. Louisville, KY: The Learning House, Inc.

Lancaster, T., & Clarke, R. (2015). Contract Cheating: The Outsourcing of Assessed Student Work. In T. A. Bretag (Ed.), Handbook of Academic Integrity (pp. 1–14). https://doi.org/10.1007/978-981-287-079-7_17-1

Morris, E. J. (2018). Academic integrity matters: five considerations for addressing contract cheating. International Journal for Educational Integrity, 14(1), 15. https://doi.org/10.1007/s40979-018-0038-5

Yu, H., Glanzer, P. L., Johnson, B. R., Sriram, R., & Moore, B. (2018). Why College Students Cheat: A Conceptual Model of Five Factors. The Review of Higher Education, 41(4), 549–576. https://doi.org/10.1353/rhe.2018.0025

Oregon State University’s Learning Management System (LMS) migrated to Canvas in 2014-2015. The Canvas migration was based not only on the alignment of the product’s features with our learning platform needs but also on the outstanding customer service Instructure, the company behind Canvas, has provided to our LMS user community, including students, faculty, instructional designers, and administrators. How Instructure provides customer service offers a model we can follow to continue to exceed student expectations.

According to Michael Feldstein’s July 8, 2018 report, the major players in the US LMS market include Blackboard, Canvas, Moodle, Brightspace, Sakai, Schoology, and others (Feldstein, 2018).

Figure 1: US Primary LMS Systems, July 6th, 2018 (Feldstein, 2018)

Of these major players in the LMS field, Canvas stands out, with the fastest growth in market share among U.S. and Canadian higher education institutions.

Figure 2: LMS Market Share for US and Canadian Higher Ed Institutions (Feldstein, 2018)

Different people suggest different criteria when comparing LMSs. Udutu.com provided a list of 7 things to think about before purchasing an LMS:

  1. Be clear on your learning and training objectives;
  2. Don’t be fooled by the high costs of an LMS;
  3. Know the limitations of your internal team and users;
  4. Pay for the features you need, not for what you might need;
  5. The latest new technology is not necessarily the best one;
  6. Customer support is everything; and
  7. Trust demos and trials over reviews, ratings and “industry experts”

(Udutu, 2016).  Noud (2016) suggested the following ten factors to consider when selecting an LMS:

  1. Unwanted Features;
  2. Mobile Support;
  3. Integrations (APIs, SSO);
  4. Customer Support;
  5. Content Support;
  6. Approach to pricing;
  7. Product roadmap;
  8. Scalability, Reliability and Security;
  9. Implementation Timeframe; and
  10. Hidden costs.

Christopher Pappas (2017) suggested 9 factors to consider when calculating your LMS budget:

  1. Upfront costs;
  2. LMS training;
  3. Monthly Or Annual Licensing Fees;
  4. Compatible eLearning Authoring Tools;
  5. Pay-per-User/Learner Fee;
  6. Upgrades and Add-Ons;
  7. Learning and Development Team Payroll;
  8. Online Training Development Costs; and
  9. Ongoing Maintenance.

Of all of the above lists, I like Udutu’s the best because it matches my personal experiences with LMS migrations.

I first used WebCT between 2005 and 2007, participated in a migration from WebCT Vista to Blackboard in 2008, and in an Angel-to-Blackboard migration in 2013-2014. During my seven years of using Blackboard as an instructional designer and faculty support staff member, my biggest complaint was its unexpected server outages during peak times such as the beginning of the term and finals week. In 2014, I moved to Oregon State University (OSU). The OSU community had begun looking for a new LMS in 2013 and started piloting Canvas in 2014. At the end of the pilot, instructor and student feedback was mostly positive. Not subject to local server outages, the cloud-based system was stable and had remained available to users throughout the pilot. Of course, no LMS is perfect. But after careful comparison and feedback collection, we migrated from Blackboard to Canvas in 2015. In my four years of using Canvas since, there has not been a single server outage, and Canvas provides all the basic functionality of an LMS.

Canvas wanted to expand its market share by building positive customer experiences. Instructure was eager to please OSU and provided us with 24/7 on-call customer service during our first two years of using Canvas, at a relatively reasonable price. The pilot users were all highly satisfied with the customer service. Several instructors reported that they called the Canvas hotline on Thanksgiving or Christmas, and their calls were answered immediately and their issues resolved.

Michael Feldstein (2018) summarized that Canvas’ “cloud-based offering, updated user interface, reputation for outstanding customer service and brash, in-your-face branding” have helped its steady rise in LMS market share. As instructors and instructional designers, we can learn a lot from Instructure’s success story and focus on improving the service we provide to our students, such as student success coaching, online resources, and online learning communities. Would you agree? If you have specific suggestions on how to improve the way we serve our students, feel free to let us know (Tianhong.shi@oregonstate.edu; @tianhongshi)!

 

References:

Goldberg, M., Salari, S., & Swoboda, P. (1996). World Wide Web – Course Tool: An environment for building WWW-based courses. Computer Networks and ISDN Systems, 28(7-11), 1219-1231.

Feldstein, Michael. (2018). Canvas surpasses Blackboard Learn in US Market Share. E-Literate, July 8, 2018. Retrieved from https://mfeldstein.com/canvas-surpasses-blackboard-learn-in-us-market-share/ on February 2, 2019.

McKenzie, Lindsay. (2018). Canvas catches, and maybe passes, Blackboard. InsideHigherEd. July 10, 2018. Retrieved from https://www.insidehighered.com/digital-learning/article/2018/07/10/canvas-catches-and-maybe-passes-blackboard-top-learning on February 2, 2019.

Moran, Gwen. (October 2010). “The Rise of the Virtual Classroom.” Entrepreneur Magazine. Irvine, California. Retrieved July 15, 2011.

Noud, Brendan. (February 9, 2016). 10 Things to consider when selecting an LMS. Retrieved from https://www.learnupon.com/blog/top-10-considerations-when-selecting-a-top-lms/ on February 2, 2019.

Pappas, Christopher. (June 13, 2017). Top 9 Factors to consider when calculating Your LMS Budget. Retrieved from https://blog.lambdasolutions.net/top-9-factors-to-consider-when-calculating-your-lms-budget on February 2, 2019.

Udutu. (May 30, 2016). How to choose the best Learning Management System. Retrieved from https://www.udutu.com/blog/lms/ on February 2, 2019.

Wikipedia. (n.d.). WebCT. Retrieved from https://en.wikipedia.org/wiki/WebCT on February 2, 2019.

 

Would you like to save time grading, accurately assess student learning, provide timely feedback, track student progress, demonstrate teaching and learning excellence, foster communication, and much more? If you answered yes, then rubrics are for you! Let’s explore why the intentional use of rubrics can be a valuable tool for instructors and students.

Value for instructors

  • Time management: Have you ever found yourself drowning in a sea of student assignments that need to be graded ASAP (like last week)?  Grading with a rubric can quicken the process because each student is graded in the same way using the same criteria. Rubrics which are detailed, specific, organized and measurable clearly communicate expectations. As you become familiar with how students are commonly responding to an assessment, feedback can be easily personalized and readily deployed.
  • Timely and meaningful feedback: Research has shown that there are several factors that enhance student motivation. One factor is obtaining feedback that is shared often, detailed, timely, and useful. When students receive relevant, meaningful, and useful feedback quickly they have an opportunity to self-assess their progress, course correct (if necessary), and level up their performance.
  • Data! Data! Data! Not only can rubrics provide a panoramic view of student progress, but the tool can also help identify teaching and learning gaps. Instructors will be able to identify whether students are improving, struggling, remaining consistent, or missing the mark completely. The information gleaned from rubrics can be utilized to compare student performance within a course, between course sections, or even across time. It can also serve as feedback to the instructor regarding the effectiveness of the assessment.
  • Effectiveness: When a rubric is designed from the outset to measure the course learning outcomes, it can serve as a tool for effective and accurate assessment. Tip! Refrain from solely scoring gateway criteria (i.e., organization, mechanics, and grammar). This is paramount because students will interpret meeting those criteria as a demonstration that they have met the learning outcomes even if they haven’t. If learning gaps are consistently identified, consider evaluating the task and rubric to ensure instructions, expectations, and performance dimensions are clear and aligned.
  • Shareable: As academic programs begin to develop courses for various modalities (i.e. on campus, hybrid, online) consistently assessing student learning can be a challenge. The advantage of rubrics is they can be easily shared and applied between course sections and modalities. Doing so can be especially valuable when the same course is taught by multiple instructors and teaching assistants.
  • Fosters communication: Instructors can clearly articulate performance expectations and outcomes to key stakeholders such as teaching assistants, instructors, academic programs, and student service representatives (e.g., Ecampus Student Success Team, Writing Center). Rubrics provide additional context above and beyond what is outlined in the course syllabus. A rubric can communicate how students will be assessed, what students should attend to, and how institutional representatives can best support students. Imagine a scenario where a student contacts the Writing Center intending to review a draft term paper, and the representative asks for the grading criteria or rubric. The grading criteria furnished by the instructor only outline the requirements for word length, formatting, and citation conventions. None of these criteria communicate the learning outcomes or make any reference to the quality of the work. In this example, the representative might find it challenging to effectively support the student without understanding the instructor’s implicit expectations.
  • Justification: Have you ever been tasked with justifying a contested grade? Rubrics can help you through the process! Rubrics which are detailed, specific, measurable, complete, and aligned can be used to explain why a grade was awarded. A rubric can quickly and accurately highlight where a student failed to meet specific performance dimensions and/or the learning outcomes.
  • Evidence of teaching improvement: The values of continuous improvement, lifelong learning, and ongoing professional development are woven into the very fabric of academia. Curating effective assessment tools and methods can provide a means of demonstrating performance and providing evidence to support professional advancement.

Value for students

  • Equity: Using rubrics creates an opportunity for consistent and fair grading for all students. Each student is assessed on the same criteria and in the same way. If performance criteria are not clearly communicated from the outset, then evaluations may be based on implicit expectations, which are not known or understood by students and can create an unfair assessment structure.
  • Clarity: Ambiguity is decreased by using student-centered language. The student body is highly diverse, and many students speak different native languages, so students may have different interpretations of what words mean (e.g., critical thinking). Using clear and simple language can mitigate unintended barriers and decrease confusion.
  • Expectations: Students know exactly what they need to do to demonstrate learning, what instructors are looking for, how to meet the instructor’s expectations, and how to level up their performance. A challenge can be to ensure that all expectations (implicit and explicit) are clearly communicated to students. Tip! Consider explaining expectations in the description of the task as well.
  • Skill development: Rubrics can introduce new concepts and terminology and help students develop authentic skills (e.g., critical thinking) that can be applied outside of their academic life.
  • Promotes metacognition and self-regulatory behavior: Guidance and feedback help students reflect on their thought processes, self-assess, and foster positive learning behaviors.

As an Ecampus course developer, you have a wide array of support services and experts available to you. Are you interested in learning more about rubric design, development, and implementation? Contact your Instructional Designer today to begin exploring best-fit options for your course. Stay tuned for Rubrics: Markers of Quality (Part 2) – Tips & Best Practices.

References:

  • Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. Alexandria, Va.: ASCD.
  • Richter, D., & Ehlers, Ulf-Daniel. (2013). Open Learning Cultures: A Guide to Quality, Evaluation, and Assessment for Future Learning. (1st ed.). Berlin, Heidelberg: Springer.
  • Stevens, D. D., & Levi, Antonia. (2013). Introduction to rubrics: an assessment tool to save grading time, convey effective feedback, and promote student learning (2nd ed.). Sterling, Va.: Stylus.
  • Walvoord, B. E. F., & Anderson, Virginia Johnson. (2010). Effective grading: a tool for learning and assessment in college (Second edition.). San Francisco, CA: Jossey-Bass.

 

 

 

Curious what an Ecampus Instructional Designer is looking for when they approve slides for narrated lectures?  It certainly depends on the course content.

Generally, the top three things I am looking at are copyright, accessibility, and aesthetics.

For this post, I am going to focus on copyright and will return to the other topics in a future post. A copy of the slides, which includes links to helpful materials, is available below the video, along with a list of resources.

Slides: Copyright Considerations for Narrated Slides


What’s An Image’s Value?

Image: postcard with “a picture is worth a thousand words” written on it.

Have you ever created an online course without using images? No?

That is not surprising as images can convey emotions, ideas, and much more. Their value is often captured in an old adage: A picture is worth a thousand words.

This article will discuss the value of images in online course design and how using visuals to accompany instruction via text or narration might contribute to or detract from an online learning experience. Let’s begin.

Multimedia Learning: Images, Text, and More

Online learning is a modern form of multimedia learning. Richard Mayer (2009) described multimedia learning as learning that integrates the use of words and pictures. In traditional classrooms, these learning resources might be experienced as:

  • Textbooks: text and illustrations.
  • Computer-based lessons: narration with animation.
  • Face-to-face slide presentations: graphics and audio.

In online learning multimedia may also include:

  • eBooks: text and digital images.
  • Video: text, images, and animations coupled with audio.
  • Interactives: maps, images, and video.
  • Digital visual representations: virtual worlds and 3D models.
  • Screencasts: software demos, faculty video feedback, and more.
  • Audio: enhanced podcasts or narrated lectures.

These two short lists, although not exhaustive, demonstrate the importance of visual elements to multimedia-based learning in online courses. There are many reasons why we might include any one of these multimedia learning experiences in an online course. For our purposes, we will explore the instructional value of visuals in online learning a bit more.

So, how do words and pictures work together to help shape learning? Given that this pairing is perhaps the most common learning object used in an online course, it seems useful to understand this basic dimension of visual literacy for learning (Aisami, 2015).

Visual Engagement Of A Learning Object

In a recent study of how people acquire knowledge from an instructional web page, Ludvik Eger (2018) used eye-tracking technology to examine a simple learning object composed of a title (headline), a visual element (i.e., a diagram), and a box of written text. With no audio support for the learning object in this study, participants engaged the content through vision alone. Results indicated that the majority of students started their learning process at the headline, or at the headline and visual element. The box of information, in text form, was the third part of the learning object engaged.

Within this context, eye movement analysis indicates a learning process that depends on a consistent visual flow. Purposely connecting the title, visual element, and information text of a learning object may best reinforce learning. By doing this, the course designer/instructor becomes a sort of cognitive guide, focusing (or failing to focus) learning via the meaning structure of the various learning object elements. In our case, we want to use visual elements to support performance and achievement of learning tasks.

Choosing Visual Elements

To explore the choice of visual elements in an online learning experience, it is helpful to understand how we process that experience from a cognitive science perspective.

Clark and Mayer (2016) describe how cognitive science suggests that knowledge construction is based upon three principles: dual channels, limited capacity, and active processing. Let’s briefly examine each.

Dual channels:

People have two channels of cognitive processing: one for visual/pictorial material and one for auditory/verbal material. See Figure 1 below.

 

Figure 1: Model of the Cognitive Theory of Multimedia Learning

Limited capacity:

Humans can only process a few pieces of information in each channel at the same time.

Active processing:

Learning occurs as people engage in cognitive processing: attending to relevant material, organizing that material into a coherent structure, and integrating it with prior knowledge.

Due to the limits on any learner’s processing capability, it is paramount that we select visual images that help manage the learning process. Our goal is to limit excessive processing that clutters the learning experience, build visual support for representing the core learning material, and provide visual support that fosters deeper understanding of the learning at hand. What does this mean in practice?

Managing Processing Via Image Use

Making decisions about image selection and use is key to managing this learning process. Understanding the meaning of the images we select is also key, and is really a function of literacy in one’s field and visual literacy in general (Kennedy, 2013).

In practice we can use the following guidelines to make decisions about image use in multimedia-based online learning. 

  • Control Visual Elements – Too many images on a web page or slide may force extraneous cognitive processing that does not support the instructional objective. 
  • Select Visual Elements Carefully – Images that are difficult to discern are likely to negatively impact learning. Think about visual quality, the emotional and intellectual message of the image, information value, and readability.
  • Use Focused Visual Elements – Target selection of visual support to those images that represent the core learning material and/or provide access to deeper understanding of that core content.

Other Image Tips

Emotional Tone: Emotional design elements (e.g., visuals) can play an important role in motivating learners and in the achievement of learning outcomes (Mayer, 2014).

Interest: Decorative images may boost learner interest but do not contribute to higher performance in testing (Mayer, 2014). Use decorative images prudently so they do not contribute to extraneous processing (Pettersson & Avgerinou, 2016).

Challenge: Image selections that introduce a degree of confusion may challenge learners to dive more deeply into the core learning. This is a delicate balance, as difficulty in sense-making may instead foster excessive processing.

Access: To be practical, images must be presented in a format viewable to users. This involves an understanding of the technical features of image formats, download capability, mobile use, and universal design techniques.

Final Thoughts

It is valuable to remember that visuals communicate nonverbally. They are most effective when carefully selected and paired with text or audio narration. Visuals appeal to the sense of sight and come in many forms: pictures, symbols, signs, maps, graphs, diagrams, charts, models, and photographs. Knowing their form, meaning, and application is part of being a visually literate course developer or instructional designer.

References

Aisami, R. S. (2015). Learning Styles and Visual Literacy for Learning and Performance. Procedia – Social and Behavioral Sciences, 176, 538-545. doi:10.1016/j.sbspro.2015.01.508

Clark, R. C., & Mayer, R. E. (2016). E-learning and the science of instruction : Proven guidelines for consumers and designers of multimedia learning. Retrieved from http://ebookcentral.proquest.com

Eger, L. (2018). How people acquire knowledge from a web page: An eye tracking study. Knowledge Management & E-Learning: An International Journal 10(3), 350-366.

Kennedy, B. (2013, November 19). What is visual literacy?. [Video file]. Retrieved from https://www.youtube.com/watch?time_continue=1&v=O39niAzuapc

Mayer, R. E. (2009). Multimedia learning (2nd ed.). New York: Cambridge University Press.

Mayer, R. E. (2014). Incorporating motivation into multimedia learning. Learning and Instruction, 29, 171-173. doi:10.1016/j.learninstruc.2013.04.003

Pettersson, R., & Avgerinou, M. D. (2016). Information design with teaching and learning in mind. Journal of Visual Literacy, 35(4), 253-267. doi:10.1080/1051144X.2016.1278341

 

Credit: Embedded image by Kelly Sikkema on Unsplash.com