Design for outcomes, not devices

When designing educational experiences, it is important to craft interfaces that are not only intuitive, but that enable learning while minimizing extraneous cognitive load on the student (Chandler & Sweller, 1991; Plass et al., 2010). Unfortunately, good Human-Centered Design practices do not always support learner outcomes, and that is where the Learning Sciences come in. While design principles such as Gestalt are important, the ability to integrate cognitive learning principles into the features and products we build is integral to creating delightful and effective learning experiences.

Before I start talking about technology and screens, I’d like you to think about the last time you opened a textbook. As you flipped through the pages, you likely saw an assortment of images, illustrations, charts, tables, assessments, and many other features that complement the words on the page.

Figure 1: Image of textbook spread that shows assessment and images

These features serve a variety of functions. The most obvious is to make a great first impression when sitting next to other books on a retail shelf that cover similar content. It is no surprise that publishers take pride in finding the right cover art and imagery to convey their brand and entice the reader to explore the book further. However, the most important job these features handle is to support a pedagogical outcome: helping students either comprehend a particular concept or retain and later recall what they have read.

These features are outcome-driven, targeted at optimizing the learning experience for the student. The simplest example can be found in the images and illustrations that accompany a passage of text. If students are tasked with reading about the various parts of the brain, it helps to let them reference an illustration showing where the frontal lobe sits in relation to the brain stem, so they can start building an accurate mental representation of the organ in memory. Imagine having to describe that in words alone: “The frontal lobe is the front portion of the brain, followed by the temporal lobe, which sits on either side of the brain stem.” The risk of students forming a flawed mental model is too great.

Figure 2: Textbook illustration showing a labeled figure pointing at sections of the brain

Leveraging the Learning Science

The interrelationship of words and figures is not a new concept. Visual representations are critical in science education, as well as in other disciplines, because visual literacy is a necessary skill (Lowe, 2000). According to the Cognitive Theory of Multimedia Learning, “people learn more deeply from words and pictures than from words alone” (Mayer, 2005, p. 47). These representations are not processed as distinct verbal and visual formats; they go through dual coding and share an association as they work toward comprehension (Sadoski et al., 2001). This means that a figure relevant to a passage of text should not be physically separated from the text that refers to it, as described by Mayer’s spatial contiguity principle (2001).

When we look at the individual features that comprise a learning experience through the lens of the intended outcomes and the learning science that grounds them, it starts to guide how we might present each element to the learner. By ignoring this and haphazardly placing these elements on a page, even the most beautiful layout will miss an opportunity to help the learner achieve their learning outcome.

See it in action in textbooks…

Various eye-tracking studies have examined how a reader’s eyes move between the text and the visual images on a page, and what might factor into that behavior. A study with fourth graders found that students used one of three reading strategies: low, intermediate, or high integration of images. Students with high integration of images while reading a passage scored higher on a posttest than students who focused on the words alone (Mason et al., 2013).

Figure 3: Eye-tracking image showing three types of reading strategies. Image A is an intermediate integrator, Image B a low integrator, and Image C a high integrator (Mason et al., 2013)

While learning science states that leveraging both text and image will lead to better comprehension, studies have also shown that students may ignore photos, instead focusing mainly on the text (Behnke, 2016). This might be because, “an image is worth a thousand words only if the student knows the codes to interpret images” (Pintó et al., 2002, p. 341).

Figure 4: Eye-tracking image from a study on how students move through the visual elements of geography textbooks (Behnke, 2016)

This realization has led to suggestions for a more tactical approach: using the words on the page to guide learners toward picture analysis (Schnotz et al., 2014), increasing the chance that students benefit from both modalities. In the figure below, the paragraph prompts the reader to leverage the image to help build their understanding. This means that authors are being very intentional with the words and images they use, leading students toward better reading strategies through their words.

Figure 5: Example textbook passage that prompts student to refer to figure to build their mental models of tectonic plates

…but students read on smartphones too

As technology has evolved, so has behavior. With the ubiquity of screens, people are increasingly demanding that all their content be available on the device of their choice. They want access to everything everywhere, and are willing to make compromises for that convenience. Whether it’s reading a chapter or writing an essay, the tools students use and what they feel comfortable using them for will continue to evolve.

The typical Medium [mobile] App user spends 25% more time reading per day than Desktop users

“I want reading to be part of my life…If I waited for the kind of time I used to have — sitting down for five hours — I wouldn’t read at all.”

“[When inspiration strikes] You can just whip out your phone . . . and do a few quick edits”

Whether reading and writing on mobile phones is less effective is up for debate (Patronis, 2016; Liu, 2012), but the upward trend in mobile usage is clear. A 2018 Pew Research Center study reports that 94% of 18–29 year-olds own a smartphone. Moreover, a growing share of this demographic is dropping home broadband and relying on their smartphones as their primary means of online access at home.

Not too long ago, “everywhere” was predictable: it meant a desktop or a laptop. Then devices like the Xbox started showing up in the analytics reports of a learning platform I was working on. Now we are seeing smartphones and tablets of all sizes accessing our content. Who knows where the next screen will be, or even whether it will be a screen at all: “Alexa, what is the capital of Mississippi?” For this reason we need to shift our approach from merely adapting to screen sizes and technologies to thinking about the underlying learning outcomes we are targeting, uncovering new opportunities that may not have been possible in any other format. Being mobile responsive may not be enough, and if you are not careful, it may actually detract from learning.

How might we rethink the mobile reading experience?

As you read the scenario below, keep in mind the original intention of the learning features that were integrated into the passage of the print book. I will point out some of the unintended cognitive load we might introduce for students if we do not bring the same learning-science-based rigor to small-screen reading that we bring to print books.

Scenario: Reading for understanding

Lisa is a sophomore at Montclair University majoring in social work. She is currently taking a psychology class on sensation and perception, but has very little experience with human anatomy. In her reading assignments, it is quite common for her textbook to describe the various parts that make up the eye and the brain, as well as the processes occurring between the two organs.

The challenge

Below is an example of how the same content might end up being presented to students across mediums. You will notice that both the responsive desktop website and the responsive mobile website have overflow content outside the student’s viewable area. Students have no way of knowing that anything beyond the viewable area exists unless they decide to scroll down prematurely.

As you look through each of these layouts, place yourself in Lisa’s shoes as she tries to read a passage that mentions not only specific parts of the brain and eye, but also the processes occurring in the body as a signal makes its way from eye to brain.

Figure 6: Example of a typical multi-modal layout, which mixes headers, text, images, and a side column that might contain an assessment, a case study, or some other learning feature.

Responsive design allows elements on the page to reflow, and it is the core foundation of many mobile styling frameworks developers use today. It is common practice that, at certain breakpoints, multi-column layouts collapse into a single-column layout to make for a less cramped experience. However, with the good comes the bad. Earlier in the article I admired the rigor and intentionality of textbook designers’ layouts, and how well they aligned with learning principles such as dual coding, spatial contiguity, and the Cognitive Theory of Multimedia Learning. Embracing only a responsive view and traditional web design methods does not enable the best learning experience. The spatial distance between a figure and the text that references it gradually increases as the width of the container decreases. This increased distance, and the loss of the pedagogical purpose the figure was intended to serve, may unintentionally increase cognitive load by introducing split attention (Ayres & Sweller, 2005) and failing to take full advantage of dual coding.
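To make the breakpoint behavior concrete, here is a minimal sketch in JavaScript of the kind of decision a responsive framework’s media query encodes. The 768px breakpoint and the layout names are assumptions for illustration, not values from any particular framework.

```javascript
// Sketch only: the breakpoint and layout names are hypothetical, mirroring
// what a CSS media query such as `@media (min-width: 768px)` typically does.
function layoutFor(viewportWidthPx) {
  // Wide viewports keep the figure beside the text that references it;
  // narrow viewports reflow everything into one column, pushing the figure
  // farther from the sentence that refers to it and inviting split attention.
  return viewportWidthPx >= 768 ? "two-column" : "single-column";
}

console.log(layoutFor(1280)); // a desktop-width viewport keeps two columns
console.log(layoutFor(375)); // a phone-width viewport reflows to one column
```

The reflow itself is good usability practice; the pedagogical cost is that nothing in this logic knows which figure belongs next to which sentence.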

What if…

Assume we take Schnotz’s approach of using the words in the passage to guide students to glance at a figure on the page (Fig. 6): the text directly references parts of a figure, prompting the reader to glance over to the image with intent as they build up a mental model of the subject. Unlike the paper of a printed textbook, the browser can be made aware of the text contained in the passage via JavaScript and regular expressions. If the browser were aware of reference links for images and figures, how might we integrate guiding instructions that let the learner jump easily from words to figure, spending less time processing the material and minimizing split attention? (Chandler, 1992)
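As a sketch of that idea, the snippet below uses a regular expression to find figure references in a passage and wrap them in in-page anchor links a reader could tap. The pattern, the `#figure-N` link convention, and the `figure-ref` class are assumptions for illustration, not a production implementation.

```javascript
// Sketch: detect figure references ("Figure 6", "Fig. 6", "Fig 6") in a
// passage and wrap each one in an in-page anchor link. The href convention
// (#figure-N) and the CSS class are hypothetical.
function linkFigureReferences(passageHtml) {
  const figureRef = /\bFig(?:ure)?\.?\s+(\d+)\b/gi;
  return passageHtml.replace(
    figureRef,
    (match, num) => `<a class="figure-ref" href="#figure-${num}">${match}</a>`
  );
}

const passage =
  "As shown in Figure 6, light passes through the cornea before reaching the retina.";
console.log(linkFigureReferences(passage));
```

A click handler on the `.figure-ref` links could then scroll the figure into view, or reveal it inline next to the sentence, rather than forcing the reader to scroll away and risk losing their place.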

In the example below I explore one possible method of signaling to students that there is a figure they might find helpful to refer to while they read.

Figure 7: Prototype demonstrating how a designer might use the affordances of a mobile browser to leverage learning science.

This approach might help in a couple of ways. First, similar to the eye-tracking observations above, the student’s eyes can hop from the words on the screen to the supporting image and back again. Second, the student can continue reading uninterrupted, never losing their place on the page by having to scroll. In previous research, I uncovered an anxiety around clicking links in a book because of the risk of losing one’s place. Kindle recently added a nice feature that may help alleviate some of this anxiety.

Conclusion

The example above represents one of many design challenges we need to start considering when designing for outcomes across all user interfaces. There are many others to think about when designing a reading experience for a small mobile device, and eventually for VR/AR, wearables, and voice user interfaces like Alexa; the list goes on. Hopefully this example helps you re-evaluate the relationship between the underlying academic research and the usability of a learning experience as students move across mediums, guiding your designs toward a more learner-centric approach while innovating on the affordances each new medium provides.

Let’s not treat a digital text as if it were a printed book. That would be like ripping out the floor of your car so you could walk, because walking is something you are more familiar with. I’d love to hear other examples you’ve seen of interface elements that lost their original intended outcomes once they were presented in another modality.

Figure 8: Yabba Dabba Do!

Academic References

Ayres, P., & Sweller, J. (2005). The split-attention principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 135–146). New York: Cambridge University Press.

Behnke, Y. (2016). How textbook design may influence learning with geography textbooks. Nordidactica: Journal of Humanities and Social Science Education, 2016(2), 38–62.

Chandler, P. & Sweller, J. (1991). Cognitive Load Theory and the Format of Instruction. Cognition and Instruction, 8(4), pp.293–332.

Chandler, P., & Sweller, J. (1992). The split-attention effect as a factor in the design of instruction. British Journal of Educational Psychology, 62(2), 233–246. doi:10.1111/j.2044-8279.1992.tb01017.x

Mason, L., Tornatora, M. C., & Pluchino, P. (2013). Do fourth graders integrate text and picture in processing and learning from an illustrated science text? Evidence from eye-movement patterns. Computers & Education, 60(1), 95–109. doi:10.1016/j.compedu.2012.07.011

Mayer, R. (2001). Multimedia learning. Cambridge, UK: Cambridge University Press.

Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning: Evidence for dual processing systems in working memory. Journal of Educational Psychology, 90(2), 312–320. http://dx.doi.org/10.1037/0022-0663.90.2.312

Patronis, M. (2016). The effect of using mobile devices on students’ performance in writing. In Handbook of Research on Mobile Devices and Applications in Higher Education Settings (pp. 250–270). IGI Global. doi:10.4018/978-1-5225-0256-2.ch011

Plass, J.L. et al. (2010). Cognitive Load Theory, New York: Cambridge University Press.

Schnotz, W., et al. (2014). Focus of attention and choice of text modality in multimedia learning. European Journal of Psychology of Education. https://doi.org/10.1007/s10212-013-0209-y

Zhao, F., et al. (2014). Eye tracking indicators of reading approaches in text-picture comprehension. Frontline Learning Research, 2(5), 46–66.

Liu, Z. (2012). Digital reading. Chinese Journal of Library and Information Science (English edition), 85–94.

Alex Britez

Designer, Developer, Dad & maker of things that teach stuff. Sr Designer at Microsoft VS Code & MakeCode & Adjunct Instructor @ NYU’s Digital Media for Learning