Initial Publication Date: March 16, 2018

Fostering Strategy #6: Learners compare, contrast and critique multiple visualizations on the same topic

(Latest update: 24 Jan 2018)

Contributors: All Fostering Strand participants worked on this strategy, in small groups by discipline. Physical Sciences/Engineering Group: Yuen-ying, Jung Lee, Alexey Leontvey, Paul Parsons, Sally Wu, Melissa Zrada; Life Sciences Group: Gayle Bowness, Pamela Marshall, Caleb Trujillo, Tiffany Herder; Earth & Space Science Group: Vetria Byrd, Elizabeth Joyner, Sarah Klain, Bob Kolvoord

Description:

  • Instructor gathers multiple instances of visualizations that depict the same concept or phenomenon. Visualizations can be concept-driven, data-driven, or a mixture of the two, and can be printed out or provided digitally. Visualizations are usually professionally created, but this approach could also work with learner-created visualizations (see Fostering Strategies 1 and 5).
  • In small groups, learners view and discuss the visualizations. Potential prompts include:
    • What do these visualizations have in common?
    • How do they differ?
    • Which is most effective, and why?
    • Pick one visualization and say how it could be improved. Explain the reasoning behind your suggestions.

Examples:

  • Earth & Space Sciences:
    • Plate tectonics (e.g. different diagrams of the same type of plate boundary)
    • Different concept-driven visualizations of the water cycle (see figure)
    • Map projections (same area in different projections; see the map-projection sketch after this list)
    • Time cycles (same data type over longer and shorter time intervals, e.g. daily, seasonal, interannual)
  • Physical Sciences & Engineering
    • Depictions of molecules (ball-and-stick, space-filling, wireframe; see the molecular-rendering sketch after this list)
    • Protein conformations
    • Engineering: sketches, prototypes, AutoCAD drawings, and 2D/3D representations of the same design
    • Computer Science: different conventions for representing dynamics in static visualizations
  • Life Sciences
    • Diagrams of animal cells (or plant cells)
    • Genetics: representations of chromosomes
    • Carbon cycle diagrams (and other elemental cycles)
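
For instructors who assemble these comparison sets programmatically, the following sketch shows one way to produce the map-projection example referenced above: the same global coastline data drawn in three projections for side-by-side viewing. It is a minimal sketch assuming Python with the matplotlib and cartopy packages installed; the particular projections are illustrative choices, not recommendations.

    # Sketch: one dataset (cartopy's built-in coastlines), three map projections.
    import matplotlib.pyplot as plt
    import cartopy.crs as ccrs

    projections = {
        "Plate Carree": ccrs.PlateCarree(),
        "Robinson": ccrs.Robinson(),
        "Orthographic": ccrs.Orthographic(central_longitude=-60, central_latitude=15),
    }

    fig = plt.figure(figsize=(12, 4))
    for i, (name, proj) in enumerate(projections.items(), start=1):
        ax = fig.add_subplot(1, 3, i, projection=proj)
        ax.set_global()
        ax.coastlines()   # identical underlying data in every panel
        ax.gridlines()    # the graticule makes the distortion differences visible
        ax.set_title(name)

    plt.show()

Restricting the view to a single region (for example with ax.set_extent) yields the "same area in different projections" variant, and learners can then compare how shapes, areas, and distances are preserved or distorted in each panel.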
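
Similarly, here is a minimal sketch of the molecular-rendering example, assuming a Jupyter notebook with the py3Dmol package installed; the PDB entry 1CRN (crambin) is only an illustrative choice. The same structure is loaded once and shown in a 1x3 grid under three visual conventions.

    # Sketch: one molecular structure, three rendering conventions (py3Dmol in Jupyter).
    import py3Dmol

    view = py3Dmol.view(query="pdb:1CRN", viewergrid=(1, 3), width=900, height=300)
    view.setStyle({"line": {}}, viewer=(0, 0))    # wireframe
    view.setStyle({"stick": {}}, viewer=(0, 1))   # sticks
    view.setStyle({"sphere": {}}, viewer=(0, 2))  # space-filling
    view.zoomTo()
    view.show()

The discussion prompts from the Description section (what the depictions have in common, how they differ, which is most effective for a given purpose) apply directly to the resulting panels.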

Affordances of this strategy/what it is good for:

  • For learning about the phenomenon:
    • Helps learners sort out which features of the phenomenon are important--those that are in common across visualizations ("extracting the schema")
    • Allows learners to learn about multiple facets or aspects of the phenomenon, not all of which can be captured in the same visualization
    • Reinforces key concepts (the ones represented by the simpler diagrams) and then allows layering of complexity (in the more detailed representations)
  • For increasing visualization competency:
    • Builds realization that different visualizations are suitable (and even necessary) to convey different aspects of the same phenomenon or concept (e.g. big picture versus focused attention).
    • Builds realization that different visualizations work better for different audiences.
    • Enables learners to develop skill in connecting across multiple visualizations.
    • May awaken learners to the possibility that some visualizations contain misrepresentations ("these two can't both be right; one of these must be wrong.")
    • Could help learners master some aspects of visual language (e.g. different uses of arrows; different ways of indicating scale)
    • Identifying strong and weak aspects of provided visualizations lays the groundwork for learners to create better visualizations themselves later on.
    • Lays groundwork for learners to be able to communicate with a variety of different representations (e.g. wireframes, digital prototypes, physical prototypes, diagrams, photorealistic), which is a foundational competence in some fields.
  • For class dynamics and workforce skills:
    • Having multiple images allows everyone in the group to have voice/ownership (different people connect with different images)
    • Discourse among learners reinforces learning and builds connections
    • Builds confidence in translating between representations and in using the representation of greatest comfort for later topics
    • Practice in critiquing others' work builds habit of constructive, respectful critiquing
  • For the instructor:
    • May help instructor learn about hidden assumptions about particular visualizations; this could include assumptions by learners or assumptions by the visualization creator.
    • Could be used to assess student understanding of the phenomenon and visualization competency. Possible assessment prompt: why are these visualizations different from each other?

Potential pitfalls & challenges:

  • For the learners:
    • So many visualizations can be overwhelming to the novice learner.
    • Novices may not have knowledge of the disciplinary conventions required to understand the meaning of some visualizations.
    • The "representational dilemma": You need to learn the conventions at the same time you are learning the content, and that can be challenging (Rau, 2016).
    • Learners could mistakenly conclude that there is supposed to be one "best" visualization--rather than concluding that different visualizations are valuable for different audiences and different aspects of the concept/phenomenon.
    • If you include some flawed visualizations in the mix to enrich the "critique" task, erroneous ideas about the phenomenon could be reinforced.
    • For some phenomena, many representations have the same flaw (e.g. number of mitochondria in a cell), opening the door to even more reinforcement of erroneous content ideas.
    • Seeing multiple representations that seem to disagree with each other could lead to inappropriate skepticism about the quality of the science (e.g. climate change).
    • In critique phase, learners may focus on aesthetic aspects.
  • For the instructor or curriculum designer:
    • Takes significant class time; it has to be used strategically, for selected concepts/phenomena.
    • Classroom logistics may be challenging.
    • Figuring out a basis for the critique (other than "I find this one confusing") and guiding learners towards it can be difficult. Not having such a framework could lead to a lot of churn. Developing such a framework could be done as a class activity, which could be valuable for visualization mastery--but that would take up even more time.
    • Static visualizations are the easiest to organize for this activity--but they can only carry the learners so far towards understanding a complex phenomenon.
    • If the visualizations are all concept-driven (as in the workshop exercise), significant detail from the underlying data is lost. If they are all data-driven, processes or mechanisms may not emerge. If the set mixes data-driven and concept-driven visualizations, students may find the compare/contrast task more confusing.
    • Time consuming to find appropriate visualizations that also have the appropriate copyright.

Emergent insights:

  • "Critique" and "Compare/contrast" are different skill sets; in some cases it might be better to put these as separate activities. There is an inherent tension between signaling to learners that one visualization is "the best" and signaling that different visualizations will be needed for different audiences and/or different aspects of the same phenomenon.
  • Even setting aside the "Critique" component, there is an additional tension within the compare/contrast activity, between "lumping" (seeing commonalities) and "splitting" (seeing differences). Students need to learn that both similarities and differences are of value in coming to understand the concept or phenomenon.
  • There are two sources of difference among visualizations of the same phenomenon: Actual variability in the world (e.g. volcano to volcano), and variability in the goals/priorities/skillset/knowledge of the visualization-creator. How can we get learners to realize this themselves--other than by just telling them? For some phenomena (e.g. volcanoes), both sources of variability are important. For other phenomena (e.g. sodium molecules), actual variation in the referent is minimal and the variation in the visualizations is all introduced by the visualization creator to serve different purposes.
  • All the fostering strategies involve some tension between teaching "content" (knowledge and understanding of the system depicted by the representation) and fostering visualization competency. Workshop participants felt that the compare/contrast/critique strategy is among the best at balancing the two and doing both efficiently at the same time; the "Affordances" section above explores some reasons why.

Researchable questions:

  • How do content learning outcomes (knowledge and understanding of the represented concept or phenomenon) differ between this approach and the more traditional approach of showing one iconic visualization?
  • The examples used during the workshop (water cycle, volcano) were concept-driven visualizations. How do the affordances, challenges, and effectiveness of this strategy differ when used with data-driven versus concept-driven visualizations?
  • How can we foster appropriate critiquing behaviors? What prompts? What social norms? How do we move novices beyond "I like..." towards the ability to articulate strengths and weaknesses, and the reasons/evidence for saying so?
  • How can we get past the "representational dilemma" (the need to learn representational conventions simultaneously with learning the content conveyed by the representations)? What should be explicitly taught before learners jump into the deep end? Where is the sweet spot for scaffolding, guidance, etc.?
  • For which topics or types of topics is this instructional approach most valuable?
  • How should the instructional designer choose the visualizations for this approach? How many? What mixture of simple/complex, data-driven/concept-driven? Is it a good idea to include some visualizations with known flaws?
  • Do the visualization competencies fostered by this approach transfer between topics or disciplines? If this approach is used for one topic, and then later tried in a totally different topic, do learners begin topic 2 with more insightful comparisons and critiques?
  • What instructional moves will help to bring forth the insight that different types of visualizations are suited for different audiences and for different aspects of the represented phenomenon?
  • What instructional moves will help to bring forth the insight that there are two sources of variation between visualizations: variation out there in the real world, and variation introduced by the visualization creator?

References & Credits:

  • The idea for using this as a student activity came from Professor Ann Rivet, of Columbia Teachers College, with whom workshop convener Kim Kastens taught a course called "Teaching and Learning Concepts in Earth Science."
  • Ainsworth, S. (2006). DeFT: A conceptual framework for considering learning with multiple representations. Learning and Instruction, 16(3), 183-198.
  • Ainsworth, S. (2008). The educational value of multiple-representations when learning complex scientific concepts. Visualization: Theory and practice in science education, 191-208.
  • Christie, S., & Gentner, D. (2010). Where hypotheses come from: Learning new relations by structural alignment. Journal of Cognition and Development, 11(3), 356-373.
  • diSessa, A. A. (2002). Students' criteria for representational adequacy. In K. Gravemeijer, R. Lehrer, B. van Oers, & L. Verschaffel (Eds.), Symbolizing, Modeling and Tool Use in Mathematics Education (pp. 105-129). Netherlands: Kluwer Academic Publishers.
  • Harold, J., Lorenzoni, I., Shipley, T. F., & Coventry, K. R. (2016). Cognitive and psychological science insights to improve climate change data visualizations. Nature Climate Change, 6, 1080-1089.
  • Kozma, R. B., & Russell, J. (1997). Multimedia and understanding: Expert and novice responses to different representations of chemical phenomena. Journal of Research in Science Teaching, 34, 949-968.
  • Rau, M. A. (2016). Conditions for the effectiveness of multiple visual representations in enhancing STEM learning. Educational Psychology Review, 1-45. doi: 10.1007/s10648-016-9365-3.
  • Rau, M. A., Michaelis, J. E., & Fay, N. (2015). Connection making between multiple graphical representations: A multi-methods approach for domain-specific grounding of an intelligent tutoring system for chemistry. Computers and Education, 82, 460-485. doi:10.1016/j.compedu.2014.12.009
  • Russell, J., & Kozma, R. (2007). Assessing learning from the use of multimedia chemical visualization software. In J. K. Gilbert (Ed.), Visualization in Science Education (pp. 299-332). Springer.
  • White, T., & Pea, R. (2011). Distributed by design: On the promises and pitfalls of collaborative learning with multiple representations. Journal of the Learning Sciences, 20(3), 489-547.