Pattern-Driven Navigation in 2D Multiscale Visualizations with Scalable Insets

Scalable Insets is a new technique for interactively exploring and navigating large numbers of annotated patterns or features in multiscale visualizations such as gigapixel images, matrices, or maps. A feature can be any kind of visual pattern, such as a car in an image, a dot in a matrix, or a park in a map. Our technique visualizes annotated patterns that are too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the patterns. Insets are dynamically placed either within the viewport or along its boundary to offer a compromise between locality and context preservation. Annotated patterns are interactively clustered by location and type and visually represented as aggregated insets to provide scalable exploration within a single viewport.

Screencast & Presentation

5-min. Introduction to Scalable Insets

12-min. talk from the IEEE VIS 2019 conference

Slides from my talk at ISMB BioVis 2018

Slides from my talk at the IEEE VIS 2019 conference

Introduction

Many large data sets, such as gigapixel images, geographic maps, or networks, require exploration of annotations at different levels of detail. We call a region of the visualization that contains some visual pattern an annotation (henceforth referred to as an annotated pattern). These annotated patterns can either be created by users or derived computationally. However, annotated patterns are often orders of magnitude smaller than the overview and too small to be visible. This makes tasks such as exploration, searching, comparing, or contextualizing challenging, as considerable navigation is needed to overcome the lack of overview or detail.

Exploring annotated patterns in context is often needed to assess their relevance and to separate important from unimportant regions. For example, computational biologists study thousands of small patterns in large genome interaction matrices [1] to understand which physical interactions between genomic regions drive the structure of the genome. In astronomy, researchers explore and compare many heterogeneous galaxies and stars within ultra-high-resolution imagery [2]. In both cases, inspecting every potentially important region in detail is simply not feasible.

Fig. 1: Three approaches exemplifying naive optimization of (1) locality, (2) context, and (3) detail only. The red rectangle in (C) indicates the size of the occluded image for reference.

Exploring the visual details of these annotated patterns in multiscale visualizations requires a tradeoff between several conflicting criteria (Fig. 1). Patterns must be visible for inspection and comparison (detail). Enough of the overview needs to remain visible to provide context for the patterns (context). And the detailed pattern representations must be close to their actual positions in the overview (locality).
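One way to make this tradeoff concrete is to score a candidate inset layout against the three criteria. The sketch below is purely illustrative; the terms and weights are assumptions and not how Scalable Insets actually quantifies detail, context, and locality.

```typescript
// Illustrative only: a weighted score over the three conflicting criteria.
// Each term is assumed to be normalized to [0, 1]; the default weights are
// arbitrary and not taken from the Scalable Insets paper.
interface LayoutQuality {
  detail: number;   // how legible the magnified patterns are
  context: number;  // how much of the overview remains unoccluded
  locality: number; // how close insets sit to their original positions
}

function layoutScore(
  q: LayoutQuality,
  w = { detail: 1, context: 1, locality: 1 }
): number {
  return w.detail * q.detail + w.context * q.context + w.locality * q.locality;
}
```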

Current interactive navigation and visualization approaches, such as focus+context, overview+detail, or general highlighting techniques, address some but not all of these criteria. They become cumbersome when repeated viewport changes, multiple manual lenses, or separate views at different zoom levels are required, all of which strain the user's mental capacities.

Fig. 2: The core idea of Scalable Insets. (1) A multiscale visual space with several annotated patterns, some of which are too small to be identifiable (indicated by “???”). (2) The virtual pattern space shown as a space-scale diagram, illustrating that some patterns are only identifiable at certain zoom levels. (3) To provide guidance, small patterns are placed as magnified insets into the current viewport. (4) Scalability is ensured by dynamically grouping small patterns in close proximity and representing them as an aggregate.

Scalable Insets is a scalable visualization technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visualizations. It supports users in early exploration through multifocus guidance, dynamically placing magnified thumbnails of annotated patterns as insets within the viewport (Fig. 2). The entire design (Fig. 3) is focused on scalability. To keep the number of insets stable, we developed a dynamic grouping approach that clusters patterns based on their location, type, and the user's viewport.
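The sketch below illustrates the idea of viewport-driven grouping. It is a simplified illustration, not the actual Scalable Insets implementation: the `Pattern` record, the pixel thresholds, and the greedy single-pass clustering are assumptions made for brevity.

```typescript
// Simplified sketch of viewport-driven pattern grouping (not the actual
// Scalable Insets code). Patterns that are too small to identify at the
// current zoom level and that lie close together on screen are merged
// into clusters, which are later rendered as aggregated insets.

interface Pattern {
  id: string;
  type: string; // e.g., "loop" in a matrix, "car" in an image, "park" in a map
  cx: number;   // center in data coordinates
  cy: number;
  size: number; // extent in data coordinates
}

interface Cluster {
  members: Pattern[];
  cx: number; // screen-space centroid
  cy: number;
}

interface Viewport {
  x: number;     // data-space origin of the visible region
  y: number;
  scale: number; // pixels per data unit, i.e., the zoom level
}

const MIN_IDENTIFIABLE_PX = 24; // below this on-screen size, show an inset
const GROUPING_RADIUS_PX = 48;  // merge small patterns closer than this

function toScreen(p: Pattern, v: Viewport) {
  return {
    x: (p.cx - v.x) * v.scale,
    y: (p.cy - v.y) * v.scale,
    px: p.size * v.scale,
  };
}

// Greedy single-pass clustering: enough to show that the degree of grouping
// depends on the viewport (zoom level and position), not a stable algorithm.
function groupPatterns(patterns: Pattern[], v: Viewport): Cluster[] {
  const clusters: Cluster[] = [];
  for (const p of patterns) {
    const s = toScreen(p, v);
    if (s.px >= MIN_IDENTIFIABLE_PX) continue; // identifiable: no inset needed
    const near = clusters.find(
      (c) =>
        c.members[0].type === p.type && // keep clusters type-pure
        Math.hypot(c.cx - s.x, c.cy - s.y) < GROUPING_RADIUS_PX
    );
    if (near) {
      near.members.push(p);
      const n = near.members.length;
      near.cx = (near.cx * (n - 1) + s.x) / n; // update centroid incrementally
      near.cy = (near.cy * (n - 1) + s.y) / n;
    } else {
      clusters.push({ members: [p], cx: s.x, cy: s.y });
    }
  }
  return clusters;
}
```

In this sketch, zooming in increases `scale`, so patterns cross the identifiability threshold and drop out of the clustering step, which mirrors how aggregated insets gradually resolve into individual insets as the user navigates closer.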

The degree of clustering constitutes a tradeoff between context and detail. Groups of patterns are visually represented as a single aggregated inset that still conveys detail. Details of aggregated patterns are gradually resolved as the user navigates into certain regions. We also present two dynamic mechanisms for placing insets either within the overview or on the overview's boundary to allow flexible adjustment of locality. With Scalable Insets, the user can rapidly search, compare, and contextualize large pattern spaces in multiscale visualizations.
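The two placement mechanisms can be sketched as follows. This is a simplified illustration under assumed names and sizes, not the paper's actual placement algorithm, which additionally resolves overlaps between insets and keeps placements stable during panning and zooming.

```typescript
// Simplified sketch of the two inset placement modes (not the actual
// Scalable Insets optimization). Coordinates are in screen space; the
// viewport spans [0, width] x [0, height].

interface Point {
  x: number;
  y: number;
}

const clamp = (v: number, lo: number, hi: number) => Math.min(Math.max(v, lo), hi);

// Inner placement: position the inset right next to its pattern, clamped so
// it stays fully inside the viewport. Maximizes locality, occludes context.
function placeInner(pattern: Point, insetSize: number, width: number, height: number): Point {
  return {
    x: clamp(pattern.x + 8, 0, width - insetSize),
    y: clamp(pattern.y + 8, 0, height - insetSize),
  };
}

// Boundary placement: push the inset onto the nearest viewport edge, keeping
// the other coordinate aligned with the pattern so a leader line stays short.
// Preserves context at the cost of locality.
function placeOnBoundary(pattern: Point, insetSize: number, width: number, height: number): Point {
  const cx = clamp(pattern.x - insetSize / 2, 0, width - insetSize);
  const cy = clamp(pattern.y - insetSize / 2, 0, height - insetSize);
  const candidates: Array<[number, Point]> = [
    [pattern.x, { x: 0, y: cy }],                           // left edge
    [width - pattern.x, { x: width - insetSize, y: cy }],   // right edge
    [pattern.y, { x: cx, y: 0 }],                           // top edge
    [height - pattern.y, { x: cx, y: height - insetSize }], // bottom edge
  ];
  candidates.sort((a, b) => a[0] - b[0]); // pick the nearest edge
  return candidates[0][1];
}
```

In this sketch, inner placement favors locality while boundary placement favors an unoccluded overview, mirroring the tradeoff described above.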

Fig. 3: Schematic design principles of Scalable Insets. (1) Inset design and information encoding. (2) Visual representation of aggregated insets. (3) Leader line styles. (4) The inset placement mechanism and its stability considerations. (5) The aggregation procedure and its stability considerations. (6) Interactions between insets in Scalable Insets.

We implemented Scalable Insets as an extension to HiGlass [3], a flexible web application for viewing large tile-based datasets. The implementation currently supports gigapixel images, geographic maps, and genome interaction matrices. In a qualitative user study, six computational biologists explored features in genome interaction matrices using Scalable Insets.

Their feedback shows that our technique is easy to learn and effective in biological data exploration. Results of a controlled user study with 18 novice users comparing both placement mechanisms of Scalable Insets to a standard highlighting technique show that Scalable Insets reduced the time to find annotated patterns by up to 45% and improved the accuracy in comparing pattern types by up to 32 percentage points.

Publication

  1. Pattern-Driven Navigation in 2D Multiscale Visualizations with Scalable Insets

    1. Fritz Lekschas
    2. Michael Behrisch
    3. Benjamin Bach
    4. Peter Kerpedjiev
    5. Nils Gehlenborg
    6. Hanspeter Pfister
    IEEE Transactions on Visualization and Computer Graphics (InfoVis 19), 2019. doi: 10.1109/TVCG.2019.2934555

Source Code

All the code of Scalable Insets is publicly accessible and open-source.

Since Scalable Insets is implemented as a plugin for HiGlass, the related HiGlass repositories may also be of interest.

Authors

  1. Fritz Lekschas

    Harvard John A. Paulson School of Engineering and Applied Sciences

  2. Michael Behrisch

    Harvard John A. Paulson School of Engineering and Applied Sciences

  3. Benjamin Bach

    University of Edinburgh

  4. Peter Kerpedjiev

    Department of Biomedical Informatics, Harvard Medical School

  5. Nils Gehlenborg

    Department of Biomedical Informatics, Harvard Medical School

  6. Hanspeter Pfister

    Harvard John A. Paulson School of Engineering and Applied Sciences

References

  [1] Dekker et al. (2017). The 4D nucleome project. Nature, 549, 219.
  [2] Pietriga et al. (2016). Exploratory visualization of astronomical data on ultra-high-resolution wall displays. Proceedings of SPIE, 9913, 15.
  [3] Kerpedjiev et al. (2018). HiGlass: web-based visual exploration and analysis of genome interaction maps. Genome Biology, 19, 125.