Pattern-Driven Navigation
in 2D Multiscale Visual Spaces
with
Scalable Insets

Scalable Insets is a new technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visual spaces such as gigapixel images, matrices, or maps. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the patterns. Insets are dynamically placed either within the viewport or along the boundary of the viewport to offer a compromise between locality and context preservation. Annotated patterns are interactively clustered by location and type, and each cluster is visually represented as an aggregated inset to provide scalable exploration within a single viewport.

Screencast

Introduction

Many large data sets, such as gigapixel images, geographic maps, or networks, require exploration of annotations at different levels of detail. We call a region in the visualization that contains some visual pattern an annotation (and refer to such patterns as annotated patterns henceforth). These annotated patterns can either be generated by users or derived computationally. However, annotated patterns are often orders of magnitude smaller than the overview and too small to be visible. This makes tasks such as exploring, searching, comparing, or contextualizing patterns challenging, as considerable navigation is needed to overcome the lack of overview or detail.

Exploring annotated patterns in context is often needed to assess the relevance of patterns and to distinguish important from unimportant regions. For example, computational biologists study thousands of small patterns in large genome interaction matrices [1] to understand which physical interactions between regions on the genome are the driving factor that defines the 3D structure of the genome. In astronomy, researchers explore and compare multiple heterogeneous galaxies and stars within super high-resolution imagery [2]. In both cases, inspecting every potentially important region in detail is simply not feasible.

Fig. 1: Three approaches exemplifying naive optimization of (1) locality, (2) context, and (3) detail only. The red rectangle in (3) indicates the size of the occluded image for reference.

Exploring visual details of these annotated patterns in multiscale visual spaces requires a tradeoff between several conflicting criteria (Fig. 1). Patterns must be visible for inspection and comparison (detail). Enough of the overview needs to be visible to provide context for the patterns (context). And the detailed pattern representations must be close to their actual positions in the overview (locality).

Current interactive navigation and visualization approaches, such as focus+context, overview+detail, or general highlighting techniques, address some but not all of these criteria. They become cumbersome when repeated viewport changes, multiple manual lenses, or separate views at different zoom levels are required, all of which strain the user's mental capacity.

Fig. 2: The core idea of Scalable Insets. (1) A multiscale visual space with several annotated patterns, some of which are too small to be identifiable (indicated by “???”). (2) The pattern space shown as a space-scale diagram, illustrating that some patterns are only identifiable at certain zoom levels. (3) To provide guidance, small patterns are placed as magnified insets into the current viewport. (4) Scalability is ensured by dynamically grouping small patterns in close proximity and representing each group as an aggregate.

Scalable Insets is a scalable visualization technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visual spaces. Scalable Insets supports users in early exploration through multifocus guidance by dynamically placing magnified thumbnails of annotated patterns as insets within the viewport (Fig. 2). The entire design (Fig. 3) is focused on scalability. To keep the number of insets stable, we developed dynamic grouping to cluster patterns based on their location, type, and the user’s viewport.
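The viewport-dependent grouping can be illustrated with a minimal sketch. The names (`Pattern`, `group_patterns`), the greedy nearest-cluster strategy, and the fixed pixel radius are illustrative assumptions, not the paper's actual algorithm, which clusters patterns more carefully based on location, type, and viewport:

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    x: float      # center position in data space
    y: float
    ptype: str    # pattern type, e.g. "loop" or "domain"

def group_patterns(patterns, zoom, radius=50.0):
    """Greedy, viewport-dependent grouping: a pattern joins an existing
    cluster when it shares the cluster's type and its projected
    screen-space distance to the cluster seed is below `radius` pixels."""
    clusters = []  # each cluster is a list of patterns
    for p in patterns:
        px, py = p.x * zoom, p.y * zoom  # project to screen space
        for c in clusters:
            q = c[0]  # use the first member as the cluster seed
            dist = ((px - q.x * zoom) ** 2 + (py - q.y * zoom) ** 2) ** 0.5
            if q.ptype == p.ptype and dist < radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```

Because distances are measured in screen space, zooming in spreads patterns apart and aggregated insets naturally split into individual ones, matching the gradual resolution of detail described above.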

The degree of clustering constitutes a tradeoff between context and detail. Groups of patterns are visually represented as a single aggregated inset to preserve detail. Details of aggregated patterns are gradually resolved as the user navigates into certain regions. We also present two dynamic mechanisms for placing insets either within the overview or on the overview’s boundary to allow flexible adjustment of locality. With Scalable Insets, the user can rapidly search, compare, and contextualize large pattern spaces in multiscale visualizations.
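For the boundary mode, one simple way to keep a sense of locality is to slide each inset to the point where the ray from the viewport center through the pattern crosses the viewport border, so the inset's position still hints at the pattern's direction. This is only a sketch of the idea; the function and parameter names are hypothetical, and the paper's placement mechanism additionally accounts for inset sizes, overlaps, and stability:

```python
def project_to_boundary(x, y, vw, vh):
    """Project a point (x, y) in viewport coordinates onto the boundary
    of a vw-by-vh viewport, along the ray from the viewport center."""
    cx, cy = vw / 2, vh / 2
    dx, dy = x - cx, y - cy
    if dx == 0 and dy == 0:
        return cx, cy  # degenerate case: pattern at the exact center
    # Scale the direction vector until it touches the nearest border.
    tx = (vw / 2) / abs(dx) if dx else float("inf")
    ty = (vh / 2) / abs(dy) if dy else float("inf")
    t = min(tx, ty)
    return cx + dx * t, cy + dy * t
```

A point far to the right of a 100×100 viewport, for instance, lands on the right border at the same vertical height, preserving the direction toward the off-screen pattern.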

Fig. 3: Schematic design principles of Scalable Insets. (1) Inset design and information encoding. (2) Visual representation of aggregated insets. (3) Leader line styles. (4) The inset placement mechanism and stability considerations of Scalable Insets. (5) Aggregation procedure and stability considerations. (6) Interactions with insets in Scalable Insets.

We implemented Scalable Insets as an extension to HiGlass [3], a flexible web application for viewing large tile-based datasets. The implementation currently supports gigapixel images, geographic maps, and genome interaction matrices. In a qualitative user study, six computational biologists explored features in genome interaction matrices using Scalable Insets.

Their feedback shows that our technique is easy to learn and effective in biological data exploration. Results of a controlled user study with 18 novice users comparing both placement mechanisms of Scalable Insets to a standard highlighting technique show that Scalable Insets reduced the time to find annotated patterns by up to 45% and improved the accuracy in comparing pattern types by up to 32 percentage points.

Publication

  1. Pattern-Driven Navigation in 2D Multiscale Visual Spaces with Scalable Insets

    1. Fritz Lekschas
    2. Michael Behrisch
    3. Benjamin Bach
    4. Peter Kerpedjiev
    5. Nils Gehlenborg
    6. Hanspeter Pfister
    bioRxiv, April 15, 2018. doi: 10.1101/301036
Preprint

Source Code

All code for Scalable Insets is publicly accessible and open source.

Authors

  1. Fritz Lekschas

    Harvard John A. Paulson School of Engineering and Applied Sciences

  2. Michael Behrisch

    Harvard John A. Paulson School of Engineering and Applied Sciences

  3. Benjamin Bach

    University of Edinburgh

  4. Peter Kerpedjiev

    Department of Biomedical Informatics, Harvard Medical School

  5. Nils Gehlenborg

    Department of Biomedical Informatics, Harvard Medical School

  6. Hanspeter Pfister

    Harvard John A. Paulson School of Engineering and Applied Sciences

References

  1. [1] Dekker et al. (2017) The 4D nucleome project. Nature, 549, 219.
  2. [2] Pietriga et al. (2016) Exploratory visualization of astronomical data on ultra-high-resolution wall displays. Proceedings SPIE, 9913, 15.
  3. [3] Kerpedjiev et al. (2017) HiGlass: Web-based visual comparison and exploration of genome interaction maps. bioRxiv.