# VisLunch/Fall2010

**Vis Lunch!**

*Where:* Conference Room WEB 3760

*When:* Friday noon

This semester Paul Rosen and Kristi Potter will be responsible for organizing the VisLunch sessions. Please feel free to contact them with any questions regarding VisLunch or to schedule a talk:

Paul Rosen (prosen@sci.utah.edu), Kristi Potter (kpotter@sci.utah.edu)

Information regarding the VisLunch sessions will be posted on this wiki page (http://www.vistrails.org/index.php/VisLunch/Fall2010)

If you are unfamiliar with it, VisLunch provides everyone at SCI a platform to present their research work and/or the latest developments in the community that could benefit the rest of us. In addition, the meeting is a great forum to give practice talks and improve your presentation skills. Plus there's _free_ pizza, and it's a nice opportunity to meet new people. Please let either Paul or Kristi know if

1.) You've submitted work to a research venue (e.g. recent conferences like Siggraph) and would like to share your ideas;

2.) You are preparing a submission to an upcoming venue (e.g. IEEE Vis, Siggraph Asia, etc.) and would like to get some feedback;

3.) Your work has been accepted to some venue and you are preparing a presentation you would like to practice; or

4.) You've recently read a new publication and are fascinated by the ideas and wish to share them with the rest of us.

Please consider volunteering to give a presentation at some point!
We're hoping that there will be enough presenters so that we don't
have to cancel any future sessions.

## Sessions

| Date | Presenter | Topic |
|---|---|---|
| September 03 | Kristi Potter | Organization and Introductions |
| | Yi Yang | ViSSaAn: Visual Support for Safety Analysis |
| September 10 | Dav de St. Germain | Developer's Symposium II |
| September 17 | Jens Krueger | Work at the Interactive Visualization and Data Analysis Group |
| October 1 | Liang Zhou | Tensor Product Transfer Functions Using Statistical and Occlusion Metrics |
| October 8 | Sam Gerber | Vis Practice Talk: Visual Exploration of High Dimensional Scalar Functions |
| | Claurissa Tuttle | InfoVis Practice Talk: PedVis: A Structured, Space Efficient Technique for Pedigree Visualization |
| October 15 | Fall Break | NO VisLunch |
| October 22 | Allen Sanderson | Vis Practice Talk: Analysis of Recurrent Patterns in Toroidal Magnetic Fields |
| | Roni Choudhury | Vis PhD Colloquium Practice Talk: Application-Specific Visualization for Memory Reference Traces |
| October 29 | VisWeek 2010 | NO VisLunch |
| November 5 | Sidharth Kumar | Towards Parallel Access of Multidimensional Multi-resolution Scientific Data |
| November 12 | Tiago Etiene | Volume Rendering Verification |
| November 19 | Jorge Poco Medina, Roni Choudhury, Daniel Osmari, Linh Khanh Ha, Huy Vo | VisWeek 2010 Review |
| November 26 | Thanksgiving | NO VisLunch |
| December 3 | Alan Cannaday | Regularization Methods for the Inverse Laplace Transform |
| | Jason Thummel and Wilson Batemann | 3D Visualization of C. elegans Neuron Vesicles |
| December 10 | Shreeraj Jadhav | Consistent Approximation of Local Flow Behavior for 2D Vector Fields using Edge Maps |
| | Harsh Bhatia | Edge Maps: Representing Flow with Bounded Error |

### September 3: Organization and Introductions / Yi Yang

**Organization and Introductions**

Quick discussion of vis lunch and introductions. Students attending should plan on giving a brief (5 minutes or so) oral description of what they have done with the last 3 months of their lives.

**Speaker: Yi Yang**

ViSSaAn: Visual Support for Safety Analysis

The safety of technical systems is becoming increasingly important. Fault trees, component fault trees, and minimal cut sets are commonly used to assess safety-critical systems. A visualization system named ViSSaAn (Visual Support for Safety Analysis), consisting of a matrix view, is proposed to support efficient safety analysis based on the information from these techniques. Interactions such as zooming and grouping support the task of finding safety problems in the analysis information.

### September 10: Developer's Symposium II

**Speakers:**

- C-SAFE [Davison de St. Germain, John Schmidt]
- SDC (Software Development Center) [Steve Callahan, John Schreiner]
- Backscatter CT simulation, Non-rigid image registration [Yongsheng Pan]
- Longitudinal data analysis [Stanley Durrleman, Marcel Prastawa]
- FEBio/PreView/PostView [Steve Maas]

### September 17: Work at the Interactive Visualization and Data Analysis Group

**Speaker: Jens Krueger**

What has Jens been up to in the last year, and what are the possibilities for collaboration?

### October 01: Tensor Product Transfer Functions Using Statistical and Occlusion Metrics

**Speaker: Liang Zhou**

Direct volume rendering has been an active area of research for over two decades. While impressive results are possible, transfer function design remains a difficult task. Current methods, the traditional 1D and 2D transfer functions, are not always effective for all datasets. In this paper, we present a novel tensor-product-style 3D transfer function which can provide more specificity for data classification. Our new transfer function field is comprised of a 2D statistical transfer function with occlusion information as the third axis. The 2D statistical transfer function space is computed via an adaptive method; the occlusion information is computed as an edge-preserving mean value on the volume. Both metrics are precomputed on GPUs in seconds, providing interactivity. Additionally, we present a novel user interface for manipulating the 3D transfer function which allows the user to easily explore the 3D tensor product transfer function space. We compare the new method to previous 2D gradient magnitude, 2D occlusion spectrum, and 2D statistical transfer functions to demonstrate its usefulness.

### October 08: Vis Practice Talks

**Speaker: Sam Gerber**

Vis Paper Practice Talk: Visual Exploration of High Dimensional Scalar Functions

An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyzing and visualizing such data sets. The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. The result is a simplified geometric representation of the Morse-Smale complex in the high-dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two-dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts, the proposed method is applied to two scientific challenges: the analysis of parameters of climate simulations and their relationship to predicted global energy flux, and the analysis of concentrations of chemical species in a combustion simulation and their integration with temperature.

**Speaker: Claurissa Tuttle**

InfoVis Paper Practice Talk: PedVis: A Structured, Space Efficient Technique for Pedigree Visualization

Public genealogical databases are becoming increasingly populated with historical data and records of the current population’s ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. Conversely, leaving space for unknown ancestors allows one to better understand the tree’s structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, presents an increase in space-efficiency, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user’s ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, the results of a user study indicate high potential for user acceptance of the new layout.
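
As a rough illustration of the idea (a hypothetical sketch, not the PedVis code), an H-tree layout can be generated recursively: each generation alternates between horizontal and vertical placement while halving the arm length, so the whole pedigree stays inside a bounded region:

```python
# Hypothetical sketch of an H-tree pedigree layout (not the PedVis code).
# Each node is keyed by its ancestry path: '' = root individual,
# then '0'/'1' for the two parents at each generation.

def h_tree_layout(path, x, y, arm, horizontal, depth, out):
    """Recursively place a complete binary pedigree on an H-tree.

    Generations alternate between horizontal and vertical arms, and the
    arm length halves each time, so the layout stays bounded.
    """
    out[path] = (x, y)
    if depth == 0:
        return out
    dx, dy = (arm, 0.0) if horizontal else (0.0, arm)
    h_tree_layout(path + "0", x - dx, y - dy, arm / 2, not horizontal, depth - 1, out)
    h_tree_layout(path + "1", x + dx, y + dy, arm / 2, not horizontal, depth - 1, out)
    return out

# 3 generations of ancestors -> 2**4 - 1 = 15 nodes, all inside |x|,|y| < 4/3
positions = h_tree_layout("", 0.0, 0.0, 1.0, True, 3, {})
```

Because the horizontal arms shrink geometrically (1 + 1/4 + 1/16 + ...), the total extent never exceeds 4/3 of the first arm, which is the space-efficiency argument in miniature.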

### October 15: Fall Break - NO VisLunch

### October 22: Vis Practice Talks

**Speaker: Allen Sanderson**

Vis Paper Practice Talk: Analysis of Recurrent Patterns in Toroidal Magnetic Fields

Abstract: In the development of magnetic confinement fusion, which could potentially be a future source of low-cost power, physicists must be able to analyze the magnetic field that confines the burning plasma. While the magnetic field can be described as a vector field, traditional techniques for analyzing the field’s topology cannot be used because of its Hamiltonian nature. In this paper we describe a technique, developed as a collaboration between physicists and computer scientists, that determines the topology of a toroidal magnetic field using fieldlines with near minimal lengths. More specifically, we analyze the Poincaré map of the sampled fieldlines in a Poincaré section, including identifying critical points and other topological features of interest to physicists. The technique has been deployed in an interactive parallel visualization tool which physicists are using to gain new insight into simulations of magnetically confined burning plasmas.

**Speaker: Roni Choudhury**

Vis PhD Colloquium Practice Talk: Application-Specific Visualization for Memory Reference Traces

Abstract: Memory performance is an important component of high-performance software. CPU speeds have increased faster than main-memory speeds over the last several years, and with more and more CPU cores bundled into computing systems, this widening gap has made memory performance increasingly critical to overall performance. One way to investigate memory performance is through the use of *memory reference traces*, which are records of the memory accesses performed by a program at runtime. The traditional use for reference traces is to perform cache simulation, producing overall cache statistics from which some insight about program performance can be gained.

In this talk I will describe the Memory Trace Visualizer (MTV), a novel system that takes as input memory reference traces, and produces visualizations representing how the program accessed memory, and how those accesses affect a cache of the user's design. The purpose of MTV is to investigate memory behavior and performance in real programs, and I will discuss the motivation behind it and its history, including current work in which I design application-specific visualizations with the goal of combining reference trace data with traditional scientific visualization in order to gain insight into how the structure of a particular problem may affect its memory performance.
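
The classic use of a reference trace mentioned above, cache simulation, can be sketched in a few lines. This is a minimal illustration of the general technique, not the MTV implementation, here for a direct-mapped cache:

```python
# Minimal direct-mapped cache simulation over a memory reference trace
# (an illustrative sketch, not the MTV implementation).

def simulate_cache(trace, num_lines=64, line_size=64):
    """Replay a trace of byte addresses and count hits and misses."""
    tags = [None] * num_lines          # one tag per cache line
    hits = misses = 0
    for addr in trace:
        block = addr // line_size      # which memory block this address lies in
        index = block % num_lines      # which cache line the block maps to
        if tags[index] == block:
            hits += 1
        else:
            misses += 1
            tags[index] = block        # evict and fill
    return hits, misses

# Address 4096 maps to the same line as 0 (64 lines * 64 bytes), so the
# final access to 0 is a conflict miss: 2 hits, 3 misses.
hits, misses = simulate_cache([0, 8, 16, 4096, 0])
```

Aggregate counts like these are exactly the "overall cache statistics" that MTV goes beyond by showing *which* accesses and data structures cause the misses.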

### October 29: VisWeek 2010 - NO VisLunch

http://vis.computer.org/VisWeek2010/

### November 05: Sidharth Kumar

**Speaker: Sidharth Kumar**

Towards Parallel Access of Multidimensional Multiresolution Scientific Data

Abstract: Large scale scientific simulations routinely produce data of increasing resolution. Analyzing this data is key to scientific discovery. A critical bottleneck facing the analysis is the I/O time to access the data. One method of addressing this problem is to reorganize the data in a manner that simplifies analysis and visualization. The IDX file format is an example of this approach. It orders data points so that they can be accessed at multiple resolution levels with favorable spatial locality and caching properties. IDX has been used successfully in fields such as digital photography and visualization of large scientific data, and is a promising approach for analysis of HPC data. Unfortunately, the existing tools for writing data in this format only provide a serial interface. HPC applications must therefore either write all data from a single process or convert existing data as a post-processing step, in either case failing to utilize available parallel I/O resources. In this work, we provide an overview of the IDX file format and the existing ViSUS library that provides serial access to IDX data. We investigate methods for writing IDX data in parallel and demonstrate that it is possible for HPC applications to write data directly into IDX format with scalable performance. Our preliminary results demonstrate 60% of the peak I/O throughput when reorganizing and writing the data from 512 processes on an IBM BG/P system. We also analyze the remaining bottlenecks and propose future work towards a more flexible and efficient implementation.
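
IDX's hierarchical ordering has its own specifics, but the underlying idea is related to bit-interleaved (Morton/Z-order) indexing, which keeps spatially nearby samples nearby in the file. A minimal 2D sketch of that related idea (my illustration, not the ViSUS code):

```python
# Illustrative 2D Morton (Z-order) index by bit interleaving
# (related in spirit to IDX's hierarchical ordering; not the ViSUS code).

def z_order(x, y, bits=16):
    """Interleave the bits of x and y into a single index so that points
    close together in 2D tend to be close in the 1D ordering."""
    index = 0
    for i in range(bits):
        index |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        index |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return index

# The four cells of a 2x2 block get consecutive indices:
# (0,0), (1,0), (0,1), (1,1) -> 0, 1, 2, 3
```

Because every aligned power-of-two block occupies a contiguous index range, coarser resolutions and subregions can be read with few, large, mostly sequential accesses, which is the locality property the abstract refers to.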

### November 12: Tiago Etiene

**Speaker: Tiago Etiene**

Title: Volume Rendering Verification

Abstract: Volume rendering techniques have become part of many critical scientific pipelines. While there are several papers focused on error estimation, visual artifacts, transfer functions, and performance, little has been done to assess the correctness of both implementations and algorithms. Typically, developers use techniques ranging from the 'eye ball' norm to expert analysis to assess code correctness. The goal of this ongoing work is to present an additional tool to help scientists and developers increase confidence in their volume rendering tools. We use convergence analysis to evaluate the final images generated by two volume rendering packages: VTK and Voreen. We tested four VTK modules and two versions of Voreen's volume rendering implementation. In the case of VTK, we have so far found and fixed code mistakes in two modules. Voreen presented unexpected behaviors, and we are still looking for explanations.
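
Convergence analysis of this kind typically compares errors at successive refinements: if the error behaves like e(h) ≈ C·h^p, the observed order p can be estimated from two runs, and a mismatch with the method's theoretical order signals a bug. A minimal sketch of that calculation (the generic verification idiom, not the authors' specific test harness):

```python
import math

# Estimate the observed order of convergence p from errors measured at
# two step sizes, assuming e(h) ~ C * h**p. This is the generic
# verification idiom, not the authors' specific harness.

def observed_order(h1, e1, h2, e2):
    """Solve e1/e2 = (h1/h2)**p for p."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# A second-order method: the error drops 4x when the step is halved.
p = observed_order(0.1, 1e-2, 0.05, 2.5e-3)   # p is approximately 2.0
```

If a renderer advertised as second-order accurate reports p near 1 (or the error stops shrinking at all), the implementation, not the data, is the likely culprit.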

### November 19: VisWeek 2010 Review

**Speaker: Jorge Poco Medina**

Two-Phase Mapping for Projecting Massive Data Sets

by F. Paulovich, C. Silva, and L. Nonato

**Speaker: Roni Choudhury**

Graphical Inference for Infovis

by H. Wickham, D. Cook, H. Hofmann, and A. Buja

**Speaker: Daniel Osmari**

Browsing Large Image Datasets through Voronoi Diagrams

by P. Brivio, M. Tarini, and P. Cignoni

**Speaker: Linh Khanh Ha**

Necklace Maps

by B. Speckmann and K. Verbeek

**Speaker: Huy Vo**

A Scalable Distributed Paradigm for Multi-User Interaction with Tiled Rear Projection Display Walls

by P. Roman, M. Lazarov, and A. Majumder

### November 26: Thanksgiving - NO VisLunch

### December 03: Alan Cannaday, Jason Thummel and Wilson Batemann

**Speaker: Alan Cannaday**

Title: Regularization methods for the inverse Laplace transform

Abstract: In many applications, such as transistors, solar cells, LEDs, and diode lasers, it is important to control the spontaneous photon emissions from atoms. One way of quantifying this effect is to embed quantum dots in a photonic crystal. An isolated quantum dot absorbs light at a given frequency and emits it with an exponentially decaying intensity with a known decay rate. When many quantum dots are embedded in a photonic crystal and observed while absorbing and emitting light, the sum of their emissions, or emission intensity I(t) at time t, can be modeled using the Laplace transform, i.e. I(t)/I(0) = L[φ(γ)](t), where φ(γ) is the distribution of the concentration of emitters for a given decay rate γ. The problem of determining φ(γ) from its Laplace transform I(t) is known to be a severely ill-posed linear inverse problem, which means that small perturbations in I(t) can lead to large errors in the estimate of φ(γ). For my project I explored the regularization methods for the inverse Laplace transform proposed by Epstein and Schotland (2008), which allow us to compute the inverse Laplace transform without requiring prior information about φ(γ).
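
Numerically, regularized inversion of this kind is often illustrated with Tikhonov regularization: discretize the Laplace transform as a matrix and solve a penalized least-squares problem. The sketch below is a generic illustration under that assumption, not the Epstein and Schotland method:

```python
import numpy as np

# Generic Tikhonov-regularized discrete inverse Laplace transform
# (an illustration of regularization, not Epstein & Schotland's method).

def tikhonov_inverse_laplace(t, gamma, intensity, lam):
    """Recover phi(gamma) from I(t) = integral exp(-gamma*t) phi(gamma) dgamma
    by solving min ||A phi - I||^2 + lam * ||phi||^2."""
    dgamma = gamma[1] - gamma[0]
    A = np.exp(-np.outer(t, gamma)) * dgamma          # discretized Laplace kernel
    lhs = A.T @ A + lam * np.eye(len(gamma))          # normal equations + penalty
    return np.linalg.solve(lhs, A.T @ intensity)

# Noiseless forward/inverse round trip with a single decay-rate peak
t = np.linspace(0.01, 5.0, 50)
gamma = np.linspace(0.1, 5.0, 40)
phi_true = np.exp(-((gamma - 2.0) ** 2) / 0.1)
A = np.exp(-np.outer(t, gamma)) * (gamma[1] - gamma[0])
intensity = A @ phi_true
phi_hat = tikhonov_inverse_laplace(t, gamma, intensity, lam=1e-6)
```

Without the `lam` penalty the normal equations are numerically singular, which is precisely the ill-posedness described above: even tiny noise in I(t) then destroys the estimate of φ(γ).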

**Speaker: Jason Thummel and Wilson Batemann**

Title: 3D Visualization of C. elegans Neuron Vesicles

Abstract: Three-dimensional visualizations of C. elegans neuronal structure data can provide unique opportunities for analysis of cell structure and function. With our software, we can parse large amounts of data in raw text files to create interactive visual representations of an array of neuronal vesicles from C. elegans. We first had to learn how to use Java3D in order to visualize the data, and then come up with ways to convert the data into images. Vesicles, being simple spheres, were fairly easy to create; however, creating the membrane hull proved to be a much larger challenge.

### December 10: Shreeraj Jadhav and Harsh Bhatia

**Speaker: Shreeraj Jadhav**

Title: Consistent Approximation of Local Flow Behavior for 2D Vector Fields using Edge Maps

Abstract: Vector fields, represented as vector values sampled on the vertices of a triangulation, are commonly used to model physical phenomena. To analyze and understand vector fields, practitioners use derived properties such as the paths of massless particles advected by the flow, called streamlines. However, computing streamlines requires numerical methods not guaranteed to preserve fundamental invariants such as the fact that streamlines cannot cross. The resulting inconsistencies can cause errors in the analysis, e.g. invalid topological skeletons, and thus lead to misinterpretations of the data. We propose an alternate representation for triangulated vector fields that exchanges vector values with an encoding of the transversal flow behavior of each triangle. We call this representation edge maps. This work focuses on the mathematical properties of edge maps. Edge maps allow for a multi-resolution approximation of flow by merging adjacent streamlines into an interval-based mapping. Consistency is enforced at any resolution if the merged sets maintain an order-preserving property. At the coarsest resolution, we define a notion of equivalency between edge maps, and show that there exist 23 equivalence classes describing all possible behaviors of piecewise linear flow within a triangle.
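
The order-preserving property mentioned above can be sketched with a tiny check (a hypothetical encoding, not the paper's implementation): parameterize a triangle's boundary in [0, 1), record where each sampled streamline enters and exits, and verify that exit points appear in the same order as entry points. Merging adjacent streamlines into interval maps stays consistent only when this holds:

```python
# Hypothetical sketch of the order-preserving check behind edge maps
# (not the paper's implementation). Each (entry, exit) pair records the
# boundary parameter in [0, 1) where a sampled streamline enters and
# leaves a triangle.

def order_preserving(pairs):
    """Return True if, sorted by entry point, exit points never cross."""
    exits = [exit_param for _, exit_param in sorted(pairs)]
    return all(a <= b for a, b in zip(exits, exits[1:]))

order_preserving([(0.10, 0.60), (0.20, 0.70), (0.30, 0.85)])  # no crossings
order_preserving([(0.10, 0.70), (0.20, 0.60)])                # a crossing
```

When the check passes, adjacent streamlines can be merged into a single interval-to-interval map without introducing crossings that the underlying flow forbids.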

**Speaker: Harsh Bhatia**

Title: Edge Maps: Representing Flow with Bounded Error (accepted in IEEE Pacific Visualization 2011)

Abstract: Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Many analysis techniques rely on computing streamlines, a task often hampered by numerical instabilities. Approaches that ignore the resulting errors can lead to inconsistencies that may produce unreliable visualizations and ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with linear maps defined on its boundary. This representation, called edge maps, is equivalent to computing all possible streamlines at a user defined error threshold. In spite of this error, all the streamlines computed using edge maps will be pairwise disjoint. Furthermore, our representation stores the error explicitly, and thus can be used to produce more informative visualizations. Given a piecewise-linear interpolated vector field, a recent result shows that there are only 23 possible map classes for a triangle, permitting a concise description of flow behaviors. This work describes the details of computing edge maps, provides techniques to quantify and refine edge map error, and gives qualitative and visual comparisons to more traditional techniques.