VisLunch/Spring2011/


Vis Lunch!

Where: Conference Room WEB 3760

When: Friday noon

This semester Paul Rosen and Kristi Potter will be responsible
for organizing the VisLunch sessions. Please feel free to contact them
with any questions regarding VisLunch or to schedule a talk:

Paul Rosen
prosen@sci.utah.edu

Kristi Potter
kpotter@sci.utah.edu

Information regarding the VisLunch sessions will be posted on this wiki page (http://www.vistrails.org/index.php/VisLunch/Spring2011)

In case you are unaware, VisLunch provides everyone at SCI with a platform to present their research and/or the latest developments in the community that could benefit the rest of us. In addition, the meeting is a great forum for giving practice talks and improving your presentation skills. Plus there's _free_ pizza, and it's a nice opportunity to meet new people. Please let either Paul or Kristi know if:

1.) You've submitted work to a research venue (e.g. recent conferences like Siggraph) and would like to share your ideas;

2.) You are preparing a submission to an upcoming venue (e.g. IEEE Vis, Siggraph Asia, etc.) and would like to get some feedback;

3.) Your work has been accepted to some venue and you are preparing a presentation you would like to practice; or

4.) You've recently read a new publication and are fascinated by the ideas and wish to share them with the rest of us.


Please consider volunteering to give a presentation at some point! We're hoping there will be enough presenters that we won't have to cancel any future sessions.


Sessions

Date         | Presenter              | Topic
January 28   | Kristi Potter          | State of the Art in Uncertainty Visualization
February 4   | Carson Brownlee        | Talking DIRTY (Distributed Interactive Ray Tracing and You)
February 11  | Bei Wang & Brian Summa | Global and Local Circular Coordinates and Their Applications
February 18  | Matt Berger            | An End-to-End Framework for Evaluating Surface Reconstruction
             | Harsh Bhatia           | Edge Maps: Representing Flow with Bounded Error
February 25  | Jeff Phillips          | Skylines and their Efficient Computation on (Approximate) Uncertain Data
March 4      |                        |
March 11     | Shreeraj Jadhav        | Consistent Approximation of Local Flow Behavior for 2D Vector Fields using Edge Maps
March 18     | Jacob Hinkle           | TBA
             | Blake Nelson           | TBA
March 25     | Spring Break           | NO VisLunch!
April 1      | Thiago Ize             | TBA
             | Tom Fogal              | TBA
April 8      | Erik Anderson          | A User Study of Visualization Effectiveness Using EEG and Cognitive Load
April 15     | Josh Levine            | TBA
April 22     | Prof. Nat Smale        | TBA

January 28: Uncertainty Visualization

Speaker: Kristi Potter

State of the Art in Uncertainty Visualization

The graphical depiction of uncertainty information is emerging as a problem of great importance in the field of visualization. Scientific data sets are not considered complete without indications of error, accuracy, or levels of confidence, and this information is often presented as charts and tables alongside visual representations of the data. Uncertainty measures are often excluded from explicit representation within data visualizations because the increased visual complexity incurred can cause clutter, obscure the data display, and may lead to erroneous conclusions or false predictions. However, uncertainty is an essential component of the data, and its display must be integrated in order for a visualization to be considered a true representation of the data. This talk will go over the current work on uncertainty visualization.
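
As a small illustration of what it means to integrate uncertainty directly into a data display (this sketch is not taken from the talk, and the data and variable names are made up), the following Python snippet overlays a shaded one-standard-deviation band on a mean curve with matplotlib:

 # Minimal sketch: show uncertainty as a shaded band around a mean curve.
 # The data here is synthetic; this only illustrates the general idea.
 import numpy as np
 import matplotlib.pyplot as plt

 x = np.linspace(0.0, 10.0, 200)
 samples = np.sin(x) + 0.2 * np.random.randn(50, x.size)  # 50 noisy realizations
 mean = samples.mean(axis=0)
 std = samples.std(axis=0)

 plt.plot(x, mean, label="mean")
 plt.fill_between(x, mean - std, mean + std, alpha=0.3, label="+/- 1 std dev")
 plt.xlabel("x")
 plt.ylabel("value")
 plt.legend()
 plt.title("Uncertainty shown inline with the data")
 plt.show()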

February 4: Talking DIRTY

Speaker: Carson Brownlee

Talking DIRTY (Distributed Interactive Ray Tracing and You)

I will talk about a sort-last interactive ray tracing implementation within ParaView/VisIt, as well as an OpenGL hijacking program called GLuRay. I will also go over a distributed shared memory paging scheme that Thiago (mostly) and I worked on. These are three different ways to tackle the same problem, DIRT, under different constraints.

February 11: Global and Local Circular Coordinates and Their Applications

Speakers: Bei Wang & Brian Summa

Global and Local Circular Coordinates and Their Applications

Given high-dimensional data, nonlinear dimensionality reduction algorithms typically assume that real-valued low-dimensional coordinates are sufficient to represent its intrinsic structure. The work by de Silva et al. has shown that global circle-valued coordinates enrich such representations by identifying significant circle-structure in the data, when its underlying space contains nontrivial topology. We build on this previous work and extend it by detecting significant relative circle-structure and constructing circular coordinates on a local neighborhood of a point. We develop a local version of the persistent cohomology machinery. We suggest that the local circular coordinates provide a detailed analysis of the local intrinsic structure and are beneficial for certain applications. We are interested in using both global and local circular coordinates on a broad range of real-world data.

Joint work with Brian Summa, Mikael Vejdemo-Johansson and Valerio Pascucci

February 18: Surface Reconstruction Evaluation & Edge Maps

Speaker: Matt Berger

An End-to-End Framework for Evaluating Surface Reconstruction

We present a benchmark for the evaluation and comparison of algorithms which reconstruct a surface from point cloud data. Although a substantial amount of effort has been dedicated to the problem of surface reconstruction, a comprehensive means of evaluating this class of algorithms is noticeably absent. We propose a simple pipeline for measuring surface reconstruction algorithms, consisting of three main phases: surface modeling, sampling, and evaluation. We employ implicit surfaces for modeling shapes which are expressive enough to contain details of varying size, in addition to preserving sharp features. From these implicit surfaces, we produce point clouds by synthetically generating range scans which resemble realistic scan data. We validate our synthetic sampling scheme by comparing against scan data produced via a commercial optical laser scanner, wherein we scan a 3D-printed version of the original implicit surface. Last, we perform evaluation by comparing the output reconstructed surface to a dense uniformly-distributed sampling of the implicit surface. We decompose our benchmark into two distinct sets of experiments. The first set of experiments measures reconstruction against point clouds of complex shapes sampled under a wide variety of conditions. Although these experiments are quite useful for the comparison of surface reconstruction algorithms, they lack a fine-grained analysis. Hence, to complement this, the second set of experiments is designed to measure specific properties of surface reconstruction, both from a sampling and surface modeling viewpoint. Together, these experiments depict a detailed examination of the state of surface reconstruction algorithms.
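
To make the evaluation phase a bit more concrete, here is a hedged sketch (not the benchmark's actual metric, and the point sets are placeholders) of one simple way to compare a reconstructed surface against a dense reference sampling: a two-sided mean nearest-neighbor distance computed with SciPy:

 # Sketch of the evaluation idea only: compare points sampled from a
 # reconstructed surface against a dense reference sampling of the
 # ground-truth implicit surface. Placeholder data, not the paper's metric.
 import numpy as np
 from scipy.spatial import cKDTree

 def mean_two_sided_distance(recon_pts, reference_pts):
     """Average nearest-neighbor distance, measured in both directions."""
     d_recon_to_ref = cKDTree(reference_pts).query(recon_pts)[0]
     d_ref_to_recon = cKDTree(recon_pts).query(reference_pts)[0]
     return 0.5 * (d_recon_to_ref.mean() + d_ref_to_recon.mean())

 # Example with random stand-in point sets (replace with real samplings).
 recon = np.random.rand(1000, 3)
 reference = np.random.rand(5000, 3)
 print("mean two-sided distance:", mean_two_sided_distance(recon, reference))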

Speaker: Harsh Bhatia

Edge Maps: Representing Flow with Bounded Error (Pacific Viz 2011 practice talk)

Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Many analysis techniques rely on computing streamlines, a task often hampered by numerical instabilities. Approaches that ignore the resulting errors can lead to inconsistencies that may produce unreliable visualizations and ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with linear maps defined on its boundary. This representation, called edge maps, is equivalent to computing all possible streamlines at a user-defined error threshold. In spite of this error, all the streamlines computed using edge maps will be pairwise disjoint. Furthermore, our representation stores the error explicitly, and thus can be used to produce more informative visualizations. Given a piecewise-linear interpolated vector field, a recent result [15] shows that there are only 23 possible map classes for a triangle, permitting a concise description of flow behaviors. This work describes the details of computing edge maps, provides techniques to quantify and refine edge map error, and gives qualitative and visual comparisons to more traditional techniques.

February 25: Skylines and their Efficient Computation on (Approximate) Uncertain Data

Speaker: Jeff Phillips

Skylines and their Efficient Computation on (Approximate) Uncertain Data

This talk will focus on two aspects of visualization. First, I will discuss the "skyline" data summary and its variants as a way to visualize the important elements of a large multi-dimensional dataset. Specifically, given a large data set where each data point has multiple attributes, the skyline retains all data points for which no other data point is better in *all* attributes. A common example is a set of hotels near the beach. For each hotel, a user wants a low price and a short distance to the beach. A hotel-booking website may want to display all hotel options for which there is no other hotel that is both closer to the beach and cheaper, as the user's choice will surely be among this limited set.
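
As a toy sketch of the skyline definition in the hotel example (illustration only; this is not one of the algorithms presented in the talk, and the hotel data is made up), lower price and lower distance to the beach are both "better", and a hotel survives if no other hotel beats it in both attributes:

 # Toy sketch of the skyline (Pareto set) from the hotel example.
 def skyline(hotels):
     """Return hotels not dominated by any other hotel in both attributes."""
     result = []
     for name, price, dist in hotels:
         dominated = any(
             (p <= price and d <= dist) and (p < price or d < dist)
             for _, p, d in hotels
         )
         if not dominated:
             result.append((name, price, dist))
     return result

 hotels = [("A", 120, 0.3), ("B", 90, 0.8), ("C", 150, 0.2), ("D", 95, 0.9)]
 print(skyline(hotels))  # A, B, and C survive; D is dominated by B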

Second, I will present a series of technical illustrations critical for conveying the details of complicated geometric algorithms. My coauthors and I put much thought, effort, and experience into creating clear and concise illustrations to help explain the simple ideas behind the technical specifications needed to prove and precisely describe our main results. So in the second part of the talk I will define and describe efficient algorithms for uncertain skylines and approximate uncertain skylines. Throughout, I will make an effort to comment on the design of the illustrations used to convey the algorithms.

Joint work with Peyman Afshani, Lars Arge, Pankaj Agarwal, and Kasper Green Larsen

March 4: TBA

Speaker:

March 11: Topo in Vis Practice Talk

Speaker: Shreeraj Jadhav

Consistent Approximation of Local Flow Behavior for 2D Vector Fields using Edge Maps

Vector fields, represented as vector values sampled on the vertices of a triangulation, are commonly used to model physical phenomena. To analyze and understand vector fields, practitioners use derived properties such as the paths of massless particles advected by the flow, called streamlines. However, currently available numerical methods for computing streamlines do not guarantee preservation of fundamental invariants such as the fact that streamlines cannot cross. The resulting inconsistencies can cause errors in the analysis, e.g. invalid topological skeletons, and thus lead to misinterpretations of the data. We propose an alternate representation for triangulated vector fields that exchanges vector values with an encoding of the transversal flow behavior of each triangle. We call this representation edge maps. This work focuses on the mathematical properties of edge maps; a companion paper discusses some of their applications [1]. Edge maps allow for a multi-resolution approximation of flow by merging adjacent streamlines into an interval-based mapping. Consistency is enforced at any resolution if the merged sets maintain an order-preserving property. At the coarsest resolution, we define a notion of equivalency between edge maps, and show that there exist 23 equivalence classes describing all possible behaviors of piecewise linear flow within a triangle.
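
As a toy illustration of the boundary-to-boundary idea behind edge maps (this is not the paper's construction; it only handles the degenerate case of a constant vector field in a single triangle, and all names are made up), the sketch below maps an entry point on a triangle's boundary to the exit point where the flow leaves the triangle:

 # Toy sketch: with a CONSTANT vector field in a triangle, the flow maps each
 # entry point on the boundary to a unique exit point on the boundary.
 # Edge maps generalize this kind of boundary-to-boundary mapping, with
 # bounded error, to piecewise-linear fields. Not the paper's construction.
 import numpy as np

 def exit_point(tri, p, v, eps=1e-9):
     """Follow the ray p + t*v (t > 0) to where it leaves the triangle."""
     best_t, best_q = None, None
     for i in range(3):
         a, b = tri[i], tri[(i + 1) % 3]
         e = b - a
         # Solve p + t*v = a + s*e for (t, s): a 2x2 linear system.
         m = np.array([[v[0], -e[0]], [v[1], -e[1]]])
         if abs(np.linalg.det(m)) < eps:
             continue  # ray is parallel to this edge
         t, s = np.linalg.solve(m, a - p)
         if t > eps and -eps <= s <= 1 + eps and (best_t is None or t < best_t):
             best_t, best_q = t, a + s * e
     return best_q

 tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
 entry = np.array([0.25, 0.0])    # a point on the bottom edge
 velocity = np.array([0.2, 1.0])  # constant flow direction
 print(exit_point(tri, entry, velocity))  # exits through the hypotenuse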

March 18: TBA

Speaker: Blake Nelson

Speaker: Jacob Hinkle

March 25: Spring Break!

No Vislunch

April 1: TBA

Speaker: Thiago Ize

Speaker: Tom Fogal

April 8: EEG Vis Evaluation

Speaker: Erik Anderson

A User Study of Visualization Effectiveness Using EEG and Cognitive Load

Abstract:

Effectively evaluating visualization techniques is a difficult task often assessed through feedback from user studies and expert evaluations. This work presents an alternative approach to visualization evaluation in which brain activity is passively recorded using electroencephalography (EEG). These measurements are used to compare different visualization techniques in terms of the burden they place on a viewer’s cognitive resources. In this paper, EEG signals and response times are recorded while users interpret different representations of data distributions. This information is processed to provide insight into the cognitive load imposed on the viewer. This paper describes the design of the user study performed, the extraction of cognitive load measures from EEG data, and how those measures are used to quantitatively evaluate the effectiveness of visualizations.


April 15: TBA

Speaker: Josh Levine

April 22: TBA

Speaker: Nat Smale