Where: Conference Room WEB 3760
When: Fridays at noon
This semester, Paul Rosen and Kristi Potter are responsible
for organizing the VisLunch sessions. Please feel free to contact them
with any questions regarding VisLunch or to schedule a talk:
Paul Rosen: firstname.lastname@example.org
Kristi Potter: email@example.com
Information regarding the VisLunch sessions will be posted on this wiki page: http://www.vistrails.org/index.php/VisLunch/Spring2011
For those unfamiliar, VisLunch gives everyone at SCI a platform to present their research and/or the latest developments in the community that could benefit the rest of us. The meeting is also a great forum for giving practice talks and improving your presentation skills. Plus, there's _free_ pizza, and it's a nice opportunity to meet new people. Please let either Paul or Kristi know if:
1.) You've submitted work to a research venue (e.g. recent conferences like Siggraph) and would like to share your ideas;
2.) You are preparing a submission to an upcoming venue (e.g. IEEE Vis, Siggraph Asia, etc.) and would like to get some feedback;
3.) Your work has been accepted to some venue and you are preparing a presentation you would like to practice; or
4.) You've recently read a new publication and are fascinated by the ideas and wish to share them with the rest of us.
Please consider volunteering to give a presentation at some point! We're hoping for enough presenters that we won't have to cancel any future weeks.
|Date||Speaker||Title|
|January 28||Kristi Potter||State of the Art in Uncertainty Visualization|
|February 4||Carson Brownlee||Talking DIRTY (Distributed Interactive Ray Tracing and You)|
|February 11||Bei Wang & Brian Summa||Global and Local Circular Coordinates and Their Applications|
|February 18||Matt Berger||An End-to-End Framework for Evaluating Surface Reconstruction|
|February 18||Harsh Bhatia||Edge Maps: Representing Flow with Bounded Error|
|February 25||Jeff Phillips||Skylines and their Efficient Computation on (Approximate) Uncertain Data|
|March 4||TBA||TBA|
|March 11||Shreeraj Jadhav||Consistent Approximation of Local Flow Behavior for 2D Vector Fields using Edge Maps|
|March 18||Jacob Hinkle||4D MAP Image Reconstruction|
|March 25||Spring Break||NO Vislunch!|
|April 1||Thiago Ize||RTSAH Traversal Order for Occlusion Rays|
|April 1||Tom Fogal||Efficient I/O for Parallel Visualization|
|April 8||Miriah Meyer||Visualizing Biological Data|
|April 15||Josh Levine||Interpreting Performance Data Across Intuitive Domains|
|April 22||Prof. Nat Smale||TBA|
January 28: Uncertainty Visualization
Speaker: Kristi Potter
State of the Art in Uncertainty Visualization
The graphical depiction of uncertainty information is emerging as a problem of great importance in the field of visualization. Scientific data sets are not considered complete without indications of error, accuracy, or levels of confidence, and this information is often presented as charts and tables alongside visual representations of the data. Uncertainty measures are often excluded from explicit representation within data visualizations because the added visual complexity can cause clutter, obscure the data display, and lead to erroneous conclusions or false predictions. However, uncertainty is an essential component of the data, and its display must be integrated in order for a visualization to be considered a true representation of the data. This talk will survey current work in uncertainty visualization.
February 4: Talking DIRTY
Speaker: Carson Brownlee
Talking DIRTY (Distributed Interactive Ray Tracing and You)
I will talk about a sort-last interactive ray tracing implementation within ParaView/VisIt, as well as an OpenGL-hijacking program called GLuRay. I will also go over a distributed shared-memory paging scheme that (mostly) Thiago and I worked on. These are three different ways to tackle the same problem, DIRT, under different constraints.
February 11: Global and Local Circular Coordinates and Their Applications
Speakers: Bei Wang & Brian Summa
Global and Local Circular Coordinates and Their Applications
Given high-dimensional data, nonlinear dimensionality reduction algorithms typically assume that real-valued low-dimensional coordinates suffice to represent its intrinsic structure. Work by de Silva et al. has shown that global circle-valued coordinates enrich such representations by identifying significant circle-structure in the data when its underlying space contains nontrivial topology. We build on this previous work and extend it by detecting significant relative circle-structure and constructing circular coordinates on a local neighborhood of a point. We develop a local version of the persistent cohomology machinery. We suggest that the local circular coordinates provide a detailed analysis of the local intrinsic structure and are beneficial for certain applications. We are interested in using both global and local circular coordinates on a broad range of real-world data.
Joint work with Brian Summa, Mikael Vejdemo-Johansson and Valerio Pascucci
February 18: Surface Reconstruction Evaluation & Edge Maps
Speaker: Matt Berger
An End-to-End Framework for Evaluating Surface Reconstruction
We present a benchmark for the evaluation and comparison of algorithms which reconstruct a surface from point cloud data. Although a substantial amount of effort has been dedicated to the problem of surface reconstruction, a comprehensive means of evaluating this class of algorithms is noticeably absent. We propose a simple pipeline for measuring surface reconstruction algorithms, consisting of three main phases: surface modeling, sampling, and evaluation. We employ implicit surfaces for modeling shapes which are expressive enough to contain details of varying size, in addition to preserving sharp features. From these implicit surfaces, we produce point clouds by synthetically generating range scans which resemble realistic scan data. We validate our synthetic sampling scheme by comparing against scan data produced via a commercial optical laser scanner, wherein we scan a 3D-printed version of the original implicit surface. Lastly, we perform evaluation by comparing the output reconstructed surface to a dense, uniformly distributed sampling of the implicit surface. We decompose our benchmark into two distinct sets of experiments. The first set of experiments measures reconstruction against point clouds of complex shapes sampled under a wide variety of conditions. Although these experiments are quite useful for the comparison of surface reconstruction algorithms, they lack a fine-grained analysis. Hence, to complement this, the second set of experiments is designed to measure specific properties of surface reconstruction, both from a sampling and surface modeling viewpoint. Together, these experiments depict a detailed examination of the state of surface reconstruction algorithms.
Speaker: Harsh Bhatia
Edge Maps: Representing Flow with Bounded Error (Pacific Viz 2011 practice talk)
Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Many analysis techniques rely on computing streamlines, a task often hampered by numerical instabilities. Approaches that ignore the resulting errors can lead to inconsistencies that may produce unreliable visualizations and ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with linear maps defined on its boundary. This representation, called edge maps, is equivalent to computing all possible streamlines at a user-defined error threshold. In spite of this error, all the streamlines computed using edge maps will be pairwise disjoint. Furthermore, our representation stores the error explicitly, and thus can be used to produce more informative visualizations. Given a piecewise-linear interpolated vector field, a recent result shows that there are only 23 possible map classes for a triangle, permitting a concise description of flow behaviors. This work describes the details of computing edge maps, provides techniques to quantify and refine edge map error, and gives qualitative and visual comparisons to more traditional techniques.
February 25: Skylines and their Efficient Computation on (Approximate) Uncertain Data
Speaker: Jeff Phillips
Skylines and their Efficient Computation on (Approximate) Uncertain Data
This talk will focus on two aspects of visualization. First, I will discuss the "skyline" data summary and its variants as a way to visualize the important elements of a large multi-dimensional dataset. Specifically, given a large data set where each data point has multiple attributes, the skyline retains all data points for which no other data point is better in *all* attributes. A common example is a set of hotels near the beach. For each hotel, a user wants a low price and a short distance to the beach. A hotel-booking website may want to display all hotel options for which there is no other hotel that is both closer to the beach and cheaper, as the user's choice will surely be among this limited set.
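The skyline definition above is easy to make concrete. Below is a minimal sketch of the naive quadratic skyline computation (not the efficient algorithms from the talk), assuming lower values are better in every attribute; the hotel data is hypothetical:

```python
def skyline(points):
    """Return the points not dominated by any other point.

    Following the definition above: a point is kept unless some other
    point is strictly better (here: strictly smaller) in *all* attributes.
    """
    def dominates(a, b):
        return all(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical hotels as (price_per_night, meters_to_beach)
hotels = [(120, 300), (95, 450), (200, 50), (150, 500), (90, 400)]
print(skyline(hotels))  # → [(120, 300), (200, 50), (90, 400)]
```

Note that (95, 450) is dropped because (90, 400) is both cheaper and closer; each surviving hotel wins on at least one attribute against every other hotel.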
Second, I will present a series of technical illustrations critical for conveying the details of complicated geometric algorithms. My coauthors and I put much thought, effort, and experience into creating clear and concise illustrations to help explain the simple ideas behind the technical specifications needed to prove and precisely describe our main results. So in the second part of the talk I will define and describe efficient algorithms for uncertain skylines and approximate uncertain skylines. Throughout, I will make an effort to comment on the design of the illustrations used to convey the algorithms.
Joint work with Peyman Afshani, Lars Arge, Pankaj Agarwal, and Kasper Green Larsen
March 4: TBA
March 11: TopoInVis Practice Talk
Speaker: Shreeraj Jadhav
Consistent Approximation of Local Flow Behavior for 2D Vector Fields using Edge Maps
Vector fields, represented as vector values sampled on the vertices of a triangulation, are commonly used to model physical phenomena. To analyze and understand vector fields, practitioners use derived properties such as the paths of massless particles advected by the flow, called streamlines. However, currently available numerical methods for computing streamlines do not guarantee preservation of fundamental invariants such as the fact that streamlines cannot cross. The resulting inconsistencies can cause errors in the analysis, e.g. invalid topological skeletons, and thus lead to misinterpretations of the data. We propose an alternate representation for triangulated vector fields that exchanges vector values with an encoding of the transversal flow behavior of each triangle. We call this representation edge maps. This work focuses on the mathematical properties of edge maps; a companion paper discusses some of their applications. Edge maps allow for a multi-resolution approximation of flow by merging adjacent streamlines into an interval-based mapping. Consistency is enforced at any resolution if the merged sets maintain an order-preserving property. At the coarsest resolution, we define a notion of equivalency between edge maps, and show that there exist 23 equivalence classes describing all possible behaviors of piecewise linear flow within a triangle.
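The numerical-consistency problem motivating edge maps shows up in even the simplest streamline tracer. The sketch below (a generic forward-Euler tracer, not the edge-map method) advects a particle through a rotational field whose exact streamlines are circles; the computed path drifts steadily outward, illustrating how integration-based approaches can violate invariants of the true flow:

```python
import math

def euler_streamline(v, p0, h, steps):
    """Trace a streamline with forward Euler: p_{k+1} = p_k + h * v(p_k)."""
    pts = [p0]
    x, y = p0
    for _ in range(steps):
        vx, vy = v(x, y)
        x, y = x + h * vx, y + h * vy
        pts.append((x, y))
    return pts

# Circular field: exact streamlines are circles about the origin,
# so the radius should stay constant along any true streamline.
v = lambda x, y: (-y, x)
path = euler_streamline(v, (1.0, 0.0), h=0.1, steps=100)
drift = math.hypot(*path[-1]) - math.hypot(*path[0])
print(drift)  # > 0: Euler spirals outward, violating the invariant
```

Each Euler step multiplies the radius by sqrt(1 + h^2), so the error grows without bound as the trace continues; edge maps instead bound and explicitly store such error.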
March 18: 4D MAP Image Reconstruction
Speaker: Jacob Hinkle
4D MAP Image Reconstruction
We have developed a maximum a posteriori (MAP) algorithm for tracking organ motion that uses raw time-stamped data to reconstruct the images and estimate deformations in anatomy simultaneously. Since the algorithm does not rely on a binning process, binning artifacts are avoided. Signal-to-noise ratio (SNR) is also increased since the algorithm uses all of the collected data. The method is general and can be applied to data from a number of modalities including fanbeam or conebeam CT, MRI, and PET. In the case of CT, the increased SNR provides the opportunity to reduce dose to the patient during scanning. This framework also facilitates the incorporation of fundamental physical properties such as the conservation of local tissue volume during the estimation of organ motion. In this talk I'll give an overview of the method and show some of our initial results. I'll also try to point out some possible vis applications that I think could be useful in the context of radiotherapy treatment planning.
March 25: Spring Break!
April 1: Thiago and Tom
Speaker: Thiago Ize
RTSAH Traversal Order for Occlusion Rays
We accelerate the finding of occluders in tree-based acceleration structures, such as a packetized BVH and a single-ray kd-tree, by deriving the ray termination surface area heuristic (RTSAH) cost model for traversing an occlusion ray through a tree, and then using the RTSAH to determine which child node a ray should traverse first, instead of the traditional choice of traversing the near node before the far node. We further extend RTSAH to handle materials that attenuate light instead of fully occluding it, so that we can avoid superfluous intersections with partially transparent objects. For scenes with high occlusion, we substantially lower the number of traversal steps and intersection tests and achieve up to 2× speedups.
Speaker: Tom Fogal
Efficient I/O for Parallel Visualization
While additional cores and newer architectures, such as those provided by GPU clusters, steadily increase available compute power, memory and disk access have not kept pace. It is therefore of critical importance that we develop algorithms which make effective use of off-processor storage, and communicate how to effectively utilize parallel filesystems to developers of the growing body of software targeted at parallel supercomputing environments. With this work, we outline a series of popular pitfalls observed in code written from a serial mindset, expound on the reasons these practices lead to poor performance, and present a series of results which highlight the characteristics of modern supercomputing I/O systems.
April 8: Miriah Meyer
Speaker: Miriah Meyer
Visualizing Biological Data
Visualization tools are essential for deriving meaning from the avalanche of data we are generating today. To facilitate an understanding of the complex relationships embedded in this data, visualization research leverages the power of the human perceptual and cognitive systems, encoding meaning through images and enabling exploration through human-computer interactions. In my research I design visualization systems that support exploratory, complex data analysis tasks by scientists who are analyzing large amounts of heterogeneous data. These systems allow users to validate their computational models, to understand their underlying data in detail, and to develop new hypotheses and insights. My research process includes five distinct stages, from targeting a specific group of domain experts and their scientific goals through validating the efficacy of the visualization system. In this talk I'll describe a user-centered, methodological approach to designing and developing visualization tools and present several successful visualization projects in the areas of genomics and systems biology. I will also discuss generalizations that arise from working on focused, visualization projects as well as long term implications for the field.
April 15: Josh Levine
Speaker: Josh Levine
Interpreting Performance Data Across Intuitive Domains
To exploit the capabilities of current and future systems, developers must understand the interplay between on-node performance, domain decomposition, and applications' intrinsic communication patterns. While tools exist to gather and analyze data for each of these components individually, the resulting information is generally processed in isolation and presented in an abstract, categorical fashion unintuitive to most users. In this work we present the HAC model, in which we identify the three domains of performance data most familiar to the user: (i) the physical domain of the application data, (ii) the hardware domain of the compute and network devices, and (iii) the communication domain of logical data transfers.
We show that taking data from each of these domains and projecting, visualizing, and correlating it to the other domains can give valuable insights into the behavior of parallel application codes. This work opens the door for a new generation of tools that can help users more easily and intuitively associate performance data with root causes in the hardware system, the application's structure, and in its communication behavior, and by doing so leads to an improved understanding of the performance of their codes. Case studies I will discuss include performance characteristics for Miranda hydrodynamics simulations, algebraic multigrid (AMG), atomistic simulations using QBox, and large-scale laser-plasma interaction using pF3D.
April 22: TBA
Speaker: Nat Smale