This semester Guoning Chen and Josh Levine will be responsible <br/>
for organizing the VisLunch sessions. Please feel free to contact them <br/>
with any questions regarding VisLunch or for scheduling a talk:

Information regarding the VisLunch sessions will be posted on this wiki page (http://www.vistrails.org/index.php/VisLunch/Spring2010).


== Open Discussion and Semester Planning ==
VisLunch is back for this semester and will be organized by Guoning
Chen and Josh Levine. In case you are unaware, VisLunch gives everyone at SCI
a platform to present their research work and/or the latest
developments in the community that could benefit the rest of us. In
addition, the meeting is a great forum for giving practice talks and
improving your presentation skills. Plus there's ''free'' pizza, and it's
a nice opportunity to meet new people. Please let either Josh or
Guoning know if:


1.) You've submitted work to a research venue (e.g. recent conferences like Siggraph) and would like to share your ideas;

2.) You are preparing a submission to an upcoming venue (e.g. IEEE Vis, Siggraph Asia, etc.) and would like to get some feedback;

3.) Your work has been accepted at some venue and you are preparing a presentation you would like to practice; or

4.) You've recently read a new publication and are fascinated by the ideas and wish to share them with the rest of us.

Please consider volunteering to give a presentation at some point! We're hoping that there will be enough presenters so that we don't have to cancel any future weeks.
----
== June 25, 2010 ==
- '''Shape Analysis and Understanding'''
In the first part of the talk, I will present my doctoral research on
3D shape description for retrieval purposes. I will describe the use
of a Reeb-graph based shape descriptor and the Fiedler vector of its
Laplacian matrix for shape matching.
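For readers unfamiliar with this spectral machinery, here is a minimal numpy sketch of the ingredient named above: the Fiedler vector (the eigenvector of the second-smallest eigenvalue of a graph Laplacian). The tiny path graph stands in for an extracted Reeb graph; the speaker's actual descriptor is not reproduced here.

<pre>
# Minimal sketch: the Fiedler vector of a graph Laplacian L = D - A.
# The 5-node path graph below is only a stand-in for a Reeb graph.
import numpy as np

def fiedler_vector(adjacency):
    """Return (lambda_2, v_2) of the unnormalized graph Laplacian."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    return eigvals[1], eigvecs[:, 1]               # skip the constant eigenvector

A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0               # path graph edges

lam2, v2 = fiedler_vector(A)
print("algebraic connectivity:", lam2)
print("Fiedler vector:", v2)   # orders nodes along the graph's dominant axis
</pre>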
In the second part of the talk I will present an ongoing research
project which aims at understanding the genetic basis of root traits
of crop plants. This biological application is an instance of the open
research problem of bridging the shape of an object with its
semantics. Known genome sequences of some crops, like rice or corn,
represent the functional meaning of the shape, i.e., its semantics. On
the other hand, a description of the shape of roots is lacking because
the natural growth media are non-transparent, which makes the shape of
roots difficult to acquire and analyze. The early acquisition process
produces noisy data with complex spatial structure. Within this
framework, we are interested in the analysis of time-series 3D data
and, more precisely, in the computation of global, local, and dynamic
shape descriptors of roots. As a first step in this direction, we
extract a curve skeleton representing root structure which is robust
to noise. Using this skeleton we are able to compute various local and
global shape descriptors, and move towards analysis of time-series
data.
In the conclusion of the talk, I will discuss some applications of
shape analysis and description that inspire my future work,
such as: robust methods for extraction of complex data in medical and
biological applications; definition of meaningful shape features;
analysis and similarity estimation of large physical networks; and
data-driven shape morphing.
- ''Speaker:'' Olga Symonova (Georgia Tech), http://ecotheory.biology.gatech.edu/olga/main.htm
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (06/25)
== June 11, 2010 ==
- '''Advances in Architectural Geometry – Research and Practice'''
The emergence of freeform structures in contemporary architecture raises numerous challenging research problems, most of which are related to the actual fabrication and are a rich source of research topics in geometry and geometric computing. The talk will provide an overview of recent progress in this field, with a particular focus on projects which illustrate the transfer of research into architectural practice.
- ''Speaker:'' Helmut Pottmann (KAUST), Director of the Geometric Modeling and Scientific Visualization Research Center and Professor of Applied Mathematics and Computational Science in the Mathematical and Computer Sciences and Engineering Division
http://www.kaust.edu.sa/academics/faculty/pottmann.html
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (06/11)
== June 4, 2010 ==
- '''Stratification Learning Through Homology Inference'''
A basic problem in geometry, topology, and statistical inference that has
received recent attention is that of manifold learning: given a point cloud
of data sampled from a manifold in an ambient space R^k, infer the
underlying manifold. A clear limitation of this formulation is that the
object may not be a manifold but a stratified space, a space that can be
partitioned into strata, each of which is a manifold. In this work, we
study the following problem: given a point cloud sampled from a stratified
space, which points belong to the same stratum? This inference problem is
examined from three perspectives: a topological inference statement, a
geometric inference statement, and a probabilistic inference statement. The
approach we describe holds for Whitney stratified spaces.
- ''Speaker:'' Bei Wang (Duke), http://www.cs.duke.edu/~beiwang/
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (06/04)
== May 21, 2010 ==
- '''Fast Volumetric Data Exploration with Importance-Based Accumulated Transparency Modulation'''
Direct volume rendering techniques have been successfully applied to visualizing volumetric datasets across many application domains. However, due to the sensitivity of transfer functions and the complexity of fine-tuning them, direct volume rendering is still not widely used in practice. For fast volumetric data exploration, we propose Importance-Based Accumulated Transparency Modulation, which does not rely on transfer function manipulation. This novel rendering algorithm is a generalization and extension of the Maximum Intensity Difference Accumulation technique. Because only the accumulated transparency is modified, the resulting volume renderings are essentially high dynamic range. We show that by using several common importance measures, different features of the volumetric datasets can be highlighted. The results can easily be extended to a high-dimensional importance difference space by mixing the results from an arbitrary number of importance measures with weighting factors, all of which control the final output monotonically. With Importance-Based Accumulated Transparency Modulation, the end user can explore a wide variety of volumetric datasets quickly without the burden of manually setting and adjusting a transfer function.
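As background on where such a modulation enters the rendering, below is a hedged, minimal sketch of front-to-back compositing along a single ray in which the per-sample opacity is driven by a generic importance measure rather than a transfer function. The specific modulation used in the talk is the authors' own; the data and importance measure here are illustrative.

<pre>
# Sketch: front-to-back compositing along one ray, with opacity derived
# from an importance measure instead of a hand-tuned transfer function.
import numpy as np

def composite_ray(samples, importance, base_alpha=0.05):
    color_acc, transparency_acc = 0.0, 1.0        # scalar "color" for brevity
    for value, w in zip(samples, importance):
        alpha = np.clip(base_alpha * w, 0.0, 1.0) # importance modulates opacity
        color_acc += transparency_acc * alpha * value
        transparency_acc *= (1.0 - alpha)         # accumulated transparency
    return color_acc

ray = np.abs(np.sin(np.linspace(0, 6, 200)))      # toy scalar samples
importance = np.abs(np.gradient(ray)) * 50.0      # simple importance measure
print("composited value:", composite_ray(ray, importance))
</pre>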
- ''Speaker:'' Yong Wan (SCI), http://www.sci.utah.edu/people/wanyong.html
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (05/21)
== May 14, 2010 ==
- '''Applications, Data, and the Future of Storage in Computational Science'''
Computational science continues to play an important role in the scientific discovery process. At the largest scales, efficient data storage, access, and analysis have become central concerns. This talk will focus on storage systems for computational science at extreme scale. First we'll discuss what kinds of applications are running at this scale and what their data requirements look like at a high level. Next we'll describe the traditional model for storage systems, why this model is in use, and what the community is doing to ensure continued success of storage systems using this model. Then we'll have a quick look at what is happening in storage architectures, from a hardware perspective, and how this might impact computational science deployments. Finally, we observe that analysis is becoming a more important part of the data picture, and we'll examine how improvements or augmentations in the storage system might aid in more effective large-scale data analysis.
- ''Speaker:'' Rob Ross (ANL), http://www.mcs.anl.gov/~rross/
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (05/14)
== May 7, 2010 ==
- '''Computing Handle and Tunnel Loops on Surfaces'''
Meaningful non-trivial loops in surfaces are important topological
features of shapes. Many graphics applications, such as surface
parameterization, feature recognition, topology repair, and model editing,
rely on such loops to capture the topologies of surfaces. It is known
that for a closed surface $M$ of genus $g$ embedded in three dimensions,
there are $2g$ non-trivial loops that form a basis of the first homology
group of $M$. In this work, we mathematically define a special class of
loops called handle and tunnel loops in terms of the first homology groups
of $M$ and $M$'s embedding in $R^3$. We then propose two
algorithms to compute them that are useful in practice.
Our first method works on a class of closed surfaces that retract to their
skeleton graphs. This algorithm first computes the curve skeletons of a
closed surface $M$. The interior and exterior of $M$ retract to the inside
and outside skeletons, respectively. The handle and tunnel loops can then
be characterized through knot linking with these skeletons: the handle
loops link the inside skeleton, while the tunnel loops link the outside
skeleton. This characterization leads to an algorithm that generates loops
minimally linked with the curve skeletons.
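The linking test above boils down to computing a linking number. The sketch below numerically approximates the Gauss linking integral for two closed polylines; a Hopf link should give roughly &plusmn;1. It is an illustrative numeric check, not the authors' algorithm.

<pre>
# Midpoint-rule approximation of the Gauss linking integral
# L = (1/4pi) * sum_ij ((dr_a x dr_b) . (r_a - r_b)) / |r_a - r_b|^3
import numpy as np

def linking_number(a, b):
    da = np.roll(a, -1, axis=0) - a            # segment vectors of curve a
    db = np.roll(b, -1, axis=0) - b            # segment vectors of curve b
    ma = a + 0.5 * da                          # segment midpoints
    mb = b + 0.5 * db
    total = 0.0
    for p, dp in zip(ma, da):
        r = p - mb                             # r_a - r_b for all b-midpoints
        cross = np.cross(dp, db)               # dr_a x dr_b
        total += np.sum(np.sum(cross * r, axis=1)
                        / np.linalg.norm(r, axis=1) ** 3)
    return total / (4.0 * np.pi)

# Hopf link: unit circle in the xy-plane, unit circle in the xz-plane
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle_xy = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
circle_xz = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
print("linking number ~", linking_number(circle_xy, circle_xz))  # ~ +/-1
</pre>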
Our second loop computation algorithm is based on persistent
homology. The concept of persistent homology fits the definitions of
handle and tunnel loops naturally, so these loops can be computed
directly with a persistence algorithm. Since loop quality is affected
by the choice of the filtration of complexes used in the persistence
algorithm, we incorporate geometry via a geodesic measure to design a
filtration that results in geometry-aware loops of small size. Compared
with previous algorithms, this method works combinatorially and does not
require any extra data structures. Moreover, it is time efficient and can
be applied to a larger class of surfaces, including isosurfaces of volume
data, surfaces that are 'knotted', surfaces with boundaries, and even
non-manifolds.
- ''Speaker:'' Kuiyu Li (Ohio State University), http://www.cse.ohio-state.edu/~liku/
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (05/07)
== Apr. 16, 2010 ==
- '''Automatically Synthesizing Impressionistic Oil Paintings'''
In the era of large datasets, many visualization methods either summarize or emphasize aspects of otherwise incomprehensibly large datasets. In a sense, techniques like clustering and dimensionality reduction sacrifice accuracy for meaning. Likewise, impressionist painters developed art in reaction to the photograph, which they considered more accurate than the realistic paintings of the day. Instead of following realist traditions, they tried to craft images which evoke meanings without representing minute details. Their works reflect a rigorous inquiry into human visual perception through which they learned to discern omissible image features from those of cognitive significance. In a sense, impressionist methods distill complex imagery into image summaries which emphasize the aspects of a subject deemed most important by the artist.
This talk discusses the relationship between Impressionism and scientific visualization, compares various non-photorealistic rendering systems which synthesize paintings, sheds light on the technical challenges involved in creating such systems, contrasts the advantages and disadvantages of human and automatic painting abilities, and discusses some applications of impressionistic painting synthesis.
- ''Speaker:'' Clifton Brooks (SCI, graduate student), http://www.sci.utah.edu/people/cbrooks.html
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (04/16)
== Apr. 9, 2010 ==
- '''A New Perspective on Perspective: Improving 3-D Scene Sampling through Camera Model Design'''
Images are pervasive throughout our lives and are the central focus of computer graphics and visualization.  Most images are generated from complex 3-D data using the planar pinhole camera model (also known as the perspective projection).  This classic camera model has important advantages, including its simplicity, enabling efficient hardware and software implementations, and its similarity to human vision, yielding images familiar to users.  The planar pinhole camera model, however, suffers from important limitations including sampling from a single viewpoint and requiring a uniform sampling rate along the image plane.  These limitations result in problems with occlusions when no direct line-of-sight exists to the viewpoint and sampling rates which do not correlate well to the complexity of 3-D data.
We have proposed a new paradigm of problem solving, dubbed Camera Model Design, which overcomes the limitations of the planar pinhole camera model to address many problems which still exist in computer graphics and visualization.  The Camera Model Design paradigm stresses three important ideas. First, relax the constraints of the planar pinhole camera model allowing generalized camera rays which are no longer straight and no longer converge.  This facilitates camera models that overcome occlusions and have variable sampling rates.  Second, camera models should no longer be static.  Instead they should dynamically adapt to the 3-D data they are sampling.  Third, in order to support interactive exploration, a high level of computational efficiency should be maintained.
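To make the first idea concrete, here is a small illustrative sketch in which a camera is simply a function from pixel coordinates to a ray. The pinhole model fixes a single ray origin, while a non-pinhole model varies both origin and direction per pixel; a pushbroom-style camera is used below purely as an example, not as one of the speaker's designed models.

<pre>
# Sketch: a camera as a function (u, v) -> (ray origin, ray direction).
import numpy as np

def pinhole_ray(u, v, eye=np.zeros(3), focal=1.0):
    # all rays converge at one center of projection
    d = np.array([u, v, -focal])
    return eye, d / np.linalg.norm(d)

def pushbroom_ray(u, v, focal=1.0):
    # illustrative non-pinhole model: the viewpoint slides with the image
    # column, so there is no single center of projection
    eye = np.array([u, 0.0, 0.0])
    d = np.array([0.0, v, -focal])
    return eye, d / np.linalg.norm(d)

for cam in (pinhole_ray, pushbroom_ray):
    o, d = cam(0.3, -0.2)
    print(cam.__name__, "origin", o, "direction", d)
</pre>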
In my talk I will give an overview of some of the camera models we have developed and their applications.  In addition, I will preview some of our ongoing work and discuss future directions.
- ''Speaker:'' Paul Rosen (Purdue Univ.), http://www.cspaul.com/wiki/doku.php
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (04/09)
== Apr. 2, 2010 ==
- '''How Seg3D is helping us to build an anatomical atlas of the heart, and the joy of adding a new Spline Tool to Seg3D for valve annuli segmentation'''
As part of the research in computational models of cardiac physiology at the departments of Computing, Physiology and Cardiovascular Medicine at Oxford, we are working on an anatomical atlas of mammalian hearts.
Using high-resolution 3D magnetic resonance imaging of ex-vivo rat hearts, we are interested in 1) finding a geometric model to represent the fundamental structure of the ventricles, and 2) studying the statistical variability of elements such as trabeculae, papillary muscles, valves, etc.
In order to build the model, we first need to segment the cardiac tissue and ventricular cavities from the images.
The open source application Seg3D, developed at the Center for Integrative Biomedical Computing at the University of Utah, has proven very useful for solving the segmentation problems we have encountered.
We have also extended the functionality provided by Seg3D with a new tool that enables a human expert to segment ring-like structures in 3D space, such as cardiac valve annuli.
In this talk we'll present our ongoing work on cardiac atlas models, as well as the difficulties and advantages we found using Seg3D.
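As a rough illustration of what such a tool computes (this is a generic SciPy sketch, not Seg3D's implementation), one can fit a closed periodic spline through expert-placed 3D landmarks around a ring-like structure:

<pre>
# Sketch: closed (periodic) spline through landmarks on a ring-like structure.
import numpy as np
from scipy.interpolate import splprep, splev

# hypothetical landmarks roughly on a tilted, noisy ring
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
pts = np.stack([np.cos(t), np.sin(t), 0.2 * np.sin(2 * t)])
pts += 0.02 * rng.standard_normal(pts.shape)
pts = np.concatenate([pts, pts[:, :1]], axis=1)  # repeat first point to close

tck, _ = splprep(pts, s=0.0, per=True)           # periodic => curve closes
u = np.linspace(0.0, 1.0, 200)
x, y, z = splev(u, tck)                          # dense closed annulus curve
print("closed:", np.allclose([x[0], y[0], z[0]],
                             [x[-1], y[-1], z[-1]], atol=1e-8))
</pre>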
- ''Speaker:'' Ramón Casero Cañas  (University of Oxford), http://web.comlab.ox.ac.uk/people/Ramon.CaseroCanas/
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (04/02)
== Mar. 26, 2010 ==
Spring Break
== Mar. 19, 2010 ==
- '''Topology Verification for Isosurface Extraction'''
Visual representations of isosurfaces are ubiquitous in the scientific
and engineering literature. In this talk, we present
techniques to assess the behavior of topological properties of isosurfacing codes.
These techniques allow us to distinguish whether
anomalies in isosurface features can be attributed to the underlying physical
process or to artifacts of the extraction process.
Such scientific scrutiny is at the heart of verifiable visualization:
subjecting visualization algorithms to the same verification process
that is used in other components of the scientific pipeline.
The technique is practical: it exposes actual problems in implementations.
Armed with the results of the verification process, practitioners can
judiciously select the isosurface extraction technique appropriate for their
problem of interest, and have confidence in its behavior.
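One example of the kind of topological property that can be checked, sketched under the assumption of a closed orientable triangle mesh: the Euler characteristic V - E + F, which must equal 2 - 2g for a genus-g surface. The toy mesh below is illustrative, not from the talk.

<pre>
# Sketch: Euler characteristic of a triangle mesh as a topology sanity check.
import numpy as np

def euler_characteristic(triangles):
    tris = np.asarray(triangles)
    vertices = np.unique(tris)
    edges = {tuple(sorted((t[i], t[(i + 1) % 3])))
             for t in tris for i in range(3)}
    return len(vertices) - len(edges) + len(tris)   # V - E + F

# a tetrahedron is a closed genus-0 surface, so chi should be 2
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print("chi =", euler_characteristic(tet))           # expect 2
</pre>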
- ''Speaker:'' Tiago Etiene Queiroz  (SCI), http://www.sci.utah.edu/people/etiene.html
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (03/19)
== Mar. 12, 2010 ==
- '''Data-Intensive Scientific Visualization in the Cloud: Challenges and Opportunities'''
Large-scale scientific visualization systems are historically designed for "throwing datasets"
at the graphics pipeline: pushing pre-conditioned data through it as quickly as possible. However,
increasingly, scalable data manipulation, restructuring, and querying (tasks for which the
data management community has provided excellent tools) are considered integral parts of
exploratory visualization. We observe that the visualization community tends to support these
tasks only through ad hoc extensions to existing visualization systems. We advocate a different
approach: implement and evaluate a core set of visualization algorithms in a high-level, shared-nothing
parallel dataflow system. Analysis of such algorithms can be used to inform requirements for a
system that bridges the gap between scalable visualization and scalable data analysis. Given its growth
and success in industry, we use the MapReduce model to perform this analysis on a representative
suite of scientific visualization tasks: isosurface extraction, mesh simplification, and rendering.
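A toy sketch of the advocated style, with all names and data illustrative: one isosurfacing step (classifying active cells against an isovalue) expressed as a map over data partitions followed by a reduce. A real pipeline would emit triangles rather than counts.

<pre>
# Sketch: a MapReduce-shaped isosurfacing step over data partitions.
from functools import reduce

def map_partition(cells, isovalue):
    # emit 1 for every cell whose value range straddles the isovalue
    return [1 for lo, hi in cells if lo <= isovalue <= hi]

def reduce_counts(a, b):
    return a + b

# two partitions of (min, max) value ranges per grid cell
partitions = [[(0.0, 0.4), (0.3, 0.9)], [(0.5, 1.2), (0.9, 2.0)]]
mapped = [sum(map_partition(p, isovalue=1.0)) for p in partitions]
print("active cells:", reduce(reduce_counts, mapped))   # expect 2
</pre>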
- ''Speaker:'' Jonathan Bronson  (SCI), http://www.sci.utah.edu/people/bronson.html
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (03/12)
== Mar. 5, 2010 ==
- '''Fiedler Trees for Multiscale Surface Analysis'''
In this work we introduce a new hierarchical surface decomposition method for multiscale analysis of surface meshes. In contrast to other multiresolution methods, our approach relies on spectral properties of the surface to build a binary hierarchical decomposition. Namely, we utilize the Fiedler vector of the Laplace-Beltrami operator to recursively decompose the surface. For this reason, we coin our surface decomposition the Fiedler tree. Using the Fiedler tree ensures a number of attractive properties, including: mesh-independent decomposition, well-formed and equi-areal surface patches, and noise robustness. We illustrate how the hierarchical patch decomposition may be exploited to generate high-quality multiresolution uniform and adaptive meshes, and how it provides a natural means for carrying out wavelet methods.
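A minimal sketch of the recursive rule described above, assuming a plain graph Laplacian in place of the discrete Laplace-Beltrami operator: split the node set by the sign of the Fiedler vector and recurse on each side. A real implementation would use sparse eigensolvers on mesh connectivity; the path graph below is only a stand-in.

<pre>
# Sketch: recursive binary decomposition by Fiedler-vector sign.
import numpy as np

def fiedler_split(nodes, adjacency):
    sub = adjacency[np.ix_(nodes, nodes)]
    laplacian = np.diag(sub.sum(axis=1)) - sub
    _, vecs = np.linalg.eigh(laplacian)
    fiedler = vecs[:, 1]
    left = [n for n, f in zip(nodes, fiedler) if f < 0]
    right = [n for n, f in zip(nodes, fiedler) if f >= 0]
    return left, right

def fiedler_tree(nodes, adjacency, min_size=2):
    if len(nodes) <= min_size:
        return nodes
    left, right = fiedler_split(nodes, adjacency)
    if not left or not right:           # degenerate split: stop recursing
        return nodes
    return [fiedler_tree(left, adjacency, min_size),
            fiedler_tree(right, adjacency, min_size)]

# 6-node path graph as a stand-in for mesh connectivity
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
print(fiedler_tree(list(range(6)), A))
</pre>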
- ''Speaker:'' Matt Berger  (SCI), http://www.sci.utah.edu/people/bergerm.html
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (03/05)
== Feb. 26, 2010 ==
- '''Physically-Based Interactive Schlieren Flow Visualization ''' (Pacific Vis 2010 Practice talk)
Understanding fluid flow is a difficult problem and one of increasing importance
as computational fluid dynamics produces an abundance of simulation data.
Experimental flow analysis has for centuries employed techniques such as shadowgraph
and schlieren imaging, which allow empirical observation of inhomogeneous
flows. Shadowgraphs provide an intuitive way of looking at small changes in
flow dynamics through caustic effects, while schlieren cutoffs introduce
an intensity gradation for observing large-scale directional changes in the
flow. The combination of these shading effects provides an informative global
analysis of overall fluid flow. Until recently, computational solutions for these
methods had proven too complex due to the fundamental physical interaction
of light refracting through the flow field. In this paper, we introduce a novel
method that simulates the refraction of light to generate synthetic shadowgraphs
and schlieren images of time-varying scalar fields derived from computational
fluid dynamics (CFD) data. Our method computes physically accurate schlieren
and shadowgraph images at interactive rates by utilizing a combination of GPGPU
programming, acceleration methods, and data-dependent probabilistic schlieren
cutoffs. Results comparing this method to previous schlieren approximations
are presented.
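A hedged 2D sketch of the physical core (the index field and constants below are illustrative only): under a paraxial approximation, a ray deflects along the gradient of the refractive index, and a schlieren knife-edge cutoff maps the exit deflection to intensity.

<pre>
# Sketch: ray bending through a refractive-index gradient, plus a knife edge.
import numpy as np

def trace_ray(y0, grad_n_y, x_max=4.0, steps=400):
    x, y, vy = 0.0, y0, 0.0
    ds = x_max / steps
    for _ in range(steps):
        vy += grad_n_y(x, y) * ds       # bend toward higher refractive index
        y += vy * ds
        x += ds
    return vy                           # exit deflection

def grad_n_y(x, y, eps=0.05):
    # d n / d y for an illustrative Gaussian density bump centered at (2, 0)
    return eps * (-2.0 * y) * np.exp(-((x - 2.0) ** 2 + y ** 2))

for y0 in (-1.0, 0.0, 1.0):
    deflection = trace_ray(y0, grad_n_y)
    intensity = max(0.0, min(1.0, 0.5 + 40.0 * deflection))  # knife-edge cutoff
    print(f"y0={y0:+.1f}  deflection={deflection:+.5f}  intensity={intensity:.2f}")
</pre>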
- ''Speaker:'' Carson Brownlee  (SCI), http://www.sci.utah.edu/people/brownlee.html
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (02/26)
== Feb. 19, 2010 ==
- '''Visualizing Statistics for Uncertain Data, with Guarantees'''
We consider the problem of visualizing statistics on uncertain data.  In particular, we assume we are given a data set where each data element has a probability distribution describing its uncertainty.  This data arises in robotics, computational structural biology, biosurveillance, and many other important areas. 
Given a query statistic on this uncertain data, we argue that the answer to the query should itself be represented as a probability distribution.  The talk will focus on creating and visualizing distributions for increasingly complicated types of queries: (a) univariate statistics, (b) multivariate statistics, and (c) shape inclusion probabilities (SIPs), which measure the probability that a query point is within a shape summarizing the data. 
The algorithms to create and visualize these structures are simple and practical; furthermore, we can prove guarantees on their accuracy. 
We will conclude with open problems, glimpses at ongoing work, and opportunities for collaboration. 
(joint work w/ Maarten Loffler)
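As an illustration of the quantity in (c), here is a plain Monte Carlo sketch of a shape inclusion probability, using an axis-aligned bounding box as the summarizing shape for simplicity. The paper's shapes, algorithms, and accuracy guarantees are the authors'; this only shows what is being estimated.

<pre>
# Sketch: Monte Carlo estimate of a shape inclusion probability (SIP).
import numpy as np

rng = np.random.default_rng(0)
means = rng.uniform(-1.0, 1.0, size=(20, 2))   # 20 uncertain 2D points
sigma = 0.1                                    # Gaussian uncertainty per point

def sip(query, trials=2000):
    hits = 0
    for _ in range(trials):
        sample = means + sigma * rng.standard_normal(means.shape)
        lo, hi = sample.min(axis=0), sample.max(axis=0)  # summarizing shape
        hits += np.all((lo <= query) & (query <= hi))
    return hits / trials

print("SIP at origin:", sip(np.zeros(2)))            # ~1 for a central query
print("SIP at (2, 2):", sip(np.array([2.0, 2.0])))   # ~0 far outside
</pre>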
- ''Speaker:'' Jeff Phillips  (CS), http://www.cs.utah.edu/~jeffp/
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (02/19)
== Feb. 12, 2010 ==
- '''Applying Manifold Learning to Plotting Approximate Contour Trees''' (VIS paper discussion)
- ''Speaker:'' Hao Wang (SCI), http://www.cs.utah.edu/~haow/
- '''Mapping Text with Phrase Nets''' (InfoVis paper discussion)
- ''Speaker:'' Claurissa Tuttle (SCI), http://www.sci.utah.edu/people/tuttle.html
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (02/12)
== Feb. 5, 2010 ==
- '''Distributed visualization using high-speed networks'''
I will talk about methods of designing a distributed visualization application
that takes advantage of high-speed networks and distributed resources to improve scalability, performance, and capabilities. I will describe how, through distribution,
a visualization application can be improved to interactively visualize tens of
gigabytes of data and handle large datasets while maintaining high quality.
The application supports interactive frame rates, high resolution, and collaborative visualization, and sustains remote I/O bandwidths of several Gbps.
I will also describe my research in remote data access systems, motivated by the distributed visualization application. Because wide-area networks may have high latency, the remote I/O system uses an architecture that effectively hides latency.
Five remote data access architectures are briefly analyzed, and the results show
that an architecture combining bulk and pipeline processing is the best
solution for high-throughput remote data access. The resulting system, which also
supports high-speed transport protocols and configurable remote operations,
is up to 400 times faster than a comparable existing remote data access system.
Transport protocols are also briefly compared to understand which protocol can best
utilize high-speed network connections.
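To see why combining bulk and pipeline processing helps, here is a schematic Python sketch (block counts and timings are illustrative, not from the talk) that overlaps simulated network fetches with processing through a bounded queue, versus strictly alternating them.

<pre>
# Sketch: pipelined vs. serial remote data access.
import queue, threading, time

def fetch(blocks, q):
    for b in range(blocks):
        time.sleep(0.05)         # stand-in for per-block network latency
        q.put(b)
    q.put(None)                  # end-of-stream sentinel

def run_pipelined(blocks=10):
    q = queue.Queue(maxsize=4)   # bounded buffer = "bulk" of in-flight blocks
    t = threading.Thread(target=fetch, args=(blocks, q))
    start = time.time()
    t.start()
    while True:
        b = q.get()
        if b is None:
            break
        time.sleep(0.05)         # stand-in for per-block processing
    t.join()
    return time.time() - start

def run_serial(blocks=10):
    start = time.time()
    for _ in range(blocks):
        time.sleep(0.05)         # fetch
        time.sleep(0.05)         # process
    return time.time() - start

print(f"serial:    {run_serial():.2f}s")     # ~ blocks * (fetch + process)
print(f"pipelined: {run_pipelined():.2f}s")  # ~ blocks * max(fetch, process)
</pre>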
My talk will conclude with a presentation of interesting future research areas,
as well as a presentation of the distributed visualization and cyberinfrastructure
research project that was recently funded by the National Science Foundation,
which motivates my visit to Utah, and of related areas for collaboration.
- ''Speaker:'' Andrei Hutanu (Louisiana State University), http://www.cct.lsu.edu/~ahutanu/
- ''Where:'' Conference Room 3760
- ''When:'' Friday noon (02/05)
