VisLunch/Spring2010

From VistrailsWiki

Revision as of 15:26, 28 April 2010

This semester Guoning Chen and Josh Levine will be responsible
for organizing the VisLunch sessions. Please feel free to contact them
with any questions regarding VisLunch or to schedule a talk.

Information regarding the VisLunch sessions will be posted on this wiki page (http://www.vistrails.org/index.php/VisLunch/Spring2010)

Open Discussion and Semester Planning

VisLunch is back for this semester and will be organized by Guoning Chen and Josh Levine. For those unfamiliar with it, VisLunch gives everyone at SCI a platform to present their research work and/or the latest developments in the community that could benefit the rest of us. In addition, the meeting is a great forum to give practice talks and improve your presentation skills. Plus there's _free_ pizza, and it's a nice opportunity to meet new people. Please let either Josh or Guoning know if

1.) You've submitted work to a research venue (e.g. recent conferences like Siggraph) and would like to share your ideas;

2.) You are preparing a submission to an upcoming venue (e.g. IEEE Vis, Siggraph Asia, etc.) and would like to get some feedback;

3.) Your work has been accepted to some venue and you are preparing a presentation you would like to practice; or

4.) You've recently read a new publication and are fascinated by the ideas and wish to share them with the rest of us.


Please consider volunteering to give a presentation at some point! We're hoping to have enough presenters that we won't need to cancel any upcoming sessions.


May 14, 2010

Applications, Data, and the Future of Storage in Computational Science

Computational science continues to play an important role in the scientific discovery process. At the largest scales, efficient data storage, access, and analysis have become central concerns. This talk will focus on storage systems for computational science at extreme scale. First we'll discuss what kinds of applications are running at this scale and what their data requirements look like at a high level. Next we'll describe the traditional model for storage systems, why this model is in use, and what the community is doing to ensure continued success of storage systems using this model. Then we'll have a quick look at what is happening in storage architectures, from a hardware perspective, and how this might impact computational science deployments. Finally, we observe that analysis is becoming a more important part of the data picture, and we'll examine how improvements or augmentations in the storage system might aid in more effective large-scale data analysis.

- Speaker: Rob Ross (ANL), http://www.mcs.anl.gov/~rross/

- Where: Conference Room 3760

- When: Friday noon (05/14)

May 7, 2010

- Fast Volumetric Data Exploration with Importance-Based Accumulated Transparency Modulation

Direct volume rendering techniques have been successfully applied to visualizing volumetric datasets across many application domains. Due to the sensitivity of transfer functions and the complexity of fine-tuning them, however, direct volume rendering is still not widely used in practice. For fast volumetric data exploration, we propose Importance-Based Accumulated Transparency Modulation, which does not rely on transfer function manipulation. This novel rendering algorithm is a generalization and extension of the Maximum Intensity Difference Accumulation technique. By modifying only the accumulated transparency, the resulting volume renderings are essentially high dynamic range. We show that by using several common importance measures, different features of the volumetric datasets can be highlighted. The results can easily be extended to a high-dimensional importance difference space by mixing the results from an arbitrary number of importance measures with weighting factors, all of which control the final output monotonically. With Importance-Based Accumulated Transparency Modulation, the end-user can explore a wide variety of volumetric datasets quickly, without the burden of manually setting and adjusting a transfer function.
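As a rough illustration of the accumulation idea in the abstract above, here is a generic front-to-back compositing loop in which a per-sample importance weight modulates the opacity, and hence the accumulated transparency, along one ray. This is a hedged sketch of standard compositing with a made-up weighting, not the authors' actual algorithm; `base_alpha` and all sample values are arbitrary.

```python
import numpy as np

def composite_ray(intensities, importances, base_alpha=0.1):
    """Front-to-back compositing of one ray, with opacity scaled
    by a hypothetical per-sample importance weight."""
    color = 0.0
    transparency = 1.0  # accumulated transparency (1 - accumulated opacity)
    for c, w in zip(intensities, importances):
        alpha = base_alpha * w            # importance scales sample opacity
        color += transparency * alpha * c  # weight sample by what still shows through
        transparency *= (1.0 - alpha)      # attenuate everything behind
    return color, transparency

# Five arbitrary samples along a ray, uniform importance
samples = np.linspace(0.2, 1.0, 5)
weights = np.ones_like(samples)
c, t = composite_ray(samples, weights)
```

With uniform weights this reduces to ordinary front-to-back compositing; raising or lowering an importance weight makes the corresponding samples more or less opaque in the final image.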


- Speaker: Yong Wan (SCI, graduate student), http://www.sci.utah.edu/people/wanyong.html

- Where: Conference Room 3760

- When: Friday noon (05/07) Finals Week

Apr. 30, 2010

Finals Week

Apr. 23, 2010

Apr. 16, 2010

- Automatically Synthesizing Impressionistic Oil Paintings

In the era of large datasets, many visualization methods either summarize or emphasize aspects of otherwise incomprehensibly large datasets. In a sense, techniques like clustering and dimensionality reduction sacrifice accuracy for meaning. Likewise, impressionist painters developed art in reaction to the photograph, which they considered more accurate than the realistic paintings of the day. Instead of following realist traditions, they tried to craft images which evoke meanings without representing minute details. Their works reflect a rigorous inquiry into human visual perception through which they learned to discern omissible image features from those of cognitive significance. In a sense, impressionist methods distill complex imagery into image summaries which emphasize the aspects of a subject deemed most important by the artist. This talk discusses the relationship between Impressionism and scientific visualization, compares various non-photorealistic rendering systems which synthesize paintings, sheds light on the technical challenges involved in the creation of such systems, contrasts the advantages and disadvantages of human and automatic painting abilities, and discusses some applications of impressionistic painting synthesis.


- Speaker: Clifton Brooks (SCI, graduate student), http://www.sci.utah.edu/people/cbrooks.html

- Where: Conference Room 3760

- When: Friday noon (04/16)

Apr. 9, 2010

- A New Perspective on Perspective: Improving 3-D Scene Sampling through Camera Model Design

Images are pervasive throughout our lives and are the central focus of computer graphics and visualization. Most images are generated from complex 3-D data using the planar pinhole camera model (also known as the perspective projection). This classic camera model has important advantages, including its simplicity, enabling efficient hardware and software implementations, and its similarity to human vision, yielding images familiar to users. The planar pinhole camera model, however, suffers from important limitations, including sampling from a single viewpoint and requiring a uniform sampling rate along the image plane. These limitations result in occlusion problems when no direct line of sight to the viewpoint exists, and in sampling rates that do not correlate well with the complexity of the 3-D data.

We have proposed a new paradigm of problem solving, dubbed Camera Model Design, which overcomes the limitations of the planar pinhole camera model to address many problems which still exist in computer graphics and visualization. The Camera Model Design paradigm stresses three important ideas. First, relax the constraints of the planar pinhole camera model, allowing generalized camera rays which are no longer straight and no longer required to converge. This facilitates camera models that overcome occlusions and have variable sampling rates. Second, camera models should no longer be static. Instead they should dynamically adapt to the 3-D data they are sampling. Third, in order to support interactive exploration, a high level of computational efficiency should be maintained.

In my talk I will give an overview of some of the camera models we have developed and their applications. In addition, I will preview some of our ongoing work and discuss future directions of this work.

- Speaker: Paul Rosen (Purdue Univ.), http://www.cspaul.com/wiki/doku.php

- Where: Conference Room 3760

- When: Friday noon (04/09)

Apr. 2, 2010

- How Seg3D is helping us to build an anatomical atlas of the heart, and the joy of adding a new Spline Tool to Seg3D for valve annulus segmentation

As part of the research in computational models of cardiac physiology at the departments of Computing, Physiology and Cardiovascular Medicine at Oxford, we are working on an anatomical atlas of mammalian hearts.

Using high resolution 3D Magnetic Resonance Imaging of ex-vivo rat hearts, we are interested in 1) finding a geometric model to represent the fundamental structure of the ventricles, and 2) studying the statistical variability of elements such as trabeculae, papillary muscles, valves, etc.

In order to build the model, first we need to segment the cardiac tissue and ventricular cavities from the images.

The open source application Seg3D, developed at the Center for Integrative Biomedical Computing at the University of Utah, has proven very useful to solve the segmentation problems we have encountered.

But we have also extended the functionality provided by Seg3D with a new tool that enables a human expert to segment ring-like structures in 3D space, such as cardiac valve annuli.

In this talk we'll present our ongoing work on cardiac atlas models, as well as the difficulties and advantages we encountered using Seg3D.

- Speaker: Ramón Casero Cañas (University of Oxford), http://web.comlab.ox.ac.uk/people/Ramon.CaseroCanas/

- Where: Conference Room 3760

- When: Friday noon (04/02)

Mar. 26, 2010

Spring Break

Mar. 19, 2010

- Topology Verification for Isosurface Extraction

Visual representations of isosurfaces are ubiquitous in the scientific and engineering literature. In this talk, we present techniques to assess the behavior of topological properties of isosurfacing codes. These techniques allow us to distinguish whether anomalies in isosurface features can be attributed to the underlying physical process or to artifacts from the extraction process. Such scientific scrutiny is at the heart of verifiable visualization: subjecting visualization algorithms to the same verification process that is used in other components of the scientific pipeline. This technique is practical: it exposes actual problems in implementations. Armed with the results of the verification process, practitioners can judiciously select the isosurface extraction technique appropriate for their problem of interest and have confidence in its behavior.

- Speaker: Tiago Etiene Queiroz (SCI), http://www.sci.utah.edu/people/etiene.html

- Where: Conference Room 3760

- When: Friday noon (03/19)

Mar. 12, 2010

- Data-Intensive Scientific Visualization in the Cloud: Challenges and Opportunities

Large-scale scientific visualization systems are historically designed for "throwing datasets" - pushing pre-conditioned data as quickly as possible through the graphics pipeline. However, increasingly, scalable data manipulation, restructuring, and querying -- tasks at which the data management community has provided excellent tools -- are considered integral parts of exploratory visualization. We observe that the visualization community tends to support these tasks only through ad hoc extensions to existing visualization systems. We advocate a different approach: implement and evaluate a core set of visualization algorithms in a high-level, shared-nothing parallel dataflow system. Analysis of such algorithms can be used to inform requirements for a system that bridges the gap between scalable visualization and scalable data analysis. Given its growth and success in industry, we utilize the MapReduce model to perform this analysis on a representative suite of scientific visualization tasks: isosurface extraction, mesh simplification, and rendering.
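The MapReduce framing in the abstract can be illustrated with a toy, pure-Python map/shuffle/reduce over grid cells. The cell layout, block size, and isovalue below are all invented for illustration and are not taken from the talk; a real system would run the same phases in parallel over partitioned data.

```python
from collections import defaultdict

def map_cell(cell_id, corners, isovalue):
    """Map phase: for each grid cell (record), emit whether the
    isosurface crosses it, keyed by a spatial block id."""
    crossed = min(corners) < isovalue <= max(corners)
    block = cell_id // 4  # 4 cells per block in this toy layout
    yield (block, 1 if crossed else 0)

def reduce_block(block, counts):
    """Reduce phase: aggregate the crossed-cell count per block."""
    return (block, sum(counts))

# Hypothetical volume: 6 cells, each with 8 corner scalar values
cells = {0: [0, 0, 0, 0, 1, 1, 1, 1], 1: [0, 0, 0, 0, 0, 0, 0, 0],
         2: [1, 1, 1, 1, 2, 2, 2, 2], 3: [0, 2, 0, 2, 0, 2, 0, 2],
         4: [0, 0, 1, 1, 0, 0, 1, 1], 5: [3, 3, 3, 3, 3, 3, 3, 3]}

# Shuffle: group map outputs by key, then reduce each group
grouped = defaultdict(list)
for cid, corners in cells.items():
    for key, value in map_cell(cid, corners, isovalue=0.5):
        grouped[key].append(value)
result = dict(reduce_block(b, vs) for b, vs in grouped.items())
# result counts isosurface-crossing cells per block
```

Counting crossings per block stands in for the heavier per-cell work (emitting triangles, simplifying, rendering tiles) that the talk's actual tasks would perform in the map phase.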

- Speaker: Jonathan Bronson (SCI), http://www.sci.utah.edu/people/bronson.html

- Where: Conference Room 3760

- When: Friday noon (03/12)

Mar. 5, 2010

- Fiedler Trees for Multiscale Surface Analysis

In this work we introduce a new hierarchical surface decomposition method for multiscale analysis of surface meshes. In contrast to other multiresolution methods, our approach relies on spectral properties of the surface to build a binary hierarchical decomposition. Namely, we utilize the Fiedler vector of the Laplace-Beltrami operator to recursively decompose the surface. For this reason, we coin our surface decomposition the Fiedler tree. Using the Fiedler tree ensures a number of attractive properties, including: mesh-independent decomposition, well-formed and equi-areal surface patches, and noise robustness. We illustrate how the hierarchical patch decomposition may be exploited for generating multiresolution high-quality uniform and adaptive meshes, as well as being a natural means for carrying out wavelet methods.
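The core spectral step in the abstract, recursive bisection by the Fiedler vector, can be sketched on a toy graph. Here a combinatorial graph Laplacian stands in for the Laplace-Beltrami operator of a real surface mesh, and the graph itself is made up for illustration: two triangle clusters joined by a single bridge edge, which the Fiedler vector cleanly separates.

```python
import numpy as np

# Toy graph: triangles {0,1,2} and {3,4,5} joined by the edge (2,3).
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
n = 6

# Combinatorial graph Laplacian L = D - A
L = np.zeros((n, n))
for i, j in edges:
    L[i, j] -= 1.0
    L[j, i] -= 1.0
    L[i, i] += 1.0
    L[j, j] += 1.0

# Fiedler vector = eigenvector of the second-smallest eigenvalue
# (eigh returns eigenvalues in ascending order for symmetric input)
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# One bisection step: split vertices by the sign of the Fiedler vector;
# recursing on each half yields the binary hierarchy (the "Fiedler tree")
part_a = {i for i in range(n) if fiedler[i] >= 0.0}
part_b = {i for i in range(n) if fiedler[i] < 0.0}
```

On this graph the split recovers the two triangle clusters. On a real mesh one would use a discretization of the Laplace-Beltrami operator (e.g. cotangent weights) rather than this purely combinatorial Laplacian.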

- Speaker: Matt Berger (SCI), http://www.sci.utah.edu/people/bergerm.html

- Where: Conference Room 3760

- When: Friday noon (03/05)

Feb. 26, 2010

- Physically-Based Interactive Schlieren Flow Visualization (Pacific Vis 2010 Practice talk)

Understanding fluid flow is a difficult problem and of increasing importance as computational fluid dynamics produces an abundance of simulation data. Experimental flow analysis has employed techniques such as shadowgraph and schlieren imaging for centuries, which allow empirical observation of inhomogeneous flows. Shadowgraphs provide an intuitive way of looking at small changes in flow dynamics through caustic effects while schlieren cutoffs introduce an intensity gradation for observing large scale directional changes in the flow. The combination of these shading effects provides an informative global analysis of overall fluid flow. Computational solutions for these methods have proven too complex until recently due to the fundamental physical interaction of light refracting through the flow field. In this paper, we introduce a novel method to simulate the refraction of light to generate synthetic shadowgraphs and schlieren images of time-varying scalar fields derived from computational fluid dynamics (CFD) data. Our method computes physically accurate schlieren and shadowgraph images at interactive rates by utilizing a combination of GPGPU programming, acceleration methods, and data-dependent probabilistic schlieren cutoffs. Results comparing this method to previous schlieren approximations are presented.

- Speaker: Carson Brownlee (SCI), http://www.sci.utah.edu/people/brownlee.html

- Where: Conference Room 3760

- When: Friday noon (02/26)

Feb. 19, 2010

- Visualizing Statistics for Uncertain Data, with Guarantees

We consider the problem of visualizing statistics on uncertain data. In particular, we assume we are given a data set where each data element has a probability distribution describing its uncertainty. This data arises in robotics, computational structural biology, biosurveillance, and many other important areas. Given a query statistic on this uncertain data, we argue that the answer to the query should itself be represented as a probability distribution. The talk will focus on creating and visualizing distributions for increasingly complicated types of queries: (a) univariate statistics, (b) multivariate statistics, and (c) shape inclusion probabilities (SIPs), which measure the probability that a query point is within a shape summarizing the data. The algorithms to create and visualize these structures are simple and practical; furthermore, we can prove guarantees on their accuracy. We will conclude with open problems, glimpses at ongoing work, and opportunities for collaboration.

(joint work w/ Maarten Loffler)
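Idea (a) from the abstract, a univariate statistic whose answer is itself a distribution, can be sketched with plain Monte Carlo sampling. The per-element Gaussian model and all numbers below are assumptions for illustration; the talk's actual algorithms additionally come with accuracy guarantees, which this sketch does not provide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uncertain data: each element is a Gaussian (mu, sigma)
data = [(1.0, 0.1), (2.0, 0.2), (3.0, 0.15)]

def statistic_distribution(data, stat=np.mean, n_samples=10000):
    """Sample one realization of every element, compute the statistic,
    and repeat; the samples approximate the statistic's distribution."""
    mus = np.array([m for m, _ in data])
    sigmas = np.array([s for _, s in data])
    draws = rng.normal(mus, sigmas, size=(n_samples, len(data)))
    return np.array([stat(row) for row in draws])

dist = statistic_distribution(data)
# the sampled means concentrate near (1.0 + 2.0 + 3.0) / 3 = 2.0
```

The resulting array can be visualized directly (e.g. as a histogram or density plot), which is exactly the shift the abstract argues for: reporting the query answer as a distribution rather than a single number.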

- Speaker: Jeff Phillips (CS), http://www.cs.utah.edu/~jeffp/

- Where: Conference Room 3760

- When: Friday noon (02/19)

Feb. 12, 2010

- Applying Manifold Learning to Plotting Approximate Contour Trees (VIS paper discussion)

- Speaker: Hao Wang (SCI), http://www.cs.utah.edu/~haow/


- Mapping Text with Phrase Nets (InfoVis paper discussion)

- Speaker: Claurissa Tuttle (SCI), http://www.sci.utah.edu/people/tuttle.html

- Where: Conference Room 3760

- When: Friday noon (02/12)

Feb. 5, 2010

- Distributed visualization using high-speed networks

I will talk about methods of designing a distributed visualization application to take advantage of high-speed networks and distributed resources to improve scalability, performance and capabilities. I will describe how, through distribution, a visualization application can be improved to interactively visualize tens of gigabytes of data and handle large datasets while maintaining high quality. The application supports interactive frame rates, high resolution, collaborative visualization and sustains remote I/O bandwidths of several Gbps.

I will also describe my research in remote data access systems motivated by the distributed visualization application. Because wide-area networks may have a high latency, the remote I/O system uses an architecture that effectively hides latency. Five remote data access architectures are briefly analyzed and the results show that an architecture that combines bulk and pipeline processing is the best solution for high-throughput remote data access. The resulting system, also supporting high-speed transport protocols and configurable remote operations, is up to 400 times faster than a comparable existing remote data access system.

Transport protocols are briefly compared to understand which protocol can best utilize high-speed network connections.

I will conclude my talk with a presentation of interesting future research areas, as well as the distributed visualization and cyberinfrastructure research project that was recently funded by the National Science Foundation and that motivates my visit to Utah, along with related areas for collaboration.

- Speaker: Andrei Hutanu (Louisiana State University) http://www.cct.lsu.edu/~ahutanu/

- Where: Conference Room 3760

- When: Friday noon (02/05)