
Oklahoma Supercomputing Symposium 2007



Table of Contents


KEYNOTE

Jay Boisseau
Director
Texas Advanced Computing Center
University of Texas at Austin
Topic: "How to Build the Fastest Academic Supercomputer in America -- Twice in One Year"

Slides:   PDF

Talk Abstract

The Texas Advanced Computing Center (TACC) at The University of Texas at Austin has experienced tremendous growth in infrastructure, services, and R&D activities over the past six years. These achievements include deploying new, world-class HPC systems. One year ago, TACC deployed the largest cluster for academic research in the world, and TACC is repeating that feat now with a 10x more powerful system. The rapid growth in scale of these systems has provided valuable experience and interesting insights into deploying large-scale systems -- and into how the requirements and issues are now changing rapidly due to quad-core processors, larger switches, and new science drivers.

Biography

John R. ("Jay") Boisseau is the director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. Since he assumed this position in June 2001, TACC has rapidly grown into one of the leading advanced computing centers in the world by developing and deploying powerful High Performance Computing, remote visualization, data storage, and grid computing technologies for researchers. Boisseau participates in and guides the center's overall resources and services, research and development, and education and outreach programs, and serves as the principal investigator for TACC's two largest awards: the National Science Foundation TeraGrid institutional lead, and the NSF Leadership-Class System Acquisition ("Track 2") petascale computing system to be deployed at TACC in late 2007. His specific activities include studying the performance characteristics of high-end computing systems and microprocessors, and developing grid technologies and portals for computational science. His newest interest is the application of HPC and grid technologies to computational biology and biomedicine.

Boisseau started his training at the University of Virginia, where he received a bachelor's degree in astronomy and physics in 1986 while working in various scientific computing positions. He continued his education at The University of Texas at Austin, where he received his master's in astronomy in 1990, then took a position at the Arctic Region Supercomputing Center in 1994 while conducting computational research on Type Ia explosion mechanisms, which he completed in 1996. He then moved to the San Diego Supercomputer Center, where he eventually founded and became the Associate Director of the Scientific Computing Department, initiating and leading several major activities of the center in HPC and grid computing.


OTHER PLENARY SPEAKERS

Henry Neeman

Director
OU Supercomputing Center for Education & Research (OSCER)
University of Oklahoma
Topic: "OSCER State of the Center Address"
Slides:   PowerPoint   PDF

Talk Abstract

The OU Supercomputing Center for Education & Research (OSCER) celebrated its 6th anniversary on August 31, 2007. In this report, we examine what OSCER is, how OSCER began, and where OSCER is going.

Biography

Dr. Henry Neeman is the Director of the OU Supercomputing Center for Education & Research, an adjunct assistant professor in the School of Computer Science and a research scientist at the Center for Analysis & Prediction of Storms, all at the University of Oklahoma. He received his BS in computer science and his BA in statistics with a minor in mathematics from the State University of New York at Buffalo in 1987, his MS in CS from the University of Illinois at Urbana-Champaign in 1990 and his PhD in CS from UIUC in 1996. Prior to coming to OU, Dr. Neeman was a postdoctoral research associate at the National Center for Supercomputing Applications at UIUC, and before that served as a graduate research assistant both at NCSA and at the Center for Supercomputing Research & Development.

In addition to his own teaching and research, Dr. Neeman collaborates with dozens of research groups, applying High Performance Computing techniques in fields such as numerical weather prediction, bioinformatics and genomics, data mining, high energy physics, astronomy, nanotechnology, petroleum reservoir management, river basin modeling and engineering optimization. He serves as an ad hoc advisor to student researchers in many of these fields.

Dr. Neeman's research interests include high performance computing, scientific computing, parallel and distributed computing, structured adaptive mesh refinement and scientific visualization.

Stephen Wheat

Senior Director, High Performance Computing
Intel
Topic: "Exa, Zeta, Yotta: Not Just Goofy Words"
Slides:   PDF
Video:   Quicktime

Talk Abstract: coming soon

Biography

Dr. Stephen Wheat is the Senior Director of Intel's High Performance Computing Platform Organization. He is responsible for the development of Intel's HPC strategy and the pursuit of that strategy through platform architecture, software, tools, sales and marketing, and eco-system development and collaborations.

Dr. Wheat has a wide breadth of experience that gives him a unique perspective in understanding large scale HPC deployments. He was the Advanced Development manager for the Storage Components Division, the manager of the RAID Products Development group, the manager of the Workstation Products Group software and validation groups, and manager of the systems software group within the Supercomputing Systems Division (SSD). At SSD, he was a Product Line Architect and was the systems software architect for the ASCI Red system. Before joining Intel in 1995, Dr. Wheat worked at Sandia National Laboratories, performing leading research in distributed systems software, where he created and led the SUNMOS and PUMA/Cougar programs. Dr. Wheat is a Gordon Bell Prize winner and has been awarded Intel's prestigious Achievement Award. He has a patent in Dynamic Load Balancing in HPC systems.

Dr. Wheat holds a Ph.D. in Computer Science and has several publications on the subjects of load balancing, inter-process communication, and parallel I/O in large-scale HPC systems. Outside of Intel, he is a commercial multi-engine pilot and a certified multi-engine flight instructor.

Robert Whitten Jr.

National Center for Computational Sciences
Oak Ridge National Laboratory
Topic: "The National Center for Computational Sciences: An Introduction"
Slides: available after the Symposium
Video:   Quicktime

Talk Abstract

The National Center for Computational Sciences (NCCS) was founded in 1992 to advance the state of the art in high performance computing, by bringing a new generation of parallel computers out of the laboratory and into the hands of the scientists who could most use them. It is a managed activity of the Advanced Scientific Computing Research program of the Department of Energy Office of Science (DOE-SC) and is located at Oak Ridge National Laboratory.

Biography
Robert M. Whitten Jr. is a member of the User Assistance and Outreach Group of the National Center for Computational Sciences at Oak Ridge National Laboratory. The User Assistance and Outreach Group is tasked with providing technical support to researchers who use the leadership-class computing resources of the NCCS. Robert holds degrees in Computer Science from East Tennessee State University.

Tommy Toles

Business Development Executive
Advanced Micro Devices
Topic: "Future of Supercomputing: The Computational Element"
Slides: PDF

Talk Abstract

This talk will address the key trends in computation that the industry will be facing over the next few years. Common challenges and innovative approaches will be discussed, as the audience will get a "behind the scenes" look at what goes into the design of next generation platforms. Historic architectures will be compared with today's leading edge designs, to showcase the improvements made to pave the way for future scaling.

Biography

Tommy Toles has a BSEE from Texas A&M University. Early in his career, he worked as a hardware design engineer for both Bell Helicopter and the Tandy Corporation. Over the past 15 years, Tommy has served AMD in several areas, including technology marketing and sales. Currently, he serves as a Business Development Executive, assisting IT managers in planning for new technologies and obtaining maximum value from existing technology. He has been involved with several large clusters across both academic and commercial organizations. Tommy is married with three children, lives in Austin, TX, and enjoys waterskiing at dawn several mornings each week.

OTHER PLENARY SPEAKERS TO BE ANNOUNCED


BREAKOUT SPEAKERS

Joshua Alexander

IT Support Specialist
OU Information Technology
University of Oklahoma
Topic: "Implementing Linux-enabled Condor in Multiple Windows PC Labs"
(with Chris Franklin and Horst Severini)
Slides:     PDF   PowerPoint   Poster

Talk Abstract

At the University of Oklahoma (OU), Information Technology is completing a rollout of Condor, a free opportunistic grid middleware system, across 775 desktop PCs in IT labs all over campus. OU's approach, developed in cooperation with the Research Computing Facility at the University of Nebraska Lincoln, provides the full suite of Condor features, including automatic checkpointing, suspension and migration as well as I/O over the network to disk on the originating machine. These features are normally limited to Unix/Linux installations, but OU's approach allows them on PCs running Windows as the native operating system, by leveraging coLinux as a mechanism for providing Linux as a virtualized background service. With these desktop PCs otherwise idle approximately 80% of the time, the Condor deployment is allowing OU to get 5 times as much value out of its desktop hardware.
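As a back-of-the-envelope check (not from the talk itself), the abstract's two figures are consistent with each other: if a PC does desktop work 20% of the time and Condor harvests the other 80%, total useful output is five times what desktop use alone delivers. A minimal sketch:

```python
def value_multiplier(idle_fraction):
    """Total useful compute delivered relative to desktop use alone,
    assuming the opportunistic scheduler harvests every idle cycle."""
    busy = 1.0 - idle_fraction          # fraction spent on desktop work
    return (busy + idle_fraction) / busy

# The abstract's figure: lab PCs are idle ~80% of the time.
print(round(value_multiplier(0.80), 2))  # -> 5.0, the stated 5x value
```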

Biography
Joshua Alexander is a Computer Engineering undergraduate at the University of Oklahoma. He currently works with the Customer Services division of OU Information Technology, and also serves as an undergraduate researcher for the OU Supercomputing Center for Education & Research (OSCER). His current project for OSCER involves both the OU IT Condor pool and development of software tools for deploying Condor at other institutions.

Amy Apon

Associate Professor
Department of Computer Science & Computer Engineering
University of Arkansas
Topic: "Roundtable: The Great Plains Network's Grid Computing & Middleware Initiative"
(with Greg Monaco and Gordon Springer)
Slides:   PDF (Bill Spollen)

Roundtable Abstract

This roundtable focuses on recent developments in collaborative middleware among the Great Plains Network participants and an exploration of directions for the coming year. The roundtable will feature a demonstration of resources developed at the University of Missouri, discussion of a project to use Shibboleth as a means of managing identities for a Wiki (e.g., GPN Wiki), participation in the University of Oklahoma Condor project, and, finally, Globus grid issues (extending access to GPN globus-based grid to other users and institutions).

Biography

Dr. Amy Apon holds a Ph.D. from Vanderbilt University in performance analysis of parallel and distributed systems. Her current research focuses on cluster and grid computing, including scheduling in grid systems, management of large-scale data-intensive applications, and authorization and authentication architectures. She also teaches courses in the area of cluster and grid computing and is collaborating with Louisiana State University to teach a course in high-performance computing that explores new course delivery methods using high-definition video broadcast over Access Grid and new high-speed fiber optic networks in Louisiana (LONI) and Arkansas (AREON). As the Principal Investigator of a National Science Foundation Major Research Instrumentation grant, she plays a key role in directing high performance computing activities on the University of Arkansas campus.

Keith Ball

Systems Engineer
EverGrid Inc
Topic: "User-Friendly Checkpointing and Stateful Preemption in HPC Environments Using Evergrid Availability Services"
Slides: PowerPoint

Talk Abstract

As high-performance computing (HPC) workloads increase in complexity and size, the HPC systems that support them must scale accordingly. As the size of workloads increase, the Mean Time Between Failure (MTBF) of the system components (CPUs, memory, and disks) will become shorter than expected application runtimes. Next generation systems must be designed to handle failures without interrupting the workloads on the system or crippling the efficiency of the resource. To handle fault-tolerance in the HPC environment, Evergrid Inc provides a transparent, fault-tolerant framework that can periodically save the state of an application and correctly restore/restart the application if a failure occurs. This solution, Availability Services (AVS), is provided as a user-level shared library that is dynamically loaded with an HPC application at runtime. It supports integration with popular scheduling systems such as LSF and PBS to provide automatic handling of checkpoint and restart. AVS provides stateful preemption of HPC workloads, enabling true fair-share scheduling policies for large shared HPC clusters.
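The scaling argument above can be made concrete with a small model (not part of the Evergrid talk): under independent failures, aggregate MTBF shrinks linearly with component count, and Young's first-order approximation then suggests how often to checkpoint. The node count, per-node MTBF, and checkpoint cost below are illustrative assumptions only.

```python
import math

def system_mtbf(component_mtbf_hours, n_components):
    # Independent, exponentially distributed failures:
    # aggregate MTBF shrinks linearly with scale.
    return component_mtbf_hours / n_components

def young_interval(checkpoint_cost_hours, mtbf_hours):
    # Young's first-order approximation for the optimal
    # interval between periodic checkpoints.
    return math.sqrt(2 * checkpoint_cost_hours * mtbf_hours)

# Hypothetical system: 10,000 nodes, each with a 5-year MTBF,
# and a checkpoint that takes 6 minutes (0.1 h) to write.
mtbf = system_mtbf(5 * 365 * 24, 10_000)
print(round(mtbf, 2))                       # -> 4.38 hours for the full system
print(round(young_interval(0.1, mtbf), 2))  # -> 0.94: checkpoint roughly hourly
```

With system MTBF measured in hours, any workload whose runtime spans days cannot finish without checkpoint/restart, which is exactly the gap a framework like AVS is meant to fill.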

Biography

Keith is a Systems Engineer at Evergrid Software in Blacksburg, VA, working next door to Virginia Tech's System X supercomputer. Keith received a PhD in physics from the University of Chicago, performing statistical and molecular dynamics studies of potential energy landscapes of small atomic clusters. After postdoctoral positions at Darmstadt Technical University in Germany and at the University of California, San Francisco on the computational prediction of protein folds, he worked for several years in bioinformatics and computational chemistry at biotechnology companies in California. Keith is presently applying his computational and scientific background to help integrate checkpointing solutions into large-scale distributed HPC applications and systems.

Keith Brewster

Senior Research Scientist
Center for Analysis & Prediction of Storms
University of Oklahoma
Topic: "High Resolution Assimilation of Radar Data for Thunderstorm Forecasting on OSCER"
Slides: PowerPoint   Movie

Talk Abstract

Recently, a network of four X-band radars was deployed in southwestern Oklahoma by the Center for Adaptive Sensing of the Atmosphere (CASA). CAPS has developed a system to assimilate reflectivity data from the CASA radars, along with NEXRAD, mesonet, satellite and conventional data, at 1-km grid resolution using ADAS and incremental analysis updating in the ARPS numerical weather prediction model. Four 6-hour forecasts were made using various combinations of these data for each event day during the Spring of 2007. These forecasts were run in near-real time -- a pair of 6-hour forecasts took 8 hours using 150 nodes on topdawg, a cluster of Pentium4 Xeon EM64T Linux computers at the OU Supercomputing Center for Education & Research (OSCER). Additional forecasts were made later utilizing the radial velocity data in ADAS. Forecasts that accurately depicted the development of storms, and even small-scale rotation within the storms, were successfully made.

Biography

Keith Brewster is a Senior Research Scientist at the Center for Analysis and Prediction of Storms at the University of Oklahoma and an Adjunct Associate Professor in the OU School of Meteorology. His research involves data assimilation of advanced observing systems, including data from Doppler radars, satellites, wind profilers, aircraft and surface mesonet systems. He earned an M.S. and Ph.D. in Meteorology from the University of Oklahoma and a B.S. from the University of Utah.

Robert Ferdinand

Associate Professor
Department of Mathematics
East Central University
Topic: "Solution and Parameter Estimation in Groundwater Model"
Slides:   PDF

Talk Abstract

The model presented takes the form of a coupled system of two nonlinear PDEs describing the dynamics of contaminated groundwater flowing through fissures (cracks) in a rock matrix, with the contaminant traveling and diffusing along the length of the fissure and also into the surrounding rock matrix. A finite difference scheme is used to approximate the model solution, and the scheme is further used to estimate model parameters via an inverse method. Both the solution and the parameter estimation require large amounts of computation.
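As a hedged illustration only (the abstract gives neither the model's coefficients nor its exact scheme, and the full model couples a second PDE for diffusion into the rock matrix), an explicit finite-difference step for a single 1-D advection-diffusion equation along a fissure looks like this; every parameter value is a hypothetical placeholder:

```python
# Explicit upwind/central finite-difference sketch for 1-D
# advection-diffusion of a contaminant along a fissure.
L_len, T_end = 1.0, 0.1      # fissure length, simulated time (hypothetical)
nx, nt = 101, 2000           # spatial points, time steps
D, v = 0.01, 0.5             # diffusion coefficient, flow velocity
dx, dt = L_len / (nx - 1), T_end / nt

c = [0.0] * nx
c[0] = 1.0                   # fixed contaminant concentration at the inlet

for _ in range(nt):
    new = c[:]
    for i in range(1, nx - 1):
        adv = -v * (c[i] - c[i - 1]) / dx                     # upwind advection
        dif = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx**2  # central diffusion
        new[i] = c[i] + dt * (adv + dif)
    new[-1] = new[-2]        # zero-gradient outlet boundary
    c = new

print(max(c))  # -> 1.0: the plume never exceeds the inlet concentration
```

Parameter estimation by the inverse method then wraps many such forward solves inside an optimization loop, which is where the large computational cost arises.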

Biography
Robert Ferdinand obtained his PhD in Applied Mathematics from the University of Louisiana in 1999. His areas of interest include mathematical modeling of physical and biological processes, in which numerical schemes are used to computationally approximate model solutions; for example, the inverse method is applied to numerically estimate model parameters, which involves substantial computing. His theoretical work involves perturbation techniques to investigate the long-term behavior of model solutions.

Chris Franklin

IT Systems Administrator
OU Information Technology
University of Oklahoma
Topic: "Implementing Linux-enabled Condor in Multiple Windows PC Labs"
(with Joshua Alexander and Horst Severini)
Slides:     PDF   PowerPoint   Poster

Talk Abstract

At the University of Oklahoma (OU), Information Technology is completing a rollout of Condor, a free opportunistic grid middleware system, across 775 desktop PCs in IT labs all over campus. OU's approach, developed in cooperation with the Research Computing Facility at the University of Nebraska Lincoln, provides the full suite of Condor features, including automatic checkpointing, suspension and migration as well as I/O over the network to disk on the originating machine. These features are normally limited to Unix/Linux installations, but OU's approach allows them on PCs running Windows as the native operating system, by leveraging coLinux as a mechanism for providing Linux as a virtualized background service. With these desktop PCs otherwise idle approximately 80% of the time, the Condor deployment is allowing OU to get 5 times as much value out of its desktop hardware.

Biography
Chris Franklin is a senior in the School of Computer Science at the University of Oklahoma. He has worked for OU Information Technology for three years, and is currently part of a team of 3 people responsible for the administration of approximately 800 lab PCs, among other systems.

Paul Gray

Associate Professor
Department of Computer Science
University of Northern Iowa
Topic: "High Performance Computing in a Small College Environment: Tools, Techniques, and Resources"
(with Charlie Peck)
Slides:   PDF

Talk Abstract

Providing computational resources and software tools to faculty and students in a small college environment is different, in fundamental ways, from doing so for large R1 and similar institutions. The types of demands, e.g. research and teaching, the human and technical resources available, and the support requirements are just some things that change significantly with scale.

Unfortunately, most of the available best-practice and similar information is more appropriate for large sites than small ones. Yet educating students and faculty about HPC in small college environments is of the utmost importance, given the continuing shortage of computationally-aware STEM people in that space.

Paul and Charlie draw on many years of experience spent supporting HPC hardware and software in small college environments to describe the tools, techniques, and resources appropriate for that context. A question and answer session will follow.

Biography

Paul Gray is an Associate Professor of Computer Science at the University of Northern Iowa. He is the chair of the SC (SuperComputing) Conference Education Program and instructs summer workshops on parallel computing as part of the Supercomputing Education Program efforts. His current efforts combine the Open Science Grid and TeraGrid with educational endeavors that revolve around LittleFe, bringing aspects of grid computing into the high school and undergraduate curriculum.

Mohammed A. Kalkhan

Research Scientist / Faculty
Biometrics, Geospatial information, Spatial Statistics
Natural Resource Ecology Laboratory
Colorado State University
Topic: "GODM: Creating a Cyberinfrastructure to Involve Volunteer Groups in Citizen Science"
Slides: available after the Symposium

Talk Abstract

The vision of our research team is to empower citizen scientists (e.g., students, educators, volunteer organizations, private stakeholders, and the public) in using an existing cyberinfrastructure (i.e., the Global Organism Detection and Monitoring system (GODM)) to digitally collect, input, integrate, and analyze data on the distribution of harmful non-native plants and animals. Specifically, the goals are to: (1) promote the sharing of data on harmful non-native plants and animals using our publicly available cyberinfrastructure; (2) provide cyberinfrastructure tools to help citizen scientists accurately collect and efficiently disseminate data on non-native species; (3) provide mapping and decision support services to less technologically advanced groups to analyze and map the current and future distributions of invaders; (4) educate citizen scientists on the utility of cyberinfrastructure to empower them to advance science and conservation and management practices at local to global scales; and (5) foster a shift from a reactive to a proactive prevention, control, and containment strategy for new invaders. To meet these goals, the collaborative research team will: interview citizen scientists to determine appropriate enhancements to existing cyberinfrastructure tools; identify thresholds to ensure high quality data collection; implement identified enhancements to existing cyberinfrastructure tools; and develop educational materials to distribute to both early adopters and eventual end users of the GODM cyberinfrastructure.

Biography

Dr. Mohammed A. Kalkhan is a Research Scientist and Faculty member at the Natural Resource Ecology Laboratory (NREL), an Affiliate Faculty member in the Department of Forest, Rangeland, and Watershed Stewardship, a faculty member and advisor for the Interdisciplinary Graduate Certificate in Geospatial Science, and a member of the Department of Earth Resources (currently the Department of Geosciences), all at Colorado State University (CSU). Dr. Kalkhan received his BSc in Forestry (1973) and his master's degree in Forest Mensuration (1980) from the College of Agriculture and Forestry at the University of Mosul, Iraq, and his PhD in Forest Biometrics and Remote Sensing Applications from the Department of Forest Sciences at Colorado State University in 1994. From 1975 to 1982, he was a lecturer in the Department of Forestry, College of Agriculture and Forestry, University of Mosul, Iraq. In 1994, he joined the Natural Resource Ecology Laboratory. He has served on a number of program planning committees, including: the Monitoring Science and Technology Symposium (Unifying Knowledge for Sustainability in the Western Hemisphere), sponsored by the USDA Forest Service and EPA; the NASA-USGS Invasive Species Tasks (present); USGS-NPS Mapping; a NASA and USDA Agricultural Research Service (ARS) research program to develop predictive spatial models and maps for leafy spurge at Theodore Roosevelt National Park, North Dakota, using hyperspectral imaging from NASA EO-1 Hyperion (space), NASA AVIRIS (high-altitude aircraft), and ARS CASI (low-altitude aircraft); and The First Conference on Fire, Fuels, Treatments and Ecological Restoration: Proper Place, Appropriate Time. He is a member of the Consortium for Advancing the Monitoring of Ecosystem Sustainability in the Americas (CAMESA). Dr. Kalkhan also serves on graduate student committees at CSU (9 students graduated; currently working with 7).

Dr. Kalkhan's research activities include biometrics (natural resource applications); landscape structure, analysis, and modeling; remote sensing; GIS; biodiversity assessment; ecological modeling; wetland ecosystems; spatial statistics; sampling methods and designs; determination of uncertainty; mapping accuracy assessment; agricultural ecology (cropping, health monitoring, assessment, precision farming, water resources, soils); fire ecology, characteristics, behavior, and modeling; and environmental and clinical health studies. These activities can be applied to landscape issues and related to local and regional land use and land cover, wetlands, fire ecology, and other natural resource characteristics. His research interests involve the integration of field data, GIS, and remote sensing with geospatial statistics to understand landscape parameters through the use of complex models with thematic mapping approaches for wildfire, wetland, invasive species, and plant diversity studies. These studies are aimed at developing a better understanding of landscape-scale ecosystems at any level and at developing better tools for ecological forecasting. His research is funded by NASA, NSF, USGS, NPS, BOR, BLM, the USDA Forest Service, and other national organizations. Examples of his current research projects include a newly funded NASA and NPS project on integrated geospatial information and spatial statistics for modeling and mapping invasive species in the western USA (e.g., tamarisk or salt cedar, Russian olive, leafy spurge), as well as fuel variability and invasive species characteristics within the Rocky Mountain region (Cerro Grande, Hayman, and High Meadow; Rocky Mountain National Park; Grand Teton National Park; wetlands of the coastal areas of Texas; and others). The research challenge is to develop new tools based on geospatial information and mathematical statistics to forecast landscape characteristics. Dr. Kalkhan has also been active in the academic community, past and present, co-teaching courses in spatial statistical modeling and mapping of natural resources, sampling designs, biometrics, remote sensing and GIS, forest measurements, and others.

Randy Kolar

Associate Professor
School of Civil Engineering & Environmental Science
University of Oklahoma
Topic: "Hurricane Storm Surge Modeling for Southern Louisiana"

Talk Abstract

Coastal Louisiana is characterized by low-lying topography and an intricate network of sounds, estuaries, bays, marshes, lakes, rivers and inlets that permit widespread inundation during hurricanes, such as that witnessed during the 2005 hurricane season with Katrina and Rita. A basin to channel scale implementation of the ADCIRC hydrodynamic model has been developed that simulates hurricane storm surge, tides and river flow in this complex region. This is accomplished by defining a domain and computational resolution appropriate for the relevant processes, specifying realistic boundary conditions, and implementing accurate, robust, and highly parallel unstructured grid numerical algorithms. The model domain incorporates the Western North Atlantic, the Gulf of Mexico and the Caribbean Sea, so that interactions between basins and the shelf are explicitly modeled, and boundary conditions for tidal and hurricane processes are specified at the open boundary, which is located in deep water. Selective refinement of the unstructured grid enables high resolution of the complex overland region for modeling localized scales of flow, while minimizing simulation time, so that the model can also be used in forecast mode. The current computational grid resolves features down to 60 meters and contains 2.17 million nodes, each with 3 degrees of freedom. ADCIRC applies a finite element-based solution to the generalized wave continuity form of the governing shallow water equations. The model algorithms must be robust and stable to accommodate the energetic flows that are generated during a hurricane, especially in the narrow inlets and channels connecting water bodies and/or floodplains. Validation of the model is achieved through hindcasts of historical hurricanes. Currently, the validated model is being used by the USACE, FEMA, and the State of Louisiana for preparing post-Katrina IPET reports, levee design, and coastal restoration studies.

Biography: coming soon

Scott Lathrop

Director of Education, Outreach & Training
TeraGrid
Topic: "Advancing Scientific Discovery through TeraGrid"
Slides: PowerPoint

Talk Abstract

TeraGrid is an open scientific discovery infrastructure combining leadership class resources at nine partner sites to create an integrated, persistent computational resource. Using high-performance network connections, the TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country. You will learn how TeraGrid's resources, including more than 250 teraflops of computing capability and more than 30 petabytes of online and archival data storage, are advancing scientific discovery.

Biography

Scott Lathrop has been involved in high performance computing and communications activities since 1986, and has been actively engaging local, regional and national research and education communities since that time. Lathrop is the Director of Education, Outreach & Training (EOT) for the TeraGrid project. The TeraGrid project is funded by the National Science Foundation to provide an open and extensible partnership of researchers, computational experts, and resource providers that together provide a comprehensive cyberinfrastructure to enable discovery in science and engineering. Lathrop coordinates the EOT activities among the Resource Provider sites involved in the TeraGrid project. He is also a member of the Advancement Team that is helping to lead the Engaging People in Cyberinfrastructure (EPIC) project, funded by NSF, among more than twenty organizations involved in EOT activities around the country. Lathrop is the Education Program chair for SC07, the premier international conference on high performance computing, networking, storage and analysis. Lathrop is co-PI on the NSF-funded Computational Science Education Reference Desk (CSERD), a Pathways project of the National Science Digital Library (NSDL) program.

Chokchai (Box) Leangsuksun

Associate Professor of Computer Science
SWEPCO Endowed Professor
Louisiana Tech University
Topic: "Resiliency in HPC"
Slides: available after the Symposium

Talk Abstract

High Performance Computing is an essential enabling technology not only for scientific advancement but also for economic and business driving forces, especially in today's digital world. Time to market, time to insight and time to discovery are prime objectives in HPC and grid adoption and utilization. In addition, the recent introduction of dual-core and multi-core processor products will propel adoption of HPC in mainstream environments. Experts have predicted that personal supercomputers will soon be available on the desktop.

Nevertheless, many challenges remain across a wide spectrum of obstacles, such as mismatches in the technological advancement of hardware and software components, programmability, and system reliability and robustness, especially in very large scale systems. In this talk, Box will present his current research and development in HPC, especially his efforts toward resiliency in High Performance Computing environments.

Biography

Dr. Chokchai "Box" Leangsuksun is the SWEPCO Endowed Professor and an associate professor in Computer Science and the Center for Entrepreneurship and Information Technology (CEnIT) at Louisiana Tech University. He received his M.S. and Ph.D. in computer science from Kent State University (Kent, OH) in 1989 and 1995, respectively. His research interests include:

  • Highly Reliable and High Performance Computing
  • Intelligent component based Software Engineering
  • Service-Oriented Architecture, Service engineering and management
  • High Performance Scientific computing & Bioinformatics

Prior to joining Louisiana Tech University in early 2002, Box was a Member of Technical Staff at Lucent Technologies - Bell Labs Innovations from 1995 to 2002, where he held key research and development roles on various strategic products. Within a short academic time span, Box has established his research reputation by founding and co-chairing a high availability and performance workshop, serving on the program committees of various international conferences and workshops, releasing open source software, writing articles featured in major technical journals and magazines, and giving presentations at highly regarded conferences. He has also collaborated with various research groups and national and industrial labs, including Oak Ridge National Laboratory, Ames Laboratory, Lawrence Livermore National Laboratory, the National Center for Supercomputing Applications, LAM/MPI, Dell, Intel, and Ericsson. In September 2003, he received an outstanding teaching award from the College of Engineering and Science at Louisiana Tech University.

Xiaolin (Andy) Li

Assistant Professor
Computer Science Department
Oklahoma State University
Topic: "P2P Desktop Grid for Hybrid Sensor Grid Systems"

Talk Abstract

With rapid progress on grid computing and sensor networks, now is the right time to envision an ultimate pervasive grid environment integrating both grids and sensornets seamlessly. We are working on a peer-to-peer approach to enable such an integrated system. This talk will sketch a roadmap towards a hybrid sensor grid system, including three subsystems: AgentGrid, AgentBroker, and TinyAgent.

Biography

Dr. Li has been an assistant professor in the Department of Computer Science at Oklahoma State University since 2005. His research interests include Distributed Systems (Autonomic, Grid, P2P, and HPC), Computer Networks, and Software Engineering. He is director of the Scalable Software Systems Laboratory (S3Lab). His research has been sponsored by the National Science Foundation (NSF), the Department of Homeland Security (DHS), AirSprite Technologies, and OSU. He has been a visiting scholar in the Department of Computer Sciences at the University of Texas at Austin, an alumnus of IBM Extreme Blue, a research staff member at the Center for Wireless Communication (now the Institute for Infocomm Research, I2R), and a research scholar at the National University of Singapore (NUS). He received his Ph.D. degrees from Rutgers University and NUS.

John Matrow

Director
High Performance Computing Center
Wichita State University
Topic: "Internet2 and Benefits to State Education Networks"
Slides:     PPT1   PPT2   PPT3

Talk Abstract

Internet2 is a research-only network among the 207 largest universities, tasked with developing next-generation Internet protocols, security, applications, and more. The Internet2 K20 Initiative brings together Internet2 member institutions, primary and secondary schools, colleges and universities, libraries, and museums to extend new technologies, applications, middleware, and content to all educational sectors, as quickly and connectedly as possible.

Biography

John Matrow has a B.S. in Computer Science from Central Missouri State University and an M.S. in Computer Science from Iowa State University. He has worked for the State of Iowa, and spent 20 years at LSI Logic Storage Systems, formerly Symbios Logic, née NCR Microelectronics Division, in both IT and product development. Since 2000, John has been System Administrator/Trainer and now Director of the High Performance Computing Center at Wichita State University, where he has been actively involved in raising the level of research with regard to high performance computing and high performance networking (Internet2). John also teaches night courses in databases for the Computer Science and MIS departments.

Greg Monaco

Executive Director
Great Plains Network
Topic: "Roundtable: The Great Plains Network's Grid Computing & Middleware Initiative"
(with Amy Apon and Gordon Springer)
Slides:   PDF (Bill Spollen)

Roundtable Abstract

This roundtable focuses on recent developments in collaborative middleware among the Great Plains Network participants and an exploration of directions for the coming year. The roundtable will feature a demonstration of resources developed at the University of Missouri, discussion of a project to use Shibboleth as a means of managing identities for a Wiki (e.g., GPN Wiki), participation in the University of Oklahoma Condor project, and, finally, Globus grid issues (extending access to GPN globus-based grid to other users and institutions).

Biography

Dr. Greg Monaco has held several positions with the Great Plains Network (GPN) since joining in August 2000. He began as Research Collaboration Coordinator and was then promoted to Director for Research and Education. Greg is currently the Executive Director of GPN.

Charlie Peck

Associate Professor
Department of Computer Science
Earlham College
Topic: "High Performance Computing in a Small College Environment: Tools, Techniques, and Resources"
(with Paul Gray)
Slides:   PDF

Talk Abstract

Providing computational resources and software tools to faculty and students in a small college environment differs, in fundamental ways, from doing so at large R1 and similar institutions. The types of demands (e.g., research and teaching), the human and technical resources available, and the support requirements are just some of the things that change significantly with scale.

Unfortunately, most of the available best-practice and similar information is more appropriate for large sites than small ones. Yet educating students and faculty about HPC in small college environments is of the utmost importance, given the continuing shortage of computationally-aware STEM people in that space.

Paul and Charlie draw on many years of experience spent supporting HPC hardware and software in small college environments to describe the tools, techniques, and resources appropriate for that context. A question and answer session will follow.

Biography

Charlie teaches computer science at Earlham College in Richmond, Indiana. He is also the nominal leader of Earlham's Cluster Computing Group. His research interests include parallel and distributed computing, computational science, and education. Working with colleagues, Charlie is co-PI for the LittleFe project. During the summer, he often teaches parallel and distributed computing workshops for undergraduate science faculty under the auspices of the National Computational Science Institute and the SC (SuperComputing) Conference Education Program.

Jeff Pummill

Senior Linux Cluster Administrator
High Performance Computing at the
University of Arkansas
Topic: "Birds of a Feather Meeting: Linux Cluster Administration for High Performance Computing"

BoF Abstract
This BoF, dedicated specifically to the administration of HPC systems, is hopefully the first of many. As HPC systems rapidly become more prevalent, there is an immediate need to adapt administrative support to the unique needs of HPC users. Topics of discussion will include, but not be limited to, building, installing, and configuring Linux clusters, as well as meeting the needs of both individual users and the user community as a whole.

Biography
Jeff Pummill is the Senior Linux Cluster Administrator for the University of Arkansas. Prior to his position at the UofA, he spent 13 years in the fields of mechanical design and structural analysis, while also maintaining a large number of Unix workstations and a small Linux cluster used for Finite Element Analysis. His current areas of interest include hardware architectures, resource managers, compilers, and benchmarking tools.

Horst Severini

Research Scientist
Department of Physics & Astronomy
University of Oklahoma
Topic: "Implementing Linux-enabled Condor in Multiple Windows PC Labs"
(with Joshua Alexander and Chris Franklin)
Slides:     PDF   PowerPoint   Poster

Talk Abstract

At the University of Oklahoma (OU), Information Technology is completing a rollout of Condor, a free opportunistic grid middleware system, across 775 desktop PCs in IT labs all over campus. OU's approach, developed in cooperation with the Research Computing Facility at the University of Nebraska-Lincoln, provides the full suite of Condor features, including automatic checkpointing, suspension, and migration, as well as I/O over the network to disk on the originating machine. These features are normally limited to Unix/Linux installations, but OU's approach makes them available on PCs running Windows as the native operating system, by leveraging coLinux as a mechanism for providing Linux as a virtualized background service. With these desktop PCs otherwise idle approximately 80% of the time, the Condor deployment is allowing OU to get five times as much value out of its desktop hardware.
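As a rough illustration (not taken from the talk), the checkpointing, migration, and remote-I/O features mentioned above correspond to Condor's "standard" universe, which a user would request in a submit description file something like the following; the job and file names here are hypothetical:

```
# Hypothetical Condor submit description file.
# The standard universe provides automatic checkpointing, migration,
# and remote I/O back to disk on the originating (submit) machine.
universe   = standard
executable = sim          # must be relinked with condor_compile
output     = sim.out      # stdout written back to the submit machine
error      = sim.err
log        = sim.log      # Condor's event log for this job
queue
```

On a conventional Windows pool such jobs would be restricted to the more limited "vanilla" universe; running Linux under coLinux is what lets OU's Windows lab machines accept standard-universe jobs.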

Biography

Horst Severini got his Vordiplom (BS equivalent) in Physics at the University of Wuerzburg in Germany in 1988, then went on to earn a Master of Science in Physics in 1990 and a Ph.D. in Particle Physics in 1997, both at the State University of New York at Albany.

He is currently a Research Scientist in the High Energy Physics group at the University of Oklahoma, and also the Grid Computing Coordinator at the Oklahoma Center for High Energy Physics (OCHEP), and the Associate Director for Remote and Heterogeneous Computing at OU Supercomputing Center for Education & Research (OSCER).

Gordon K. Springer

Associate Professor, Department of Computer Science
Director, Research Support Computing
Scientific Director, UM Bioinformatics Consortium
University of Missouri-Columbia
Topic: "Roundtable: The Great Plains Network's Grid Computing & Middleware Initiative"
(with Amy Apon and Greg Monaco)
Slides:   PDF (Bill Spollen)

Roundtable Abstract

This roundtable focuses on recent developments in collaborative middleware among the Great Plains Network participants and an exploration of directions for the coming year. The roundtable will feature a demonstration of resources developed at the University of Missouri, discussion of a project to use Shibboleth as a means of managing identities for a Wiki (e.g., GPN Wiki), participation in the University of Oklahoma Condor project, and, finally, Globus grid issues (extending access to GPN globus-based grid to other users and institutions).

Biography

Gordon K. Springer teaches primarily operating systems and computer networking to upper level undergraduate and graduate students. He is responsible for directing the installation and maintenance of high-performance computing and networking resources for the Columbia campus research community. In addition, he oversees the infrastructure that serves the life science community for the 4-campus UM System. From a research perspective, Dr. Springer has recently been working on developing middleware solutions for collaborations across multiple institutions to facilitate authentication and authorization in Virtual Organizations, especially in the Great Plains Network (GPN).

Dan Stanzione

Director
High Performance Computing Initiative
Arizona State University
Topic: "Towards a Sustainable Business Model for Campus HPC"
Slides: PDF

Talk Abstract

The primary challenges of deploying supercomputing, large-scale storage, and other forms of cyberinfrastructure at a university are rarely technical. Building the political and financial infrastructure to support high performance computing is just as important as, and often more difficult than, addressing the technical issues. This talk will provide an overview of these issues, present as an example the model used at Arizona State to address them, and start a discussion of other possible models for running an HPC operation that a university can and will support.

Biography

Dr. Dan Stanzione, Director of the High Performance Computing Initiative (HPCI) at Arizona State University, joined the Ira A. Fulton School of Engineering in 2004. Prior to ASU, he served as an AAAS Science Policy Fellow in the Division of Graduate Education at the National Science Foundation. Stanzione began his career at Clemson University, where he earned his bachelor of science in electrical engineering and his master's and doctoral degrees in computer engineering. He then directed the supercomputing laboratory at Clemson and also served as an assistant research professor of electrical and computer engineering.

Dr. Stanzione's research focuses on parallel programming, scientific computing, Beowulf clusters, scheduling in computational grids, alternative architectures for computational grids, reconfigurable/adaptive computing, and algorithms for high performance bioinformatics. Also an advocate of engineering education, he facilitates student research through the HPCI and teaches specialized computation engineering courses.

Ravi K. Vadapalli

Research Scientist
High Performance Computing Center
Texas Tech University
Topic: "Grid Computing: What's in It for Me?"
Slides: available after the Symposium

Talk Abstract

Grid computing is an emerging "collaborative" computing paradigm that extends an institution's or organization's high performance computing capabilities well beyond its local resources. Strategic application areas such as bioscience and medicine, energy exploration, and environmental modeling involve strong interdisciplinary components and often require collaborations and computational capabilities beyond institutional limitations. The institution- or organization-specific high performance computing center is the building block of a grid computing environment that spans several administrative/campus domains. In this talk, I will discuss how a grid computing environment can be an excellent win-win paradigm for many stakeholders, including resource providers, application developers, users, industry, and campus administration. As an example, I will outline our efforts through the TIGRE project1 to create a higher-education grid that extends the scope of the aforementioned strategic application areas.

1 Texas Internet Grid for Research and Education (TIGRE) Project Document and the TIGRE Portal

Biography: coming soon

OTHER BREAKOUT SPEAKERS TO BE ANNOUNCED

