Entrepreneurs and Researchers Seek Parallel Computing Clusters 2005-07-18 11:47:11
Fifteen Spokane-area research scientists, representing all of the area’s universities and several private-sector companies, joined research administrators and economic development gurus at SIRTI on Thursday, July 14, to brainstorm applications for parallel computer clusters. The thirty attendees were treated to a cluster computing "state of the union" update by Bernard Daines of Liberty Lake Internet Exchange (LLIX) and Joshua Harr, Ph.D., CTO of Linux Networx. The Applied Parallel Computer Cluster roundtable was sponsored by Eastern Washington University (EWU), LLIX, the Spokane Area Economic Development Council, and Virtual Possibilities Network (VPnet).
Harr explained Linux Networx’s commercially available parallel computer product line and parallel computing in general. The most common computer cluster is a group of loosely coupled computers that work together as though they were a single machine. These clusters are commonly connected through fast local area networks. Clusters are usually deployed to improve speed and/or reliability over that provided by a single computer, while typically being much more cost-effective than large single computers of comparable speed or reliability.
Cluster types include high-availability (HA) clusters, load-balancing and distribution clusters, and high-performance (HPC) clusters. HA clusters are implemented primarily to improve the availability of the services the cluster provides. They operate by keeping redundant nodes that take over when system components fail. HA cluster implementations attempt to manage the redundancy inherent in a cluster to eliminate single points of failure.
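The failover idea behind HA clusters can be sketched in a few lines. This is an illustrative sketch only, not anything presented at the roundtable; the node names and health check are made up for the example.

```python
def pick_active_node(nodes, is_healthy):
    """Return the first healthy node, mimicking HA failover:
    redundant standbys provide service when the primary fails."""
    for node in nodes:
        if is_healthy(node):
            return node
    raise RuntimeError("no healthy node available: total outage")

# Hypothetical cluster: one primary plus redundant standbys.
nodes = ["primary", "standby-1", "standby-2"]
down = {"primary"}  # suppose the primary has just failed

active = pick_active_node(nodes, lambda n: n not in down)
print(active)  # the first standby takes over
```

Because every node can serve, there is no single point of failure as long as at least one node stays healthy.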
Load-balancing and distribution clusters operate by having all workload come through one or more load-balancing front ends, which then distribute the work to a collection of back-end servers. Although they are implemented primarily for improved performance, they commonly include high-availability features as well. Such a cluster of computers is sometimes referred to as a “farm”.
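A minimal sketch of the front-end distribution just described, assuming a simple round-robin policy (one of several a real load balancer might use); the server and request names are hypothetical.

```python
from itertools import cycle

# Round-robin front end: each incoming request goes to the
# next back-end server in turn.
backends = cycle(["server-a", "server-b", "server-c"])

def dispatch(request):
    """Assign a request to the next back end and return the pairing."""
    return next(backends), request

assignments = [dispatch(f"req-{i}")[0] for i in range(6)]
print(assignments)
```

Six requests land evenly, two per back end, which is the performance win the article attributes to this cluster type.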
High-performance (HPC) clusters are implemented primarily to provide increased performance by splitting a computational task across many different nodes and are now most commonly used in heavily computational scientific computing. Parallel processing can reduce the time needed for a task -- like calculating the fluid dynamics of air flow over an airplane wing -- from days to hours.
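The split-across-nodes idea can be illustrated with a toy decomposition. This sketch runs serially on one machine; on a real HPC cluster each chunk would be computed on a separate node. The workload (summing squares) is a stand-in, not the wing-airflow example above.

```python
def split(n, n_nodes):
    """Partition range(n) into contiguous chunks, one per node."""
    size = -(-n // n_nodes)  # ceiling division
    return [range(i, min(i + size, n)) for i in range(0, n, size)]

def node_work(chunk):
    """The work a single node would do on its chunk."""
    return sum(x * x for x in chunk)

# Four "nodes" each handle a quarter of the problem; partial
# results are then combined, just as an HPC job gathers results.
partial = [node_work(c) for c in split(1000, 4)]
total = sum(partial)
print(total)
```

Since the chunks are independent, the four pieces could run at the same time, which is where the days-to-hours speedup comes from.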
Linux Networx’s parallel computing system is in the high-performance cluster category and is a cluster whose nodes run Linux as the operating system. These clusters are so new that there are few programs designed to exploit the parallelism available on HPC clusters. Many such programs use libraries such as MPI, which are specially designed for writing scientific applications for HPC computers. MPI is notoriously difficult to program, according to Harr.
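To give a flavor of MPI's programming model, here is a rough analogy, not actual MPI code: MPI programs coordinate explicit sends and receives between numbered processes ("ranks"). The sketch below mimics two ranks exchanging a message over in-process queues; a real MPI program would use calls like MPI_Send and MPI_Recv instead, and the bookkeeping of who sends what to whom is part of what makes MPI hard.

```python
import queue
import threading

# Mailboxes standing in for MPI's communication channels.
to_rank1 = queue.Queue()
to_rank0 = queue.Queue()
results = []

def rank0():
    to_rank1.put({"data": [1, 2, 3]})  # analogous to MPI_Send to rank 1
    results.append(to_rank0.get())     # analogous to MPI_Recv from rank 1

def rank1():
    msg = to_rank1.get()               # analogous to MPI_Recv from rank 0
    to_rank0.put(sum(msg["data"]))     # send the partial result back

t0 = threading.Thread(target=rank0)
t1 = threading.Thread(target=rank1)
t1.start(); t0.start()
t0.join(); t1.join()
print(results)  # rank 0 has collected rank 1's partial sum
```

Even in this toy version, each side must know exactly when to send and when to wait; mismatched sends and receives deadlock, which hints at why Harr calls MPI difficult.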
Following Harr’s overview, fifteen participants reported briefly on their research in progress. Each reported that their project uses significant computer processing time and would benefit greatly from high-performance parallel processing. The projects spanned a wide range of fields. Kosuke Imamura, EWU Computer Science, reported work in machine learning applied to bioinformatics problems. Don Douglas of Toolbuilders reported that company’s need for rapid feedback in the analysis of coding problems. Gregory Wintz, EWU Occupational Therapy, works with rehabilitation of drug abusers and has applications for parallel computing in that field. Kang ChulHee of WSU works with genomics. Paul De Palma of Gonzaga University is developing a genetic algorithm for applications in civil engineering. Massimo Capobianchi of Gonzaga is developing software tools to simulate the behavior of integrated circuits. John Mill of CCS and Atsushi Inoue of EWU are working in “human centric computing: information and knowledge management,” using artificial intelligence (AI), including applications to intrusion detection for computer networks.
The four sponsors were inspired by the diversity and number of ideas presented at the roundtable. Their next step is to seek a cluster computer installation on the extensive fiber fabric (500 cable miles) in the Inland Northwest to host both novel and existing cluster applications.