Rocks v4.2 is released for i386 and x86_64 CPU architectures.

This release supports the latest multi-core CPUs from AMD and Intel (e.g., Intel's Woodcrest).

New Features

  • Bioinformatics Roll
  • The Bio Roll contains tools and utilities that facilitate bioinformatics computation. Included tools:

    • HMMER - From Washington University in St. Louis.
    • BLAST - From the National Center for Biotechnology Information.
    • mpiBLAST - From Los Alamos National Laboratory.
    • Biopython.
    • ClustalW - From the European Bioinformatics Institute.
    • MrBayes - From the School of Computational Science at Florida State University.
    • T-Coffee - From Information Génomique et Structurale at the Centre National de la Recherche Scientifique.
    • EMBOSS - From the European Molecular Biology Institute.
    • PHYLIP - From the Dept. of Biology at the University of Washington.
    • FASTA - From the University of Virginia.
    • Glimmer - From the Center for Bioinformatics and Computational Biology.

  • Graphical Installer
  • The frontend and client nodes (e.g., compute nodes and viz tile nodes) now use the graphical mode of Red Hat's Anaconda installer. For client nodes, the graphical installation screens can be monitored with a new utility called 'rocks-console', which connects to the VNC server running on the installing client node (a minimal connectivity sketch follows this list). Additionally, a new Rocks tool named 'screengen' transforms XML 'screen' nodes into user-input screens (e.g., user input for IP addresses, the root password, etc.).

  • Restore Roll
  • The Restore Roll is used to upgrade or reinstall your frontend. It saves and restores user accounts and cluster node information. It can be used to upgrade an existing v4.1 frontend to v4.2 or to reinstall a current v4.2 frontend, and, in the case of a frontend failure, to reconstruct the frontend on new hardware. See the User's Guide for details.
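
The 'rocks-console' utility described above attaches to the VNC server on an installing client node. The following is a minimal, purely illustrative sketch (not part of Rocks) of the kind of pre-flight check such a tool could perform; it assumes the installer's display is exported on the standard VNC port for display :1 (TCP 5901), and the node name is a placeholder.

    import socket

    # Illustrative sketch only; not the actual rocks-console implementation.
    # Assumes the installing client node exposes its installer display via a
    # VNC server on display :1 (TCP port 5901); adjust for your environment.

    def vnc_ready(node, port=5901, timeout=2.0):
        """Return True if the node is accepting VNC connections on the port."""
        try:
            with socket.create_connection((node, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        node = "compute-0-0"   # hypothetical client node name
        if vnc_ready(node):
            print("%s: VNC server is up; install screens can be viewed" % node)
        else:
            print("%s: no VNC server yet; the node may still be booting" % node)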

Enhancements

  • OS Roll based on CentOS release 4/update 3 and all updates as of August 6, 2006.
  • Updated SGE roll to SGE 6 update 8.
  • Grid Roll updated to Globus Toolkit version 4.0.2.
  • Included tentakel as a cluster-fork alternative.
  • Auto partitioning now creates a /var file system to protect the frontend against services that log many messages and fill up the root file system. The default syslog level has also been changed from "debug" to "info" for less verbose event logging.
  • Anaconda installer upgraded from 10.1.1.13 to 10.1.1.37.
  • Hooks added to ease the internationalization of Rocks.
  • Includes the latest development version of SAGE from the Electronic Visualization Laboratory (EVL) at UIC. SAGE's stability is greatly improved for displaying both movies and images. All software from EVL is now in the Viz Roll, and the separate EVL roll no longer exists. The Viz Roll now supports TwinView mode on nVidia cards.
  • Rocks foundation expanded beyond Python. Now includes Perl, ANT, RCS, CVS, and other utilities required by Rocks Rolls.
  • Condor updated to v6.8.0.
  • Installing client nodes now respond only to DHCP responses from a Rocks frontend. In previous releases, if a client node had an additional ethernet connection to a public network and that public network had a DHCP server, the client node would assume the public DHCP server was a frontend. In v4.2, the client node checks whether the response came from a frontend (a conceptual sketch of this check follows this list).
  • Added Open MPI version 1.1 to the HPC roll.
  • Java updated from v1.5.0_05 to v1.5.0_07.
  • Simplified 'central-based' frontend installations. When a frontend is booted from CD, one can install from CD, from the network (a.k.a., a central install), or a combination of the two, simply by typing 'frontend' at the CD 'boot:' prompt.
  • Viz: updated nVidia driver from v7667 to v8762.
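
As referenced in the DHCP item above, an installing node should only accept offers that originate from its Rocks frontend. The sketch below is purely illustrative (it is not the code used by the Rocks installer); it shows how the DHCP server identifier (option 54) can be extracted from a raw offer packet and compared against a known frontend address, where the frontend IP shown is an assumption.

    import socket

    # Illustrative only: extract the DHCP "server identifier" (option 54)
    # from a raw DHCP offer and accept the offer only if it came from a
    # known Rocks frontend. The real check happens inside the Rocks
    # installer, not in a standalone script like this.

    MAGIC_COOKIE = b"\x63\x82\x53\x63"   # marks the start of DHCP options

    def dhcp_server_id(packet):
        """Return the server-identifier IP of a raw DHCP packet, or None."""
        # Options begin after the 236-byte fixed header plus the magic cookie.
        if len(packet) < 240 or packet[236:240] != MAGIC_COOKIE:
            return None
        i = 240
        while i + 1 < len(packet):
            code = packet[i]
            if code == 255:                  # end-of-options marker
                break
            if code == 0:                    # pad byte, no length field
                i += 1
                continue
            length = packet[i + 1]
            value = packet[i + 2:i + 2 + length]
            if code == 54 and length == 4:   # server identifier
                return socket.inet_ntoa(value)
            i += 2 + length
        return None

    def offer_is_from_frontend(packet, frontend_ips):
        """True if the DHCP offer was sent by one of the given frontend IPs."""
        server = dhcp_server_id(packet)
        return server is not None and server in frontend_ips

    # Example: 10.1.1.1 is an assumed private-side address of the frontend.
    #   offer_is_from_frontend(raw_offer_bytes, {"10.1.1.1"})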

Bug Fixes

  • Ganglia no longer logs all errors to /var/log/messages. This addresses a web-based usage mode of Ganglia that sent frequent error messages to /var/log/messages.
  • Can once again access the MySQL database via the frontend's web site.
  • Can once again print labels for cluster nodes via the frontend's web site.
  • Can once again view the kickstart graph via the frontend's web site.
  • The SGE Roll now includes configuration code to clean up MPI programs after a user executes a 'qdel' command (an illustrative sketch of this kind of cleanup appears after this list).
  • Now using the native 'useradd' command. This addresses the issue of useradd breaking when up2date refreshes the 'shadow-utils' RPM.
  • The '--ghost' flag was removed from autofs.
  • Permissions for the directories that hold the rolls are now always set to 755 every time rocks-dist is run.
  • During a compute node installation, if the compute node is connected to a public network (e.g., via eth1) and that public network has a DHCP server, the compute node will now ignore that DHCP server and continue to send DHCP requests until it hears from a Rocks frontend.
  • The grub splash screen was removed, which will allow headless nodes and nodes with simple video controllers to boot after an installation.
  • mount-loop and umount-loop are removed from the distribution.
  • The oom-killer is turned off; a node will now panic when it is out of memory rather than kill random processes (thanks to Roy Dragseth for the fix).
  • Avalanche Installer fix: If an installing node dies mid-installation while other nodes are also installing, the non-failed installing nodes now proceed at full speed. In v4.1, the package download speed of the non-failed nodes would be dramatically reduced. Thanks to Dell's HPC group for characterizing this bug.
  • Avalanche Installer fix: Client nodes now install even if the tracker is running and there are no torrent files for the RPMs.
  • DHCP configuration file simplified in order to handle a wider range of PXE requests. This addresses the issue where PXE-booting nodes would display the message "PXE-E55: ProxyDHCP service did not reply to request on port 4011."
  • DNS fix: for frontend machines that define their internal ethernet IP address with a netmask of more than 8 bits (e.g., 255.255.255.0), the reverse-lookup configuration files are now written correctly (see the reverse-zone sketch after this list).
  • Deprecated rocks-dist command "mirror" is removed.
  • A static route is added on all nodes to access the frontend over the private network. This fixes the case of SGE failing on clusters with secondary public network interfaces on the compute nodes.
  • Globus is now correctly installed on compute nodes (for the SDK).
  • Local root exploit using the Rocks loopback mounting utilities is fixed, and the utilities are removed.
  • Removed the --nolocal flag from the SGE GRAM for Globus/MPI jobs.
  • Fixed XDMX to support viz walls larger than 16 tiles.
  • Fixed a NAS partitioning issue that caused an anaconda exception.
  • Root passwords can now contain ' ` (tick and backtick) characters.
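
Regarding the 'qdel' cleanup item above: the sketch below is not the SGE Roll's actual configuration, only an illustration of the general technique an epilog-style cleanup can use, namely finding processes still owned by the job's user whose command line mentions an MPI launcher and terminating them. The launcher names and the user name are assumptions.

    import os
    import pwd
    import signal

    # Illustrative epilog-style cleanup, not the SGE Roll's actual code.
    # Kills leftover processes of a given user whose command line mentions
    # an MPI launcher (the launcher names below are assumptions).

    def leftover_mpi_pids(username, patterns=("mpirun", "mpiexec")):
        """Yield PIDs owned by username whose command line matches a pattern."""
        uid = pwd.getpwnam(username).pw_uid
        for pid in os.listdir("/proc"):
            if not pid.isdigit():
                continue
            try:
                if os.stat("/proc/%s" % pid).st_uid != uid:
                    continue
                with open("/proc/%s/cmdline" % pid, "rb") as f:
                    cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
            except OSError:
                continue                 # process exited while we were looking
            if any(p in cmdline for p in patterns):
                yield int(pid)

    def cleanup(username):
        """Send SIGTERM to any leftover MPI processes owned by the user."""
        for pid in leftover_mpi_pids(username):
            try:
                os.kill(pid, signal.SIGTERM)   # polite termination first
            except OSError:
                pass

    # Example with a hypothetical job owner:
    #   cleanup("jsmith")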
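
Regarding the DNS fix above: the reverse-lookup configuration is driven by the zone name derived from the private network address and netmask. The sketch below is not Rocks code; it simply shows that derivation for netmasks that fall on an octet boundary, and the example network values are assumptions.

    import socket
    import struct

    # Illustrative only: derive the in-addr.arpa reverse zone name for a
    # private network, for netmasks that fall on an octet boundary.

    def reverse_zone(network_ip, netmask):
        """Return the reverse-lookup zone for the given network/netmask."""
        mask = struct.unpack("!I", socket.inet_aton(netmask))[0]
        prefix_len = bin(mask).count("1")
        if prefix_len % 8 != 0:
            raise ValueError("only octet-aligned netmasks handled in this sketch")
        octets = network_ip.split(".")[: prefix_len // 8]
        return ".".join(reversed(octets)) + ".in-addr.arpa"

    # Examples with assumed Rocks-style private networks:
    #   reverse_zone("10.1.0.0", "255.255.0.0")   -> "1.10.in-addr.arpa"
    #   reverse_zone("10.1.1.0", "255.255.255.0") -> "1.1.10.in-addr.arpa"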