About
The High-Performance Computing Cluster, aptly named "Spiedie," is housed at the Thomas J. Watson College of Engineering and Applied Science's data center in the Innovative Technology Complex. This research facility offers computing capabilities for researchers across Binghamton University.
Raw Stats
- 20 Core 128GB Head Node
- 328TB Available FDR InfiniBand-Connected NFS Storage Node
- 143 Compute Nodes
- 3372 native compute cores
- 8x NVIDIA H100 NVL GPUs, 8x NVIDIA A40 GPUs, 10x NVIDIA A5000 GPUs, 6x NVIDIA P100 GPUs
- 40, 56, and 400 Gb/s InfiniBand to all nodes
- 1GbE to all nodes for management and OS deployment
Since its deployment, the Spiedie cluster has gone through various expansions and upgrades, growing from 32 compute nodes to 143 compute nodes as of December 2024. Most of these expansions came from individual researcher grant awards. These individuals recognized the importance of the cluster in advancing their research and helped grow this valuable resource.
Watson College continues to pursue opportunities to enhance the Spiedie cluster and to expand its reach to researchers in other transdisciplinary areas. Support for the cluster has come from Watson College and from researchers in the Chemistry, Computer Science, Electrical and Computer Engineering, Mechanical Engineering, and Physics departments.
Head Node
The head node is a Red Barn HPC server with dual Intel(R) Xeon(R) E5-2640 v4 CPUs @ 2.40GHz, 128GB of DDR4 RAM, and dedicated SSD storage.
Storage Node
A common file system accessible by all nodes is hosted on a second Red Barn HPC server providing 328TB, with the ability to add further storage drives. Storage is accessible via NFS over 56 and 400 Gb/s InfiniBand interfaces.
Compute Nodes
The 143 compute nodes are a heterogeneous mixture of varying processor architectures, generations, and capacities.
Management and Network
Networking between the head, storage, and compute nodes uses InfiniBand for inter-node communication and Ethernet for management. Bright Cluster Manager provides monitoring and management of the nodes, while SLURM handles job submission, queuing, and scheduling. The cluster currently supports MATLAB jobs of up to 600 cores, along with VASP, COMSOL, R, and almost any *nix-based application.
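For reference, jobs are submitted to SLURM as batch scripts. The sketch below is a minimal illustrative example, not an official template: the job name and program are hypothetical placeholders, and the core count and wall time shown simply mirror the subsidized-access limits described under Cluster Policy below.

```bash
#!/bin/bash
# Minimal SLURM batch script sketch for Spiedie. Names and values below are
# illustrative assumptions; check with Watson IT for actual queue settings.
#SBATCH --job-name=example_job     # hypothetical job name
#SBATCH --ntasks=48                # subsidized access caps a group at 48 running cores
#SBATCH --time=122:00:00           # 122 hr wall time limit
#SBATCH --output=example_%j.out    # %j expands to the SLURM job ID

# Launch the application (replace with your own program)
srun ./my_application
```

Submit the script with `sbatch`, check its status with `squeue -u $USER`, and cancel it with `scancel <jobid>` if needed.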
Cluster Policy
High-Performance Computing at Binghamton is a collaborative environment where computational resources have been pooled together to form the Spiedie cluster.
Access Options
Subsidized access (No cost)
- Maximum of 48 running cores per faculty group
- Storage is monitored
- Higher priority queues have precedence
- Fair-share queue enabled
- 122 hr wall time
Yearly subscription access
- $1,675/year per faculty research group
- Running queue core restrictions are removed
- Fair-share queue enabled
- Storage is monitored
- 122 hr wall time
- Access is granted per research group
Condo access
Purchase your own nodes to integrate into the cluster
- High priority on your nodes
- Fair-share access to other nodes
- No limits on job submission to your nodes
- Storage is monitored
- Your nodes are accessible to others when not in use
Watson IT will assist with quoting, acquisition, integration and maintenance of purchased nodes. For more information on adding nodes to the Spiedie cluster, email Phillip Valenta.