The safety, security and reliability of the nation’s nuclear weapons stockpile increasingly depend on high-performance computing.

This reality was highlighted last year when the Department of Energy’s National Nuclear Security Administration awarded a subcontract to Dell Technologies Inc. for additional supercomputing systems. The contract, CTS-2, provided computing capacity for the Los Alamos, Sandia and Lawrence Livermore National Laboratories for simulation and monitoring services essential to safeguarding the stockpile.

“We’ve been working with Dell for several years on this, and the systems first started being delivered this past August,” said Matt Leininger (pictured, right), deputy of advanced technology projects at the Lawrence Livermore National Laboratory. “We’ve deployed roughly 1,600 nodes now, but that will ramp up to over 6,000 nodes over the next three or four months.”

Leininger spoke with theCUBE industry analysts Paul Gillin and David Nicholson at SC22, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. He was joined by Armando Acosta (pictured, left), director of HPC product management at Dell, and they discussed Dell’s evolving supercomputing partnership with the National Nuclear Security Administration’s laboratories. (* Disclosure below.)

Expanding role for advanced tools

Dell is a co-design partner in CTS-2, according to Leininger. The three laboratories use a number of advanced tools familiar to enterprise IT, including Red Hat Enterprise Linux and the latest generation of Intel Xeon processors.

“Red Hat Enterprise Linux allows a common user environment, a common simulation environment across not only CTS-2, but older systems we have,” Leininger said. “The architecture today is based on fourth generation Intel Xeon. We were one of the first customers to get those systems in.”

The work of the national labs with Dell exemplifies the broader role that supercomputing is beginning to play in protecting critical infrastructure. These systems are also forming a useful test bed for what may become enterprise solutions down the line.

“You have this convergence of HPC, AI and data analytics, and these three workloads are applicable across many vertical markets,” Acosta said. “A lot of stuff that happens in the DOE labs trickles down to the enterprise space. These guys know how to do it at scale, they know how to do it efficiently, and they know how to hit the mark.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the SC22 event:

(* Disclosure: TheCUBE is a paid media partner for the SC22 event. Neither Dell Technologies Inc., the main sponsor for theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE

Show your support for our mission by joining our Cube Club and Cube Event Community of experts. Join the community that includes Amazon Web Services and Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger and many more luminaries and experts.
