My guest on Innovators this week is Greg Wilson. We share common interests in collaboration and Python, but neither of those topics was the focus of this conversation. Instead, we discussed Greg’s unique and somewhat curmudgeonly take on high-performance computing. In his view, the HPC industry has focused on achieving bigger and faster computation at the expense of human productivity, verifiable correctness, and reproducibility.
I claim no expertise in that field, but Greg is an expert, so I wondered what he’d think about the approach discussed in one of my recent Perspectives shows, Cluster computing for the classroom. On that show, Kyril Faenov — Microsoft’s general manager for Windows HPC — describes a system that enables professors to define computational models that students can check out, tweak, and then run against large datasets on a compute cluster.
From a human-productivity standpoint, Greg likes that approach. But he’d prefer to see more attention paid to verifying the correctness of the models, and to ensuring that code and data are managed in ways that make experiments reliably reproducible.
Disclosure: While working at Los Alamos National Laboratory back in 2000, Greg commissioned me to write a report on Internet Groupware for Scientific Collaboration.