Over at Scientific Computing’s blog, Gord Sissons of IBM makes an interesting case that the availability of large amounts of inexpensive storage has started to drive the evolution of HPC, much as cheap compute reshaped the field over the last few decades.
“HPC is changing again, and the catalyst this time around is Big Data. As storage becomes more cost-effective, and we acquire the means to electronically gather more data faster than ever before, data architectures are being re-considered once again. What happened to compute over two decades ago is happening today with storage.”
Read the full post here.