Benchmarking large-scale LLM training on LUMI

Last updated: January 11, 2024

CSC, a partner in the GreenNLP project, has evaluated how well large language model (LLM) training scales on the LUMI supercomputer. The results indicate no fundamental scaling bottlenecks, even when training with thousands of GPUs.

You can read more about the evaluation on CSC’s website.