The Parallel & Distributed Computing Lab (PDCL) conducts research at the intersection of high-performance computing and big-data processing. Our group works in the broad area of Parallel & Distributed ...
The basics of distributed computing. Any time a workload is distributed between two or more computing devices or machines connected by some type of network, that’s distributed computing. There are a ...
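The definition above can be made concrete with a small sketch: one endpoint ships a workload over a network connection, and a second endpoint computes the result and sends it back. This is a hypothetical illustration (both endpoints run on localhost here; in a real system the worker would be a separate machine), not code from any of the groups mentioned.

```python
import socket
import threading

def worker(server_sock):
    """Accept one connection, square each received integer, reply with the results."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024).decode()
        numbers = [int(x) for x in data.split(",")]
        results = [n * n for n in numbers]
        conn.sendall(",".join(map(str, results)).encode())

def distribute(numbers):
    """Send a list of numbers to the worker over a socket and collect the results."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=worker, args=(server,))
    t.start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(",".join(map(str, numbers)).encode())
        reply = client.recv(1024).decode()
    t.join()
    server.close()
    return [int(x) for x in reply.split(",")]

print(distribute([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

The essential point is that the caller and the worker share no memory: all coordination happens through messages on the network, which is what distinguishes distributed from shared-memory parallel computing.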
In this video, Torsten Hoefler from ETH Zurich presents: Scientific Benchmarking of Parallel Computing Systems. "Measuring and reporting performance of parallel computers constitutes the basis for ...
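One practice that scientific benchmarking advocates is repeating a measurement many times and reporting a robust statistic with its spread, rather than a single run. The sketch below illustrates that idea only; it is a hypothetical example, not Hoefler's actual methodology.

```python
import statistics
import time

def benchmark(fn, trials=30):
    """Time fn() `trials` times; return (median, min, max) wall-clock seconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    # Median is robust to outliers (e.g. OS jitter); min/max show the spread.
    return statistics.median(samples), min(samples), max(samples)

median, lo, hi = benchmark(lambda: sum(range(100_000)))
print(f"median={median:.6f}s  min={lo:.6f}s  max={hi:.6f}s")
```

Reporting the median together with the observed range makes results reproducible in a way a single timing number is not.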
Scaling AI Isn't A Computing Problem... Dedicated hardware, like GPUs (graphics processing units) and TPUs (tensor processing units), has become essential for training AI models.