Jaekyu Lee, Si Li, Hyesoon Kim, Sudhakar Yalamanchili,
Design Space Exploration of On-chip Ring Interconnection for a CPU-GPU Architecture
Future chip multiprocessors (CMPs) will only grow in core count
and in the diversity of their frequency, power consumption, and resource
distribution. Incorporating a GPU architecture, which is more efficient
for certain classes of applications, into the CMP is the
next stage in this trend. This heterogeneous mix of architectures
will use an on-chip interconnection to access shared resources such
as last-level cache tiles and memory controllers. The configuration
of this on-chip network will likely have a significant impact on resource
distribution, fairness, and overall performance.
The heterogeneity of this architecture inevitably exerts different
pressures on the interconnection due to the differing characteristics
and requirements of applications running on CPU and GPU
cores. CPU applications are sensitive to latency, while GPGPU applications
demand massive bandwidth; the difference stems from the
thread-level parallelism of the two architectures. GPUs use
many concurrent threads to hide memory latency but require high
bandwidth to supply those threads. CPU cores, on the other hand,
typically run only one or two threads concurrently and are therefore
very sensitive to memory latency.
This study surveys the impact on and behavior of the interconnection
network when CPU and GPGPU applications run simultaneously.
Our findings will inform future interconnection studies
of CPU-GPU heterogeneous architectures.