A Flexible Heterogeneous Multi-Core Architecture

Miquel Pericas (1), Ruben Gonzalez (2), Adrian Cristal (3), Francisco Cazorla (3), Daniel A. Jimenez (4), Mateo Valero (1)
(1) BSC & UPC, (2) UPC, (3) BSC, (4) Rutgers


Multi-core processors naturally exploit thread-level parallelism (TLP). However, extracting instruction-level parallelism (ILP) from individual applications or threads remains a challenge, because application mixes in this environment are nonuniform. Multi-core processors should therefore be flexible enough to provide high throughput for uniform parallel applications as well as high performance for more general workloads. Heterogeneous architectures are a first step in this direction, but their partitioning remains static and only roughly fits application requirements.

This paper proposes the Flexible Heterogeneous Multi-Core processor (FMC), the first dynamic heterogeneous multi-core architecture capable of reconfiguring itself to fit application requirements without programmer intervention. The basic building block of this architecture is a scalable, variable-size window microarchitecture that exploits the concept of Execution Locality to provide large-window capabilities. This allows us to overcome the memory wall, the main performance limiter for applications with high memory-level parallelism (MLP). The microarchitecture contains a set of small, fast cache processors that execute high-locality code and a network of small in-order memory engines that together execute low-locality code. Single-threaded applications can use the entire network of cores, while multi-threaded applications can efficiently share the resources. The sizing of critical structures remains small enough to fit within current power envelopes.
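The steering decision described above can be illustrated with a small sketch. The names and the classification rule here are illustrative assumptions, not taken from the paper: an instruction is treated as "low locality" if it is a long-latency cache miss or depends, directly or transitively, on one, and "high locality" otherwise; high-locality code would run on the fast cache processors and low-locality code on the in-order memory engines.

```python
# Hypothetical execution-locality classifier (illustrative only).
# Each instruction is (id, deps, is_miss): its identifier, the ids of the
# instructions it depends on, and whether it is a long-latency cache miss.

def classify(instrs):
    """Classify instructions in program order.

    Returns a dict mapping instruction id to "low" (miss-dependent,
    steered to the memory engines) or "high" (miss-independent,
    steered to the cache processors).
    """
    locality = {}
    for iid, deps, is_miss in instrs:
        # An instruction inherits low locality from any low-locality producer.
        tainted = is_miss or any(locality[d] == "low" for d in deps)
        locality[iid] = "low" if tainted else "high"
    return locality

# Example: i2 misses in the cache; i3 consumes its result; i4 is independent.
program = [
    (1, [], False),   # i1: add  r1 <- r0, r0   (high locality)
    (2, [1], True),   # i2: load r2 <- [r1]     (cache miss)
    (3, [2], False),  # i3: add  r3 <- r2, r1   (depends on the miss)
    (4, [1], False),  # i4: sub  r4 <- r1, r0   (miss-independent)
]
print(classify(program))  # {1: 'high', 2: 'low', 3: 'low', 4: 'high'}
```

A single program-order pass suffices here because producers precede consumers; a real pipeline would make this decision incrementally at rename time.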

In single-threaded mode, this processor outperforms a previous state-of-the-art high-performance processor proposal by 11% on SPEC FP. We also show that in a quad-threaded/quad-core environment, the processor outperforms a statically partitioned FMC configuration by around 2-4% in both throughput and harmonic mean, two metrics commonly used to evaluate SMT performance. This is achieved with a very simple resource-sharing algorithm.