🤖 AI Summary
Multicore chips face fundamental trade-offs among performance, power consumption, and energy efficiency as core count increases.
Method: This work establishes a hardware-algorithm co-design modeling framework under area constraints, integrating theoretical complexity analysis with parallel computation models.
Contribution/Results: We rigorously prove, for the first time, that on a single die integrating *m* cores, speedup, power, and energy asymptotically obey √*m*, 1/√*m*, and 1/*m* scaling laws, respectively—refuting conventional linear scalability assumptions. These results establish tight theoretical bounds on multicore architecture: optimal speedup scales as Θ(√*m*), power as Θ(1/√*m*), and per-task energy as Θ(1/*m*). The derived bounds provide verifiable design ceilings for ultra-large-scale integrated circuits and yield energy-efficiency–driven scaling principles for future many-core systems.
📝 Abstract
When a single core is scaled up to m cores occupying the same chip area and executing the same (parallelizable) task, the achievable speedup is √m, power is reduced by a factor of √m, and energy by a factor of m. Many-core architectures can therefore be more efficient than single-core and small-core-count multicore architectures.
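The three scaling laws above can be sketched numerically. The sketch below is a minimal illustration, not the paper's derivation: it assumes, in the spirit of Pollack's rule (an assumption not stated in this summary), that a core of area a delivers performance proportional to √a, so splitting a fixed die area across m cores yields the claimed √m speedup, 1/√m power, and 1/m energy trends.

```python
import math

def scaling(m, total_area=1.0):
    """Illustrative fixed-area scaling for m cores on one die.

    Hypothetical model: per-core performance is proportional to
    sqrt(core_area) (Pollack's-rule-style assumption), the workload
    is fully parallelizable, and power tracks the claimed Theta(1/sqrt(m)).
    """
    # Each core gets area total_area/m and runs at sqrt(total_area/m)
    # relative to a unit-area single core.
    per_core_perf = math.sqrt(total_area / m)
    speedup = m * per_core_perf        # m cores in parallel: Theta(sqrt(m))
    power = 1.0 / math.sqrt(m)         # claimed power scaling: Theta(1/sqrt(m))
    energy = power / speedup           # energy = power * time: Theta(1/m)
    return speedup, power, energy

for m in (1, 4, 16, 64):
    s, p, e = scaling(m)
    print(f"m={m:3d}  speedup={s:5.2f}  power={p:5.3f}  energy={e:6.4f}")
```

For m = 16, for example, the model gives a 4x speedup at one quarter of the power, so energy per task drops to 1/16 of the single-core baseline, matching the 1/m law in the abstract.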