🤖 AI Summary
Printed electronics (PE) face significant challenges in implementing complex machine learning (ML) classifiers efficiently because the technology's large feature sizes limit deployment in low-cost, flexible hardware. To address this, we propose a multiplier-free, hybrid unipolar/bipolar computing architecture designed specifically for PE that eliminates conventional encoders and integrates architecture-aware training for end-to-end hardware-algorithm co-optimization. This approach drastically reduces circuit complexity and hardware overhead while enabling energy-efficient, low-power implementation of multilayer perceptrons (MLPs). Evaluated on six benchmark datasets, our design achieves, on average, a 46% reduction in area and a 39% reduction in power consumption compared to state-of-the-art PE-based MLP implementations, with negligible accuracy degradation. The proposed architecture establishes a scalable hardware paradigm for edge intelligence enabled by printed electronics.
📝 Abstract
Printed Electronics (PE) provide a flexible, cost-efficient alternative to silicon for implementing machine learning (ML) circuits, but their large feature sizes limit classifier complexity. Leveraging PE's low fabrication and non-recurring engineering (NRE) costs, designers can tailor hardware to a specific ML model, simplifying circuit design. This work explores alternative arithmetic and proposes a hybrid unary-binary architecture that removes costly encoders and enables efficient, multiplier-less execution of MLP classifiers. We also introduce architecture-aware training to further improve area and power efficiency. Evaluation on six datasets shows average reductions of 46% in area and 39% in power over state-of-the-art printed MLP designs, with minimal accuracy loss.
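The abstract does not spell out how multiplier-less unipolar/bipolar arithmetic works, so as background, here is a minimal sketch of the classical stochastic/unary computing identities such architectures typically build on: with values encoded as bitstream probabilities, a single AND gate multiplies two unipolar (range [0, 1]) values and a single XNOR gate multiplies two bipolar (range [-1, 1]) values. The stream length `N` and the encoding functions below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # bitstream length; longer streams reduce stochastic error

def unipolar_stream(v, n):
    """Encode v in [0, 1] as a random bitstream with P(bit = 1) = v."""
    return rng.random(n) < v

def bipolar_stream(x, n):
    """Encode x in [-1, 1] as a bitstream with P(bit = 1) = (x + 1) / 2."""
    return rng.random(n) < (x + 1) / 2

# Unipolar multiply: one AND gate per bit instead of a binary multiplier.
a, b = 0.6, 0.35
prod_uni = np.mean(unipolar_stream(a, N) & unipolar_stream(b, N))

# Bipolar multiply: one XNOR gate per bit; decode the result as 2*p - 1.
x, y = 0.5, -0.4
prod_bi = 2 * np.mean(~(bipolar_stream(x, N) ^ bipolar_stream(y, N))) - 1

print(round(prod_uni, 2), round(prod_bi, 2))  # close to 0.21 and -0.2
```

Because each product needs only a single gate per bit, the per-weight hardware cost is tiny compared to a binary multiplier, which is why such encodings suit PE's large feature sizes; the trade-off is that accuracy depends on stream length.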