Export Publication

The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.

Export Reference (APA)
Susskind, Z., Arora, A., Miranda, I. D. S., Villon, L. A. Q., Katopodis, R. F., Araújo, L. S. de, ... John, L. K. (2022). Weightless Neural Networks for efficient edge inference. In A. Kloeckner (Ed.), PACT '22: Proceedings of the International Conference on Parallel Architectures and Compilation Techniques (pp. 279-290). Chicago, Illinois: Association for Computing Machinery.
Export Reference (IEEE)
Z. Susskind et al., "Weightless Neural Networks for efficient edge inference," in PACT '22: Proc. of the Int. Conf. on Parallel Architectures and Compilation Techniques, A. Kloeckner, Ed., Chicago, Illinois: Association for Computing Machinery, 2022, pp. 279-290.
Export BibTeX
@inproceedings{susskind2022_1715528603604,
	author = "Susskind, Z. and Arora, A. and Miranda, I. D. S. and Villon, L. A. Q. and Katopodis, R. F. and Araújo, L. S. de and Dutra, D. L. C. and Lima, P. M. V. and França, F. M. G. and Breternitz Jr., M. and John, L. K.",
	title = "Weightless Neural Networks for efficient edge inference",
	booktitle = "PACT '22: Proceedings of the International Conference on Parallel Architectures and Compilation Techniques",
	year = "2022",
	editor = "Kloeckner, A.",
	doi = "10.1145/3559009.3569680",
	pages = "279-290",
	publisher = "Association for Computing Machinery",
	address = "Chicago, Illinois",
	url = "https://dl.acm.org/doi/proceedings/10.1145/3559009"
}
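For readers who want to post-process the exported entry, the quote-delimited `key = "value"` style above can be read with a short Python sketch. This is a naive regex approach, not a general BibTeX parser, and the `parse_bibtex_fields` helper is purely illustrative (it assumes double-quoted values with no embedded quotes, as in the export above):

```python
import re

def parse_bibtex_fields(entry):
    """Extract quote-delimited key = "value" fields from one BibTeX entry.

    Naive sketch: assumes double-quoted values with no embedded quotes,
    as in the export shown above; not a general BibTeX parser.
    """
    return dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', entry))

# Abbreviated sample entry in the same style as the export above.
entry = '''@inproceedings{susskind2022,
    author = "Susskind, Z. and Arora, A.",
    title = "Weightless Neural Networks for efficient edge inference",
    year = "2022",
    doi = "10.1145/3559009.3569680"
}'''

fields = parse_bibtex_fields(entry)
print(fields["doi"])  # -> 10.1145/3559009.3569680
```

For entries that use brace-delimited values or nested braces, a dedicated BibTeX library would be needed instead.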
Export RIS
TY  - CPAPER
TI  - Weightless Neural Networks for efficient edge inference
T2  - PACT '22: Proceedings of the International Conference on Parallel Architectures and Compilation Techniques
AU  - Susskind, Z.
AU  - Arora, A.
AU  - Miranda, I. D. S.
AU  - Villon, L. A. Q.
AU  - Katopodis, R. F.
AU  - Araújo, L. S. de
AU  - Dutra, D. L. C.
AU  - Lima, P. M. V.
AU  - França, F. M. G.
AU  - Breternitz Jr., M.
AU  - John, L. K.
PY  - 2022
SP  - 279-290
DO  - 10.1145/3559009.3569680
CY  - Chicago, Illinois
PB  - Association for Computing Machinery
UR  - https://dl.acm.org/doi/proceedings/10.1145/3559009
AB  - Weightless neural networks (WNNs) are a class of machine learning model which use table lookups to perform inference, rather than the multiply-accumulate operations typical of deep neural networks (DNNs). Individual weightless neurons are capable of learning non-linear functions of their inputs, a theoretical advantage over the linear neurons in DNNs, yet state-of-the-art WNN architectures still lag behind DNNs in accuracy on common classification tasks. Additionally, many existing WNN architectures suffer from high memory requirements, hindering implementation. In this paper, we propose a novel WNN architecture, BTHOWeN, with key algorithmic and architectural improvements over prior work, namely counting Bloom filters, hardware-friendly hashing, and Gaussian-based nonlinear thermometer encodings. These enhancements improve model accuracy while reducing size and energy per inference. BTHOWeN targets the large and growing edge computing sector by providing superior latency and energy efficiency to both prior WNNs and comparable quantized DNNs. Compared to state-of-the-art WNNs across nine classification datasets, BTHOWeN on average reduces error by more than 40% and model size by more than 50%. We demonstrate the viability of a hardware implementation of BTHOWeN by presenting an FPGA-based inference accelerator, and compare its latency and resource usage against similarly accurate quantized DNN inference accelerators, including multi-layer perceptron (MLP) and convolutional models. The proposed BTHOWeN models consume almost 80% less energy than the MLP models, with nearly 85% reduction in latency. In our quest for efficient ML on the edge, WNNs are clearly deserving of additional attention.
ER  -
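An RIS record like the one above is a sequence of tag-value lines of the form `TAG  - value`, with repeatable tags such as `AU`. A minimal Python sketch of a reader for such a record (stdlib only; `parse_ris` is an illustrative helper, not part of any export tool):

```python
def parse_ris(text):
    """Parse one RIS record into a dict mapping tag -> list of values.

    Minimal sketch: each data line is "TAG  - value" (2-char tag,
    two spaces, hyphen, space); repeated tags such as AU accumulate.
    """
    record = {}
    for line in text.splitlines():
        if len(line) >= 6 and line[2:6] == "  - ":
            tag, value = line[:2], line[6:].strip()
            record.setdefault(tag, []).append(value)
    return record

# Abbreviated sample record in the same style as the export above.
sample = """TY  - CPAPER
TI  - Weightless Neural Networks for efficient edge inference
AU  - Susskind, Z.
AU  - Arora, A.
PY  - 2022
ER  -
"""

rec = parse_ris(sample)
print(rec["AU"])  # -> ['Susskind, Z.', 'Arora, A.']
```

The terminator line `ER  -` carries no value and is simply skipped here; a fuller parser would use it to split a file containing multiple records.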