Export Publication

The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.

Export Reference (APA)
Susskind, Z., Bacellar, A. T. L., Arora, A., Villon, L. A. Q., Mendanha, R., Araújo, L. S. de., . . . John, L. K. (2022). Pruning weightless neural networks. In ESANN 2022 proceedings (pp. 37-42). Bruges (online): ESANN.
Export Reference (IEEE)
Z. Susskind et al., "Pruning weightless neural networks," in ESANN 2022 proceedings, Bruges (online): ESANN, 2022, pp. 37-42.
Export BibTeX
@inproceedings{susskind2022_1715374040876,
	author = "Susskind, Z. and Bacellar, A. T. L. and Arora, A. and Villon, L. A. Q. and Mendanha, R. and Araújo, L. S. de. and Dutra, D. L. C. and Lima, P. M. V. and França, F. M. G. and Miranda, I. D. S. and Breternitz Jr., M. and John, L. K.",
	title = "Pruning weightless neural networks",
	booktitle = "ESANN 2022 proceedings",
	year = "2022",
	editor = "",
	volume = "",
	number = "",
	series = "",
	doi = "10.14428/esann/2022.ES2022-55",
	pages = "37-42",
	publisher = "ESANN",
	address = "Bruges (online)",
	organization = "",
	url = "https://www.esann.org/proceedings/2022"
}
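
For reference, the sketch below shows one way the exported BibTeX entry could be read programmatically in Python. It assumes the bibtexparser 1.x API (bibtexparser.loads); the entry key susskind2022_1715374040876 comes from the record above, and the author list is abridged here for brevity.

import bibtexparser  # assumes the bibtexparser 1.x package is installed

# Raw BibTeX string as exported above (abridged to a few fields).
bibtex_str = """
@inproceedings{susskind2022_1715374040876,
    author = "Susskind, Z. and Bacellar, A. T. L. and John, L. K.",
    title = "Pruning weightless neural networks",
    booktitle = "ESANN 2022 proceedings",
    year = "2022",
    doi = "10.14428/esann/2022.ES2022-55",
    pages = "37-42"
}
"""

# bibtexparser.loads returns a BibDatabase; its entries attribute is a list
# of dicts keyed by lowercase field names, plus ID and ENTRYTYPE.
database = bibtexparser.loads(bibtex_str)
entry = database.entries[0]
print(entry["ID"], entry["title"], entry["doi"])
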
Export RIS
TY  - CPAPER
TI  - Pruning weightless neural networks
T2  - ESANN 2022 proceedings
AU  - Susskind, Z.
AU  - Bacellar, A. T. L.
AU  - Arora, A.
AU  - Villon, L. A. Q.
AU  - Mendanha, R.
AU  - Araújo, L. S. de.
AU  - Dutra, D. L. C.
AU  - Lima, P. M. V.
AU  - França, F. M. G.
AU  - Miranda, I. D. S.
AU  - Breternitz Jr., M.
AU  - John, L. K.
PY  - 2022
SP  - 37-42
DO  - 10.14428/esann/2022.ES2022-55
CY  - Bruges (online)
UR  - https://www.esann.org/proceedings/2022
AB  - Weightless neural networks (WNNs) are a type of machine learning model which perform prediction using lookup tables (LUTs) instead of arithmetic operations. Recent advancements in WNNs have reduced model sizes and improved accuracies, reducing the gap in accuracy with deep neural networks (DNNs). Modern DNNs leverage “pruning” techniques to reduce model size, but this has not previously been explored for WNNs. We propose a WNN pruning strategy based on identifying and culling the LUTs which contribute least to overall model accuracy. We demonstrate an average 40% reduction in model size with at most 1% reduction in accuracy.
ER  -
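
As an illustration only, here is a dependency-free Python sketch that splits a RIS record like the one above into its tag/value pairs (TY, TI, AU, ..., ER). The parse_ris helper and its behavior are our assumptions for this sketch, not part of the export service; a production workflow would use a full RIS parser.

from collections import defaultdict

def parse_ris(text):
    """Collect the 'TAG  - value' lines of a single RIS record into a dict.

    Repeatable tags such as AU are accumulated in lists; the ER line marks
    the end of the record and is skipped. Minimal sketch, not a full parser.
    """
    record = defaultdict(list)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("ER  -"):
            continue
        tag, _, value = line.partition("  - ")
        record[tag].append(value.strip())
    return dict(record)

# Example usage with the record shown above (abridged):
ris = """TY  - CPAPER
TI  - Pruning weightless neural networks
AU  - Susskind, Z.
AU  - John, L. K.
PY  - 2022
DO  - 10.14428/esann/2022.ES2022-55
ER  - """
parsed = parse_ris(ris)
print(parsed["TI"][0], parsed["AU"])
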