Export Publication

The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.

Export Reference (APA)
Breternitz, M. (2019). Efficiency and Scalability of Multi-Lane Capsule Networks (MLCN). CIENCIA 2019 Encontro com a Ciencia e Tecnologia em Portugal.
Export Reference (IEEE)
M. Breternitz Jr., "Efficiency and Scalability of Multi-Lane Capsule Networks (MLCN)," in CIENCIA 2019 Encontro com a Ciencia e Tecnologia em Portugal, 2019.
Export BibTeX
@misc{jr.2019_1765826260997,
	author = "M. Breternitz",
	title = "Efficiency and Scalability of Multi-Lane Capsule Networks (MLCN)",
	year = "2019",
	url = "https://ciencia.iscte-iul.pt/publications/efficiency-and-scalability-of-multi-lane-capsule-networks-mlcn-/61916?lang=en"
}
Export RIS
TY  - CPAPER
TI  - Efficiency and Scalability of Multi-Lane Capsule Networks (MLCN)
T2  - CIENCIA 2019 Encontro com a Ciencia e Tecnologia em Portugal
AU  - Breternitz, M.
PY  - 2019
UR  - https://ciencia.iscte-iul.pt/publications/efficiency-and-scalability-of-multi-lane-capsule-networks-mlcn-/61916?lang=en
AB  - Some Deep Neural Networks (DNN) have what we call lanes, or they can be reorganized as such. Lanes are paths in the network which are data-independent and typically learn different features or add resilience to the network. Given their data-independence, lanes are amenable to parallel processing. The Multi-lane CapsNet (MLCN) is a proposed reorganization of the Capsule Network which is shown to achieve better accuracy while bringing highly parallel lanes. However, the efficiency and scalability of MLCN had not been systematically examined. In this work, we study the MLCN network with multiple GPUs, finding that it is 2x more efficient than the original CapsNet when using model parallelism. Further, we present the load-balancing problem of distributing heterogeneous lanes across homogeneous or heterogeneous accelerators and show that a simple greedy heuristic can be almost 50% faster than a naïve random approach.
ER  -
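
The abstract above contrasts a simple greedy heuristic for placing heterogeneous lanes on accelerators against a naïve random assignment. The paper's exact heuristic is not reproduced here, so the following is only a minimal sketch of one plausible reading: a longest-processing-time (LPT) style greedy that places each lane, heaviest first, on the accelerator that would finish it earliest. The lane costs, accelerator speeds, and function names are all illustrative assumptions, not taken from the paper.

```python
import random

def greedy_assign(lane_costs, speeds):
    """Assumed LPT-style greedy: place each lane, heaviest first, on the
    accelerator with the lowest resulting finish time. Returns the makespan
    (completion time of the most loaded accelerator)."""
    loads = [0.0] * len(speeds)  # current finish time per accelerator
    for cost in sorted(lane_costs, reverse=True):
        # choose the accelerator that would finish this lane earliest
        i = min(range(len(speeds)), key=lambda k: loads[k] + cost / speeds[k])
        loads[i] += cost / speeds[i]
    return max(loads)

def random_assign(lane_costs, speeds, seed=0):
    """Naïve baseline: assign each lane to a uniformly random accelerator."""
    rng = random.Random(seed)
    loads = [0.0] * len(speeds)
    for cost in lane_costs:
        i = rng.randrange(len(speeds))
        loads[i] += cost / speeds[i]
    return max(loads)

if __name__ == "__main__":
    # Illustrative heterogeneous lane costs and accelerator speeds
    lanes = [9.0, 7.5, 6.0, 4.0, 3.5, 2.0, 1.5, 1.0]
    gpus = [1.0, 1.0, 2.0]  # one accelerator twice as fast as the others
    print("greedy makespan:", greedy_assign(lanes, gpus))
    print("random makespan:", random_assign(lanes, gpus))
```

The heterogeneous-accelerator case is handled by dividing each lane's cost by the target accelerator's speed, so the greedy choice naturally favors faster devices for heavy lanes.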