Export Publication

The publication can be exported in the following formats: APA (American Psychological Association) reference format, IEEE (Institute of Electrical and Electronics Engineers) reference format, BibTeX and RIS.

Export Reference (APA)
Gil, P., & Nunes, L. (2013). Hierarchical reinforcement learning using path clustering. In 2013 8th Iberian Conference on Information Systems and Technologies (CISTI). Lisboa: IEEE.
Export Reference (IEEE)
P. A. Gil and L. M. Nunes, "Hierarchical reinforcement learning using path clustering," in 2013 8th Iberian Conf. on Information Systems and Technologies (CISTI), Lisboa, IEEE, 2013.
Export BibTeX
@inproceedings{gil2013cisti,
	author = "Gil, P. and Nunes, L.",
	title = "Hierarchical reinforcement learning using path clustering",
	booktitle = "2013 8th Iberian Conference on Information Systems and Technologies (CISTI)",
	year = "2013",
	publisher = "IEEE",
	address = "Lisboa",
	organization = "IEEE",
	url = "https://ieeexplore.ieee.org/xpl/conhome/6589039/proceeding"
}
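
The exported entry uses a simple key = "value" field layout, so it can be read back programmatically. The following is a minimal Python sketch that extracts the fields with a regular expression; the file name publication.bib is a placeholder, and a dedicated parser (for example, the bibtexparser package) should be preferred for entries with brace-delimited or nested values.

import re

def parse_bibtex_fields(entry: str) -> dict:
    # Extract key = "value" pairs from a single BibTeX entry.
    # Assumes double-quoted values with no nested quotes, as in the
    # export above; brace-delimited values are not handled here.
    return dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', entry))

with open("publication.bib", encoding="utf-8") as f:  # placeholder file name
    fields = parse_bibtex_fields(f.read())

print(fields["title"])  # -> Hierarchical reinforcement learning using path clustering
print(fields["year"])   # -> 2013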
Export RIS
TY  - CPAPER
TI  - Hierarchical reinforcement learning using path clustering
T2  - 2013 8th Iberian Conference on Information Systems and Technologies (CISTI)
AU  - Gil, P.
AU  - Nunes, L.
PY  - 2013
SN  - 2166-0727
CY  - Lisboa
UR  - https://ieeexplore.ieee.org/xpl/conhome/6589039/proceeding
AB  - In this paper we intend to study the possibility to improve the performance of the Q-Learning algorithm, by automatically finding subgoals and making better use of the acquired knowledge. This research explores a method that allows an agent to gather information about sequences of states that lead to a goal, detect classes of common sequences and introduce the states at the end of these sequences as subgoals. We use the taxi problem (a standard in Hierarchical Reinforcement Learning literature) and conclude that, even though this problem's scale is relatively small, in most of the cases subgoals do improve the learning speed, achieving relatively good results faster than standard Q-Learning. We propose a specific iteration interval as the most appropriate to insert subgoals in the learning process. We also found that early adoption of subgoals may lead to suboptimal learning. The extension to more challenging problems is an interesting subject for future work.
ER  -
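
In the RIS record, each line carries a two-letter tag: TY (reference type; CPAPER marks a conference paper), TI (title), T2 (the containing proceedings), AU (author, repeated once per author), PY (publication year), SN (ISSN), CY (place of publication), UR (URL), AB (abstract), and ER (end of record). The Python sketch below reads such a record back; the file name publication.ris is a placeholder, and multi-record files or line-wrapped fields would need a fuller parser.

def parse_ris(text: str) -> dict:
    # Parse a single RIS record into a tag -> list-of-values map.
    # Assumes one "TAG  - value" pair per line, as in the export above;
    # repeated tags (e.g. AU) accumulate into lists.
    record = {}
    for line in text.splitlines():
        if line.startswith("ER"):  # end of record
            break
        if len(line) > 6 and line[2:6] == "  - ":
            tag, value = line[:2], line[6:].strip()
            record.setdefault(tag, []).append(value)
    return record

with open("publication.ris", encoding="utf-8") as f:  # placeholder file name
    record = parse_ris(f.read())

print(record["TI"][0])  # -> Hierarchical reinforcement learning using path clustering
print(record["AU"])     # -> ['Gil, P.', 'Nunes, L.']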