As robots gradually step out of science-fiction scenarios and into our homes and social environments, it is important to consider the ethical implications involved. Robots are currently embedded in many social contexts (e.g., care or education) and are being introduced to different social groups (e.g., children or people with disabilities). In their role as social machines, robots are becoming increasingly autonomous, which underscores the need to develop robots that are socially effective, able to communicate and adapt to social norms. As such, developing safer and more effective social robots that can aid and collaborate with humans in different scenarios has been a focus of Human-Robot Interaction research, with an emphasis on transparency, intent communication, and the recognition of social cues. In line with this, it is important to consider what it means to develop a moral machine and how ethical considerations must be flexible and adaptable, not only to the social scenario of the interaction but also to the individual, social, and cultural norms of the interaction partner. Here we present a short overview of the ethical guidelines and considerations currently in place for those working with, researching, or developing these machines. Finally, we discuss the role of social psychology in shaping ethical guidelines and benchmarks for developing socially situated and adapted machines.