Talk
Measuring students’ experience of research-teaching integration practices: A new instrument
Nuno Costa (Costa, N.); Rita Guerra (Guerra, R.); Sónia F. Bernardes (Bernardes, S.F.)
Event Title
10th Annual International Technology, Education and Development Conference
Year (definitive publication)
2016
Language
English
Country
Spain
More Information
Web of Science®: Times Cited: 0 (last checked 2022-02-17)
Scopus: not indexed
Google Scholar: Times Cited: 0 (last checked 2024-05-04)

Abstract
Research and teaching are the two main activities of academic institutions. However, meta-analyses have shown that staff research productivity and students’ perceived quality of teaching are unrelated. Nevertheless, the widespread belief among academics that integrating research into teaching can improve student learning has led to increasing efforts to study how teaching and research can be integrated. Healey and colleagues (2005) proposed four ways in which research can be included in teaching: research-oriented and research-based, focusing on processes rather than content, and research-tutored and research-led, focusing on content rather than processes. The degree of active participation also varies: students can have a more active role (research-based and research-tutored) or a more passive role (research-led and research-oriented). Although this model is useful for staff to design and evaluate research-teaching integration (R&T) practices, it is unclear whether students experience such practices (and if so, which ones) according to each of the four categories. In the current study we developed and validated an instrument assessing students’ experience of specific R&T practices, and explored how these practices are inter-related and whether such relations reflect Healey and colleagues’ categories. An existing instrument by Healey, Jordan and Short (2002) assesses whether or not students have participated in several R&T practices. However, information on how frequently, rather than whether, each practice is experienced can offer a more detailed picture of differences between courses, departments, and undergraduate and postgraduate levels. We therefore developed an 18-item measure, 14 items of which were adapted from Healey and colleagues’ (2002) questionnaire. Instead of identifying which research practices they had experienced, students rated, on a 5-point Likert scale, how frequently they had participated in each practice. The remaining four items emerged from a focus group conducted with the target population. The instrument was tested in a sample of 757 students (37% male; 446 undergraduates, 292 master’s students) from different scientific areas at a public research-intensive university. An exploratory factor analysis conducted with half of the sample revealed a three-factor solution comprising 11 items and accounting for 57% of the variance. The three factors were: participating in research, learning as audience, and practicing research skills. Internal consistency was good for the first (α=0.84) and second (α=0.80) factors, and acceptable for the third (α=0.75). A confirmatory factor analysis conducted with the remaining half of the sample showed that the three-factor solution fit significantly better than two- or one-factor solutions [χ²(41)=158.15; GFI=.93; CFI=.91; RMSEA=.086]. The three factors corresponded to three of Healey’s (2005) four categories of research experience, namely research-based, research-led and research-oriented. The results suggest this is a valid, reliable and sensitive measure of students’ experience of R&T practices. We will discuss the advantages of the instrument for diagnosing and monitoring the impact of departmental and staff practices, as well as disciplinary cultures, on student learning and research experience.
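The internal-consistency values reported above (α=0.84, 0.80 and 0.75) follow the standard Cronbach's alpha formula. The sketch below is purely illustrative and is not part of the record: it computes alpha for a hypothetical factor of 5-point Likert items using simulated responses, not the study's data.

    # Illustrative sketch only (assumption: not the authors' code or data).
    # Cronbach's alpha for a matrix of Likert responses, shape (n_respondents, n_items).
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                              # number of items in the factor
        item_variances = items.var(axis=0, ddof=1)      # per-item sample variances
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

    # Simulated 5-point Likert responses for a 4-item factor (hypothetical data).
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(757, 1))
    responses = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(757, 4))), 1, 5)
    print(round(cronbach_alpha(responses), 2))

In practice, alpha would be computed separately for the items loading on each of the three factors identified by the exploratory factor analysis.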
Acknowledgements
--
Keywords
  • Physical Sciences - Natural Sciences