Title: Systematic Reviews in Educational Technology Research: Potential and pitfalls
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Bond, M., Bedenlier, S., Buntins, K., Kerres, M., & Zawacki-Richter, O.
Conference Name: EdMedia + Innovate Learning Conference
Volume: Short Paper
Publisher: AACE
Location: Amsterdam
URL: https://www.researchgate.net/publication/325987954_Systematic_Reviews_in_Educational_Technology_Research_Potential_and_Pitfalls
Full Text

Systematic Reviews in Educational Technology Research: Potential and pitfalls

Background and Motivation

Conducting systematic reviews has become more prominent in the field of educational research (Gough & Thomas, 2016). Defined as “a review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyse data from the studies that are included in the review” (Moher et al., 2009, p. 1), they serve two purposes: establishing a basis for evidence-based decision making at the policy level, and thoroughly analyzing a specific field of research to identify research gaps (Gough, Oliver & Thomas, 2012). This also applies to the field of educational technology and its applications, as recent systematic reviews within the field have shown, for example on the application of augmented reality in education (Akcayir & Akcayir, 2016), Web 2.0 technologies for student learning (Hew & Cheung, 2013), and learning and engagement within MOOCs (Joksimovic et al., 2017).

In the broader context of teaching and learning research, the construct of student engagement (e.g. Dunne & Owen, 2013; Kahu, 2013; Christenson et al., 2012) has received increased attention in the past decade, as it is directly linked to students’ learning outcomes and cognitive development (Ma, Han, Yang, & Cheng, 2015). Research has shown that using technology can predict increased student engagement (Rashid & Asghar, 2016; Chen, Lambert, & Guidry, 2010), including through improved self-efficacy and self-regulation (Alioon & Delialioglu, 2017; Bouta, Retalis, & Paraskeva, 2012) and increased participation and involvement (Salaber, 2014; Northey, Bucic, Chylinski, & Govind, 2015; Alioon & Delialioglu, 2017). However, without careful planning and sound pedagogy, technology can promote disengagement and impede rather than help learning (Popenici, 2013; Howard, Ma, & Yang, 2016).

The overarching question of how educational technology can support student engagement in higher education is the focus of the research project ActiveLeaRn, funded from 2016 to 2019 by the German Federal Ministry of Education and Research (BMBF). A systematic review is being conducted as part of this project, whose results will be validated through discussion with practitioners in the field, thereby closing the oftentimes perceived research-practitioner gap (Belli, 2010).

Contribution

In this Brief Paper contribution, the authors place emphasis on three central aspects of the systematic review and share their hands-on experience with this method. Topics covered include student engagement, the process of searching and screening studies, and the potential and pitfalls of the method, as identified within the current review and within the field of educational technology.

Student engagement

The ‘meta-construct’ of student engagement is multifaceted and complex (Kahu, 2013; Appleton, Christenson, & Furlong, 2008; Christenson, Reschly, & Wylie, 2012; Fredricks et al., 2004), with ongoing disagreement about its definition and form (e.g. Reeve & Tseng, 2011; Zepke & Leach, 2010). Recent reviews (Joksimovic et al., 2017; Henrie, Halverson, & Graham, 2015) have attempted to synthesize the educational technology literature in order to further develop the construct, and this systematic review adds to that body of work. An overview of the construct will be provided, alongside an account of how its complexity influenced the design of the systematic review.

Searching and screening studies

In order to identify the studies relevant for inclusion in the review (Brunton et al., 2012), an intentionally broad search string was developed, piloted, and then applied to four major databases in the field (Web of Science, PsycINFO, ERIC and Scopus). Pre-defined inclusion criteria were applied: contributions must be peer-reviewed articles in English-language journals, published in 2007 or later, that discuss technology-enhanced learning and student engagement and target students in higher education. Following the PRISMA statement (Moher et al., 2009), collected references were documented during the search process and then screened by title and abstract. Following initial screening, 4,153 potentially relevant studies remained for closer analysis. Peculiarities of these two stages of the systematic review will be discussed in the presentation.
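The screening logic described above can be thought of as a simple predicate applied to each reference record. The following is an illustrative sketch only, not the authors' actual tooling; the record field names (`peer_reviewed`, `language`, `year`, `topic_match`, `population`) are assumptions made for the example.

```python
# Illustrative sketch (hypothetical field names): applying the review's
# pre-defined inclusion criteria to exported reference records.

def meets_inclusion_criteria(record):
    """Return True if a reference record passes the stated criteria:
    a peer-reviewed, English-language journal article published in 2007
    or later, on technology-enhanced learning and student engagement,
    targeting students in higher education."""
    return (
        record.get("peer_reviewed", False)
        and record.get("language") == "English"
        and record.get("year", 0) >= 2007
        and record.get("topic_match", False)  # TEL + student engagement
        and record.get("population") == "higher education"
    )

references = [
    {"peer_reviewed": True, "language": "English", "year": 2012,
     "topic_match": True, "population": "higher education"},
    {"peer_reviewed": True, "language": "English", "year": 2005,  # pre-2007
     "topic_match": True, "population": "higher education"},
]

# Keep only the records that satisfy every criterion.
included = [r for r in references if meets_inclusion_criteria(r)]
print(len(included))  # → 1
```

In practice such criteria are applied by human screeners to titles and abstracts rather than to structured metadata, but the sketch makes explicit that every criterion must hold for a study to be retained.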

 

Potential and pitfalls

In the course of conducting the review, a number of issues have emerged that require further attention and are helpful for other researchers to consider when conducting future reviews in the field. Alongside an appraisal of the systematic review method, examples shared in the presentation will include how to develop complex search strings for study identification, the management of large reference corpora, the importance of research teams for conducting systematic reviews, and deciding whether to use text mining within reviews in the broad field of educational technology research.

The contribution closes with an outlook on the next steps of the review.

References

Alioon, Y., & Delialioglu, O. (2017). The effect of authentic m-learning activities on student engagement and motivation. British Journal of Educational Technology, 1-14.

Appleton, J., Christenson, S., & Furlong, M. (2008). Student Engagement with School: Critical conceptual and methodological issues of the construct. Psychology in the Schools, 45(5), 369-386.

Belli, G. (2010, July). Bridging the researcher-practitioner gap: Views from different fields. In C. Reading (Ed.), Data and context in statistics education: Towards an evidence-based society. Proceedings of the Eighth International Conference on Teaching Statistics (ICOTS8) (pp. 11-16).

Bouta, H., Retalis, S., & Paraskeva, F. (2012). Utilising a collaborative macro-script to enhance student engagement: A mixed method study in a 3D virtual environment. Computers & Education, 58, 501-517.

Chen, P.-S., Lambert, A., & Guidry, K. (2010). Engaging online learners: The impact of Web-based learning technology on college student engagement. Computers & Education, 54(4), 1222-1232.

Christenson, S. L., Reschly, A. L., & Wylie, C. (Eds.). (2012). Handbook of Research on Student Engagement. Boston, MA: Springer US. https://doi.org/10.1007/978-1-4614-2018-7

Dunne, E., & Owen, D. (Eds.). (2013). The student engagement handbook: Practice in higher education (1st ed.). Bingley: Emerald.

Fredricks, J., Blumenfeld, P., & Paris, A. (2004). School Engagement: Potential of the Concept, State of the Evidence. Review of Educational Research, 74(1), 59-109.

Gough, D., Oliver, S., & Thomas, J. (2012). Introducing systematic reviews. In D. Gough, S. Oliver, & J. Thomas (Eds.), An introduction to systematic reviews. (pp. 1–16). London: SAGE Publications.

Gough, D., & Thomas, J. (2016). Systematic reviews of research in education: aims, myths and multiple methods. Review of Education, 4(1), 84–102.

Henrie, C. R., Halverson, L. R., & Graham, C. R. (2015). Measuring student engagement in technology-mediated learning: A review. Computers & Education, 90, 36–53. https://doi.org/10.1016/j.compedu.2015.09.005

Hew, K. F., & Cheung, W. S. (2013). Use of Web 2.0 technologies in K-12 and higher education: The search for evidence-based practice. Educational Research Review, 9, 47–64. https://doi.org/10.1016/j.edurev.2012.08.001

Howard, S., Ma, J., & Yang, J. (2016). Student rules: Exploring patterns of students’ computer efficacy and engagement with digital technologies in learning. Computers & Education, 101, 29-42.

Joksimovic, S., Poquet, O., Kovanovic, V., Dowell, N., Mills, C., Gasevic, D., Dawson, S., Graesser, A., & Brooks, C. (2017). How do we model learning at scale? A systematic review of research on MOOCs. Review of Educational Research, 1-62.

Kahu, E. R. (2013). Framing student engagement in higher education. Studies in Higher Education, 38(5), 758–773. https://doi.org/10.1080/03075079.2011.598505

Ma, J., Han, X., Yang, J., & Cheng, J. (2015). Examining the necessary condition for engagement in an online learning environment based on learning analytics approach: The role of the instructor. Internet and Higher Education, 24, 26-34.

Northey, G., Bucic, T., Chylinski, M., & Govind, R. (2015). Increasing student engagement using asynchronous learning. Journal of Marketing Education, 37(3), 171-180.

Popenici, S. (2013). Towards a new vision for university governance, pedagogies and student engagement. In E. Dunne & D. Owen (Eds.), The student engagement handbook: Practice in higher education (pp. 23-42). Bingley: Emerald Publishing Group Ltd.

Rashid, T., & Asghar, H. (2016). Technology use, self-directed learning, student engagement and academic performance: Examining the interrelations. Computers in Human Behavior, 63, 604-612.

Reeve, J., & Tseng, C.-M. (2011). Agency as a fourth aspect of students’ engagement during learning activities. Contemporary Educational Psychology, 36(4), 257-267.

Salaber, J. (2014). Facilitating student engagement and collaboration in a large postgraduate course using wiki-based activities. The International Journal of Management Education, 12, 115-126.

Zepke, N., & Leach, L. (2010). Improving student engagement: Ten proposals for action. Active Learning in Higher Education, 11(3), 167-177. doi:10.1177/1469787410379680
