Abstract
Systematic review is a method for identifying, assessing, and analysing published primary studies in order to investigate research questions. We critique recently published guidelines for performing systematic reviews in software engineering, and comment on systematic review generally in light of our experience conducting one. Overall we recommend the guidelines. We recommend that researchers define research questions clearly and narrowly, to reduce overall effort and to improve study selection and data extraction. We suggest that "complementary" research questions can help clarify the main questions and define selection criteria. We present our project timeline, and discuss possibilities for automating systematic review and increasing its acceptance.
| Original language | English |
|---|---|
| Pages (from-to) | 1425-1437 |
| Number of pages | 13 |
| Journal | Journal of Systems and Software |
| Volume | 80 |
| Issue number | 9 |
| DOIs | |
| State | Published - Sep 2007 |
| Externally published | Yes |
Bibliographical note
Funding Information: Thanks very much to anonymous referees whose comments on previous drafts of this paper have helped to improve it. Mark Staples is employed by National ICT Australia, and Mahmood Niazi was employed by National ICT Australia while conducting the work reported in this paper. National ICT Australia is funded through the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council.
Keywords
- Empirical software engineering
- Systematic review
ASJC Scopus subject areas
- Software
- Information Systems
- Hardware and Architecture
Fingerprint
Dive into the research topics of 'Experiences using systematic review guidelines'.