ICER 2022
Sun 7 - Wed 10 August 2022 Lugano, Switzerland

Accepted Papers

A Decade of Demographics in Computing Education Research: A Critical Review of Trends in Collection, Reporting, and Use
Research Papers
DOI
A Pair of ACES: An Analysis of Isomorphic Questions on an Elementary Computing Assessment
Research Papers
DOI Pre-print
Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models (Best Paper)
Research Papers
DOI Pre-print
Comparison of CS Middle-School Instruction during Pre-Pandemic, Early-Pandemic and Mid-Pandemic School Years
Research Papers
DOI
Exploring Group Dynamics in a Group-Structured Computing Undergraduate Research Experience
Research Papers
DOI
Gender, Race, and Economic Status along the Computing Education Pipeline: Examining Disparities in Course Enrollment and Wage Earnings
Research Papers
DOI
Gender, Self-Assessment, and Persistence in Computing: How gender differences in self-assessed ability reduce women's persistence in computer science
Research Papers
DOI
Getting By With Help from My Friends: Group Study in Introductory Programming Understood as Socially Shared Regulation
Research Papers
DOI
"How does the computer carry out DigitalRead()?" Notional Machines Mediated Learner Conceptual Agency within an Introductory High School Electronic Textiles Unit
Research Papers
DOI
Inclusivity Bugs in Online Courseware: A Field Study
Research Papers
DOI Pre-print
Investigating the Use of Planning Sheets in Young Learners' Open-Ended Scratch Projects
Research Papers
DOI
"It's usually not worth the effort unless you get really lucky": Barriers to Undergraduate Research Experiences from the Perspective of Computing Faculty
Research Papers
DOI
"I would be afraid to be a bad CS teacher": Factors Influencing Participation in Pre-Service Secondary Computing Teacher Education
Research Papers
DOI Pre-print
Launching Registered Report Replications in Computer Science Education Research
Research Papers
DOI
Plan Composition Using Higher-Order Functions
Research Papers
DOI
Self-efficacy, Interest, and Belongingness – URM Students’ Momentary Experiences in CS1
Research Papers
DOI
Surfacing Inequities and Their Broader Implications in the CS Education Research Community
Research Papers
DOI
Teaching Quality in Programming Education: the Effect of Teachers' Background Characteristics and Self-efficacy
Research Papers
DOI
The Shortest Path to Ethics in AI: An Integrated Assignment Where Human Concerns Guide Technical Decisions
Research Papers
DOI
Towards a Notional Machine for Runtime Stacks and Scope: When Stacks Don't Stack Up
Research Papers
DOI
Using Adaptive Parsons Problems to Scaffold Write-Code Problems
Research Papers
DOI
Using Electrodermal Activity Measurements to Understand Student Emotions While Programming
Research Papers
DOI
What do We Know about Computing Education for K-12 in Non-formal Settings? A Systematic Literature Review of Recent Research
Research Papers
DOI
What Makes Team[s] Work? A Study of Team Characteristics in Software Engineering Projects
Research Papers
DOI
When Rhetorical Logic Meets Programming: Collective Argumentative Reasoning in Problem-Solving in Programming (Honorable Mention)
Research Papers
DOI

Call for Papers

Aims and Scope

The 18th annual ACM Conference on International Computing Education Research (ICER) aims to gather high-quality contributions to the Computing Education Research discipline. The “Research Papers” track invites submissions describing original research results related to any aspect of teaching and learning computing, from introductory through advanced material. Submissions are welcome from across the research methods used in Computing Education Research and related fields. Each contribution will be assessed based on the appropriateness and soundness of its methods, its relevance to computing education, and the depth of its contribution to the community’s understanding of the question at hand.

Research areas of particular interest include:

  • design-based research, learner-centered design, and evaluation of educational technology supporting computing knowledge or skills development,
  • discipline based education research (DBER) about computing, computer science, and related disciplines,
  • informal learning experiences related to programming and software development (all ages), ranging from after-school programs for children, to end-user development communities, to workplace training of computing professionals,
  • learnability of programming languages and tools,
  • learning analytics and educational data mining in computing education contexts,
  • learning sciences work in the computing content domain,
  • measurement instrument development and validation (e.g., concept inventories, attitude scales) for use in computing disciplines,
  • pedagogical environments fostering computational thinking,
  • psychology of programming,
  • rigorous replication of empirical work to compare with or extend previous empirical research results,
  • teacher professional development at all levels.

While the above list is non-exhaustive, authors who are in doubt about the suitability of their work for this track are also invited to consider the calls for papers for the “Lightning Talks & Posters” and “Work-in-Progress” tracks.

Please see the Submission Instructions for details on how to prepare your submission. As a published ACM author, you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Note also the ACM policy on Plagiarism, Misrepresentation, and Falsification.

All questions about this call should go to the ICER 2022 program committee chairs at pc-chairs@icer.acm.org.

Important Dates

All submission deadlines are “anywhere on Earth” (AoE, UTC-12).

  • Titles, abstracts, and authors due (the chairs will use this information to assign papers to PC members): Friday, March 18th, 2022, AoE
  • Full paper submission deadline: Friday, March 25th, 2022, AoE
  • Decisions announced: Tuesday, May 24th, 2022
  • “Conditional Accept” revisions due: Wednesday, June 1st, 2022
  • “Conditional Accept” revisions approval notification: Wednesday, June 8th, 2022
  • Final versions due to TAPS: Wednesday, June 15th, 2022, AoE
  • Published in the ACM Digital Library: the official publication date is the date the proceedings are made available in the ACM Digital Library. This date will be the first day of the conference. The official publication date may affect the deadline for any patent filings related to published work.

Guidelines

We maintain two sets of guidelines, one for authors and one for reviewers, to increase the transparency of all processes.

Dates

Mon 8 Aug

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 12:00
Session 1: Programming Assignments (Research Papers) at Aula Magna
Chair(s): Brett Becker University College Dublin
10:30
30m
Paper
A Pair of ACES: An Analysis of Isomorphic Questions on an Elementary Computing Assessment
Research Papers
Miranda Parker San Diego State University, Leiny Garcia University of California, Irvine, Yvonne Kao WestEd, Diana Franklin University of Chicago, Susan Krause University of Chicago, Mark Warschauer University of California, Irvine
DOI Pre-print
11:00
30m
Paper
Using Adaptive Parsons Problems to Scaffold Write-Code Problems
Research Papers
Xinying Hou University of Michigan, Barbara Ericson University of Michigan, Xu Wang University of Michigan
DOI
11:30
30m
Paper
Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models (Best Paper)
Research Papers
Sami Sarsa Aalto University, Paul Denny The University of Auckland, Arto Hellas Aalto University, Juho Leinonen Aalto University
DOI Pre-print
13:30 - 15:00
Session 2: Participation (Research Papers) at Aula Magna
Chair(s): Miranda Parker San Diego State University
13:30
30m
Paper
Self-efficacy, Interest, and Belongingness – URM Students’ Momentary Experiences in CS1
Research Papers
Alex Lishinski Michigan State University, Sarah Narvaiz University of Tennessee, Joshua Rosenberg University of Tennessee
DOI
14:00
30m
Paper
Gender, Race, and Economic Status along the Computing Education Pipeline: Examining Disparities in Course Enrollment and Wage Earnings
Research Papers
Jayce R. Warner The University of Texas at Austin, Stephanie N. Baker The University of Texas at Austin, Madeline Haynes The University of Texas at Austin, Miriam Jacobson The University of Texas at Austin, Natashia Bibriescas The University of Texas at Austin, Yiwen Yang The University of Texas at Austin
DOI
14:30
30m
Paper
Gender, Self-Assessment, and Persistence in Computing: How gender differences in self-assessed ability reduce women's persistence in computer science
Research Papers
Cynthia Hunt Kent State University, Spencer Yoder North Carolina State University, Taylor Comment Kent State University, Thomas Price North Carolina State University, Bita Akram North Carolina State University, Lina Battestilli North Carolina State University, Tiffany Barnes North Carolina State University, Susan Fisk Kent State University
DOI
16:30 - 18:00
Session 3: Problem Solving (Research Papers) at Aula Magna
Chair(s): Briana B. Morrison University of Virginia
16:30
30m
Paper
Plan Composition Using Higher-Order Functions
Research Papers
Elijah Rivera, Shriram Krishnamurthi Brown University, United States, Robert Goldstone Indiana University
DOI
17:00
30m
Paper
Using Electrodermal Activity Measurements to Understand Student Emotions While Programming
Research Papers
Jamie Gorson Benario Northwestern University, Kathryn Cunningham Northwestern University, Marcelo Worsley Northwestern University, Eleanor O'Rourke Northwestern University
DOI
17:30
30m
Paper
When Rhetorical Logic Meets Programming: Collective Argumentative Reasoning in Problem-Solving in Programming (Honorable Mention)
Research Papers
Maria Kallia University of Glasgow, Quintin Cutts University of Glasgow, UK, Nicola Looker University of Glasgow
DOI

Tue 9 Aug

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

08:30 - 09:30
Session 4: Undergraduate Research Experiences (Research Papers) at Aula Magna
Chair(s): James Prather Abilene Christian University
08:30
30m
Paper
Exploring Group Dynamics in a Group-Structured Computing Undergraduate Research Experience
Research Papers
Katherine Izhikevich University of California, San Diego, Kyeling Ong University of California, San Diego, Christine Alvarado University of California San Diego
DOI
09:00
30m
Paper
"It's usually not worth the effort unless you get really lucky": Barriers to Undergraduate Research Experiences from the Perspective of Computing Faculty
Research Papers
Rhea Sharma University of California, Santa Cruz, Atira Nair University of California, Santa Cruz, Dustin Palea University of California, Santa Cruz, Ana Guo University of California, Santa Cruz, David Lee University of California, Santa Cruz
DOI
11:00 - 12:00
Session 5: Groups and Teams (Research Papers) at Aula Magna
Chair(s): Barbara Ericson University of Michigan
11:00
30m
Paper
Getting By With Help from My Friends: Group Study in Introductory Programming Understood as Socially Shared Regulation
Research Papers
James Prather Abilene Christian University, Lauren Margulieux Georgia State University, Jacqui Whalley Auckland University of Technology, Paul Denny The University of Auckland, Brent Reeves Abilene Christian University, Brett Becker University College Dublin, Paramvir Singh The University of Auckland, Garrett Powell Abilene Christian University, Nigel Bosch University of Illinois at Urbana-Champaign
DOI
11:30
30m
Paper
What Makes Team[s] Work? A Study of Team Characteristics in Software Engineering Projects
Research Papers
Kai Presler-Marshall North Carolina State University, Sarah Heckman North Carolina State University, Kathryn Stolee North Carolina State University
DOI
13:30 - 14:30
Session 6: Notional Machines (Research Papers) at Aula Magna
Chair(s): Neil Brown King's College London
13:30
30m
Paper
"How does the computer carry out DigitalRead()?" Notional Machines Mediated Learner Conceptual Agency within an Introductory High School Electronic Textiles Unit
Research Papers
Gayithri Jayathirtha University of Pennsylvania
DOI
14:00
30m
Paper
Towards a Notional Machine for Runtime Stacks and Scope: When Stacks Don't Stack Up
Research Papers
John Clements California Polytechnic State University, Shriram Krishnamurthi Brown University, United States
DOI

Wed 10 Aug

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

08:30 - 10:00
Session 8: K–12 (Research Papers) at Aula Magna
Chair(s): Lauren Margulieux Georgia State University
08:30
30m
Paper
Investigating the Use of Planning Sheets in Young Learners' Open-Ended Scratch Projects
Research Papers
David Gonzalez-Maldonado University of Chicago, Alex Pugnali University of Maryland, Jennifer Tsan University of Chicago, Donna Eatinger University of Chicago, Diana Franklin University of Chicago, David Weintrop University of Maryland
DOI
09:00
30m
Paper
What do We Know about Computing Education for K-12 in Non-formal Settings? A Systematic Literature Review of Recent Research
Research Papers
Tracy Gardner Raspberry Pi Foundation, Hayley C. Leonard Raspberry Pi Foundation, Jane Waite Raspberry Pi Foundation, Sue Sentance Raspberry Pi Foundation
DOI
09:30
30m
Paper
Comparison of CS Middle-School Instruction during Pre-Pandemic, Early-Pandemic and Mid-Pandemic School Years
Research Papers
David Gonzalez-Maldonado University of Chicago, Jennifer Tsan University of Chicago, Donna Eatinger University of Chicago, David Weintrop University of Maryland, Diana Franklin University of Chicago
DOI
10:30 - 12:00
Session 9: Computing Education Research (Research Papers) at Aula Magna
Chair(s): Sally Fincher University of Kent
10:30
30m
Paper
Surfacing Inequities and Their Broader Implications in the CS Education Research Community
Research Papers
Monica McGill CSEdResearch.org, Knox College, Sloan Davis Google, Joey Reyes Knox College
DOI
11:00
30m
Paper
Launching Registered Report Replications in Computer Science Education Research
Research Papers
Neil Brown King's College London, Eva Marinus Pädagogische Hochschule Schwyz, Aleata Hubbard Cheuoua WestEd
DOI
11:30
30m
Paper
A Decade of Demographics in Computing Education Research: A Critical Review of Trends in Collection, Reporting, and Use
Research Papers
Alannah Oleson, Benjamin Xie University of Washington, Seattle, Jean Salac University of Washington, Seattle, Jayne Everson University of Washington, Megumi Kivuva Bard College, Amy Ko University of Washington
DOI
13:30 - 14:30
Session 10: Responsibility (Research Papers) at Aula Magna
Chair(s): Lisa Kaczmarczyk
13:30
30m
Paper
The Shortest Path to Ethics in AI: An Integrated Assignment Where Human Concerns Guide Technical Decisions
Research Papers
Noelle Brown University of Utah, Koriann South University of Utah, Eliane Wiese University of Utah
DOI
14:00
30m
Paper
Inclusivity Bugs in Online Courseware: A Field Study
Research Papers
Amreeta Chatterjee Oregon State University, Lara Letaw Oregon State University, Rosalinda Garcia Oregon State University, Doshna Umma Reddy Oregon State University, Rudrajit Choudhuri Oregon State University, Sabyatha Sathish Kumar Oregon State University, Patricia Morreale Kean University, Anita Sarma Oregon State University, Margaret Burnett Oregon State University
DOI Pre-print

Clarification (March 1, 2022): Below, we write “authors may submit papers up to 11,000 words in length, excluding acknowledgements, references, figures, but including all other text, including tables.” As authors have been asking about appendices, we would like to clarify that, as in previous years, “all other text” does include appendices. ICER Papers must be self-contained in the sense that reviewers can assess the contribution without referring to any external material. Appendices in the submitted PDF are considered to be part of the main text and thus are subject to word count. If authors want to provide additional material, e.g., codebooks, they must do so in an anonymized way via an external web resource of their choice; reviewers will neither be required nor asked, however, to consult such resources when assessing a paper’s contribution.

Submission Process

Submit at the ICER 2022 HotCRP site.

When you submit the abstract or full version ready for review, you need to perform the following actions:

  • Check the checkbox “ready for review” at the bottom of the submission form. (Otherwise it will be marked as a draft).

  • Check the checkbox “I have read and understood the ACM Publications Policy on Research Involving Human Participants and Subjects”. Note: “Where such research is conducted in countries where no such local governing laws and regulations related to human participant and subject research exist, Authors must at a bare minimum be prepared to show compliance with the above detailed principles.”

  • Check the checkbox “I have read and understood the ACM Policy on Plagiarism, Misrepresentation, and Falsification; in particular, no version of this work is under submission elsewhere”. Make sure to disclose possible overlap with your own previous work (“redundant publication”) to the ICER Program Committee co-chairs.

  • Check the checkbox “I have read and understood the ICER Anonymization Policy” (see below).

ICER Anonymization Policy

ICER research paper submissions will be reviewed using a double-anonymous process: the authors do not know the identity of the reviewers and the reviewers do not know the identity of the authors. To ensure this:

  • Avoid titles that indicate a clearly identifiable research project.

  • Remove author names and affiliations. (If you are using LaTeX, you can start your document with \documentclass[manuscript,review,anonymous]{acmart} to anonymize these easily; see the sketch after this list.)

  • Avoid referring to yourself when citing your own work.

  • Avoid references to your affiliation. For example, instead of naming your actual university, you might write “A Large Metropolitan University (ALMU)” rather than “Auckland University of Technology (AUT)”.

  • Redact any other identifying information, such as contributors, course numbers, IRB names and numbers, and grant titles and numbers, from the main text and the acknowledgements.

  • Omit author details from the PDF you generate, such as author name or the name of the source document. These are often automatically inserted into exported PDFs, so be sure to check your PDF before submission.
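
For LaTeX users, here is a minimal sketch of what an anonymized submission source might look like; the title, name, and institution are placeholders, and the comments describe what the acmart class options do:

      \documentclass[manuscript,review,anonymous]{acmart}
      % "anonymous" replaces author names and affiliations with placeholders
      % in the rendered PDF; "review" adds line numbers for reviewers;
      % "manuscript" selects the single-column submission format.
      \begin{document}
      \title{Your Title Here}
      \author{Author Name}                          % hidden by "anonymous"
      \affiliation{\institution{Your Institution}}  % hidden by "anonymous"
      \maketitle
      Body text goes here.
      \end{document}

Note that this only anonymizes the author block: self-citations, acknowledgements, and PDF metadata still need to be handled manually, as described above.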

Do not simply cover identifying details with a black box: the text underneath can easily be recovered by dragging the cursor over it, and it will still be read by screen readers.

Work that is not sufficiently anonymized will be desk-rejected by the PC chairs without offering an option to redact and resubmit.

Conflicts of Interest

The SIGCSE Conflict of Interest policy applies to all submissions. You can review how conflicts will be managed by consulting our Reviewer Guidelines, which detail our review process.

Submission Format and Publication Workflow

Papers submitted to the research track of ICER 2022 have to be prepared according to the ACM TAPS workflow system. Read this page carefully to understand the new workflow.

The most notable change from ICER conferences prior to 2021 is that the submission format and the publication format differ. The final publication format separates content from presentation in support of accessibility. For submission, we standardize on a single-column presentation.

  • The submission template is either the single-column Word Submission Template or the single-column LaTeX template (using the “manuscript,review,anonymous” options of the acmart class; see sample-manuscript.tex in the LaTeX master template samples for an example). Reviewers will review in this single-column format. You can download these templates from the ACM Master Article Templates page.

  • The publication template is either the single-column Word Submission Template or the LaTeX template using the “sigconf” style in acmart. You can download the templates from the ACM TAPS workflow page, where you can also see example papers using the TAPS-compatible Word and LaTeX templates. If your paper is accepted, you will use the TAPS system to generate your final publication outputs. This involves more than just submitting a PDF: you will instead submit your Word or LaTeX source files and fix any errors in your source before the final version deadline listed above. The final published versions will be in the ACM two-column conference PDF format (as well as XML, HTML, and ePub formats in the future).

For LaTeX users, be aware that there is a list of approved LaTeX packages for use with ACM TAPS. Not all packages are allowed.

This separation of submission and publication format results in several benefits:

  • Improved quality of paper metadata, improving ACM Digital Library search.

  • Multiple paper output formats, including PDFs, responsive HTML5, XML, and ePub.

  • Improved accessibility of paper content for people with disabilities.

  • Streamlined publication timelines.

One consequence of this new publication workflow is that it is no longer feasible to limit papers by page count, as the single-column formats and the final two-column formats result in hard-to-predict differences in length. When this workflow was introduced in 2021, the 2021 PC chairs and the ICER Steering Committee considered several policies for managing length and decided to limit length by word count instead. There is no single established way to count words, so here is how we will count for ICER 2022: authors may submit papers up to 11,000 words in length, excluding acknowledgements, references, and figures, but including all other text (including tables). The PC chairs will use the following procedures for counting words in the TAPS-approved formats:

  • For papers written in the Microsoft Word template, Word’s built-in word-count mechanism will be used, selecting all text except acknowledgements and references.

  • For papers written in the LaTeX template, the document will be converted to plain text using the “ExtractText” functionality of the Apache pdfbox suite and then post-processed with a standard command-line word-count tool (“wc -w”, to be precise). Line numbers added by the “review” class option for LaTeX will be removed prior to counting using “grep -v -E '^[0-9]+$'” (thanks to N. Brown for this).

    • We acknowledge that many authors use Overleaf precisely to avoid dealing with command-line tools and, consequently, may be reluctant to run another command-line tool to assess their word count. As configured by default, Overleaf does not count text in tables, captions, and math formulae and is thus very likely to significantly underestimate the number obtained by the tool described above. To obtain a more realistic word count while writing the manuscript, authors need to take these additional steps:

      • Add the following lines at the very beginning of your Overleaf LaTeX document:
      %TC:macro \cite [option:text,text]
      %TC:macro \citep [option:text,text]
      %TC:macro \citet [option:text,text]
      %TC:envir table 0 1
      %TC:envir table* 0 1
      %TC:envir tabular [ignore] word
      %TC:envir displaymath 0 word
      %TC:envir math 0 word
      %TC:envir comment 0 0
      
      • Make sure to write math formulae delimited by \begin{math} \end{math} for in-line math and \begin{displaymath} \end{displaymath} for equations, as in the sketch after this list. Do not use dollar signs or \[ \]; Overleaf would then not count the math tokens (unlike Word and pdfbox) and would underestimate your word count.
    • The above flags ensure that in-text citations, tables, and math formulae are counted and that comments are ignored.

    • The above flags do not cover more advanced LaTeX environments, so if authors use such environments, they should interpret the Overleaf word count with care (then again, if authors know how to work with such environments it is very reasonable to assume that they also know how to work with command-line tools such as pdfbox).

    • Authors relying on the Overleaf word count should be aware that the submission chairs will not have access to the source files and cannot re-run or verify any counts obtained by the submitting authors. To provide fair treatment across all submissions, only the approved tools mentioned above will be used for the word count. That said, the submission chairs will operate under a bona fide assumption when it comes to extreme borderline cases.

  • Papers in either format may not use figures to render text in ways that work around the word count limit; papers abusing figures in this way will be desk-rejected.
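
To illustrate the advice on math delimiters above, here is a small sketch (the formulas are arbitrary placeholders) of math written so that the Overleaf word count, configured with the %TC flags above, counts the math tokens:

      % Counted by the Overleaf word count (given the %TC flags above):
      The algorithm runs in \begin{math}O(n \log n)\end{math} time.
      \begin{displaymath}
        \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i
      \end{displaymath}
      % Not counted by Overleaf: $O(n \log n)$ and \[ \bar{x} = \dots \]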

A paper that is under the word count limit according to either of the approved tools above is acceptable. The submission chairs will evaluate each submission using the procedures above and notify the PC chairs of papers exceeding the limit; those papers will be desk-rejected.

We expect papers to vary in word count. Abstracts may vary in length; fewer than 300 words is a good guideline for conciseness. A submission’s length should be commensurate with its contributions; we expect most papers to be under 9,000 words according to the rules above, though some may use up to the limit to convey details the authors deem necessary to evaluate the work. Papers may be judged as too long if they are repetitive or verbose, violate formatting rules, or use figures to save on word count. Papers may be judged as too short if they omit critical details or ignore relevant prior work. See the reviewer guidelines (to be updated soon) for more on how reviewers are expected to assess conciseness.

All of the procedures above, and the TAPS workflow, will likely undergo continued iteration in partnership with ACM, the ICER Steering Committee, and the SIGCSE board. Notify the chairs of questions, edge cases, and other concerns to help improve this new workflow.

Acceptance and Conditional Acceptance

All papers recommended for acceptance after the Senior PC meetings are either accepted or conditionally accepted. For accepted papers, no resubmission is required; authors of such papers can submit an approved version to TAPS. For conditionally accepted papers, the paper’s meta-review will indicate one or more minor revisions that are necessary for final acceptance; authors are responsible for submitting these minor revisions to HotCRP prior to the “Conditional Accept revisions due” deadline in the Call for Papers. The Senior PC and Program Chairs will review the final revisions; if they are acceptable, the paper will be officially accepted, and authors will have one week to submit an approved version to TAPS for publication. If the PC judges that the requested minor revisions were not suitably addressed, the paper will be rejected.

Because the turnaround time for conditional acceptance is only one week, requested revisions will necessarily be minor: they may include presentation issues or requests for added clarity or details helpful for future readers of the archived paper. New results, new methodological details that change the interpretation of the results, or other substantially new content will neither be asked for nor allowed to be added.

Kudos

After a paper has been accepted and uploaded into the ACM Digital Library, authors will receive an invitation from Kudos to create an account and add a plain-language summary of the paper on its platform. The Kudos “Shareable PDF” integration with ACM then allows an author to generate a PDF to upload to websites such as author homepages, institutional repositories, and preprint services such as arXiv. This PDF contains the author’s plain-text summary of the paper as well as a link to the full-text version of the article in the ACM Digital Library, adding to the DL download and citation counts there, as well as adding views from other platforms to the author’s Kudos dashboard.

Using Kudos is entirely optional. Authors may also use the other ACM copyright options to share their work (retaining copyright, paying for open access, etc.).

If you are reading this page, you are probably considering submitting to ICER. Congratulations! We are excited to review your work. Whether your research is just starting or nearly finished, this guide is intended to help authors meet the expectations of the computing education research community. It reflects a community-wide perspective on what constitutes rigorous research on the teaching and learning of computing.

Read on for our community’s current guidelines, and if you like, read our reviewer guidelines to understand our review process and review criteria.

What’s in scope at ICER?

ICER’s goal is to be an inclusive conference, both with respect to epistemology (how we know we know things) and with respect to phenomena (who is learning and in what context). Therefore, any research related to the teaching and learning of computing is in scope, using any definition of computing and any methods. We particularly encourage work that goes beyond the community’s past focus on introductory programming courses in post-secondary education, such as work on primary and secondary education, more advanced computing concepts, informal learning in any setting, or learning amongst adults. (Note, however, that simply using computing technology to perform research in an educational setting is not in itself enough; the focus must be on the teaching or learning of computing topics.) If you have not seen a particular topic published at ICER, or a particular method used, that is okay. We value new topics, new methods, new perspectives, and new ideas just as much as more broadly accepted ones.

That said, under the current review process, we cannot promise that we have recruited all the necessary expertise to our program committee to fairly review your work. Check who is on the program committee this year, and if you do not see a lot of expertise on your methods or phenomena, make sure your submission spends a bit of extra time explaining theories or methods that reviewers are unlikely to know. If you have any questions regarding this, email the program chairs (pc-chairs@icer.acm.org).

Note that we used the word “research” above. Research is hard to define, but we can say that ICER is not a place to submit practical descriptions of courses, curriculum, or instruction materials you want to share. If you’re looking to share your experiences at a conference, consider submitting to the SIGCSE Technical Symposium’s Experience Report or Position and Curricula Initiatives tracks. Research, in contrast, should meet the criteria presented throughout this document.

What makes a good computing education research paper?

It’s impossible to anticipate every kind of paper that might be submitted. The current ICER review criteria are listed in the reviewer guidelines. These will evolve over time as the community grows. There are many other criteria that reviews could discuss in relation to specific types of research contributions, but the criteria listed there are generally inclusive of many epistemologies and contribution types. This includes empirical studies that answer research questions, replicate prior results, or present negative research results, as well as other, non-empirical types of research that provide novel or deepened insights into the teaching and learning of computer science content.

What prior work should be cited?

As with any research work, your submission should cite all significant publications that are relevant to your research questions. For ICER submissions, this may include not only work published in ACM-affiliated venues like ICER, ITiCSE, SIGCSE, and Koli Calling, but also the wide range of conferences and journals in the learning sciences, education, educational psychology, HCI, and software engineering. If you are new to research, consider guides on study design and surveys of prior work like the 2019 Cambridge Handbook of Computing Education Research, which attempts to survey most of what we know about computing education up to 2018.

Papers will be judged on how adequately they are grounded in prior work published across academia. They will also be assessed on the accuracy of their citations: read what you cite closely and ensure that the findings in the published work actually support your claims; many of the authors of the works you are likely to cite are members of the computing education research community and may be your reviewers. Finally, papers are also expected to return to prior work in a discussion of their contributions. All papers should explain how their contributions advance upon prior work, cause us to reinterpret prior work, or reveal conflicts with prior work.

How might theory be used?

Different disciplines across academia vary greatly in how they use and develop theory. At the moment, the position of the community is that theory can be a useful tool for framing research, connecting it to prior work, and interpreting findings. Papers can also contribute new theories or refine existing ones. However, it may also be possible for papers to be atheoretical, discovering interesting new relationships or interventions that cannot yet be explained. All of these uses of theory are appropriate.

It is also possible to misuse theory. Sometimes the theories used are too general for a question, where a theory more specific to computing education might be appropriate. In other cases, a theory might be wrongly applied to some phenomena, or a paper might use a theory that has been discredited. Be careful when using theory to understand its history, its body of evidence in support of and against its claims, and its scope of relevance.

Note that our community has discussed the role of theory multiple times, and that conversations about how to use theory are evolving:

  • Nelson and Ko (2018) argued that there are tensions between expectations of theory building and innovative exploration of design ideas, and that our field’s theory building should focus on theories specific to computing education.

  • Malmi et al. (2019) found that while computing education researchers have widely cited many dozens of unique theoretical ideas about learning, behavior, beliefs, and other phenomena, the use of theory in the field remains somewhat shallow.

  • Kafai et al. (2019) argued that there are many types of theories, and that we should more deeply leverage their explanatory potential, especially theories about the sociocultural and societal factors at play in computing education, not just the cognitive factors.

In addition to using theories when appropriate, ICER encourages the contribution of new theories. There is not a community-level consensus on what constitutes a good theory contribution, but there are examples you might learn from. Papers proposing a new theoretical model should consider including concrete examples of said model.

How should educational contexts be described?

If you’re reporting empirical work in a specific educational context or set of contexts, it is important to remember that our research community is global and that education systems across the world are structured differently. This is particularly important when describing research that took place in primary and secondary schools. Keep in mind that not all readers will be familiar with your educational context. Describe the structure of the educational system. Define terminology related to your education system. Characterize who is teaching and what prior knowledge and preparation they have. When describing learners, at a minimum describe their gender, race, ethnicity, age, level in school, and prior knowledge (assuming collecting and publishing this type of data is legal in the context in which the study was conducted; see also the ACM Publications Policy on Research Involving Human Participants and Subjects). Include information about other structural factors that might affect how the results are interpreted, including whether courses are required or elective, what incentives students have to enrol in courses, and how students in courses vary. For authors in the United States, common terminology to avoid includes “elementary school”, “middle school”, “high school”, and “college”, which do not have well-defined meanings elsewhere. Use the more globally inclusive terms “primary”, “secondary”, and “post-secondary”. Given the broad spectrum of, e.g., introductory computing courses that run under the umbrella of “CS1”, make sure to provide enough information on the course content rather than relying on an assumed shared understanding.

What details should we report about our methods?

ICER values a wide range of methods of all kinds, including quantitative, qualitative, design, argumentation, and more. It is critical to describe your methods in detail, both so that reviewers and readers can understand how you arrived at your conclusions, and so they can evaluate the appropriateness of your methods both to the work and, for readers, to their own contexts.

Some contributions might benefit from following the Center for Open Science’s recommendations to ensure replicable, transparent science. These include practices such as:

  • Data should be posted to a trusted repository.

  • Data in that repository is properly cited in the paper.

  • Any code used for analysis is posted to a trusted repository.

  • Results are independently reproduced.

  • Materials used for the study are posted to a trusted repository.

  • Studies and their analysis plans are pre-registered prior to being conducted.

Our community is quite far from adopting any of these standards as expectations. Additionally, pursuing many of these goals might impose significant barriers to conducting research ethically, as educational data often cannot be sufficiently anonymized to prevent disclosing identities. Therefore, these supplementary materials are not required for review, but we encourage you to include them where feasible and ethical.

The ACM has adopted a new policy on Research Involving Human Participants and Subjects that requires research to be conducted in accordance with ethical and legal standards. In accordance with the policy, your methods description should briefly describe how these standards were met. This can be as simple as a sentence that your study design was reviewed by a local review board (IRB), or a few sentences with key details if you engaged with human subjects and an IRB review was not appropriate to your context or work. Read the ACM policy for additional details.

How should we report statistics?

The world is moving beyond p-values, but computing education, like most of academia, still relies on them. When reporting the results of statistical hypothesis tests, it is critical to report:

  • The test used

  • The rationale for choosing the test, including a discussion of the data characteristics that allowed this test to be used

  • The test statistic computed

  • The actual p-value (not just whether it was greater than or less than an arbitrary threshold)

  • An effect size and its confidence interval

Effect sizes are especially relevant, as they indicate the extent to which something impacts or explains some phenomenon in computing education; a small effect size may have little practical significance for learning. The above data should be reported regardless of whether a hypothesis test was significant. Chapters introducing statistical methods can be found in the Cambridge Handbook of Computing Education Research.
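
As a purely illustrative sketch, with a made-up test choice and placeholder numbers rather than a prescribed format, a report covering all of these items might read: “Because the scores were ordinal and heavily skewed, we compared the two groups with a Mann-Whitney U test; U = 412.5, p = .03, with a rank-biserial correlation of r = .31, 95% CI [.04, .54].” This names the test, gives the rationale, and reports the test statistic, the actual p-value, and an effect size with its confidence interval.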

Do not assume that reviewers or future readers have a deep understanding of statistical methods (although they might). If you’re using more advanced or non-standard techniques, justify them in detail, so that the reviewers and future readers understand your choice of methods. We recognize that length limits might prevent a detailed explanation of methods for entirely unfamiliar readers; reviewers are expected to not criticize papers for excluding extensive explanations when there was not space to include them.

How should we report on qualitative methods?

Best practices in other fields for addressing the reliability of qualitative methods suggest providing detailed arguments and rationale for qualitative approaches and analyses. Some fields that rely on qualitative methods have moved toward a recoverability criterion which, like replicability in quantitative methods, aims to ensure a study’s core methods are available for inspection and interpretation; however, recoverability does not imply repeatability, as qualitative methods rely on interpretation, which may not be repeatable.

When qualitative data is counted and used for quantitative methods, authors should report on the inter-rater reliability (IRR) of the qualitative judgements underlying those counts. There are many ways of calculating inter-rater reliability, each with tradeoffs. However, note that IRR analysis is not ubiquitous across social sciences, and not always appropriate; authors should make a clear soundness argument for why it was or was not performed.

Another challenge in reporting qualitative results is that they require more space in a paper; an abundance of quotes, after all, may take considerably more space than a table of aggregate statistics. Be careful to provide enough evidence for your claims while being mindful of your use of space.

What makes a good abstract?

A good abstract should summarize the question your paper asks and what answers it found. It is not enough to just say “We discuss our results and their implications”; say what you actually discovered, so future readers can learn that from your summary.

If your paper is empirical in nature, ICER recommends (but does not require) using a structured abstract that contains the following sections, each 1-2 sentences:

  • Background and Context. What is the problem space you are working in? Which phenomena are you considering and why are they relevant and important for an ICER audience?

  • Objectives. What research questions were you trying to answer?

  • Method. What did you do to answer your research questions?

  • Findings. What did you discover? Both positive and negative results should be summarized.

  • Implications. What implications does your discovery have on prior and future research, and on the practice of computing education?

Not all papers may fit this structure, but if yours does, it will greatly help reviewers and future readers understand your paper’s research design and contribution.

What counts as plagiarism?

Read ACM’s policy on Plagiarism, Misrepresentation, and Falsification; these criteria will be applied during review. In particular, attention will be paid to avoiding redundant publication.

Who should be an author on my paper?

ICER follows ACM’s Authorship Policy and Publications Policy on the Withdrawal, Correction, Retraction, and Removal of Works from ACM Publications and ACM DL. These state that any person listed as an author on a paper must (1) have made substantial contributions to the work, (2) have participated in drafting/revising the paper, (3) be aware that the paper has been submitted, and (4) agree to be held accountable for the content of the paper. Note that this policy allows enforcement of plagiarism sanctions, but it could affect people who work in large, collaborative research groups, as well as postgraduate advisors who have not contributed directly to a paper.

Must submissions be in English?

At the moment, yes. Our reviewing community’s only lingua franca is English, and any other language would greatly limit the pool of expert reviewers to evaluate your work. We recognize that this is a challenging barrier for many authors globally, and that it greatly limits the diversity of voices in global discourse on computing education. Therefore, we wish to express our support of other computing education conferences around the world that you might consider submitting papers to. To mitigate this somewhat, papers will not be penalized for minor English spelling and grammar errors that can easily be corrected with minor revisions.

Resources

American Educational Research Association. (2006). Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35(6), 33–40. http://edr.sagepub.com/content/35/6/33.full.pdf+html.

Decker, A., McGill, M. M., & Settle, A. (2016). Towards a Common Framework for Evaluating Computing Outreach Activities. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education (SIGCSE ’16). ACM, New York, NY, USA, 627-632. DOI: https://doi.org/10.1145/2839509.2844567.

Fincher, S. A., & Robins, A. V. (Eds.). (2019). The Cambridge Handbook of Computing Education Research. Cambridge University Press. DOI: https://dx.doi.org/10.1017/9781108654555.

Petre, M., Sanders, K., McCartney, R., Ahmadzadeh, M., Connolly, C., Hamouda, S., Harrington, B., Lumbroso, J., Maguire, J., Malmi, L., McGill, M.M., Vahrenhold, J. (2020). Mapping the Landscape of Peer Review in Computing Education Research, In: ITiCSE-WGR ’20: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, ACM. New York, NY, USA, 173–209. DOI: https://doi.org/10.1145/3437800.3439207.

ICER 2022 Review Process and Guidelines

Version 1.0 - February 6, 2022

Jan Vahrenhold & Kathi Fisler, ICER 2022 Program Co-Chairs

This document is a living document intended to capture the reviewing policies of the ICER community. Please email the Program Co-Chairs at pc-chairs@icer.acm.org with comments or questions; all will be taken into account when updating this document for ICER 2023.

Based on the ICER 2020/2021 Reviewing Guidelines (Amy Ko, Anthony Robins, & Jan Vahrenhold) as well as the ICSE 2022 Reviewing Guidelines (Daniela Damian & Andreas Zeller). We are thankful for the input on these earlier documents provided by members of the ICER community.

Table of Contents

  1. Goals of the ICER Reviewing Process
  2. Action Items
  3. Submission System
  4. Roles in the Review Process
  5. Principles Behind ICER Reviewing
  6. Conflicts of Interest
  7. The Reviewing Process
  8. Review Criteria
  9. Award Recommendations
  10. Possible Plagiarism, Misrepresentation, and Falsification
  11. Practical Suggestions for Writing Reviews

1. Goals of the ICER Reviewing Process

The ICER Reviewing Process as outlined in this document is designed to support reaching the following goals:

  • Accept high quality papers
  • Give clear feedback to papers of insufficient quality
  • Evaluate papers consistently
  • Provide transparency in the review process
  • Embrace diversity of perspectives, but work in an inclusive, safe, collegial environment
  • Drive decisions by consensus among reviewers
  • Strive for manageable workload for PC members
  • Do our best on all of the above

2. Action Items

Prior to continuing to read this document, please do the following:

  • Read the call for papers at https://icer2022.acm.org/track/icer-2022-papers. This is the ground truth for scope and submission requirements. We expect you to account for these in your reviews.
  • Read the author guidelines at https://icer2022.acm.org/track/icer-2022-papers#Author-Guidelines. We expect your reviews and meta-reviews to be consistent with these guidelines.

After having read this document, please block off the following time slots in your calendar:
  • [Reviewers and Meta-Reviewers:] Saturday, March 19, 2022 through Friday, March 25, 2022: Reserve at least two hours to read all abstracts and bid for papers to review (see Step 2: Reviewers and Meta-Reviewers Bid for Papers).
  • [Reviewers:] Friday, April 1, 2022 through Friday, April 29, 2022: Reserve enough time to review 5-6 papers (see Step 6a: Reviewers Review Papers). In general, it is highly recommended to spread the reviews over the full four weeks instead of trying to write them all just before the deadline. Notify the PC chairs immediately in case of emergencies that might prevent you from submitting reviews by the deadline.
  • [Reviewers and Meta-Reviewers:] Saturday, April 30, 2022 through Friday, May 6, 2022: Reserve one one-hour slot during the weekend and a 20-minute slot on each day of the week to log into HotCRP, read the other reviews, check on the discussion status of each of your papers, and comment where appropriate (see Step 7: Reviewers and Meta-Reviewers Discuss Reviews).
  • [Meta-Reviewers:] Saturday, April 30, 2022 through Wednesday, May 11, 2022: Reserve three hours in total to prepare (and update, as necessary) the meta-reviews for your assigned papers (see Step 8: Meta-Reviewers Write Meta-Reviews).
  • [Meta-Reviewers:] Wednesday, May 18, 2022 through Friday, May 20, 2022: Reserve two two-hour slots for synchronous SPC meetings (see Step 9: PC Chairs and Meta-Reviewers Discuss Papers; the PC chairs will be reaching out to schedule these meetings).
  • [Meta-Reviewers:] Wednesday, June 1, 2022 through Sunday, June 5, 2022: Reserve two hours for checking any “conditional accept” revisions that may affect your papers (see Step 13: Meta-Reviewers Check Revised Papers).

If you are new to reviewing in the Computing Education Research community, the following ITiCSE Working Group Report may serve as an introduction:

  • Petre M, Sanders K, McCartney R, Ahmadzadeh M, Connolly C, Hamouda S, Harrington B, Lumbroso J, Maguire J, Malmi L, McGill MM, Vahrenhold J. 2020. “Mapping the Landscape of Peer Review in Computing Education Research.” In ITiCSE-WGR ’20: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, edited by Rößling G, Krogstie B, 173-209. New York, NY: ACM Press. doi: 10.1145/3437800.3439207.

3. Submission System

ICER 2022 uses the HotCRP platform for its reviewing process. If you are unfamiliar with it, you will find a basic tutorial below. But first, make sure you can sign in, then bookmark the site: http://icer2022.hotcrp.com. If you have trouble signing in, or you need help with anything, contact James Prather (james.prather@acu.edu) or Dastyni Loksa (dloksa@towson.edu), the ICER 2022 submission chairs. Make sure that you can log in to HotCRP and that your name and other metadata are correct. Check that emails from HotCRP are not marked as spam and that HotCRP email notifications are enabled.

4. Roles in the Review Process

Program Committee (PC) Chairs

Each year there are two program committee co-chairs. The PC chairs are solicited by the ICER steering committee and appointed by the SIGCSE board to serve a two-year term. One new appointment is made each year so that in any given year there is always a continuing program chair from the prior year and a new program chair. Appointment criteria include prior attendance and publication at ICER, past service on the ICER Program Committee, research excellence in Computing Education, and the collaborative and organizational skills to share oversight of the program selection process. The ICER Steering Committee solicits and selects candidates for future PC chairs.

Program Committee (PC) Members / Reviewers

PC members write reviews of submissions, evaluating them against the review criteria. The PC chairs invite and appoint the reviewers. The committee is sized so that each reviewer will review 5-6 submissions, or more depending on the size of the submission pool. Each reviewer serves a one-year term, with no limits on reappointment. Appointment criteria include expertise in relevant areas of computing education research and past reviewing experience in computing education research venues. Together, all reviewers constitute the program committee (PC). The PC chairs are responsible for inviting returning and new members of the PC, keeping in mind the various forms of diversity that are present at ICER.

Senior Program Committee Members (SPC) / Meta-Reviewers

SPC members review the PC members’ reviews, ensuring that the review content is constructive and aligned with the review criteria, as well as summarizing reviews and making recommendations for a paper’s acceptance or rejection. They also moderate discussions about each paper and provide feedback on reviews where necessary, asking reviewers to improve the quality of their reviews. Finally, they participate in a synchronous SPC meeting to make final recommendations about each paper and review authors’ minor revisions. The PC chairs invite and appoint Senior PC members with the approval of the steering committee, again keeping in mind the various forms of diversity that are present at ICER. Each Senior PC member can be appointed for up to three years in a row; after a hiatus of at least one year, preferably two, re-appointment is possible. The committee is sized so that each meta-reviewer will handle 8-10 papers, depending on the submission pool.

5. Principles Behind ICER Reviewing

The ICER review process is designed to work towards these goals:

  • Maximize the alignment between a paper and expertise required to review it.
  • Minimize conflicts of interest and promote trust in the process.
  • Maximize our community’s ability to make excellent, rigorous, trustworthy contributions to the science of computing education.

The call for papers and author guide should make this clear, but ICER is broadly scoped. The conference publishes research on teaching and learning of computer science content that happens in any context. In consequence, reviewers should not downgrade papers for being about a topic they personally perceive to be less important to computing education. If the work is sufficiently ready for publication and reviewers believe it is of interest to some part of the computing education community, it should be published such that the community can decide its importance over time.

6. Conflicts of Interest

ICER takes conflicts of interest, both real and perceived, quite seriously. The conference adheres to the ACM conflict of interest policy (https://www.acm.org/publications/policies/conflict-of-interest) as well as the SIGCSE conflict of interest policy (https://sigcse.org/policies/COI.html). These state that a paper submitted to the ICER conference is a conflict of interest for an individual if at least one of the following is true:

  • The individual is a co-author of the paper
  • A student of the individual is a co-author of the paper
  • The individual identifies the paper as a conflict of interest, i.e., that the individual does not believe that he or she can provide an impartial evaluation of the paper.

The following policies apply to conference organizers:

  • The chairs of any track are not allowed to submit to that track.
  • All other conference organizers are allowed to submit to any track.
  • All reviewers (PC members) and meta-reviewers (SPC members) are allowed to submit to any track.

No reviewer, meta-reviewer, or chair with a conflict of interest in the paper will be included in any evaluation, discussion, or decision about the paper. It is the responsibility of the reviewers, meta-reviewers, and chairs to declare their conflicts of interest throughout the process. The corresponding actions are outlined below for each relevant step of the reviewing process. It is the responsibility of the chairs to ensure that no reviewer or meta-reviewer is assigned a role in the review process for any paper for which they have a conflict of interest.

7. The Reviewing Process

Step 1: Authors Submit Abstracts

Authors submit a title and abstract one week before the full paper submission deadline; the PC chairs use this information to assign papers to reviewers. Authors are allowed to revise their title and abstract before the full paper submission deadline.

Step 2: Reviewers and Meta-Reviewers Bid for Papers

Reviewers and meta-reviewers will be asked to bid on papers for which they have sufficient expertise (in both phenomena and methods), and the PC chairs will then assign papers based on these bids. The purpose of bidding is not to express interest in papers you want to read; it is to express your expertise and eligibility for fairly evaluating the work. These are subtly but importantly different purposes.

  • Specify all of your conflicts of interest. Conflicts are any situation where you have any connection with a submission that is in tension with your role as an independent reviewer (you advised an author, you have collaborated with an author, you are at the same institution, you are close friends, etc.). After declaring conflicts, you will be excluded from all future evaluation, discussion, and decisions of that paper. Program chairs and submissions chairs will also specify conflicts of interest at this time.
  • Bid on all of the papers you believe you have sufficient expertise to review. Sufficient expertise includes knowledge of research methods used and prior research on the phenomena. Practical knowledge of a topic is helpful, but insufficient.
  • Do not bid on papers about topics, techniques, or methods that you strongly oppose. This protects authors from reviewers with a negative bias; see below for positive biases and how to control for them.

Step 3: Authors Submit Papers

Submissions are due one week after the abstracts are due. As described in the submission instructions (https://icer2022.acm.org/track/icer-2022-papers#Submission-Instructions), submissions must be sufficiently anonymized that a reader cannot determine the identity or affiliation of the authors. The main purpose of ICER’s anonymous reviewing process is to reduce the influence of potential (positive or negative) biases on reviewers’ assessments. You should be able to review the work without knowing the authors or their affiliations, so do not try to find out their identities. (Most guesses will be wrong anyway.) See the submission instructions for what constitutes sufficient anonymization; when in doubt, email the PC chairs at pc-chairs@icer.acm.org.

Step 4: PC Chairs Decide on Desk-Rejects

The PC chairs, with the help of the submissions chairs, will review each submission for violations of anonymization requirements, length restrictions, or plagiarism policies. Authors of desk rejected papers will be notified immediately. The PC chairs may not catch every issue. If you see something during review that you believe should be desk rejected, contact the chairs before you write a review; the PC chairs will make the final judgement about whether something is a violation and give you guidance on whether, and if so how, to write a review.

Managing Conflicts of Interest

PC chairs with conflicts are excluded from deciding on desk rejected papers, leaving the decision to the other program chair.

Step 5: PC Chairs Assign Reviewers

Based on the bids and their judgement, the PC chairs will collaboratively assign at least three reviewers (PC members) and one meta-reviewer (SPC member) to each submission. The PC chairs will be advised by HotCRP’s assignment algorithm, which depends on all bids being high quality. Remember: for these assignments to be fair, your bids should be based only on your expertise and eligibility; interest alone is not sufficient for bidding on a paper. The chairs will review the algorithm’s assignments to identify potential misalignments with expertise.

Managing Conflicts of Interest

PC chairs with conflicts are excluded from assigning reviewers to any papers for which they have a conflict. Assignments in HotCRP can only be made by a PC chair without a conflict.
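
To make the interplay of bids, expertise, and conflicts concrete, here is a deliberately simplified, hypothetical sketch of bid-based assignment. HotCRP’s actual autoassigner solves a global optimization problem; the greedy toy version below (all names and data structures are invented) only illustrates the idea that the strongest non-conflicted bids are preferred and review loads stay bounded:

    # Hypothetical sketch of bid-based assignment; not HotCRP's algorithm.
    # bids[reviewer][paper_id] is the reviewer's self-assessed expertise for
    # that paper; conflicts[reviewer] is the set of paper ids they declared.
    def assign_reviewers(paper_ids, bids, conflicts, per_paper=3, max_load=6):
        load = {reviewer: 0 for reviewer in bids}
        assignment = {}
        for pid in paper_ids:
            eligible = [
                (bids[reviewer].get(pid, 0), reviewer)
                for reviewer in bids
                if pid not in conflicts.get(reviewer, set())
                and load[reviewer] < max_load
            ]
            eligible.sort(reverse=True)  # strongest expertise bids first
            chosen = [reviewer for _, reviewer in eligible[:per_paper]]
            for reviewer in chosen:
                load[reviewer] += 1
            assignment[pid] = chosen
        return assignment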

Step 6a: Reviewers Review Papers

Assigned reviewers submit their anonymous reviews through HotCRP by the review deadline, evaluating each of their papers against the review criteria (see Review Criteria). The time allocated for reviewing is four weeks, in which 5–6 reviews need to be written. Because of the internal and external (publication) deadlines, there can be no extensions.

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the reviews of the papers they are conflicted on during this process.

Step 6b: Meta-Reviewers and PC Chairs Monitor Progress

Meta-reviewers and PC chairs will periodically check in to ensure that progress is being made.

Step 7: Reviewers and Meta-Reviewers Discuss Reviews

After the reviewing period, the assigned meta-reviewer asks the reviewers to read the other reviewers’ reviews and begin a discussion about any disagreements that arise. All reviewers are asked to do the following:

  • Read all the reviews of all papers assigned (and re-read your own reviews).
  • Engage in a discussion about sources of disagreement.
  • Use the review criteria to guide your discussions.
  • Be polite, friendly, and constructive at all times.
  • Be responsive and react as soon as new information comes in.
  • Remain open to other reviewers shifting your judgements.

If your judgement does shift, update your review to reflect your new views. There is no need to indicate to the authors that you changed your review, but do leave a comment for the other reviewers and the meta-reviewer indicating what you changed and why (HotCRP does not track changes).

Discussing a paper is not about who wins or who is right. It is about how, in light of all the information, a group of reviewers can reach the best decision on a paper. All reviewers (and the authors!) have their own unique perspective and competence. It is perfectly normal that they may have seen things you have not, just as you may have seen things they have not. The important thing is to accept that the group will see more than the individual. Therefore, you can always (and are encouraged to!) shift your stance in light of the extra knowledge.

The time allocated for this discussion is one week. As discussions about disagreeing reviews may take several (asynchronous) rounds, it is important to check in daily to see whether any new discussion items warrant attention. PC chairs will periodically check in. If you have configured HotCRP notifications correctly, you will be notified as soon as new information (another review or a new discussion item) about your paper comes in. It is important that you react to these as soon as possible. Do not let your colleagues wait for days when all that is needed is a short statement from your side.

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the discussions of the papers they are conflicted on during this process.

Step 8: Meta-Reviewers Write Meta-Reviews

After the discussion phase, meta-reviewers use the reviews, the discussion, and their own evaluation of the work to write a meta-review and recommendation. A meta-review should summarize the key strengths and weaknesses of the paper in light of the review criteria and explain how these led to the decision. The summary and explanation should help the authors revise their work where appropriate. A generic meta-review (“After long discussion, the reviewers decided that the paper is not up to ICER standards, and therefore rejected the paper”) is not sufficient. There are four possible meta-review recommendations: reject, discuss, conditional accept, and accept. The recommendation must be entered in the meta-review; a schematic sketch of these states follows the list below.

  • Reject. Ensure that the meta-review constructively summarizes the reviews and the rationale for rejection. The PC chairs will review all meta-reviews to ensure that they are constructive, and may ask meta-reviewers to revise them as necessary. The PC chairs will make the final rejection decision based on the meta-review rationale; if necessary, the paper will be discussed at the SPC meeting.
  • Discuss. Ensure that the meta-review summarizes the open questions that need to be resolved at the SPC meeting discussion, where the paper will be recommended as reject, conditional accept, or accept. Papers marked “discuss” will be scheduled for discussion at the SPC meeting. Any paper for which the meta-reviewer’s opinion and the majority of reviewer recommendations do not align should be marked “discuss” as well.
  • Conditional Accept. Ensure that the meta-review explicitly and clearly states the conditions that must be met, via minor revisions, before the paper can be accepted. The conditions must be feasible to implement within the one-week revision period, so they must be minor. The PC chairs will make the final decision on whether the requested revisions are minor enough to warrant conditional acceptance; if necessary, the paper will be discussed at the SPC meeting.
  • Accept. These papers will be accepted, assuming authors deanonymize the paper and meet the final version deadline. For technical reasons, “accept” recommendations are recorded internally as “conditional accept” recommendations that do not state any conditions for acceptance other than submitting the final version. The PC chairs will make the final acceptance decision based on the meta-review rationale; if necessary, this paper will be discussed at the SPC meeting.
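
As a schematic illustration of the four recommendation states and the internal recording of “accept”, consider the following hypothetical Python sketch (not part of HotCRP; all names are invented):

    from dataclasses import dataclass, field
    from enum import Enum

    class Recommendation(Enum):
        REJECT = "reject"
        DISCUSS = "discuss"
        CONDITIONAL_ACCEPT = "conditional accept"
        ACCEPT = "accept"

    @dataclass
    class MetaReview:
        recommendation: Recommendation
        conditions: list[str] = field(default_factory=list)

        def as_recorded(self) -> "MetaReview":
            # Per the guidelines, an "accept" is recorded internally as a
            # "conditional accept" whose only condition is submitting the
            # final, deanonymized version by the deadline.
            if self.recommendation is Recommendation.ACCEPT:
                return MetaReview(Recommendation.CONDITIONAL_ACCEPT, [])
            return self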

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process.

Step 9: PC Chairs and Meta-Reviewers Discuss Papers

The PC chairs will host synchronous SPC meetings with all available meta-reviewers (SPC members) to discuss and decide on all “discuss” and “conditional accept” papers. Before this meeting, a second meta-reviewer will be assigned to each such paper, ensuring that there are at least two meta-reviewers to facilitate discussion. Each meta-reviewer assigned to a paper should come prepared to present the paper, its reviews, and the HotCRP discussion; their job is to present their recommendation or, if they requested discussion, the uncertainty that prevents them from making one. All meta-reviewers who are available to attend an SPC meeting session should, at a minimum, skim each of the papers to be discussed and their reviews (excluding those for which they are conflicted), so they are familiar with the papers and their reviews prior to the discussions.

At the meeting, the goal is to collectively reach consensus, rather than relying on the PC chairs alone to make final decisions. Papers may move from “discuss” to “reject”, “conditional accept”, or “accept”; if there are conditions, they must be approved by a majority of the non-conflicted SPC members and PC chairs at the discussion. After a decision is made in each case, the original SPC member will add a summary of the discussion to the end of their meta-review, explaining the rationale for the final decision and any conditions for acceptance, and will update the recommendation tag in HotCRP.

Managing Conflicts of Interest

Meta-reviewers conflicted on a paper will not be assigned as a second reader. Any meta-reviewer or PC chair conflicted on a paper will be excluded from the paper’s discussion, returning after the discussion is over.

Step 10: PC Chair Review

Before announcing decisions, the non-conflicted PC chairs will review all meta-reviews to ensure as much clarity and consistency with the review process and its criteria as possible.

Managing Conflicts of Interest

PC chairs cannot change the outcome of an accept or reject decision after the SPC meeting.

Step 11: Notifications

After the SPC meeting, the PC chairs will notify all authors of the decisions about their papers; these notifications will be sent via email through HotCRP. Authors of (unconditionally) accepted papers will be encouraged to make any changes that were suggested but not required; authors of conditionally accepted papers will be reminded of the revision evaluation deadline.

Step 12: Authors of Conditionally Accepted Papers Revise their Papers

Authors of conditionally accepted papers have one week to incorporate the requested revisions and to submit their final versions for review by the assigned meta-reviewer.

Step 13: Meta-Reviewers Check Revised Papers

Meta-reviewers will check the revised papers against the required revisions. Based on the outcome, they will change their recommendation to either “accept” or “reject” and update their meta-reviews accordingly.

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process.

Step 14: Notifications

PC chairs will sanity-check all comments on papers for which revisions were submitted. Conditionally accepted papers for which no revisions were received will be marked as “reject”. The PC chairs then finalize decisions: all recommendations are converted to official accept or reject decisions in HotCRP, and authors are notified of these final decisions via email sent through HotCRP. Authors then have one week to submit to ACM TAPS for final publication.

Managing Conflicts of Interest

Reviewers, meta-reviewers, and PC chairs with conflicts cannot see any of the recommendations or meta-reviews of the papers they are conflicted on during this process. PC chairs with conflicts cannot see or edit any final decision on these papers.

8. Review Criteria

ICER currently evaluates papers against the following reviewing criteria, as independently as possible. These have been carefully chosen to be inclusive of many phenomena, epistemologies, and contribution types.

To be published at ICER, papers should be positively evaluated on all of the criteria below (Criteria A–F). These feed into a final, summary criterion: “Based on the criteria above, this paper should be published at ICER.”

Below, we discuss each criterion in turn.

Criterion A: The submission is grounded in relevant prior work and leverages available theory when appropriate.

Papers should draw on relevant prior work and theories, and explicitly show how they are tied to the questions addressed. After reading the paper, one should feel more informed about the prior literature and how it relates to the paper’s contributions. Such coverage of related work might come before a work’s contributions, or it might come after (e.g., connecting a new theory derived from observations to prior work). Note that not all types of research will have relevant theory to discuss, nor do all contribution types need theory to make significant advances. For example, a surprisingly robust but unexplained correlation might be an important discovery that later work could develop theory to explain. Reviewers should identify related work the authors might have missed and include pointers. Missing a relevant paper that would not dramatically change the submission is not sufficient grounds for rejection; such citations can be added at reviewers’ request prior to publication. Criticism that leads to downgrading a paper should instead focus on missing prior work or theories that would significantly alter the research questions, analysis, or interpretation of results.

Guidelines for (Meta-)Reviewers

Since prior work and theories need to be covered sufficiently and meaningfully, but not necessarily completely, (meta-)reviewers are asked to do the following:

  • Refrain from downgrading work based on missing one or two peripherally related papers. Just note them, helping the authors to broaden their citations.
  • Refrain from downgrading work based on not citing the reviewer’s own work, unless it really is objectively highly relevant.
  • Refrain from downgrading work based on where in a paper they address prior work. Sometimes a dedicated section is appropriate, sometimes it is not. Sometimes prior work is better addressed at the end of a paper, not at the beginning.
  • Make sure to critically note if work simply lists papers without meaningfully addressing their relevance to the paper’s questions or innovations.
  • Refrain from downgrading work based on making discoveries inconsistent with theory. The point of empirical work is to test and refine theories, not conform to them.
  • Refrain from downgrading work for not building on theory when no sufficient theory exists that can be pointed out in the review. Conversely, if a relevant theory is missing, it should be named.
  • Refrain from downgrading work based on not using the reviewer’s interpretation of a theory. Many theories have multiple competing interpretations and multiple distinct facets that can be seen from multiple perspectives.

Criterion B: The submission describes its methods and/or innovations sufficiently for others to understand how data was obtained, analyzed, and interpreted, or how an innovation works.

An ICER paper should be self-contained in the sense that readers should be able to understand most of the key details about how the authors conducted their work or made their innovation possible. This is key for replication and meta-analysis of studies that come from positivist or post-positivist epistemologies. For interpretivist work, it is likewise key for what Checkland and Holwell called “recoverability” (see Tracy 2010 for a detailed overview of how to evaluate qualitative work). Reviews should thus focus on omissions of research-process or innovation details that would significantly alter your judgment of the paper’s validity.

Guidelines for (Meta-)Reviewers

Since ICER papers have to adhere to a word count limit and since there are always more details a paper can describe about methods, (meta-)reviewers are asked to do the following:

  • Refrain from downgrading work based on not describing every detail.
  • Refrain from asking authors to write substantially new method details unless you can identify content for them to cut, or there is space to add those details within the length restrictions.
  • Refrain from asking authors of theory contributions for a traditional methods section; such contributions do not require them, as they are not empirical in nature.
  • Feel free to ask authors for minor revisions that would support replication or meta-analysis for positivist or post-positivist works, and recoverability for interpretivist works using qualitative methods.

Criterion C: The submission’s methods and/or innovations soundly address its research questions.

The paper should answer the questions it poses, and it should do so with rigor, broadly construed. This is the single most important difference between research papers and other kinds of knowledge sharing in computing education (e.g., experience reports), and the source of certainty researchers can offer. Note that soundness is relative to claims. For example, if a paper claims to have provided evidence of causality, but its methods did not do that, that would be grounds for critique. But if a paper only claimed to have found a correlation, and that correlation is a notable discovery that future work could explain, downgrading it for not demonstrating causality would be inappropriate.

Guidelines for (Meta-)Reviewers

Since soundness is relative to claims and methods, (meta-)reviewers are asked to do the following:

  • Refrain from applying criteria for quantitative methods to qualitative methods (e.g., critiquing a case study for a “small N” makes no sense; that is the point of a case study).
  • Refrain from downgrading work based on a lack of a statistically significant difference if the study demonstrates sufficient power to detect a difference. A lack of difference can be discovery, too.
  • Refrain from asking for the paper to do more than it claims if the demonstrated claims are sufficiently publishable (e.g., “I would publish this if it had also demonstrated knowledge transfer”).
  • Refrain from relying on inexpert, anecdotal judgments (e.g., “I don’t know much about this but I played with it once and it didn’t work”).
  • Refrain from assuming that a method is not standard elsewhere just because it has not been used in the computing education literature. The field draws on methods from many communities; look for evidence that the method is used elsewhere.

Criterion D: The submission advances knowledge of computing education by addressing (possibly novel) questions that are of interest to the computing education community.

A paper can meet the previous criteria and still fail to advance what we know about the phenomena. It is up to the authors to convince you that the discoveries advance our knowledge in some way, e.g., by confirming uncertain prior work, adding a significant new idea, or making progress on a long-standing open question. Secondarily, there should be someone who might find the discovery interesting. It does not have to be interesting to a particular reviewer, and a particular reviewer does not have to be absolutely confident that an audience exists. As the PC cannot possibly reflect the broader audience of all readers, a probable audience is sufficient for publication.

Guidelines for (Meta-)Reviewers

Since advances can come in many forms, many criticisms are inappropriate in isolation (though if several apply together, they may justify rejection). (Meta-)reviewers are thus asked to do the following:

  • Refrain from downgrading work because another, single paper was already published on the topic. Discoveries accumulate over many papers, not just one.
  • Refrain from downgrading work that contributes a really new idea for not yet having everything figured out about it. Again, new discoveries may require multiple papers.
  • Refrain from downgrading work because the results do not appear generalizable or were only obtained at a specific institution. Many papers explicitly discuss such limitations and possible remedies. Also, generalizability takes time, and, by their very nature, some qualitative methods do not lead to generalizable results.
  • Refrain from downgrading work based on “only” being a replication. Replications, if done with diligence, are important.
  • Refrain from downgrading work based on investigating phenomena you personally do not like (e.g., “I hate object-oriented languages, this work does not matter”).

Criterion E: Discussion of results clearly summarizes the submission’s contributions beyond prior work and its implications for research and practice.

It is the authors’ responsibility to help interpret the significance of a paper’s discoveries. If it makes significant advances, but does not explain what those advances are and why they matter, the paper is not ready for publication. That said, it is perfectly fine if you disagree with the paper’s interpretations or implications. Readers will vary on what they think a discovery means or what impact it might have on the world. All that is necessary is that the work presents some reasonably sound discussion of one possible set of interpretations.

Guidelines for (Meta-)Reviewers

Because there is no single “right” interpretation or discussion of implications, (meta-)reviewers are asked to do the following:

  • Refrain from downgrading work because you do not think the idea would work in your institution.
  • Refrain from downgrading work because you think that the impact is limited. Check the discussion of limitations and threats to validity and evaluate the paper with respect to the claims made.
  • Make sure to critically note if work makes interpretations that are not grounded in evidence or proposes implications that are not grounded in evidence.

Criterion F: The submission is written clearly enough to publish.

Papers need to be clear and concise, both to be comprehensible to diverse audiences and to ensure the community is not overburdened by verbosity. We recognize that not all authors are fluent English writers; if, however, a paper requires significant editing to be comprehensible to fluent English readers, or is unnecessarily verbose, it is not yet ready for publication.

Guidelines for (Meta-)Reviewers

Since submissions need only be clear enough to publish, (meta-)reviewers are asked to do the following:

  • Refrain from downgrading work based on having easily fixed spelling and grammar issues.
  • Refrain from downgrading a sufficiently clear paper because it could be clearer. All writing can be clearer in some way.
  • Refrain from downgrading work based on not using all of the available word count. It is okay if a paper is short but significant.
  • Refrain from asking for more detail unless you are certain there is space or, if there is not, you can provide concrete suggestions for what to cut.

Summary: Based on the criteria above, this paper should be published at ICER.

Based on all of the previous criteria, decide how strongly you believe the paper should be accepted or rejected, assuming authors make any modest, straightforward minor revisions you and other reviewers request before publication. Papers that meet all of the criteria should be strongly accepted (though this does not imply that they are perfect). Papers that fail to meet most of the criteria should be strongly rejected. Each paper should be reviewed independently of others, as if it were a standalone journal submission. There are no conference presentation “slots”; there is no target acceptance rate. Neither should be a factor in reviewing individual submissions.

Guidelines for (Meta-)Reviewers

Because each paper should be judged on its own, (meta-)reviewers are asked to do the following:

  • Refrain from recommending to accept a paper because it was the best in your set. It is possible that none of your papers sufficiently meet the criteria.
  • Refrain from recommending to reject a paper because it should not take up a “slot”. The PC chairs will devise a program for however many papers sufficiently meet the criteria, whether that is 5 or 50. There is no need to preemptively design the program through your review; focus on the criteria.

9. Award Recommendations

On the review form, reviewers may signal to the meta-reviewer and PC chairs that they believe a submission should be considered for a best paper award. Selecting this option is visible to the other (meta-)reviewers as part of your review, but it is not disclosed to the authors. Reviewers should recognize papers that best illustrate the highest standards of computing education research, taking into account the quality of the questions asked, the methodology, the analysis, the writing, and the contribution to the field. This includes papers that meet all of the review criteria in exemplary ways (e.g., research that was particularly well designed, executed, and communicated), or papers that meet specific review criteria in exemplary ways (e.g., discoveries that are particularly significant or sound).

The meta-review form for each paper includes an option to officially nominate the paper to the Awards Committee for the best paper award. Reviewers may flag papers for award consideration during review, but meta-reviewers are ultimately responsible for nominations. Each meta-reviewer may nominate at most two papers; nominated papers may or may not have been flagged by one or more reviewers. Nominations should be recorded in HotCRP, accompanied by a paragraph outlining the rationale. NOTE: Whether a paper has been nominated, and the accompanying rationale, are not disclosed to the authors as part of the meta-review.

Meta-reviewers are encouraged to review and finalize their nominations at the conclusion of the SPC meeting to allow for possible calibration. Once paper decisions have been sent, the submissions chair will make the PDFs and corresponding rationales for all nominated papers available to the Awards Chair. Additionally, a list of all meta-reviewers who have handled, or have a conflict of interest with, any nominated paper will be disclosed to the Awards Chair, as those members are not eligible to serve on the Awards Committee.

10. Possible Plagiarism, Misrepresentation, and Falsification

If, after reading a submission, you suspect that it has in some way plagiarized from another source, misrepresented its data, or falsified its results, contact the PC chairs (pc-chairs@icer.acm.org) with your concerns. The chairs will investigate and decide as necessary prior to the acceptance notification deadline. You should not mark the paper for rejection based on suspected plagiarism alone; review the paper as it stands while the PC chairs investigate.

11. Practical Suggestions for Writing Reviews

The following suggestions may be helpful when reviewing papers:

  1. Before reading, remind yourself of the preceding reviewing criteria.
  2. Read the paper, and as you do, note positive and negative aspects for each of the preceding reviewing criteria.
  3. Use your notes to outline a review organized by the seven criteria, so authors can understand your judgments for each criterion.
  4. Draft your review based on your outline.
  5. Edit your review, making it as constructive and clear as possible. Even a very negative review should be respectful to the author(s), helping to educate them. Avoid comments about the author(s) themselves; focus on the document.
  6. Based on your review, choose scores for each of the criteria.
  7. Based on your review and scores, choose a recommendation score and decide whether to recommend the paper for consideration for a best paper award.

Thank you very much for reading this document and thank you very much for being part of the ICER reviewing process. Do not hesitate to email the Program Co-Chairs at pc-chairs@icer.acm.org if you have any questions.

Notes on the 2022 Reviewing Process and Outcomes

This year, we received 173 papers, of which 151 were sent out for review (the others were withdrawn or desk rejected). Of these, we accepted or conditionally accepted 25 papers. While this acceptance rate (16%) is low, it resulted from a process with multiple safeguards, which this post describes.

We did not use numerical cutoff values, nor did we impose an upper limit based on an assumed number of “slots” in the conference. Instead, each paper was considered individually, based on its merits as perceived by the reviewers, the senior program committee (SPC), and the PC chairs. The only exception was for systematic literature reviews: because individual reviewers were applying different criteria to these submissions, the PC chairs performed a cross-submission consistency check as part of the decision process.

ICER’s process, as described in the reviewer guidelines on the website, has three phases: (1) the reviewers submit reviews; (2) the SPC member assigned to each paper leads a discussion among its reviewers; and (3) the SPC and chairs meet to discuss decisions for papers for which a clear decision did not emerge during (2). The chairs monitored the discussions in (2), again checking for consistent application of the criteria. Each paper receives a “metareview”, written by the assigned SPC member, which summarizes the discussions from phases (2) and (3).

Our process this year asked SPC members to try to identify champions for each paper from among its reviewers. This served as a calibration check, since different reviewers interpret ratings such as “weak accept” differently: some gave low scores but assumed the paper would be fine after “conditional accept” revisions, while others gave higher scores based on the assumed clarity or contribution post-revision. Papers with multiple accept-range ratings but no champion were either discussed at the SPC meeting (in which case a second SPC member evaluated the paper to arrive at a decision) or reviewed by the PC chairs. Both processes checked that concrete, actionable weaknesses were given for rejecting papers, rather than issues with interest in, or the novelty of, topics or ideas.

In total, 37 papers were discussed at the SPC meeting, which (across its four parts to accommodate different time zones) lasted for almost 10 hours.

All metareviews were approved by the reviewers and/or confirmed during the SPC meetings. In some cases, the individual review scores may appear inconsistent with the final decision: some reviewers chose to adjust their ratings following discussion, while others did not. Please refer to the metareview to understand which comments or issues emerged as significant during the decision phase.

A handful of rejected papers were “close”, in that they were missing key information or raised clarifying questions that might have been addressed in a short period of time. However, conditional acceptance at ICER is a promise to accept the paper if all stated issues are resolved; if the SPC determined that the missing details could change the assessment of the result, the paper was rejected. While conferences in other areas of CS have shepherding processes and/or rebuttal periods to handle such situations, these are not currently part of ICER’s process.

Finally, upon completion of the SPC meetings, the PC chairs once more reviewed all papers that were discussed at the SPC meeting but not moved forward towards (conditional) acceptance, verifying that the criteria for rejection were applied consistently, both within this group and in comparison to the (conditionally) accepted papers.