ICER 2022
Sun 7 - Thu 11 August 2022 Lugano, Switzerland

Call for Papers

Aims and Scope

The 18th annual ACM Conference on International Computing Education Research (ICER) aims to gather high-quality contributions to the Computing Education Research discipline. The “Research Papers” track invites submissions describing original research results related to any aspect of teaching and learning computing, from introductory through advanced material. Submissions are welcome from across the research methods used in Computing Education Research and related fields. Each contribution will be assessed based on the appropriateness and soundness of its methods, its relevance to computing education, and the depth of its contribution to the community’s understanding of the question at hand.

Research areas of particular interest include:

  • design-based research, learner-centered design, and evaluation of educational technology supporting computing knowledge or skills development,
  • discipline-based education research (DBER) about computing, computer science, and related disciplines,
  • informal learning experiences related to programming and software development (all ages), ranging from after-school programs for children, to end-user development communities, to workplace training of computing professionals,
  • learnability of programming languages and tools,
  • learning analytics and educational data mining in computing education contexts,
  • learning sciences work in the computing content domain,
  • measurement instrument development and validation (e.g., concept inventories, attitude scales, etc.) for use in computing disciplines,
  • pedagogical environments fostering computational thinking,
  • psychology of programming,
  • rigorous replication of empirical work to compare with or extend previous empirical research results,
  • teacher professional development at all levels.

While the above list is not exhaustive, authors in doubt about the suitability of their work for this track are also invited to consider the calls for papers for the “Lightning Talks & Posters” and “Work-in-Progress” tracks.

Please see the Submission Instructions for details on how to prepare your submission. As a published ACM author, you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Note also the ACM policy on Plagiarism, Misrepresentation, and Falsification.

All questions about this call should go to the ICER 2022 program committee chairs at pc-chairs@icer.acm.org.

Important Dates

All submission deadlines are “anywhere on Earth” (AoE, UTC-12).

  • Titles, abstracts, and authors due (the chairs will use this information to assign papers to PC members): Friday, March 18th, 2022, AoE
  • Full paper submission deadline: Friday, March 25th, 2022, AoE
  • Decisions announced: Tuesday, May 24th, 2022
  • “Conditional Accept” revisions due: Wednesday, June 1st, 2022
  • “Conditional Accept” revisions approval notification: Wednesday, June 8th, 2022
  • Final versions due to TAPS: Wednesday, June 15th, 2022, AoE
  • Published in the ACM Digital Library: the official publication date is the date the proceedings are made available in the ACM Digital Library, which will be the first day of the conference. The official publication date may affect the deadline for any patent filings related to published work.

Guidelines

We maintain two sets of guidelines, author guidelines and reviewer guidelines, to increase the transparency of our processes.

The ICER conference maintains an evolving set of author guidelines, containing recommendations about scope, statistics, qualitative methods, theory, and other concerns that may arise when drafting your submission. These guidelines are a ground truth for reviewers; study them closely as you plan your research and prepare your submission. The Reviewer Guidelines, referenced throughout this call, describe the review process and the criteria reviewers apply.

Submission Process

Submit at the ICER 2022 HotCRP site.

When you submit the abstract or full version ready for review, you need to perform the following actions:

  • Check the checkbox “ready for review” at the bottom of the submission form. (Otherwise it will be marked as a draft.)

  • Check the checkbox “I have read and understood the ACM Publications Policy on Research Involving Human Participants and Subjects”. Note: “Where such research is conducted in countries where no such local governing laws and regulations related to human participant and subject research exist, Authors must at a bare minimum be prepared to show compliance with the above detailed principles.”

  • Check the checkbox “I have read and understood the ACM Policy on Plagiarism, Misrepresentation, and Falsification; in particular, no version of this work is under submission elsewhere.” Make sure to disclose possible overlap with your own previous work (“redundant publication”) to the ICER Program Committee co-chairs.

  • Check the checkbox “I have read and understood the ICER Anonymization Policy” (see below).

ICER Anonymization Policy

ICER research paper submissions will be reviewed using a double-anonymous process: the authors do not know the identity of the reviewers and the reviewers do not know the identity of the authors. To ensure this:

  • Avoid titles that indicate a clearly identifiable research project.

  • Remove author names and affiliations. (If you are using LaTeX, you can start your document declaration with \documentclass[manuscript,review,anonymous]{acmart} to easily anonymize these; see the sketch after this list.)

  • Avoid referring to yourself when citing your own work.

  • Avoid references to your affiliation. For example, instead of naming your actual university, you might write “A Large Metropolitan University (ALMU)” rather than “Auckland University of Technology (AUT)”.

  • Redact any other identifying information, such as contributors, course numbers, IRB names and numbers, and grant titles and numbers, from the main text and the acknowledgements.

  • Omit author details from the PDF you generate, such as author name or the name of the source document. These are often automatically inserted into exported PDFs, so be sure to check your PDF before submission.
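
For LaTeX authors, here is a minimal sketch of an anonymized submission set-up; the title, author, and affiliation below are placeholders, and the comments note what the class options do and do not handle for you:

  % A sketch only: the "anonymous" option suppresses the author block in the
  % generated PDF, but you must still scrub the body text, acknowledgements,
  % and PDF metadata yourself.
  \documentclass[manuscript,review,anonymous]{acmart}

  \title{Your Title Here (avoid clearly identifiable project names)}
  \author{Anonymous Author}
  \affiliation{\institution{Anonymous Institution}\country{Anonymous}}

  \begin{document}
  \maketitle
  % ... body, with self-citations written in the third person ...
  \end{document}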

Do not simply cover identifying details with a black box: the text underneath can easily be revealed by dragging the cursor over it, and it will still be read by screen readers.

Work that is not sufficiently anonymized will be desk-rejected by the PC chairs without offering an option to redact and resubmit.

Conflicts of Interest

The SIGCSE Conflict of Interest policy applies to all submissions. You can review how conflicts will be managed by consulting our Reviewer Guidelines, which detail our review process.

Submission Format and Publication Workflow

Papers submitted to the research track of ICER 2022 must be prepared according to the ACM TAPS workflow. Read this page carefully to understand the new workflow.

The most notable change from ICER conferences prior to 2021 is that the submission format and the publication format differ. The final publication format separates content from presentation in support of accessibility. For submission, we standardize on a single-column presentation.

  • The submission template is either the single-column Word Submission Template or the single-column LaTeX template (using the “manuscript,review,anonymous” options of the acmart class; see sample-manuscript.tex in the LaTeX master template samples for an example). Reviewers will review in this single-column format. You can download these templates from the ACM Master Article Templates page.

  • The publication template is either the Word template or the LaTeX template using the “sigconf” style in acmart. You can download the templates from the ACM TAPS workflow page, where you can also see example papers using the TAPS-compatible Word and LaTeX templates. If your paper is accepted, you will use the TAPS system to generate your final publication outputs. This involves more than just submitting a PDF: you will instead submit your Word or LaTeX source files and fix any errors in your source before the final version deadline listed above. The final published version will be in the ACM two-column conference PDF format (as well as XML, HTML, and ePub formats in the future).

For LaTeX users, be aware that there is a list of approved LaTeX packages for use with ACM TAPS. Not all packages are allowed.
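
To make the two stages concrete for LaTeX authors: in essence, only the \documentclass options differ between them. A sketch, not a complete preamble:

  % Submission version: single column, anonymized, line numbers for review.
  \documentclass[manuscript,review,anonymous]{acmart}

  % Publication version: two-column conference format, produced via TAPS
  % after acceptance.
  \documentclass[sigconf]{acmart}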

This separation of submission and publication format results in several benefits:

  • Improved quality of paper metadata, improving ACM Digital Library search.

  • Multiple paper output formats, including PDFs, responsive HTML5, XML, and ePub.

  • Improved accessibility of paper content for people with disabilities.

  • Streamlined publication timelines.

One consequence of this new publication workflow is that it is no longer feasible to limit papers by page count, as the single-column submission format and the final two-column format result in hard-to-predict differences in length. When this workflow was introduced in 2021, the 2021 PC chairs and the ICER Steering Committee considered several policies for managing length, and decided to limit length using word count instead. There is no single established way to count words, so here is how we will count for ICER 2022: authors may submit papers up to 11,000 words in length, excluding acknowledgements, references, and figures, but including all other text (tables included). The PC chairs will use the following procedures for counting words in the TAPS-approved formats:

  • For papers written in the Microsoft Word template, Word’s built-in word-count mechanism will be used, selecting all text except acknowledgements and references.

  • For papers written in the LaTeX template, the document will be converted to plain text using the “ExtractText” functionality of the Apache PDFBox suite (see here) and then post-processed with a standard command-line word count tool (“wc -w”, to be precise). Line numbers added by the “review” class option for LaTeX will be removed prior to counting using “grep -v -E ‘^[0-9]+$’” (thanks to N. Brown for this).

    • We acknowledge that many authors may want to use Overleaf to avoid dealing with command-line tools and, consequently, may be less enthusiastic about using another command-line tool to assess the word count. As configured by default, Overleaf’s word count (which is based on the texcount tool) does not count text in tables, captions, and math formulae and is thus very likely to significantly underestimate the number obtained through the tool described above. To obtain a more realistic word count while writing the manuscript, authors need to take these additional steps:

      • Add the following lines at the very beginning of your Overleaf LaTeX document:
      %TC:macro \cite [option:text,text]
      %TC:macro \citep [option:text,text]
      %TC:macro \citet [option:text,text]
      %TC:envir table 0 1
      %TC:envir table* 0 1
      %TC:envir tabular [ignore] word
      %TC:envir displaymath 0 word
      %TC:envir math 0 word
      %TC:envir comment 0 0
      
      • Make sure to write math formulae delimited by \begin{math} \end{math} for in-line math and \begin{displaymath} \end{displaymath} for equations. Do not use dollar signs or \[ \]; these will cause Overleaf not to count math tokens (unlike Word and pdfbox) and will thus underestimate your word count. (A short example follows this list.)
    • The above flags will ensure that in-text citations, tables, and math formulae will be counted but that comments will be ignored.

    • The above flags do not cover more advanced LaTeX environments, so if authors use such environments, they should interpret the Overleaf word count with care (then again, if authors know how to work with such environments it is very reasonable to assume that they also know how to work with command-line tools such as pdfbox).

    • Authors relying on the Overleaf word count should be aware that the submission chairs will not have access to the source files and cannot re-run or verify any counting mechanism used by the submitting authors. To treat all submission types fairly, only the approved tools mentioned above will be used for the word count. That said, the submission chairs will operate under a bona fide assumption when it comes to extreme borderline cases.

  • Papers in either format may not use figures to render text in ways that work around the word count limit; papers abusing figures in this way will be desk-rejected.
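
Returning to the math markup advice above, here is a short hypothetical snippet written so that Overleaf’s counter, configured as described, will count the math tokens:

  % In-line math, counted by the configuration above:
  The sample consisted of \begin{math}n = 42\end{math} students.
  % Display math, also counted:
  \begin{displaymath}
    \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i
  \end{displaymath}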

A paper is acceptable if it is under the word count limit according to either of the approved tools above. The submission chairs will evaluate each submission using the procedures above, notify the PC chairs of papers exceeding the limit, and desk-reject any papers that exceed it.

We expect papers to vary in word count. Abstracts may vary in length; less than 300 words is a good guideline for conciseness. A submission’s length should be commensurate with its contributions; we expect most papers to be less than 9,000 words according to the rules above, though some may use up to the limit in order to convey details the authors deem necessary to evaluate the work. Papers may be judged as too long if they are repetitive or verbose, violate formatting rules, or use figures to save on word count. Papers may be judged as too short if they omit critical details or ignore relevant prior work. See the reviewer guidelines (to be updated soon) for more on how reviewers are expected to assess conciseness.

All of the procedures above, and the TAPS workflow, will likely undergo continued iteration in partnership with ACM, the ICER Steering Committee, and the SIGCSE board. Notify the chairs of questions, edge cases, and other concerns to help improve this new workflow.

Acceptance and Conditional Acceptance

All papers recommended for acceptance after the Senior PC meetings are either accepted or conditionally accepted. For accepted papers, no resubmission is required; authors of such papers can submit an approved version to TAPS. For conditionally accepted papers, the paper’s meta-review will indicate one or more minor revisions that are necessary for final acceptance; authors are responsible for submitting these minor revisions to HotCRP prior to the “Conditional Accept” revisions deadline in the Important Dates above. The Senior PC and program chairs will review the final revisions; if they are acceptable, the paper will be officially accepted, and authors will have one week to submit an approved version to TAPS for publication. If the PC judges that the requested minor revisions were not suitably addressed, the paper will be rejected.

Because the turnaround time for conditional acceptance is only one week, requested revisions will necessarily be minor: they may include presentation issues or requests for added clarity or details helpful for future readers of the archived paper. New results, new methodological details that change the interpretation of the results, or other substantially new content will neither be asked for nor allowed to be added.

Kudos

After a paper has been accepted and uploaded into the ACM Digital Library, authors will receive an invitation from Kudos to create an account and add a plain-language summary on its platform. The Kudos “Shareable PDF” integration with ACM then allows an author to generate a PDF to upload to websites such as author homepages, institutional repositories, and preprint services such as arXiv. This PDF contains the author’s plain-text summary of the paper as well as a link to the full-text version of the article in the ACM Digital Library, adding to the DL download and citation counts there, as well as adding views from other platforms to the author’s Kudos dashboard.

Using Kudos is entirely optional. Authors may also use the other ACM copyright options to share their work (retaining copyright, paying for open access, etc.).

Author Guidelines

If you are reading this page, you are probably considering submitting to ICER. Congratulations! We are excited to review your work. Whether your research is just starting or nearly finished, this guide is intended to help authors meet the expectations of the computing education research community. It reflects a community-wide perspective on what constitutes rigorous research on the teaching and learning of computing.

Read on for our community’s current guidelines, and if you like, read our Reviewer Guidelines (to be made available soon) to understand our review process and review criteria.

What’s in scope at ICER?

ICER’s goal is to be an inclusive conference, both with respect to epistemology (how we know we know things) and with respect to phenomena (who is learning and in what context). Therefore, any research related to the teaching and learning of computing is in scope, using any definition of computing, and using any methods. We particularly encourage work that goes beyond the community’s past focus on introductory programming courses in post-secondary education, such as work on primary and secondary education, more advanced computing concepts, informal learning in any setting, or learning amongst adults. (However, note that simply using computing technology to perform research in an educational setting is not in itself enough; the focus must be on the teaching or learning of computing topics.) If you have not seen a particular topic published at ICER, or you have not seen a particular method used, that is okay. We value new topics, new methods, new perspectives, and new ideas just as much as more broadly accepted ones.

That said, under the current review process, we cannot promise that we have recruited all the necessary expertise to our program committee to fairly review your work. Check who is on the program committee this year, and if you do not see a lot of expertise on your methods or phenomena, make sure your submission spends a bit of extra time explaining theories or methods that reviewers are unlikely to know. If you have any questions regarding this, email the program chairs (pc-chairs@icer.acm.org).

Note that we used the word “research” above. Research is hard to define, but we can say that ICER is not a place to submit practical descriptions of courses, curriculum, or instruction materials you want to share. If you’re looking to share your experiences at a conference, consider submitting to the SIGCSE Technical Symposium’s Experience Report or Position and Curricula Initiatives tracks. Research, in contrast, should meet the criteria presented throughout this document.

What makes a good computing education research paper?

It’s impossible to anticipate every kind of paper that might be submitted. The current ICER review criteria are listed in the Reviewer Guidelines (to be made available soon). These will evolve over time as the community grows. There are many other criteria that reviews could discuss in relation to specific types of research contributions, but the criteria listed there are generally inclusive of many epistemologies and contribution types. This includes empirical studies that answer research questions, replicate prior results, or present negative research results, as well as other, non-empirical types of research that provide novel or deepened insights into the teaching and learning of computer science content.

What prior work should be cited?

As with any research work, your submission should cite all significant publications that are relevant to your research questions. With respect to ICER submissions, this may include not only work that has been published in ACM-affiliated venues like ICER, ITiCSE, SIGCSE, and Koli Calling, but also the wide range of conferences and journals in the learning sciences, education, educational psychology, HCI, and software engineering. If you are new to research, consider guides on study design and surveys of prior work like the 2019 Cambridge Handbook of Computing Education Research, which attempts to survey most of what we know about computing education up to 2018.

Papers will be judged on how adequately they are grounded in prior work published across academia. They will also be assessed on the accuracy of their citations of related work: read what you cite closely and ensure that the findings of the published work actually support your claims; many of the authors of the works you are likely to cite are members of the computing education research community and may be your reviewers. Finally, papers will also be expected to return to prior work in a discussion of the paper’s contributions. All papers should explain how their contributions advance upon prior work, cause us to reinterpret prior work, or reveal conflicts with prior work.

How might theory be used?

Different disciplines across academia vary greatly in how they use and develop theory. At the moment, the position of the community is that theory can be a useful tool for framing research, connecting it to prior work, and interpreting findings. Papers can also contribute new theories or refine existing ones. However, it may also be possible for papers to be atheoretical, discovering interesting new relationships or interventions that cannot yet be explained. All of these uses of theory are appropriate.

It is also possible to misuse theory. Sometimes the theory used is too general for a question, where a theory more specific to computing education might be appropriate. In other cases, a theory might be wrongly applied to some phenomena, or a paper might use a theory that has been discredited. When using a theory, take care to understand its history, the body of evidence for and against its claims, and its scope of relevance.

Note that our community has discussed the role of theory multiple times, and that conversations about how to use theory are evolving:

  • Nelson and Ko (2018) argued that there are tensions between expectations of theory building and innovative exploration of design ideas, and that our field’s theory building should focus on theories specific to computing education.

  • Malmi et al. (2019) found that while computing education researchers have widely cited many dozens of unique theoretical ideas about learning, behavior, beliefs, and other phenomena, the use of theory in the field remains somewhat shallow.

  • Kafai et al. (2019) argued that there are many types of theories, and that we should more deeply leverage their explanatory potential, especially theories about the sociocultural and societal factors at play in computing education, not just the cognitive factors.

In addition to using theories when appropriate, ICER encourages the contribution of new theories. There is not a community-level consensus on what constitutes a good theory contribution, but there are examples you might learn from. Papers proposing a new theoretical model should consider including concrete examples of said model.

How should educational contexts be described?

If you’re reporting empirical work in a specific education context or set of contexts, it is important to remember that our research community is global, and that education systems across the world are structured differently. This is of particular importance when describing research that took place in primary and secondary schools. Keep in mind that not all readers will be familiar with your educational context. Describe the structure of the educational system. Define terminology related to your education system. Characterize who is teaching, and what prior knowledge and preparation they have. When describing learners, at a minimum, describe their gender, race, ethnicity, age, level in school, and prior knowledge (assuming collecting and publishing this type of data is legal in the context in which the study was conducted; see also the ACM Publications Policy on Research Involving Human Participants and Subjects). Include information about other structural factors that might affect how the results are interpreted, including whether courses are required or elective, what incentives students have to enrol in courses, and how students in courses vary.

For authors in the United States, common terms to avoid include “elementary school”, “middle school”, “high school”, and “college”, which do not have well-defined meanings elsewhere. Use the more globally inclusive phrases “primary”, “secondary”, and “post-secondary”. Given the broad spectrum of, e.g., introductory computing courses that run under the umbrella of “CS1”, make sure to provide enough information on the course content rather than relying on an assumed shared understanding.

What details should we report about our methods?

ICER values a wide range of methods of all kinds, including quantitative, qualitative, design, argumentation, and more. It is critical to describe your methods in detail, both so that reviewers and readers can understand how you arrived at your conclusions, and so they can evaluate the appropriateness of your methods both to the work and, for readers, to their own contexts.

Some contributions might benefit from following the Center for Open Science’s recommendations to ensure replicable, transparent science. These include practices such as:

  • Data is posted to a trusted repository.

  • Data in that repository is properly cited in the paper.

  • Any code used for analysis is posted to a trusted repository.

  • Results are independently reproduced.

  • Materials used for the study are posted to a trusted repository.

  • Studies and their analysis plans are pre-registered prior to being conducted.

Our community is quite far from adopting any of these standards as expectations. Additionally, pursuing many of these goals might impose significant barriers to conducting research ethically, as educational data often cannot be sufficiently anonymized to prevent disclosing identity. Therefore, these supplementary materials are not required for review, but we encourage you to include them where feasible and ethical.

The ACM has adopted a new policy on Research Involving Human Participants and Subjects that requires research to be conducted in accordance with ethical and legal standards. In accordance with the policy, your methods description should briefly describe how these standards were met. This can be as simple as a sentence noting that your study design was reviewed by a local review board (IRB), or a few sentences with key details if you engaged with human subjects and an IRB review was not appropriate to your context or work. Read the ACM policy for additional details.

How should we report statistics?

The world is moving beyond p-values, but computing education, like most of academia, still relies on them. When reporting the results of statistical hypothesis tests, it is critical to report:

  • The test used

  • The rationale for choosing the test, including a discussion of the data characteristics that allowed this test to be used

  • The test statistic computed

  • The actual p-value (not just whether it was greater than or less than an arbitrary threshold)

  • An effect size and its confidence interval.

Effect sizes are especially relevant, as they indicate the extent to which something impacts or explains some phenomenon in computing education; a statistically significant result with a small effect size may have little practical importance for learning. The above data should be reported regardless of whether a hypothesis test was significant. Chapters that introduce statistical methods can be found in the Cambridge Handbook of Computing Education Research.
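
To make this concrete, a complete report of a hypothetical two-sample comparison might read as follows; all numbers are invented for illustration, and the markup follows the math conventions recommended in the word count section above:

  % Hypothetical reporting of a two-sample t-test; all values are invented.
  We compared exam scores across the two conditions using Welch's
  two-sample t-test, chosen because the groups were independent,
  approximately normally distributed, and of unequal variance.
  The intervention group scored higher than the control group,
  \begin{math}t(57.3) = 2.41\end{math}, \begin{math}p = .019\end{math},
  with a medium effect size, Cohen's \begin{math}d = 0.62\end{math},
  95\% CI \begin{math}[0.10, 1.14]\end{math}.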

Do not assume that reviewers or future readers have a deep understanding of statistical methods (although they might). If you’re using more advanced or non-standard techniques, justify them in detail so that reviewers and future readers understand your choice of methods. We recognize that length limits might prevent a detailed explanation of methods for entirely unfamiliar readers; reviewers are expected not to criticize papers for excluding extensive explanations when there was no space to include them.

How should we report on qualitative methods?

Best practices in other fields for addressing the reliability of qualitative methods suggest providing detailed arguments and rationale for qualitative approaches and analyses. Some fields that rely on qualitative methods have moved toward a recoverability criterion, which, like replicability in quantitative methods, aims to ensure a study’s core methods are available for inspection and interpretation; however, recoverability does not imply repeatability, as qualitative methods rely on interpretation, which may not be repeatable.

When qualitative data is counted and used for quantitative methods, authors should report on the inter-rater reliability (IRR) of the qualitative judgements underlying those counts. There are many ways of calculating inter-rater reliability, each with tradeoffs. However, note that IRR analysis is not ubiquitous across social sciences, and not always appropriate; authors should make a clear soundness argument for why it was or was not performed.

Another challenge in reporting qualitative results is that they require more space in a paper; an abundance of quotes, after all, may take considerably more space than a table of aggregate statistics. Be careful to provide enough evidence for your claims while being mindful of your use of space.

What makes a good abstract?

A good abstract should summarize the question your paper asks and what answers it found. It is not enough to just say “We discuss our results and their implications”; say what you actually discovered, so future readers can learn that from your summary.

If your paper is empirical in nature, ICER recommends (but does not require) using a structured abstract that contains the following sections, each 1-2 sentences:

  • Background and Context. What is the problem space you are working in? Which phenomena are you considering and why are they relevant and important for an ICER audience?

  • Objectives. What research questions were you trying to answer?

  • Method. What did you do to answer your research questions?

  • Findings. What did you discover? Both positive and negative results should be summarized.

  • Implications. What implications does your discovery have on prior and future research, and on the practice of computing education?

Not all papers may fit this structure, but if yours does, it will greatly help reviewers and future readers understand your paper’s research design and contribution.
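
As a sketch, such a structured abstract might be marked up in the LaTeX template as follows; the bold run-in headings are one common convention, not an ICER requirement:

  \begin{abstract}
    % One to two sentences per part, per the list above.
    \textbf{Background and Context.} ...
    \textbf{Objectives.} ...
    \textbf{Method.} ...
    \textbf{Findings.} ...
    \textbf{Implications.} ...
  \end{abstract}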

What counts as plagiarism?

Read ACM’s policy on Plagiarism, Misrepresentation, and Falsification; these criteria will be applied during review. In particular, attention will be paid to avoiding redundant publication.

Who should be an author on my paper?

ICER follows ACM’s Authorship Policy and Publications Policy on the Withdrawal, Correction, Retraction, and Removal of Works from ACM Publications and ACM DL. These state that any person listed as an author on a paper must (1) have made substantial contributions to the work, (2) have participated in drafting or revising the paper, (3) be aware that the paper has been submitted, and (4) agree to be held accountable for the content of the paper. Note that this policy allows enforcement of plagiarism sanctions, but it could also affect people who work in large, collaborative research groups and postgraduate advisors who have not contributed directly to a paper.

Must submissions be in English?

At the moment, yes. Our reviewing community’s only lingua franca is English, and any other language would greatly limit the pool of expert reviewers to evaluate your work. We recognize that this is a challenging barrier for many authors globally, and that it greatly limits the diversity of voices in global discourse on computing education. Therefore, we wish to express our support of other computing education conferences around the world that you might consider submitting papers to. To mitigate this somewhat, papers will not be penalized for minor English spelling and grammar errors that can easily be corrected with minor revisions.

Resources

American Educational Research Association. (2006). Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35(6), 33–40. http://edr.sagepub.com/content/35/6/33.full.pdf+html.

Decker, A., McGill, M. M., & Settle, A. (2016). Towards a Common Framework for Evaluating Computing Outreach Activities. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education (SIGCSE ’16). ACM, New York, NY, USA, 627–632. DOI: https://doi.org/10.1145/2839509.2844567.

Fincher, S. A., & Robins, A. V. (Eds.). (2019). The Cambridge Handbook of Computing Education Research. Cambridge University Press. DOI: https://dx.doi.org/10.1017/9781108654555.

Petre, M., Sanders, K., McCartney, R., Ahmadzadeh, M., Connolly, C., Hamouda, S., Harrington, B., Lumbroso, J., Maguire, J., Malmi, L., McGill, M.M., Vahrenhold, J. (2020). Mapping the Landscape of Peer Review in Computing Education Research, In: ITiCSE-WGR ’20: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, ACM. New York, NY, USA, 173–209. DOI: https://doi.org/10.1145/3437800.3439207.

Further resources to be published soon.