ISSTA 2018: ACM SIGSOFT International Symposium on Software Testing and Analysis – Amsterdam, Netherlands

Please consider submitting to the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2018). The submission deadline for the technical track is January 29; as always, please check the webpage for any extensions. I am a member of the Program Committee.

The ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) is the leading research symposium on software testing and analysis, bringing together academics, industrial researchers, and practitioners to exchange new ideas, problems, and experience on how to analyze and test software systems. ISSTA’18 will be co-located with the European Conference on Object-Oriented Programming (ECOOP ’18), and with Curry On, a conference focused on programming languages & emerging challenges in industry.

Research Papers

Authors are invited to submit research papers describing original contributions in the testing or analysis of computer software. Papers describing original theoretical or empirical research, new techniques, in-depth case studies, and infrastructures or tools for testing and analysis are welcome.

Experience Papers

Authors are invited to submit experience papers describing significant experience in applying software testing and analysis methods or tools. Such papers should carefully identify and discuss important lessons learned, so that other researchers and practitioners can benefit from the experience. Of special interest are experience papers that report on industrial applications of software testing and analysis methods or tools.

Reproducibility Studies (New!)

ISSTA would like to encourage researchers to reproduce results from previous papers, which is why ISSTA 2018 introduces a new paper category: Reproducibility Studies. A reproducibility study must go beyond simply re-implementing an algorithm or re-running the artifacts provided with the original paper. At the very least, it should apply the approach to new, significantly broadened inputs. In particular, reproducibility studies are encouraged to target techniques that were previously evaluated only on proprietary subject programs or inputs.

A reproducibility study should clearly report both the results the authors were able to reproduce and the aspects of the work that proved irreproducible. In the latter case, authors are encouraged to communicate or collaborate with the original paper's authors to determine the cause of any observed discrepancies and, if possible, address them (e.g., through minor implementation changes). We explicitly encourage authors not to focus on a single paper or artifact, but instead to perform a comparative experiment across multiple related approaches.