ACM SIGSOFT Elections 2018

I am running for Chair of ACM SIGSOFT. Find out more about my goals.

APSA 2018: International Workshop on Anti-Patterns for Software Analytics – Gothenburg, Sweden

Please submit to the 1st International Workshop on Anti-Patterns for Software Analytics (APSA 2018), in conjunction with ICSE 2018, Gothenburg, Sweden. The submission deadline is February 5 — as always, please check the webpage for any extensions. I’m a member of the Program Committee.

In this data-driven economy, as society makes increasing use of data mining technology, it is now more important than ever that our community has a shared understanding of how to assess the results coming out of those data miners. Recent experience shows that, in the arena of software analytics, we do not share that understanding.

We now have more than a decade of research on data mining in software repositories, reported at all major software engineering venues (ICSE, TSE, EMSE, MSR, ASE, ICSME, ESEM, …). Based on the organizers’ experience with their last dozen journal papers, we assert that conference and journal reviewers in SE share very few criteria on how to assess data miners. Simple low-level issues, such as which performance indicator to use, are still controversial (see the toy illustration below): some reviewers eschew accuracy or precision; some demand SE (standardized error). Similarly, many higher-level issues are also unclear, such as which statistical test to use on how many data sets (and where that data should come from). More generally, several recent papers have reported failed replications or problems with the data we use.
All of the above hints at general and systemic problems with the way we evaluate and compare our research. This is a pressing and urgent open problem, and not just for researchers: we know many software developers who routinely ship some kind of analytics functionality as part of their delivery tools. If we, as academics, cannot agree on how to assess those tools, then how can industrial practitioners ever certify that the analytics tools they ship to clients are useful (or, at the very least, not misleading)?
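
To make the metric-choice issue concrete, here is a toy illustration (my own hypothetical numbers, not taken from the workshop materials): two defect predictors evaluated on the same imbalanced data set can swap ranks depending on which performance indicator is reported.

```python
# Toy example: the same two hypothetical defect predictors, ranked differently
# depending on the performance indicator. All numbers are made up.

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

# Hypothetical confusion matrices: 1,000 modules, 100 of them truly defective.
predictor_a = metrics(tp=50, fp=20, fn=50, tn=880)   # conservative predictor
predictor_b = metrics(tp=90, fp=300, fn=10, tn=600)  # liberal predictor

for name, m in (("A", predictor_a), ("B", predictor_b)):
    print(name, {k: round(v, 2) for k, v in m.items()})
```

With these numbers, predictor A looks better on accuracy, precision, and F1, while predictor B looks better on recall; which one a reviewer would accept as “better” depends entirely on the indicator they demand.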

Accordingly, this workshop’s goal is the development of guidelines for assessing software analytics. We want to bring the community together to discuss anti-patterns as a first step towards guidelines for repeatable, comparable, and replicable software analytics research, e.g., on defect prediction and effort prediction. As such, we do not want to discuss new techniques, data sets, or ways to mine data, but instead focus solely on how we should actually evaluate our research. This will give researchers a forum to share anti-patterns they frequently observe and how to avoid them.

ISSTA 2018: ACM SIGSOFT International Symposium on Software Testing and Analysis – Amsterdam, Netherlands

Please submit to the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2018). The submission deadline is January 29 for the technical track — as always, please check the webpage for any extensions. I’m a member of the Program Committee.

The ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) is the leading research symposium on software testing and analysis, bringing together academics, industrial researchers, and practitioners to exchange new ideas, problems, and experience on how to analyze and test software systems. ISSTA’18 will be co-located with the European Conference on Object-Oriented Programming (ECOOP ’18), and with Curry On, a conference focused on programming languages & emerging challenges in industry.

Research Papers

Authors are invited to submit research papers describing original contributions in testing or analysis of computer software. Papers describing original theoretical or empirical research, new techniques, in-depth case studies, or infrastructure for testing and analysis methods or tools are welcome.

Experience Papers

Authors are invited to submit experience papers describing a significant experience in applying software testing and analysis methods or tools; papers should carefully identify and discuss important lessons learned, so that other researchers and/or practitioners can benefit from the experience. Of special interest are experience papers that report on industrial applications of software testing and analysis methods or tools.

Reproducibility Studies (New!)

ISSTA would like to encourage researchers to reproduce results from previous papers, which is why ISSTA 2018 will introduce a new paper category called Reproducibility Studies. A reproducibility study must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. It should at the very least apply the approach to new, significantly broadened inputs. In particular, reproducibility studies are encouraged to target techniques that were previously evaluated only on proprietary subject programs or inputs. A reproducibility study should clearly report on results that the authors were able to reproduce as well as on aspects of the work that were irreproducible. In the latter case, authors are encouraged to make an effort to communicate or collaborate with the original paper’s authors to determine the cause of any observed discrepancies and, if possible, address them (e.g., through minor implementation changes). We explicitly encourage authors not to focus on a single paper/artifact, but instead to perform a comparative experiment across multiple related approaches.

ICSME 2017: International Conference on Software Maintenance and Evolution – Shanghai, China

Please submit to the 33rd International Conference on Software Maintenance and Evolution (ICSME 2017). The submission deadline is Thursday, April 6 (abstracts March 30) for the technical track — as always, please check the webpage for any extensions. Together with Lu Zhang, I’m Co-Chair of the Program Committee for the research track. ICSME will be the first IEEE-sponsored conference to award the IEEE TCSE Distinguished Paper Awards.

The International Conference on Software Maintenance and Evolution (ICSME) is the premier international forum for researchers and practitioners from academia, industry, and government to present, discuss, and debate the most recent ideas, experiences, and challenges in software maintenance and evolution.

ICSME 2017, the 33rd in the conference series, will be held in Shanghai, China. Shanghai is the largest and most prosperous city in China; it hosted World Expo 2010 and is situated in the Yangtze River delta next to the East China Sea.

As one of the largest cities in Asia, Shanghai is rich in cultural relics. You can not only experience the modern city, including the Bund, Xintiandi, the Oriental Pearl TV Tower and the World Financial Center, but also explore its older side by visiting the Yuyuan Garden, the Jade Buddha Temple and Zhujiajiao Ancient Town. From Shanghai you can also easily reach nearby Suzhou and Hangzhou, as well as several ancient water towns in Jiangsu and Zhejiang Provinces.

Video: “This is Shanghai” by Rob Whitworth on Vimeo.

The Work Life of Developers: Activities, Switches and Perceived Productivity – TSE 2017

Many software development organizations strive to enhance the productivity of their developers. All too often, efforts aimed at improving developer productivity are undertaken without knowledge about how developers spend their time at work and how it influences their own perception of productivity. To fill this gap, we deployed a monitoring application on the computers of 20 professional software developers from four companies for an average of 11 full workdays in situ. Corroborating earlier findings, we found that developers spend their time on a wide variety of activities and switch regularly between them, resulting in highly fragmented work. Our findings extend beyond existing research in that we correlate developers’ work habits with perceived productivity and also show that productivity is a personal matter. Although productivity is personal, developers can be roughly grouped into morning, low-at-lunch and afternoon people. A stepwise linear regression per participant revealed that more user input is most often associated with a positive perception of productivity, while emails, planned meetings and work-unrelated websites are associated with a negative one. We discuss opportunities arising from our findings, including the potential to predict high and low productivity, and suggest design approaches to create better tool support for planning developers’ workdays and improving their personal productivity.
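
As a rough illustration of the per-participant analysis mentioned above, here is a minimal sketch of a forward stepwise linear regression; it is not the authors’ actual pipeline, and the column names (a self-reported productivity rating plus activity measures per time interval) are hypothetical.

```python
# Minimal sketch of a per-participant forward stepwise linear regression
# (illustrative only, not the study's actual analysis). Assumed input: a
# DataFrame with one row per time interval, a self-reported "productivity"
# rating, and hypothetical activity measures as candidate predictors.
import statsmodels.api as sm

CANDIDATES = ["user_input", "emails", "planned_meetings", "unrelated_websites"]

def forward_stepwise(df, target="productivity", alpha=0.05):
    """Greedily add the candidate with the lowest p-value; stop when no
    remaining candidate is significant at the given alpha level."""
    selected, remaining, model = [], list(CANDIDATES), None
    while remaining:
        trials = []
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            fit = sm.OLS(df[target], X).fit()
            trials.append((fit.pvalues[var], var, fit))
        pval, var, fit = min(trials)  # best candidate this round
        if pval >= alpha:
            break
        selected.append(var)
        remaining.remove(var)
        model = fit
    return model, selected

# One regression per participant (column names again hypothetical):
# for pid, frame in intervals.groupby("participant"):
#     model, chosen = forward_stepwise(frame)
#     print(pid, chosen)
```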

[click for more details…]

Master Maker: Understanding Gaming Skill through Practice and Habit from Gameplay Behavior – topiCS 2017

The study of expertise is difficult to conduct in a lab environment due to the challenge of finding people at different skill levels and the lack of time for participants to acquire mastery. In this paper, we report on two studies that analyze naturalistic gameplay data using cohort analysis to better understand how skill relates to practice and habit. Two cohorts are analyzed, one from each of two different games (Halo Reach and StarCraft 2). Our work follows skill progression through 7 months of Halo matches for a holistic perspective, but also explores low-level in-game habits when controlling game units in StarCraft 2. Players who played moderately frequently without long breaks gained skill most efficiently. What set the highest performers apart was their ability to gain skill more rapidly and without dips compared to other players. At the beginning of matches, top players habitually warmed up by selecting and re-selecting groups of units repeatedly in a meaningless cycle. They exhibited unique routines during their play that aided them when under pressure.
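
For readers unfamiliar with cohort analysis over gameplay telemetry, here is a minimal sketch of the general idea (not the paper’s actual code; the column names “player”, “week”, “match_index” and “skill” are assumptions): players are grouped into cohorts by how often they play, and each cohort’s average skill is then tracked over successive matches.

```python
# Hypothetical sketch of cohort analysis over match telemetry (illustrative
# only). Assumed columns: "player", "week" (calendar week of the match),
# "match_index" (the player's n-th match), and "skill" (a skill rating).
import pandas as pd

def cohort_skill_progression(matches: pd.DataFrame):
    # Cohort each player by their average number of matches per week.
    matches_per_week = (matches.groupby("player")["week"]
                               .value_counts()
                               .groupby(level="player")
                               .mean())
    cohorts = pd.cut(matches_per_week,
                     bins=[0, 2, 5, 10, float("inf")],
                     labels=["rare", "moderate", "frequent", "heavy"])
    matches = matches.assign(cohort=matches["player"].map(cohorts))
    # Average skill per cohort at each point in a player's match history.
    return (matches.groupby(["cohort", "match_index"], observed=True)["skill"]
                   .mean()
                   .unstack("cohort"))

# progression = cohort_skill_progression(halo_matches)
# progression.plot()  # one skill-over-matches curve per cohort
```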

[click for more details…]

Ramp-up Journey of New Hires: Do strategic practices of software companies influence productivity? – ISEC 2017

Software companies regularly recruit skilled and talented employees to meet evolving business requirements. Although companies expect early contributions, new hires often take several weeks to reach the same productivity level as existing employees. We refer to this transition of new hires from novices to experts as the ramp-up journey. Various factors, such as a lack of technical skills or a lack of familiarity with the process, can influence the ramp-up journey of new hires. The goal of our work is to identify those factors and study their influence on the ramp-up journey. We expect the results of this study to help identify the types of assistance new hires need to ramp up faster. As a first step towards this goal, this paper explores the impact of two strategic practices, namely distributed development and internships, on the ramp-up journey of new hires. Our results show that new hires in proximity to the core development team and new hires with prior internship experience perform better than others in the beginning. Over the course of the ramp-up journey the effect of the two factors attenuates, yet these new hires still perform better than their counterparts. Product teams can use this information to pay special attention to non-interns and to use better tools for distributed, cooperative work to help new hires ramp up faster.

[click for more details…]