Please submit to the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE 2015), which will be held November 9-13, 2015 in Lincoln, Nebraska, USA. The submission deadline is May 15, 2015 (abstracts May 8; as always, please check the webpage for any extensions). I’m a member of the Expert Review Panel for the technical research track.
The IEEE/ACM Automated Software Engineering (ASE) Conference series is the premier research forum for automated software engineering. Each year, it brings together researchers and practitioners from academia and industry to discuss foundations, techniques, and tools for automating the analysis, design, implementation, testing, and maintenance of large software systems. In 2015, ASE celebrates its 30th year as a leading venue for novel work in software automation.
ASE 2015 invites high-quality contributions describing significant, original, and unpublished results.
Please submit to the ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM 2015), which will be held October 22-23, 2015 in Beijing, China. The submission deadline is April 22, 2015 (as always, please check the webpage for any extensions). I’m a member of the Program Committee for the technical research track.
The ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM) is the premier conference for research results related to empirical software engineering. These include discussions of i) the strengths and weaknesses of software engineering technologies and methods from an empirical viewpoint; ii) the design and analysis of empirical studies, ranging from controlled experiments to case studies and from quantitative to qualitative studies; and iii) the use of data and measurement to understand, evaluate, and model software engineering phenomena. The symposium encourages the presentation of both novel work and replication studies.
ESEM provides a stimulating forum where researchers and practitioners can present and discuss recent research results on a wide range of topics, as well as exchange ideas, experiences, and challenging problems.
Please submit to the Papers track at Onward! 2015, which will be held October 25-30 during SPLASH week in Pittsburgh, Pennsylvania, United States. The submission deadline is April 2 (as always, please check the webpage for any extensions). I’m a member of the Program Committee for the Onward! Papers track.
Onward! is a premier multidisciplinary conference focused on everything to do with programming and software, including processes, methods, languages, communities, and applications. Onward! is more radical, more visionary, and more open than other conferences to ideas that are well-argued but not yet proven. We welcome different ways of thinking about, approaching, and reporting on programming language and software engineering research.
Onward! is looking for grand visions and new paradigms that could make a big difference in how we will one day build software. But Onward! is not looking for research-as-usual papers—conferences like OOPSLA are the place for that. Those conferences require rigorous validation such as theorems or empirical experiments, which are necessary for scientific progress, but which typically preclude discussion of early-stage ideas. Onward! papers must also supply some degree of validation because mere speculation is not a good basis for progress. However, Onward! accepts less rigorous methods of validation such as compelling arguments, exploratory implementations, and substantial examples. The use of worked-out examples to support new ideas is strongly encouraged.
Onward! seeks constructive criticism of current software development technology and practices, and ideas that could change the realm of software development. Experienced researchers, graduate students, practitioners, and anyone else dissatisfied with the state of our art are encouraged to share insights about how to reform software development.
Changes in software development come in many forms. Some changes are frequent, idiomatic, or repetitive (e.g., adding null checks or logging important values), while others are unique. We hypothesize that unique changes differ from the more common similar (or non-unique) changes in important ways: they may require more expertise or represent code that is more complex or prone to mistakes. As such, these changes are worthy of study. In this paper, we present a definition of unique changes and a method for identifying them in software project history. Based on the results of applying our technique to the Linux kernel and two large projects at Microsoft, we present an empirical study of unique changes. We explore how prevalent unique changes are and investigate where they occur within the architecture of each project. We further investigate developers’ contributions to the uniqueness of changes. Finally, we describe potential applications that leverage change uniqueness and implement two of them: evaluating the risk of changes based on their uniqueness and providing change recommendations for non-unique changes.
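To make the distinction concrete, here is a minimal Java sketch (invented for illustration, not taken from the paper or its subject systems) of the kind of frequent, non-unique change the abstract mentions: guarding a value with a null check and logging it. A unique change, by contrast, would be an edit with no close counterpart elsewhere in the project's history.

import java.util.logging.Logger;

// Illustrative only: a prototypical "non-unique" change, adding a null
// check plus logging. All class and method names here are hypothetical.
public class OrderService {
    private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

    static class Order { String id; }

    public void process(Order order) {
        // The repetitive, idiomatic edit: guard against null and log it.
        if (order == null) {
            LOG.warning("process() called with a null order; skipping");
            return;
        }
        LOG.info("Processing order " + order.id);
        // ...the rest of the method would be untouched by such an edit...
    }
}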
Developers sometimes take the initiative to build tools to solve problems they face. What motivates developers to build these tools? What is the value for a company? Are the tools useful for anyone besides their creators? We conducted a qualitative study of tool building, adoption, and impact within Microsoft. This paper presents our findings on the extrinsic and intrinsic factors linked to tool building, the value of building tools, and the factors associated with tool spread. We find that the majority of developers build tools. While most tools never spread beyond their creator’s team, most have more than one user, and many have more than one collaborator. Organizational cultures that are receptive to tool building produce more tools and more collaboration on them. When nurtured and spread, homegrown tools have the potential to create significant impact on organizations.
Smartphone applications (apps) have surged in popularity in recent years. Millions of apps are available on different app stores, which gives users a plethora of options to choose from; however, it also raises the concern of whether these apps are adequately tested before they are released for public use. In this study, we want to understand the test automation culture prevalent among app developers. Specifically, we examine the current state of testing of apps, the tools commonly used by app developers, and the problems they face. To gain insight into the test automation culture, we conduct two studies. In the first study, we analyze over 600 Android apps collected from F-Droid, one of the largest repositories of open-source Android apps. We check for the presence of test cases and calculate code coverage to measure the adequacy of testing in these apps. We also survey developers who have hosted their applications on GitHub to understand the testing practices they follow. We ask developers about the tools they use and the “pain points” they face while testing Android apps. For the second study, based on the responses from Android developers, we improve our survey questions and resend the survey to Windows app developers within Microsoft. We conclude that many Android apps are poorly tested: only about 14% of the apps contain test cases, and only about 9% of the apps with executable test cases have coverage above 40%. We also find that Android app developers use automated testing tools such as JUnit, Monkeyrunner, Robotium, and Robolectric, yet they often prefer to test their apps manually, whereas Windows app developers prefer in-house tools such as Visual Studio and Microsoft Test Manager. Both Android and Windows app developers face many challenges, such as time constraints, compatibility issues, lack of exposure, and cumbersome tools. We give suggestions to improve the test automation culture in the growing app community.
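For readers unfamiliar with the tooling, an app "containing test cases" in the sense above means it ships automated tests along these lines. This is a minimal JUnit 4 sketch; the class under test is hypothetical and not drawn from the study.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TipCalculatorTest {

    // Hypothetical class under test, invented for illustration.
    static class TipCalculator {
        static double tip(double bill, double rate) { return bill * rate; }
    }

    @Test
    public void tipIsProportionalToBill() {
        // A test runner (and coverage tools) would discover this @Test method.
        assertEquals(3.0, TipCalculator.tip(20.0, 0.15), 1e-9);
    }
}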
When software engineers fix bugs, they may have several options for how to fix them. Which fix they choose has many implications, both for practitioners and researchers: What is the risk of introducing other bugs during the fix? Is the bug fix in the same code that caused the bug? Does the change fix the cause or just cover a symptom? In this paper, we investigate alternative fixes to bugs and present an empirical study of the design choices engineers make when fixing bugs. We start with a motivating case study of the Pex4Fun environment. Then, based on qualitative interviews with 40 engineers working on a variety of products, data from 6 bug triage meetings, and a survey filled out by 326 Microsoft engineers and 37 developers from other companies, we identify a number of factors, many of them non-technical, that influence how bugs are fixed, such as how close the software is to release. We also discuss implications for research and practice, including how to make bug prediction and localization more accurate.