Please submit to the Papers track at Onward! 2015, which will be held October 25-30 during SPLASH week in Pittsburgh, Pennsylvania, United States. The submission deadline is April 2 (as always, please check the webpage for any extensions). I’m a member of the Program Committee for the Onward! Papers track.

Onward! is a premier multidisciplinary conference focused on everything to do with programming and software: including processes, methods, languages, communities, and applications. Onward! is more radical, more visionary, and more open than other conferences to ideas that are well-argued but not yet proven. We welcome different ways of thinking about, approaching, and reporting on programming language and software engineering research.

Onward! is looking for grand visions and new paradigms that could make a big difference in how we will one day build software. But Onward! is not looking for research-as-usual papers—conferences like OOPSLA are the place for that. Those conferences require rigorous validation such as theorems or empirical experiments, which are necessary for scientific progress, but which typically preclude discussion of early-stage ideas. Onward! papers must also supply some degree of validation because mere speculation is not a good basis for progress. However, Onward! accepts less rigorous methods of validation such as compelling arguments, exploratory implementations, and substantial examples. The use of worked-out examples to support new ideas is strongly encouraged.

Onward! is reaching out for constructive criticism of current software development technology and practices, and for ideas that could change the realm of software development. Experienced researchers, graduate students, practitioners, and anyone else dissatisfied with the state of our art are encouraged to share insights about how to reform software development.


Changes in software development come in many forms. Some changes are frequent, idiomatic, or repetitive (e.g., adding null checks or logging important values), while others are unique. We hypothesize that unique changes differ from the more common similar (or non-unique) changes in important ways; they may require more expertise or represent code that is more complex or prone to mistakes. As such, these changes are worthy of study. In this paper, we present a definition of unique changes and provide a method for identifying them in software project history. Based on the results of applying our technique to the Linux kernel and two large projects at Microsoft, we present an empirical study of unique changes. We explore how prevalent unique changes are and investigate where they occur within the architecture of the project. We further investigate developers’ contributions to the uniqueness of changes. Finally, we describe potential applications of leveraging change uniqueness and implement two such applications: evaluating the risk of changes based on uniqueness and providing change recommendations for non-unique changes.
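To make the notion of a non-unique change concrete, here is a minimal sketch of one way such changes could be flagged: normalize each added diff hunk and check whether the same hunk also appears in other commits. The normalization rules and the exact-match criterion are illustrative assumptions only; the paper's actual definition and similarity measure may differ.

```python
import hashlib
import re
from collections import defaultdict

def normalize(hunk_lines):
    """Normalize an added hunk: collapse whitespace and literals so that
    trivially different copies of the same change compare equal (an assumed rule)."""
    normalized = []
    for line in hunk_lines:
        line = re.sub(r"\s+", " ", line.strip())
        line = re.sub(r'"[^"]*"', '"<str>"', line)   # collapse string literals
        line = re.sub(r"\b\d+\b", "<num>", line)      # collapse numeric literals
        if line:
            normalized.append(line)
    return "\n".join(normalized)

def hunk_digest(hunk_lines):
    return hashlib.sha1(normalize(hunk_lines).encode("utf-8")).hexdigest()

def classify_changes(commits):
    """commits: list of (commit_id, [hunk, ...]) where each hunk is a list of
    added lines. Labels a hunk 'non-unique' if an equivalent hunk appears in
    some other commit, and 'unique' otherwise."""
    commits = list(commits)
    seen = defaultdict(set)   # hunk digest -> commit ids containing it
    for commit_id, hunks in commits:
        for hunk in hunks:
            seen[hunk_digest(hunk)].add(commit_id)
    return {
        commit_id: ["non-unique" if len(seen[hunk_digest(h)]) > 1 else "unique"
                    for h in hunks]
        for commit_id, hunks in commits
    }

# Example with hypothetical data: two commits share a null-check hunk.
example = [
    ("c1", [["if (ptr == NULL)", "    return -EINVAL;"]]),
    ("c2", [["if (ptr == NULL)", "    return -EINVAL;"], ["reschedule_rt_task(rq);"]]),
]
print(classify_changes(example))
# {'c1': ['non-unique'], 'c2': ['non-unique', 'unique']}
```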



Developers sometimes take the initiative to build tools to solve problems they face. What motivates developers to build these tools? What is the value for a company? Are the tools built useful for anyone besides their creator? We conducted a qualitative study of tool building, adoption, and impact within Microsoft. This paper presents our findings on the extrinsic and intrinsic factors linked to tool building, the value of building tools, and the factors associated with tool spread. We find that the majority of developers build tools. While most tools never spread beyond their creator's team, most have more than one user, and many have more than one collaborator. Organizational cultures that are receptive towards tool building produce more tools and more collaboration on tools. When nurtured and spread, homegrown tools have the potential to create significant impact on organizations.



Smartphone applications (apps) have gained popularity in recent years. Millions of apps are available on different app stores, which gives users a plethora of options to choose from; however, it also raises the concern of whether these apps are adequately tested before they are released for public use. In this study, we want to understand the test automation culture prevalent among app developers. Specifically, we examine the current state of testing of apps, the tools commonly used by app developers, and the problems they face. To gain insight into the test automation culture, we conduct two studies. In the first study, we analyse over 600 Android apps collected from F-Droid, one of the largest repositories containing information about open-source Android apps. We check for the presence of test cases and calculate code coverage to measure the adequacy of testing in these apps. We also survey developers who have hosted their applications on GitHub to understand the testing practices they follow. We ask developers about the tools they use and the “pain points” they face while testing Android apps. For the second study, based on the responses from Android developers, we refine our survey questions and send the survey to Windows app developers within Microsoft. We conclude that many Android apps are poorly tested: only about 14% of the apps contain test cases, and only about 9% of the apps that have executable test cases have coverage above 40%. We also find that Android app developers use automated testing tools such as JUnit, Monkeyrunner, Robotium, and Robolectric, yet they often prefer to test their apps manually, whereas Windows app developers prefer in-house tools such as Visual Studio and Microsoft Test Manager. Both Android and Windows app developers face many challenges, such as time constraints, compatibility issues, lack of exposure, and cumbersome tools. We give suggestions to improve the test automation culture in the growing app community.
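To give a flavor of the repository analysis in the first study, the sketch below checks a checked-out Android project for common signs of automated tests (standard test source directories and imports of JUnit, Robolectric, or Robotium). The directory names and import patterns are heuristics assumed here for illustration; the paper's actual detection and its coverage measurement are more thorough.

```python
import os
import re

# Assumed markers of automated tests in an Android project.
TEST_DIRS = ("src/test", "src/androidTest", "tests")
TEST_IMPORT = re.compile(
    r"import\s+(org\.junit|org\.robolectric|com\.robotium|com\.jayway\.android\.robotium)"
)

def has_test_cases(repo_path):
    """Return True if the app checked out at repo_path appears to contain
    automated test code, based on directory layout or test-framework imports."""
    if any(os.path.isdir(os.path.join(repo_path, d)) for d in TEST_DIRS):
        return True
    for root, _dirs, files in os.walk(repo_path):
        for name in files:
            if not name.endswith(".java"):
                continue
            try:
                with open(os.path.join(root, name), encoding="utf-8",
                          errors="ignore") as src:
                    if TEST_IMPORT.search(src.read()):
                        return True
            except OSError:
                continue
    return False

def summarize(repo_paths):
    """Fraction of the given app repositories that contain test cases."""
    tested = sum(1 for path in repo_paths if has_test_cases(path))
    total = len(repo_paths)
    return tested, total, (100.0 * tested / total if total else 0.0)
```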



[Image: a mouse reading the paper 'The Design Space of Bug Fixes and How Developers Navigate It']

When software engineers fix bugs, they may have several options as to how to fix those bugs. Which fix they choose has many implications, both for practitioners and researchers: What is the risk of introducing other bugs during the fix? Is the bug fix in the same code that caused the bug? Is the change fixing the cause or just covering a symptom? In this paper, we investigate alternative fixes to bugs and present an empirical study of how engineers make design choices about how to fix bugs. We start with a motivating case study of the Pex4Fun environment. Then, based on qualitative interviews with 40 engineers working on a variety of products, data from 6 bug triage meetings, and a survey filled out by 326 Microsoft engineers and 37 developers from other companies, we found a number of factors, many of them non-technical, that influence how bugs are fixed, such as how close to release the software is. We also discuss implications for research and practice, including how to make bug prediction and localization more accurate.



Software development teams consist of developers with varying expertise and levels of productivity. With reported productivity variation of up to 1:20, the quality of the assignment of developers to tasks can have a huge impact on project performance. Developers are characterized by a defined core set of technical competence areas, and the objective is to find a feasible assignment that minimizes the total time needed to fix all given bugs. In this paper, we model the assignment of developers to bugs and propose a genetic algorithm called GA@DAB (Genetic Algorithm for Developer's Assignment to Bugs), which we evaluate empirically on 2,040 bugs from 19 open-source milestone projects of the Eclipse platform, including a comparative analysis against a previously developed K-greedy search approach. Our results and analysis show that GA@DAB performs statistically significantly better than K-greedy search in 17 out of 19 projects. Overall, the results support the applicability of customized genetic search techniques in the context of developer-to-bug assignments.
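For readers unfamiliar with the problem setup, here is a minimal sketch of a genetic algorithm for assigning developers to bugs. It is not GA@DAB itself: the data, the cost model (a bug's fix time is its estimated effort divided by the assigned developer's competence in the bug's area, with each developer fixing their bugs sequentially), and all parameter values are simplifying assumptions made for illustration.

```python
import random

# Hypothetical data: developer competence per technical area (0..1) and
# bugs given as (id, area, estimated effort in hours).
DEVELOPERS = {
    "dev_a": {"ui": 0.9, "db": 0.4, "core": 0.6},
    "dev_b": {"ui": 0.3, "db": 0.8, "core": 0.7},
    "dev_c": {"ui": 0.5, "db": 0.5, "core": 0.9},
}
BUGS = [("b1", "ui", 8), ("b2", "db", 5), ("b3", "core", 13),
        ("b4", "ui", 3), ("b5", "db", 21), ("b6", "core", 2)]
DEV_IDS = list(DEVELOPERS)

def fix_time(dev, area, effort):
    """Assumed cost model: effort scaled up for low competence in the area."""
    return effort / max(DEVELOPERS[dev].get(area, 0.0), 0.1)

def total_time(assignment):
    """Fitness to minimize: time until all bugs are fixed, assuming each
    developer works through their assigned bugs sequentially."""
    workload = {dev: 0.0 for dev in DEV_IDS}
    for (bug_id, area, effort), dev in zip(BUGS, assignment):
        workload[dev] += fix_time(dev, area, effort)
    return max(workload.values())

def crossover(parent_a, parent_b):
    """Single-point crossover of two assignments (one developer per bug)."""
    point = random.randrange(1, len(BUGS))
    return parent_a[:point] + parent_b[point:]

def mutate(assignment, rate=0.1):
    """Reassign each bug to a random developer with a small probability."""
    return [random.choice(DEV_IDS) if random.random() < rate else dev
            for dev in assignment]

def evolve(pop_size=40, generations=200):
    population = [[random.choice(DEV_IDS) for _ in BUGS] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=total_time)
        survivors = population[: pop_size // 2]          # keep the fitter half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    best = min(population, key=total_time)
    return dict(zip((bug_id for bug_id, _, _ in BUGS), best)), total_time(best)

if __name__ == "__main__":
    assignment, time_needed = evolve()
    print(assignment, round(time_needed, 1))
```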



Please submit to the Testing in Practice (TIP) track at the 8th IEEE International Conference on Software Testing, Verification and Validation (ICST 2015), in Graz, Austria. I’m co-organizing the track together with Mihai Nica and Ina Schieferdecker.

The submission deadline is February 23, 2015. We seek two-page abstracts from authors in the software testing community on industry-relevant topics in technology, tools, and practices related to software testing, quality, safety, metrics, reliability, and modeling. The submission format is very lightweight; for more information on the format, please visit the TIP call for papers.

The objective of the Testing in Practice track (formerly known as the Industry Practice program) is to establish a fruitful and meaningful dialog among software practitioners and software engineering researchers on the results (both good and bad), obstacles, and lessons learned associated with applying software development practices in various environments. The TIP presentations will provide accounts of the application of software engineering practices (principles, techniques, tools, methods, processes, testing techniques, etc.) to a specific domain or to the development of a significant software system. In particular, we are interested in software development techniques that prevent bugs or detect them early during development, in addition to downstream bug metrics, reliability growth curves, and similar measures. We would like the TIP presentations to be of interest to software development professionals as well as software quality groups.
