Keynote Speaker
Brian Fitzgerald
Prof. Brian Fitzgerald is the Director of Lero and holds an endowed professorship, the Frederick A. Krehbiel II Chair in Innovation in Global Business & Technology, at the University of Limerick, Ireland, where he was also Vice President Research from 2008 to 2011. He is a Principal Investigator in Lero and was Founding Director of the Lero Graduate School in Software Engineering. His research interests lie primarily in software development, encompassing development methods, global software development, agile methods, and open source software.
Title: Advice on Conducting Software Engineering Research (and Getting it Published)
Abstract:
This talk addresses three important aspects of the research process:
- An overview of empirical research methods for software engineering research
- Reviewing SE research papers
- Navigating the review process in getting a paper published in a top journal.
To address the first item above, attendees will receive a paper entitled "The ABC of Software Engineering Research," which was published in ACM TOSEM in 2018. This paper, which is actually the initial version submitted to TOSEM, provides a holistic view of eight archetypal research strategies for conducting SE studies. These strategies are illustrated in two key SE domains: global software engineering and requirements engineering. Attendees need to do the following:
- Read the paper to become familiar with these research strategies
- Write an initial review of the paper
The reviews of the paper by the three actual TOSEM reviewers will be provided to attendees, who can then see how a set of reviewers assessed the paper. Our response to these reviews will also be provided, helping attendees develop their own strategy for responding to reviews, both in resolving issues that can be resolved and in rebutting issues where misunderstandings may have occurred.
A Taxonomy of Metrics for Software Fault Prediction
Maria Caulo
(University of Basilicata, Italy)
In the field of Software Fault Prediction (SFP), researchers exploit software metrics to build predictive models using machine learning and/or statistical techniques. SFP has existed for several decades, and the number of metrics used has increased dramatically. Thus, a taxonomy of metrics for SFP is needed, first to standardize the lexicon used in this field, simplifying communication among researchers, and second to organize and systematically classify the metrics in use. In this doctoral symposium paper, I present my ongoing work, which aims not only to build a taxonomy that is as comprehensive as possible, but also to provide a global understanding of the metrics for SFP in terms of detailed information: acronym(s), extended name, unambiguous description, granularity of the fault prediction (e.g., method or class), category, and the research papers in which they were used.
Distributed Execution of Test Cases and Continuous Integration
Carmen Coviello
(University of Basilicata, Italy)
I present here part of the research conducted during my Ph.D. In particular, I focus on my ongoing work on supporting testing in the context of Continuous Integration (CI) development by distributing the execution of test cases (TCs) across geographically dispersed servers. I show how to find a trade-off between the cost of leased servers and the time to execute a given test suite (TS). The distribution and execution of TCs on servers is modeled as a multi-objective optimization problem, where the goal is to balance the cost of leasing servers against the time to execute the TCs. The preliminary results (i) show evidence of the existence of a Pareto front (a trade-off between server leasing costs and TC execution time) and (ii) suggest that the found solutions are worthwhile compared to a traditional non-distributed TS execution (i.e., a single server/PC). Although the obtained results cannot be considered conclusive, the solutions appear promising for speeding up testing activities in the context of CI.
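The bi-objective formulation described above can be sketched in miniature. This is a minimal illustration, not the paper's actual model: the per-test durations, the flat per-server leasing cost, and the exhaustive enumeration of assignments are all assumptions made for the example.

```python
# Sketch: enumerate assignments of test cases (TCs) to servers, score each
# by (leasing cost, execution time), and keep the Pareto-optimal trade-offs.
from itertools import product

def evaluate(assignment, tc_times, server_cost):
    """Cost = number of distinct servers used * per-server lease cost;
    time = makespan, i.e. the busiest server's total execution time."""
    loads = {}
    for tc, server in enumerate(assignment):
        loads[server] = loads.get(server, 0.0) + tc_times[tc]
    return len(loads) * server_cost, max(loads.values())

def pareto_front(points):
    """Keep solutions not dominated in both objectives (minimization)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return sorted(set(front))

tc_times = [3.0, 1.0, 2.0, 4.0]   # hypothetical per-test durations
servers = range(3)                # up to 3 leasable servers
candidates = [evaluate(a, tc_times, server_cost=5.0)
              for a in product(servers, repeat=len(tc_times))]
front = pareto_front(candidates)  # e.g. cheap-but-slow vs costly-but-fast
```

With these toy numbers the front contains one schedule per server count, making the cost/time trade-off explicit; real instances would of course need a search heuristic rather than enumeration.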
A Longitudinal Field Study on Creation and Use of Domain-Specific Languages in Industry
Jasper Denkers
(Delft University of Technology, Netherlands)
Domain-specific languages (DSLs) have been extensively investigated in research and frequently applied in practice for over 20 years. While DSLs have been credited with improvements in productivity and maintainability and with taming accidental complexity, we know surprisingly little about their actual impact on software engineering practice. This PhD project, carried out in close collaboration with our industrial partner Océ – A Canon Company, offers a unique opportunity to study the application of DSLs in a longitudinal field study. In particular, we focus on introducing DSLs with language workbenches, i.e., infrastructures for designing and deploying DSLs, into projects that have been running for several years and for which extensive domain analysis outcomes are available. In doing so, we expect to gain a novel perspective on DSLs in practice. Additionally, we aim to derive best practices for DSL development and to identify and overcome limitations in current state-of-the-art tooling for DSLs.
Failure-Driven Program Repair
Davide Ginelli
(University of Milano-Bicocca, Italy)
Program repair techniques can dramatically reduce the cost of program debugging by automatically generating program fixes. Although program repair has already been successful for several classes of faults, it has also turned out to be quite limited in the complexity of the fixes it can generate.
This Ph.D. thesis addresses the problem of cost-effectively generating fixes of higher complexity by investigating how failure information can directly shape the repair process. In particular, the thesis proposes Failure-Driven Program Repair, a novel approach to program repair that exploits knowledge about both the possible failures and the corresponding repair strategies to produce highly specialized repair tasks that can effectively generate non-trivial fixes.
On Extending Single-Variant Model Transformations for Reuse in Software Product Line Engineering
Sandra Greiner
(University of Bayreuth, Germany)
Software product line engineering (SPLE) aims at increasing productivity by following the principles of variability and organized reuse. Combining the discipline with model-driven software engineering (MDSE) seeks to intensify this effect by raising the level of abstraction. Typically, a product line developed in a model-driven way is composed of various kinds of models, such as class diagrams and database schemata. To automatically generate further necessary representations from an initial (source) model, model transformations may create a corresponding target model.
In annotative approaches to SPLE, variability annotations, which are Boolean expressions over the features of the product line, state in which products a (model) element is visible. State-of-the-art single-variant model transformations (SVMTs), however, do not consider variability annotations associated with model elements. Thus, multi-variant model transformations (MVMTs) should bridge the gap between existing SPLE and MDSE approaches by reusing existing technology to propagate annotations to the target in addition to the model elements themselves.
This contribution gives an overview of the research we are conducting on reusing SVMTs in model-driven SPLE and outlines the steps still to be taken.
Exploratory Test Agents for Stateful Software Systems
Stefan Karlsson
(ABB, Sweden; Mälardalen University, Sweden)
The adequate testing of stateful software systems is a hard and costly activity.
Failures that result from complex stateful interactions can have high impact and can be hard to replicate.
Addressing this problem automatically would save cost and time and increase the quality of software systems in industry. In this paper, we propose an approach that uses agents to explore software systems with the intention of finding faults and gaining knowledge.
Helping Developers Search and Locate Task-Relevant Information in Natural Language Documents
Arthur Marques
(University of British Columbia, Canada)
While performing a task, software developers interact with a myriad of natural language documents. Not all information in these documents is relevant to a developer's task, forcing developers to filter relevant information from large amounts of irrelevant information. If a developer misses some of the information necessary for her task, she will have an incomplete or incorrect basis from which to complete it. Many approaches mine relevant text fragments from natural language artifacts. However, existing approaches mine information for pre-defined tasks and from a restricted set of artifacts. I hypothesize that it is possible to design a more generalizable approach that can identify, for a particular task, relevant text across different artifact types, establishing relationships between them and facilitating how developers search for and locate task-relevant information. To investigate this hypothesis, I propose to match a developer's task to text fragments in natural language artifacts according to their semantics. By semantically matching textual pieces to a developer's task, I aim to more precisely identify fragments relevant to that task. To help developers thoroughly navigate the identified fragments, I also propose to synthesize and group them. Ultimately, this research aims to help developers make more informed decisions regarding their software development tasks.
Dr. Gail C. Murphy supervises this work.
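The semantic matching idea can be sketched in miniature. The abstract does not specify a representation, so plain bag-of-words cosine similarity stands in below for whatever semantic model the work actually uses; the task string, fragments, and `rank_fragments` helper are all hypothetical.

```python
# Sketch: rank text fragments from natural language artifacts by
# similarity to a developer's task description. Bag-of-words cosine
# similarity is a stand-in for a richer semantic representation.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_fragments(task: str, fragments: list[str]) -> list[str]:
    tv = Counter(task.lower().split())
    scored = [(cosine(tv, Counter(f.lower().split())), f) for f in fragments]
    return [f for s, f in sorted(scored, reverse=True) if s > 0]

fragments = [
    "update the login form validation logic",
    "release notes for version 2.1",
    "bug report: login fails after password reset",
]
hits = rank_fragments("fix login validation bug", fragments)
```

Fragments with no lexical overlap drop out entirely, which is exactly the limitation a semantic (rather than lexical) matcher would aim to overcome.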
Improving Requirements Engineering Practices to Support Experimentation in Software Startups
Jorge Melegati
(Free University of Bolzano, Italy)
The importance of startups to economic development is indisputable. Software startups are startups that develop an innovative software-intensive product or service. Despite the rise of several methodologies to improve their efficiency, most software startups still fail. There are several possible reasons for failure, including under- or over-engineering the product because of unsuitable engineering practices, wasted resources, and missed market opportunities. The literature argues that experimentation is essential to innovation and entrepreneurship. Yet even though well-known startup development methodologies employ it, studies reveal that practitioners still do not use it. Given that requirements engineering sits between software engineering and business, in this study I aim to improve requirements engineering practices to foster experimentation in software startups. To achieve this, I first investigated how requirements engineering activities are performed in software startups. My goal is then to propose new requirements engineering practices that foster experimentation in this context.
Managing the Open Cathedral
Matthias Müller
(Graz University of Technology, Austria)
Early in the history of open source projects, it became apparent that they are driven by only a few contributors, who create the largest portion of the code. While this has been shown in previous research, this work adds a time perspective and considers the dynamics and evolution of communities. These aspects become increasingly important with the growing involvement of firms in such communities. Open source software is today used in many commercial applications, but it is also actively developed by businesses. Understanding such projects and steering them in a common direction is therefore of increasing interest. The author's work is intended to build a better understanding of these communities, their dynamics over time, their key players, and the dependencies on them.
Machine-Learning Supported Vulnerability Detection in Source Code
Tim Sonnekalb
(DLR, Germany)
The awareness of writing secure code rises with the increasing number of attacks and the damage they cause. But software developers are often not security experts, and vulnerabilities are introduced inadvertently during the development process. Developers use static analysis tools for bug detection, but these often come with a high false positive rate. They therefore need considerable resources to attend to all alarms if they want to consistently ensure the security of their software project.
We want to investigate whether machine learning techniques can point the user to the location of a security weakness in source code with higher accuracy than conventional static analysis. For this purpose, we focus on current machine-learning-on-code approaches in our initial studies to develop an efficient way of finding security-related software bugs. We will create a configuration interface to discover certain vulnerabilities, categorized by CWE.
We want to create a benchmark tool to compare existing source code representations and machine learning architectures for vulnerability detection, and to develop a customizable feature model. By the end of this PhD project, we aim to have an easy-to-use vulnerability detection tool based on machine learning on code.
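As a purely illustrative sketch of learning-based flagging (the project's actual representations, models, and datasets are not specified in this abstract), a bag-of-tokens representation can feed even a minimal perceptron; the tiny training set and token splitter below are invented for the example.

```python
# Sketch: a bag-of-tokens perceptron that flags code lines as potentially
# vulnerable, standing in for any learned vulnerability detector.
from collections import Counter

TRAIN = [
    ("strcpy(buf, user_input);", 1),               # unbounded copy (CWE-120 style)
    ("gets(line);", 1),
    ("strncpy(buf, user_input, sizeof buf);", 0),  # bounded variants
    ("fgets(line, sizeof line, stdin);", 0),
]

def tokens(code: str) -> Counter:
    # Crude lexer: split on punctuation we care about, then whitespace.
    return Counter(code.replace("(", " ").replace(")", " ")
                       .replace(";", " ").replace(",", " ").split())

def train(data, epochs=10):
    w, bias = Counter(), 0.0
    for _ in range(epochs):
        for code, label in data:
            feats = tokens(code)
            score = bias + sum(w[t] * c for t, c in feats.items())
            pred = 1 if score > 0 else 0
            if pred != label:                       # perceptron update rule
                for t, c in feats.items():
                    w[t] += (label - pred) * c
                bias += label - pred
    return w, bias

def flag(code, w, bias):
    feats = tokens(code)
    return bias + sum(w[t] * c for t, c in feats.items()) > 0

w, b = train(TRAIN)
```

Such a toy model only memorizes token cues; the point of a benchmark tool is precisely to compare this kind of baseline against richer code representations and architectures.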
09:00 Introduction to the day, award best papers, introduce best presentation competition
09:30 Brian’s keynote
10:30 Break
11:00 Reviewing each other's work: Warm-Up
11:30 Presentations (3 papers): Session 1
12:30 Lunch
13:30 Presentations (5 papers): Session 2
15:30 Break
16:00 Two best papers: Session 3
16:40 Panel Session
17:10 Best presentation award
17:15 Closing
19:30 Social dinner (location to be announced)
The detailed program of presentations can be found here.