Christian Kästner :: CMU
 

Christian Kästner

[pronunciation and spelling]

Associate Professor · Carnegie Mellon University · Software and Societal Systems Department (S3D)

 
Christian Kästner

I am an associate professor in the School of Computer Science at Carnegie Mellon University. My current interests are in software engineering for software systems with ML components (or teaching software engineering to data scientists, "machine learning in production"), open-source sustainability, and software-supply-chain security. I am generally interested in understanding the limits of modularity and complexity caused by variability in software systems, which naturally brings me to questions of quality assurance, interoperability, and feature interactions. My research combines rigorous empirical research with program analysis and tool building.

I currently serve as the director of the CMU Software Engineering Ph.D. Program.

Profiles: Curriculum vitae, Google Scholar, ACM, dblp, Google+.

Software and Societal Systems Department (S3D)
School of Computer Science
Carnegie Mellon University
 
Office: TCS 345
Email: kaestner (at) cs.cmu.edu
Mailing Address: C. Kaestner, S3D - TCS Hall 430, 4665 Forbes Avenue, Pittsburgh, PA 15213, USA

 
 
 

News News feed

 
15 Jan. 2024
Machine Learning in Production Book finished and submitted to publisher
Over the last two years, I wrote and refined a book on software engineering for building products with machine learning components, based on our course Machine Learning in Production. I released it incrementally as chapters on Medium. I have finally declared the project complete and handed the manuscript over to the publisher, MIT Press, and expect a formal release in about one year. The book remains under a Creative Commons license, and the public (not yet copyedited) version of the book is now live at https://mlip-cmu.github.io/book/.
 
12 Oct. 2022
Keynote: From Models to Systems: Rethinking the Role of Software Engineering for Machine Learning
I was invited to give a keynote at MSR 2022 and used it to argue that we should invest in teaching software engineering to data scientists. The talk provides a good overview of how I think about teaching in this area and why I think that "software engineering for ML" is more of an education problem than a research problem. The remote version of the talk was recorded and is available on YouTube:
 
5 Oct. 2020
Lecture Recordings: Software Engineering for AI-Enabled Systems
All summer, I recorded all lectures of my class Software Engineering for AI-Enabled Systems. The students graciously consented to releasing those recordings, which can now all be found in a YouTube playlist under a Creative Commons license (like the rest of the course material):

Also, my annotated bibliography on the topic has seen some updates recently, and I've written again about requirements engineering for production ML systems.
 
 
Older News...
 
 
 

Research Overview

 
 

Software Engineering for AI-Enabled Systems

We explore how different facets of software engineering change with the introduction of machine learning components in production systems, with an interest in interdisciplinary collaboration, quality assurance, system-level thinking, safety, and better data science tools: Capturing Software Engineering for AI-Enabled Systems · Interdisciplinary Collaboration in Engineering AI-Enabled Systems · Developer Tooling for Data Scientists

 
 

Sustainability and Fairness in Open Source

We study the dynamics of open-source communities with a focus on understanding and fostering fair and sustainable environments. Primarily with empirical research methods, we explore topics such as open-source culture, coordination, stress and disengagement, funding, and security: Sustainability and Fairness in Open Source · Collaboration and Coordination in Open Source · Adoption of Practices and Tooling

 
 

Quality Assurance for Highly-Configurable Software Systems

We explore approaches to scale quality assurance strategies, including parsing, type checking, data-flow analysis, and testing, to huge configuration spaces in order to find variability bugs and detect feature interactions: Variational Analysis · Analysis of Unpreprocessed C Code · Variational Type Checking and Data-Flow Analysis · Variational Execution (Testing) · Sampling · Feature Interactions · Variational Specifications · Assuring and Understanding Quality Attributes such as Performance and Energy · Security

 
 

Maintenance and Implementation of Highly-Configurable Systems

We explore a wide range of different variability implementation mechanisms and their tradeoffs; in addition, we explore reverse engineering and refactoring mechanisms for variability and support developers with variability-related maintenance: Reverse Engineering Variability Implementations · Feature Flags · Feature-Oriented Programming · Assessing and Understanding Configuration-Related Complexity · Understanding Preprocessor Use · Tracking Load-Time Configuration Options · Build Systems · Modularity and Feature Interactions

 
 

Working with Imperfect Modularity

We explore mechanisms to support developers in scenarios in which traditional modularity mechanisms face challenges; among others, we explore strategies to complement modularity mechanisms with tooling: Virtual Separation of Concerns · Awareness for Evolution in Software Ecosystems · Conceptual Discussions

 
 

Variability Mechanisms Beyond Configurable Software Systems

We explore how analyses developed for variability can solve problems in contexts beyond software product lines, such as design space exploration, that share facets of the problem, including large finite search spaces with similarities among candidates: Developer Support and Quality Assurance for PHP · Sensitivity Analysis · Mutation Testing and Program Repair

 
 

Other Topics

We have collaborated on a number of other software engineering and programming languages topics, including dynamic software updates, extensible domain-specific languages, software merging, and various empirical methods topics: Understanding Program Comprehension with fMRI

 
 
 

Teaching

 
 
Fall 2024
 
Spring 2024
 
Fall 2023
 
Spring 2023
 
Fall 2022
 
Spring 2022
 
Fall 2021
 
Spring 2021
 
 
See also the full teaching history.
 
 
 

Team

 
Team photo Dec. 2017
 
Former
 
 
 

Service

TOSEM (Associate Editor, 2019-2022)
 
ICSE-SEET 2027 (PC Co-Chair), CAIN 2025 (PC), OOPSLA 2025 (PC), FSE 2025 (PC), CAIN 2024 (PC), FSE 2024 (PC), ICSE 2024 (PC), CAIN 2023 (PC), ICSE 2022 (Conference Chair, PC), ASE 2021 (PC), ESEC/FSE 2021 (PC), ASE 2020 (PC), SPLC 2020 (PC), ICSE 2020 (PC, SMP Chair), ESEC/FSE 2019 (PC, JF Chair), ASE 2019 (PC), SPLC 2019 (PC), OOPSLA 2019 (ERC), ICSE 2019 (SMP Chair), ICSE-NIER 2019 (PC), ASE 2018 (PC Co-Chair), ICSE 2018 (PC), SE 2018 (PC), ECOOP 2017 (PC), ASE 2017 (PC, DS Chair), ICSE 2017 (PC), ESEC/FSE 2017 (PC), ASE 2016 (ERC), ECOOP 2016 (ERC), SPLC 2016 (PC), MV 2016 (PC), ASE 2015 (DS), ASE 2015 (PC), SBCARS 2015 (PC), SPLC 2015 (PC), GPCE 2015 (General Chair), SPLC 2014 (PC), ASE 2014 (PC), ECOOP 2014 (ERC), MV 2014 (PC), GPCE 2014 (SC), OOPSLA 2013 (PC), GPCE 2013 (PC Chair), SE 2013 (PC), GPCE 2012 (PC), SC 2011 (PC), GPCE 2011 (PC)
 
SQA4AI 2025 (PC), VaMoS 2023 (PC), VaMoS 2021 (PC), VaMoS 2020 (PC), VaMoS 2019 (PC), SPLTea 2018 (PC), VaMoS 2018 (PC), WAPI 2017 (PC), VaMoS 2017 (PC), RELENG 2016 (PC), VaMoS 2016 (PC), VaMoS 2015 (PC), SPLTea 2015 (PC), ICSE-D 2015 (PC), MultiPLE 2014 (PC), SPLat 2014 (PC), SPLTea 2014 (PC), REVE 2014 (PC), FOSD 2014 (OC), ICSE-TB 2014 (PC), VaMoS 2014 (PC), SCORE 2013 (PC), MPLE 2013 (PC), FOSD 2013 (OC), VaMoS 2013 (PC), REVE 2013 (PC), SLE-DS 2012 (PC), FOSD 2012 (OC), SPLC-TD 2012 (PC), NFPinDSML 2012 (PC), RAM-SE 2012 (PC), ESCOT 2012 (OC), MISS 2012 (PC), PEPM 2012 (PC), VaMoS 2012 (PC), FOSD 2011 (OC), FREECO 2011 (PC), FOSD 2010 (OC), ASE-TD 2010 (PC), PLEERPS 2010 (PC), FOSD 2009 (OC)
 
 
 

Blog posts, video, and other media

 
Website, Aug. 2023
Secure Software Supply Chain Center
 
Talk, May. 2022
From Models to Systems: Rethinking the Role of Software Engineering for Machine Learning | MSR'22 Keynote
 
Talk, Mar. 2021
Toward a System-Wide and Interdisciplinary Perspective on ML System Performance | FastPath'21 Workshop Keynote
 
Blog post, Mar. 2021
Rediscovering Unit Testing: Testing Capabilities of ML Models
 
Blog post, Jan. 2021
Why Robustness is not Enough for Safety and Security in Machine Learning
 
Blog post, Nov. 2020
On the Process for Building Software with ML Components
 
Blog post, Oct. 2020
The World and the Machine and Responsible Machine Learning
 
Talk, Sep. 2020
State of the Source 2020: Analyzing Tens of Terabytes of Public Trace Data & Open Source Sustainability (w/ B. Vasilescu)
 
Podcast, Sep. 2020
What the Fork? Shurui Zhou on Forking in Open Source | Sustain Podcast (by S. Zhou)
 
Lecture, Aug. 2020
Complete Lecture Recordings: Software Engineering for AI-Enabled Systems
 
Infographic, Jun. 2020
Infographic: Donations in Open Source (by C. Overney)
 
Talk, Jun. 2020
Engineering AI-Enabled Systems with Interdisciplinary Teams | SEMLA'20
 
Blog post, Jun. 2020
A Software Testing View on Machine Learning Model Quality
 
Talk, Jun. 2020
Teaching Software Engineering for AI-Enabled Systems | ICSE SEET'20
 
Talk, Apr. 2020
Software Engineering for ML-Enabled Systems | Code & Supply
 
Blog post, Mar. 2020
Machine Learning is Requirements Engineering
 
Website, Jan. 2020
Software Engineering for AI/ML -- An Annotated Bibliography
 
Blog post, May. 2019
Feature Flags vs Configuration Options — Same Difference?
 
Infographic, Mar. 2018
Infographic: npm badges (w/ A. Trockman, B. Vasilescu)
 
Talk, May. 2017
How to Break an API: How Community Values Influence Practices | JSConf EU 2017
 
Website, May. 2017
How to break an API? (w/ C. Bogart)
 
Blog post, Mar. 2015
On Paper Titles (Bad Ideas, Rejected Ideas, and Final Titles)
 
Blog post, Jul. 2014
Teaching Software Construction with Travis CI
 
 
 

Selected Publications Full publication feed

For a complete list of publications, see the publication page.

 
2025
Christian Kästner. Machine Learning in Production: From Models to Products. Cambridge, MA: The MIT Press, April 2025. [ http, bib ]

A practical and innovative textbook detailing how to build real-world software products with machine learning components, not just models. Traditional machine learning texts focus on how to train and evaluate the machine learning model, while MLOps books focus on how to streamline model development and deployment. But neither focuses on how to build actual products that deliver value to users. This practical textbook, by contrast, details how to responsibly build products with machine learning components, covering the entire development lifecycle from requirements and design to quality assurance and operations. Machine Learning in Production brings an engineering mindset to the challenge of building systems that are usable, reliable, scalable, and safe within the context of real-world conditions of uncertainty, incomplete information, and resource constraints. Based on the author's popular class at Carnegie Mellon, this pioneering book integrates foundational knowledge in software engineering and machine learning to provide the holistic view needed to create not only prototype models but production-ready systems. • Integrates coverage of cutting-edge research, existing tools, and real-world applications • Provides students and professionals with an engineering view for production-ready machine learning systems • Proven in the classroom • Offers supplemental resources including slides, videos, exams, and further readings

 
2024
Nadia Nahar, Christian Kästner, Jenna Butler, Chris Parnin, Thomas Zimmermann, and Christian Bird. Beyond the Comfort Zone: Emerging Solutions to Overcome Challenges in Integrating LLMs into Software Products. Technical Report 2410.12071, arXiv, October 2024. [ http, bib ]

Large Language Models (LLMs) are increasingly embedded into software products across diverse industries, enhancing user experiences, but at the same time introducing numerous challenges for developers. Unique characteristics of LLMs force developers, who are accustomed to traditional software development and evaluation, out of their comfort zones as the LLM components shatter standard assumptions about software systems. This study explores the emerging solutions that software developers are adopting to navigate the encountered challenges. Leveraging mixed-methods research, including 26 interviews and a survey with 332 responses, the study identifies 19 emerging solutions regarding quality assurance that practitioners across several product teams at Microsoft are exploring. The findings provide valuable insights that can guide the development and evaluation of LLM-based products more broadly in the face of these challenges.

 
ASE 2024
Chenyang Yang, Yining Hong, Grace Lewis, Tongshuang Wu, and Christian Kästner. What Is Wrong with My Model? Identifying Systematic Problems with Semantic Data Slicing. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering (ASE), Los Alamitos, CA: IEEE Computer Society, November 2024. [ .pdf, bib ]

Machine learning models make mistakes, yet sometimes it is difficult to identify the systematic problems behind the mistakes. Practitioners engage in various activities, including error analysis, testing, auditing, and red-teaming, to form hypotheses of what can go (or has gone) wrong with their models. To validate these hypotheses, practitioners employ data slicing to identify examples relevant to their hypotheses. However, traditional data slicing is limited by available features and programmatic slicing functions. In this work, we propose SemSlicer, a framework that supports semantic data slicing, which identifies a semantically coherent slice, without the need for existing features. SemSlicer uses Large Language Models (LLMs) to annotate datasets and generate slices from any user-defined slicing criteria. We show that SemSlicer generates accurate slices with low cost, allows flexible trade-offs between different design dimensions, reliably identifies under-performing data slices, and helps practitioners identify useful data slices that reflect systematic problems.

 
ICSE 2025
Nadia Nahar, Haoran Zhang, Grace Lewis, Shurui Zhou, and Christian Kästner. The Product Beyond the Model -- An Empirical Study of Repositories of Open-Source ML Products. In Proceedings of the 47th International Conference on Software Engineering (ICSE), April 2025. [ .pdf, bib ]

Machine learning (ML) components are increasingly incorporated into software products for end-users, but developers face challenges in transitioning from ML prototypes to products. Academics have limited access to the source of commercial ML products, challenging research progress. In this study, first, we contribute a novel process to identify 262 open-source ML products among more than half a million ML-related projects on GitHub. Then, we qualitatively and quantitatively analyze 30 open-source ML products to answer six broad research questions about development practices and system architecture. We find that the majority of the ML products in our sample represent startup-style development reported in past interview studies. We report 21 findings, including limited involvement of data scientists in many ML products, unusually low modularity between ML and non-ML code, diverse architectural choices on incorporating models into products, and limited prevalence of industry best practices such as model testing, pipeline automation, and monitoring. Additionally, we discuss 7 implications of this study on research, development, and education, including the need for tools to assist teams without data scientists, education opportunities, and open-source-specific research for privacy-preserving telemetry.

 
ICSE 2025
Courtney Miller, Mahmoud Jahanshahi, Audris Mockus, Bogdan Vasilescu, and Christian Kästner. Understanding the Response to Open-Source Dependency Abandonment in the npm Ecosystem. In Proceedings of the 47th International Conference on Software Engineering (ICSE), April 2025. [ .pdf, bib ]

Many developers relying on open-source digital infrastructure expect continuous maintenance, but even the most critical packages can become unmaintained. Despite this, there is little understanding of the prevalence of abandonment of widely-used packages, of subsequent exposure, and of reactions to abandonment in practice, or the factors that influence them. We perform a large-scale quantitative analysis of all widely-used npm packages and find that abandonment is common among them, that abandonment exposes many projects which often do not respond, that responses correlate with other dependency management practices, and that removal is significantly faster when a project's end-of-life status is explicitly stated. We end with recommendations to both researchers and practitioners who are facing dependency abandonment or are sunsetting projects, such as opportunities for low-effort transparency mechanisms to help exposed projects make better, more informed decisions.

 
FAccT 2024
Nadia Nahar, Jenny Rowlett, Matthew Bray, Zahra Abba Omar, Xenophon Papademetris, Alka Menon, and Christian Kästner. Regulating Explainability in Machine Learning Applications -- Observations from a Policy Design Experiment. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT), pages 2101--2112, June 2024. [ .pdf, doi, bib ]

With the rise of artificial intelligence (AI), concerns about AI applications causing unforeseen harms to safety, privacy, security, and fairness are intensifying. While attempts to create regulations are underway, with initiatives such as the EU AI Act and the 2023 White House executive order, skepticism abounds as to the efficacy of such regulations. This paper explores an interdisciplinary approach to designing policy for the explainability of AI-based products, as the widely discussed "right to explanation" in the EU General Data Protection Regulation is ambiguous. To develop practical guidance for explainability, we conducted an experimental study that involved continuous collaboration among a team of researchers with AI and policy backgrounds over the course of ten weeks. The objective was to determine whether, through interdisciplinary effort, we can reach consensus on a policy for explainability in AI—one that is clearer, and more actionable and enforceable than current provisions. We share nine observations, derived from an iterative policy design process, which included drafting the policy, attempting to comply with it (or circumvent it), and collectively evaluating its effectiveness on a weekly basis. The observations include: iterative and continuous feedback was useful to improve policy drafts over time, discussing what evidence would satisfy policy was necessary during policy design, and human-subject studies were found to be necessary evidence to ensure effectiveness. We conclude with a note of optimism, arguing that meaningful policies can be achieved within a moderate time frame and with limited experience in policy design, as demonstrated by our student researchers on the team. This holds promising implications for policymakers, signaling that practical and effective regulation for AI applications is attainable.

 
EMNLP 2023
Chenyang Yang, Rishabh Rustogi, Rachel A Brower-Sinning, Grace Lewis, Christian Kästner, and Tongshuang Wu. Beyond Testers’ Biases: Guiding Model Testing with Knowledge Bases using LLMs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing -- Findings (EMNLP), pages 13504--13519, December 2023. [ .pdf, doi, http, bib ]

Current model testing work has mostly focused on creating test cases. Identifying what to test is a step that is largely ignored and poorly supported. We propose Weaver, an interactive tool that supports requirements elicitation for guiding model testing. Weaver uses large language models to generate knowledge bases and recommends concepts from them interactively, allowing testers to elicit requirements for further testing. Weaver provides rich external knowledge to testers and encourages testers to systematically explore diverse concepts beyond their own biases. In a user study, we show that both NLP experts and non-experts identified more, as well as more diverse concepts worth testing when using Weaver. Collectively, they found more than 200 failing test cases for stance detection with zero-shot ChatGPT. Our case studies further show that Weaver can help practitioners test models in real-world settings, where developers define more nuanced application scenarios (e.g., code understanding and transcript summarization) using LLMs.

 
ESEC/FSE 2023
Courtney Miller, Christian Kästner, and Bogdan Vasilescu. "We Feel Like We're Winging It:" A Study on Navigating Open-Source Dependency Abandonment. In Proceedings of the European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 1281--1293, New York, NY: ACM Press, December 2023. [ .pdf, doi, bib ]

While lots of research has explored how to prevent maintainers from abandoning the open-source projects that serve as our digital infrastructure, there are very few insights on addressing abandonment when it occurs. We argue open-source sustainability research must expand its focus beyond trying to keep particular projects alive, to also cover the sustainable use of open source by supporting users when they face potential or actual abandonment. We perform an interview study with 33 developers who have experienced open-source dependency abandonment and analyze the data using iterative thematic analysis. Often, multiple strategies were used to cope with abandonment, for example, first reaching out to the community to find potential alternatives, then switching to a community-accepted alternative if one exists. We found many developers felt they had little to no support or guidance when facing abandonment, leaving them to figure out what to do through a trial-and-error process on their own. Abandonment introduces cost for otherwise seemingly free dependencies, but users can decide whether and how to prepare for abandonment through a number of different strategies, such as dependency monitoring, building abstraction layers, and community involvement. In many cases, community members can invest in resources that help others facing the same abandoned dependency, but often do not because of the many other competing demands on their time – in a form of the volunteer's dilemma. We discuss cost reduction strategies and ideas to overcome this volunteer's dilemma. Our findings can be used directly by open-source users seeking resources on dealing with dependency abandonment, or by researchers to motivate future work supporting the sustainable use of open source.

 
CAIN 2023
Nadia Nahar, Haoran Zhang, Grace Lewis, Shurui Zhou, and Christian Kästner. A Meta-Summary of Challenges in Building Products with ML Components – Collecting Experiences from 4758+ Practitioners. In Proceedings of the International Conference on AI Engineering - Software Engineering for AI (CAIN), pages 171--183, May 2023. [ .pdf, doi, http, bib ]

Incorporating machine learning (ML) components into software products raises new software-engineering challenges and elevates already existing challenges. Many researchers have invested significant effort into understanding the challenges of industry practitioners working on building products with ML components through interviews and surveys with practitioners. With the intention to aggregate and present their collective findings, we conduct a meta-summary study: We collect 50 relevant papers that together interacted with over 4758 practitioners using guidelines for systematic literature reviews and subsequently group and organize the over 500 mentions of challenges within those papers. We highlight the most commonly reported challenges and how this meta-summary will be a useful resource for the research community to prioritize research and education in this field.

 
CHI 2023
Avinash Bhat, Austin Coursey, Grace Hu, Sixian Li, Nadia Nahar, Shurui Zhou, Christian Kästner, and Jin L.C. Guo. Aspirations and Practice of ML Model Documentation: Moving the Needle with Nudging and Traceability. In Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI), Article No.: 749, April 2023. [ .pdf, doi, http, bib ]

The documentation practice for machine-learned (ML) models often falls short of established practices for traditional software, which impedes model accountability and inadvertently abets inappropriate use or misuse of models. Recently, model cards, a proposal for model documentation, have attracted notable attention, but their impact on the actual practice is unclear. In this work, we systematically study the model documentation in the field and investigate how to encourage more responsible and accountable documentation practice. Our analysis of publicly available model cards reveals a substantial gap between the proposal and the practice. We then designed a tool named DocML aiming to (1) nudge the data scientists to comply with the model cards proposal during the model development, especially the sections related to ethics, and (2) assess and manage the documentation quality. A lab study reveals the benefit of our tool towards long-term documentation quality and accountability.

 
ASE 2022
Chenyang Yang, Rachel A Brower-Sinning, Grace Lewis, and Christian Kästner. Data Leakage in Notebooks: Static Detection and Better Processes. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE), Article No.: 30, New York, NY: ACM Press, October 2022. [ .pdf, doi, http, bib ]

Data science pipelines to train and evaluate models with machine learning may contain bugs just like any other code. Leakage between training and test data can lead to overestimating the model's accuracy during offline evaluations, possibly leading to deployment of low-quality models in production. Such leakage can happen easily by mistake or by following poor practices but may be tedious and challenging to detect manually. We develop a static analysis approach to detect common forms of data leakage in data science code. Our evaluation shows that our analysis accurately detects data leakage and that such leakage is pervasive among over 100,000 analyzed public notebooks. We discuss how our static analysis approach can help both practitioners and educators, and how leakage prevention can be designed into the development process.
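To make this concrete, here is a minimal, hypothetical sketch (not taken from the paper or its analysis tool) of one common leakage pattern that this kind of analysis targets, together with a leakage-free variant:

```python
# Illustrative sketch only: a common train/test leakage pattern and its fix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, 1000)

# Leaky: the scaler is fit on the full dataset, so test-set statistics leak
# into preprocessing and offline accuracy overestimates production accuracy.
X_leaky = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)

# Leakage-free: split first, then fit preprocessing on training data only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = LogisticRegression().fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", model.score(scaler.transform(X_te), y_te))
```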

 
IEEE-Sw 2022
Christian Kästner, Eunsuk Kang, and Sven Apel. Feature Interactions on Steroids: On the Composition of ML Models. IEEE Software (IEEE-Sw), 39(3):120--124, May 2022. [ .pdf, doi, bib ]

One of the key differences between traditional software engineering and machine learning (ML) is the lack of specifications for ML models. Traditionally, specifications provide a cornerstone for compositional reasoning and for the divide-and-conquer strategy of how we build large and complex systems from components, but these are hard to come by for machine learned components. While the lack of specification seems like a fundamental new problem at first sight, in fact, software engineers routinely deal with iffy specifications in practice. We face weak specifications, wrong specifications, and unanticipated interactions among specifications. ML may push us further, but the problems are not fundamentally new. Rethinking ML model composition from the perspective of the feature-interaction problem highlights the importance of software design.

 
ICSE 2022
Nadia Nahar, Shurui Zhou, Grace Lewis, and Christian Kästner. Collaboration Challenges in Building ML-Enabled Systems: Communication, Documentation, Engineering, and Process. In Proceedings of the 44th International Conference on Software Engineering (ICSE), pages 413--425, New York, NY: ACM Press, May 2022. Distinguished Paper Award. [ .pdf, doi, http, video, bib ]

The introduction of machine learning (ML) components in software projects has created the need for software engineers to collaborate with data scientists and other specialists. While collaboration can always be challenging, ML introduces additional challenges with its exploratory model development process, additional skills and knowledge needed, difficulties testing ML systems, need for continuous evolution and monitoring, and non-traditional quality requirements such as fairness and explainability. Through interviews with 45 practitioners from 28 organizations, we identified key collaboration challenges that teams face when building and deploying ML systems into production. We report on common collaboration points in the development of production ML systems for requirements, data, and integration, as well as corresponding team patterns and challenges. We find that most of these challenges center around communication, documentation, engineering, and process, and we collect recommendations to address these challenges.

 
ICSE 2022
Courtney Miller, Sophie Cohen, Daniel Klug, Bogdan Vasilescu, and Christian Kästner. "Did You Miss My Comment or What?" Understanding Toxicity in Open Source Discussions. In Proceedings of the 44th International Conference on Software Engineering (ICSE), pages 710--722, New York, NY: ACM Press, May 2022. Distinguished Paper Award. [ .pdf, doi, video, bib ]

Online toxicity is ubiquitous across the internet and its negative impact on the people and online communities it affects has been well documented. However, toxicity manifests differently on various platforms and toxicity in open source communities, while frequently discussed, is not well understood. We take a first stride at understanding the characteristics of open source toxicity to better inform future work designing effective intervention and detection methods. To this end, we curate a sample of 100 toxic GitHub issue discussions combining multiple search and sampling strategies. We then qualitatively analyze the sample to gain an understanding of the characteristics of open-source toxicity. We find that the prevalent forms of toxicity in open source differ from those observed on other platforms like Reddit or Wikipedia. We find some of the most prevalent forms of toxicity in open source are entitled, demanding, and arrogant comments from project users and insults arising from technical disagreements. In addition, not all toxicity was written by people external to the projects; project members were also common authors of toxicity. We also provide in-depth discussions about the implications of our findings including patterns that may be useful for detection work and subsequent questions for future work.

 
ICSE 2022
Miguel Velez, Pooyan Jamshidi, Norbert Siegmund, Sven Apel, and Christian Kästner. On Debugging the Performance of Configurable Software Systems: Developer Needs and Tailored Tool Support. In Proceedings of the 44th International Conference on Software Engineering (ICSE), pages 1571--1583, New York, NY: ACM Press, May 2022. [ .pdf, doi, http, video, bib ]

Determining whether a configurable software system has a performance bug or the system was misconfigured is often challenging. While there are numerous debugging techniques that can support developers in this task, there is limited empirical evidence of how useful the techniques are to address the actual needs that developers have when debugging the performance of configurable systems; most techniques are often evaluated in terms of technical accuracy instead of their usability. In this paper, we take a human-centered approach to identify, design, implement, and evaluate a solution to support developers in the process of debugging the performance of configurable software systems. We first conduct an exploratory study with 19 developers to identify the information needs that developers have during this process. Subsequently, we design and implement a tailored tool, building on relevant information provided by Global and Local performance-influence models, CPU profiling, and program slicing, to support those needs. Two user studies, with a total of 20 developers, validate and confirm that the information that we provide helps developers debug the performance of configurable software systems.

 
ASE 2021
Chenyang Yang, Shurui Zhou, Jin L.C. Guo, and Christian Kästner. Subtle Bugs Everywhere: Generating Documentation for Data Wrangling Code. In Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 304--316, Los Alamitos, CA: IEEE Computer Society, November 2021. [ .pdf, doi, bib ]

Data scientists reportedly spend a significant amount of their time in their daily routines on data wrangling, i.e., cleaning data and extracting features. However, data wrangling code is often repetitive and error-prone to write. Moreover, it is easy to introduce subtle bugs when reusing and adopting existing code, which result not in crashes but in reduced model quality. To support data scientists with data wrangling, we present a technique to generate interactive documentation for data wrangling code. We use (1) program synthesis techniques to automatically summarize data transformations and (2) test case selection techniques to purposefully select representative examples from the data based on execution information collected with tailored dynamic program analysis. We demonstrate that a JupyterLab extension with our technique can provide documentation for many cells in popular notebooks and find in a user study that users with our plugin are faster and more effective at finding realistic bugs in data wrangling code.

 
ESEC/FSE 2021
Chu-Pan Wong, Priscila Santiesteban, Christian Kästner, and Claire Le Goues. VarFix: Balancing Edit Expressiveness and Search Effectiveness in Automated Program Repair. In Proceedings of the European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 354--366, New York, NY: ACM Press, August 2021. [ .pdf, doi, bib ]

Automatically repairing a buggy program is essentially a search problem, searching for code transformations that pass a set of tests. Various search strategies have been explored, but they either navigate the search space in an ad hoc way using heuristics, or systematically but at the cost of limited edit expressiveness in the kinds of supported program edits. In this work, we explore the possibility of systematically navigating the search space without sacrificing edit expressiveness. The key enabler of this exploration is variational execution, a dynamic analysis technique that has been shown to be effective at exploring many similar executions in large search spaces. We evaluate our approach on IntroClassJava and Defects4J, showing that a systematic search is effective at leveraging and combining fixing ingredients to find patches, including many high-quality patches and multi-edit patches.

 
TOSEM 2021
Christopher Bogart, Christian Kästner, James Herbsleb, and Ferdian Thung. When and how to make breaking changes: Policies and practices in 18 open source software ecosystems. ACM Transactions on Software Engineering and Methodology (TOSEM), 30(4):Article No.: 42, pp 1--56, October 2021. [ .pdf, http, bib ]

Open source software projects often rely on package management systems that help projects discover, incorporate, and maintain dependencies on other packages, maintained by other people. Such systems save a great deal of effort over ad hoc ways of advertising, packaging, and transmitting useful libraries, but coordination among project teams is still needed when one package makes a breaking change affecting other packages. Ecosystems differ in their approaches to breaking changes, and there is no general theory to explain the relationships between features, behavioral norms, ecosystem outcomes, and motivating values. We address this through two empirical studies. In an interview case study we contrast Eclipse, NPM, and CRAN, demonstrating that these different norms for coordination of breaking changes shift the costs of using and maintaining the software among stakeholders, appropriate to each ecosystem’s mission. In a second study, we combine a survey, repository mining, and document analysis to broaden and systematize these observations across 18 ecosystems. We find that all ecosystems share values such as stability and compatibility, but differ in other values. Ecosystems’ practices often support their espoused values, but in surprisingly diverse ways. The data provides counterevidence against easy generalizations about why ecosystem communities do what they do.

 
ICSE 2021
Miguel Velez, Pooyan Jamshidi, Norbert Siegmund, Sven Apel, and Christian Kästner. White-Box Analysis over Machine Learning: Modeling Performance of Configurable Systems. In Proceedings of the 43rd International Conference on Software Engineering (ICSE), pages 1072--1084, Los Alamitos, CA: IEEE Computer Society, May 2021. [ .pdf, doi, http, bib ]

Performance-influence models can help stakeholders understand how and where configuration options and their interactions influence the performance of a system. With this understanding, stakeholders can debug performance and make deliberate configuration decisions. Current black-box techniques to build such models combine various sampling and learning strategies, resulting in trade offs between measurement effort, accuracy, and interpretability. We present Comprex, a white-box approach to build performance-influence models for configurable systems, combining insights of local measurements, dynamic taint analysis to track options in the implementation, compositionality, and compression of the configuration space, without using machine learning to extrapolate incomplete samples. Our evaluation on 4 widely-used open-source projects demonstrates that Comprex builds similarly accurate performance-influence models to the most accurate and expensive black-box approach, but at a reduced cost and with additional benefits from interpretable and local models.
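For intuition, a performance-influence model is essentially an interpretable formula over configuration options and their interactions. The following sketch uses invented option names and coefficients purely for illustration; it is not output of Comprex or any measured system:

```python
# Hypothetical performance-influence model: a sum of per-option terms and
# interaction terms (option names and coefficients are made up).
def predicted_runtime_ms(encryption: bool, compression: bool, cache_mb: int) -> float:
    return (
        120.0                                   # baseline with all options off
        + 45.0 * encryption                     # individual influence of encryption
        + 30.0 * compression                    # individual influence of compression
        + 0.8 * cache_mb                        # numeric option, assumed linear here
        + 25.0 * (encryption and compression)   # interaction of the two options
    )

print(predicted_runtime_ms(encryption=True, compression=True, cache_mb=64))  # 271.2
```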

 
ICSE 2021
Gabriel Ferreira, Limin Jia, Joshua Sunshine, and Christian Kästner. Containing Malicious Package Updates in npm with a Lightweight Permission System. In Proceedings of the 43rd International Conference on Software Engineering (ICSE), pages 1334--1346, Los Alamitos, CA: IEEE Computer Society, May 2021. [ .pdf, doi, bib ]

The large amount of third-party packages available in fast-moving software ecosystems, such as the Node.js/npm, enables attackers to compromise applications by pushing malicious updates to their package dependencies. Studying the npm repository, we observed that many packages perform only simple computations and do not need access to filesystem or network APIs. This offers the opportunity to enforce least-privilege design per package, protecting them from malicious updates. We discuss the design space and propose a lightweight permission system that protects Node.js/npm applications by enforcing package permissions at runtime. Our system makes a large number of packages much harder to be exploited, almost for free.

 
ESEC/FSE 2020
Chu-Pan Wong, Jens Meinicke, Leo Chen, João P. Diniz, Christian Kästner, and Eduardo Figueiredo. Efficiently Finding Higher-Order Mutants. In Proceedings of the European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 1165--1177, New York, NY: ACM Press, November 2020. [ .pdf, doi, teaser, video, bib ]

Higher-order mutation has the potential for improving major drawbacks of traditional first-order mutation, such as by simulating more realistic faults or improving test optimization techniques. Despite interest in studying promising higher-order mutants, such mutants are difficult to find due to the exponential search space of mutation combinations. State-of-the-art approaches rely on genetic search, which is often incomplete and expensive due to its stochastic nature. First, we propose a novel way of finding a complete set of higher-order mutants by using variational execution, a technique that can, in many cases, explore large search spaces completely and often efficiently. Second, we use the identified complete set of higher-order mutants to study their characteristics. Finally, we use the identified characteristics to design and evaluate a new search strategy, independent of variational execution, that is highly effective at finding higher-order mutants even in large code bases.
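As a small illustration of the terminology (an invented example, unrelated to the paper's subject programs): a first-order mutant changes a single code location, while a higher-order mutant combines several such changes, whose effects can interact:

```python
# Hypothetical example of first-order vs. higher-order mutants.
def max_of(a, b):
    return a if a > b else b        # original program

def max_of_m1(a, b):
    return a if a >= b else b       # first-order mutant: > replaced by >=

def max_of_m2(a, b):
    return b if a > b else a        # first-order mutant: branches swapped

def max_of_hom(a, b):
    return b if a >= b else a       # higher-order mutant: both edits combined

# Run against each variant, this test kills m2 and the higher-order mutant,
# but not m1 (which behaves identically for this input).
assert max_of(3, 5) == 5
```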

 
ICSE-SEIP 2020
Jens Meinicke, Chu-Pan Wong, Bogdan Vasilescu, and Christian Kästner. Exploring Differences and Commonalities between Feature Flags and Configuration Options. In Proceedings of the International Conference on Software Engineering -- Software Engineering in Practice Track (ICSE-SEIP), pages 233--242, May 2020. [ .pdf, doi, video, bib ]

Feature flags for continuous deployment and configuration options for customizing software share many similarities, both conceptually and technically. However, neither academic nor practitioner publications seem to distinguish these two concepts. We argue that a distinction is valuable, as applications, goals, and challenges differ fundamentally between feature flags and configuration options. In this work, we explore the differences and commonalities of both concepts to help understand practices and challenges and to help transfer existing solutions (e.g., for testing). To better understand feature flags and how they relate to configuration options, we performed nine semi-structured interviews with feature-flag experts. We discovered a number of distinguishing characteristics but also opportunities for knowledge and technology transfer across both communities. Overall, we think that both communities can learn from each other.
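A minimal sketch of the distinction, with hypothetical names and values: technically both are just booleans that guard code, but configuration options customize a deployment for users or operators, while feature flags gate in-development code paths for the team and are meant to be short-lived:

```python
# Illustrative sketch (hypothetical names and values).
import os

# Configuration option: set by the user/operator at load time to customize
# a deployment; typically documented and long-lived.
ENABLE_TLS = os.environ.get("MYAPP_ENABLE_TLS", "true").lower() == "true"

# Feature flag: toggled by the development team (often via a flag service)
# to gate an experimental code path during rollout; meant to be removed later.
FEATURE_FLAGS = {"new_checkout_flow": False}

def legacy_checkout(cart):
    return sum(cart)

def new_checkout(cart):                      # experimental path being rolled out
    return round(sum(cart), 2)

def checkout(cart):
    if FEATURE_FLAGS["new_checkout_flow"]:   # flag check, deleted after rollout
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout([19.99, 5.00]), "| TLS enabled:", ENABLE_TLS)
```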

 
ICSE 2020
Cassandra Overney, Jens Meinicke, Christian Kästner, and Bogdan Vasilescu. How to Not Get Rich: An Empirical Study of Donations in Open Source. In Proceedings of the 42nd International Conference on Software Engineering (ICSE), pages 1209--1221, New York, NY: ACM Press, May 2020. [ .pdf, doi, video, bib ]

Open source is ubiquitous and critical infrastructure, yet funding and sustaining it is challenging. While there are many different funding models for open-source donations and concerted efforts through foundations, donation platforms like Paypal, Patreon, or OpenCollective are popular and low-bar forms to raise funds for open-source development, for which GitHub recently even built explicit support. With a mixed-method study, we explore the emerging and largely unexplored phenomenon of donations in open source: We quantify how commonly open-source projects ask for donations, statistically model characteristics of projects that ask for and receive donations, analyze for what the requested funds are needed and used, and assess whether the received donations achieve the intended outcomes. We find 25,885 projects asking for donations on GitHub, often to support engineering activities; however, we also find no clear evidence that donations influence the activity level of a project. In fact, we find that donations are used in a multitude of ways, raising new research questions about effective funding.

 
ICSE 2020
Shurui Zhou, Bogdan Vasilescu, and Christian Kästner. How Has Forking Changed in the Last 20 Years? A Study of Hard Forks on GitHub. In Proceedings of the 42nd International Conference on Software Engineering (ICSE), pages 445--456, New York, NY: ACM Press, May 2020. [ .pdf, doi, video, bib ]

The notion of forking has changed with the rise of distributed version control systems and social coding environments, like GitHub. Traditionally forking refers to splitting off an independent development branch (which we call hard forks); research on hard forks, conducted mostly in pre-GitHub days, showed that hard forks were often viewed critically as they may fragment a community. Today, in social forking environments, open-source developers are encouraged to fork a project in order to integrate contributions to the community (which we call social forks), which may have also influenced perceptions and practices around hard forks. To revisit hard forks, we identify, study and classify 15,306 hard forks on GitHub and interview 18 owners of hard forks or forked repositories. We find that, among others, hard forks often evolve out of social forks rather than being planned deliberately and that perceptions of hard forks have indeed changed dramatically, with hard forks now often seen as a positive, non-competitive alternative to the original project.

 
ICSE-SEET 2020
Christian Kästner, and Eunsuk Kang. Teaching Software Engineering for AI-Enabled Systems. In Proceedings of the International Conference on Software Engineering -- Software Engineering Education and Training Track (ICSE-SEET), pages 45--48, New York, NY: ACM Press, May 2020. [ .pdf, doi, http, video, bib ]

Software engineers have significant expertise to offer when building intelligent systems, drawing on decades of experience and methods for building systems that scale and are responsive and robust, even when built on unreliable components. Systems with artificial-intelligence or machine-learning (ML) components raise new challenges and require careful engineering. We designed a new course to teach software-engineering skills to students with a background in ML. We specifically go beyond traditional ML courses that teach modeling techniques under artificial conditions and focus, in lecture and assignments, on realism with large and changing datasets, robust and evolvable infrastructure, and purposeful requirements engineering that considers also ethics and fairness. We describe the course and our infrastructure and share experience and all material from teaching the course for the first time.

 
TOSEM 2018
Alexander von Rhein, Jörg Liebig, Andreas Janker, Christian Kästner, and Sven Apel. Variability-Aware Static Analysis at Scale: An Empirical Study. ACM Transactions on Software Engineering and Methodology (TOSEM), 27(4):Article No. 18, 2018. [ .pdf, doi, bib ]

The advent of variability management and generator technology enables users to derive individual system variants from a configurable code base by selecting desired configuration options. This approach gives rise to the generation of possibly billions of variants, which, however, cannot be efficiently analyzed for bugs and other properties with classic analysis techniques. To address this issue, researchers and practitioners have developed sampling heuristics and, recently, variability-aware analysis techniques. While sampling reduces the analysis effort significantly, the information obtained is necessarily incomplete, and it is unknown whether state-of-the-art sampling techniques scale to billions of variants. Variability-aware analysis techniques process the configurable code base directly, exploiting similarities among individual variants with the goal of reducing analysis effort. However, while being promising, so far, variability-aware analysis techniques have been applied mostly only to small academic examples. To learn about the mutual strengths and weaknesses of variability-aware and sample-based static-analysis techniques, we compared the two by means of seven concrete control-flow and data-flow analyses, applied to five real-world subject systems: BusyBox, OpenSSL, SQLite, the x86 Linux kernel, and uclibc. In particular, we compare the efficiency (analysis execution time) of the static analyses and their effectiveness (potential bugs found). Overall, we found that variability-aware analysis outperforms most sample-based static-analysis techniques with respect to efficiency and effectiveness. For example, checking all variants of OpenSSL with a variability-aware static analysis is faster than checking even only two variants with an analysis that does not exploit similarities among variants.

 
OOPSLA 2018
Chu-Pan Wong, Jens Meinicke, Lukas Lazarek, and Christian Kästner. Faster Variational Execution with Transparent Bytecode Transformation. Proceedings of the ACM on Programming Languages, Issue OOPSLA (OOPSLA), 2:117:1--117:30, 2018. [ .pdf, doi, bib ]

Variational execution is a novel dynamic analysis technique for exploring highly configurable systems and accurately tracking information flow. It is able to efficiently analyze many configurations by aggressively sharing redundancies of program executions. The idea of variational execution has been demonstrated to be effective in exploring variations in the program, especially when the configuration space grows out of control. Existing implementations of variational execution often require heavy lifting of the runtime interpreter, which is painstaking and error-prone. Furthermore, the performance of this approach is suboptimal. For example, the state-of-the-art variational execution interpreter for Java, VarexJ, slows down executions by 100 to 800 times over a single execution for small to medium size Java programs. Instead of modifying existing JVMs, we propose to transform existing bytecode to make it variational, so it can be executed on an unmodified commodity JVM. Our evaluation shows a dramatic improvement on performance over the state-of-the-art, with a speedup of 2 to 46 times, and high efficiency in sharing computations.

 
ICSE 2018
Asher Trockman, Shurui Zhou, Christian Kästner, and Bogdan Vasilescu. Adding Sparkle to Social Coding: An Empirical Study of Repository Badges in the npm Ecosystem. In Proceedings of the 40th International Conference on Software Engineering (ICSE), pages 511--522, New York, NY: ACM Press, May 2018. [ .pdf, doi, http, bib ]

In fast-paced, reuse-heavy software development, the transparency provided by social coding platforms like GitHub is essential to decision making. Developers infer the quality of projects using visible cues, known as signals, collected from personal profile and repository pages. We report on a large-scale, mixed-methods empirical study of npm packages that explores the emerging phenomenon of repository badges, with which maintainers signal underlying qualities about the project to contributors and users. We investigate which qualities maintainers intend to signal and how well badges correlate with those qualities. After surveying developers, mining 294,941 repositories, and applying statistical modeling and time series analysis techniques, we find that non-trivial badges, which display the build status, test coverage, and up-to-dateness of dependencies, are mostly reliable signals, correlating with more tests, better pull requests, and fresher dependencies. Displaying such badges correlates with best practices, but the effects do not always persist.

 
ICSE 2018
Shurui Zhou, Ștefan Stănciulescu, Olaf Leßenich, Yingfei Xiong, Andrzej Wąsowski, and Christian Kästner. Identifying Features in Forks. In Proceedings of the 40th International Conference on Software Engineering (ICSE), pages 105--116, New York, NY: ACM Press, May 2018. [ .pdf, doi, http, bib ]

Fork-based development has been widely used both in the open source community and in industry, because it gives developers flexibility to modify their own fork without affecting others. Unfortunately, this mechanism has downsides; when the number of forks becomes large, it is difficult for developers to get or maintain an overview of activities in the forks. Current tools provide little help. We introduced INFOX, an approach that automatically identifies unmerged features in forks and generates an overview of active forks in a project. The approach clusters cohesive code fragments using code and network analysis techniques and uses information-retrieval techniques to label clusters with keywords. The clustering is effective, with 90% accuracy on a set of known features. In addition, a human-subject evaluation shows that INFOX can provide actionable insight for developers of forks.

 
TSE 2018
Max Lillack, Christian Kästner, and Eric Bodden. Tracking Load-time Configuration Options. IEEE Transactions on Software Engineering (TSE), 44(12):1269--1291, 2018. [ .pdf, doi, bib ]

Highly configurable software systems are pervasive, although configuration options and their interactions raise complexity of the program and increase maintenance effort. Especially load-time configuration options, such as parameters from command-line options or configuration files, are used with standard programming constructs such as variables and if-statements intermixed with the program’s implementation; manually tracking configuration options from the time they are loaded to the point where they may influence control-flow decisions is tedious and error prone. We design and implement LOTRACK, an extended static taint analysis to track configuration options automatically. LOTRACK derives a configuration map that explains for each code fragment under which configurations it may be executed. An evaluation on Android apps and Java applications from different domains shows that LOTRACK yields high accuracy with reasonable performance. We use LOTRACK to empirically characterize how much of the implementation of Android apps depends on the platform’s configuration options or interactions of these options.
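The following hypothetical sketch shows the kind of code pattern such an analysis targets: a load-time option enters through argument parsing as an ordinary variable and later influences control-flow decisions (the taint analysis itself is not shown):

```python
# Illustrative sketch (hypothetical options): load-time configuration options
# flow from argument parsing into ordinary if-statements.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")
parser.add_argument("--cache", choices=["none", "memory", "disk"], default="none")
args = parser.parse_args([])        # use defaults so the demo is self-contained

def load_data(path):
    if args.verbose:                # control flow depends on option 'verbose'
        print("loading", path)
    if args.cache == "memory":      # control flow depends on option 'cache'
        return {"cached": True, "path": path}
    return {"cached": False, "path": path}

print(load_data("data.csv"))
```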

 
ASE 2017
Pooyan Jamshidi, Norbert Siegmund, Miguel Velez, Christian Kästner, Akshay Patel, and Yuvraj Agarwal. Transfer Learning for Performance Modeling of Configurable Systems: An Exploratory Analysis. In Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 497--508, Los Alamitos, CA: IEEE Computer Society, November 2017. [ .pdf, doi, http, bib ]

Modern software systems provide many configuration options which not only influence their functionality but also non-functional properties such as response-time. To understand and predict the effect of configuration options, several sampling, analysis, and learning strategies have been proposed, albeit often with significant cost to cover the highly dimensional configuration space. Recently, transfer learning has been applied to reduce the effort of constructing performance models by transferring knowledge about performance behavior across environments. While this line of research is promising to learn more accurate models at lower cost, it is unclear until now why and when transfer learning works for performance modeling and analysis in highly configurable systems. To shed light on when it is beneficial to apply transfer learning, we conducted an empirical study on four popular software systems, varying software configurations and environmental conditions, such as hardware, workload, and software versions, to identify the key knowledge pieces that can be exploited for transfer learning. Our results show that in small environmental changes (e.g., homogeneous workload change), by applying a linear transformation to the performance model of the source environment, we can understand the performance behavior of the target environment, while for severe environmental changes (e.g., drastic workload change) we can transfer only knowledge that makes sampling in the target environment more efficient, e.g., by reducing the dimensionality of the configuration space.

 
TSE 2018
Flávio Medeiros, Márcio Ribeiro, Rohit Gheyi, Sven Apel, Christian Kästner, Bruno Ferreira, Luiz Carvalho, and Baldoino Fonseca. Discipline Matters: Refactoring of Preprocessor Directives in the #ifdef Hell. IEEE Transactions on Software Engineering (TSE), 44(5):453--469, May 2018. [ .pdf, doi, bib ]

The C preprocessor is used in many C projects to support variability and portability. However, researchers and practitioners criticize the C preprocessor because of its negative effect on code understanding and maintainability and its error proneness. More importantly, the use of the preprocessor hinders the development of tool support that is standard in other languages, such as automated refactoring. Developers aggravate these problems when using the preprocessor in undisciplined ways (e.g., conditional blocks that do not align with the syntactic structure of the code). In this article, we proposed a catalogue of refactorings and we evaluated the number of application possibilities of the refactorings in practice, the opinion of developers about the usefulness of the refactorings, and whether the refactorings preserve behavior. Overall, we found 5670 application possibilities for the refactorings in 63 real-world C projects. In addition, we performed an online survey among 246 developers, and we submitted 28 patches to convert undisciplined directives into disciplined ones. According to our results, 63% of developers prefer to use the refactored (i.e., disciplined) version of the code instead of the original code with undisciplined preprocessor usage. To verify that the refactorings are indeed behavior preserving, we applied them to more than 36 thousand programs generated automatically using a model of a subset of the C language, running the same test cases in the original and refactored programs. Furthermore, we applied the refactorings to three real-world projects: BusyBox, OpenSSL, and SQLite. This way, we detected and fixed a few behavioral changes, 62% caused by unspecified behavior in the C programming language.

 
ASE 2016
Jens Meinicke, Chu-Pan Wong, Christian Kästner, Thomas Thüm, and Gunter Saake. On Essential Configuration Complexity: Measuring Interactions In Highly-Configurable Systems. In Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 483--494, New York, NY: ACM Press, September 2016. [ .pdf, doi, bib ]

Quality assurance for highly-configurable systems is challenging due to the exponentially growing configuration space. Interactions among multiple options can lead to surprising behaviors, bugs, and security vulnerabilities. Analyzing all configurations systematically might be possible though if most options do not interact or interactions follow specific patterns that can be exploited by analysis tools. To better understand interactions in practice, we analyze program traces to identify where interactions occur on control flow and data. To this end, we developed a dynamic analysis for Java based on variability-aware execution and monitor executions of multiple mid-sized real-world programs. We find that the essential configuration complexity of these programs is indeed much lower than the combinatorial explosion of the configuration space indicates, but also that the interaction characteristics that allow scalable and complete analyses are more nuanced than what is exploited by existing state-of-the-art quality assurance strategies.

 
FSE 2016
Christopher Bogart, Christian Kästner, James Herbsleb, and Ferdian Thung. How to Break an API: Cost Negotiation and Community Values in Three Software Ecosystems. In Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), pages 109--120, New York, NY: ACM Press, November 2016. [ .pdf, doi, http, bib ]

Change introduces conflict into software ecosystems: breaking changes may ripple through the ecosystem and trigger rework for users of a package, but often developers can invest additional effort or accept opportunity costs to alleviate or delay downstream costs. We performed a multiple case study of three software ecosystems with different tooling and philosophies toward change, Eclipse, R/CRAN, and Node.js/npm, to understand how developers make decisions about change and change-related costs and what practices, tooling, and policies are used. We found that all three ecosystems differ substantially in their practices and expectations toward change and that those differences can be explained largely by different community values in each ecosystem. Our results illustrate that there is a large design space in how to build an ecosystem, its policies and its supporting infrastructure; and there is value in making community values and accepted tradeoffs explicit and transparent in order to resolve conflicts and negotiate change-related costs.

 
ICSE 2016
Flávio Medeiros, Christian Kästner, Márcio Ribeiro, Rohit Gheyi, and Sven Apel. A Comparison of 10 Sampling Algorithms for Configurable Systems. In Proceedings of the 38th International Conference on Software Engineering (ICSE), pages 643--654, New York, NY: ACM Press, May 2016. [ .pdf, doi, bib ]

Almost every software system provides configuration options to tailor the system to the target platform and application scenario. Often, this configurability renders the analysis of every individual system configuration infeasible. To address this problem, researchers proposed a diverse set of sampling algorithms. We present a comparative study of 10 state-of-the-art sampling algorithms regarding their fault-detection capability and size of sample sets. The former is important to improve software quality and the latter to reduce the time of analysis. In a nutshell, we found that the sampling algorithms with larger sample sets detected higher numbers of faults. Furthermore, we observed that the limiting assumptions made in previous work influence the number of detected faults, the size of sample sets, and the ranking of algorithms. Finally, we identified a number of technical challenges when trying to avoid the limiting assumptions, which question the practicality of certain sampling algorithms.

 
ESEC/FSE 2015
Norbert Siegmund, Alexander Grebhahn, Christian Kästner, and Sven Apel. Performance-Influence Models for Highly Configurable Systems. In Proceedings of the European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 284--294, New York, NY: ACM Press, August 2015. [ .pdf, bib ]

Almost every complex software system today is configurable. While configurability has many benefits, it challenges performance prediction, optimization, and debugging. Often, the influences of the individual configuration options on performance are unknown. Worse, configuration options may interact, giving rise to a configuration space of possibly exponential size. Addressing this challenge, we propose an approach that derives a performance-influence model for a given configurable system, describing all relevant influences of configuration options and their interactions. Such a model shall be useful for automatic performance prediction and optimization, on the one hand, and performance debugging for developers, on the other hand. Our approach combines machine-learning and sampling techniques in a novel way. It improves over standard techniques in that it (1) represents influences of options and their interactions explicitly (which eases debugging), (2) smoothly integrates binary and numeric configuration options for the first time, (3) incorporates domain knowledge, if available (which eases learning and increases accuracy), (4) considers complex constraints among options, and (5) systematically reduces the solution space to a tractable size. A series of experiments demonstrates the feasibility of our approach in terms of the accuracy of the models learned as well as the accuracy of the performance predictions one can make with them. Using our approach, we were able to identify a number of real performance bugs and other problems in real-world systems.
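As a rough sketch of the shape of such a performance-influence model (the notation is simplified and mine, not the paper's):

    % x_i denotes the value of configuration option i in configuration c
    % (0/1 for binary options, a number for numeric options).
    \[
      \Pi(c) \;=\; \beta_0 \;+\; \sum_{i} \beta_i\, x_i
             \;+\; \sum_{i<j} \beta_{i,j}\, x_i x_j \;+\; \dots
    \]
    % beta_0 is a base performance, each beta_i captures the influence of an
    % individual option, and the beta_{i,j} terms capture interactions; the
    % learning step determines which terms are relevant and their coefficients.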

 
ESEC/FSE 2015
Hung Viet Nguyen, Christian Kästner, and Tien N. Nguyen. Cross-language Program Slicing for Dynamic Web Applications. In Proceedings of the European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), pages 369--380, New York, NY: ACM Press, August 2015. [ .pdf, bib ]

During software maintenance, program slicing is a useful technique to assist developers in understanding the impact of their changes. While different program-slicing techniques have been proposed for traditional software systems, program slicing for dynamic web applications is challenging since the client-side code is generated from the server-side code and data entities are referenced across different languages and are often embedded in string literals in the server-side program. To address those challenges, we introduce WebSlice, an approach to compute program slices across different languages for web applications. We first identify data-flow dependencies among data entities for PHP code based on symbolic execution. We also compute SQL queries and a conditional DOM that represents client-code variations and construct the data flows for embedded languages: SQL, HTML, and JavaScript. Next, we connect the data flows across different languages and those across PHP pages. Finally, we compute a program slice for any given entity based on the established data flows. Running WebSlice on five real-world PHP systems, we found that out of 40,670 program slices, 10 % cross languages, 38 % cross files, and 13 % cross string fragments, demonstrating the potential benefit of tool support for cross-language program slicing in web applications.

 
TSE 2015
Sarah Nadi, Thorsten Berger, Christian Kästner, and Krzysztof Czarnecki. Where do Configuration Constraints Stem From? An Extraction Approach and an Empirical Study. IEEE Transactions on Software Engineering (TSE), 41(8):820--841, 2015. [ .pdf, doi, bib ]

Highly configurable systems allow users to tailor software to specific needs. Valid combinations of configuration options are often restricted by intricate constraints. Describing options and constraints in a variability model allows reasoning about the supported configurations. To automate creating and verifying such models, we need to identify the origin of such constraints. We propose a static analysis approach, based on two rules, to extract configuration constraints from code. We apply it on four highly configurable systems to evaluate the accuracy of our approach and to determine which constraints are recoverable from the code. We find that our approach is highly accurate (93 % and 77 % respectively) and that we can recover 28 % of existing constraints. We complement our approach with a qualitative study to identify constraint sources, triangulating results from our automatic extraction, manual inspections, and interviews with 27 developers. We find that, apart from low-level implementation dependencies, configuration constraints enforce correct runtime behavior, improve users’ configuration experience, and prevent corner cases. While the majority of constraints is extractable from code, our results indicate that creating a complete model requires further substantial domain knowledge and testing. Our results aim at supporting researchers and practitioners working on variability model engineering, evolution, and verification techniques.
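A small, self-made example (not taken from the studied systems) of the two kinds of evidence such an extraction can exploit:

    #include <stdio.h>

    /* Rule 1 (build errors): every valid configuration must build, so a
       configuration selecting both options below is invalid, implying the
       constraint !(ENABLE_TLS && TINY_BUILD). */
    #if defined(ENABLE_TLS) && defined(TINY_BUILD)
    #error "TLS support does not fit into the tiny build"
    #endif

    /* Rule 2 (feature effect): TLS_DEBUG changes the program only when
       ENABLE_TLS is also selected, suggesting TLS_DEBUG -> ENABLE_TLS. */
    int main(void) {
    #ifdef ENABLE_TLS
    #ifdef TLS_DEBUG
        puts("verbose TLS tracing enabled");
    #endif
        puts("TLS enabled");
    #endif
        return 0;
    }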

 
ECOOP 2015
Flávio Medeiros, Christian Kästner, Márcio Ribeiro, Sarah Nadi, and Rohit Gheyi. The Love/Hate Relationship with The C Preprocessor: An Interview Study. In Proceedings of the 29th European Conference on Object-Oriented Programming (ECOOP), volume 37 of Leibniz International Proceedings in Informatics, pages 495--518, Dagstuhl, Germany: Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik, 2015. [ .pdf, doi, bib ]

The C preprocessor has received strong criticism in academia, among others regarding separation of concerns, error proneness, and code obfuscation, but is widely used in practice. Many (mostly academic) alternatives to the preprocessor exist, but have not been adopted in practice. Since developers continue to use the preprocessor despite all criticism and research, we ask how practitioners perceive the C preprocessor. We performed interviews with 40 developers, used grounded theory to analyze the data, and cross-validated the results with data from a survey among 202 developers, repository mining, and results from previous studies. In particular, we investigated four research questions related to why the preprocessor is still widely used in practice, common problems, alternatives, and the impact of undisciplined annotations. Our study shows that developers are aware of the criticism the C preprocessor receives, but use it nonetheless, mainly for portability and variability. They indicate that they regularly face preprocessor-related problems and preprocessor-related bugs. The majority of our interviewees do not see any current C-native technologies that can entirely replace the C preprocessor. However, developers tend to mitigate problems with guidelines, but those guidelines are not enforced consistently. We report the key insights gained from our study and discuss implications for practitioners and researchers on how to better use the C preprocessor to minimize its negative impact.

 
FSE 2014
Hung Viet Nguyen, Christian Kästner, and Tien N. Nguyen. Building Call Graphs for Embedded Client-Side Code in Dynamic Web Applications. In Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), pages 518--529, New York, NY: ACM Press, November 2014. [ .pdf, doi, bib ]

When developing and maintaining a software system, programmers often rely on IDEs to provide editor services such as syntax highlighting, auto-completion, and “jump to declaration”. In dynamic web applications, such tool support is currently limited to either the server-side code or to hand-written or generated client-side code. Our goal is to build a call graph for providing editor services on client-side code while it is still embedded as string literals within server-side code. First, we symbolically execute the server-side code to identify all possible client-side code variations. Subsequently, we parse the generated client-side code with all its variations into a VarDOM that compactly represents all DOM variations for further analysis. Based on VarDOM, we build conditional call graphs for embedded HTML, CSS, and JS. Our empirical evaluation on real-world web applications shows that our analysis achieves 100 % precision in identifying call-graph edges. 62 % of the edges cross PHP strings, and 17 % of them cross files—in both situations, navigation without tool support is tedious and error prone.

 
Onward! 2014
Eric Walkingshaw, Christian Kästner, Martin Erwig, Sven Apel, and Eric Bodden. Variational Data Structures: Exploring Tradeoffs in Computing with Variability. In Proceedings of the 13th SIGPLAN Symposium on New Ideas in Programming and Reflections on Software at SPLASH (Onward!), pages 213--226, New York, NY: ACM Press, 2014. [ .pdf, doi, bib ]

Variation is everywhere, but in the construction and analysis of customizable software it is paramount. In this context, there arises a need for variational data structures for efficiently representing and computing with related variants of an underlying data type. So far, variational data structures have been explored and developed ad hoc. This paper is a first attempt and a call to action for systematic and foundational research in this area. Research on variational data structures will benefit not only customizable software, but the many other application domains that must cope with variability. In this paper, we show how support for variation can be understood as a general and orthogonal property of data types, data structures, and algorithms. We begin a systematic exploration of basic variational data structures, exploring the tradeoffs between different implementations. Finally, we retrospectively analyze the design decisions in our own previous work where we have independently encountered problems requiring variational data structures.
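As a toy illustration of the idea (a sketch of my own, not a data structure from the paper), a "variational integer" stores one value per alternative and lets computations share work across variants instead of running once per variant:

    #include <stdio.h>

    /* A variational integer: one value per alternative, tagged with the
       configuration option that decides which alternative applies. */
    typedef struct {
        const char *option;   /* presence condition (a single option name here) */
        int if_enabled;       /* value when the option is selected */
        int if_disabled;      /* value when the option is deselected */
    } VInt;

    /* Compute with both alternatives at once instead of once per variant. */
    static VInt vadd(VInt a, int plain) {
        VInt r = { a.option, a.if_enabled + plain, a.if_disabled + plain };
        return r;
    }

    int main(void) {
        VInt buffer_size = { "LARGE_CACHE", 4096, 256 };
        VInt total = vadd(buffer_size, 64);   /* shared across both variants */
        printf("LARGE_CACHE on: %d, off: %d\n", total.if_enabled, total.if_disabled);
        return 0;
    }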

 
CSUR 2014
Thomas Thüm, Sven Apel, Christian Kästner, Ina Schaefer, and Gunter Saake. A Classification and Survey of Analysis Strategies for Software Product Lines. ACM Computing Surveys (CSUR), 47(1):Article 6, June 2014. [ .pdf, doi, http, bib ]

Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a set of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, such that it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 76 articles, we infer a research agenda to guide future research on product-line analyses.

 
ICSE 2014
Janet Siegmund, Christian Kästner, Sven Apel, Chris Parnin, Anja Bethmann, Thomas Leich, Gunter Saake, and André Brechmann. Understanding Understanding Source Code with Functional Magnetic Resonance Imaging. In Proceedings of the 36th International Conference on Software Engineering (ICSE), pages 378--389, June 2014. [ .pdf, doi, bib ]

Program comprehension is an important cognitive process that inherently eludes direct measurement. Thus, researchers are struggling with providing optimal programming languages, tools, or coding conventions to support developers in their everyday work. With our approach, we explore whether functional magnetic resonance imaging (fMRI), which is well established in cognitive neuroscience, is feasible to directly measure program comprehension. To this end, we observed 17 participants inside an fMRI scanner while comprehending short source-code snippets, which we contrasted with locating syntax errors. We found a clear, distinct activation pattern of five brain regions, which are related to working memory, attention, and language processing—all processes that fit well to our understanding of program comprehension. Based on the results, we propose a model of program comprehension. Our results encourage us to use fMRI in future studies to measure program comprehension and, in the long run, answer questions, such as: Can we predict whether someone will be an excellent programmer? How effective are new languages and tools for program understanding? How do we train someone to become an excellent programmer?

 
ICSE 2014
Hung Viet Nguyen, Christian Kästner, and Tien N. Nguyen. Exploring Variability-Aware Execution for Testing Plugin-Based Web Applications. In Proceedings of the 36th International Conference on Software Engineering (ICSE), pages 907--918, June 2014. [ .pdf, doi, bib ]

In plugin-based systems, plugin conflicts may occur when two or more plugins interfere with one another, changing their expected behaviors. It is highly challenging to detect plugin conflicts due to the exponential explosion of the combinations of plugins (i.e., configurations). In this paper, we address the challenge of executing a test case over many configurations. Leveraging the fact that many executions of a test are similar, our variability-aware execution runs common code once. Only when encountering values that are different depending on specific configurations will the execution split to run for each of them. To evaluate the scalability of variability-aware execution on a large real-world setting, we built a prototype PHP interpreter called Varex and ran it on the popular WordPress blogging Web application. The results show that while plugin interactions exist, there is a significant amount of sharing that allows variability-aware execution to scale to 2^50 configurations within seven minutes of running time. During our study, with Varex, we were able to detect two plugin conflicts: one had recently been reported on the WordPress forum, and the other had not been discovered before.

 
ICSE 2014
Sarah Nadi, Thorsten Berger, Christian Kästner, and Krzysztof Czarnecki. Mining Configuration Constraints: Static Analyses and Empirical Results. In Proceedings of the 36th International Conference on Software Engineering (ICSE), pages 140--151, June 2014. [ .pdf, doi, bib ]

Highly-configurable systems allow users to tailor the software to their specific needs. Not all combinations of configuration options are valid though, and constraints arise for technical or non-technical reasons. Explicitly describing these constraints in a variability model allows reasoning about the supported configurations. To automate creating variability models, we need to identify the origin of such configuration constraints. We propose an approach which uses build-time errors and a novel feature-effect heuristic to automatically extract configuration constraints from C code. We conduct an empirical study on four highly-configurable open-source systems with existing variability models having three objectives in mind: evaluate the accuracy of our approach, determine the recoverability of existing variability-model constraints using our analysis, and classify the sources of variability-model constraints. We find that both our extraction heuristics are highly accurate (93 % and 77 % respectively), and that we can recover 19 % of the existing variability-model constraints using our approach. However, we find that many of the remaining constraints require expert knowledge or more expensive analyses. We argue that our approach, tooling, and experimental results support researchers and practitioners working on variability model re-engineering, evolution, and consistency-checking techniques.

 
ICSE 2014
Márcio Ribeiro, Paulo Borba, and Christian Kästner. Feature Maintenance with Emergent Interfaces. In Proceedings of the 36th International Conference on Software Engineering (ICSE), pages 989--1000, June 2014. [ .pdf, doi, bib ]

Hidden code dependencies are responsible for many complications in maintenance tasks. With the introduction of variable features in product lines, dependencies may even cross feature boundaries and related problems are prone to be detected late. Many current implementation techniques for product lines lack proper interfaces, which could make such dependencies explicit. As an alternative to changing the implementation approach, we provide a comprehensive tool-based solution to support developers in recognizing and dealing with feature dependencies: emergent interfaces. Emergent interfaces are computed on demand, based on feature-sensitive interprocedural data-flow analysis. They emerge in the IDE and emulate benefits of modularity not available in the host language. To evaluate the potential of emergent interfaces, we conducted and replicated a controlled experiment, and found, in the studied context, that emergent interfaces can improve performance of code change tasks by up to 3 times while also reducing the number of errors.

 
TSE 2014
Christian Kästner, Alexander Dreiling, and Klaus Ostermann. Variability Mining: Consistent Semiautomatic Detection of Product-Line Features. IEEE Transactions on Software Engineering (TSE), 40(1):67--82, 2014. [ .pdf, doi, http, bib ]

Software product line engineering is an efficient means to generate a set of tailored software products from a common implementation. However, adopting a product-line approach poses a major challenge and significant risks, since typically legacy code must be migrated toward a product line. Our aim is to lower the adoption barrier by providing semiautomatic tool support—called variability mining—to support developers in locating, documenting, and extracting implementations of product-line features from legacy code. Variability mining combines prior work on concern location, reverse engineering, and variability-aware type systems, but is tailored specifically for the use in product lines. Our work pursues three technical goals: (1) we provide a consistency indicator based on a variability-aware type system, (2) we mine features at a fine level of granularity, and (3) we exploit domain knowledge about the relationship between features when available. With a quantitative study, we demonstrate that variability mining can efficiently support developers in locating features.

 
2013
Sven Apel, Don Batory, Christian Kästner, and Gunter Saake. Feature-Oriented Software Product Lines: Concepts and Implementation. Berlin/Heidelberg: Springer-Verlag, 2013. 308 pages, ISBN 978-3-642-37520-0. [ http, bib ]

While standardization has empowered the software industry to substantially scale software development and to provide affordable software to a broad market, it often does not address smaller market segments, nor the needs and wishes of individual customers. Software product lines reconcile mass production and standardization with mass customization in software engineering. Ideally, based on a set of reusable parts, a software manufacturer can generate a software product based on the requirements of its customer. The concept of features is central to achieving this level of automation, because features bridge the gap between the requirements the customer has and the functionality a product provides. Thus features are a central concept in all phases of product-line development. The authors take a developer’s viewpoint, focus on the development, maintenance, and implementation of product-line variability, and especially concentrate on automated product derivation based on a user’s feature selection. The book consists of three parts. Part I provides a general introduction to feature-oriented software product lines, describing the product-line approach and introducing the product-line development process with its two elements of domain and application engineering. The pivotal Part II covers a wide variety of implementation techniques including design patterns, frameworks, components, feature-oriented programming, and aspect-oriented programming, as well as tool-based approaches including preprocessors, build systems, version-control systems, and virtual separation of concerns. Finally, Part III is devoted to advanced topics related to feature-oriented product lines like refactoring, feature interaction, and analysis tools specific to product lines. In addition, an Appendix lists various helpful tools for software product-line development, along with a description of how they relate to the topics covered in this book. To tie the book together, the authors use two running examples that are well documented in the product-line literature: data management for embedded systems, and variations of graph data structures. They start every chapter by explicitly stating the respective learning goals and finish it with a set of exercises; additional teaching material is also available online. All these features make the book ideally suited for teaching – both for academic classes and for professionals interested in self-study.

 
OOPSLA 2012
Christian Kästner, Klaus Ostermann, and Sebastian Erdweg. A Variability-Aware Module System. In Proceedings of the 27th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), pages 773--792, New York, NY: ACM Press, October 2012. [ .pdf, doi, bib ]

Module systems enable a divide and conquer strategy to software development. To implement compile-time variability in software product lines, modules can be composed in different combinations. However, this way variability dictates a dominant decomposition. Instead, we introduce a variability-aware module system that supports compile-time variability inside a module and its interface. This way, each module can be considered a product line that can be type checked in isolation. Variability can crosscut multiple modules. The module system breaks with the antimodular tradition of a global variability model in product-line development and provides a path toward software ecosystems and product lines of product lines developed in an open fashion. We discuss the design and implementation of such a module system on a core calculus and provide an implementation for C, which we use to type check the open source product line Busybox with 811 compile-time options.

 
EMSE 2012
Janet Feigenspan, Christian Kästner, Sven Apel, Jörg Liebig, Michael Schulze, Raimund Dachselt, Maria Papendieck, Thomas Leich, and Gunter Saake. Do Background Colors Improve Program Comprehension in the #ifdef Hell? Empirical Software Engineering (EMSE), 18(4):699--745, 2012. [ .pdf, doi, http, bib ]

Software-product-line engineering aims at the development of variable and reusable software systems. In practice, software product lines are often implemented with preprocessors. Preprocessor directives are easy to use, and many mature tools are available for practitioners. However, preprocessor directives have been heavily criticized in academia and even referred to as “#ifdef hell”, because they introduce threats to program comprehension and correctness. There are many voices that suggest to use other implementation techniques instead, but these voices ignore the fact that a transition from preprocessors to other languages and tools is tedious, erroneous, and expensive in practice. Instead, we and others propose to increase the readability of preprocessor directives by using background colors to highlight source code annotated with ifdef directives. In three controlled experiments with over 70 subjects in total, we evaluate whether and how background colors improve program comprehension in preprocessor-based implementations. Our results demonstrate that background colors have the potential to improve program comprehension, independently of size and programming language of the underlying product. Additionally, we found that subjects generally favor background colors. We integrate these and other findings in a tool called FeatureCommander, which facilitates program comprehension in practice and which can serve as a basis for further research.

 
ICSE 2012
Norbert Siegmund, Sergiy S. Kolesnikov, Christian Kästner, Sven Apel, Don Batory, Marko Rosenmüller, and Gunter Saake. Predicting Performance via Automated Feature-Interaction Detection. In Proceedings of the 34th International Conference on Software Engineering (ICSE), pages 167--177, Los Alamitos, CA: IEEE Computer Society, 2012. [ .pdf, bib ]

Customizable programs and program families provide user-selectable features to tailor a program to an application scenario. Knowing in advance which feature selection yields the best performance is difficult because a direct measurement of all possible feature combinations is infeasible. Our work aims at predicting program performance based on selected features. The challenge is predicting performance accurately when features interact. An interaction occurs when a feature combination has an unexpected influence on performance. We present a method that automatically detects performance feature interactions to improve prediction accuracy. To this end, we propose three heuristics to reduce the number of measurements required to detect interactions. Our evaluation consists of six real-world case studies from varying domains (e.g., databases, compression libraries, and web servers) using different configuration techniques (e.g., configuration files and preprocessor flags). Results show, on average, a prediction accuracy of 95 %.

 
TSE 2013
Sven Apel, Christian Kästner, and Christian Lengauer. Language-Independent and Automated Software Composition: The FeatureHouse Experience. IEEE Transactions on Software Engineering (TSE), 39(1):63--79, 2013. [ .pdf, http, bib ]

Superimposition is a composition technique that has been applied successfully in many areas of software development. Although superimposition is a general-purpose concept, it has been (re)invented and implemented individually for various kinds of software artifacts. We unify languages and tools that rely on superimposition by using the language-independent model of feature structure trees (FSTs). On the basis of the FST model, we propose a general approach to the composition of software artifacts written in different languages. Furthermore, we offer a supporting framework and tool chain, called FeatureHouse. We use attribute grammars to automate the integration of additional languages. In particular, we have integrated Java, C#, C, Haskell, Alloy, and JavaCC. A substantial number of case studies demonstrate the practicality and scalability of our approach and reveal insights into the properties that a language must have in order to be ready for superimposition. We discuss perspectives of our approach and demonstrate how we extended FeatureHouse with support for XML languages (in particular, XHTML, XMI/UML, and Ant) and alternative composition approaches (in particular, aspect weaving). Rounding off our previous work, we provide here a holistic view of the FeatureHouse approach based on rich experience with numerous languages and case studies and reflections on several years of research.

 
FOSD 2011
Christian Kästner, Sven Apel, and Klaus Ostermann. The Road to Feature Modularity? In Proceedings of the 3rd International Workshop on Feature-Oriented Software Development (FOSD), pages 5:1--5:8, New York, NY: ACM Press, September 2011. [ .pdf, doi, bib ]

Modularity of feature representations has been a long standing goal of feature-oriented software development. While some researchers regard feature modules and corresponding composition mechanisms as a modular solution, other researchers have challenged the notion of feature modularity and pointed out that most feature-oriented implementation mechanisms lack proper interfaces and support neither modular type checking nor separate compilation. We step back and reflect on the feature-modularity discussion. We distinguish two notions of modularity, cohesion without interfaces and information hiding with interfaces, and point out the different expectations that, we believe, are the root of many heated discussions. We discuss whether feature interfaces should be desired and weigh their potential benefits and costs, specifically regarding crosscutting, granularity, feature interactions, and the distinction between closed-world and open-world reasoning. Because existing evidence for and against feature modularity and feature interfaces is shaky and inconclusive, more research is needed, for which we outline possible directions.

 
OOPSLA 2011
Christian Kästner, Paolo G. Giarrusso, Tillmann Rendel, Sebastian Erdweg, Klaus Ostermann, and Thorsten Berger. Variability-Aware Parsing in the Presence of Lexical Macros and Conditional Compilation. In Proceedings of the 26th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), pages 805--824, New York, NY: ACM Press, October 2011. [ .pdf, doi, bib ]

In many projects, lexical preprocessors are used to manage different variants of the project (using conditional compilation) and to define compile-time code transformations (using macros). Unfortunately, while being a simple way to implement variability, conditional compilation and lexical macros hinder automatic analysis, even though such analysis would be urgently needed to combat variability-induced complexity. To analyze code with its variability, we need to parse it without preprocessing it. However, current parsing solutions use heuristics, support only a subset of the language, or suffer from exponential explosion. As part of the TypeChef project, we contribute a novel variability-aware parser that can parse unpreprocessed code without heuristics in practicable time. Beyond the obvious task of detecting syntax errors, our parser paves the road for further analysis, such as variability-aware type checking. We implement variability-aware parsers for Java and GNU C and demonstrate practicability by parsing the product line MobileMedia and the entire X86 architecture of the Linux kernel with 6065 variable features.
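A small example of my own (not from the paper) of the kind of unpreprocessed input such a parser must handle: the #ifdef cuts through a function signature, so no single token stream corresponds to "the" program, and the parser has to produce one AST with choice nodes instead of expanding every configuration. The code compiles both with and without -DWITH_MODE:

    #include <stdio.h>

    /* The conditional directive splits a declaration; a variability-aware
       parser represents both alternatives in a single syntax tree. */
    int open_file(const char *path
    #ifdef WITH_MODE
                  , int mode
    #endif
                  ) {
    #ifdef WITH_MODE
        printf("opening %s with mode %d\n", path, mode);
    #else
        printf("opening %s\n", path);
    #endif
        return 0;
    }

    int main(void) {
    #ifdef WITH_MODE
        return open_file("data.txt", 2);
    #else
        return open_file("data.txt");
    #endif
    }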

 
ECOOP 2011
Klaus Ostermann, Paolo G. Giarrusso, Christian Kästner, and Tillmann Rendel. Revisiting Information Hiding: Reflections on Classical and Nonclassical Modularity. In Proceedings of the 25th European Conference on Object-Oriented Programming (ECOOP), volume 6813 of Lecture Notes in Computer Science, pages 155--178, Berlin/Heidelberg: Springer-Verlag, 2011. [ .pdf, doi, epub, bib ]

What is modularity? Which kind of modularity should developers strive for? Despite decades of research on modularity, these basic questions have no definite answer. We submit that the common understanding of modularity, and in particular its notion of information hiding, is deeply rooted in classical logic. We analyze how classical modularity, based on classical logic, fails to address the needs of developers of large software systems, and encourage researchers to explore alternative visions of modularity, based on nonclassical logics, and henceforth called nonclassical modularity.

 
TOSEM 2012
Christian Kästner, Sven Apel, Thomas Thüm, and Gunter Saake. Type Checking Annotation-Based Product Lines. ACM Transactions on Software Engineering and Methodology (TOSEM), 21(3):Article 14, 2012. [ .pdf, doi, epub, bib ]

Software-product-line engineering is an efficient means to generate a family of program variants for a domain from a single code base. However, because of the potentially high number of possible program variants, it is difficult to test them all and ensure properties like type safety for the entire product line. We present a product-line–aware type system that can type check an entire software product line without generating each variant in isolation. Specifically, we extend the Featherweight Java calculus with feature annotations for product-line development and prove formally that all program variants generated from a well-typed product line are well-typed. Furthermore, we present a solution to the problem of typing mutually exclusive features. We discuss how results from our formalization helped us implement our own product-line tool CIDE for full Java and report on our experience with detecting type errors in four existing software-product-line implementations.
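The paper formalizes this for Featherweight Java; the following C-flavored sketch of my own merely illustrates the class of errors such a product-line-aware type system catches without generating each variant:

    #include <stdio.h>

    #ifdef STATISTICS
    static void record_hit(void) { puts("hit recorded"); }
    #endif

    #ifdef CACHE
    static void cache_lookup(void) {
        /* Ill-typed in variants with CACHE but without STATISTICS: record_hit
           is then undeclared/undefined and the build fails. A product-line-aware
           type checker reports the offending feature combination directly. */
        record_hit();
    }
    #endif

    int main(void) {
    #ifdef CACHE
        cache_lookup();
    #endif
        return 0;
    }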

 
AOSD 2011
Jörg Liebig, Christian Kästner, and Sven Apel. Analyzing the Discipline of Preprocessor Annotations in 30 Million Lines of C Code. In Proceedings of the 10th ACM International Conference on Aspect-Oriented Software Development (AOSD), pages 191--202, New York, NY: ACM Press, March 2011. [ .pdf, acm, bib ]

The C preprocessor cpp is a widely used tool for implementing variable software. It enables programmers to express variable code of features that may crosscut the entire implementation with conditional compilation. The C preprocessor relies on simple text processing and is independent of the host language (C, C++, Java, and so on). Language-independent text processing is powerful and expressive—programmers can make all kinds of annotations in the form of #ifdefs—but it can render unpreprocessed code difficult to process automatically by tools, such as code aspect refactoring, concern management, and also static analysis and variability-aware type checking. We distinguish between disciplined annotations, which align with the underlying source-code structure, and undisciplined annotations, which do not align with the structure and hence complicate tool development. This distinction raises the question of how frequently programmers use undisciplined annotations and whether it is feasible to change them to disciplined annotations to simplify tool development and to enable programmers to use a wide variety of tools in the first place. By means of an analysis of 40 medium-sized to large-sized C programs, we show empirically that programmers use cpp mostly in a disciplined way: about 85 % of all annotations respect the underlying source-code structure. Furthermore, we analyze the remaining undisciplined annotations, identify patterns, and discuss how to transform them into a disciplined form.

 
ICSE 2010
Jörg Liebig, Sven Apel, Christian Lengauer, Christian Kästner, and Michael Schulze. An Analysis of the Variability in Forty Preprocessor-Based Software Product Lines. In Proceedings of the 32nd International Conference on Software Engineering (ICSE), pages 105--114, New York, NY: ACM Press, May 2010. [ .pdf, acm, doi, bib ]

Over 30 years ago, the preprocessor cpp was developed to extend the programming language C by lightweight metaprogramming capabilities. Despite its error-proneness and low abstraction level, the cpp is still widely used in present-day software projects to implement variable software. However, not much is known about how the cpp is employed to implement variability. To address this issue, we have analyzed forty open-source software projects written in C. Specifically, we answer the following questions: How does program size influence variability? How complex are extensions made via cpp's variability mechanisms? At which level of granularity are extensions applied? What is the general type of extensions? These questions revive earlier discussions on understanding and refactoring of the preprocessor. To answer them, we introduce several metrics measuring the variability, complexity, granularity, and type of extensions. Based on the data obtained, we suggest alternative implementation techniques. The data we have collected can influence other research areas, such as language design and tool support.

 
GPCE 2009
Christian Kästner, Sven Apel, and Martin Kuhlemann. A Model of Refactoring Physically and Virtually Separated Features. In Proceedings of the 8th ACM International Conference on Generative Programming and Component Engineering (GPCE), pages 157--166, New York, NY: ACM Press, October 2009. [ .pdf, acm, doi, bib ]

Physical separation with class refinements and method refinements à la AHEAD and virtual separation using annotations à la #ifdef or CIDE are two competing groups of implementation approaches for software product lines with complementary advantages. Although both groups have been mainly discussed in isolation, we strive for an integration to leverage the respective advantages. In this paper, we provide the basis for such an integration by providing a model that supports both, physical and virtual separation, and by describing refactorings in both directions. We prove the refactorings complete, such that every virtually separated product line can be automatically transformed into a physically separated one (replacing annotations by refinements) and vice versa. To demonstrate the feasibility of our approach, we have implemented the refactorings in our tool CIDE and conducted four case studies.

 
JOT 2009
Sven Apel, and Christian Kästner. An Overview of Feature-Oriented Software Development. Journal of Object Technology (JOT), 8(5):49--84, July/August 2009. Refereed Column. [ .pdf, http, bib ]

Feature-oriented software development (FOSD) is a paradigm for the construction, customization, and synthesis of large-scale software systems. In this survey, we give an overview and a personal perspective on the roots of FOSD, connections to other software development paradigms, and recent developments in this field. Our aim is to point to connections between different lines of research and to identify open issues.

 
ICSE 2009
Thomas Thüm, Don Batory, and Christian Kästner. Reasoning about Edits to Feature Models. In Proceedings of the 31st International Conference on Software Engineering (ICSE), pages 254--264, Los Alamitos, CA: IEEE Computer Society, May 2009. [ .pdf, bib ]

Features express the variabilities and commonalities among programs in a software product line (SPL). A feature model defines the valid combinations of features, where each combination corresponds to a program in an SPL. SPLs and their feature models evolve over time. We classify the evolution of a feature model via modifications as refactorings, specializations, generalizations, or arbitrary edits. We present an algorithm to reason about feature model edits to help designers determine how the program membership of an SPL has changed. Our algorithm takes two feature models as input (before and after edit versions), where the set of features in both models are not necessarily the same, and it automatically computes the change classification. Our algorithm is able to give examples of added or deleted products and efficiently classifies edits to even large models that have thousands of features.
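In propositional terms, the classification can be sketched roughly as follows (notation mine; P(M) stands for the set of products, i.e., valid feature combinations, of a feature model M, with M the model before and M' the model after the edit):

    % Requires amsmath for the aligned environment.
    \[
    \begin{aligned}
      \text{refactoring:}    &\quad P(M') = P(M) \\
      \text{specialization:} &\quad P(M') \subset P(M) \\
      \text{generalization:} &\quad P(M') \supset P(M) \\
      \text{arbitrary edit:} &\quad \text{otherwise (some products added and some removed)}
    \end{aligned}
    \]

The paper's algorithm computes this classification automatically, even when the feature sets of the two models differ.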

 
ICSE 2008
Christian Kästner, Sven Apel, and Martin Kuhlemann. Granularity in Software Product Lines. In Proceedings of the 30th International Conference on Software Engineering (ICSE), pages 311--320, New York, NY: ACM Press, May 2008. Most Influential Paper Award at SPLC'19. [ .pdf, acm, doi, epub, bib ]

Building software product lines (SPLs) with features is a challenging task. Many SPL implementations support features with coarse granularity—e.g., the ability to add and wrap entire methods. However, fine-grained extensions, like adding a statement in the middle of a method, either require intricate workarounds or obfuscate the base code with annotations. Though many SPLs can and have been implemented with the coarse granularity of existing approaches, fine-grained extensions are essential when extracting features from legacy applications. Furthermore, some existing SPLs could also benefit from fine-grained extensions to reduce code replication or improve readability. In this paper, we analyze the effects of feature granularity in SPLs and present a tool, called Colored IDE (CIDE), that allows features to implement coarse-grained and fine-grained extensions in a concise way. In two case studies, we show how CIDE simplifies SPL development compared to traditional approaches.

 
SPLC 2007
Christian Kästner, Sven Apel, and Don Batory. A Case Study Implementing Features Using AspectJ. In Proceedings of the 11th International Software Product Line Conference (SPLC), pages 223--232, Los Alamitos, CA: IEEE Computer Society, September 2007. [ .pdf, bib ]

Software product lines aim to create highly configurable programs from a set of features. Common belief and recent studies suggest that aspects are well-suited for implementing features. We evaluate the suitability of AspectJ with respect to this task by a case study that refactors the embedded database system Berkeley DB into 38 features. Contrary to our initial expectations, the results were not encouraging. As the number of aspects in a feature grows, there is a noticeable decrease in code readability and maintainability. Most of the unique and powerful features of AspectJ were not needed. We document where AspectJ is unsuitable for implementing features of refactored legacy applications and explain why.

 
 

more...

Copyright Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

 
 
 

FOSD Cool Wall

The cool wall was created and evolved during the yearly FOSD meetings (see fosd.net). With it, we encourage researchers to look for better tool names. Up to 2012, the listing was completely subjective (feel free to complain). Starting in 2013, we voted, and in 2013 and 2014 we even gave out a Coolest Tool Name award. Unfortunately, the 2014 listing is incomplete, as the photos of the votes were lost.

Cool Wall 2016
 
 
 

Private Interests

Juggling, Cooking, Board games, Concerts