This special issue on “Automation in Software Performance Engineering” was initially conceived in the light of the 5th ACM/SPEC International Conference on Performance Engineering (ICPE 2014). We envisioned offering the authors of papers on automation an opportunity to extend and improve their work. Once launched, however, the Call for Papers attracted researchers and contributions from beyond the conference.

The field of Software Performance Engineering focuses on the quantitative evaluation of modern software systems (data-intensive, autonomous, ubiquitous, mobile, distributed, adaptive, embedded) and on the trade-offs between performance and other quality-of-service (QoS) attributes, such as security, reliability, and availability. Many performance engineering methods, methodologies, and techniques have been developed for system evaluation. However, their systematic application and automation in the modeling, evaluation, and assessment of industrial-scale software systems remains an open challenge, which this special issue tries to address. The issue presents four state-of-the-art works that address automation challenges in performance testing, cloud performance, and tools for software performance.

A Petri net tool for software performance estimation based on upper throughput bounds (doi: 10.1007/s10515-015-0186-2), by Ricardo J. Rodriguez, presents PeabraiN, an open-source tool that enables rapid prototyping of methods based on linear programming and Petri nets. In particular, the paper introduces the tool and concrete modules for estimating throughput upper bounds and for computing an optimal distribution of resources.
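As illustrative background for readers unfamiliar with this line of work (a sketch of the classical bound family, not necessarily PeabraiN's exact formulation), the linear-programming bound of Campos and Silva for timed Petri nets computes a lower bound $\Gamma$ on the mean inter-firing time of a transition, and hence an upper bound $1/\Gamma$ on its throughput:

$$\Gamma \;=\; \max_{Y}\; Y^{\top}\,\mathit{Pre}\,\delta \quad \text{subject to} \quad Y^{\top} C = 0,\;\; Y^{\top} m_{0} = 1,\;\; Y \geq 0,$$

where $C$ and $\mathit{Pre}$ are the incidence and pre-incidence matrices of the net, $m_{0}$ is the initial marking, and $\delta$ combines mean service times with visit ratios normalized to the transition of interest. Intuitively, the bound is determined by the slowest p-semiflow of the net, which is what makes the problem amenable to linear programming.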

Automated QoS-oriented cloud resource optimization using containers (doi: 10.1007/s10515-016-0191-0), by Yu Sun and colleagues, aims at automating the deployment of software in cloud environments, using containers to maximize QoS while minimizing cost.

Unit testing performance with Stochastic Performance Logic (doi: 10.1007/s10515-015-0188-0), by Petr Tuma and colleagues, addresses performance testing at the unit level. Concretely, the authors present Stochastic Performance Logic, a formalism for expressing performance requirements, together with interpretations that facilitate performance evaluation in the unit test context.
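To give a flavor of the kind of requirement such a formalism can express (this example is ours, not taken from the paper), a unit-level performance assertion might state that a method $A$ must run at most three times slower than a baseline method $B$, e.g. $T_{A}(n) \leq 3 \cdot T_{B}(n)$ for all measured workload sizes $n$. Rather than comparing single numbers, formulas of this kind are evaluated against samples of repeated measurements using statistical tests, which makes the verdicts robust to measurement noise.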

Continuous validation of performance test workloads (doi: 10.1007/s10515-016-0196-8), by Mark D. Syer and colleagues, proposes an automated approach to validate whether a performance test resembles the field workload and, if not, to determine how they differ.

We thank all the contributors for their work and the reviewers for their generous efforts. In our opinion, this issue is an important achievement for the Software Performance community.