Debian's "secret" sauce
While Debian's "sauce" is not actually all that secret, it is not particularly well-known either, Samuel Henrique said at the start of his DebConf24 talk. A lot of software-engineering effort has been put in place by the distribution in order to create and maintain its releases, but "loads of people are not aware" of it. That may be because none of it is really documented in a central location that he can just point someone to. Recognizing that is what led him to give the talk; hopefully it will be a "first step toward" helping solve the problem.
Henrique said that he is a Brazilian and has been a Debian Developer since 2018. He mostly works in the security tools packaging team, but also maintains some packages outside of that team, including curl and rsync. In addition, he mentors Debian newcomers, mostly on packaging tasks. He works on the Amazon Linux security team. He noted that his web site contains links to the slides from the talk; a WebM video is also available. [Update: A YouTube video with subtitles in English and Portuguese is also available.]
Underpinnings
![Samuel Henrique](https://static.lwn.net/images/2024/debconf-henrique-sm.png)
A distribution is a project set up to distribute software; as part of that work, it can choose defaults or tweak how a software package behaves to make it easier to use, for example. But it is important not to ship bugs or other problems, including security issues, in those packages. Some distributions try to stay as close as possible to the upstream code, but "in Debian, we like to improve things" by "applying our own judgment" to the code.
Beyond that, the project needs to support whatever it is that gets shipped. That means there is a need to put procedures in place to "make sure we are following up with reported issues" and fixing them. At all times, the project needs to be supporting the current Debian stable release while developing the next one. "We have to do both things at the same time, and so that is a constant process."
The Debian social contract aligns the project in an "ideological manner"; it provides a definition of "free software", for example. The Debian constitution provides a framework for how the project operates: how it apportions power, how it makes its decisions, and so on. On a technical level, the policy manual describes how packages should (and should not) behave, as well as how they should be put together. And, finally, the developer's reference gives recommendations to packagers on how to follow the policies and describes tools that can help. Those documents help create a "tight organization where we are all sharing the same goals, following the same rules, roughly speaking", which serves as a basis for the work that the project does.
Forms of Debian
There are multiple releases and repositories that the project works with, Henrique said. For example, a Debian developer who wants to try something out might push it to the Debian experimental repository, which typically only has a few packages. Experimental is not a full release; it has to be enabled on a system running Debian unstable, after which packages from experimental can be installed.
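In practice, enabling experimental is just a matter of adding another APT source; the snippet below is a minimal sketch (the file path is illustrative). Because the archive is marked as not automatic, packages from it are only installed when explicitly requested:

```
# /etc/apt/sources.list.d/experimental.list (hypothetical path)
deb http://deb.debian.org/debian experimental main
```

Installing a single package from it would then look like `apt install -t experimental curl`.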
Unstable, also known as "sid", is a full release, so it can be installed on a system. It is a rolling release that is "constantly being updated" and is not officially supported, so "you should not be installing this on your production servers". Debian testing is also a rolling release, but it is "more stable than Debian unstable".
Finally, there is Debian stable, "which is what you should be running on production servers". There is also oldstable, the previous stable release that is still being supported. The backports repository provides "newer packages for users of Debian stable"; it is meant to give users flexibility, but it is "not officially supported" as a release.
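Backports works the same way at the APT level; a plausible configuration for a system running Debian 12 ("bookworm"), where the suite name tracks the current stable release, would be:

```
# /etc/apt/sources.list.d/backports.list (hypothetical path)
deb http://deb.debian.org/debian bookworm-backports main
```

As with experimental, packages only come from backports when explicitly requested, e.g. `apt install -t bookworm-backports rsync`.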
Architectures
Debian supports 20 architectures: "nine of them are official, 11 of them are not official". Being official means that all of the packages in stable and oldstable are built for the architecture, as are the packages in unstable, testing, and experimental. The non-official architectures only get packages built for unstable and experimental. The list of architectures is "constantly changing"; the Debian release team determines which architectures are official or not before each new stable release. That decision generally comes down to a question of long-term support for an architecture, he said: will there be sufficient infrastructure to build the packages and enough people to help support them?
Henrique put up a screenshot from the buildd.debian.org web site, which showed the build status of the coreutils package on sid for all of the architectures. It shows green for a successful build and red for a failure; only hurd-i386 was failing, both currently and in his screenshot. The page shows the logs of the build failure; the results of earlier builds and of other releases can also be seen at the site.
Supporting multiple architectures matters, he said, because it finds bugs. Currently, Debian supports two architectures with non-Linux kernels, both for GNU Hurd on x86 systems, though it used to also support kFreeBSD. It supports both big- and little-endian systems, as well as 32- and 64-bit architectures; there is also diversity in the size of C data types. All of that is summarized on the Debian wiki. That diversity makes it easier to find problems in upstream software, even when the developer does not have access to, say, a big-endian system to test with. "So if your software is packaged for Debian, you're at least going to know if it runs or not"; maintainers may care about that or not, but they will at least know.
Henrique pointed to a fix to the tests for GoAWK, which is an AWK implementation in Go, when it is built on 32-bit systems. It had been packaged for Debian and entered the distribution as part of the unstable release when the problem was discovered; the Debian maintainer reported it upstream and developed a fix. Debian was apparently the first to build and test GoAWK on 32-bit systems.
Another example is a bug he reported in the Aircrack-ng WiFi-security tool. It turned out to be a problem building and running the tests on big-endian systems. This kind of thing has happened before, he said, and is fairly easy to spot because all five big-endian architectures suddenly start reporting build/test problems.
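To make that failure mode concrete, here is a minimal, hypothetical C example of the kind of bug that only the big-endian buildds catch: code that reads a little-endian on-disk field by copying raw bytes into a host integer works on x86 but returns garbage on, say, s390x. All of the names and the field layout are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Broken: interprets the on-disk bytes in host byte order, so the
 * result differs between little- and big-endian machines. */
static uint32_t read_len_naive(const unsigned char *buf)
{
    uint32_t v;
    memcpy(&v, buf, sizeof(v));
    return v;
}

/* Portable: assembles the little-endian value byte by byte. */
static uint32_t read_len_portable(const unsigned char *buf)
{
    return (uint32_t)buf[0] | ((uint32_t)buf[1] << 8) |
           ((uint32_t)buf[2] << 16) | ((uint32_t)buf[3] << 24);
}

int main(void)
{
    const unsigned char field[4] = { 0x2a, 0, 0, 0 }; /* 42, little-endian */

    /* On a big-endian host, the naive version prints 704643072
     * (0x2a000000) instead of 42. */
    printf("naive: %u, portable: %u\n",
           read_len_naive(field), read_len_portable(field));
    return 0;
}
```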
Repositories
Henrique created a diagram to help attendees understand how packages flow into the development repositories: experimental, unstable, and testing. The Debian developer in the upper left of the diagram has four different places where they can send a new or updated package. They can put the package directly into the experimental or unstable repositories, or they can send updates that are bound for testing to either the release team, for regular updates, or the security team, for security fixes.
He noted that the experimental repository was not further connected in the diagram, "it goes nowhere", which makes sense because of how it is used. The repository makes it "very easy to test things". For example, if he wanted to enable a new architecture for a package and make it available to others for testing, he could upload it to experimental; that will also cause some of the automated testing to happen, which will generate useful reports. "I get a preview of what would have happened if I sent this package to unstable."
It is easy to revert changes in experimental, as the package can simply be removed. In addition, successful changes can be pushed to unstable, which allows having two variants of the same package available at the same time. The curl packagers recently used that ability when the TLS backend was being changed from OpenSSL to GnuTLS; the GnuTLS version stayed in experimental for a month or so, while the OpenSSL version was updated several times in unstable.
The unstable repository does not provide a supported release, but it does have all of the packages available, unlike experimental. It is "installed and used by expert users", though; since it is the first place that new packages show up, a lot of developers make use of it, he said. Packages in unstable get full coverage from the testing and QA infrastructure. It is "important not to break things too badly" because that risks breaking other packages; for risky changes, experimental should be used instead.
Packages in unstable automatically migrate to testing "after some rules are followed"; the package needs to be passing its tests and to spend some time in unstable before it gets promoted. While testing is not supported either, it is generally much more stable than unstable, since the packages there are bound for the next Debian stable release. There were some recent problems with the 64-bit time_t transition that affected testing, however, which is the kind of thing "that happens every 20 years, maybe"; that shows that testing can still break, but it is "quite stable" overall.
Unlike experimental or unstable, testing has an installer available, so that the installer team can test it before releasing; that means testing can be installed from scratch. He recommends testing for non-server use cases, desktops for example. "I believe it is as stable as any other rolling release" due to all of the checks and procedures that the distribution has put in place.
Stable
When thinking about Debian stable, there are two parts to consider, Henrique said: its creation and its maintenance. The creation happens every two years, using a snapshot of testing; once the release is created, packages are not updated the way they are for unstable or testing. Users running stable are looking for something that is predictable, not something that is bug-free; "the point is there won't be ten new bugs showing [up] each day".
But there needs to be a time when packages cannot freely flow from unstable to testing, in order to stabilize what will become the next stable release. The release team has a freeze process that changes the ability of packages to migrate to testing. The requirements for package migration get stricter as the freeze progresses through its phases; in the final phase, any changes must be manually reviewed and approved by the release team.
The last freeze, for the Debian 12 ("bookworm") release, lasted around six months. During the freeze, there is less activity in the unstable and testing repositories, because the focus is on fixing what is in testing; but updates that are bound for testing are still being pushed to unstable.
Some of the packages pushed to unstable during a freeze may not be bound for stable, however; if there is a need for a fix to the version of that package in testing, a different path must be taken. That is where the testing-proposed-updates and testing-security repositories (from the diagram) come into play; with manual review from the release or security team, patches can go into those repositories and then to testing from there.
Once there has been a stable release, though, the path for package updates also requires manual review by one of the teams, depending "on the nature of the fix", Henrique said. Critical security fixes, such as for a CVE, might go directly into stable, but others will "spend baking time in the proposed-updates" repository; the normal requirement is that packages spend a week in proposed-updates before moving to stable. Periodically, the release team will pause migrations from proposed-updates to stable in order to create a point release.
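Users who want to help test those staged fixes can enable proposed-updates as just another APT source; a plausible line for a bookworm system (following the usual archive layout) would be:

```
# Hypothetical sources.list line; not needed on a normal stable system
deb http://deb.debian.org/debian bookworm-proposed-updates main
```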
Tools
There are "a million tools in Debian
" that can be used for doing
various kinds of testing, such as for continuous integration (CI) or
general QA. He chose to focus on a few of those in his talk, but all of
them are freely available; "anybody can just query all of the build logs
of Debian packages, identify a pattern of something that's wrong, and
provide patches
" to fix the problems found. For those who are curious,
the Debian QA group wiki page
covers the full set of tools and the processes the group uses.
The dh_auto_test tool is hooked into the package-building system and tries to run the upstream tests once the package is built. It can detect things like test targets in makefiles and run them automatically; if it cannot determine how to run the tests, it can be configured to run any tests of interest. It is meant to confirm that the package works in the Debian environment, on all of its architectures, and with the build flags (including hardening options) that the distribution uses. Even though the upstream project is also running its tests, the dependencies that Debian uses may be different; "sometimes we spot issues" because of all of those differences.
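The hook point is the package's debian/rules file. The following is a minimal sketch of how a maintainer could override the autodetected behavior; the upstream "check" target is an assumption for illustration:

```make
#!/usr/bin/make -f
# Standard dh sequencer; dh_auto_test runs automatically as one of
# its steps after the build.
%:
	dh $@

# Hypothetical override: run a specific upstream target rather than
# whatever dh_auto_test would have detected.
override_dh_auto_test:
	$(MAKE) check
```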
Lintian does lots of different checks on packages to find "silly things, like spelling mistakes" or more serious problems like the presence of binary blobs without source files. It is hooked into the build process, and if it detects problems that are serious enough, the package will be rejected. His first contribution to the curl package was for a problem detected by lintian: curl was still using Python 2 in its tests after the distribution had switched to Python 3, which was easy to fix.
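Maintainers do not have to wait for the archive to reject an upload; lintian can be run locally against a freshly built package (the .changes filename here is illustrative):

```sh
# -i adds long explanations; -I also shows lower-severity "info" tags
lintian -i -I ../curl_8.9.1-1_amd64.changes
```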
Autopkgtest standardizes the test definitions for packages so that they can be run as part of the CI system. These are not build-time tests; they are used to run end-to-end tests of the packaged software. The autopkgtest scripts often include running the upstream tests, but additional tests are generally added; "you have a lot of freedom here" to add dependencies needed to test the software in various ways.
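The test definitions live under debian/tests in the source package. A minimal, hypothetical example for a packaged command-line tool might look like:

```
# debian/tests/control: one test plus its extra test-only dependencies
Tests: smoke
Depends: curl, ca-certificates
```

The debian/tests/smoke script named there would exercise the installed package, not the build tree, and signal failure with a non-zero exit status.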
While autopkgtest is the mechanism, debci is the service that runs the tests continuously; the results can be seen at ci.debian.net. It runs the tests "on a long list of architectures" for each change, though not on all of the architectures that the distribution supports. The main artifact that it produces is test reports, which can be used to determine whether a package should migrate from unstable to testing.
A given package will be tested, but there is more to it than that: its dependencies are also being tested, as are the packages that either depend on it or have tests that do. The intent is to find regressions where the previous version of the package was passing but the new one no longer is; so, when tests fail, the previous version is tested to see whether it also fails. If package A gets uploaded to unstable, for example, debci will look at which packages depend on it and find package B; it will then run the tests for B using both the old and new versions of A and compare the results. All of that can be seen on tracker.debian.org on a per-package basis, such as for the glibc package.
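Developers can run the same tests locally before uploading. One plausible invocation uses the QEMU runner that ships with autopkgtest (the image name is illustrative):

```sh
# Build a throwaway sid test image, then run curl's tests inside it
autopkgtest-build-qemu sid autopkgtest-sid-amd64.img
autopkgtest curl -- qemu autopkgtest-sid-amd64.img
```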
He gave two examples of where this process finds bugs that might otherwise have gone unnoticed. A 2021 test failure for the aeskeyfind utility, which is a forensic tool to find AES keys in memory, caught a problem that would otherwise have been silent: the tool was relying on undefined behavior that changed with a different GCC version. Since aeskeyfind is unmaintained, other distributions may also have the problem, though Henrique has tried to get the word out about it.
The other was a failure of the UDPTunnel test suite due to a change in the Nmap port scanner. The problem was a hidden regression in Nmap that was detected by autopkgtest and debci, which resulted in an Nmap bug report and a subsequent fix.
Salsa CI is one of his favorite tools; it is based on Salsa, the Debian GitLab instance, and helps "make sure our packages are all nice and stable". The Salsa CI team provides recipes for tests that run on individual commits for packages. He showed a Salsa CI page for curl, which lists a bunch of different tests that were run, including build tests, autopkgtest, a test for possibly missing build flags, such as for hardening, a build-reproducibility test, and more. There were a few other tools he briefly mentioned, including one to ensure that there are no multi-arch problems, and Janitor, which proactively scans source repositories and sends merge requests with suggested improvements.
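Hooking a package up to those recipes is typically a one-file change: a debian/salsa-ci.yml that includes the team's shared pipeline definition. This sketch follows the Salsa CI team's documented include path, though the exact URL should be treated as an assumption:

```yaml
# debian/salsa-ci.yml: pull in the shared Salsa CI pipeline recipes
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
```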
An audience member asked about what other distributions do with regard to QA. Henrique said that he knew a bit about others, including Fedora, which has some equivalent systems and processes, but thought that Debian is "the best one doing this today". With that, the session ran out of time. Overall, he provided a nice whirlwind tour of Debian development and maintenance in his presentation, and helped show the recipe for the sauce.
[I would like to thank the Linux Foundation, LWN's travel sponsor, for its assistance in visiting Busan for DebConf24.]
| Index entries for this article | |
|---|---|
| Conference | DebConf/2024 |
Posted Oct 15, 2024 17:17 UTC (Tue)
by sthibaul (✭ supporter ✭, #54477)
[Link]
Additionally, "unreleased" is specific to non-release archs where arch-specific packages and temporary fixes can be uploaded
Posted Oct 16, 2024 13:12 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link]
It's only unstable and experimental; there is no testing either, which is due to the lack of a Britney instance in Debian Ports.
openSUSE
Posted Oct 14, 2024 19:50 UTC (Mon)
by ceplm (subscriber, #41334)
[Link] (25 responses)
And yes, we have similar experience to Debian: by running tests on various platforms and versions (including various IBM and ARM hardware), we constantly discover many platform-related issues that are hidden by run-of-the-mill free CIs like GitHub Actions running only one version of Ubuntu on x86_64. And, for example, yes, the big-endian/little-endian difference is still an issue after all these years (and yes, there is still big-endian hardware around).
Posted Oct 15, 2024 3:43 UTC (Tue)
by azumanga (subscriber, #90158)
[Link] (12 responses)
Yes, old hardware exists, but to be honest, as an open-source developer, at this point I'm not sure it's worth my time, unless someone is willing to pay me to fix it (and no one is).
Time is limited, and while some people enjoy maintaining their big-endian CPU systems (and they can enjoy that!), is it really worth developers' limited time to file bugs against hardware they mostly do not, and never will, have access to? How many people are actually installing, and using, random packages on big-endian systems, compared to the amount of work spent on testing and maintaining?
Last time I got a big-endian bug reported by Debian, I spent half a day trying to get a big-endian Debian running in QEMU with working networking and, after following three different out-of-date and broken guides, gave up. Looking around the openSUSE wiki, and some googling, I couldn't find any simple advice for getting big-endian openSUSE working easily either.
Posted Oct 15, 2024 5:17 UTC (Tue)
by ceplm (subscriber, #41334)
[Link] (10 responses)
Well, all these bugs are clearly bugs. Some developers care about those.
Posted Oct 15, 2024 6:39 UTC (Tue)
by azumanga (subscriber, #90158)
[Link] (7 responses)
https://www.ibm.com/support/pages/just-faqs-about-little-...
Posted Oct 15, 2024 6:41 UTC (Tue)
by azumanga (subscriber, #90158)
[Link]
https://developer.ibm.com/articles/l-power-little-endian-...
Where they say "However, customers running big endian distributions would be wise to begin planning their transitions to little endian distributions as their applications become available and time permits."
Posted Oct 16, 2024 13:14 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link] (5 responses)
AIX is still big-endian and so is IBM zSeries.
Posted Oct 18, 2024 17:56 UTC (Fri)
by anton (subscriber, #25547)
[Link] (4 responses)
And, of course, how to test. I have not yet had access to an s390x system. At least there is one AIX system on cfarm, as well as a number of SPARCs and (big-endian) Linux-ppc64 systems (in addition to Linux-ppc64le systems), and a big-endian MIPS64 running at the moment.
Sure, I take pride in my free software running on a wide range of hardware (especially when I compare it with the commercial competition), but I can fully understand anyone who thinks that the benefits are not worth the effort. If IBM (the only remaining company providing non-eoled big-endian hardware) wants us to port to big-endian, they need to provide more opportunities for testing on such hardware.
Posted Oct 18, 2024 19:38 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
Because counting the number of systems is actually a terrible metric?
An S390x may be just one system, but (and I don't really remember the details) some ISP many moons ago bought - iirc - just ONE mainframe to host some 15,000 customer VMs.
Likewise, in terms of system cost, I don't know how the cost of an S390x compares against a decent single-developer PC, but I'm sure they punch well above their weight in installed value, too.
Plus, as I seem to recall others saying, making sure your code is endian-clean seems to be a good tactic for finding genuine bugs ...
Cheers,
Wol
Posted Oct 19, 2024 6:50 UTC (Sat)
by anton (subscriber, #25547)
[Link]
> An S390x may be just one system, but (and I don't really remember the details) some ISP many moons ago bought - iirc - just ONE mainframe to host some 15,000 customer VMs.

So there is commercial interest in s390x. If IBM, the host ISP, or the customers of that ISP actually really want the upstreams to fix problems with portability to s390x, they can pay the upstreams to do it. Or they can do it themselves, and send patches to the upstreams. But what happens is that nobody even bothers to put an s390x machine on cfarm (which might induce some upstreams to do it for free). And sorry, some well-hidden (I had never read about it before) IBM community cloud that requires jumping through extra hoops puts extra hurdles in the way; I don't know if I will find the time to cross them, probably not.

> Likewise, in terms of system cost, I don't know how the cost of an S390x compares against a decent single-developer PC, but I'm sure they punch well above their weight in installed value, too.

Which leads back to the question of why an upstream developer should care. Even in the Debian popularity contest, the number of big-endian systems seems to be 134, about 0.05% of the counted population (about 245,000 systems):
hppa: 8
m68k: 2
mips: 5
powerpc: 50
ppc64: 37
s390x: 10
sparc: 2
sparc64: 22

> making sure your code is endian-clean seems to be a good tactic for finding genuine bugs

Not in my experience, and I don't see a reason why it should be.
Caring about big-endian only bugs
Posted Oct 19, 2024 10:57 UTC (Sat)
by farnz (subscriber, #17727)
[Link]
I care about big-endian only bugs for two reasons:
It is rare, in my experience, for a big-endian bug not to fall into one of those two categories; and the UB-lottery ones are especially useful, since my experience is that if I'm consistently winning the UB lottery with a given compiler version on little-endian platforms, I'll consistently lose it on big-endian platforms, whereas on little-endian platforms a compiler upgrade results in me sometimes losing and sometimes winning (e.g. changing optimization level, or putting in state dumps via printf, can cause the compiler to be unable to exploit the UB). And it's much easier to debug something that's consistently failing, even as I mutate the code to narrow down where the bug is, than to debug something where any change can be enough to stop me losing the UB lottery.
Posted Oct 15, 2024 9:10 UTC (Tue)
by dottedmag (subscriber, #18590)
[Link]
Sadly, not many developers say upfront what platforms and toolchains they are targeting, so they have to be asked, like the following: https://github.com/tink-crypto/tink-go/issues/19
Posted Oct 15, 2024 10:10 UTC (Tue)
by ballombe (subscriber, #9523)
[Link]
It is just that the OS standardized on little-endian. But there used to be both big-endian and little-endian Debian ports for arm (armel, armeb), mips (mips, mipsel), and powerpc64 (powerpc64, powerpc64el). So really this is not a hardware problem.
Posted Oct 15, 2024 4:11 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (6 responses)
I'd love to play with modern-ish BE hardware. But I think only SPARC is still holding out? Even IBM has surrendered: https://developer.ibm.com/articles/l-power-little-endian-...
Posted Oct 15, 2024 7:31 UTC (Tue)
by azumanga (subscriber, #90158)
[Link] (2 responses)
https://www.fujitsu.com/global/products/computing/servers...
So with Power going LE and SPARC going away, I think there is no consumer big-endian hardware left. I think the only remaining big-endian CPU at this point is the IBM Z series, which most of us will have no sensible way to access.
Posted Oct 15, 2024 17:27 UTC (Tue)
by hmh (subscriber, #3838)
[Link]
You'd usually run OpenWrt or Yocto on them, not Debian. We (Debian) don't do very well on small-RAM, small-flash devices like the 2-core, 4-thread BE MIPS inside my network router, where you go well out of your way to never write to the filesystem needlessly, for example.
But with my upstream hat on? I most definitely must support 32/64-bit and BE/LE hardware, and it is neither difficult nor troublesome to be endian-clean. 32/64 can be a lot harder. I guess this does not matter for high-end applications like most web browsers, or LLMs, or some classes of games, though. But if it would run well on a smaller ARM32 device, by being endian-clean it will also run on MIPS devices.
Posted Oct 16, 2024 13:18 UTC (Wed)
by glaubitz (subscriber, #96452)
[Link]
Solaris on SPARC is supported at least until 2034. Plus, there is also SPARC LEON, which is actively developed and maintained.
> So with Power going LE and SPARC going away, I think there is no consumer big-endian hardware left.
POWER has always been bi-endian, and AIX is actually still defaulting to big-endian on POWER.
> I think in the future the only remaining big-endian CPU at this point is the IBM Z series, which most of us will have no sensible way to access.
s390x can be emulated by QEMU, so it's easy to get access to big-endian Linux. Plus, there are cloud-based s390x solutions which community members can use.
Posted Oct 15, 2024 21:46 UTC (Tue)
by SamuelOPH (subscriber, #107804)
[Link] (2 responses)
I wish I had dedicated some time to pointing out some things that other distros are good at.
This doesn't exactly answer your question, but I wanted to say that Gentoo is very good at identifying build regressions caused by newer gcc/glibc, as they often seem to be the first to compile something with a new toolchain. For context, Debian doesn't automatically rebuild everything on a new version of gcc/glibc, whereas Gentoo folks are constantly rebuilding their packages.
Posted Oct 16, 2024 13:43 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (1 responses)
This is an example where having different systems, with different organizing principles but sharing the same software application ecosystem, can provide testing benefit. Although I'm generally skeptical that the extra work and complexity of trying to make everything work on every combination of incompatibly organized systems is always worth it; even if you are technically finding bugs, the value of fixing some of those bugs isn't worth burning maintainers out. That's why I like the idea of Flatpak, where you can name and test an entire runtime as a single target ABI, reducing the combinations, allowing one to make simplifying assumptions, and batching bug reports to reduce developer context switching. Traditional distros obviously try to do this across the whole repo, but that's a _lot_ of software to try to shoehorn into a single runtime that then moves in lockstep across _all_ packages; having different versions of the ABI able to safely co-exist allows edge leaf applications to pick the pace of maintenance that best fits their developer resources.
Posted Oct 17, 2024 15:44 UTC (Thu)
by nim-nim (subscriber, #34454)
[Link]
If people were able to coordinate better they’d all test the same runtime. But, since the bugs are not created by the testing, that would mean this runtime would concentrate all the problems and be quite difficult to integrate.
Plus, when everyone tests the same thing, it is real easy to miss some problems until they become too big to ignore (and at that point they also become quite expensive to fix).
Posted Oct 15, 2024 21:56 UTC (Tue)
by SamuelOPH (subscriber, #107804)
[Link]
That sounds like the same thing we do for debci: tests are run against all packages that either depend on the package or have a test that depends on it. Does SUSE also have an equivalent of a migration reference (to identify false positives)?
Although distros have different tools and frameworks for integration tests, with some duplicated effort, the most important thing at the end of the day is that upstream gets bug reports and patches; that way, all distros benefit.
small correction regarding proposed-updates
Posted Oct 15, 2024 21:38 UTC (Tue)
by SamuelOPH (subscriber, #107804)
[Link]
> the normal requirement is that packages spend a week in proposed-updates before moving to stable. Periodically, the release team will pause migrations from proposed-updates to stable in order to create a point release.
I was probably not very clear, but what the release team will pause is actually the introduction of new packages to proposed-updates, not the migration itself. The migration from proposed-updates to stable only happens when a point release is created; it's not constantly running.
The part about packages spending baking time in proposed-updates is supposed to mean that it's at least one week, but it could be whatever time it's pending for the next stable point release, I hope that makes sense.
honorable mention of piuparts
Posted Oct 15, 2024 21:41 UTC (Tue)
by SamuelOPH (subscriber, #107804)
[Link]
It's run as its own service, and it's also one of the salsa-ci jobs.
For more information, see https://piuparts.debian.org/
Relevance of distributions for container images
Posted Oct 17, 2024 7:56 UTC (Thu)
by MKesper (subscriber, #38539)
[Link]
Especially the hunt for fixing of CVEs is never-ending work.