An interesting architecture-dependent autopkgtest regression
More than two years after I last uploaded the purity-off Debian package, its autopkgtest (the Debian distribution-wide continuous integration system) started failing on arm64, and only on this architecture.
The failing test is very simple: it prints a long stream of "y" or "n" characters to purity(6)'s standard input and then checks the output for the expected result.
While investigating the live autopkgtest system, I figured out that:
- The paging function of purity(6) became enabled, but only on arm64!
- Paging consumed more "y" characters from standard input than the 5000 provided by the test script.
- The paging code does not consider EOF a valid input, so at that point it would start asking again and printing "--- more ---" forever in a tight loop.
- And this output, being redirected to a file, would fill the file system where the autopkgtest is running.
I did not have time to verify this, but I have noticed that the 25-year-old purity(6) program calls TIOCGWINSZ to determine the screen length and then uses the result in the answer buffer without checking if the ioctl(2) call returned an error, which it obviously does in this case because standard input is not a console but a pipe. So my theory is that paging is enabled because the undefined result of the failed ioctl has changed, and only on this architecture.
Since I do not want to fix purity(6) right now, I have implemented the workaround of printing many more "y" characters as input.
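The idea of the workaround, as a minimal sketch (the real test script and purity invocation differ):

# provide far more answers than paging could ever consume
yes | head -n 100000 | purity > output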
CISPE's call for new regulations on VMware
A few days ago CISPE, a trade association of European cloud providers, published a press release complaining about the new VMware licensing scheme and asking for regulators and legislators to intervene.
But VMware does not have a monopoly on virtualization software: I think that asking regulators to interfere is unnecessary and unwise, unless, of course, they wish to question the entire foundations of copyright. Which, on the other hand, could be an intriguing position that I would support...
I believe that over-reliance on a single supplier is a typical enterprise risk: in the past decade some companies have invested in developing their own virtualization infrastructure using free software, while others have decided to rely entirely on a single proprietary software vendor.
My only big concern is that many public sector organizations will continue to use VMware and pay the huge fees designed by Broadcom to extract the maximum amount of money from their customers. However, it is ultimately the citizens who pay these bills, and blaming the evil US corporation is a great way to avoid taking responsibility for these choices.
"Several CISPE members have stated that without the ability to license and use VMware products they will quickly go bankrupt and out of business."
Insert here the Jeremy Clarkson "Oh no! Anyway..." meme.
Extending access to the systemd RuntimeDirectory with a POSIX ACL
inn2 uses ephemeral UNIX domain sockets in /run/news/ to communicate with the ctlinnd program. Since the directory is only writeable by the "news" user, other unprivileged users are not able to use the command.
I solved this by extending the inn2.service systemd unit with a drop-in file which uses setfacl to give access to my user "md" to the RuntimeDirectory created by systemd. This is the content of /etc/systemd/system/inn2.service.d/md-ctlinnd.conf:
[Service]
# innd will change the permissions of /run/news/ when started: if the
# directory were not created with mode 0775 now, that would change the ACL mask.
RuntimeDirectoryMode=0775
# allow user md to run ctlinnd(8), which creates sockets in /run/news/
ExecStartPost=/usr/bin/setfacl --modify user:md:rwx $RUNTIME_DIRECTORY
The non-obvious issue here is that on startup the innd daemon changes the directory permissions in a way which sets a more restrictive (non group-writeable) ACL mask, and this would make the newly created user ACL ineffective. The solution is to create the directory group-writeable from the start.
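If everything worked, the ACL should look something like this (a sketch of the expected getfacl output, which may differ depending on what innd does at startup):

getfacl /run/news/
# file: run/news/
# owner: news
# group: news
user::rwx
user:md:rwx
group::rwx
mask::rwx
other::r-x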
(Beware: this creates a trivial privilege escalation from md to news.)
On having a track record in operating systems development
Now that Debian 12 has been released with proprietary firmware on the official media, non-optional merged-/usr and systemd adopted by everybody, I want to take a moment to list, not without some pride, a few things that I was right about over the last 20 years:
Accepting the obvious solution about firmware took 18 years. My work on the merged-/usr transition started in 2014, and the first discussions about replacing sysvinit date from 2011. The general adoption of udev (and dynamic device names, and persistent network interface names...) took less time in comparison and caused no large-scale flame wars, since people could enable it at their own pace. But it required countless little debates in the Debian Bug Tracking System: I still remember the people insisting that they would never use this newfangled dynamic /dev/, or complaining about their beloved /dev/cdrom symbolic link and persistent network interface names.
So follow me for more rants about inevitable technologies.
Installing Debian 12 on a Banana Pi M5
I recently bought a Banana Pi BPI-M5, which uses the Amlogic S905X3 SoC: these are my notes about installing Debian on it.
While this SoC is supported by the upstream U-Boot it is not supported by the Debian U-Boot package, so debian-installer does not work. Do not be fooled by the DTB file for this exact board being distributed with debian-installer: all DTB files are, and this does not mean that the board is supposed to work.
As I documented in #1033504, the Debian kernels are currently missing some patches needed to support the SD card reader.
I started by downloading an Armbian Banana Pi image and booted it from an SD card. From there I partitioned the eMMC, which always appears as /dev/mmcblk1:
parted /dev/mmcblk1
(parted) mklabel msdos
(parted) mkpart primary ext4 4194304B -1
(parted) align-check optimal 1

mkfs.ext4 /dev/mmcblk1p1
Make sure to leave enough space before the first partition, or else U-Boot will overwrite it: as is common for many ARM SoCs, U-Boot lives somewhere in the gap between the MBR and the first partition.
I looked at Armbian's /usr/lib/u-boot/platform_install.sh and installed U-Boot by manually copying it to the eMMC:
dd if=/usr/lib/linux-u-boot-edge-bananapim5_22.08.6_arm64/u-boot.bin of=/dev/mmcblk1 bs=1 count=442
dd if=/usr/lib/linux-u-boot-edge-bananapim5_22.08.6_arm64/u-boot.bin of=/dev/mmcblk1 bs=512 skip=1 seek=1
Beware: Armbian's U-Boot 2022.10 is buggy, so I had to use an older image.
I did not want to install a new system, so I copied over my old Cubieboard install:
mount /dev/mmcblk1p1 /mnt/
rsync -xaHSAX --delete --numeric-ids root@old-server:/ /mnt/ --exclude='/tmp/*' --exclude='/var/tmp/*'
Since the Cubieboard has a 32-bit CPU and the Banana Pi requires an arm64 kernel, I enabled the architecture and installed a new kernel:
dpkg --add-architecture arm64
apt update
apt install linux-image-arm64
apt purge linux-image-6.1.0-6-armmp linux-image-armmp
At some point I will cross-grade the entire system.
Even though ttyS0 exists, it is not the serial console, which appears as ttyAML0 instead. Nowadays systemd automatically starts a getty if the serial console is enabled on the kernel command line, so I just had to disable the old manually-configured getty:
systemctl disable serial-getty@ttyS0.service
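For reference, this is the sort of kernel command line setting that enables the serial console (the baud rate here is my assumption, 115200 being the usual value on these boards):

console=ttyAML0,115200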
I wanted to have a fully working flash-kernel, so I used Armbian's boot.scr as a template to create /etc/flash-kernel/bootscript/bootscr.meson and then added a custom entry for the Banana Pi to /etc/flash-kernel/db:
Machine: Banana Pi BPI-M5
Kernel-Flavors: arm64
DTB-Id: amlogic/meson-sm1-bananapi-m5.dtb
U-Boot-Initrd-Address: 0x0
Boot-Initrd-Path: /boot/uInitrd
Boot-Initrd-Path-Version: yes
Boot-Script-Path: /boot/boot.scr
U-Boot-Script-Name: bootscr.meson
Required-Packages: u-boot-tools
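After this, flash-kernel can be run by hand to verify that it picks up the new entry and generates /boot/boot.scr and /boot/uInitrd as configured above:

flash-kernel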
All things considered, I do not think that I would recommend that Debian users buy Amlogic-based boards, since there are many other better-supported SoCs.
I replaced grub with systemd-boot
To be able to investigate and work on the measured boot features, I have switched from grub to systemd-boot (sd-boot).
This initial step is optional, but it is useful because this way /etc/kernel/cmdline will become the new place where the kernel command line can be configured:
. /etc/default/grub
echo "root=/dev/mapper/root $GRUB_CMDLINE_LINUX $GRUB_CMDLINE_LINUX_DEFAULT" > /etc/kernel/cmdline
Do not forget to set the correct root file system there, because initramfs-tools does not support discovering it at boot time using the Discoverable Partitions Specification.
The installation has been automated since systemd version 252.6-1, so installing the package has the effect of installing sd-boot in the ESP, enabling it in the UEFI boot sequence and then creating boot loader entries for the kernels already installed on the system:
apt install systemd-boot
If needed, it can be manually installed again just by running bootctl install.
I like to show the boot menu by default, at least until I am more familiar with sd-boot:
bootctl set-timeout 4
Since other UEFI binaries can be easily chainloaded, I am also going to keep around grub for a while, just to be sure:
cat <<END > /boot/efi/loader/entries/grub.conf
title Grub
linux /EFI/debian/grubx64.efi
END
At this point sd-boot works, but I still had to enable secure boot. So far sd-boot has not been signed with a Debian key known to the shim bootloader, so I needed to create a Machine Owner Key (MOK), enroll it in UEFI and then use it to sign everything.
I dislike the complexity of mokutil and the other related programs, so after removing it and the boot shim I have decided to use sbctl instead. With it I easily created new keys, enrolled them in the EFI key store and then signed everything:
sbctl create-keys
sbctl enroll-keys

for file in /boot/efi/*/*/linux /boot/efi/EFI/*/*.efi; do
	sbctl sign -s $file
done
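To double-check the result, sbctl can also report the signature status of the files in the ESP:

sbctl verify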
Since there is no sbctl package yet, I need to make sure that the kernels installed in the future will also be signed automatically, so I have created a trivial script in /etc/kernel/install.d/ which automatically runs sbctl sign -s or sbctl remove-file.
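This is a minimal sketch of such a hook, assuming the systemd kernel-install layout where the kernel image is copied to <entry directory>/linux; the script name and structure are mine, not the original:

#!/bin/sh -e
# /etc/kernel/install.d/zz-sbctl.install (hypothetical name)
# kernel-install calls hooks as: <script> add|remove <version> <entry-dir> [<kernel> ...]
COMMAND="$1"
ENTRY_DIR="$3"

case "$COMMAND" in
add)    sbctl sign -s "$ENTRY_DIR/linux" ;;
remove) sbctl remove-file "$ENTRY_DIR/linux" ;;
esac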
The Debian wiki SecureBoot page documents how to do this with mokutil and sbsigntool, but I think that sbctl is much friendlier.
Since I am not using the boot shim, I also had to set DisableShimForSecureBoot=true in /etc/fwupd/uefi_capsule.conf to make firmware updates work automatically.
As a bonus, I have also added to the boot menu the excellent Debian-based GRML live distribution. Since sd-boot is not capable of loopback-mounting CD-ROM images like grub, I first had to extract the kernel and initramfs and copy them to the ESP:
mount -o loop /boot/grml/grml64-full_2022.11.iso /mnt/
mkdir /boot/efi/grml/
cp /mnt/boot/grml64full/* /boot/efi/grml/
umount /mnt/

cat <<END > /boot/efi/loader/entries/grml.conf
title GRML
linux /grml/vmlinuz
initrd /grml/initrd.img
options boot=live bootid=grml64full202211 findiso=/grml/grml64-full_2022.11.iso live-media-path=/live/grml64-full net.ifnames=0
END
As expected, after a reboot bootctl reports the new security features:
System:
     Firmware: UEFI 2.70 (Lenovo 0.4496)
Firmware Arch: x64
  Secure Boot: enabled (user)
 TPM2 Support: yes
 Boot into FW: supported

Current Boot Loader:
      Product: systemd-boot 252.5-2
     Features: ✓ Boot counting
               ✓ Menu timeout control
               ✓ One-shot menu timeout control
               ✓ Default entry control
               ✓ One-shot entry control
               ✓ Support for XBOOTLDR partition
               ✓ Support for passing random seed to OS
               ✓ Load drop-in drivers
               ✓ Support Type #1 sort-key field
               ✓ Support @saved pseudo-entry
               ✓ Support Type #1 devicetree field
               ✓ Boot loader sets ESP information
          ESP: /dev/disk/by-partuuid/1b767f8e-70fa-5a48-b444-cfe5c272d66e
         File: └─/EFI/systemd/systemd-bootx64.efi
...
Using a custom domain as the Mastodon identity
I just did the usual web search again, and I have verified that Mastodon still does not support managing multiple domains on the same instance, and that there is still no way to migrate an account to a different instance without losing all posts (and more?).
As much as I like the idea of a federated social network, open standards and so on, I do not think that it would be wise for me to spend time developing a social network identity on somebody else's instance which could disappear at any time.
I have managed my own email server since the '90s, but I do not feel that the system administration effort required to maintain a private Mastodon instance would be justified at this point: there is not even a Debian package! Mastodon either needs to become much simpler to maintain or become much more socially important, and so far it is neither. Also, it would be wasteful to use so many computing resources for a single-user instance.
While it is not ideal, for the time being I compromised by redirecting WebFinger requests for md@linux.it using this Apache configuration:
<Location /.well-known/host-meta>
  Header set Access-Control-Allow-Origin: "*"
  Header set Content-Type: "application/xrd+xml; charset=utf-8"
  Header set Cache-Control: "max-age=86400"
</Location>

<Location /.well-known/webfinger>
  Header set Access-Control-Allow-Origin: "*"
  Header set Content-Type: "application/jrd+json; charset=utf-8"
  Header set Cache-Control: "max-age=86400"
</Location>

# WebFinger (https://www.rfc-editor.org/rfc/rfc7033)
RewriteMap lc int:tolower
RewriteMap unescape int:unescape

RewriteCond %{REQUEST_URI} ^/\.well-known/webfinger$
RewriteCond ${lc:${unescape:%{QUERY_STRING}}} (?:^|&)resource=acct:([^&]+)@linux\.it(?:$|&)
RewriteRule .+ /home/soci/%1/public_html/webfinger.json [L,QSD]

# answer 404 to requests missing "acct:" or for domains != linux.it
RewriteCond %{REQUEST_URI} ^/\.well-known/webfinger$
RewriteCond ${unescape:%{QUERY_STRING}} (?:^|&)resource=
RewriteRule .+ - [L,R=404]

# answer 400 to requests without the resource parameter
RewriteCond %{REQUEST_URI} ^/\.well-known/webfinger$
RewriteRule .+ - [L,R=400]
This is the equivalent lighttpd configuration:
$HTTP["url"] =~ "^/\.well-known/" { setenv.add-response-header += ( "Access-Control-Allow-Origin" => "*", "Cache-Control" => "max-age=86400", ) } $HTTP["url"] == "/.well-known/host-meta" { mimetype.assign = ("" => "application/xrd+xml; charset=utf-8") } $HTTP["url"] == "/.well-known/webfinger" { mimetype.assign = ("" => "application/jrd+json; charset=utf-8") $HTTP["querystring"] =~ "(?:^|&)resource=acct:([^&]+)@linux\.it(?:$|&)" { alias.url = ( "" => "/home/soci/$1/public_html/webfinger.json" ) } else { alias.url = ( "" => "/does-not-exist" ) } }
Debian bookworm on a Lenovo T14s Gen3 AMD
I recently upgraded my laptop to a Lenovo T14s Gen3 AMD and I am happy to report that it works just fine with Debian/unstable using a 5.19 kernel.
The only issue is that some firmware files are still missing and I had to install them manually.
Updates are needed for the firmware-amd-graphics package (#1019847) for the Radeon 680M GPU (AMD Rembrandt) and for the firmware-atheros package (#1021157) for the Qualcomm NFA725A Wi-Fi card (which is actually reported as a NFA765).
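Until those packages are updated, the missing files can be copied by hand from the upstream linux-firmware repository; a sketch, where the exact directories are my assumption (check the file names reported by dmesg):

git clone https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
cp linux-firmware/amdgpu/yellow_carp_*.bin /lib/firmware/amdgpu/   # Rembrandt GPU (assumed names)
cp -r linux-firmware/ath11k/WCN6855 /lib/firmware/ath11k/          # Qualcomm Wi-Fi (assumed path)
update-initramfs -u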
s2idle (AKA "modern suspend") works too, and a ~10 second delay on resume was removed by setting iommu=pt on the kernel command line.
For improved energy efficiency it is recommended to switch from the acpi_cpufreq CPU frequency scaling driver to amd_pstate. Please note that so far it is not loaded automatically.
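One way to load it at boot, assuming the driver is built as a module and that nothing else claims the CPUs first:

echo amd_pstate >> /etc/modules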
As expected, fwupdmgr can update the system BIOS and the firmware of the NVMe device. Everybody should do it immediately, because there are major suspend bugs with BIOS releases earlier than 1.25.
Run an Ansible playbook in a remote chroot
Running a playbook in a remote chroot or container is not supported by Ansible, but I have invented a good workaround to do it anyway.
The first step is to install Mitogen for Ansible (ansible-mitogen in Debian) and then configure ansible.cfg to use it:
[defaults]
strategy = mitogen_linear
But everybody should use Mitogen anyway, because it makes Ansible much faster.
The trick to have Ansible operate in a chroot is to make it call a wrapper script instead of Python. The wrapper can be created manually or by another playbook, e.g.:
vars:
  - fsroot: /mnt

tasks:
  - name: Create the chroot wrapper
    copy:
      dest: "/usr/local/sbin/chroot_{{inventory_hostname_short}}"
      mode: 0755
      content: |
        #!/bin/sh -e
        exec chroot {{fsroot}} /usr/bin/python3 "$@"

  - name: Continue with stage 2 inside the chroot
    debug:
      msg:
        - "Please run:"
        - "ansible-playbook therealplaybook.yaml -l {{inventory_hostname}} -e ansible_python_interpreter=/usr/local/sbin/chroot_{{inventory_hostname_short}}"
This works thanks to Mitogen, which funnels all remote tasks inside that single call to Python. It would not work with standard Ansible, because it copies files to the remote system with SFTP and would do so outside of the chroot.
The same principle can also be applied to containers by changing the wrapper script, e.g.:
#!/bin/sh -e
exec systemd-run --quiet --pipe --machine={{container_name}} --service-type=exec /usr/bin/python3 "$@"
After the wrapper has been installed, you can run the real playbook by setting the ansible_python_interpreter variable, either on the command line, in the inventory or anywhere else that variables can be defined:
ansible-playbook therealplaybook.yaml -l {{inventory_hostname}} -e ansible_python_interpreter=/usr/local/sbin/chroot_{{inventory_hostname_short}}
My resignation from freenode
As it is now known, the freenode IRC network has been taken over by a Trumpian wannabe Korean royalty bitcoin millionaire. To make a long story short, the former freenode head of staff secretly "sold" the network to this person even if it was not hers to sell, and our lawyers have advised us that there is not much that we can do about it without some of us risking financial ruin. Fuck you Christel, lilo's life work did not deserve this.
What you knew as freenode after 12:00 UTC of May 19 will be managed by different people.
As I have no desire to volunteer under the new regime, this marks the end of my involvement with freenode. It started in 1999, when I encouraged the good parts of #linux-it to leave ircnet, and soon after I became senior staff. Even if I have not been very active recently, at this point I was the longest-serving freenode staff member and now I expect that I will hold this record forever.
The people that I have met on IRC, on freenode and other networks, have been and still are a very important part of my life, second only to the ones that I have known thanks to Usenet. I am not fine, but I know that the communities which I have been a part of are not defined by a domain name and will regroup somewhere else.
The current freenode staff members have resigned with me; these are some of their farewell messages:
Together we have created Libera.Chat, a new IRC network based on the same principles as the old freenode.