systemd has not implemented age verification
This needs to be clear: systemd is under attack by a trolling campaign orchestrated by fascist elements. Nobody is forced to like or use systemd, but anybody who wants to pick a side should know the facts.
Recently, the free software Nazi bar crowd styling themselves as "concerned citizens" has tried to start a moral panic by saying that systemd is implementing age verification checks or that somehow it will require providing personally identifiable information.
This is a lie: the facts are simply that the systemd users database has gained an optional "date of birth" field, which the desktop environments may use or not as they deem appropriate. Of course there is no "identity verification" or requirement to provide any data, which in any case would not be shared beyond authorized local applications.
The multiple recent bills proposing that general purpose operating systems implement age verification mechanisms are often concerning, both from a social and a technical point of view, but they are not the topic being discussed here. For a long time I have opposed attempts to implement parental controls at the network level and argued that they should be managed locally, by parents on their own machines: I cannot see why I should outright reject an attempt to implement the infrastructure to do that.
If we want to keep age-appropriate controls out of the hands of centralized authorities, the alternative is giving families the means to manage it themselves: this is what this field enables. Whether desktop environments use it for parental controls, for birthday reminders, or for nothing at all, is their users' decision.
By the way, the original UNIX users database has allowed storing PII in the GECOS field since it was invented in the '70s. Similar fields are also specified by many popular LDAP schemas: adding such an optional field is consistent with the UNIX tradition.
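For example, storing this kind of PII has always been just a matter of using the standard tools (the user name here is made up):

# chfn(1) has been writing personal details to the users database for decades
chfn -f "Jane Doe" jane
# the GECOS field is the fifth colon-separated field of the record
getent passwd jane
# jane:x:1000:1000:Jane Doe:/home/jane:/bin/bash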
And while we are at it, let's also refute the other smear campaign started by the same people: the systemd project is not accepting "AI slop". What happened is that a documentation file for the benefit of coding agents was added to the repository. To be clear: agents still cannot submit merge requests. The file itself remarks that all contributions must be reviewed in detail by humans, and this is basically the same policy used by the Linux kernel.
Bypassing deep packet inspection with socat and HTTPS tunnels
Recently I found myself with a few hours to kill, but with the only available connectivity provided by an annoying firewall which would normally allow requests only to a few very specific web sites.

This post shows how to work around this kind of restriction by hiding SSH in an HTTPS connection, which can then be used as a SOCKS proxy to provide general connectivity. socat does all the hard work.
First, create two self-signed RSA key pairs, one for the client (bongo) and one for the server (attila):
domain=bongo.example.net
openssl req -x509 -newkey rsa:2048 -days 7300 \
    -subj /CN=$domain -addext "subjectAltName = DNS:$domain" \
    -keyout socat.key -nodes \
    -out socat.pem
Then, concatenate the public and private keys to create the file provided to the cert option, and use the public key as the file for the cafile option on the other side.
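For example, on the client the two files referenced by the configuration below could be assembled like this (a sketch, assuming the file names used in the rest of this post):

# bundle the client certificate and key for socat's cert= option
cat socat.pem socat.key > ~/.ssh/socat-bongo.pem
# the peer's bare certificate is used for the cafile= option
scp attila.example.net:socat.pem ~/.ssh/socat-attila.pem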
On the client side, if you would normally connect to attila.example.net, then you can add something like this to ~/.ssh/config:
Host httpstunnel-attila.example.net
ProxyCommand socat --statistics STDIO OPENSSL:attila.example.net:443,cert=$HOME/.ssh/socat-bongo.pem,cafile=$HOME/.ssh/socat-attila.pem,snihost=${SOCAT_SNI:-x.com}
DynamicForward 1080
Compression yes
HostKeyAlias attila.example.net
ControlMaster yes
ControlPath ~/.ssh/.control_attila.example.net_22_%r
The ProxyCommand directive uses socat to provide the connectivity which ssh will use over stdio instead of connecting to port 22 of the server.

The snihost option is enough to make many firewalls believe that this is an authorized HTTPS request.
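Once the connection is up it provides a general-purpose SOCKS proxy, e.g.:

# start the tunnel: the ssh traffic is carried by socat's TLS session to port 443
ssh httpstunnel-attila.example.net
# from another shell, point clients at the DynamicForward port
curl --proxy socks5h://localhost:1080 https://example.org/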
On the server side we use a simple systemd unit to start a forking instance of socat, which will accept and process requests from the client (and from random crawlers on the Internet: expect a lot of cruft in that log...):
[Unit]
Description=socat tunnel
After=network.target

[Service]
Type=exec
ExecStart=socat -ly OPENSSL-LISTEN:443,fork,reuseaddr,cert=%d/tlskey,cafile=%d/tlsca TCP:localhost:22
SuccessExitStatus=143
LoadCredential=tlskey:/etc/ssh/socat-attila.pem
LoadCredential=tlsca:/etc/ssh/socat-bongo.pem
Restart=on-abnormal
RestartSec=5s
DynamicUser=yes
PrivateDevices=yes
PrivateTmp=yes
ProtectClock=yes
ProtectControlGroups=yes
ProtectHome=yes
ProtectHostname=yes
ProtectKernelLogs=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectProc=invisible
ProtectSystem=strict
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
LockPersonality=yes
MemoryDenyWriteExecute=yes
NoNewPrivileges=yes
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
SystemCallArchitectures=native
SystemCallErrorNumber=EPERM
SystemCallFilter=@system-service
SystemCallFilter=~@resources
SystemCallFilter=~@privileged

[Install]
WantedBy=multi-user.target
Strong sandboxing is enabled, so the socat instance is confined with very limited privileges. An interesting point is the use of systemd credentials to provide the cryptographic keys, since it allows storing them in a part of the file system which is not accessible to the program. Advanced users can use this method to provide the keys from secure storage.
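The mechanism can be explored interactively; a quick sketch (the credentials directory path will vary):

# LoadCredential= exposes the file in an ephemeral service-private directory,
# advertised to the process in the $CREDENTIALS_DIRECTORY variable
systemd-run -p LoadCredential=tlskey:/etc/ssh/socat-attila.pem \
    --pipe sh -c 'ls -l $CREDENTIALS_DIRECTORY'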
Are the IXPs actually critical infrastructure?
Over the past year, the peering LAN of MIX-IT, the largest internet exchange point in Italy, carrying 2.5-3 Tbps of traffic, has experienced multiple outages. The longest lasted 34 hours, which I understand has set a new world record.
These incidents should clearly demonstrate to regulators that IXPs do not generally meet the criteria for critical infrastructure / essential entities, and this is a major issue now that all European countries are starting to implement the NIS 2 directive. Despite such a long outage, the sky has not fallen and nobody has even noticed it outside the circle of network operators commenting from the sidelines.
Indeed, in most European countries the local internet exchanges have not been automatically declared as critical infrastructure.
I have been saying for a long time that IXPs should shut down their network once per year just to show that everything will still work without them… This may be perceived as a joke, but the point is that while internet exchanges as a category are important and help to improve the quality of connectivity for their members, no single IXP is indispensable in itself for the Internet to work. During an outage the traffic will just move to other IXPs or transits and it is the responsibility of individual networks to implement appropriate redundancy.
On a related note, Italian IXPs exhibit a notable distinction when compared to most of their European counterparts: here three of the five largest internet exchanges also operate their own data centers and passive interconnection infrastructures (commonly known as meet-me rooms). While the peering LANs themselves are not critical infrastructure, the big meet-me rooms (in Italy those operated by MIX, NAMEX and Retelit) should unequivocally be considered as such.
On the use of SaaS in systems engineering
"We want to use a hyperscaler cloud because it is cheaper to delegate operating a scalable and redundant database to a hyperscaler" is something that can be debated from business and technical points of view.

"We want to use a hyperscaler cloud because our developers do not want to operate a scalable and redundant database" just means that you need to hire competent developers and/or system administrators.
We must stop normalizing the idea that the people whose only skill is gluing together a few dozen AWS services can continue calling themselves developers. We should also find a sufficiently demeaning name to refer to them...
An interesting architecture-dependent autopkgtest regression
More than two years after I last uploaded the purity-off Debian package, its autopkgtest (run by the Debian distribution-wide continuous integration system) started failing on arm64, and only on this architecture.
The failing test is very simple: it prints a long stream of "y" or "n" characters to purity(6)'s standard input and then checks the output for the expected result.
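Roughly, the test amounts to something like this (a reconstruction, not the literal test script; expected-output is a stand-in name):

# answer "y" 5000 times and compare the output with a known good run
yes | head -n 5000 | purity > output 2>&1
diff -u expected-output output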
While investigating the live autopkgtest system, I figured out that:
- The paging function of purity(6) became enabled, but only on arm64!
- Paging consumed more "y" characters from standard input than the 5000 provided by the test script.
- The paging code does not consider EOF a valid input, so at that point it would start asking again and printing "--- more ---" forever in a tight loop.
- And this output, being redirected to a file, would fill the file system where the autopkgtest is running.
I did not have time to verify this, but I have noticed that the 25-year-old purity(6) program calls TIOCGWINSZ to determine the screen length and then uses the result in the answer buffer without checking whether the ioctl(2) call returned an error. Which it obviously does in this case, because standard input is not a console but a pipe. So my theory is that paging is enabled because the undefined result of the ioctl has changed, and only on this architecture.
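The failure mode is easy to demonstrate from the shell, since stty(1) issues the same ioctl and on a pipe it fails instead of reporting a window size:

# stty queries the terminal attached to its standard input, here a pipe
echo | stty size
# stty: 'standard input': Inappropriate ioctl for device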
Since I do not want to fix purity(6) right now, I have implemented the workaround of printing many more "y" characters as input.
CISPE's call for new regulations on VMware
A few days ago CISPE, a trade association of European cloud providers, published a press release complaining about the new VMware licensing scheme and asking for regulators and legislators to intervene.
But VMware does not have a monopoly on virtualization software: I think that asking regulators to interfere is unnecessary and unwise, unless, of course, they wish to question the entire foundations of copyright. Which, on the other hand, could be an intriguing position that I would support...
I believe that over-reliance on a single supplier is a typical enterprise risk: in the past decade some companies have invested in developing their own virtualization infrastructure using free software, while others have decided to rely entirely on a single proprietary software vendor.
My only big concern is that many public sector organizations will continue to use VMware and pay the huge fees designed by Broadcom to extract the maximum amount of money from their customers. However, it is ultimately the citizens who pay these bills, and blaming the evil US corporation is a great way to avoid taking responsibility for these choices.
"Several CISPE members have stated that without the ability to license and use VMware products they will quickly go bankrupt and out of business."
Insert here the Jeremy Clarkson "Oh no! Anyway..." meme.
Extending access to the systemd RuntimeDirectory with a POSIX ACL
inn2 uses ephemeral UNIX domain sockets in /run/news/ to communicate with the ctlinnd program. Since the directory is only writeable by the "news" user, other unprivileged users are not able to use the command.
I solved this by extending the inn2.service systemd unit with a drop-in file which uses setfacl to give access to my user "md" to the RuntimeDirectory created by systemd. This is the content of /etc/systemd/system/inn2.service.d/md-ctlinnd.conf:
[Service]
# innd changes the permissions of /run/news/ when it starts: unless the
# directory is created with mode 0775 now, that would change the ACL mask.
RuntimeDirectoryMode=0775
# allow user md to run ctlinnd(8), which creates sockets in /run/news/
ExecStartPost=/usr/bin/setfacl --modify user:md:rwx $RUNTIME_DIRECTORY
The non-obvious issue here is that the innd daemon on startup will change the directory permissions in a way which sets a more restrictive (non group-writeable) ACL mask, and this would make the newly created user ACL ineffective. The solution is to create the directory group-writeable from the start.
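After restarting the service, getfacl(1) should show something like this (the exact entries depend on what innd sets):

getfacl /run/news/
# file: run/news/
# owner: news
# group: news
user::rwx
user:md:rwx
group::rwx
mask::rwx
other::r-x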
(Beware: this creates a trivial privilege escalation from md to news.)
On having a track record in operating systems development
Now that Debian 12 has been released with proprietary firmware on the official media, non-optional merged-/usr and systemd adopted by everybody, I want to take a moment to list, not without some pride, a few things that I was right about over the last 20 years:
Accepting the obvious solution about firmware took 18 years. My work on the merged-/usr transition started in 2014, and the first discussions about replacing sysvinit date from 2011. The general adoption of udev (and dynamic device names, and persistent network interface names...) took less time in comparison and caused no large-scale flame wars, since people could enable it at their own pace. But it required countless little debates in the Debian Bug Tracking System: I still remember the people insisting that they would never use this newfangled dynamic /dev/, or complaining about their beloved /dev/cdrom symbolic link and persistent network interface names.
So follow me for more rants about inevitable technologies.
Installing Debian 12 on a Banana Pi M5
I recently bought a Banana Pi BPI-M5, which uses the Amlogic S905X3 SoC: these are my notes about installing Debian on it.
While this SoC is supported by the upstream U-Boot, it is not supported by the Debian U-Boot package, so debian-installer does not work. Do not be fooled by seeing the DTB file for this exact board distributed with debian-installer: all DTB files are, and its presence does not mean that the board is supposed to work.
As I documented in #1033504, the Debian kernels are currently missing some patches needed to support the SD card reader.
I started by downloading an Armbian Banana Pi image and booted it from an SD card. From there I partitioned the eMMC, which always appears as /dev/mmcblk1:
parted /dev/mmcblk1
(parted) mklabel msdos
(parted) mkpart primary ext4 4194304B -1
(parted) align-check optimal 1

mkfs.ext4 /dev/mmcblk1p1
Make sure to leave enough space before the first partition, or else U-Boot will overwrite it: as is common for many ARM SoCs, U-Boot lives somewhere in the gap between the MBR and the first partition.
I looked at Armbian's /usr/lib/u-boot/platform_install.sh and installed U-Boot by manually copying it to the eMMC:
dd if=/usr/lib/linux-u-boot-edge-bananapim5_22.08.6_arm64/u-boot.bin of=/dev/mmcblk1 bs=1 count=442
dd if=/usr/lib/linux-u-boot-edge-bananapim5_22.08.6_arm64/u-boot.bin of=/dev/mmcblk1 bs=512 skip=1 seek=1
Beware: Armbian's U-Boot 2022.10 is buggy, so I had to use an older image.
I did not want to install a new system, so I copied over my old Cubieboard install:
mount /dev/mmcblk1p1 /mnt/
rsync -xaHSAX --delete --numeric-ids root@old-server:/ /mnt/ \
    --exclude='/tmp/*' --exclude='/var/tmp/*'
Since the Cubieboard has a 32-bit CPU and the Banana Pi requires an arm64 kernel, I enabled the new architecture and installed a new kernel:
dpkg --add-architecture arm64
apt update
apt install linux-image-arm64
apt purge linux-image-6.1.0-6-armmp linux-image-armmp
At some point I will cross-grade the entire system.
Even if ttyS0 exists, it is not the serial console, which appears as ttyAML0 instead. Nowadays systemd automatically starts a getty if the serial console is enabled on the kernel command line, so I just had to disable the old manually-configured getty:
systemctl disable serial-getty@ttyS0.service
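For reference, the automatic getty is triggered by a kernel command line argument like this one (the baud rate is just a common choice):

console=ttyAML0,115200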
I wanted to have a fully working flash-kernel, so I used Armbian's boot.scr as a template to create /etc/flash-kernel/bootscript/bootscr.meson and then added a custom entry for the Banana Pi to /etc/flash-kernel/db:
Machine: Banana Pi BPI-M5
Kernel-Flavors: arm64
DTB-Id: amlogic/meson-sm1-bananapi-m5.dtb
U-Boot-Initrd-Address: 0x0
Boot-Initrd-Path: /boot/uInitrd
Boot-Initrd-Path-Version: yes
Boot-Script-Path: /boot/boot.scr
U-Boot-Script-Name: bootscr.meson
Required-Packages: u-boot-tools
All things considered, I would not recommend that Debian users buy Amlogic-based boards, since there are many other better supported SoCs.
I replaced grub with systemd-boot
To be able to investigate and work on the measured boot features, I have switched from grub to systemd-boot (sd-boot).
This initial step is optional, but it is useful because this way /etc/kernel/cmdline will become the new place where the kernel command line can be configured:
. /etc/default/grub
echo "root=/dev/mapper/root $GRUB_CMDLINE_LINUX $GRUB_CMDLINE_LINUX_DEFAULT" \
    > /etc/kernel/cmdline
Do not forget to set the correct root file system there, because initramfs-tools does not support discovering it at boot time using the Discoverable Partitions Specification.
The installation has been automated since systemd version 252.6-1, so installing the package has the effect of installing sd-boot in the ESP, enabling it in the UEFI boot sequence and then creating boot loader entries for the kernels already installed on the system:
apt install systemd-boot
If needed, it could be manually installed again just by running bootctl install.
I like to show the boot menu by default, at least until I am more familiar with sd-boot:
bootctl set-timeout 4
Since other UEFI binaries can be easily chainloaded, I am also going to keep around grub for a while, just to be sure:
cat <<END > /boot/efi/loader/entries/grub.conf
title Grub
linux /EFI/debian/grubx64.efi
END
At this point sd-boot works, but I still had to enable secure boot. So far sd-boot has not been signed with a Debian key known to the shim bootloader, so I needed to create a Machine Owner Key (MOK), enroll it in UEFI and then use it to sign everything.
I dislike the complexity of mokutil and the other related programs, so after removing it and the boot shim I have decided to use sbctl instead. With it I easily created new keys, enrolled them in the EFI key store and then signed everything:
sbctl create-keys
sbctl enroll-keys

for file in /boot/efi/*/*/linux /boot/efi/EFI/*/*.efi; do
    sbctl sign -s $file
done
Since there is no sbctl package yet, I need to make sure that kernels installed in the future will also be signed automatically, so I have created a trivial script in /etc/kernel/install.d/ which automatically runs sbctl sign -s or sbctl remove-file, as sketched below.
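This is a minimal sketch of such a hook, with an assumed file name and the argument convention used by kernel-install plugins:

#!/bin/sh
# hypothetical /etc/kernel/install.d/zz-sbctl.install
# kernel-install runs plugins as: <add|remove> <version> <entry dir> [<image>...]
set -e
case "$1" in
    add)    sbctl sign -s "$3/linux" ;;
    remove) sbctl remove-file "$3/linux" ;;
esac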
The Debian wiki SecureBoot page documents how to do this with mokutil and sbsigntool, but I think that sbctl is much friendlier.
Since I am not using the boot shim, I also had to set DisableShimForSecureBoot=true in /etc/fwupd/uefi_capsule.conf to make firmware updates work automatically.
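This is the setting in context (a sketch; the section name is assumed from the stock configuration file):

# /etc/fwupd/uefi_capsule.conf
[uefi_capsule]
DisableShimForSecureBoot=true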
As a bonus, I have also added to the boot menu the excellent Debian-based GRML live distribution. Since sd-boot is not capable of loopback-mounting CD-ROM images like grub, I first had to extract the kernel and initramfs and copy them to the ESP:
mount -o loop /boot/grml/grml64-full_2022.11.iso /mnt/
mkdir /boot/efi/grml/
cp /mnt/boot/grml64full/* /boot/efi/grml/
umount /mnt/

cat <<END > /boot/efi/loader/entries/grml.conf
title GRML
linux /grml/vmlinuz
initrd /grml/initrd.img
options boot=live bootid=grml64full202211 findiso=/grml/grml64-full_2022.11.iso live-media-path=/live/grml64-full net.ifnames=0
END
As expected, after a reboot bootctl reports the new security features:
System:
Firmware: UEFI 2.70 (Lenovo 0.4496)
Firmware Arch: x64
Secure Boot: enabled (user)
TPM2 Support: yes
Boot into FW: supported
Current Boot Loader:
Product: systemd-boot 252.5-2
Features: ✓ Boot counting
✓ Menu timeout control
✓ One-shot menu timeout control
✓ Default entry control
✓ One-shot entry control
✓ Support for XBOOTLDR partition
✓ Support for passing random seed to OS
✓ Load drop-in drivers
✓ Support Type #1 sort-key field
✓ Support @saved pseudo-entry
✓ Support Type #1 devicetree field
✓ Boot loader sets ESP information
ESP: /dev/disk/by-partuuid/1b767f8e-70fa-5a48-b444-cfe5c272d66e
File: └─/EFI/systemd/systemd-bootx64.efi
...
Relevant documentation: