Any reason why you can’t buy a 2TB SSD and have both a 1TB and 2TB? I have another comment on this thread outlining the complexities of caching on Linux.
L2ARC is not a read cache in the conventional sense; it’s closer to swap, but for disk reads. It’s only really effective when your ARC hit rate is low because of memory constraints, although I’m not sure how things stack up now with persistent L2ARC. ZFS does have special allocation devices, though, where metadata and optionally small blocks of data (which HDDs struggle with) can go, but you can lose data if those devices fail. There’s also the SLOG, where sync writes can go. It’s often useful to use something like optane drives for it.
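If you do go down that road, the rough shape of it is below (a sketch only; the pool name and device paths are placeholders):

```
# Add a mirrored special vdev for metadata (it must be redundant: losing it loses the pool)
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
# Optionally let small blocks (which HDDs hate) land on the special vdev too
zfs set special_small_blocks=32K tank
# Separate log device for sync writes (the SLOG); optane-class drives work well here
zpool add tank log /dev/nvme2n1
# L2ARC, if you really want it
zpool add tank cache /dev/nvme3n1
```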
Personally, I’d just keep separate drives. A lot of caching methods are afterthoughts (bcache is not really maintained now that Kent is working on bcachefs) or, like ZFS’s, are really complex and aren’t true readback/writeback caches. In particular, LVM cache can, depending on its configuration, lead to data loss if a cache device is lost, and LVM itself can incur some overhead.
Flash is cheap. A 2TB NVMe drive is now roughly the cost of 2 AAA games (which is sad, really). OP should just buy a new drive.
Not all docker containers contain a shell binary. You could still file an issue with moby, the upstream of docker, though.
Yes, probably. It is possible to flash and run dasharo (a downstream fork of coreboot) on a modern MSI Z790A motherboard, which gets you pcie gen 5, 14th gen intel, and so on. I’m not sure if the necessary code to get it running has been upstreamed into coreboot yet. https://docs.dasharo.com/unified/msi/overview/
From there, you can use corna’s me_cleaner to disable (and clean) the management engine. There are reports of it working on alder lake: https://docs.dasharo.com/unified/msi/overview/
Here’s a full tutorial on disabling your ME on modern systems: https://github.com/mostav02/Remove_IntelME_FPT?tab=readme-ov-file#neutralizing-me-and-flashing-via-fpt
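For the actual mechanics, the rough shape is below. This is only a sketch: it assumes flashrom’s internal programmer can read and write your flash (vendor firmware often blocks that, which is why the guide above uses Intel FPT instead), and me_cleaner’s flags are worth double-checking against its README before touching anything.

```
# 1. Dump the full flash image first and keep a copy somewhere safe
sudo flashrom -p internal -r full_dump.bin

# 2. Set the HAP "ME disable" bit only (-S); stripping modules tends to brick newer platforms
python me_cleaner.py -S -O cleaned_dump.bin full_dump.bin

# 3. Write back just the ME region, using the flash descriptor layout
sudo flashrom -p internal --ifd -i me -w cleaned_dump.bin
```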
To be honest, though, I wouldn’t bother unless you’re doing it for fun. I’m not sure if this entire process necessarily works on the Z790+14th gen intel anyway.
then IBM/Red Hat steal it lol.
Not really. RH provides all the hosting for the Fedora project, pays multiple people to work on it full time, and on top of that, the RPM specs (which are used to actually build packages) are all MIT licensed. It’d be like complaining that bluehat steals the Linux kernel by cloning it from a git repo and making/distributing their own version of it, which is exactly what they do.
cause most of the Centos volunteer team to resign
The centos volunteers never resigned because of RH. RH got centos in the first place because the project almost failed to get a few major releases out. It wasn’t until other companies started providing support for their own RHEL derivatives that RH chose to restrict sources.
They didn’t murder centos
They murdered it, hollowed it out, then re-used the name for something completely new. Granted, the new thing is far from bad, and despite having half the support cycle, the cadence is way more consistent because there’s no lag time between minor releases (there simply aren’t any). Releases are still apparently checked by RH QA, and bug fixes now come a little faster, too.
Most people have bought into FUD, and spout off the same BS points, and were never centos users to begin with.
I’ll do you one better: the centos users got exactly what they paid for, and could have stepped in at any time to keep centos from turning into centos stream by making their own supported distro. Nobody did until the original centos was gone, and then people were somehow surprised that a distro with a fixed 10 year support cycle takes a nontrivial amount of resources to run. I guess Oracle kind of tried to make their own version of centos with OL before the advent of CentOS Stream, though it was far from being “by the community, for the community”.
Redhat was the “corporate/govt” OS and I know it’s changed
I wouldn’t necessarily say this. RH’s still a major defense contractor, and IBM too.
Tbf you did start your post with
I’m in the process of starting a proper backup
So you’re going to end up with at least a few people talking about how to onboard your existing backups into a proper backup solution (like borg). Your bullet points could probably be turned into a shell script around rsync, but why? A proper backup solution with a full backup history is going to be way more useful than dumping all your files into a directory and renaming things in case something gets clobbered. I don’t see the point in doing anything other than tarring your old backups and using borg import-tar
(docs). It feels like you’re trying to go from one half-baked, odd backup solution to another, instead of just going with a full, complete solution.
As said previously, Borg is a fully deduplicating, incremental archiver complete with compression. You can use relative paths temporarily to build up your backups and a full backup history, then use something like pika to browse the archives and confirm the history is all there.
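The whole import is only a couple of commands (a minimal sketch; the repo path and archive names are made up, and import-tar needs borg 1.2+):

```
# Create the repository once
borg init --encryption=repokey-blake2 /srv/backups/borg

# Pull each old tarred backup in as its own dated archive, preserving history
borg import-tar /srv/backups/borg::old-backup-2022-06 old-backup-2022-06.tar

# Then take normal deduplicated, incremental backups going forward
borg create --stats --compression zstd \
    /srv/backups/borg::'{hostname}-{now}' ~/Documents ~/Pictures
```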
I think it’d also be good to document:
Alpine and NixOS: both 6 months
Minor releases of RHEL: 6 months
Non LTS Ubuntu: 6 months
The question also brings up Fedora rawhide. Fedora rawhide never has releases, though its version is always the current latest branched release (not necessarily stable/beta/alpha) + 1.
Since the pace of development was also brought up:
Fedora Rawhide and ELN (same package set) -> Fedora Stable after ~2-3 months of being “stabilized” (this stabilization period has periodic “freezes” which is why bad versions of XZ never made it into Fedora 40’s beta)
Fedora Rawhide and ELN (same package set) -> CentOS Stream (currently unclear how long it takes to go from branched to full release, though it was branched months ago from ELN) -> RHEL every 6 months
AlmaLinux releases are tagged from CentOS stream every 6 months, and patched with security updates. When a new version releases, the current minor release is immediately EOL’d, unlike RHEL. Rocky is the same. Both have extra support services from third parties. RHEL offers EUS releases for every other minor release (as of RHEL 9).
It’s a common misconception that Fedora stable releases become CentOS Stream releases. This pattern was true pre-CentOS stream, but now, for the most part, CentOS Stream and Fedora stable might share a few patches at most, but their development timelines are different. They branch directly off the rolling Fedora Rawhide/ELN trunk.
Debian unstable -> Debian testing (auto-promoted after 2 weeks iirc) -> Ubuntu stable or Debian stable
There are existing mirrors for Fedora and Ubuntu packages in China, which are used because mirrors in other countries are often blocked. I’m sure there are no legality issues. The problem in the case of flatpak and China specifically is that China blocks Fastly, because Fastly doesn’t host any POPs in China; this is why Cloudflare, for example, has its own network in China that international customers can pay to use. There’s nothing legal at play here, just logistics. Besides, as previously shown, people did (with great difficulty) manage to bring up their own flatpak mirror, and have run it without any consequences for a few years now.
Besides, there shouldn’t be legality issues for businesses wanting to host their own mirrors for compliance reasons.
I’m not sure if anyone said it was the fault of flathub. My point is that, regardless of fault, accessibility to the main instance is an issue for several reasons, and a good way to solve it is to build a system for mirrors.
It’s not about funding. Many prefer mirrors because the main instance isn’t globally available (the GitHub issue I linked, for example, is all about people trying and failing to access flathub in China) or because they can’t use it for compliance reasons (many businesses already mirror stuff like epel, too, which is what throws off Rocky’s stat counters). Neither of those issues can be addressed by throwing more money at a CDN.
To everyone saying you can’t mirror a flatpak repo… you’re absolutely right. There should be a far easier way to set up your own mirror without needing to build everything from scratch. That being said, if you wanted to try to make your own repo with every one of flathub’s apps, here you go:
https://docs.flatpak.org/en/latest/hosting-a-repository.html
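For a sense of scale, mirroring even a handful of refs by hand looks something like this (just a sketch; the repo mode, the refs, and skipping GPG verification are assumptions, not a recipe):

```
# Local archive-mode ostree repo to serve over plain HTTP
ostree init --repo=flathub-mirror --mode=archive
ostree remote add --repo=flathub-mirror --no-gpg-verify flathub https://dl.flathub.org/repo/

# Pull the app refs you want, plus the runtimes they depend on, in --mirror mode
ostree pull --repo=flathub-mirror --mirror flathub app/org.mozilla.firefox/x86_64/stable
ostree pull --repo=flathub-mirror --mirror flathub runtime/org.freedesktop.Platform/x86_64/23.08

# Regenerate the summary so flatpak clients can actually use the repo
flatpak build-update-repo flathub-mirror
```

Now multiply that by every app and runtime on flathub, plus keeping it all in sync.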
Edit: Some did get a flathub mirror working. The issue is that a. Fastly works well enough and b. there is no concept of “packages” on the server side. It’s just one big content-addressed store because of ostree, and syncing is apparently difficult? Idk, being able to sync the state of content is like the entire point of ostree…
I hope this makes the US revisit the concept of building something like the SSC. Competition in science is awesome.
Am I wrong in assuming that both OS’s should be sharing the EFI and /boot partitions?
Shared ESP is fine as long as you don’t run out of space. Nothing in /boot should conflict either, but that’s not guaranteed, and keeping two boot partitions means keeping two grub configs. I’d make sda3 a ~2GB ext4 boot partition just for Fedora (mounted at /boot), put btrfs on sda5 with a home subvolume mounted at /home and a root subvolume mounted at /, then mount sda1 at /boot/efi (this is the default layout iirc, albeit with different partitions, ofc). This might be easier to do in the advanced blivet gui.
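For reference, the end state would look roughly like this in Fedora’s /etc/fstab (UUIDs and options here are made up, Fedora-ish defaults, just to illustrate the layout):

```
# sda1: shared ESP
UUID=AAAA-BBBB    /boot/efi  vfat   umask=0077,shortname=winnt   0 2
# sda3: Fedora-only /boot
UUID=1111-fedora  /boot      ext4   defaults                     1 2
# sda5: one btrfs filesystem, two subvolumes
UUID=2222-btrfs   /          btrfs  subvol=root,compress=zstd:1  0 0
UUID=2222-btrfs   /home      btrfs  subvol=home,compress=zstd:1  0 0
```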
And yes, Linux’s boot process is a convoluted, fragile mess and there are currently multiple ongoing discussions on how to improve it.
So you want to build something like apko (alpine packages/repos, used in chainguard’s images) or rules_oci (used in google’s Debian-based distroless images) but for portage?
I think it’d be cool. Just keep in mind:
I actually do use quadlets on my server, which are absolutely fantastic and hook into systemd, but I don’t see any reason why another init system couldn’t do something similar, or even contribute something like podman generate systemd but for a different init system.
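For anyone who hasn’t seen one, a quadlet is just a tiny unit file; a minimal sketch (the image, port, and filename are arbitrary examples):

```
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Tiny demo web service

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

After a systemctl --user daemon-reload, podman’s generator turns that into whoami.service; the [Install] section stands in for enabling it, since generated units can’t be enabled directly.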
I think it’s worth giving the ycombinator post a read.