Y’all are too creative for me… I have:
- poweredge-r520-0
- poweredge-t620-0
- poweredge-t620-1
- pi4-0
- pi3b-0
- pi3b-1
- pi3b-2
- pi3b-3
- vostro-3525-0
- ideapad-c340-0
I just checked again, but I have no such option in my BIOS. In fact, there aren’t any video-related options at all.
My BIOS splash screen only shows up if the monitor’s attached to the motherboard video output. The outputs on the GPU have no signal until plasma starts…
I’ve never needed to do more than sudo apt install nvidia-driver; after that, it Just Works™.
debian btw
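For reference, the whole process is basically just this (assuming the non-free/non-free-firmware components are already enabled in your apt sources; the lsmod check at the end is just an optional sanity check I’d add, not part of the original one-liner):

```bash
sudo apt update
sudo apt install nvidia-driver
# reboot, then confirm the kernel module is loaded:
lsmod | grep nvidia
```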
How is the software-rendered image supposed to show up on the screen if the GPU is nonresponsive? Excluding laptops with switchable graphics, the GPU is the one actually connected to the display. If the GPU hangs, how could the CPU continue to update the framebuffer in GPU memory?
Making more wallets would cost nothing more than a few hundred bytes of storage each for the keys. I have no idea why they wouldn’t have split their funds into evenly sized wallets of, say, $1M each.
Here’s a much more effective MIDI-ification of a song with vocals: https://youtu.be/0k7UD64RwGU
That’s ~2.4Gbit/s. There are multiple residential ISPs in my area offering 10Gbit/s up for around $40/month, so even if we assume the bandwidth is significantly oversubscribed, a single cheap residential internet plan should be able to handle that no problem (let alone a datacenter setup, which probably has 100Gbit/s links or faster).
50MB/s is like 0.4Gbit/s. Idk where you are, but in Switzerland you can get a symmetric 10Gbit/s fiber link for like 40 bucks a month as a residential customer. Considering 100Gbit/s and even 400Gbit/s links are already widely deployed in datacenter environments, 300MB/s (or 2.4Gbit/s) could easily be handled even by a single machine (especially since the workload basically consists of serving static files).
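Just to spell out the arithmetic (1 byte = 8 bits):

```bash
# MB/s → Mbit/s: multiply by 8
echo $((50 * 8))    # 400  Mbit/s ≈ 0.4 Gbit/s
echo $((300 * 8))   # 2400 Mbit/s = 2.4 Gbit/s
```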
I’ve got an old HP laptop which I’ve been running a Jenkins server on for years. The fan died back in like 2018, and I just kept putting off buying a replacement, so it has been running with no fan for 7 years now. Remarkably it still works fine, although a bit slower than it used to be thanks to thermal throttling :P
Roscoe is one of my professors at ETH, and he gave a keynote at VISCon a few months ago where he discussed this stuff and what his department is working on. Apparently a lot of their current work (they being the systems department at ETH) is about formally modeling which parts of a system have access to which other parts, figuring out which of those permissions are actually needed, and then deriving the strictest possible MPU configuration that still yields a working system. The advantage of this approach over an entirely new kernel is that, well, it doesn’t require an entirely new kernel; it can be built into an existing system, while still allowing them to basically eliminate the entire class of vulnerabilities they’re targeting.
This guy (Roscoe) is one of my professors and I’ve heard him give a few talks related to this before, so I’ll try to summarize the problem:
Basically, modern systems do not really match the classic model of “there’s some memory and peripheral devices attached to a bus, and they’re all driven by the CPU running a kernel which is responsible for controlling everything”. Practically every component has its own memory and processor(s), each running their own software independently of the main kernel (sometimes even with their own separate kernel!), there are separate buses completely inaccessible to the CPU specifically for communicating between components, often virtually every component is directly attached to the memory bus and therefore bypasses the CPU’s memory protection mechanisms, and a lot of these hidden coprocessors are completely undocumented. A modern smartphone SoC can have tens of separate processors all running their own software independently of each other.
This is bad for a lot of reasons, most importantly that it becomes basically impossible to reason about the correctness or security of the system when the “OS kernel” is actually just one of many equally privileged devices sharing the same bus. An example of what this allows: it is (or was) possible to send malformed WiFi packets and trigger a buffer overrun in certain mobile WiFi modems, allowing an attacker to get arbitrary code execution on the modem and then use that to overwrite the linux kernel in main memory, thus achieving full kernel-level RCE with no user interaction required. You can have the most security-hardened linux kernel you want, but that doesn’t mean a damn thing if any one of dozens of other processors can just… overwrite your code or read sensitive data directly from applications!
As I understand it, the goal of these projects is basically to make the kernel truly control all the hardware again, by having the kernel also provide the firmware/control software for every component in the system. Obviously this requires a very different approach than conventional kernel designs, which basically just assume they rule the machine.
This is specific to page reclamations, which only occur when the kernel is removing a block of memory from a process. VMs in particular pretty much never do this; they pin a whole ton of memory and leave it entirely up to the guest OS to manage. The JVM also rarely, if ever, returns heap memory to the kernel - only a few garbage collectors even support doing so (and support is relatively recent), and even if you have it configured correctly it’ll only release memory when the Java application is relatively idle (so the performance hit isn’t noticeable).
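As a rough illustration of what “configured correctly” can look like (the exact values here are made up for the example; the mechanism is G1’s periodic GC uncommitting heap when the app is idle, per JEP 346 in JDK 12+):

```bash
java -XX:+UseG1GC \
     -XX:G1PeriodicGCInterval=60000 \
     -XX:MinHeapFreeRatio=10 \
     -XX:MaxHeapFreeRatio=30 \
     -jar app.jar
# G1PeriodicGCInterval: attempt a GC cycle after ~60s with no other GC activity
# Min/MaxHeapFreeRatio: bounds on free heap before the collector shrinks/grows it
```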
Yeah, it’d definitely be faster while pages are actively being moved to swap.
This probably won’t make much difference unless your application is frequently adding and removing large numbers of page mappings (either because it’s explicitly unmapping memory segments, or because pages are constantly being swapped in and out due to low system memory). I would suspect that the only things which would really benefit from this under “normal” circumstances are some particularly I/O intensive applications with lots of large memory mappings (e.g. some webservers, some BitTorrent clients), or applications which are frequently allocating and deallocating huge slabs of memory.
There might be some improvement during application startup as all the code+data pages are mapped in and the memory allocator’s arenas fill up, but as far as I know anonymous mappings are typically filled in one page at a time on first write so I don’t know how relevant this sort of batching might be.
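If you want to check whether a given workload actually churns mappings like that, one rough way (hypothetical invocation; needs perf and usually root) is to count the relevant syscalls and faults directly. Note this only catches explicit mmap/munmap calls, not swap-driven changes:

```bash
perf stat -e syscalls:sys_enter_mmap,syscalls:sys_enter_munmap,page-faults \
    ./your_app    # placeholder for the actual workload
```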
Yes, I will store the expensive paintings in a house made of toothpicks and cardboard with a tar roof in a state known to have widespread fires every year. It is a good idea.
Deleting their account doesn’t change the rules, only the person enforcing them.
“If everyone could see it”
Proceeds to show an alternate future where having seen it is apparently a status symbol, which implies that not everyone is able to see it
I guess the artist just wanted to make a “haha rich people bad” joke, but surely there are other ways to make the same joke without completely ignoring the hypothetical scenario you described literally one panel before?
Debian-based distros (and probably most others as well) actually have a package called “intel-microcode” which gets updated fairly regularly.
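For anyone who wants to check on their own machine (package names from Debian/Ubuntu; amd64-microcode is the AMD counterpart):

```bash
sudo apt install intel-microcode
# after a reboot, the loaded revision shows up in the kernel log and cpuinfo:
sudo dmesg | grep -i microcode
grep microcode /proc/cpuinfo | head -1
```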
A huge chunk of Linux development is subsidized by the hundreds of corporations which depend on it and pay developers to maintain things. There is no corporate interest in developing and/or maintaining an alternative browser engine when chromium already exists and dominates the market.