

+1 for Niagara. It takes a few days to get used to, but it’s the launcher every power user didn’t know they wanted. Lifetime purchase option and a very responsive/passionate dev.


https://en.wikipedia.org/wiki/Markov_chain
Before the advent of AI, I wrote a slack bot called slackbutt that built Markov chains out of the channel’s chat history, with prefix lengths randomly chosen between 2 and 4 words. It was surprisingly coherent. Making an “llm” like that would be trivial.
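For anyone curious, the core of a bot like that is only a few dozen lines. A minimal sketch of the idea (the function names and toy corpus are mine, not slackbutt’s actual code):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words that followed it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=20):
    """Start from a random prefix and walk the chain until it dead-ends."""
    prefix = random.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus, order=2)
print(generate(chain))
```

A real bot would just rebuild the chain from recent channel history and pick the prefix length per message; the output reads coherently because every transition really occurred in the source text.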


It definitely can be disabled post-install but is much simpler to install without it at install-time, and has the added benefit of not pulling 2-5gb of other things that won’t be relevant to your use case. It’s not that the disk waste is that big of a deal, but any issues you run into will be that much easier to troubleshoot with fewer moving parts.


That wasn’t quite the takeaway I was going for. You can get a lot done on 8gb of ram. I was just trying to point out that it would probably be your first bottleneck as you started to scale out, and that you should consider using the server headless to make the ram you have go that much further.


All of those would be perfectly cromulent nodes for small containers. The first issue you’ll run into is the low ram. Some homelab projects will push you past 8gb, but the good news is that if you’re using an external storage backend via NFS, you can always scale out (more nodes) or up (more compute per node) later with minimal headache.
If you’re going to be memory constrained, don’t waste 1-2gb on a gui, install Ubuntu/Debian/whatever headless


Not the same chips, but ddr5, gddr7, and hbm2 are made from the same wafers in the same plants. The issue is wafer allocation and production time skewing toward the higher-margin parts. On top of that, more of the ddr5 output is going to the server ecc variant, which companies are buying in droves for cost-efficient MoE inference.


What is the flag for this?


CGNAT does have a designated range by spec: 100.64.0.0/10, which covers addresses from 100.64.0.0 to 100.127.255.255. Technically an ISP could be using any other private address space, but that would be very uncommon in a modern ISP.
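If you want to check whether the address your router sees falls in that shared range, Python’s stdlib `ipaddress` module makes it a one-liner (the helper name here is mine):

```python
import ipaddress

# RFC 6598 shared address space, reserved for carrier-grade NAT
CGNAT = ipaddress.ip_network("100.64.0.0/10")

def is_cgnat(addr: str) -> bool:
    """True if the address falls inside 100.64.0.0/10."""
    return ipaddress.ip_address(addr) in CGNAT

print(is_cgnat("100.64.0.1"))        # True
print(is_cgnat("100.127.255.255"))   # True
print(is_cgnat("100.128.0.1"))       # False, just past the /10
print(is_cgnat("192.168.1.1"))       # False, ordinary RFC 1918 space
```

If your router’s WAN address tests True here, you’re behind CGNAT and port forwarding from the router won’t help.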


No offence, but the problem is an app forces me to trust you; a website does not. I have tighter and easier control over a web request than I do over an app, and even if an app doesn’t have these permissions today, an update (or an update after a sale) could trivially and silently introduce them.
A website is obvious if the deal changes: you put up a login wall to harvest data, I stop using the site. You put trackers and ads into the UI, I block it at the DNS level.


First instinct: being an app gives me over-permissive data collection scam vibes. I will not be installing it even though I might otherwise find a website of similar capability useful.


Unfortunately, not only does membership have to go to companies, but unless you already produce HDMI-certified products your application will be denied. It would take a lot of fuckery to create that many corporations and not have all of their membership applications denied. Also, I’m not sure it’s even a voting democracy in the traditional sense even if you could.


I suspect the difference in experiences is more due to x11/pulse (my custom systems) vs Wayland/pipewire (bazzite) than it is any particular GPU vendor or driver branch. Which I guess is a roundabout way of saying: maybe? Probably?
Judging by the protondb entry on CS2 I strongly suspect I would have at least the audio issue regardless of gpu.


Appreciate the recommended fixes. I did find similar and was able to work through some of the issues with CS2 but I did that on instinct, and it wasn’t until I was halfway through troubleshooting game 2 of 2 attempted that I realized it wasn’t where I needed it to be for a remote support hand-me-down.
I did briefly entertain the idea of setting up rustdesk on it, but the atomic nature + Wayland made unattended access (read: “help, I broke it and I can’t log in”) not really viable. By the time I got to “hrm, I could probably set up a reverse ssh tunnel into my homelab for persistent support?” I decided windows was probably the play here.


Like the other guy said I think this is a bazzite-induced problem. I have other Linux systems at home. My daily driver and my wife’s daily driver are both highly custom Ubuntu server derivatives, we both have Nvidia GPUs (3050, 5070), and neither of us have similar issues.
The reason I wanted to try bazzite was that I didn’t want to remotely support something super custom.


I just went to repurpose some old hardware for my nephew (4790k + 32gb ddr3 + rtx 3050) which I thought would make a very passable bazzite box. I put 2 drives in the test rig, one with bazzite Nvidia + kde and one with win11 running with the rufus tpm bypass hacks.
CS2 ran at ~40fps in bazzite with no sound once you got in game, win11 ran at ~100
Helldivers2 ran at ~50fps in bazzite with constant frame drops even after letting it precompile shaders. On windows it was a very playable 70fps.
I mainline Linux myself and I wanted bazzite to be the set-and-forget answer, but it really wasn’t. I can’t in good conscience hand that build over to a 12 year old with bazzite, and that was super disappointing.


It’s much simpler than that actually. Nvidia makes a lot of money in feature licensing, particularly GRID/vgpu. If they fully open-sourced the driver they would have no method of enforcing license restrictions.


unattended-upgrades does security-only patching once every 4 hours (in rough sync with my local mirror)
Full upgrades are done weekly, accompanied by a reboot
I find that the split between security patching and feature/bug patching strikes a healthy balance: I know when something is likely to break, but I’m never behind on the latest CVEs.


Back in the day when our community was switching from xmpp to discord, our solution was to write a bot on either end that relayed messages from one to the other. The xmpp bot got more and more naggy over time until eventually we put the xmpp side in read-only for everyone except the relay bot. It did a good enough job at building momentum to switch that the final holdouts came over when we went r/o.
You might consider building something similar if you want to make a genuine effort to switch to matrix or IRC. A relay bot solves the problem of the first people being punished by virtue of being first.


I switched to Niagara a few years back because Nova didn’t have good support for foldables and tbh I haven’t looked back. It’s very different but once you get used to it it’s much faster than a traditional launcher.
There are server chips like the E7-8891 v3 which lived in a weird middle ground of supporting both ddr3 and ddr4. On paper, it’s about on par with a ryzen 5 5500 and they’re about $20 on US eBay. I’ve been toying with the idea of buying an aftermarket/used server board to see if it holds up the way it appears to on paper. $20 for a CPU (could even slot 2), $80 for a board, $40 for 32gb of ddr3 in quad channel. ~$160 for a set of core components doesn’t seem that bad in modern times, especially if you can use quad/oct channel to offset the bandwidth difference between ddr3 and ddr4.
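The channel math is easy to sanity-check. A rough peak-bandwidth sketch, assuming DDR3-1600 vs DDR4-3200 and the standard 64-bit (8-byte) bus per channel (real workloads won’t hit these theoretical numbers):

```python
def peak_gbps(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Theoretical peak memory bandwidth in GB/s:
    transfers/sec * bytes per transfer * number of channels."""
    return mt_per_s * bus_bytes * channels / 1000

ddr3_quad = peak_gbps(1600, 4)  # four channels of DDR3-1600
ddr4_dual = peak_gbps(3200, 2)  # two channels of DDR4-3200
print(ddr3_quad, ddr4_dual)     # 51.2 GB/s either way
```

So quad-channel DDR3-1600 lands at the same theoretical peak as a typical dual-channel DDR4-3200 desktop, which is why the old platform can stay competitive on bandwidth even with the slower memory.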
I think finding a cooler and a case would be the hardest part.