

I use Ceph/CephFS myself for my own 671TiB array (382TiB raw used, 252TiB-ish data stored) – I find it a much more robust and better-architected solution than Gluster. It supports distributed block devices (RBD), filesystems (CephFS), and object storage (RGW). NFS is pretty solid, though, for basic remote filesystem mounts.
The naive view of swap as "extra ram that is slow", and the resulting question "why is my system using swap when I have free ram?", is not really the best way to think about it. It's better to think of ram as a place that holds memory pages, and disk as a place that holds file blocks/extents. Ram happens to be very fast and disk happens to be very slow, but conceptually programs access both memory pages and file blocks as they are running.
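To make that concrete, here is a minimal Python sketch (Linux/Unix; the file path and contents are made up for the example) of a program touching file blocks through memory pages: mmap maps the file into the process's address space and the kernel faults the blocks into ram on demand.

    import mmap

    # Create a small throwaway file so the example is self-contained.
    with open("/tmp/example.dat", "wb") as f:
        f.write(b"hello, page cache")

    with open("/tmp/example.dat", "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as mapped:  # map the whole file
            # Reading these bytes touches a memory page that is backed by the
            # file's blocks; the kernel pulls them into ram (page cache) as needed.
            print(mapped[:5])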
There is no reason that memory pages must live in ram, and no reason that file blocks must live on disk, especially in the modern world of virtual memory. So sometimes it is more efficient for the kernel to say "hey, this memory page is never used and this file block is used a lot, so I'm going to put the memory page on disk and the file block in memory, and everything will be faster."
When a memory page is moved to disk, this is called swapping, and the place it is stored on disk is swap space. When a file block is stored in memory, this is called cache (or in some cases, buffers). So even if you have plenty of ram free, it can still be beneficial for the kernel to move unused memory pages to swap to make space for cache. And once a page is in swap, there's no good reason to move it back until you need it, so even if space frees up in ram, the kernel won't preemptively move it back into ram.
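If you want to see those numbers on a Linux box, here is a minimal Python sketch that just parses /proc/meminfo: some swap in use alongside free ram and a large cache figure is exactly the behaviour described above, not a sign that anything is wrong.

    # Linux-only: /proc/meminfo reports sizes in kB (really KiB).
    def meminfo_kib():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key.strip()] = int(rest.split()[0])
        return info

    m = meminfo_kib()
    print(f"free ram:   {m['MemFree'] // 1024} MiB")
    print(f"page cache: {(m['Cached'] + m['Buffers']) // 1024} MiB")
    print(f"swap used:  {(m['SwapTotal'] - m['SwapFree']) // 1024} MiB")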