tuning(7) begins:
"The swap partition should typically be approximately 2x the size of main memory for systems with less than 4GB of RAM, or approximately equal to the size of main memory if you have more."
I can't believe that 64 GB swap should be a norm for a system with 64 GB RAM.
<https://man.freebsd.org/cgi/man.cgi?query=tuning&sektion=7&manpath=freebsd-current>
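For reference, the quoted rule reduces to a one-liner. A minimal sketch in sh, with the 4GB threshold taken straight from the man page's wording rather than from any official formula:

  ram_gb=64                       # physical RAM in GB
  if [ "$ram_gb" -lt 4 ]; then
      swap_gb=$((ram_gb * 2))     # under 4 GB of RAM: swap = 2x RAM
  else
      swap_gb=$ram_gb             # 4 GB or more: swap = RAM
  fi
  echo "tuning(7) suggests ${swap_gb} GB of swap"   # 64 GB here, hence the disbelief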
-
@grahamperrin I feel like that metric should only apply if you plan to hibernate to swap, as on a laptop. 64GB of physical RAM shouldn't even need a swap partition or swapfile at all, unless your workflow regularly caps that massive amount.
-
I think that advice predates the unified buffer cache.
I have a machine with 64 GiB of RAM and 64 GB of swap, mostly because the disks are big and it’s sometimes useful to have a partition you can nuke (I installed FreeBSD on a BIOS-only machine and then replaced the motherboard with a UEFI-only one, so it was very useful to have space to pop a UEFI partition). But I don’t think I’ve seen it put even 1 GiB in swap.
Quite a few processes have some pages that are used on startup and not touched, and if you leave them for long enough these will eventually be swapped out and the RAM used for hotter disk pages, but these days a lot of RAM is typically filled with disk cache and will be preferentially evicted in case of memory pressure.
I’ve seen large swap cause some real problems with processes with run-away allocation. The process grows to fill RAM and swap, then segfaults and dumps core. Dumping core requires paging in every page that was swapped out and writing it to disk. That can take a really long time, hurt system performance while it’s happening and, if you’re really unlucky, fill up the disk (bonus points if the program was running as root).
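One way to guard against exactly that runaway scenario, for what it's worth: a sketch using rctl(8), assuming resource accounting is enabled, and with an invented user name and limits:

  # resource accounting must be enabled first (in /boot/loader.conf, then reboot):
  #   kern.racct.enable=1

  # cap total virtual memory for one (hypothetical) user's processes at 48 GB,
  # so a runaway allocator is denied memory instead of filling RAM plus swap
  rctl -a user:builder:vmemoryuse:deny=48g

  # and/or sidestep the giant-core-dump problem entirely
  sysctl kern.coredump=0          # disable core dumps system-wide
  ulimit -c 0                     # or per shell: core file size limit of zero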
-
@grahamperrin
It could be logical if FreeBSD started supporting hibernation, with swap as where the memory image is saved.
-
@david_chisnall @grahamperrin
I saw swapping around 14GB (or possibly more; I've not noticed, as I've not logged the usage) while running poudriere bulk builds on an i9-12900H detected as 20 CPUs (6 HTT-capable P-cores and 8 HTT-incapable E-cores) with 64GB of RAM, limited to 6 builder jails (-J 6).
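A crude way to actually log it, if anyone wants the numbers next time; a sketch around swapinfo(8), with an arbitrary path and interval:

  # append a timestamped swap-usage sample once a minute
  while :; do
      printf '%s ' "$(date '+%Y-%m-%d %H:%M:%S')" >> /var/log/swap-usage.log
      swapinfo -h | tail -1 >> /var/log/swap-usage.log
      sleep 60
  done
-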
@grahamperrin rust, sir. compiling rust will do it. heh.
(The peak page allocations when building some packages require it. Like, I've tried building with CPU core limits but 32G, 64G of RAM and no swap, and it still gets killed for out-of-memory.)
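The usual workaround is to trade build-time parallelism for memory. A sketch, assuming stock poudriere file locations and with the numbers picked out of thin air:

  # /usr/local/etc/poudriere.conf -- fewer things running at once
  PARALLEL_JOBS=4       # fewer concurrent builder jails
  USE_TMPFS=no          # keep build trees on disk instead of in RAM

  # make.conf used by the jails -- cap jobs inside each builder,
  # so one rust or llvm build can't fan out across every core
  MAKE_JOBS_NUMBER=4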
-
@TomAoki true, however I'm certain that in reality: the beginning of the page is terribly outdated … and when the first sentence of a page is wrong, it doesn't inspire confidence in the rest of the page.
Whilst <https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=218538> (2017) is probably too blunt – "tuning(7) should be either strictly maintained, or removed" – I do empathise with the sentiment.
-
@erikarn thanks, however that's not a typical use case.
The manual page is "typically" …
-
@grahamperrin @david_chisnall
What's difficult about the "tuning" manpage is that tunings strongly depend on both the planned workload and the available resources.
And more: how important interactivity is (even on some kinds of server workloads, "real time" responses to manual management matter, while others only care how fast "batch" jobs finish).
The only way to cover all cases would be to completely ignore costs / budgets. But that wouldn't be "tuning" at all.
-
@david_chisnall @grahamperrin
Just now, observed below on poudriere builds...

last pid: 54092; load averages: 75.87, 70.98, 65.17 up 1+11:06:07 19:01:01
341 processes: 81 running, 260 sleeping
CPU: 93.3% user, 1.3% nice, 4.4% system, 1.0% interrupt, 0.0% idle
Mem: 43G Active, 6873M Inact, 2507M Laundry, 8681M Wired, 347K Buf, 1359M Free
ARC: 2033M Total, 1037M MFU, 567M MRU, 7356K Anon, 32M Header, 375M Other
1309M Compressed, 3096M Uncompressed, 2.37:1 Ratio
Swap: 64G Total, 16G Used, 49G Free, 24% Inuse, 652K In

% poudriere status -b
=>> [16amd64-default] [2026-02-15_07h21m26s] [parallel_build] Time: 11:41:43
Queued: 2594 Inspected: 0 Ignored: 1 Built: 1965 Failed: 0 Skipped: 0 Fetched: 0 Remaining: 628
ID TOTAL ORIGIN PKGNAME PHASE TIME TMPFS CPU% MEM%
[01] 01:07:36 www/webkit2-gtk@40 | webkit2-gtk_40-2.46.6_6 build 01:01:26 2.80 GiB 160.9% 3.3%
[02] 00:12:05 graphics/kdiagram-qt6 | kdiagram-qt6-3.0.1 package 00:00:06 37.36 MiB 8.7% 0.1%
[03] 00:07:27 devel/kf6-ktexttemplate | kf6-ktexttemplate-6.23.0 build 00:01:12 18.96 MiB 320.2% 3.3%
[04] 02:44:41 editors/zed | zed-editor-0.222.4 build 02:34:44 7.93 GiB 22.6% 31.3%
[05] 02:34:43 www/webkit2-gtk@41 | webkit2-gtk_41-2.46.6_6 build 02:29:57 2.80 GiB 480.5% 17.2%
[06] 02:34:31 www/webkit2-gtk@60 | webkit2-gtk_60-2.46.6_6 build 02:24:56 2.80 GiB 453.2% 19.2%
=>> Logs: /poudriere/data/logs/bulk/16amd64-default/2026-02-15_07h21m26s
-
Poudriere could be significantly improved by exporting a semaphore into each jail and patching gmake, ninja, and a handful of other ports to acquire it before starting a job and release it at the end.
Currently it has the nested-schedulers problem: the things making the scheduling decisions do not have the global knowledge to make good choices. On a 16-core machine, you either let each jail build sequentially and end up with an LLVM build that everything depends on running with 15 cores idle, or you allow each jail to use 16 cores and end up with two big builds (e.g. LLVM, Chromium, LibreOffice) time-slicing the cores and using more RAM than you have.
Ideally, each jail spawns more than one build job only when other jails are not running, and limits parallel jobs (e.g. LLD) based on the number of available cores.
CMake can now put link jobs in a separate resource pool because they tend to be memory- and I/O-limited rather than CPU-bound. Linking LLVM will mmap several gigabytes of files and copy fragments of them. This can easily trigger swapping if you’re linking multiple tools at the same time.
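That global semaphore would look a lot like GNU make's jobserver widened to the whole machine. A minimal sketch in sh using a FIFO as the token pool; the path and slot count are invented, and gmake/ninja would of course still need patching to speak it:

  # one-time setup on the host: a FIFO preloaded with one token per core
  FIFO=/tmp/global-jobslots         # hypothetical path, exposed inside each jail
  mkfifo "$FIFO"
  exec 3<>"$FIFO"                   # hold it open read-write so tokens persist
  i=0; while [ "$i" -lt 16 ]; do printf x >&3; i=$((i+1)); done

  # what a patched build tool would do around every job:
  dd bs=1 count=1 <&3 >/dev/null 2>&1   # acquire: blocks until a token is free
  cc -c file.c                          # ... the actual compile or link job ...
  printf x >&3                          # release: return the token

This token-in-a-pipe protocol is essentially what GNU make's jobserver already does within a single make invocation.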