Let me check; it will take a while to run. But I think dedup works on a per-block basis, whereas optimise just hardlinks identical files… And I’ve had cases where optimise reported negative savings before as well.
In short, despite autoOptimise being on, I had a dedup ratio of 1.35.
(I’ve turned off autoOptimise now)
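For anyone comparing the two mechanisms: here is a minimal sketch of what `nix-store --optimise` does, using plain coreutils (the temp files are illustrative, not real store paths). Identical files end up sharing a single inode, so savings are per whole file; ZFS dedup instead matches individual blocks, which is why the two can report very different ratios.

```shell
# Illustrative only: optimise-style hardlinking with coreutils.
tmp=$(mktemp -d)
printf 'same contents\n' > "$tmp/a"
printf 'same contents\n' > "$tmp/b"   # duplicate file, separate inode
ln -f "$tmp/a" "$tmp/b"               # replace b with a hardlink to a
stat -c '%i' "$tmp/a" "$tmp/b"        # prints the same inode number twice
rm -r "$tmp"
```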
I should also note that on my system with 8 GB of RAM, nix-env --install completes normally regardless of the size of the derivation, but nixos-rebuild takes my RAM usage from 30% to 98% and the system comes to a standstill. I am not sure how to explain this…
FWIW, the preallocate-contents option in nixUnstable should be available in nixos-unstable but not in 20.09 yet – someone would need to backport the update to stable for that.
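For anyone wanting to try it once it lands: Nix settings are plain `key = value` lines, so (assuming the option name above, and assuming disabling it is what’s wanted here – please verify the value for your setup) it can be persisted in nix.conf / `nix.extraOptions`, or passed per-invocation with `--option`. A sketch:

```shell
# Sketch, not verified: option name taken from the thread, value assumed.
# Persistent form (nix.conf / nix.extraOptions) is a plain key = value line:
echo "preallocate-contents = false" > ./nix.conf.example
cat ./nix.conf.example
# One-off form (hypothetical invocation):
#   nix-build --option preallocate-contents false '<nixpkgs>' -A hello
rm ./nix.conf.example
```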
Since the PR for auto-detection was brought up here, I’ve updated it and it should be ready – let’s see how that goes, but it won’t be available right away either. It’s not too complicated to update, though, and Nix seems well tested, so I don’t think a PR updating nixUnstable would be refused.
Ah, my bad.
nixpkgs’ master branch commit f25bb567f608 ("nixUnstable: 3.0pre20200829_f156513 -> 3.0pre20201020_e0ca98c") updates nixUnstable to a sufficiently recent version, but it’s not in nixos-unstable yet. I somehow assumed nixos-unstable was directly equivalent to master, but it’s not – sorry.
I’m not sure how the branches are updated; it looks linear to me. nixos-unstable-small has it, but nixos-unstable doesn’t yet for some reason – then again, the commit is only two days old, so I assume it’ll get there sooner rather than later…
The release channels have to finish all builds, and all “jobs” need to be successful before the channel is updated.
Although nix{os,pkgs}-unstable follows master, it will always lag behind, sometimes for days.
Usually the long delays are because some aarch64 or darwin build is taking forever to complete, since those are architectures/platforms with less hardware dedicated to them.
EDIT: not to mention the aarch64 builder just has really slow single-core performance.
Hi @asmadeus! It seems that preallocate-contents is available now with nixUnstable; the version of nixUnstable is 3.0pre20201020_e0ca98c. I’m no longer seeing the warning about preallocate-contents being an unknown setting:
```
❯ sudo nixos-rebuild switch
[sudo] password for thiago:
building Nix...
building the system configuration...
activating the configuration...
setting up /etc...
reloading user units for gdm...
reloading user units for thiago...
setting up tmpfiles
```
Only if you let it. It can pretty much use as little RAM as you ask it to; it just defaults to using a ton because the ARC is sophisticated enough to actually make great use of a large cache size, unlike the regular Linux page cache. Dedup is of course the exception, but even then only for write workloads.
That’s true, but when you make it too small you might turn a memory problem into a CPU problem, because the kernel ends up constantly evicting blocks from the ARC ([arc_prune]). The zpool on my laptop holds 116 GB, but setting zfs.zfs_arc_max to 512 MB already resulted in huge CPU load. Now my ARC is 1 GB, which is 1/4 of my RAM, and that actually works fine.
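For reference, a sketch of how that cap is expressed (parameter name as in the post; the sysfs write needs the zfs module loaded and root, so it’s left commented):

```shell
# 1 GiB ARC cap, in bytes, as the zfs_arc_max module parameter expects.
arc_max=$((1 * 1024 * 1024 * 1024))
echo "boot kernel param: zfs.zfs_arc_max=$arc_max"
# Runtime change (requires the zfs module loaded, run as root):
#   echo "$arc_max" > /sys/module/zfs/parameters/zfs_arc_max
```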
Hm. Yeah, I guess I’ve never seriously considered < 1 GB of RAM for the ARC, which to me is pretty small. But if you have such requirements, you do make a valid point.