Replies: 4 comments
-
The peak definitely happens when the repos are first loaded by libsolv. After the cache (solv) files are created, the data is compressed and needs less memory on subsequent runs. In addition, dnf5 can end up loading multiple repos at the same time, which needs more memory, but the user can configure the repos and work with only one at a time. Right now 50 MB doesn't seem possible; even in the best scenario I could not get below ~215 MiB. I just did some testing in an F38 container:
With just the …
With just the …
(For context, the …) With the …
It does still seem too high to me, especially in the last example. I will investigate further.
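For anyone who wants to reproduce this kind of measurement, here is a minimal sketch (Linux only; the `dnf5 --assumeno install hello` command is just a placeholder for whatever operation you want to profile, not the exact commands I ran above):

```python
# Minimal sketch: run a command and report the peak RSS of the child.
import resource
import subprocess
import sys

def peak_rss_mib(cmd):
    """Run cmd to completion and return the child's peak RSS in MiB."""
    subprocess.run(cmd, check=False)
    # On Linux, ru_maxrss is in KiB and aggregates all waited-for children,
    # so invoke this script once per command to get a meaningful number.
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss / 1024

if __name__ == "__main__":
    cmd = sys.argv[1:] or ["dnf5", "--assumeno", "install", "hello"]
    print(f"peak RSS: {peak_rss_mib(cmd):.0f} MiB")
```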
-
Previously I was doing the memory measurement with … Now I was testing by running … It seems like a 1 GB VM should be enough?
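In case it helps, here is a rough sketch of how to see *where* in the run the peak happens (e.g. during the initial metadata load vs. later); it samples only the main process's RSS from /proc, ignores child processes, and the dnf5 command is again a placeholder:

```python
# Rough sketch: sample a child's RSS while it runs to locate the peak.
import subprocess
import time

def sample_rss(cmd, interval=0.05):
    """Yield (elapsed_seconds, rss_kib) samples for cmd until it exits."""
    proc = subprocess.Popen(cmd)
    start = time.monotonic()
    while proc.poll() is None:
        try:
            with open(f"/proc/{proc.pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        yield time.monotonic() - start, int(line.split()[1])
        except FileNotFoundError:
            break  # process exited between poll() and open()
        time.sleep(interval)

if __name__ == "__main__":
    # Placeholder command; substitute whatever you are measuring.
    samples = list(sample_rss(["dnf5", "--assumeno", "install", "hello"]))
    if samples:
        t, kib = max(samples, key=lambda s: s[1])
        print(f"peak ~{kib / 1024:.0f} MiB at t={t:.1f}s")
```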
-
I tried the following: …
I am suggesting that processing a larger transaction or bigger RPMs might require additional RAM.
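One way to put numbers on that: run the same transaction under decreasing address-space caps and see where it starts failing. A hedged sketch follows; RLIMIT_AS bounds virtual memory, which overestimates RSS, so treat the thresholds as relative rather than as exact peak-RSS figures, and the install command is again a placeholder:

```python
# Sketch: probe how much address space a command needs by running it under
# decreasing RLIMIT_AS caps and comparing exit status against an uncapped
# baseline run.
import resource
import subprocess

def run_capped(cmd, limit_mib=None):
    """Run cmd, optionally with its address space capped; return the exit code."""
    def cap():
        if limit_mib is not None:
            limit = limit_mib * 1024 * 1024
            resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
    return subprocess.run(cmd, preexec_fn=cap).returncode

if __name__ == "__main__":
    # Placeholder command: depsolve only, decline the transaction prompt.
    cmd = ["dnf5", "--assumeno", "install", "hello"]
    baseline = run_capped(cmd)  # exit code without any cap
    for mib in (2048, 1024, 512, 256):
        code = run_capped(cmd, mib)
        print(f"{mib:>5} MiB cap: exit {code}"
              + ("  (differs from uncapped run)" if code != baseline else ""))
```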
-
Anyway, this is a huge improvement in comparison to microdnf, which requires 550 MB.
-
Is there any theory for how much memory dnf5 will consume for basic operations like "dnf5 upgrade" or "dnf5 install $FOO_PKG"?
I was hoping that dnf5 would get this under control, but even after the latest libsolv improvement, dnf5's memory usage is still too high to run on a 1 GB VM without triggering the OOM killer.
For example, I'm wondering: is it conceptually possible for dnf5 to run with less than 50 MB? Does that depend on the number of packages and repos? I can easily imagine that depsolving creates tons of data structures to model things, and that there's a tradeoff between CPU and memory, but it would seem that for all tiny use cases (embedded and micro VMs), memory is the hard resource that dnf needs to work around, and CPU is a secondary consideration.