Replies: 8 comments
-
GPU memory usage is not necessarily optimized in MadNLP. Curious whether this is CuCholeskySolver's issue or a general MadNLP issue. Have you tried …
-
I have tried, but it fails with the error reported in #333. I dual-boot Windows and Linux; on Linux the memory issue is much less severe, but still present.
-
With the package update suggested in issue #333, I've managed to run OPF with …
-
Converting this to a discussion, as memory management within the GPU is not something we have direct control over.
-
Do you think we could have some command to clear the GPU memory taken by MadNLP without stopping the Julia session?
-
I'd suggest starting from https://cuda.juliagpu.org/stable/usage/memory/. Maybe you can try calling …
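The exact call is cut off above, but a minimal sketch of the cleanup pattern from the CUDA.jl memory-management docs would be:

```julia
using CUDA

# Run Julia's GC so unreachable GPU arrays are finalized,
# then ask CUDA.jl to release cached pool memory back to the driver.
GC.gc(true)
CUDA.reclaim()

# Print current device memory usage to verify the effect.
CUDA.memory_status()
```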
-
@sshin23 The memory management issue is largely resolved with the cuDSS 0.3 update. There is still build-up with repeated runs in the same session, but it is much smaller now, about 100 MB per run for a 78k network. Also, less total GPU memory is used compared to v0.1. To free GPU memory I need to call …
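For context, one rough way to measure that per-run build-up (a sketch; it assumes `model` is an ExaModels problem already instantiated on the GPU and solved via MadNLP's `madnlp` entry point):

```julia
using CUDA, MadNLP

free_before = CUDA.available_memory()
result = madnlp(model)                 # one OPF solve on the GPU
free_after = CUDA.available_memory()

# Device memory retained by this solve, in MB.
println("build-up: ", (free_before - free_after) / 1e6, " MB")
```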
-
@KSepetanc Thanks for letting us know; indeed, we're observing improvement in memory management. The compatibility has been updated in the recent release of MadNLPGPU.
-
I found out that solving the ExaModels AC OPF documentation example in a loop results in a possible memory leak. All 6 GB of the GPU's VRAM is consumed in about 33 solves.
Below is the relevant part of the code; the full code to reproduce can be downloaded: gpu memory leak.zip
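The original snippet did not survive here, so the loop below is a reconstruction under assumptions: `build_opf_model` is a hypothetical stand-in for the AC OPF constructor in the ExaModels documentation example, and the case file name is illustrative.

```julia
using ExaModels, MadNLP, MadNLPGPU, CUDA

# Hypothetical helper standing in for the documentation's AC OPF builder;
# the case file name is illustrative.
model = build_opf_model("case.m"; backend = CUDABackend())

for i in 1:40
    result = madnlp(model)             # each solve allocates device memory
    println("solve $i: ", result.status)
    # Without explicit cleanup, VRAM fills after ~33 solves on a 6 GB card:
    # GC.gc(true); CUDA.reclaim()
end
```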
Configuration:
OS: Windows
CPU: AMD Ryzen 9 5950X
GPU: NVIDIA GTX 1060 6 GB