build wheels without build isolation #1473
Conversation
Changes look good to me. Just to verify, nothing is needed for cuproj since it doesn't have C++ wheels, right?
This looks great! Thanks for iterating a few times James, I think we've come up with a clean end state that we can port everywhere now.
And yes that's right Bradley, no C++ cuproj. It's built as part of libcuspatial.

```shell
RAPIDS_PACKAGE_VERSION=$(rapids-generate-version) rapids-conda-retry mambabuild \
    conda/recipes/libcuspatial

sccache --show-adv-stats
```
Added sccache stats printing for conda builds as well, based on rapidsai/build-planning#111 and an offline conversation with @vyasr.
Example build with these changes: https://github.com/rapidsai/cuspatial/actions/runs/11465788119/job/31906443973?pr=1473
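For context, a minimal sketch of what that wrapping looks like inside a CI build script; only the mambabuild and `sccache --show-adv-stats` lines come from the diff above, the rest (including the `--zero-stats` reset) is illustrative:

```shell
#!/bin/bash
# Sketch of the sccache stats pattern around a conda build in a CI script.

set -euo pipefail

# Reset counters so the printed stats cover only this build
# (an assumption for illustration; not necessarily part of this PR).
sccache --zero-stats

RAPIDS_PACKAGE_VERSION=$(rapids-generate-version) rapids-conda-retry mambabuild \
    conda/recipes/libcuspatial

# Print cache hit/miss statistics into the CI log.
sccache --show-adv-stats
```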
Why not build.sh?
Here, I'm proposing that these be put in ci/build_{cpp,python}.sh, wrapped around any conda mambabuild invocations that will end up compiling anything (so, for example, I included them in build_python.sh because cuspatial and cuproj both have some Cython extensions).
I think this is preferable to build.sh, because the build_{cpp,python}.sh scripts are unambiguously for CI and can unconditionally invoke sccache. build.sh is invoked in CI, but it's also intended for local use, and I didn't want to assume the presence of sccache for all local builds or introduce the complexity of having to check for its use.
There are also some projects, like cuvs, that have their own handling for printing sccache stats and creating build summary reports based on them in build.sh, and I didn't want to interfere with that.
Examples:
https://github.com/rapidsai/cuvs/blob/e7f1085b71c340b9600f5f38f7f0059a5c7aa806/build.sh#L364-L366
https://github.com/rapidsai/cuvs/blob/e7f1085b71c340b9600f5f38f7f0059a5c7aa806/build.sh#L408-L424
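For illustration, the kind of guard build.sh would need in order to stay safe for local use might look like this (a sketch, not code from this PR or from cuvs):

```shell
# Only print stats when sccache is actually installed, since local
# builds may not use it at all.
if command -v sccache >/dev/null 2>&1; then
    sccache --show-adv-stats
fi
```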
Thanks for the reviews everyone! This looks good to me (double-checked that we're seeing …).
/merge
Description
Contributes to rapidsai/build-planning#108
Contributes to rapidsai/build-planning#111
Proposes building libcuspatial wheels with --no-build-isolation, to improve the rate of sccache cache hits and therefore reduce CI times. Also proposes printing sccache stats to CI logs after wheel and conda builds.
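As a rough sketch of what a non-isolated wheel build involves (the package names and paths here are illustrative, not the exact contents of this PR's CI scripts):

```shell
# With --no-build-isolation, pip does not create a throwaway build
# environment, so build requirements must be installed up front.
python -m pip install scikit-build-core cmake ninja  # illustrative build deps

# Building in the ambient environment keeps compiler invocations stable
# across CI runs, which is what lets sccache hit its cache.
python -m pip wheel . --no-build-isolation --no-deps -w dist/
```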
Notes for Reviewers
Checklist
How I tested this
Ran a first build with these changes to populate the cache, then ran again... saw a 100% cache hit rate and the actual wheel-building part of a wheel-build-libcuspatial job take only 37 seconds. (build link)