
Migrate feature diff for NN Descent from RAFT to cuVS #421

Open · wants to merge 7 commits into branch-24.12
Conversation

@divyegala divyegala commented Oct 21, 2024

This PR is an amalgamation of the diff of 3 PRs in RAFT:

  1. Enable distance return for NN Descent raft#2345
  2. Use slicing kernel to copy distances inside NN Descent raft#2380
  3. [FEA] Batching NN Descent raft#2403

This PR also addresses part 1 of #419 by making CAGRA use the compiled headers of NN Descent, which appears to have been a pending TODO:

```cpp
// TODO: Fixme- this needs to be migrated
#include "../../nn_descent.cuh"
```

Also, batch tests are disabled in this PR due to issue rapidsai/raft#2450. PR #424 will attempt to re-enable them.

@divyegala divyegala added feature request New feature or request non-breaking Introduces a non-breaking change labels Oct 21, 2024
@divyegala divyegala self-assigned this Oct 21, 2024
@github-actions github-actions bot added the cpp label Oct 21, 2024
@divyegala divyegala marked this pull request as ready for review October 22, 2024 21:26
@divyegala divyegala requested a review from a team as a code owner October 22, 2024 21:26
@@ -55,6 +55,8 @@ struct index_params : cuvs::neighbors::index_params {
size_t intermediate_graph_degree = 128; // Degree of input graph for pruning.
size_t max_iterations = 20; // Number of nn-descent iterations.
float termination_threshold = 0.0001; // Termination threshold of nn-descent.
bool return_distances = false; // return distances if true
Member
I think we want to set this to true by default and have CAGRA set it to false when it uses it. The reason the distances aren't needed in CAGRA is a special case, whereas in general a knn graph should have distances returned.

void build(raft::resources const& res,
index_params const& params,
raft::device_matrix_view<const float, int64_t, raft::row_major> dataset,
index<uint32_t>& index);
Member
Why doesn't nn-descent return the built index like all the other index types?

Member Author
It does; there's an API for that as well. But we also need this overload, especially for CAGRA, because CAGRA needs to own the knn graph that it sends to NN Descent. To do that, it has to construct an index first, which is why this API exists.

static const std::string RAFT_NAME = "raft";
using pinned_memory_resource = thrust::universal_host_pinned_memory_resource;
Member
We should work to minimize the direct thrust calls in this algorithm. Not something that needs to be done in this PR, but can you create an issue to track this as tech debt for some future version?

The more direct calls we make to libraries we don't control, the more things can break as those libraries evolve. We already have a lot of pinned-memory handling within RAFT, and we should use RAFT calls as much as possible.
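One possible direction, sketched under the assumption that RMM's host-side `pinned_memory_resource` is available (this requires a CUDA toolchain, so it is not runnable here, and whether RAFT would wrap it this way is an open question): route pinned allocations through an RMM resource instead of aliasing thrust's type directly, so the pinned memory stays behind an interface the RAPIDS libraries control.

```cpp
// Hedged sketch, not the actual fix: replace the direct thrust alias
// with an RMM-managed pinned host allocation. Requires CUDA; the exact
// RAFT-level abstraction to use is left to the tech-debt issue.
#include <rmm/mr/host/pinned_memory_resource.hpp>

void pinned_buffer_example() {
  rmm::mr::pinned_memory_resource pinned_mr;

  // Allocation and deallocation go through RMM rather than thrust.
  void* buf = pinned_mr.allocate(1024);
  pinned_mr.deallocate(buf, 1024);
}
```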
