[WIP] Adds specializations for handling sparse matrix unary operations #2599
base: develop
Conversation
@andrjohns would you mind taking a look at this?
Sorry for the delay in getting to this! Got a couple of queries and comments, but nothing major.
```cpp
 */
template <typename F, typename T, typename Enable = void>
template <typename F, typename T, bool ApplyZero = false,
```
I think this might be a bit clearer if the naming was `ReturnSparse` or `ReturnDense` (or something similar), so that it's explicit that the flag will affect the return type.
Sorry, I need to update the docs and this PR's summary. After discussion with @bob-carpenter, @nhuurre, and the Eigen dev team (mostly in #2597), I think it makes more sense to return a sparse matrix with all entries filled in. I do kind of wonder if we should have something like `exp_nonzero(sparse_matrix)` available at the Stan language level that would only run over the nonzero entries, but we can sort that out at a later time.
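To make the distinction concrete, here is a small standalone illustration (plain Eigen, not this PR's API; `exp_nonzero` is purely hypothetical) of why applying a function like `exp`, with `exp(0) = 1`, fills in every entry of a sparse matrix:

```cpp
#include <Eigen/Dense>
#include <Eigen/SparseCore>
#include <iostream>

int main() {
  // A 3x3 sparse matrix with only two stored entries.
  Eigen::SparseMatrix<double> x(3, 3);
  x.insert(0, 0) = 1.0;
  x.insert(2, 1) = 2.0;
  x.makeCompressed();

  // exp applied to every entry: each structural zero becomes exp(0) = 1, so the
  // returned sparse matrix ends up with all 9 entries stored.
  Eigen::MatrixXd dense = Eigen::MatrixXd(x).array().exp().matrix();
  Eigen::SparseMatrix<double> all_entries = dense.sparseView();
  std::cout << all_entries.nonZeros() << std::endl;  // prints 9

  // A hypothetical exp_nonzero(x) would instead apply exp only to the two stored
  // entries and leave the sparsity pattern (and the seven zeros) untouched.
  return 0;
}
```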
```cpp
 * parameter F to the specified matrix argument.
 *
 * @tparam SparseMat A type derived from `Eigen::SparseMatrixBase`
 * @tparam NonZeroZero Shortcut trick for using class template for deduction,
```
Doc type name doesn't match function declaration.
Also, neat trick!
```cpp
/**
 * Special case for `ApplyZero` set to true, returning a full sparse matrix.
 * Return the result of applying the function defined by the template
 * parameter F to the specified matrix argument.
```
The docs for both overloads state that a sparse matrix is returned, so we might need to clarify which is which.
```cpp
}
for (Eigen::Index k = 0; k < x.outerSize(); ++k) {
  for (typename SparseMat::InnerIterator it(x, k); it; ++it) {
    triplet_list[it.row() * x.cols() + it.col()]
```
Rather than iterating over each element, would it be more performant to `Map` the values as a vector and call the vectorised implementation? For example:

```cpp
Map<Eigen::VectorXd> mapSparse(x.valuePtr(), x.nonZeros());
apply_scalar_unary<F, Eigen::VectorXd>::apply(mapSparse);
```
It would definitely be better, though I avoided it because I was a bit confused about the compressed sparse matrix scheme Eigen uses. If you look at their docs under 'The SparseMatrix class', they sometimes allow empty values so it's easier to write to the sparse matrix, like:

```
Values:        22  7  _  3  5  14  _  _  1  _  17  8
InnerIndices:   1  2  _  0  2   4  _  _  2  _   1  4
```

I was worried that directly grabbing the values could lead to some bad things, since there may be places in the value pointer that are not initialized. Another issue is that our `arena_matrix` scheme assumes that if you pass it an Eigen map, the memory has already been allocated elsewhere for use in the reverse mode pass, which may not be true for sparse matrices.

So I agree that what you're saying here is definitely faster, but IMO for this first pass I'd rather take the safer path for now. Once we have everything working at the language level and know more about our constraints and the assumptions we can make, we can come back and do these kinds of optimizations.
Hmm, okay, now that I'm thinking about this more, I think that at the Stan language level we can make it so that you can only assign to elements that are already non-zero. So I wonder if what we can do here is just assume that we are always working with compressed matrices, since then we don't have to worry about the uninitialized values and can do the fast thing safely.
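A minimal sketch of that idea (plain Eigen, not this PR's code; the function name is illustrative): once `makeCompressed()` has been called there are no holes in the value array, so the stored nonzeros can be viewed through an `Eigen::Map` and the vectorised implementation applied to them directly.

```cpp
#include <Eigen/Dense>
#include <Eigen/SparseCore>

// Apply atan to only the stored nonzeros of a sparse matrix in place, by
// mapping its value array as a dense vector. This is safe because the matrix
// is compressed first, so valuePtr() is contiguous with no free space.
inline void atan_nonzeros_inplace(Eigen::SparseMatrix<double>& x) {
  x.makeCompressed();
  Eigen::Map<Eigen::VectorXd> vals(x.valuePtr(), x.nonZeros());
  vals = vals.array().atan().matrix();  // vectorised elementwise op over the nonzeros
}
```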
```cpp
 * @tparam Container type to check
 */
template <typename Container>
using is_container = bool_constant<
    math::disjunction<is_eigen<Container>, is_std_vector<Container>>::value>;
using is_container
```
Wouldn't we also expect sparse matrices to be detected as a container though?
So I'm kind of confused. It seems like a lot of the time we declare things containers when they are "vectorizable", so for sparse matrices I was like, "Well, these are not really vectorizable, so I would not say they are a container." IMO I kind of feel like we should just have container simply mean a container pointing to items with dynamically allocated memory, like `std::vector<std::vector<Eigen::Matrix>>`. There are a few types of memory patterns that I'm thinking of here:

- `std::vector<double/var>` & `Eigen::Matrix<double/var, R, C>`, where generic memory can be mapped to and accessed linearly
- `std::vector<std::vector<double/var>>` & `std::vector<Eigen::Matrix<double/var, R, C>>`, where the generic memory the container points to can be mapped to linearly
- `Eigen::SparseMatrix<double/var>`: if compressed, this can be mapped to generic memory and accessed linearly

Hmm, maybe this PR is the wrong path and I should be using the `apply_vector_unary` pattern? Because then I would do what you are saying in the comment above and just call the vectorized function over the memory.
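If sparse matrices do end up as their own category rather than being folded into `is_container`, detecting them is straightforward. A sketch along these lines could work (the trait name is hypothetical, not an existing Stan Math trait, and it only matches plain non-reference types):

```cpp
#include <Eigen/SparseCore>
#include <type_traits>

// Hypothetical trait: true only for Eigen::SparseMatrix specializations, so
// sparse matrices can be dispatched separately from dense containers.
template <typename T>
struct is_sparse_matrix_sketch : std::false_type {};

template <typename Scalar, int Options, typename StorageIndex>
struct is_sparse_matrix_sketch<Eigen::SparseMatrix<Scalar, Options, StorageIndex>>
    : std::true_type {};

static_assert(is_sparse_matrix_sketch<Eigen::SparseMatrix<double>>::value,
              "sparse matrices should be detected");
static_assert(!is_sparse_matrix_sketch<Eigen::MatrixXd>::value,
              "dense matrices should not be detected");
```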
Moving to draft; it's still marked as "WIP." Please move it out of draft when you're ready.
Summary
This is a WIP for supporting sparse matrix unary operations in Stan math. Related to #2597, this API will return a sparse matrix with all values filled in when the function does not have an `f(0) -> 0` mapping. The scheme here just adds a `bool ApplyZero` template parameter to `apply_scalar_unary`, where if `ApplyZero` is true a dense matrix is returned, and for `false` a sparse matrix is returned. Though note that if we want to change the API to always be `Sparse -> Sparse`, that's pretty easy here.
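As a rough standalone sketch of the mechanism described above (not the PR's actual `apply_scalar_unary` code; it uses C++17 `if constexpr` and a hand-rolled functor purely for brevity), a boolean template parameter switching between a dense and a sparse return might look like:

```cpp
#include <Eigen/Dense>
#include <Eigen/SparseCore>
#include <cmath>

// Example scalar functor in the style used by the unary function machinery.
struct exp_fun_sketch {
  static double fun(double x) { return std::exp(x); }
};

// ApplyZero = true: f is applied to every entry (f(0) may be nonzero), so a
// dense matrix is returned. ApplyZero = false: f is applied only to the stored
// nonzeros, keeping the sparsity pattern and the sparse return type.
template <typename F, bool ApplyZero>
auto sparse_unary_sketch(const Eigen::SparseMatrix<double>& x) {
  if constexpr (ApplyZero) {
    Eigen::MatrixXd dense = Eigen::MatrixXd(x);
    return Eigen::MatrixXd(dense.unaryExpr([](double v) { return F::fun(v); }));
  } else {
    Eigen::SparseMatrix<double> result = x;
    result.makeCompressed();  // guarantee a contiguous value array
    double* vals = result.valuePtr();
    for (Eigen::Index i = 0; i < result.nonZeros(); ++i) {
      vals[i] = F::fun(vals[i]);
    }
    return result;
  }
}
```

Here `sparse_unary_sketch<exp_fun_sketch, true>(x)` returns a dense matrix with `exp` applied everywhere, while `sparse_unary_sketch<exp_fun_sketch, false>(x)` keeps the sparsity pattern and only transforms the stored values.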
Tests

The tests have been modified for `atan()` and `acos()` to take a vector input and the usual lambda functor we use for testing, and to make a sparse matrix with the vector values along the diagonal via a helper function `make_sparse_mat_func()`.

Side Effects
Release notes
Adds support for unary sparse matrix functions
Checklist
Math issue: How to handle sparse matrices when f(0) returns not zero? #2597
Copyright holder: Steve Bronder
The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
- Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
- Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
- the basic tests are passing
  - unit tests pass (`./runTests.py test/unit`)
  - header checks pass (`make test-headers`)
  - dependency checks pass (`make test-math-dependencies`)
  - docs build (`make doxygen`)
  - code style checks pass (`make cpplint`)
- the code is written in idiomatic C++ and changes are documented in the doxygen
- the new changes are tested