
QuadratiK: A Collection of Methods Using Kernel-Based Quadratic Distances for Statistical Inference and Clustering #632

giovsaraceno opened this issue Mar 13, 2024 · 54 comments


giovsaraceno commented Mar 13, 2024

Submitting Author Giovanni Saraceno
Submitting Author Github Handle: @giovsaraceno
Other Package Authors Github handles: @rmj3197
Repository: https://github.com/giovsaraceno/QuadratiK-package
Version submitted: 1.1.1
Submission type: Stats
Badge grade: gold
Editor: @emitanaka
Reviewers: @kasselhingee

Due date for @kasselhingee: 2024-07-16

Archive: TBD
Version accepted: TBD


  • DESCRIPTION file:
Type: Package
Package: QuadratiK
Title: A Collection of Methods Using Kernel-Based Quadratic Distances for 
       Statistical Inference and Clustering
Version: 1.0.0
Authors@R: c(
    person("Giovanni", "Saraceno", , "[email protected]", role = c("aut", "cre"),
           comment = "ORCID 0000-0002-1753-2367"),
    person("Marianthi", "Markatou", role = "aut"),
    person("Raktim", "Mukhopadhyay", role = "aut"),
    person("Mojgan", "Golzy", role = "ctb")
    )
Maintainer: Giovanni Saraceno <[email protected]>
Description: The package includes a test for multivariate normality, a test
    for uniformity on the sphere, non-parametric two- and k-sample tests,
    random generation of points from the Poisson kernel-based density and a
    clustering algorithm for spherical data. For more information see
    Saraceno, G., Markatou, M., Mukhopadhyay, R., Golzy, M. (2024)
    <arXiv:2402.02290>, Ding, Y., Markatou, M., Saraceno, G. (2023)
    <doi:10.5705/ss.202022.0347>, and Golzy, M., Markatou, M. (2020)
    <doi:10.1080/10618600.2020.1740713>.
License: GPL (>= 3)
URL: https://cran.r-project.org/web/packages/QuadratiK/index.html, 
     https://github.com/giovsaraceno/QuadratiK-package
BugReports: https://github.com/giovsaraceno/QuadratiK-package/issues
Depends:
    R (>= 3.5.0)
Imports:
    cluster,
    clusterRepro,
    doParallel,
    foreach,
    ggplot2,
    ggpp,
    ggpubr,
    MASS,
    mclust,
    methods,
    moments,
    movMF,
    mvtnorm,
    Rcpp,
    RcppEigen,
    rgl,
    rlecuyer,
    rrcov,
    sn,
    stats,
    Tinflex
Suggests:
    knitr,
    rmarkdown,
    roxygen2,
    testthat (>= 3.0.0)
LinkingTo:
    Rcpp,
    RcppEigen
VignetteBuilder:
    knitr
Config/testthat/edition: 3
Encoding: UTF-8
LazyData: true
Roxygen: list(markdown=TRUE, roclets=c("namespace", "rd", "srr::srr_stats_roclet"))
RoxygenNote: 7.2.3

Scope

Data Lifecycle Packages

  • [ ] data retrieval
  • [ ] data extraction
  • [ ] data munging
  • [ ] data deposition
  • [ ] data validation and testing
  • [ ] workflow automation
  • [ ] version control
  • [ ] citation management and bibliometrics
  • [ ] scientific software wrappers
  • [ ] field and lab reproducibility tools
  • [ ] database software bindings
  • [ ] geospatial data
  • [ ] text analysis

Statistical Packages

  • [ ] Bayesian and Monte Carlo Routines

  • [x] Dimensionality Reduction, Clustering, and Unsupervised Learning

  • [ ] Machine Learning

  • [ ] Regression and Supervised Learning

  • [ ] Exploratory Data Analysis (EDA) and Summary Statistics

  • [ ] Spatial Analyses

  • [ ] Time Series Analyses

  • Explain how and why the package falls under these categories (briefly, 1-2 sentences). Please note any areas you are unsure of:

This category is the most suitable due to QuadratiK's clustering technique, specifically designed for spherical data. The package's clustering algorithm falls within the realm of unsupervised learning, where the focus is on identifying groupings in the data without pre-labeled categories. The two- and k-sample tests serve as additional tools for testing the differences between the identified groups.
Following the link https://stats-devguide.ropensci.org/standards.html we noticed in the "Table of contents" that category 6.9 refers to Probability Distributions. We are unsure how, and whether, we fit this category. Can you please advise?

Yes, we have incorporated documentation of standards into our QuadratiK package by utilizing the srr package, considering the categories "General" and "Dimensionality Reduction, Clustering, and Unsupervised Learning", in line with the recommendations provided in the rOpenSci Statistical Software Peer Review Guide.

  • Who is the target audience and what are scientific applications of this package?

The QuadratiK package offers robust tools for goodness-of-fit testing, a fundamental aspect in statistical analysis, where accurately assessing the fit of probability distributions is essential. This is especially critical in research domains where model accuracy has direct implications on conclusions and further research directions. Spherical data structures are common in fields such as biology, geosciences and astronomy, where data points are naturally mapped to a sphere. QuadratiK provides a tailored approach to effectively handle and interpret these data. Furthermore, this package is also of particular interest to professionals in health and biological sciences, where understanding and interpreting spherical data can be crucial in studies ranging from molecular biology to epidemiology. Moreover, its implementation in both R and Python broadens its accessibility, catering to a wide audience accustomed to these popular programming languages.

Yes, there are other R packages that address goodness-of-fit (GoF) testing and multivariate analysis. Notable among these is the energy package for energy statistics-based tests. The function kmmd in the kernlab package offers a kernel-based test with a similar mathematical formulation. The package sphunif provides all the tests for uniformity on the sphere available in the literature, including the test for uniformity based on the Poisson kernel. However, there are fundamental differences between the methods encoded in the aforementioned packages and those offered in the QuadratiK package.

QuadratiK uniquely focuses on kernel-based quadratic distances methods for GoF testing, offering a comprehensive set of tools for one-sample, two-sample, and k-sample tests. This specialization provides more nuanced and robust methodologies for statistical analysis, especially in complex multivariate contexts. QuadratiK is optimized for high-dimensional datasets, employing efficient C++ implementations. This makes it particularly suitable for contemporary large-scale data analysis challenges. The package introduces advanced methods for kernel centering and critical value computation, as well as optimal tuning parameter selection based on midpower analysis. QuadratiK includes a unique clustering algorithm for spherical data. These innovations are not covered in other available packages. With implementations in both R and Python, QuadratiK appeals to a wider audience across different programming communities. We also provide a user-friendly dashboard application which further enhances accessibility, catering to users with varying levels of statistical and programming expertise.

In summary, there are fundamental differences between QuadratiK and all existing R packages:

  1. The goodness-of-fit tests are U-statistics based on centered kernels. The concept and methodology of centering is novel and unique to our methods and is not part of the methods of other existing packages.
  2. An algorithm for connecting the tuning parameter with the statistical properties of the test, namely power and degrees of freedom of the kernel (DOF) is provided. This feature differentiates our novel methods from all encoded methods in the aforementioned R packages.
  3. A new clustering algorithm for data that reside on the sphere is offered. This aspect is not a feature of existing packages.
  4. We also offer algorithms for generating random samples from Poisson kernel-based densities. This capability is also unique to our package.

Yes, our package, QuadratiK, is compliant with the rOpenSci guidelines on Ethics, Data Privacy, and Human Subjects Research. We have carefully considered and adhered to ethical standards and data privacy laws relevant to our work.

  • Any other questions or issues we should be aware of?:

Please see the question posed in the first bullet.

@ldecicco-USGS

@ropensci-review-bot check srr

1 similar comment

maelle commented Mar 18, 2024

@ropensci-review-bot check srr

@ropensci-review-bot

'srr' standards compliance:

  • Complied with: 57 / 101 = 56.4% (general: 37 / 68; unsupervised: 20 / 33)
  • Not complied with: 44 / 101 = 43.6% (general: 31 / 68; unsupervised: 13 / 33)

✔️ This package complies with > 50% of all standards and may be submitted.

@ldecicco-USGS

Thanks for the submission @giovsaraceno ! I'm getting some advice from the other editors about your question. One thing that would be really helpful - could you push up your documentation to a GitHub page?

From the usethis package, there's a function that helps with setting it up:
https://usethis.r-lib.org/reference/use_github_pages.html
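
For example, a minimal sketch run from within the package project (this assumes the repository is already connected to GitHub):

```r
# One-time setup: create an orphan gh-pages branch and
# activate GitHub Pages for the repository
usethis::use_github_pages()

# Optional: also configure pkgdown, with a workflow that builds
# the documentation site and deploys it to that branch
usethis::use_pkgdown_github_pages()
```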


mpadge commented Mar 20, 2024

Hi @giovsaraceno, Mark here from the rOpenSci stats team to answer your question. We've done our best to clarify the role of Probability Distributions Standards:

> Unlike most other categories of standards, packages which fit in this category will also generally be expected to fit into at least one other category of statistical software. Reflecting that expectation, standards for probability distributions will be expected to only pertain to some (potentially small) portion of code in any package.

So packages should generally fit within some main category, with Probability Distributions being an additional category. In your case, Dimensionality Reduction seems like the appropriate main category, but it seems like your package would also fit within Probability Distributions. Given that, the next step would be for you to estimate what proportion of those standards you think might apply to your package? Our general rule-of-thumb is that at least 50% should apply, but for Probability Distributions as an additional category, that figure may be lower.

We are particularly keen to document compliance with this category, because it is where our standards have a large overlap with many core routines of the R language itself. As always, we encourage feedback on our standards, so please also feel very welcome to open issues in the Stats Software repository, or add comments or questions in the discussion pages. Thanks for your submission!

@giovsaraceno

> Thanks for the submission @giovsaraceno ! I'm getting some advice from the other editors about your question. One thing that would be really helpful - could you push up your documentation to a GitHub page?
>
> From the usethis package, there's a function that helps with setting it up: https://usethis.r-lib.org/reference/use_github_pages.html

Thanks @ldecicco-USGS for your guidance during this process. Following your suggestion, I've now pushed the documentation for the QuadratiK package to a GitHub page. You can find it displayed on the main page of the GitHub repository. Here's the direct link for easy access: QuadratiK package GitHub page.

@giovsaraceno

> Hi @giovsaraceno, Mark here from the rOpenSci stats team to answer your question. We've done our best to clarify the role of Probability Distributions Standards:
>
> > Unlike most other categories of standards, packages which fit in this category will also generally be expected to fit into at least one other category of statistical software. Reflecting that expectation, standards for probability distributions will be expected to only pertain to some (potentially small) portion of code in any package.
>
> So packages should generally fit within some main category, with Probability Distributions being an additional category. In your case, Dimensionality Reduction seems like the appropriate main category, but it seems like your package would also fit within Probability Distributions. Given that, the next step would be for you to estimate what proportion of those standards you think might apply to your package? Our general rule-of-thumb is that at least 50% should apply, but for Probability Distributions as an additional category, that figure may be lower.
>
> We are particularly keen to document compliance with this category, because it is where our standards have a large overlap with many core routines of the R language itself. As always, we encourage feedback on our standards, so please also feel very welcome to open issues in the Stats Software repository, or add comments or questions in the discussion pages. Thanks for your submission!

Hi Mark,

Thank you for the additional clarification regarding the standards for Probability Distributions and their integration with other statistical software categories. Following your guidance, we have conducted a thorough review of the standards applicable to the Probability Distributions category in relation to our package.

Based on our assessment, we found that the current version of our package satisfies 14% of the standards directly. Furthermore, we identified that an additional 36% of the standards could potentially apply to our package, but this would require us to make some enhancements, including the addition of checks and test codes. We feel the remaining 50% of the standards are not applicable to our package.

We are committed to improving our package and aim to fulfill the applicable standards. To this end, we plan to work on a separate branch dedicated to implementing these enhancements, with the goal of meeting 50% of the standards for the Probability Distributions category. Before proceeding, we would greatly appreciate your opinion on this plan.

Thank you for your time and support. Giovanni

@giovsaraceno


Hi Mark,

We addressed the enhancements we discussed, and our package now meets 50% of the standards for the Probability Distributions category. These updates are in the probability-distributions-standards branch of our repository.
We would like your opinion on merging this branch with the submitted version of the package.

Thank you, Giovanni


mpadge commented Mar 27, 2024

Hi Giovanni, your srrstats tags for probability distribution standards definitely look good enough to proceed. That said, one aspect which could be improved, and which I would request if I were reviewing the package, is the compliance statements in the tests. In both test-dpkb.R and test-rpkb.R you claim compliance in single statements at the start, yet I can't really see where or how a few of these are really complied with. In particular, there do not appear to be explicit tests for output values, as these are commonly tested using expect_equal with an explicit tolerance parameter, which you don't have. It is also not clear to me where and how you compare results of different distributions, because you have no annotations in the tests about what the return values of the functions are.

Those are very minor points which you may ignore for the moment if you'd like to get the review process started, or you could quickly address them straight away if you prefer. Either way, feel free to ask the bot to check srr when you think you're ready to proceed. Thanks!

@giovsaraceno

Hi, thank you for your suggestions on our compliance statements and testing practices.
Regarding the explicit testing for output values and the use of expect_equal with a tolerance parameter, we aimed to ensure that our functions return the expected outputs. However, we recognize that our current tests may not explicitly demonstrate compliance with this standard in the way you've described. We're uncertain about the best approach to incorporate expect_equal with a tolerance parameter effectively for testing the numeric equality of outputs from the provided random generation and density functions. Can you provide some tips?

As for comparing results from different distributions, the rpkb function in our package provides options to generate random observations using three distinct algorithms based on different probability distributions. We've conducted tests to confirm that each method functions as intended. We have also added a new vignette in which the methods are compared by graphically displaying the generated points. Is this what you are looking for?

We're inclined to address them promptly. We would appreciate an answer to the questions posed above so that we can start the review process.
Thanks, Giovanni

@noamross

Sorry we didn't reply faster, @giovsaraceno. For, say, a single-variable distribution, tests might include:

  • A correctness test that the density function with given parameters has means, modes, or variances as theoretically expected.
  • A parameter-recovery test that the mean of a sufficiently large number of randomly generated values is within a window of expectations.

In your case my understanding is that you are generating multivariate outputs. Ultimately we aim to see tests that those outputs are as expected, for both density and random values. I think the thing to do is to test that summary properties of those outputs (deterministic for density, within bounds for random values) match those expected based on the input parameters.
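
For instance, a minimal testthat sketch of both kinds of test, assuming illustrative signatures dpkb(x, mu, rho) and rpkb(n, mu, rho) that return a vector of densities and a matrix of observations respectively (adjust to the package's actual interface):

```r
library(testthat)

test_that("dpkb() integrates to 1 on the circle (correctness)", {
  # With d = 2 the sphere is the unit circle, so the density can be
  # integrated numerically over the angle and should come out at ~1.
  ang <- seq(0, 2 * pi, length.out = 1e5)
  pts <- cbind(cos(ang), sin(ang))
  dens <- dpkb(pts, mu = c(1, 0), rho = 0.8)   # assumed signature
  expect_equal(mean(dens) * 2 * pi, 1, tolerance = 1e-3)
})

test_that("rpkb() samples concentrate around mu (parameter recovery)", {
  set.seed(42)
  mu <- c(0, 0, 1)
  x <- rpkb(n = 5000, mu = mu, rho = 0.9)      # assumed signature
  # The sample mean direction should be close to mu.
  m <- colMeans(x)
  mean_dir <- m / sqrt(sum(m^2))
  expect_lt(sqrt(sum((mean_dir - mu)^2)), 0.05)
})
```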

@giovsaraceno

Thanks @noamross for your explanation. We have taken your suggestions into consideration and have implemented them accordingly.
We are now ready to request the automatic bot check for our package. We look forward to any further instructions or feedback that might come from this next step.

@ldecicco-USGS

@ropensci-review-bot check package

@ropensci-review-bot

Thanks, about to send the query.

@ropensci-review-bot

🚀

The following problems were found in your submission template:

  • HTML variable [editor] is missing
  • HTML variable [reviewers-list] is missing
  • HTML variable [due-dates-list] is missing
    Editors: Please ensure these problems with the submission template are rectified. Package checks have been started regardless.

👋

@ropensci-review-bot

Checks for QuadratiK (v1.0.0)

git hash: 21541a40

  • ✔️ Package is already on CRAN.
  • ✔️ has a 'codemeta.json' file.
  • ✔️ has a 'contributing' file.
  • ✔️ uses 'roxygen2'.
  • ✔️ 'DESCRIPTION' has a URL field.
  • ✔️ 'DESCRIPTION' has a BugReports field.
  • ✔️ Package has at least one HTML vignette
  • ✔️ All functions have examples.
  • ✔️ Package has continuous integration checks.
  • ✔️ Package coverage is 78.2%.
  • ✖️ Package contains unexpected files.
  • ✔️ R CMD check found no errors.
  • ✖️ R CMD check found 1 warning.
  • 👀 Function names are duplicated in other packages

Important: All failing checks above must be addressed prior to proceeding

(Checks marked with 👀 may be optionally addressed.)

Package License: GPL (>= 3)


1. rOpenSci Statistical Standards (srr package)

This package is in the following category:

  • Dimensionality Reduction, Clustering and Unsupervised Learning

✔️ All applicable standards [v0.2.0] have been documented in this package (204 complied with; 49 N/A standards)

Click to see the report of author-reported standards compliance of the package with links to associated lines of code, which can be re-generated locally by running the srr_report() function from within a local clone of the repository.


2. Package Dependencies

Details of Package Dependency Usage (click to open)

The table below tallies all function calls to all packages ('ncalls'), both internal (r-base + recommended, along with the package itself), and external (imported and suggested packages). 'NA' values indicate packages to which no identified calls to R functions could be found. Note that these results are generated by an automated code-tagging system which may not be entirely accurate.

| type | package | ncalls |
|------------|--------------|--------|
| internal | base | 382 |
| internal | QuadratiK | 50 |
| internal | utils | 10 |
| internal | grDevices | 1 |
| imports | stats | 29 |
| imports | methods | 26 |
| imports | sn | 14 |
| imports | ggpp | 2 |
| imports | cluster | 1 |
| imports | mclust | 1 |
| imports | moments | 1 |
| imports | rrcov | 1 |
| imports | clusterRepro | NA |
| imports | doParallel | NA |
| imports | foreach | NA |
| imports | ggplot2 | NA |
| imports | ggpubr | NA |
| imports | MASS | NA |
| imports | movMF | NA |
| imports | mvtnorm | NA |
| imports | Rcpp | NA |
| imports | RcppEigen | NA |
| imports | rgl | NA |
| imports | rlecuyer | NA |
| imports | Tinflex | NA |
| suggests | knitr | NA |
| suggests | rmarkdown | NA |
| suggests | roxygen2 | NA |
| suggests | testthat | NA |
| linking_to | Rcpp | NA |
| linking_to | RcppEigen | NA |

Click below for tallies of functions used in each package. Locations of each call within this package may be generated locally by running 's <- pkgstats::pkgstats(<path/to/repo>)', and examining the 'external_calls' table.

base

list (46), data.frame (26), matrix (24), nrow (23), t (20), log (19), rep (19), ncol (18), c (14), numeric (12), for (11), sqrt (10), length (8), mean (8), as.numeric (6), return (6), sample (6), T (6), vapply (6), apply (5), as.factor (5), table (5), unique (5), as.vector (4), cumsum (4), exp (4), rbind (4), sum (4), as.matrix (3), kappa (3), lapply (3), lgamma (3), pi (3), q (3), replace (3), unlist (3), as.integer (2), diag (2), max (2), readline (2), rownames (2), rowSums (2), which (2), which.max (2), with (2), beta (1), colMeans (1), expand.grid (1), F (1), factor (1), if (1), levels (1), norm (1), rep.int (1), round (1), seq_len (1), subset (1)

QuadratiK

DOF (3), kbNormTest (3), normal_CV (3), C_d_lambda (2), compute_CV (2), cv_ksample (2), d2lpdf (2), dlpdf (2), lpdf (2), norm_vec (2), objective_norm (2), poisson_CV (2), rejvmf (2), sample_hypersphere (2), statPoissonUnif (2), compare_qq (1), compute_stats (1), computeKernelMatrix (1), computePoissonMatrix (1), dpkb (1), elbowMethod (1), generate_SN (1), NonparamCentering (1), objective_2 (1), objective_k (1), ParamCentering (1), pkbc_validation (1), rejacg (1), rejpsaw (1), select_h (1), stat_ksample_cpp (1), stat2sample (1)

stats

df (12), quantile (4), dist (2), rnorm (2), runif (2), aggregate (1), cov (1), D (1), qchisq (1), sd (1), sigma (1), uniroot (1)

methods

setMethod (12), setGeneric (8), new (3), setClass (3)

sn

rmsn (14)

utils

data (8), prompt (2)

ggpp

annotate (2)

cluster

silhouette (1)

grDevices

colorRampPalette (1)

mclust

adjustedRandIndex (1)

moments

skewness (1)

rrcov

PcaLocantore (1)

NOTE: Some imported packages appear to have no associated function calls; please ensure with author that these 'Imports' are listed appropriately.


3. Statistical Properties

This package features some noteworthy statistical properties which may need to be clarified by a handling editor prior to progressing.

Details of statistical properties (click to open)

The package has:

  • code in C++ (17% in 2 files) and R (83% in 12 files)
  • 4 authors
  • 5 vignettes
  • 1 internal data file
  • 21 imported packages
  • 24 exported functions (median 14 lines of code)
  • 56 non-exported functions in R (median 16 lines of code)
  • 16 C++ functions (median 13 lines of code)

Statistical properties of package structure as distributional percentiles in relation to all current CRAN packages
The following terminology is used:

  • loc = "Lines of Code"
  • fn = "function"
  • exp/not_exp = exported / not exported

All parameters are explained as tooltips in the locally-rendered HTML version of this report generated by the checks_to_markdown() function

The final measure (fn_call_network_size) is the total number of calls between functions (in R), or more abstract relationships between code objects in other languages. Values are flagged as "noteworthy" when they lie in the upper or lower 5th percentile.

| measure | value | percentile | noteworthy |
|---------|-------|------------|------------|
| files_R | 12 | 65.5 | |
| files_src | 2 | 79.1 | |
| files_vignettes | 5 | 96.9 | |
| files_tests | 10 | 90.7 | |
| loc_R | 1408 | 76.6 | |
| loc_src | 281 | 34.1 | |
| loc_vignettes | 235 | 55.3 | |
| loc_tests | 394 | 70.0 | |
| num_vignettes | 5 | 97.9 | TRUE |
| data_size_total | 11842 | 71.9 | |
| data_size_median | 11842 | 80.1 | |
| n_fns_r | 80 | 70.4 | |
| n_fns_r_exported | 24 | 72.5 | |
| n_fns_r_not_exported | 56 | 70.6 | |
| n_fns_src | 16 | 40.4 | |
| n_fns_per_file_r | 5 | 67.1 | |
| n_fns_per_file_src | 8 | 69.1 | |
| num_params_per_fn | 5 | 69.6 | |
| loc_per_fn_r | 15 | 46.1 | |
| loc_per_fn_r_exp | 14 | 35.1 | |
| loc_per_fn_r_not_exp | 16 | 54.8 | |
| loc_per_fn_src | 13 | 41.6 | |
| rel_whitespace_R | 24 | 82.7 | |
| rel_whitespace_src | 18 | 36.2 | |
| rel_whitespace_vignettes | 16 | 29.2 | |
| rel_whitespace_tests | 34 | 78.1 | |
| doclines_per_fn_exp | 50 | 62.8 | |
| doclines_per_fn_not_exp | 0 | 0.0 | TRUE |
| fn_call_network_size | 50 | 66.3 | |

3a. Network visualisation

Click to see the interactive network visualisation of calls between objects in package


4. goodpractice and other checks

Details of goodpractice checks (click to open)

3a. Continuous Integration Badges

(There do not appear to be any)

GitHub Workflow Results

| id | name | conclusion | sha | run_number | date |
|------------|----------------------------|---------|--------|----|------------|
| 8851531581 | pages build and deployment | success | 21541a | 25 | 2024-04-26 |
| 8851531648 | pkgcheck | failure | 21541a | 60 | 2024-04-26 |
| 8851531643 | pkgdown | success | 21541a | 25 | 2024-04-26 |
| 8851531649 | R-CMD-check | success | 21541a | 83 | 2024-04-26 |
| 8851531642 | test-coverage | success | 21541a | 83 | 2024-04-26 |

3b. goodpractice results

R CMD check with rcmdcheck

R CMD check generated the following warning:

  1. checking whether package ‘QuadratiK’ can be installed ... WARNING
    Found the following significant warnings:
    Warning: 'rgl.init' failed, running with 'rgl.useNULL = TRUE'.
    See ‘/tmp/RtmpQrtXuf/file133861d90686/QuadratiK.Rcheck/00install.out’ for details.

R CMD check generated the following note:

  1. checking installed package size ... NOTE
    installed size is 16.6Mb
    sub-directories of 1Mb or more:
    libs 15.0Mb

R CMD check generated the following check_fails:

  1. no_import_package_as_a_whole
  2. rcmdcheck_examples_run_without_warnings
  3. rcmdcheck_significant_compilation_warnings
  4. rcmdcheck_reasonable_installed_size

Test coverage with covr

Package coverage: 78.21

Cyclocomplexity with cyclocomp

The following function has cyclocomplexity >= 15:

| function | cyclocomplexity |
|----------|-----------------|
| select_h | 46 |

Static code analyses with lintr

lintr found the following 20 potential issues:

| message | number of times |
|-------------------------------------------------|---|
| Avoid library() and require() calls in packages | 9 |
| Lines should not be more than 80 characters. | 9 |
| Use <-, not =, for assignment. | 2 |


5. Other Checks

Details of other checks (click to open)

✖️ Package contains the following unexpected files:

  • src/RcppExports.o
  • src/kernel_function.o

✖️ The following function name is duplicated in other packages:

  • extract_stats from ggstatsplot


Package Versions

| package | version |
|----------|----------|
| pkgstats | 0.1.3.13 |
| pkgcheck | 0.1.2.21 |
| srr | 0.1.2.9 |


Editor-in-Chief Instructions:

Processing may not proceed until the items marked with ✖️ have been resolved.


giovsaraceno commented May 13, 2024

We have resolved all the marked items and are now ready to request the automatic bot check.
Thanks


jooolia commented May 29, 2024

@ropensci-review-bot check package

@jooolia jooolia self-assigned this May 29, 2024
@ropensci-review-bot

Thanks, about to send the query.

@ropensci-review-bot

🚀

The following problems were found in your submission template:

  • HTML variable [editor] is missing
  • HTML variable [reviewers-list] is missing
  • HTML variable [due-dates-list] is missing
    Editors: Please ensure these problems with the submission template are rectified. Package checks have been started regardless.

👋

@giovsaraceno

Hi @jooolia,
thanks for checking the package. Could you advise on how we should address the listed problems?
At the moment, we do not know what information to insert in the mentioned fields (editor, reviewers, and due-dates list).
Thanks in advance


mpadge commented May 31, 2024

@jooolia The automated checks failed because of the issue linked above. @giovsaraceno When you've fixed this issue and confirmed that the pkgcheck workflows once again succeed in your repo, please call @ropensci-review-bot check package here to run the checks again. Thanks

@giovsaraceno

@ropensci-review-bot check package

@ropensci-review-bot

Thanks, about to send the query.

@ropensci-review-bot

🚀

The following problems were found in your submission template:

  • HTML variable [editor] is missing
  • HTML variable [reviewers-list] is missing
  • HTML variable [due-dates-list] is missing
    Editors: Please ensure these problems with the submission template are rectified. Package checks have been started regardless.

👋

@emitanaka

@ropensci-review-bot seeking reviewers

@ropensci-review-bot

Please add this badge to the README of your package repository:

[![Status at rOpenSci Software Peer Review](https://badges.ropensci.org/632_status.svg)](https://github.com/ropensci/software-review/issues/632)

Furthermore, if your package does not have a NEWS.md file yet, please create one to capture the changes made during the review process. See https://devguide.ropensci.org/releasing.html#news

@emitanaka

@ropensci-review-bot add @kasselhingee as reviewer

@ropensci-review-bot

Can't assign reviewer because there is no editor assigned for this submission yet

@emitanaka

@ropensci-review-bot assign @kasselhingee as reviewer

@ropensci-review-bot

Can't assign reviewer because there is no editor assigned for this submission yet

@emitanaka

@mpadge It worked before, but I'm not sure why adding a reviewer is not working anymore here?


mpadge commented Jun 21, 2024

@emitanaka Can you please try again?

@emitanaka

@ropensci-review-bot assign @kasselhingee as reviewer

@ropensci-review-bot

Can't assign reviewer because there is no editor assigned for this submission yet

@emitanaka

@ropensci-review-bot add @kasselhingee as reviewer

@ropensci-review-bot

Can't assign reviewer because there is no editor assigned for this submission yet

@emitanaka

@mpadge Nope, still not working


mpadge commented Jun 24, 2024

@emitanaka Sorry about that. The issue template at the very top had been modified, including removing the "editor" field needed by the bot to identify you. I've reinstated everything now, so it should be okay.

@giovsaraceno There are a couple of fields which still need to be filled in. Can you please edit the top of the initial issue text at the top and fill out:

  • Version submitted
  • "Badge grade" - choose one and delete all others

Thanks!

@giovsaraceno

@mpadge I have modified the initial issue text by adding the version submitted and choosing the badge grade. Please let us know if anything else is needed.
Thanks!

@giovsaraceno

> Please add this badge to the README of your package repository:
>
> [![Status at rOpenSci Software Peer Review](https://badges.ropensci.org/632_status.svg)](https://github.com/ropensci/software-review/issues/632)
>
> Furthermore, if your package does not have a NEWS.md file yet, please create one to capture the changes made during the review process. See https://devguide.ropensci.org/releasing.html#news

We have added the provided badge to the README file and added the NEWS.md file to the package.

@emitanaka

@ropensci-review-bot add @kasselhingee as reviewer

@ropensci-review-bot

@kasselhingee added to the reviewers list. Review due date is 2024-07-16. Thanks @kasselhingee for accepting to review! Please refer to our reviewer guide.

rOpenSci’s community is our best asset. We aim for reviews to be open, non-adversarial, and focused on improving software quality. Be respectful and kind! See our reviewers guide and code of conduct for more.

@ropensci-review-bot

@kasselhingee: If you haven't done so, please fill this form for us to update our reviewers records.

@emitanaka

It is working now. Thank you @mpadge !

@kasselhingee

Package Review

  • I have no relationship past or present with the package authors.
  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (if you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need: clearly stating problems the software is designed to solve and its target audience in README
  • Installation instructions: for the development version of package and any non-standard dependencies in README
  • Vignette(s): demonstrating major functionality that runs successfully locally
  • Function Documentation: for all exported functions
  • Examples: (that run successfully locally) for all exported functions
  • Community guidelines: including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests: Unit tests cover essential functions of the package and a reasonable range of inputs and conditions. All tests pass on the local machine.
  • Packaging guidelines: The package conforms to the rOpenSci packaging guidelines.

Estimated hours spent reviewing:

  • Should the author(s) deem it appropriate, I agree to be acknowledged as a package reviewer ("rev" role) in the package DESCRIPTION file.

Review Comments

I'm only partially familiar with the area of spherical data. Both your package's tests of uniformity and clustering sound useful. The kb.test() tests sound complicated but I'm sure with more explanation their use will become clear.

I found it really hard to understand your package initially. That was because most of the documentation ignores G1.3 on explaining statistical terms. For example, I jumped into the kb.test() part of the package which was really confusing until I found an arXiv document on QuadratiK that had more information.
Although the other parts were less confusing to me, they still didn't explain themselves well, often stating that the function performs "the [a bespoke method by authors]" without explanation of the method.
Because of this, currently your package feels only usable by expert statisticians who have read your papers and want to try out your methods.

I would love it if your readme described the major benefits with more detail. For example, that pk.test() performs much better than competitors when the alternative is multimodal.
Your answers to rOpenSci here state that novel and unique kernel-based methods are used, but you don't say what is good about them. Also, please clarify whether the kernel in the 'Poisson kernel-based densities' is different from the kernel in your 'kernel-based quadratic distances'.

More on following G1.3. There are many unexplained terms I've never heard of before, a few used differently to expectation, and others used vaguely. At times it felt like these were all the terms!

  • An intro to PKB distributions would be nice.
  • And exact meaning of parameters in rpkb().

Package-Level Comments

Major Functionality Claims

The following appear to be met from the documentation and function results:

  • A test of uniformity on the sphere for multimodal alternatives
  • Clustering algorithms
  • Equal-distribution tests (this is my current guess of what the k-sample tests are)
  • Goodness of fit to an estimated PKB distribution?
  • Some graphics help

However, I haven't checked if they are implemented correctly and the package lacks tests to confirm results in simple situations.

These claims seem unmet to me:

  • I can't see the bridge between statistics and machine learning claimed in the README
  • I haven't got the scatterplots of clustering results to work
  • I can't see the Python version
  • In answering some rOpenSci questions you mentioned a dashboard application. I completely missed it! Where is it?

Statement of need in README

  • Statement of need lacks precision. When should I use your methods vs existing methods?
  • The majority of methods are based on statistical methods published with peer review by the authors, so I haven't checked the novelty or importance of the methods myself.
    An exception is the 'k-sample goodness of fit tests', which are in a manuscript I couldn't find. I'm not sure what they are exactly, or how good they are compared to existing methods.
  • Two and k-sample tests often refer to testing equality between groups (or similar).
    • Please be more clear in the README what the hypotheses are in these tests.
    • I'm especially confused by a k-sample test of goodness of fit to probability distribution. Without reading your arXiv document it sounded like it would test the fit of each sample to the model probability distributions.
    • What model does the single-sample goodness of fit test?
    • Specify what class of probability distributions (PKB?)
  • Target audience is unclear in README. Currently I'm thinking expert statisticians and machine learners who have read your papers/manuscripts, but I see your answers to rOpenSci questions hope for more general use.

Installation

  • Doesn't have installation instructions in README for the development version or CRAN version (but should be very easy to add: I did it with devtools::install_github("https://github.com/giovsaraceno/QuadratiK-package/tree/master"))
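
For instance, a README snippet along these lines would cover both (the GitHub path is the one given above):

```r
# Install the released version from CRAN
install.packages("QuadratiK")

# Or install the development version from GitHub
# install.packages("devtools")
devtools::install_github("giovsaraceno/QuadratiK-package")
```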

Dependencies

  • Why are ggplot2, rlecuyer, doParallel, and foreach imported but not called, according to the automatic check?
  • Package has lots of dependencies (imports) which could make it hard to maintain. rrcov, mvtnorm, movMF, moments, mclust, clusterRepro, cluster, Tinflex, ggpp, ggpubr all seem to be only called once. Are they all crucial? Could they be optional extra functionality?

Parts of the Package

Vignettes

  • Seem to cover the main uses of the package

  • One vignette can't be run locally

  • Overall they appear to be written for people already familiar with the authors' papers or the other vignettes.

  • wireless_clustering vignette

    • Please justify this conversion to the sphere more. "Given that the Wi-Fi signal strength takes values in a limited range, it is appropriate to consider the spherically transformed observations, by L2 normalization, and consequently perform the clustering algorithm on the 7-dimensional sphere." On the face of it a sphere seems inappropriate; why doesn't the total length of a vector in the space matter?
    • Typo: "it returns an object with IGP the InGroup Proportion (IGP), and metrics a table of computed evaluation measures" --> remove 'IGP the'.
    • A chunk is visible but isn't evaluated, which confused me a lot, because the function inside the chunk called validation() doesn't exist. I'm guessing it is now pkbc_validation().
    • What is Average Silhouette Width etc. from pkbc_validation()?
    • The paragraph on plotting is unclear which object the plot method is for. A plot(res_pk) would be nice, with an example plot showing why K=4 looks appropriate.
  • k-sample test vignette

    • "Recall that the computed test statistics correspond to the omnibus tests." Recall from where? And what are the omnibus tests?
    • The commented-out line in the code chunk "#k_test_h <- kb.test(x=x, y=y)" confused me. I think you mean it to be an example of automatic use of select_h()?
    • h_k <- select_h(x=x, y=y, alternative="skewness") takes a really long time on my machine, would be good to mention in the help for select_h() that it takes a long time.
  • I quickly scanned and checked the remaining vignettes could render, but didn't run the code myself.

Function help

  • I suspect the help assumes the user has read your references, e.g.,

    • For pkbc_validation(), how do I interpret all the measures?
    • For kb.test() see my comments about k-sample tests in the README.
  • kb.test()

    • What happens if the reference distribution parameters are NULL?
    • I googled for your 2024 manuscript but couldn't find it, so not sure what the tests are.
    • There are many aspects that aren't documented in kb.test(). What are the U-statistics, what is Vn, what is a V-statistic, etc.?
    • The function clearly does a lot of things, but what are those things? All you've said is "the kernel-based quadratic distance tests using the Gaussian kernel with bandwidth parameter h". Please describe what they are.
  • kb.test class

    • Great to have this class documented explicitly.
    • Link to kb.test() and vice versa to explain the objects.
    • The class and the function are so closely linked so consider documenting them in the same .Rd file. But it is okay for them to be separate.
  • pk.test() and pk.test-class

    • What happens if rho is NULL?
    • Would be great to say somewhere when pk.test() is advised - from the referenced paper it is when the alternative is multimodal.
  • pkbc()

    • Written like the user is familiar with the referenced paper
  • pkbc_validation()

    • I'm guessing the columns of pkbc_validation(res)$metrics are the different cluster numbers in res?
  • plot.pkbc()

    • For some reason on my system after installing from CRAN: ?plot.pkbc and help(plot.pkbc) both find nothing. But, for example help(plot.TinflexC) gets me to the appropriate help. Do you know why it isn't working? The manual for plot.pkbc is crucial to understanding the plots so should be accessible from the console.
    • Can you please make it so that it can be used non-interactively?
      • This is crucial for reproducibility for anyone using this function.
    • Please document what the options mean. In the wireless_clustering vignette plot(res_pk) choosing scatter plot then 4 clusters to display didn't make a plot at all! Is this a bug, or what it should be doing? And surely it means displaying the 4-cluster model, rather than say displaying the first 4 clusters of the 6-cluster model?
    • It would be nice if the elbow plot panels made clear which metric they are using, but I know that sort of thing can be hard to program. I'm hoping the order corresponds to the order in the help (left = Euclidean, right = cosine similarity).
  • select_h()

    • Can take a really long time - worth warning the user
  • wine data set

    • Source seems to be missing some spaces
  • wireless data set

    • Source seems to be missing some spaces

Examples

I get warnings from geom_table_npc() when I run the examples. I'm using version 0.5.7 of package ggpp.

Test Coverage

  • Could you test that pk.test() rejects uniformity correctly? (A sketch of one such test follows this list.)
  • You could test rpkb() against a known approximation with preexisting simulation methods (e.g. vMF, Kent, etc.?) (currently you only test the mean)
  • Test of pkbc() doesn't seem to check that it gets the clustering correct in a simple situation
  • Likewise `stats_clusters()` and `pkbc_validation()` (these all seem to test that the structure/class of the output is correct, but not the actual values of the output)
  • Considering your value of the graphical methods (and that I couldn't get one to work), testing that they work would be good!
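
For the first bullet above, a sketch of such a test might look as follows; the call pk.test(x = x, rho = 0.8) and the slot names used in the comparison are assumptions about the interface rather than the documented API:

```r
library(testthat)

test_that("pk.test() rejects uniformity for clearly bimodal data", {
  set.seed(1)
  # Two antipodal caps on the sphere: strongly multimodal, far from uniform
  x <- rbind(matrix(rnorm(500 * 3, mean = 3), ncol = 3),
             matrix(rnorm(500 * 3, mean = -3), ncol = 3))
  x <- x / sqrt(rowSums(x^2))        # project onto the unit sphere
  res <- pk.test(x = x, rho = 0.8)   # assumed interface
  # Assumed slot names: the test statistic should exceed its critical value
  expect_true(res@Un > res@CV_Un)
})
```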

Packaging guidelines

  • README should include a brief demonstration
  • No citation instructions yet
  • Guidelines suggest a general vignette to introduce the package
  • Vignettes are missing citations
  • There are places where cross-links would be appropriate to add (e.g. between the pk.test help and pk.test-class, and a link to the plot.pkbc help from pkbc and vice-versa).
  • Do you have a pkgdown website yet?

More

  • G1.6 suggests a demo of pk.test() on multimodal data vs another uniform testing method
  • I can see a few spelling errors (e.g. silhouette) according to devtools::spell_check()

@giovsaraceno

Hello @kasselhingee
Thank you very much for your review, which we found to be very helpful.

All the points raised in this review will be addressed. I do have some questions about some of the points, and I would greatly appreciate it if you could provide some guidance.

Here are my questions:

  • Dependencies: some of the listed packages are used for providing additional features to the main functionalities. For example, the packages mclust, clusterRepro and cluster are used for computing the measures of ARI, IGP and ASW in the pkbc_validation() function. This portion could be an optional extra functionality. Could you advise on the best approach to code this?
  • Test coverage for rpkb(): in the second point it is suggested to compare the function against a known approximation method. The proposed function rpkb() is the first in the literature for random generation from a Poisson kernel-based distribution. While using different circular distributions, such as the vMF and Kent, might illustrate the differences between these distributions, it is unclear what specific tests would be most appropriate in this context. Could you provide further clarification or suggestions on this matter?

Thank you again for your time and the helpful review.
Giovanni

@ropensci-review-bot

📆 @kasselhingee you have 2 days left before the due date for your review (2024-07-16).

@kasselhingee

Hi @giovsaraceno , glad it was helpful.

  • optional extra functionality: I haven't done this much myself, but I would put these packages as 'Suggested' in the DESCRIPTION file and inside pkbc_validation() check for mclust etc. (using require(pkgname), say). I can imagine two situations then: (1) ARI, IGP and ASW are core to pkbc_validation(), in which case I've seen other people's packages ask me if I want to install the packages (not sure how they did it), OR (2) they aren't core to pkbc_validation() and you could just print a message() if the packages aren't available. In either case you'll have to make sure the help for pkbc_validation() explains the behaviour. (A minimal sketch of option (2) follows this list.)

  • Test coverage for rpkb(): I got the impression that certain parameter settings for the Poisson kernel-based distribution can closely approximate a vMF or Kent etc. I suggest checking that the results of rpkb() are very similar to the results for existing vMF and Kent etc simulators in at least one of these situations. (And if the situation is close to a typical use case for rpkb() that will be an even better test!).
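
Regarding the first point, a minimal sketch of pattern (2), with mclust moved to Suggests; the fields object$labels and object$true_labels are hypothetical placeholders:

```r
compute_ari <- function(object) {
  # mclust is only in Suggests, so check for it at run time;
  # requireNamespace() avoids attaching it to the search path
  if (!requireNamespace("mclust", quietly = TRUE)) {
    message("Package 'mclust' is needed to compute the ARI; ",
            "install it to enable this measure.")
    return(NA_real_)
  }
  # Hypothetical fields, for illustration only
  mclust::adjustedRandIndex(object$labels, object$true_labels)
}
```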

@giovsaraceno

Hello @kasselhingee,
Thank you very much for your insightful review. We apologize for the delay in our response. We have addressed the points raised in the review; you can find a point-by-point response below.

> Review Comments
>
> I'm only partially familiar with the area of spherical data. Both your package's tests of uniformity and clustering sound useful. The kb.test() tests sound complicated but I'm sure with more explanation their use will become clear.
>
> I found it really hard to understand your package initially. That was because most of the documentation ignores G1.3 on explaining statistical terms. For example, I jumped into the kb.test() part of the package which was really confusing until I found an arXiv document on QuadratiK that had more information. Although the other parts were less confusing to me, they still didn't explain themselves well, often stating that the function performs "the [a bespoke method by authors]" without explanation of the method. Because of this, currently your package feels only usable by expert statisticians who have read your papers and want to try out your methods.
>
> I would love it if your readme described the major benefits with more detail. For example, that pk.test() performs much better than competitors when the alternative is multimodal. Your answers to rOpenSci here state that novel and unique kernel-based methods are used, but you don't say what is good about them. Also, please clarify whether the kernel in the 'Poisson kernel-based densities' is different from the kernel in your 'kernel-based quadratic distances'.
>
> More on following G1.3. There are many unexplained terms I've never heard of before, a few used differently to expectation, and others used vaguely. At times it felt like these were all the terms!

Thank you very much for your detailed and constructive feedback. We acknowledge the complexity of the statistical methods implemented in the package, particularly for users who may not be familiar with spherical data analysis or the specific kernel-based techniques we have employed.
We have taken significant steps to improve the clarity and accessibility of our documentation. Specifically, we have expanded the README to include more detailed explanations of the major benefits of our methods, and we have added comprehensive descriptions in the function help files, ensuring that users can understand and effectively utilize these methods without needing to consult external research papers.
Moreover, we have reviewed and revised the use of statistical terminology throughout the package to align with the best practices recommended in G1.3, ensuring that terms are used consistently and are well-explained. Our goal is to make the package accessible to a broader audience, not just expert statisticians who are familiar with our previous work.
We hope these improvements address your concerns and make the package more user-friendly and accessible, while still providing the robust statistical tools that are central to its purpose.

> • An intro to PKB distributions would be nice.
> • And exact meaning of parameters in rpkb().

We have added an introduction to the PKB distributions in the README file and in the help for the dpkb() and rpkb() functions, including descriptions of the parameters.

> Package-Level Comments
>
> Major Functionality Claims
>
> The following appear to be met from the documentation and function results:
>
> • A test of uniformity on the sphere for multimodal alternatives
> • Clustering algorithms
> • Equal-distribution tests (this is my current guess of what the k-sample tests are)
> • Goodness of fit to an estimated PKB distribution?
> • Some graphics help
>
> However, I haven't checked if they are implemented correctly and the package lacks tests to confirm results in simple situations.

The package documentation has been significantly enhanced in order to clarify the introduction and usage of the mentioned points. It now provides a more comprehensive overview, featuring a brief introduction to the methods, a clear explanation of the theoretical foundations, and a discussion of the advantages and appropriate use cases. Additionally, we have incorporated further tests in line with the reviewer's suggestions.

> These claims seem unmet to me:
>
> • I can't see the bridge between statistics and machine learning claimed in the README

Please note that we have removed the mentioned statement from the README file.

> • I haven't got the scatterplots of clustering results to work

Thank you for identifying this bug. We have fixed it, and the scatter plots are now correctly generated.

> • I can't see the Python version

In the README file, we have added an Installation section which includes the link for the Python package. It is implemented separately and is now under review with pyOpenSci.

> • In answering some rOpenSci questions you mentioned a dashboard application. I completely missed it! Where is it?

The dashboard is generated using the associated Python package. You can find the link with detailed instructions on how to access it in the Installation section of the README file.

> Statement of need in README
>
> • Statement of need lacks precision. When should I use your methods vs existing methods?

The README file and the documentation for the main functions now include a statement highlighting the advantages of the proposed methods and the scenarios in which they are most appropriate.

> • The majority of methods are based on statistical methods published with peer review by the authors, so I haven't checked the novelty or importance of the methods myself.

Thank you for your feedback. We appreciate your acknowledgment of the peer-reviewed nature of the statistical methods included in the package. While the methods implemented have been extensively validated in the literature and are recognized for their contributions to the field, we believe the practical utility and ease of access these methods offer to users is a significant contribution. We welcome any further insights or suggestions you might have regarding the implementation or documentation of these methods.

> An exception is the 'k-sample goodness of fit tests', which are in a manuscript I couldn't find. I'm not sure what they are exactly, or how good they are compared to existing methods.

We apologize for the inconvenience; it was not available at the time of the first submission. More information about the two- and k-sample tests can be found in the newly released arXiv preprint
Markatou Marianthi & Saraceno Giovanni (2024). “A Unified Framework for Multivariate Two- and k-Sample Kernel-based Quadratic Distance Goodness-of-Fit Tests.” arXiv:2407.16374
including extensive simulation studies comparing the proposed methods with existing methods.

> • Two and k-sample tests often refer to testing equality between groups (or similar).
>
>   • Please be more clear in the README what the hypotheses are in these tests.

Please see our answer above, where we state clearly that with kb.test we offer tests for 1) testing normality of a single sample; 2) testing equality of the distributions of two samples; and 3) testing equality of the distributions of $k$ samples.
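
For example (the one- and two-sample calls follow the usage shown in the vignettes; the k-sample form, with y as a vector of group memberships, is our reading of the interface):

```r
library(QuadratiK)

# x, y: numeric data matrices; group_labels: a membership vector (placeholders)

# 1) Normality test for a single sample
kb.test(x = x)

# 2) Two-sample test: are x and y drawn from the same distribution?
kb.test(x = x, y = y)

# 3) k-sample test: y gives the group membership of each row of x
kb.test(x = x, y = group_labels)
```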

> • I'm especially confused by a k-sample test of goodness of fit to probability distribution. Without reading your arXiv document it sounded like it would test the fit of each sample to the model probability distributions.

Apologies, we hope it is clear now.

  • What model does the single-sample goodness of fit test?

It tests whether the sample follows a multivariate normal distribution with a given mean vector and covariance matrix. If these parameters are not provided, the standard normal distribution is used. We hope this is now clearly explained.

  • Specify what class of probability distributions (PKB?)

We refer here to general distributions for the two- and $k$-sample tests, that is $H_0:F_1 = F_2 = F$ and $H_0:F_1 = \ldots = F_k = F$, where the distribution $F$ is general and does not need to be specified.

  • Target audience is unclear in README. Currently I'm thinking expert statisticians and machine learners who have read your papers/manuscripts, but I see your answers to rOpenSci questions hope for more general use.

Thank you for your feedback. We have improved the documentation to clearly outline the specific use cases for the package’s functionalities. Our goal is to make the package accessible to a broader audience, including those who may not be familiar with the associated papers. For those interested in a deeper understanding, we have included references to the original research.

Installation

  • Doesn't have installation instructions in README for the development version or CRAN version (but should be very easy to add: I did it with devtools::install_github("https://github.com/giovsaraceno/QuadratiK-package/tree/master"))

The README file now has an Installation section with instructions for installing the CRAN version or the development version of the package.
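
For reference, the added instructions follow the usual pattern:

```r
# CRAN release
install.packages("QuadratiK")

# Development version from GitHub
# install.packages("devtools")
devtools::install_github("giovsaraceno/QuadratiK-package")
```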

Dependencies

  • Why is ggplot2, rlecuyer, doParallel, foreach, imported but aren't called according to the automatic check?

The packages 'rlecuyer', 'doParallel', 'foreach' and 'parallel' are fundamental for the parallel computing performed in the select_h function, hence they are kept in the 'Imports' section, as is the package 'moments'. They are now called explicitly, except for 'rlecuyer', which is needed for parallel computing but is not called directly.

  • Package has lots of dependencies (imports) which could make it hard to maintain. rrcov, mvtnorm, movMF, moments, mclust, clusterRepro, cluster, Tinflex, ggpp, ggpubr all seem to be only called once. Are they all crucial? Could they be optional extra functionality?

The packages 'mclust', 'clusterRepro' and 'cluster' are used for computing the ARI, IGP and ASW in the pkbc_validation function. These packages have been removed from the 'Imports' section and included in the 'Suggests' section of the DESCRIPTION file. When the pkbc_validation function is run, the user is asked whether to install them in order to use the function. They are also called during the automatic check. This is clearly explained in the documentation of the pkbc_validation function.

The packages 'Tinflex' and 'movMF' are used for the random generation of points from the Poisson kernel-based distribution through the rpkb() function. In this case too, the function asks whether the user wants to install the packages. They are also called during the automatic check. This is explained in the help documentation of the rpkb() function.

The package 'ggpp' is no longer used.

The packages 'rrcov', 'mvtnorm' and 'ggpubr' are now called according to the automatic check. They have not been moved to the 'Suggests' section since they are needed for the package's core functionality.

The packages 'sphunif' and 'circular' also appear not to be called according to the automatic check. This is because they are used only in the tests and the vignettes.

Parts of the Package

Vignettes

  • Seem to cover the main uses of the package

Thank you for the comment.

  • One vignette can't be run locally

We checked that all the vignettes can be run locally.

  • Overall they appear to be written for people already familiar with the authors' papers or the other vignettes.

Thank you for the comment. We have added more details to the revised vignettes, so it is no longer necessary for readers to be familiar with the papers or with the other vignettes.

  • wireless_clustering vignette

    • Please justify this conversion to the sphere more. "Given that the Wi-Fi signal strength takes values in a limited range, it is appropriate to consider the spherically transformed observations, by L2 normalization, and consequently perform the clustering algorithm on the 7-dimensional sphere." On the face of it a sphere seems inappropriate: why doesn't the total length of a vector in the space matter?

The proposed test for uniformity and the clustering algorithm are tailored for data points on the sphere $\mathcal{S}^{d-1} = \{\mathbf{x} \in \mathbb{R}^d : ||\mathbf{x}||=1\}$, where $||\mathbf{x}||=\sqrt{x_1^2 + x_2^2 + \ldots + x_d^2}$. The L2 normalization is therefore necessary before applying these methods. The sentence has been rewritten more clearly, and this requirement is also specified in the Note section of the help documentation of the pkbc() function.
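
For readers of this thread, a minimal base-R illustration of the L2 normalization (not package code):

```r
# Project each observation (row) onto the unit sphere S^{d-1}
to_sphere <- function(X) X / sqrt(rowSums(X^2))

X  <- matrix(rnorm(50 * 7), ncol = 7)  # e.g. 7-dimensional signal measurements
Xs <- to_sphere(X)
range(rowSums(Xs^2))                   # every row now has (squared) norm 1
```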

  • Typo: "it returns an object with IGP the InGroup Proportion (IGP), and metrics a table of computed evaluation measures" --> remove 'IGP the'.

Corrected.

  • A chunk is visible but isn't evaluated, which confused me a lot, because the function inside the chunk called validation() doesn't exist. I'm guessing it is now pkbc_validation().

The chunk has been corrected to use the function pkbc_validation(), and it is now evaluated correctly.

  • What is Average Silhouette Width etc. from pkbc_validation()?

Each cluster is represented by a so-called silhouette, which is based on a comparison of its tightness and separation. The average silhouette width provides an evaluation of clustering validity and can be used to select an 'appropriate' number of clusters.
We have added this information to the documentation of the pkbc_validation() function and specified the corresponding reference.
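
As a concrete illustration (not package code), the quantity can be computed with the 'cluster' package, which pkbc_validation() relies on:

```r
library(cluster)

set.seed(1)
X <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))
labels <- kmeans(X, centers = 2)$cluster

sil <- silhouette(labels, dist(X))
mean(sil[, "sil_width"])  # average silhouette width; values near 1 indicate
                          # tight, well-separated clusters
```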

  • The paragraph on plotting is unclear which object the plot method is for. A plot(res_pk) would be nice, with an example plot showing why K=4 looks appropriate.

We have modified the plot() function such that it directly displays the scatter plot of data points with the elbow plots. An example is added to the wireless_clustering vignette.

  • k-sample test vignette

    • "Recall that the computed test statistics correspond to the omnibus tests." Recall from where? And what are the omnibus tests?

Apologies; this is now clearly stated, and we have avoided the term 'omnibus tests'.

  • The commented-out line in the code chunk "#k_test_h <- kb.test(x=x, y=y)" confused me. I think you mean it to be an example of automatic use of select_h()?

This chunk shows the usage of kb.test() when $h$ is not specified. In the example, we show the usage of the function select_h() separately, as sketched below.
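
A sketch of the two patterns (the name of the element holding the selected bandwidth is an assumption on our side; see ?select_h):

```r
library(QuadratiK)

set.seed(42)
x <- matrix(rnorm(50 * 2), ncol = 2)
y <- matrix(rnorm(50 * 2, mean = 0.3), ncol = 2)

# Automatic: h is not supplied, so kb.test() tunes it internally
k_test <- kb.test(x = x, y = y)

# Explicit: run the selection step separately, then pass the bandwidth
h_sel <- select_h(x = x, y = y, alternative = "skewness")
k_test_h <- kb.test(x = x, y = y, h = h_sel$h_sel)  # element name assumed
```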

  • h_k <- select_h(x=x, y=y, alternative="skewness") takes a really long time on my machine, would be good to mention in the help for select_h() that it takes a long time.

Thanks for this comment. We have modified the code so that the mentioned line can be run with the sample size considered in the example. In addition, this aspect is now mentioned in the help of the select_h() function.

  • I quickly scanned and checked the remaining vignettes could render, but didn't run the code myself.

Thanks, we have re-checked that the remaining vignettes compile correctly.

Function help

  • I suspect help assumes user has read your references e.g.,

We have modified the help such that the corresponding references do not need to be read.

  • For pkbc_validation(), how do I interpret all the measures?

In the details section of the pkbc_validation() help documentation we have added a brief explanation of the computed measures.

  • For kb.test() see my comments about k-sample tests in the README.

We have improved the description of the function kb.test() in the Details section, clearly stating the possible usage.

  • kb.test()

    • What happens if the reference distribution parameters are NULL?

The role of reference distribution parameters has been specified in the Details section.

  • I googled for your 2024 manuscript but couldn't find it, so not sure what the tests are.

Apologies. Our 2024 manuscript is now available on arXiv and we updated the corresponding references.

  • There are many aspects not seen documented in kb.test(). What are the U-statistics, what is Vn, and a V-statistic etc?

In the kb.test() documentation, we now introduce the computed statistics and the corresponding results without requiring knowledge of the mentioned references. We have also added a note explaining the terms used.

  • The function clearly does a lot of things, but what are those things? All you've said is "the kernel-based quadratic distance tests using the Gaussian kernel with bandwidth parameter h". Please describe what they are.

The help of the kb.test() function now includes a detailed description.

  • kb.test class

    • Great to have this class documented explicitly.

Thank you for the comment.

  • Link to kb.test() and vice versa to explain the objects.

We have added the corresponding links.

  • The class and the function are so closely linked so consider documenting them in the same .Rd file. But it is okay for them to be separate.

Thank you for the suggestion. At this step we prefer to organize them separately.

  • pk.test() and pk.test-class

    • What happens if rho is NULL?

The value of rho must be provided for computing the test; this argument now has no default value.

  • Would be great to say somewhere when pk.test() is advised - from the referenced paper it is when the alternative is multimodal.

We have added this information in the README file and the help of the pk.test() function.
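
For instance (a minimal sketch, with argument names as we read them in the help page; under uniformity the test should not reject):

```r
library(QuadratiK)

set.seed(5)
# Uniform points on S^2: normalize standard Gaussian draws
X <- matrix(rnorm(200 * 3), ncol = 3)
X <- X / sqrt(rowSums(X^2))

# rho must now be supplied explicitly
res_unif <- pk.test(x = X, rho = 0.8)
summary(res_unif)
```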

  • pkbc()

    • Written like the user is familiar with the referenced paper

We added all the relevant information for introducing the clustering algorithm in the help documentation, including information about the initialization and stopping rule.

  • pkbc_validation()

    • I'm guessing the columns of pkbc_validation(res)$metrics are the different cluster numbers in res?

That is correct. This is now clearly explained in the help of the pkbc_validation function.
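
For example (a sketch; the pkbc() arguments are our reading of the help page):

```r
library(QuadratiK)

set.seed(6)
# Two antipodal clusters on the sphere
dat <- rbind(matrix(rnorm(150, mean =  2), ncol = 3),
             matrix(rnorm(150, mean = -2), ncol = 3))
dat <- dat / sqrt(rowSums(dat^2))

res <- pkbc(dat, 2:4)  # fit for 2, 3 and 4 clusters
val <- pkbc_validation(res)
val$metrics            # one column per candidate number of clusters
```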

  • plot.pkbc()

    • For some reason on my system after installing from CRAN: ?plot.pkbc and help(plot.pkbc) both find nothing. But, for example help(plot.TinflexC) gets me to the appropriate help. Do you know why it isn't working? The manual for plot.pkbc is crucial to understanding the plots so should be accessible from the console.

We thank the reviewer for pointing this out. The help page ?plot.pkbc was not accessible because of a missing tag for the 'roxygen2' package, which is used for building the documentation. The help documentation for the plot.pkbc function is now displayed correctly.

  • Can you please make it so that it can be used non-interactively?

    • This is crucial for reproducibility for anyone using this function.

We have modified the plot function as suggested.

  • Please document what the options mean. In the wireless_clustering vignette plot(res_pk) choosing scatter plot then 4 clusters to display didn't make a plot at all! Is this a bug, or what it should be doing? And surely it means displaying the 4-cluster model, rather than say displaying the first 4 clusters of the 6-cluster model?

We thank the reviewer for pointing this out, and we have fixed the mentioned bug. The scatter plot displays the data points colored by their cluster membership for the indicated number of clusters. In the current version, if the number of clusters is not provided, the function shows a scatter plot for each number of clusters considered, as indicated in the arguments.
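
A sketch of the resulting non-interactive usage (the argument name for the number of clusters is an assumption on our side; see ?plot.pkbc):

```r
library(QuadratiK)

set.seed(7)
dat <- matrix(rnorm(300), ncol = 3)
dat <- dat / sqrt(rowSums(dat^2))  # project onto the sphere
res_pk <- pkbc(dat, 2:4)

plot(res_pk, k = 3)  # scatter plot for the 3-cluster fit (argument name assumed)
plot(res_pk)         # one scatter plot per fitted value, plus the elbow plots
```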

  • It would be nice if the elbow plot panels made clear which metric they are using, but I know that sort of thing can be hard to program. I'm hoping the order corresponds to the order in the help (left = Euclidean, right = cosine similarity).

The missing information is now included in the Details section of the function's help documentation, and it is also indicated in the headers of the created plots.

  • select_h()

    • Can take a really long time - worth warning the user

This information has been added in the Note section of the documentation of the select_h() function.

  • wine data set

    • Source seems to be missing some spaces

  • wireless data set

    • Source seems to be missing some spaces

Corrected.

Examples

I get warnings from geom_table_npc() when I run the examples. I'm using version 0.5.7 of package ggpp.

The function geom_table_npc() was used to display the tables of computed statistics together with the Q-Q plots for the normality tests, the uniformity test and the two-sample test. Since these statistics are already reported in the output of the summary() function, we removed the geom_table_npc() calls and the ggpp dependency to avoid the warnings.

Test Coverage

  • Could you test that pk.test() rejects uniformity correctly?

It has been added.

  • You could test rpkb against a known approximation with preexisting simulation methods (e.g. vMF, kent etc?) (currently you only test the mean)

Golzy and Markatou (2020) proposed an acceptance-rejection method for simulating data from a PKBD. Furthermore, Sablica, Hornik and Leydold (2023) proposed new ways of simulating from the PKBD. We have checked the results in the case $d=2$ and $\rho=0.5$: in this case, generating data from the PKBD is equivalent to generating data from the Wrapped Cauchy distribution with concentration parameter $\rho=0.5$ and the same location as the PKBD.
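
A sketch of this kind of check (assuming rpkb() returns its points in an $x element; see ?rpkb):

```r
library(QuadratiK)
library(circular)

set.seed(8)
n <- 1000

# PKBD sample on the circle (d = 2), location (1, 0), rho = 0.5
x_pkb  <- rpkb(n, mu = c(1, 0), rho = 0.5)$x        # element name assumed
th_pkb <- atan2(x_pkb[, 2], x_pkb[, 1]) %% (2 * pi)

# Wrapped Cauchy angles with matching location (angle 0) and concentration
th_wc <- as.numeric(rwrappedcauchy(n, mu = circular(0), rho = 0.5))

# The two angular samples should be indistinguishable
ks.test(th_pkb, th_wc)
```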

  • Test of pkbc() doesn't seem to check that it gets the clustering correct in a simple situation

We have added a test in which three well-separated clusters are generated, and we check that the clustering algorithm correctly identifies the three clusters.
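
In outline, the added test looks like the following sketch (cluster labels are arbitrary, so the fitted memberships should be compared to the truth up to relabeling, e.g. with mclust::adjustedRandIndex()):

```r
library(QuadratiK)

set.seed(9)
# Three well-separated clusters around orthogonal directions on S^2
centers <- diag(3)
dat <- do.call(rbind, lapply(1:3, function(i)
  matrix(centers[i, ], 50, 3, byrow = TRUE) +
    matrix(rnorm(150, sd = 0.1), ncol = 3)))
dat <- dat / sqrt(rowSums(dat^2))
truth <- rep(1:3, each = 50)

res <- pkbc(dat, 3)
# The memberships stored in `res` (see ?pkbc for the slot name) should match
# `truth` up to a permutation of the labels, i.e. an ARI close to 1.
```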

  • Likewise `stats_clusters()` and `pkbc_validation()` (these all seem to test that the structure/class of the output is correct, but not the actual values of the output)

We also added tests for these functions.

  • Considering your value of the graphical methods (and that I couldn't get one to work), testing that they work would be good!

We added a test checking that the plot method does not return any error or warning.

Packaging guidelines

  • README should include a brief demonstration

The README file now includes installation instructions, describes the main functionalities, and indicates the corresponding citations and references. There is also a link pointing to the introductory vignette of the package.

  • No citation instructions yet

The Citation section has been added.

  • Guidelines suggest a general vignette to introduce the package

An introductory vignette has been added, presenting the key features of the package with simple examples and useful links.

  • Vignettes are missing citations

Citations have been added.

  • There are places where cross-links would be appropriate to add (e.g. between the pk.test help and pk.test-class, and links to the plot.pkbc help from pkbc and vice-versa).

Cross-links have been added.

  • Do you have a pkgdown website yet?

Yes, the pkgdown website can be accessed from the GitHub page of the package at https://giovsaraceno.github.io/QuadratiK-package/. This link is already present in the DESCRIPTION file and in the Citation section of the README file.

More

  • G1.6 suggests a demo of pk.test() on multimodal data vs another uniform testing method

In the vignette showing the usage of the test for uniformity on the sphere, we added an example generating data from a multimodal distribution, and we compared the obtained results with two tests from the literature.

  • I can see a few spelling errors (e.g. silhouette) according to devtools::spell_check()

We ran the suggested command and fixed the spelling errors.

@kasselhingee

Hi @giovsaraceno great to hear back from you. I've had a busy few weeks, but I hope to get something back to you by the end of next week.
