Rec. 29: Implement FAIR metrics #29
Comments
Doesn't this recommendation subsume #9?
4TU.Centre for Research Data position: The funders should have a bigger role here, since they are the ones requiring FAIR data. A final, strong statement of the funders' interpretation of FAIR data (per discipline) would help data services define metrics better.
DFG position: As commented on Recommendations 6, 9 and 11, we view metric methods for assessing science and the FAIRness of datasets rather critically. The wish to measure is understandable, given the inviting ease of quantifying any kind of output, and it is fair to search for adequate means to do so. However, metric assessment has so far not produced better science or new findings, and metrics cannot be expected to offer plausible support for implementing the FAIR principles. Any outcome of an assessment based on metrical methods has the potential to stall valuable initiatives purely on the basis of (potentially) questionable numbers. That holds particularly true for attempts to introduce automated metrical methodologies.
Contribution on behalf of the International Association of STM Publishers (STM):
As noted on #9, this action would benefit from building on http://fairmetrics.org/ and the NIH Data Commons work on FAIR objects. |
Metrics are a viable way to automatically measure the level of FAIRness of, for example, a repository. However, the FAIR principles are just that: guidelines that are intentionally vague and not specified in any level of detail. Herein lies the challenge of defining metrics that can be used to measure FAIRness. It is necessary to set a reference point, and as data becomes FAIRer, the reference point will be raised and all existing metrics devalued. There will probably be a need to introduce FAIR versions, so that data can be said to be compliant with FAIR version X. Currently, most repositories (or datasets) will not pass the majority of machine-actionable tests, and will thus fail miserably.
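To make the notion of a "machine-actionable test" concrete, here is a minimal sketch of two such checks in Python. It assumes the dataset has a DOI and that its registration agency supports content negotiation at doi.org (DataCite DOIs generally do); the function names and the example DOI are illustrative, not part of any agreed metric suite:

```python
# Sketch of two machine-actionable FAIR tests: the identifier resolves
# (an F1-style check) and machine-readable metadata can be retrieved
# for it (an F2/I1-style check). Illustration only, not an agreed metric.
import requests

def check_identifier_resolves(doi: str) -> bool:
    """F1-style test: the persistent identifier resolves to a landing page."""
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
    return resp.status_code == 200

def check_machine_readable_metadata(doi: str) -> bool:
    """F2/I1-style test: JSON-LD metadata is served via content negotiation."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/ld+json"},
        timeout=10,
    )
    return resp.ok and "json" in resp.headers.get("Content-Type", "")

if __name__ == "__main__":
    doi = "10.5281/zenodo.1234567"  # hypothetical example DOI
    print("resolves:", check_identifier_resolves(doi))
    print("machine-readable metadata:", check_machine_readable_metadata(doi))
```

Even a trivial pair of tests like this illustrates the reference-point problem raised above: each check encodes an assumption (here, that JSON-LD over content negotiation counts as "machine-readable") that would need to be pinned to a FAIR version.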
Some overlap with Recommendations 5, 6, 9, 10, 11, and 14 on FAIR Data assessment. Perhaps merge? |
Agreed sets of metrics should be implemented and monitored to track changes in the FAIRness of datasets or data-related resources over time.
Repositories should publish assessments of the FAIRness of datasets, where practical, based on community review and the judgement of data stewards. Methodologies for assessing FAIR data need to be piloted and developed into automated tools before they can be applied across the board by repositories (see the sketch after the stakeholder list below).
Stakeholders: Data services; Institutions; Publishers.
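As a rough illustration of what a published, machine-readable assessment could look like, here is a hedged sketch in Python. The test names, the `fair_version` field (echoing the versioning idea raised in the STM comment) and the pass-fraction score are all illustrative assumptions, not a standard schema:

```python
# Sketch of how a repository might record a FAIRness assessment as
# structured data and publish it as JSON. Field names are hypothetical.
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class FairAssessment:
    dataset_id: str
    fair_version: str                       # hypothetical "FAIR version X" label
    assessed_on: str = field(default_factory=lambda: date.today().isoformat())
    results: dict[str, bool] = field(default_factory=dict)

    def record(self, test_name: str, passed: bool) -> None:
        self.results[test_name] = passed

    def score(self) -> float:
        """Fraction of tests passed; one possible (crude) summary metric."""
        return sum(self.results.values()) / len(self.results) if self.results else 0.0

    def to_json(self) -> str:
        return json.dumps({**asdict(self), "score": self.score()}, indent=2)

assessment = FairAssessment(
    dataset_id="10.5281/zenodo.1234567",    # hypothetical example DOI
    fair_version="1.0",
)
assessment.record("F1_identifier_resolves", True)
assessment.record("F2_machine_readable_metadata", False)
print(assessment.to_json())
```

Recording the assessment date and FAIR version alongside the results is what would let metrics be monitored over time, per the first action above, without older assessments being silently devalued as the reference point rises.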
Metrics for the assessment of research contributions, organisations and projects should take the past FAIRness of datasets and other related outputs into account. This can include citation metrics, but appropriate alternatives should also be found for assessing research, researchers and research outputs.
Stakeholders: Funders; Institutions.