Rec. 29: Implement FAIR metrics #29

Open
sjDCC opened this issue Jun 10, 2018 · 10 comments
Labels: data services stakeholder group; funders stakeholder group; institutions stakeholder group; Metrics (Recommendation related to FAIR metrics and certification of services); publishers stakeholder group

Comments

@sjDCC (Member) commented Jun 10, 2018

Agreed sets of metrics should be implemented and monitored to track changes in the FAIRness of datasets or data-related resources over time.

  • Repositories should publish assessments of the FAIRness of datasets, where practical, based on community review and the judgement of data stewards. Methodologies for assessing FAIR data need to be piloted and developed into automated tools before they can be applied across the board by repositories (see the sketch after this list).
    Stakeholders: Data services; Institutions; Publishers.

  • Metrics for the assessment of research contributions, organisations and projects should take the past FAIRness of datasets and other related outputs into account. This can include citation metrics, but appropriate alternatives should also be found for the research / researchers / research outputs being assessed.
    Stakeholders: Funders; Institutions.
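To make the "automated tools" point above concrete, here is a minimal, purely illustrative sketch of one such check run against a dataset landing page. The function name, the three indicators and the example DOI are assumptions for illustration, not an agreed community metric.

```python
# Illustrative sketch only: spot-check a few automatable FAIR indicators for a
# dataset landing page. Indicator names and the example DOI are hypothetical.
import json

import requests


def check_landing_page(url: str) -> dict:
    """Run a few crude, automatable indicator checks against a landing page."""
    result = {
        "resolvable": False,                 # F: the identifier resolves
        "machine_readable_metadata": False,  # F/I: embedded schema.org JSON-LD
        "license_declared": False,           # R: an explicit licence statement
    }
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException:
        return result  # an unresolvable identifier fails every downstream check

    result["resolvable"] = response.ok
    html = response.text

    # Proxy for machine-readable metadata: an embedded JSON-LD block.
    result["machine_readable_metadata"] = "application/ld+json" in html

    # Proxy for a declared licence: a license key in the metadata or a CC link.
    result["license_declared"] = '"license"' in html or "creativecommons.org" in html

    return result


if __name__ == "__main__":
    # Hypothetical record; substitute a real dataset DOI landing page.
    print(json.dumps(check_landing_page("https://doi.org/10.5281/zenodo.123456"), indent=2))
```

Each indicator here is only a rough proxy; a piloted methodology would still need to define the indicators, their weighting and their disciplinary interpretation before repositories apply them across the board.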

@sjDCC added the labels Metrics (Recommendation related to FAIR metrics and certification of services), data services stakeholder group, funders stakeholder group, institutions stakeholder group and publishers stakeholder group on Jun 10, 2018
@AlasdairGray

Doesn't this recommendation subsume #9?

@ghost commented Jul 30, 2018

4TU.Centre for Research Data position: The funders should have a bigger role here, since they are the ones requiring FAIR data. A final, strong statement of the funders' interpretation of FAIR data (per discipline) would help data services define metrics better.

@katerbow

DFG position: As commented on Recommendations 6, 9 and 11, we view metric-based methods for assessing science and the FAIRness of datasets rather critically. The wish to measure is understandable, given how readily it promises to qualify any kind of output, and it is of course fair to search for adequate means of doing so. However, metric assessment in science has so far not produced better science or new findings, and it can be expected that metrics will not be of much help in implementing the FAIR principles.

Any outcome of an assessment based on metric methods has the potential to stall valuable initiatives simply on the basis of (potentially) questionable numbers. That holds particularly true for attempts to introduce automated metric methodologies.

@Eefkesmit

Contribution on behalf of the International Association of STM Publishers (STM):
As mentioned under several related recommendations, we see four cornerstone components in a machine-actionable ecosystem for FAIR Data. Of these, the following is relevant for FAIR Data Metrics:
Data citation standards -- Promote and implement data citation rules and standards according to the recommendations of FORCE11, to provide credit for good data practice.

@Drosophilic

As noted on #9, this action would benefit from building on http://fairmetrics.org/ and the NIH Data Commons work on FAIR objects.

@ajaunsen commented Aug 2, 2018

Metrics are a viable way to automatically measure the level of FAIRness of e.g. a repository. However, the FAIR principles are just that: guidelines that are intentionally vague and not specified in any level of detail. Herein lies the challenge of defining metrics that can be used to measure FAIRness. It is necessary to set a reference point, but as data becomes FAIRer, the reference point will be raised and thus all earlier metrics become devalued. There will probably be a need to introduce FAIR versions, so that data can be said to be compliant with FAIR version X.

Currently, most repositories (or datasets) will not meet the majority of machine-actionable tests, and will thus fail miserably.
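As a hypothetical sketch of the "FAIR version X" idea above: pin each set of required indicators to a version, so the reference point can be raised over time without devaluing earlier assessments. The version labels and indicator names below are invented for illustration.

```python
# Illustrative sketch of versioned FAIR metric sets; labels and indicators are invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class MetricSet:
    version: str
    required_indicators: frozenset  # indicator names that must all pass


FAIR_VERSIONS = {
    "1.0": MetricSet("1.0", frozenset({"resolvable", "license_declared"})),
    # A later, stricter reference point adds further machine-actionable checks.
    "1.1": MetricSet("1.1", frozenset({"resolvable", "license_declared",
                                       "machine_readable_metadata"})),
}


def compliant_with(assessment: dict, version: str) -> bool:
    """True if every indicator required by the given FAIR version passed."""
    return all(assessment.get(name, False)
               for name in FAIR_VERSIONS[version].required_indicators)


if __name__ == "__main__":
    assessment = {"resolvable": True, "license_declared": True,
                  "machine_readable_metadata": False}
    print(compliant_with(assessment, "1.0"))  # True: meets the older reference point
    print(compliant_with(assessment, "1.1"))  # False: fails the raised bar
```

A dataset assessed against version 1.0 would keep that statement even after version 1.1 raises the bar, which addresses the devaluation concern.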

@ferag commented Aug 3, 2018

@pkdoorn commented Aug 3, 2018

Combine with Rec. 9: Develop robust FAIR data metrics (#9) and perhaps Rec. 14: Recognise and reward FAIR data and data stewardship (#14).

@mromanie commented Aug 3, 2018

ESO position
See Rec06, Rec09 and Rec11.

@gtoneill commented Aug 6, 2018

Some overlap with Recommendations 5, 6, 9, 10, 11, and 14 on FAIR Data assessment. Perhaps merge?
