Going back to basics – reusing data

It is almost a year since the first set of data was gathered to analyse journal articles, and the benefits of saving data well are now becoming apparent. Two things are happening that mean we are getting the basic figures out, dusting them off and looking at them again. The first is a paper about the development of a model journal research data policy, which is being co-authored by the JoRD team members, and the second is a response to questions that various people are asking.

The idea of creating a model policy emerged from the mass of data that was being found in the analytical process. It was based on what journals were already doing, and on suggestions from the report “Sharing Publication-related Data and Materials: Responsibilities of Authorship in the Life Sciences” (Committee on Responsibilities of Authorship in the Biological Sciences, 2003, http://www.nap.edu/openbook.php?isbn=0309088593). The report was the outcome of a workshop in the United States which involved biological scientists. The five principles and ten recommendations stated in the report were strongly in favour of open access to the data that underpins the research reported in published articles. A summary of the principles and recommendations can be found here: http://www.councilscienceeditors.org/files/scienceeditor/v26n6p192-193.pdf. The report suggested that the data could either be included in the article, or deposited in a reputable repository and linked to the article. The first model data policy was therefore based on the rather patchy and inconsistent set of policies that were found, from fewer than half the journals we analysed, and on a report which was biased towards one scientific discipline. It was decided to compare the initial model data policy with the needs of the stakeholders, which were examined at a later stage in the JoRD project. This has entailed not only going over the data gathered from the stakeholder interviews and questionnaire, but also digging retrospectively into the reasons the initial model criteria were chosen.

The second reason for examining the basic data has come from interesting questions asked by a number of bodies that know about the JoRD project, and therefore assume that the JoRD team are experts in the field of journal research data policies, an assumption that is becoming increasingly true as more questions are answered. In order for the questions to be answered, the data needed to be looked at from a different perspective. For example, to answer “How many journals make sharing a requirement of publication?”, the original data set was re-examined and journals were counted, because the original analysis counted policies, and some journals have up to three different data policies. Here follows a table with figures from a journal perspective:

Results of Journal Survey
Total no. of journals surveyed: 371
Total no. of journals with data sharing policies: 162
Total no. of journals that make sharing a requirement of publication: 31
Total no. of journals that enforce their policies: 27
Total no. of journals that state consequences for non-compliance: 7

This process is an illustration of the way that well organised data, saved safely and, as in this case, in digital form, can be re-used after a particular project has ended. It is generally after research has been concluded that questions arise, and the iterative process of dipping in and out of data to validate or extend the research then begins. The moral of this blog post? Manage your data well, because you never know what you will be asked.

A rather long post, but quite a brief summary

Here is a summary of the project so far.

Sharing the data which is generated by research projects is increasingly being recognised as an academic priority by funders, researchers and publishers. The issue of the policies on sharing set out by academic journals has been raised by scientific organisations, such as the US National Academy of Sciences, which urges journals to make clear statements of their sharing policies. On the other hand, the publishing community expresses concerns over the intellectual property implications of archiving shared data, whilst broadly supporting the principle of open and accessible research data.

The JoRD Project was a feasibility study on the possible shape of a central service on journal research data policies, funded by the UK JISC under its Managing Research Data Programme. It was carried out by the Centre for Research Communications at Nottingham University (UK) with contributions from the Research Information Network and Mark Ware Consulting Ltd. The project used a mix of methods to examine the scope and form of a sustainable, international service that would collate and summarise journal policies on research data for the use of researchers, managers of research data and other stakeholders. The purpose of the service would be to provide a ready reference source of easily accessible, standardised, accurate and clear guidance and information on the journal policy landscape relating to research data. The specific objectives of the study were: to identify the current state of journal data sharing policies; to investigate the views and practices of stakeholders; to develop an overall view of stakeholder requirements and possible service specifications; to explore the market base for a JoRD Policy Bank Service; and to investigate and recommend sustainable business models for the development of a JoRD Policy Bank Service.

A review of relevant literature showed evidence that scientific institutions are attempting to draw attention to the importance of journal data policies and a sense that the scientific community in general is in favour of the concept of data sharing.  At the same time it seems to be the case that more needs to be done to convince the publishing world of the need for greater consistency in data policy and author guidelines, particularly on vital questions such as when and where authors should deposit data for sharing.

The study of journal policies which currently exist found that a large percentage of journals do not have a policy on data sharing, and that there are great inconsistencies between journal data sharing policies. Whilst some journals offered little guidance to authors, others stipulated specific compliance mechanisms. A valuable distinction is made in some policies between two categories of data: integral, which directly supports the arguments and conclusions of the article, and supplementary, which enhances the article but is not essential to its argument. What we considered to be the most significant study on journal policies (Piwowar & Chapman, 2008) defined journal data sharing policies as “strong”, “weak” or “non-existent”. A strong policy mandates the deposit of data as a condition of publication, whereas a weak policy merely requests the deposit of data. The indication from previous studies that researchers’ data sharing behaviour is similarly inconsistent was confirmed by our online survey. However, there is general assent to the data sharing concept, and many researchers would be prepared to submit data for sharing along with the articles they submit to journals.

We then investigated a substantial sample of journal policies to establish our own picture of the policy landscape. A selection of 400 international and national journals was purposively chosen to represent the top 200 most cited journals (high impact journals) and the bottom 200 least cited (low impact journals), equally divided between Science and Social Science, based on the Thomson Reuters citation index. Each policy we identified relating to these journals was broken into different aspects, such as: what, when and where to deposit data; accessibility of data; types of data; monitoring of data compliance; and consequences of non-compliance. These were then systematically entered onto a matrix for comparison. Where no policy was found, this was indicated on the matrix. Policies were categorised as either “weak”, only requesting that data is shared, or “strong”, stipulating that data must be shared.

Approximately half the journals examined had no data sharing policy. Nearly three quarters of the policies we found were assessed as weak, and just under one quarter were deemed strong (76%:24%). The high impact journals were found to have the strongest policies. Not only did fewer low impact journals include a data sharing policy, but those policies that did exist were less likely to stipulate data sharing, merely suggesting that it might be done. The policies generally give little guidance on the stage of the publishing process at which data is expected to be shared.

Throughout the duration of the project, representatives from publishing and other stakeholders were consulted in different ways. Representatives of publishing were selected from a cross section of different types of publishing house; the researchers we consulted were self-selected through open invitations by way of the JoRD Blog. Nine of them attended a focus group and 70 answered an online survey. They were drawn from every academic discipline and ranged over a total of 36 different subject areas. During the later phases of the study, a selection of representatives of stakeholder organisations was asked to explore the potential of the proposed JoRD service and to comment on possible business models. These included publishers, librarians, representatives of data centres or repositories, and other interested individuals. This aspect of the investigation included a workshop session with representatives of leading journal publishers in order to assess the potential for funding a JoRD Policy Bank service. Subsequently an analysis of comparator services and organisations was performed, using interviews and desk research.

Our conclusion from the various aspects of the investigation was that although the idea of making scientific data openly accessible for sharing is widely accepted in the scientific community, the practice confronts serious obstacles. The most immediate of these obstacles is the lack of a consolidated infrastructure for the easy sharing of data. In consequence, researchers quite simply do not know how to share their data. At the present juncture, when policies are either not available or provide inadequate guidance, researchers acknowledge a need for the kind of information that a policy bank would supply. The market base for a JoRD policy bank service would be the research community, and researchers did indicate that they believed such a service would be used.

Four levels of possible business models for a JoRD service were identified, and finally these were put to a range of stakeholders. These stakeholders found it hard to identify a clear-cut option of service level that would be self-sustaining. The funding models of similar services and organisations were also investigated. In consequence, an exploratory two-phase implementation of a service is suggested. The first phase would be the development of a database of data sharing policies, engagement with stakeholders, and third-party API development, with the intention of building use to the level at which a second phase, a self-sustaining model, would be possible.

What is linked data?

The fact that data comes in all sorts of shapes and sizes has already been blogged about, but what is the concern about adding data into online journals? After all, printed journals have included data in the shape of graphs or tables for a great many years. The problem is that the journal article and its corresponding data are no longer in the flat, two dimensional world of a piece of paper, but are part of the multi-dimensional world of the internet, where the data is linked to something else. Linked data, according to Bizer, Heath and Berners-Lee (http://linkeddata.org/docs/ijwis-special-issue), is the method by which data is connected, structured and published on the web, resulting in a “web of data”. Linked data “refers to data published on the web in such a way that it is machine readable, its meaning is explicitly defined, it is linked to other external data sets and can in turn be linked to from external data sets”.
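The “web of data” idea can be illustrated with a minimal sketch in Python, with all URIs hypothetical: statements are expressed as subject-predicate-object triples whose terms are web identifiers, so a machine can follow the link from an article to its underlying dataset.

```python
# A minimal sketch of the "web of data" idea described above. The DOIs
# below are hypothetical examples; the predicate URIs are real Dublin
# Core terms. Each statement is a (subject, predicate, object) triple
# whose terms are web identifiers, so a machine can follow the links.

ARTICLE = "https://doi.org/10.1000/example-article"
DATASET = "https://doi.org/10.5555/example-dataset"
REFERENCES = "http://purl.org/dc/terms/references"

triples = [
    (ARTICLE, "http://purl.org/dc/terms/title", "A study of data sharing"),
    (ARTICLE, REFERENCES, DATASET),
    (DATASET, "http://purl.org/dc/terms/format", "text/csv"),
]

def objects(subject, predicate):
    """Follow a link: return everything `subject` points to via `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# A harvester that only knows the article's URI can discover its dataset:
print(objects(ARTICLE, REFERENCES))
```

In a real linked-data deployment the triples would be published on the web (for example as RDF) rather than held in a Python list, but the principle of machine-followable links is the same.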

Before the data is published and linked, it has to be put somewhere. Most of our research participants said that they store their data in a personal storage system, either their own work or home computer, or on a portable storage device. While, of course, such spaces may be linked to the internet, this is rather like keeping the data in a filing cabinet: anyone can in principle find the data, but they have to search very hard or ask the data keeper to give it to them. Data therefore has to be uploaded to a space that is openly accessible, which could be a university repository, a subject repository, a web page, or even the publisher’s own servers.

Again, this is not as simple as it seems: first you have to choose your repository and ensure that it will accept your sort of data. Once the data is safely held in a repository, it must be permanently linked and archived. As digital repositories are relatively new, there is the question of what happens if the repository you have chosen has to close: where will the data go? If the data is uploaded onto the publisher’s server, do they have the capacity to hold all the data for all the journals that they publish, as well as all the articles? Suddenly the storage needs of a single article can become top heavy. At the moment there are no very clear answers to these concerns, so guidelines and methods of best practice need to be resolved before all data can be truly linked.

What to put in an ideal JoRD service

The Feasibility Study has been asking researchers, representatives of publishing houses, repository staff and librarians about their image of an ideal JoRD service, to give some indication of how to build a resource that will be useful. So far, the ideal service, one which would achieve the desires of all the stakeholders, would not only include a database containing the details of every journal data sharing policy, cross-matched with funders’ requirements and lists of suitable repositories, but would also employ a team of human staff to constantly update the database, provide customer service and advice about best practice, and give educational workshops and seminars. This would be ideal, but expensive, and ideals cannot always be reached, at least not initially.

So, who wants what out of the service? These are the service requirements each stakeholder group suggested.

Researchers would like the service to:

  • Have a clear, visual user friendly website with technical support, and information about the service and its scope
  • Include summaries of policies, RCUK baseline policies, compliance statistics
  • Include the URL of journal policy
  • Provide contact details of researchers

Researchers told us that they would use the service to find the journal which is right for their data and their funder’s requirements, to find appropriate repositories, and to look for openly accessible data.

Publishers asked for:

  • A simple attractive web page
  • An authoritative resource
  • Compliance monitoring and sanction information
  • Technical error reporting
  • Guidance about best practice, current issues, changes and trends and a model policy
  • A policy grading system
  • Levels of membership

Publishers said that they would use the service to gather competitor intelligence, as a source of advice, and as a central resource for information about funders’ requirements and accredited repositories.

Both researchers and publishers wanted:

  • Guidelines about data submission,  such as copyright, use licensing, ethical clearance, restrictions and embargoes and file format
  • URLs of places where data can be archived and retrieved

As far as other stakeholders are concerned, librarians considered that the service could give publication and funding compliance guidance for researchers, as well as support research data management policies. Funders thought that the service could track the development of journal data policies and influence the data sharing behaviour of researchers. Representatives of repositories thought that a central data policy bank would be a resource where they could check the consistency and compliance of journal data policies and possibly identify partner journals. It seems that a JoRD Policy Bank Service would have something to offer everyone in the research industry. The quest now, as in all research activity, is finding someone who will pay, so that the ideal service will not be such a distant dream.

Data comes in all sorts of shapes and sizes

The JoRD project has not set out to define the term “data” (or the singular form of the word, “datum”). This was a fortunate choice, because one of the messages that has clearly come across from all the participants of our study is that data can take many forms. The recent Royal Society Report, “Science as an Open Enterprise”, (http://royalsociety.org/uploadedFiles/Royal_Society_Content/policy/projects/sape/2012-06-20-SAOE.pdf) includes a glossary of data terms which illustrates the ways in which the term “data” can be used. For example:

  • big data – data that requires massive computing power to process
  • broad data – structured big data
  • data set – a collection of information held in electronic form
  • linked data – data that has been allocated a unique identifying number to be able to access it from an electronic storage facility

… and those are just a few of the terms it explains. The word “data” is defined as “qualitative or quantitative statements or numbers that are (or assumed to be) factual”. The researchers that were part of this study considered that their data took more forms than just statements or numbers.

Researchers described the data that their research generated as software, video footage, geodata, geological maps, ontologies, web services and data models, as can be seen in the table below. This multitude of forms makes data difficult for publishers to include in their online published articles. The publishers said that linked data in a journal article should be “fit for use” and “replicable”, and consider that data in many different formats is “messy” and is currently not supplied with sufficient metadata. Another consideration is the resulting file size of an article if the publisher saves the embedded data on their own servers. Data repositories and data centres are a more practical method of data storage, with published articles incorporating linked data.

Therefore that is one reason for Journals to have a data policy, and a good argument for those policies to be collected and made accessible in a centralised resource, a JoRD Policy Bank  Service.

Researchers’ descriptions of their data spanned four types: qualitative (documents and text), quantitative (figures), visual data (images) and virtual data (software or protocols). Their examples included:

  • Collections of examiner reports and questions, supervisory reports, letters and other documentary evidence
  • Datasets of measurements and statistical analyses
  • Digitised textual sources
  • Excavation, field observation and environmental monitoring records, and software to collate, mine and analyse them
  • Excel sheets
  • Focus group and interview transcripts, footage of people using computers, digital photographs
  • Geodata
  • Geologic maps, chemical and isotopic analyses of Earth materials, GIS datasets
  • Interview transcripts
  • Ontologies
  • Reports
  • Visualizations
  • Web services, data models and specifications

Summary of workshop, discussion about the nature of JoRD

Here is another summary of the concluding discussion that took place at the workshop on 13th November. This is about the expectations and perceptions of publishers concerning the nature of the JoRD Data Bank service.

A prominent consideration of the publishers was that JoRD should be an authoritative resource, such that a JoRD compliance stamp, or quality mark, could be displayed on journals’ websites. There was discussion that, for JoRD to be authoritative, the content of the database should be added, updated and maintained by the JoRD team. It was mentioned that publishers might initially populate the database, but ongoing maintenance would be the responsibility of JoRD. There should, however, be a guarantee that the content is accurate, and publishers would need to commit to providing policies in machine readable form so that they could be automatically harvested.

It was suggested that the operational database should not be merely a static catalogue or encyclopaedia. It was requested that the non-compliance of a journal with its data sharing policy, or with a funder’s policy, could be flagged and reported to the publisher, although it was queried whether that was the remit of the service or of the publisher themselves. Similarly, it was questioned whether the service would mediate user complaints, and proposed that it would engage with complaints concerning policies only. To maintain functionality, it was asked whether there could be automatic URL checking which would send an alert to the publisher if links were broken. Updates to policy changes would also be a useful function.
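The automatic URL checking discussed here could be quite simple in principle. A minimal sketch, not the JoRD implementation, which probes each stored policy URL and collects those that no longer respond (the URL in the demonstration is a deliberately unresolvable example):

```python
# A minimal sketch of automatic URL checking: probe each policy URL and
# return the ones that fail, so an alert could be sent to the publisher.
# The example URL is hypothetical and deliberately unresolvable.
from urllib.request import Request, urlopen
from urllib.error import URLError

def broken_links(urls, timeout=5):
    """Return the subset of `urls` that fail to respond successfully."""
    broken = []
    for url in urls:
        try:
            # A HEAD request avoids downloading the whole page body.
            with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
                if resp.status >= 400:
                    broken.append(url)
        except (URLError, ValueError, OSError):
            broken.append(url)
    return broken

if __name__ == "__main__":
    print(broken_links(["http://no-such-host.invalid/data-policy"]))
```

A production service would also need politeness features this sketch omits, such as retries, rate limiting and respect for robots.txt.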

The service website should include a model data policy framework, or an example of a standard data policy, and offer guidance and advice to journals and funders about policy development. However, the processing and ratification of a model policy could be a time consuming process for some publishers. It was asked whether repository policies would also be included, and there was mention of compliance with the OpenAIRE European repository network. The website should also contain:

  • Links to the publishers’ web pages
  • Dates of the records
  • Lists of links to repositories
  • A set of criteria for data-hosting repositories

It should look inviting but businesslike, and be simple and clear, yet sufficiently detailed.

Methods of funding the service were considered, along with the benefits of membership. For example, would only the policies of members of the service be entered into the database? Would there be different levels of membership, or different service options that publishers could choose, and would there be extra costs for extra services? One such service could be to maintain historical records and persistent links to former policies. In the publishers’ opinion, they would be prepared to pay for a service that is transparent and would save them time.

Other comments included:

  • Would the service be a member of the World Data System?
  • Could it be released in Beta?
  • There are around 400–600 titles to enter initially
  • Once set up, the service could be studied to discover its effectiveness and impact
  • Further consultation may be needed

Very brief summary of JoRD workshop

On Tuesday 13th November some of the JoRD team met with representatives of several well known journal publishers for a workshop session to discuss a number of points concerning the potential JoRD data bank service. This is a very potted summary of the discussions that took place. If any of the attendees are reading this and feel that their comments have not been correctly interpreted, then please comment to correct any misunderstandings.

Preservation of and sustained access to published supplementary material: The current situation
The group perceived that at present there are a variety of issues that impede the maintenance of data added to an on-line journal as supplementary material, or even the practice of including data within an article. The areas where difficulties lie include:
  • Technology
  • Data repositories
  • Embargoes
  • Peer review
  • Licensing
  • Copyright
Unstable URLs, PDF formats and usable forms of preserved data present technological problems that need to be solved to ensure that data can be accessed in the long term; transferring data to new formats, however, presents fewer difficulties. Data may be linked to external repositories, but these present a problem because they each have different policies and practices. Embargoes placed on data release complicate matters; there is no standard for their length. To sidestep these issues, an alternative solution would be not to include the data file with the article but to add information on where it can be obtained directly from the researcher. However, online journals will be upgrading to enriched HTML and should therefore commit to including data.

The group were concerned about the peer review of data, which is currently ad hoc. It was queried whether peer reviewers have time to examine data alongside judging arguments, and suggested that data be reviewed by the research community. Currently, publishers’ practices concerning the licensing and copyrighting of data as supplementary material vary greatly; EU legislation, however, does not allow data to be copyrighted. Authors could be offered choices of licensing, and work is being done to define data and on forms of data citation. Publishers do, however, feel a duty of care to the knowledge that they publish.

About data repositories: Advantages and disadvantages
Ideally, publishers would like repositories to be a searchable archive that manages data and collects retrospectively, such as the library of Columbia University gathering data for PLOS.

Advantages

  • The situation for publishers would be made simpler should data be held in external repositories
  • Technically more able to deal with digital data
  • Guidelines about re-depositing data if closed
  • Institutional repositories could manage data then aggregate it as in Australia

Disadvantages

  • May want to take over from publishers
  •  Not currently ready for influx of data
  • Funding may not be sustained
  • Discovery issues

Solutions to any of the issues posed above are not given in this post, but there is opportunity for you to comment. The remainder of the discussion focused on the structuring and content of a JoRD Policy Bank service, which will be summarised in the next post.

Online survey results part two

The second set of questions asked in the online survey sought the opinions of researchers about data sharing and the usefulness of a data policy bank service. They were as follows:

  • Where do you access or locate the research output of other researchers?
  • In your opinion, what are the key drivers behind increasing access to research data?
  • In your opinion what are the main problems associated with sharing research data?
  • What do you think about linking a publication with digital data that are integral to its main conclusions?
  • What do you think about linking an article with supplementary material that enhances the article?
  • Do you think that journals should provide digital data sharing policies?
  • Do you think there would be benefits in having a service offering information about journal research data policies?
  • Would you use a service of this kind?
  • What information should be included in a policy bank service?
  • Do you have any other comments?

Most of the respondents locate other researchers’ data through colleagues or in their own institution or organisation, and feel that the four most important key drivers to increasing access to data are:

  • Openness
  • Accountability
  • Increased access to data
  • Increased efficiency of research resources

The most frequently expressed concern is that of attribution of intellectual property rights to the data being shared. The next most frequently expressed issue is that current institutional models, and the mindsets of institutions and some individuals, create barriers to sharing data. However, just over one-third of respondents (35%) consider that linking digital data as an integral part of the main conclusions in published online journals would be useful and should be mandatory.

Linking articles to supplementary data to enhance the article was considered useful by more respondents (43%), although this would also depend on the context of the data shared. Over 74% of researchers considered that journals should provide data sharing policies, and a similar percentage (73%) thought that such a service would be of benefit, because it would be a central resource. Nearly 80% of respondents said that they would use such a service, either to gather data or as a means of selecting where to publish their work. Many ideas were suggested for what to include in a policy data bank, including:

  • Clarity and simplicity of use
  • Archiving URLs
  • Guidelines
  • Usage licences (eg Creative Commons)

Eight researchers commented that they considered the initiative important.

The fewest respondents said that they gather other research data from their own blog, or from hard copy data sets. The concerns expressed about sharing data were those of trust, confidentiality and the need to overcome existing mindsets and institutional barriers. A small number of researchers felt that sharing data would affect the future of research, and that before sharing data certain conditions would have to be fulfilled. A very small number of people (3%) said that linking data to main conclusions was not useful and was unnecessary, that they would only be interested in a published article and not in any additional material, and that journals should not provide data sharing policies. One researcher commented that further research on the topic, with a trial, would help their decision as to whether published data sharing policies would be of personal benefit.

Three percent of respondents thought that there would be no benefit to a data policy bank service, because it is not needed, not feasible, or would conflict with journal ethos. Twenty-one percent considered that they would not use such a service because they did not find it relevant, and one researcher stated that they would prefer to deal directly with the journal.

On balance, it appears that more respondents are pro-data sharing, have positive opinions about the JoRD policy bank service and would find it useful, than respondents who feel that there is no need or use for such a service.

Preliminary Results of Online Questionnaire

The online questionnaire closed on Monday 5th November, having been answered by 70 researchers. The survey comprised 20 questions asking for information about the researcher, their data sharing habits, their opinions on the possibility of openly sharing their data, and the utility of a policy bank service. The first ten questions were as follows:

  • What is your academic discipline?
  • What is your subject?
  • How long have you been a researcher?
  • In which part of the world is your research institution based?
  • Do you generate research data/materials/programs etc?
  • What kind of data/materials/programs do you generate?
  • Where do you currently store your digital data?
  • Where do you currently store your non-digital data?
  • How accessible are your data/materials/programs to other researchers?
  • Are your data/materials/programs etc sharing habits going to change in the future?

Most of the respondents worked in the disciplines of Science or Social Science; however, there were representatives from a substantial range of fields, which means that the self-selecting sample was from a cross-section of research disciplines. The most frequently listed subject was some variety of Information Studies; around 33% of respondents were actively working on a PhD or MPhil, and roughly 30% had been post-qualification researchers for between 5 and 14 years. The respondents were overwhelmingly based in Europe, and nearly all of them considered that they generated some sort of data, which was mainly qualitative, with an equal balance between textual and numerical data. Most people stored digital data on their own computer and on a work server. The favoured form of other digital storage was Dropbox. However, when it came to non-digital data, many more people stored it at their workplace. Surprisingly, around 56% of respondents already share their data, albeit with their colleagues. Slightly more researchers thought that they were unlikely to change their sharing habits (approx 37%) than thought they would change them (36%).

The fewest respondents were from the field of Economics, one respondent was studying for an MSc, and fewer respondents had been working as researchers for over 15 years. Geographically, a very small number of respondents were based in South America and Africa, and very few people answered that they did not generate any data. Visual data was the form least often generated. Few respondents stored digital data in a disciplinary repository or archive, or non-digital data at an external repository. One respondent appeared to destroy all raw data after research publication. None of the respondents answered that they shared data with no one, although certain researchers shared only with their research partner. A few considered that they would share less of their data in future, while a small number of researchers were not able to share because of the sensitive nature of the data.

Questions 11 – 20 will be analysed and reported next week.


Incentivisation and Data Sharing – why should I cite the data I have used?

DATA CITATION

National Archive of Computerized Data on Aging (NACDA)

Browsing data archives generally, the following was found amongst the pages of NACDA concerning why re-used data should be cited:

Citing data files in publications based on those data is important for several reasons:

  • Other researchers may want to replicate findings and need the citation to identify/locate the data.
  • Citations are harvested by key social sciences indexes, such as Web of Science, providing credit to the researchers.
  • Data producers and funding agencies can track citations to measure impact.

http://www.icpsr.umich.edu/icpsrweb/NACDA/studies/4248/detail

These statements demonstrate the incentivisation process for people to share their data and make it available for re-use. Benefits accrue back to the original researcher(s) for having shared their data, and the discipline itself also becomes more impactful.

WHAT A DATA CITATION MIGHT LOOK LIKE

Examples

United States Department of Commerce. Bureau of the Census, and United States Department of Labor. Bureau of Labor Statistics. Current Population Survey: Annual Demographic File, 1987 [Computer file]. ICPSR08863-v2. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2009-02-03. doi:10.3886/ICPSR08863

Johnston, Lloyd D., Jerald G. Bachman, Patrick M. O’Malley, and John E. Schulenberg. Monitoring the Future: A Continuing Study of American Youth (12th-Grade Survey), 2007 [Computer File]. ICPSR22480-v1. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2008-10-29. doi:10.3886/ICPSR22480
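The examples above follow a consistent pattern: creator, title, a [Computer file] marker, version, place and distributor, date, and a persistent identifier (DOI). A minimal sketch of assembling a citation in that style from its component fields; the field values in the demonstration are hypothetical, not a real study:

```python
# A minimal sketch of building a data citation in the style of the
# ICPSR examples above. The demonstration values are hypothetical.
def format_data_citation(creator, title, version, place, distributor,
                         date, doi):
    """Creator. Title [Computer file]. Version. Place: Distributor, Date. doi:DOI"""
    return (f"{creator}. {title} [Computer file]. {version}. "
            f"{place}: {distributor}, {date}. doi:{doi}")

print(format_data_citation(
    creator="Example Research Group",
    title="Survey of Data Sharing Practices, 2013",
    version="v1",
    place="Nottingham, UK",
    distributor="Example Data Archive [distributor]",
    date="2013-06-01",
    doi="10.9999/example",
))
```

Because the DOI resolves to the data set itself, a citation assembled this way both credits the data producers and lets other researchers locate the exact file cited.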