Categories
general

A Landscape Study on Open Access and Monographs

http://bit.ly/2hTgA7e

Our new Knowledge Exchange report maps the open access monograph landscape in the six Knowledge Exchange countries (Finland, the Netherlands, the UK, France, Denmark and Germany), together with Norway and Austria.

A large part of the open access discussion has so far concentrated on journals, but in recent years the focus has also started to move towards making publicly funded monographs open access. It is likely that monograph mandates will follow where mandates for journal articles have led the way. In the UK, HEFCE has stated its intention to move towards an open access requirement for monographs in the exercise that follows the next REF (expected in the mid-2020s). This doesn’t seem too far in the future, as it can take many years to write a book!

There are several projects and initiatives experimenting in the area of open access monograph publishing. In spite of this rich, evolving landscape, it has so far been hard to get a systematic overview of characteristic developments and to assess which specific issues would benefit from further attention and discussion. To fill this gap, and in order to effectively support open access monograph publishing in its partner countries, Knowledge Exchange initiated this new landscape study on open access monographs. The study was supported by Knowledge Exchange, the FWF Austrian Science Fund, CRIStin and Couperin, and written by Frances Pinter, Eelco Ferwerda and Niels Stern. It builds on in-depth interviews with experts from over 70 institutions across Denmark, Finland, Germany, the Netherlands, the UK, France, Norway and Austria, a survey, and an analysis of existing information. The focus is on three areas: the inclusion of open access monographs in open access policies, funding streams to support open access monographs, and business models for publishing open access monographs. The report has been designed to be read in a number of ways: the reader can either concentrate on particular areas of interest or focus on the eight very rich country studies.

It is clear that the eight countries examined in the report are at different stages of enabling monographs to go open access, and that no country has found the perfect solution for this transition. But by looking at how particular issues have been addressed we can learn from each other and hopefully build a better system.

The report mentions a number of interesting open access book publishing initiatives which are experimenting with new business models and other innovative features. For example, Language Science Press in Germany is a very successful new academic-led press with a discipline-specific focus. It has already gained significant credibility within its community and was initially developed with support from the DFG (German Research Foundation). The knowledge gained by this press and the other examples mentioned will be valuable when considering similar projects in other disciplines. The KE study also complements the Jisc Changing publishing ecologies report by Graham Stone and Janneke Adema, which maps the rise in the number of independent presses set up by academics alongside university presses in the UK.

The report closes with recommendations for a number of groups, including Knowledge Exchange. For example, it suggests that Knowledge Exchange facilitate the exchange of ideas and raise awareness among policy makers across countries of the issues around mandating open access monographs. Could this lead to policies that can be adopted across countries?

Jisc will also review the report in order to recommend best practice for open access monograph publishing in the UK as part of research being undertaken by the Jisc Collections research team on open access publishing infrastructure in the UK.

 

Categories
general publishing platform

The Rise of New University Presses and Academic-Led Presses in the UK

Blog post by Janneke Adema (Coventry University), Graham Stone (Jisc) and Chris Keene (Jisc)

Our new report, Changing publishing ecologies: A landscape study of new university presses and academic-led publishing, maps the rise of new university presses and academic-led presses in the UK.

The landscape of academic publishing has seen a discernible increase in new publishing initiatives entering the sector over the last few years. These new publishing initiatives have a potentially disruptive effect on the scholarly communication environment, providing new avenues for the dissemination of research outputs and acting as pathfinders for the evolution of academic publishing and the scholarly record.

In 2016 we commissioned a research project focused on institutional publishing initiatives, covering academic-led publishing ventures (ALPs) as well as new university presses and library-led initiatives (NUPs). We are pleased to announce the publication of the report ‘Changing Publishing Ecologies. A Landscape Study of New University Presses and Academic-led Publishing’, which charts the outcomes of this research.

The report, by Dr Janneke Adema (Coventry University) and Graham Stone (Jisc, formerly Collections and Scholarly Communications Librarian, University of Huddersfield), benchmarks the development of NUPs and ALPs and fills in knowledge gaps. It complements our previous research, such as OAPEN-UK, the National Monographs strategy, the Jisc/OAPEN Investigating OA monograph services project and the new Knowledge Exchange Landscape Study on Open Access Monographs, which will be published in September 2017.

The NUP and ALP strands of the research study were co-ordinated and run in tandem by Stone and Adema. The study was informed by a desk review of current library publishing ventures in the US, Europe and Australia, and an overview of international academic-led initiatives and their existing and future directions. The NUP strand consisted of a survey, which collected 43 responses, while the ALP strand was informed by interviews with 14 scholar-led presses. Taking different approaches for these two types of press, the report captures the take-up, reasoning and characteristics of these initiatives, as well as their future plans.

The report concludes with a series of recommendations to help support and foster new developments in this space, share best practice, and provide the collaboration, tools and services needed to facilitate further innovation. Specifically, the report recommends supporting community building for both NUPs and ALPs, establishing guidelines for setting up a press, providing legal advice and guidelines for preservation and dissemination, and developing future projects to support these new initiatives. In particular, the community expressed a need for a toolkit that would aid both existing NUPs and academic-led presses, as well as those universities and academics that are thinking about setting up their own publishing initiatives. This toolkit, based on information collated from the communities, could consist of how-to manuals, best-practice guidelines, standardised contracts and agreements, and alternative FLOSS software able to support the production process.

The findings of the research carried out for this report provide an evidence base for future support for both new university presses and academic-led publishing initiatives, helping to create and maintain a diverse publishing ecology. We plan to work with both communities, their members and partners to build further on these recommendations and to seek suitable ways of taking these ideas forward to realisation.

Full report: http://repository.jisc.ac.uk/6666/1/Changing-publishing-ecologies-report.pdf

Interview transcripts with Academic-led presses: http://repository.jisc.ac.uk/6652/

Categories
citation experiment

Towards full-text based research metrics: Exploring semantometrics

See also:
Report of Experiments (PDF)
Open Citations and Responsible Metrics (PDF) – Briefing note by Cameron Neylon, Curtin University
Cameron Neylon’s comment on the experiment

The HEFCE Metric Tide report has started a debate and proposed directions of travel for more responsible metrics. Our intention for what we called the ‘open citation experiment’ was to support this debate, to investigate open data sources on which indicators can be based, and to help inform new ways forward.

Currently, the most widely used indicators for research assessment are derived from proprietary data sources and focused on citations of the peer reviewed literature.

In many cases this data is not transparent, community-governed or auditable. This comes with a number of risks for universities and funders: it encourages the costly purchase of near-monopoly products, results in unaccountable and unauditable allocation of public funds, and means that benchmarks are often not seen as legitimate and are therefore not useful.

Image 'measurement' by Freddy Fam, CC BY-NC-ND 2.0 (creativecommons.org/licenses/by-nc-nd/2.0)

The Experiments

Petr Knoth and Dasha Herrmannova from the Open University have experimented with a new approach to research assessment metrics (semantometrics), which isn’t based on citation data alone but argues that the full text is needed to assess the value of a research article. I should also add that we found this approach very promising, as it makes use of a new data source: the increasing availability of open access full texts in the publication ecosystem.

The experiment was an attempt to create the first semantometric measure, based on the idea of measuring an article’s contribution to the progress of scholarly discussion. At a very simplified level, you could say that the indicator looks at the subject matter of the papers citing and cited by a given paper. If the subject matter of the citing papers differs greatly from that of the cited papers, the semantic distance is considered greater. This is considered desirable: the theory goes that such a paper has made a greater contribution to research, as it makes a greater leap between previous and new discoveries, and hence it receives a higher score on this metric.
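
To make the intuition concrete, here is a minimal sketch of that calculation in Python. It assumes TF-IDF vectors and cosine distance over full texts; the actual implementation behind the experiment may well differ, and the texts below are placeholders.

    # A sketch (our assumption, not the authors' exact method): the contribution
    # of a paper is taken as the average semantic distance between the full texts
    # of the papers it cites and the full texts of the papers that cite it.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def contribution(cited_texts, citing_texts):
        """Average semantic distance between cited and citing full texts."""
        vectorizer = TfidfVectorizer(stop_words="english")
        vectors = vectorizer.fit_transform(cited_texts + citing_texts)
        cited = vectors[:len(cited_texts)]
        citing = vectors[len(cited_texts):]
        # Distance for one pair of texts = 1 - cosine similarity.
        distances = 1.0 - cosine_similarity(cited, citing)
        # A larger average distance suggests a larger "leap" between the prior
        # work a paper builds on and the later work that builds on it.
        return distances.mean()

    # Hypothetical placeholder texts, purely for illustration.
    cited = ["full text of a cited paper ...", "full text of another cited paper ..."]
    citing = ["full text of a citing paper ...", "full text of another citing paper ..."]
    print(contribution(cited, citing))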

To explain what this means in more detail, and how the contribution measure is calculated, Petr and Dasha have developed a demonstrator page: http://semantometrics.org

The experiments report provides a correlation analysis of the contribution measure with two known metrics – citation counts and Mendeley readership – and analyses the behaviour of the contribution measure in relation to these metrics. However, rather than looking for a single new metric that could complement or replace citation counting, the aim of this experiment was to present an argument for studying this area more widely, to encourage the development of new semantometric methods, and to demonstrate that this is already possible with openly available data.

To perform this analysis, both textual data of the research papers and citation data were needed. As no such dataset existed, the experiments have been conducted on a dataset obtained by merging data from CORE, Microsoft Academic Graph and Mendeley.

The report also looks at how article-level metrics can be extended to higher-level metrics, suggesting a new, fairer approach. While more work is needed to validate the proposed approach, the report emphasises the need to move away from ad-hoc higher-level metrics (such as the h-index) to metrics that demonstrably fulfil certain objective criteria and show good performance on real data.
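
As a point of reference, the h-index mentioned above is exactly this kind of ad-hoc aggregation: it collapses a set of per-paper citation counts into a single number for an author or institution. A minimal illustration of the standard definition (not code from the report):

    def h_index(citation_counts):
        """Largest h such that at least h papers have h or more citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, count in enumerate(counts, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Five papers with 10, 8, 5, 2 and 1 citations give an h-index of 3.
    print(h_index([10, 8, 5, 2, 1]))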

To find out more about the experiment and the results see the report of experiments:  Towards full-text based research metrics: Exploring semantometrics (PDF)

Open Citation Workshop March 2016

In a workshop at the end of March 2016, we reviewed the outcomes of the experiment with a number of sector representatives and also discussed potential next steps for research assessment metrics based on an open approach more generally.

We concluded that the argument for the usefulness of full text and openness in performing research assessment is convincing but that the work into semantometrics would need to go much further. We would need to investigate, for example, how the contribution measure compares to expert judgement. This would help us to see if the proposed indicator reflects any desired characteristics of research articles.

There were also plenty of suggestions for how to raise the profile of responsibly generated and applied indicators, how to improve and build on existing open data sources for research assessment metrics and how to make new ones available.  We’re working on a more detailed plan for next steps and will share this in due course.

In the meantime, we invited Cameron Neylon to comment on the experiment, which you can read in his blog post – Taking Responsibility: How an answer to research assessment might just be “42”.

OR2016, 13-16 June, Dublin

Petr Knoth and Dasha Herrmannova presented at the OR2016 Conference on open research evaluation metrics:

Oxford vs Cambridge Contest: Collecting Open Research Evaluation Metrics for University Ranking

We are also very interested in your views about the experiment and any thoughts you may have about open metrics and indicators for research, so please do comment below.

 

Categories
citation experiment

Taking Responsibility: How an answer to research assessment might just be “42”

A guest post by Cameron Neylon (Curtin University) in response to the open citations experiment

In Douglas Adams’ Hitchhiker’s Guide to the Galaxy, a massive computer is created to give the answer to the ultimate question of life, the universe and everything. After ten million years of computing it gives its answer:

ONE: Well?
DEEP THOUGHT: You’re really not going to like it.
TWO: Tell us!!!
DEEP THOUGHT: All right. The Answer to Everything…
TWO: Yes…!
DEEP THOUGHT: Life, The Universe and Everything…
ONE: Yes…!
DEEP THOUGHT: Is…
THREE: Yes…!
DEEP THOUGHT: Is…
ONE/TWO: Yes…!
DEEP THOUGHT: Forty two.

(Pause. Actually quite a long time)

Douglas Adams: The Hitchhiker’s Guide to the Galaxy: The original radio scripts

The joke of course is that you need to know the question to understand the answer. But in the context of research assessment the joke is sharper. How can an arbitrary number (a number you can imagine Adams worrying about making sufficiently banal: not a prime, not magic, just…42) capture the complexities of the value created by research? And yet is it an accident that the Journal Impact Factor of Nature falls just short of 42?

The HEFCE metrics report was a rare thing, a report that gained almost universal support for finding a clear and evidenced middle ground on the future of research assessment. More data is coming and ignoring it is foolhardy – the “Metric Tide” of the report’s title – but we should also engage on our own terms, with a critical, and scholarly, approach to what these new data can tell us.

The report developed the idea of “Responsible Indicators” that have a set of characteristics: robustness, humility, transparency, diversity, and reflexivity. Responsible indicators are ones that will stand up to the same kind of criticism as any research claim. That means that it’s not just the characteristics of the indicator that matter but also the process of its measurement. Does it address a well-founded question or decision? Are the data available for checking and critique? Is the aggregation and analysis of this data robust and appropriate?

Knoth and Herrmannova, in their experiment developing a new indicator, have achieved something valuable. They have developed a numerical indicator that is broadly independent of the number of citations an article receives. There are many issues to raise with the indicator itself and what it can tell us, but first and foremost it shows the potential of using the content of a research output itself for assessment.

The indicator they develop measures the “semantic distance” between the articles cited by a specific output and the articles that in turn cite it. It seeks to use this distance as a measure of contribution: in essence, the distance that the output has contributed to the journey of knowledge creation. Their report illustrates the limitations of the available data, both in terms of full text and the availability of citation information. It is a valuable contribution to the debate on what indicators can be based on.

There are technical issues to be raised: does the semantic analysis actually measure distance in meaning, or only in syntax? Is it sensitive to changes in language rather than substance? It worries me that the quantitative analysis gives a normal distribution. Simple chain processes, like the use and processing of information, most often give power-law distributions. Normal distributions suggest many different processes all contributing to the final outcome.

However, the biggest issue for me lies in the framing. The indicator is labelled “contribution”, but we’re not sure what it really measures. There are two problems with the indicator in its current form. The first is that we don’t have a sense of how it relates to expert judgement. Expert judgement is far from perfect, or even reliable, but without an understanding of that relationship we’re unlikely to see much adoption.

A related, and potentially larger, issue is that it is not clear what question is being answered or what problem is being solved. One of the big problems with our current metrics is that they have become the target, as opposed to a (pretty poor) proxy for something that we actually care about. Just as in the Hitchhiker’s Guide to the Galaxy, where the answer to the question of the meaning of life, the universe and everything is 42, the result is meaningless without understanding the question. We can’t tell how useful or accurate the contribution indicator is without asking “for what”.

This new indicator is valuable because it illustrates what is possible. I disagree with aspects of its implementation, but that’s almost part of its value. Maybe it is just what you need to solve your particular problem. Rather than accept or reject an indicator in a vacuum, we need a toolbox of approaches that lets us ask a different question: is it useful in addressing this specific question?

42 might be a very useful answer, at least if you know how to ask the right question. But we need the tools to be able to tell. Towards the end of Adams’ radio play the suggestion is made that the answer and the question cannot co-exist in the same universe. For Adams this was a joke, but for a researcher this is our bread and butter. We can only ever refine our questions in response to our answers and our answers in response to our questions. We just need to apply our own standards to measuring ourselves.


About the author: Cameron Neylon is an advocate for open access and Professor of Research Communications at the Centre for Culture and Technology at Curtin University. You can find out more about his work and get in touch with Cameron via his personal page Science in the Open.

Categories
scholarly comms

International advances in digital scholarship – Jisc and CNI conference

In today’s scholarly communication environment, it’s easy to be heads down implementing various funder and government policy requirements on open access and research data management.

For all involved in academic research, whether as a researcher, managing services to support research, or providing oversight and leadership, we all need to pause occasionally to take a broader look at what’s happening in the world of scholarly communications and open research. What are the current issues we, as a community, face in academic research, and what initiatives are currently in progress to help address them? How will the digital research environment look in five years’ time?

Jisc CNI conference 2016

Jisc and CNI have a long tradition of running a joint conference exploring current issues in scholarly communications. This partnership helps to bring unique insights and ideas from both sides of the Atlantic. This year the conference, entitled ‘International advances in digital scholarship’, aims to answer the questions posed above.

The event will address a number of themes, including: the sustainability of open access; tracking research and research metrics; analytics; research incentives; and managing active research. For anyone involved in academic research, the conference aims to provide a horizon scan of the areas of disruption and of key developments coming in the future.

The one-day conference is on 6 July at Wadham College, Oxford, with a drinks reception the evening before.

You can find out more and register for the event on the Jisc and CNI 2016 conference page. Early bird booking closes this Friday, 13 May.

Categories
general

Open Citations Experiment

The recent Metric Tide report proposed the notion of responsible metrics as a way of framing appropriate uses of quantitative indicators in the governance, management and assessment of research. One of the crucial dimensions of responsible metrics is transparency of measurement systems: data collection and analytical processes should be kept open and transparent (including for university rankings), so that those being evaluated can test and verify the results.

Image by www.futureatlas.com, CC BY 2.0 (https://creativecommons.org/licenses/by/2.0/)

The move to open access and open research means there are now opportunities for more open citation systems.

In a short experiment we’re investigating new forms of research citation and measures that could offer more accurate and transparent methods of measuring research impact based on an open approach.

We think that CORE, which aggregates open access research outputs from repositories and journals worldwide, provides a great basis for this kind of demonstration. CORE is delivered by Jisc in partnership with the Open University (OU).

The open citations experiment team – Drahomira Herrmannova and Petr Knoth – will develop a graphical demonstrator of a new class of metrics for evaluating research. The demonstrator will allow us to visualise and compare, on a small set of preselected publications from CORE, traditional citation counts with the scores for the new metric (the contribution score).

While this new method relies on access to a citation network, it does not use citation counts as evidence of impact. Instead it falls into the class of semantometric/contribution methods, which aim to use the full text of resources to assess research value; you can find out more about them in this 2014 D-Lib article.

As part of the experiment a short report will be produced that:

  • provides a qualitative comparative evaluation of the semantometric/contribution method against a baseline of traditional impact metrics
  • compares the technical challenges of large-scale use of semantometrics and traditional metrics, including their possible adoption in the CORE aggregator
  • proposes possible modifications/improvements to the contribution method and discusses the advantages and disadvantages of the metrics with respect to time delay, validity across subject fields, transparency, and resistance to gaming/manipulation

The demonstrator and report will be available in January 2016, and we’ll post a link to them on this blog.

Following this experiment we plan to evaluate the approach and hope to work with researchers, librarians, research managers, funders and other projects working in this space to assess feasibility and to inform thinking on new services and approaches for research citation.

If you have any thoughts, ideas or connections, do let me know.