AISA 3rd Annual Conference

Open Science and Research Integrity

November 9th 2017

Room 113 via Festa del Perdono 7

3:00 pm – 7:00 pm

Alberto Baccini (University of Siena), Giuseppe De Nicolao (University of Pavia)
ANVUR (the Italian agency for research evaluation) and the closed data of state bibliometrics

Italy is probably the Western country where the obsession with labels of excellence is most profoundly shaping institutions and researchers’ behavior. A growing centralized control, emerging from research assessment exercises, is realized through apparently technical devices, the scientific probing of which sparks a conflict between political, scientific and ethical dimensions. In this contribution, attention is focused on the experiment carried out and analyzed by the Italian agency for research assessment (ANVUR) to validate the methodology adopted for research assessment. A detailed description is given of the dissemination efforts made by the agency, which published excerpts of official reports in the working papers of many institutions, in scholarly journals and on think-tank blogs. We highlight an unprecedented conflict of interest: the methodology and results of the research assessment were justified ex post by papers written by the same scholars who had developed and applied the methodology officially adopted by the Italian government. Moreover, no replication of these results is possible, because the data were not made available to scholars other than those working for ANVUR.

Mario Biagioli (Center for Science & Innovation Studies – UC Davis)
Metrics and misconduct: redefining “publication” and “evaluation”

Academic misconduct has traditionally been tied to the stress generated by the “publish or perish” culture and, more recently, to the new opportunities offered by electronic publishing. I argue, instead, that misconduct is undergoing a radical qualitative transformation, adapting itself to modern metrics-based regimes of academic evaluation and the new incentives and opportunities they provide. We are transitioning, so to speak, from “publish or perish” to “impact or perish.” These changes are affecting the practices as well as the discourse and conceptualization of misconduct. Traditional definitions of misconduct were rooted in oppositions between truth and falsehood, right and wrong, honest mistake and fabrication, but some of the new metrics-based misconduct could be seen as a form of gaming rather than a clear violation of ethical norms or laws. The new metrics-based forms of misconduct are thus challenging us to redefine misconduct, but they are also, at the same time, asking us to rethink what we mean by “publication” and “impact.”

Enrico Bucci (Sbarro Health Research Organization – Temple University, Philadelphia; Resis Srl – Ivrea)
The connection between bibliometric evaluation and scientific fraud

The current trend of pushing extreme competition for funding and professional advancement among researchers and research groups is justified by arguing that, since resources are limited, taxpayer money should be allocated wisely, so as to obtain the maximum possible return.
While this line of reasoning has some merit, it is far less obvious that it justifies a mechanism which betrays the original intention of spending public money properly to promote “better” scientific research. Even leaving aside the problematic definition of “better research”, it is easy to show how current policies, based on an extensive albeit distorted use of bibliometric parameters, are flawed, especially when they lead to concentrating scant public investment on very few “excellent” institutions or researchers. As a matter of fact, pushing the bibliometric evaluation of research groups and institutions too far is distorting scientific research into a competition to publish as many papers as possible, with an exponential increase in false or manipulated research results. Since those publications are in turn used to feed the current research evaluation exercises, a threatening positive feedback mechanism is now in place. It is all too easy to forecast that this mechanism will allocate the available resources to the worst research, rewarding at best the least innovative science and promoting misbehaviour aimed at publishing as much and as soon as possible; a process which has in fact already started, threatening the research enterprise in an unprecedented way.

Giuseppe Longo (Centre Cavaillès, CNRS, Ecole Normale Supérieure, Paris; Department of Integrative Physiology and Pathobiology, Tufts University School of Medicine, Boston)
Science, scientistic distortions and meaning

A new correlation seems to be established between evaluation tools and “scientism”. On the one hand, bibliometric techniques make difficult what is most important in science: the critical spirit, the truly original idea, the opening of new spaces of meaning, not necessarily consensual. On the other hand, more and more people are being led to believe that science coincides with the progressive and complete occupation of reality by the instruments already available. Thus “optimization methods”, originating in and pertinent to the physico-mathematical theories of the nineteenth century, claim to govern the economy at equilibrium, to identify the optimality of phylogenetic and ontogenetic trajectories in biology, and to guide “deep learning” on Big Data. Wonderful promises (curing Alzheimer’s and Parkinson’s by looking at them “in silico”, personalizing medicine thanks to a perfect knowledge of DNA, predicting without understanding thanks to Big Data…) are accompanied by the use of well-established or old instruments, understandable to all. These lead to easy bibliometric success in the short term and are presented with attractive watchwords (the optimal path, the only possible one, in economics, biology…; machines which “learn deeply”, with or without “supervision” and thanks to “rewards”, evoking a Pavlovian child who learns). The many promises guarantee billions in grants and define projects of “excellence”; scientific doubt, uncertainty, the “negative result”, the criticism that explores other points of view are excluded. In this way, immensely rich projects produce avalanches of publications and citations, in games of mutual referral; these in turn guarantee new funding.
Scientism believes in the cumulative progress of science along the only possible path to truth, which is shown, of course, by those who hold the “majority stake”; bibliometrics is the gauge and indicator of such progress. The close links between science and democracy, and between science and the historical construction of meaning, will be highlighted.

November 10th 2017

Sala Napoleonica via Sant’Antonio 12

9:00 am – 1:00 pm

Maria Cassella (University of Torino)
New tools and practices in open science: the open peer review between opportunities and (a few) perplexities

The paper offers a first reflection on the practices and methodologies of open peer review (OPR), an umbrella term which refers to alternative, open and collaborative modes of peer review. Ford (2013), for example, identifies eight different open peer review typologies, while Hellauer (2016) lists seven. Open peer review improves the evaluation process by making it open and transparent; however, it also has to deal with some challenges.
The paper addresses two crucial issues for the future of open peer review:

  1. how to collect a critical mass of scientifically relevant comments;
  2. whether open peer review can be more effective than traditional peer review systems (single-blind or double-blind).

On the first issue, the author suggests that the best OPR model is one based on an invitation to comment addressed to a selected set of reviewers. As a matter of fact, the different OPR models are not neutral with respect to the diversity of research communities and of publication types and modalities. On the second issue, some studies, such as Bornmann (2011) and Kowalczuk et al. (2015), show the superior qualitative value of OPR.

The uptake of OPR depends on a change in the scholarly communication paradigm. Technology and open science are facilitating the diffusion of different forms of OPR. OPR highlights the service value of the peer review process and fosters dialogue among research communities and disciplines. It also fosters quick recognition of research results and of reviewers’ work. At the same time, OPR meets the Mertonian norm of communalism. In open science the Mertonian norms of science take on new force, even if they remain imperfect.

Diego Giorio (Town Council of Villanova Canavese – SEPEL Editrice)
Open data from public offices to support and validate research

In the information era, the immense wealth of data held in public offices can be made available to all: citizens, scholars, other public entities and researchers. Demographic data, births, deaths with related causes, topographic surveys, museum and library catalogs, information on industrial and artisanal activities, traffic analysis… to mention just a few examples from an almost infinite list.
Laws on the matter are already in force; however, the diffusion of open data has yet to take off. This is due to the shortage of time and staff in Italian public offices, to the poor attitude of employees, and to software not ready to manage the task. With appropriate information campaigns, and with the desirable turnover inside public administrations, it is nevertheless possible to overcome these problems.
A second issue is not to underestimate anonymization: data must be made available in a form detailed enough to be useful and usable, but adequately aggregated to avoid de-anonymization. This is a huge risk and a rather slippery question in the big data era; the risk is significant in Italy, given the specificity of the Italian jurisdiction, which comprises almost 8,000 municipalities, often of very small size.
In any event, assuming that open data is available and properly managed, this huge public asset can have positive effects on many types of research. First, researchers may draw from an open and complete set of basic data; in addition, it may be easier to verify their results. Also, considering that public administration data is not always correct and complete, a reverse check may occur as well, correcting errors and anomalies on the basis of discrepancies revealed between research results and the underlying data.
In conclusion, the idea outlined above represents a potential virtuous circle that is not easy to trigger; but, once powered up, it can only bring benefits to society as a whole.

Daniela Luzi, Roberta Ruggieri, Lucio Pisacane, Rosa Di Cesare (CNR – Istituto di Ricerche sulla Popolazione e le Politiche Sociali, Roma)
Open peer review for research data: a pilot study in Social Sciences

Open peer review has the potential to be applied to all types of research results, from journal articles to project proposals and datasets. However, starting from its very definition, the criteria and procedures needed to ensure a transparent and efficient evaluation are still debated. This discussion is embedded in the context of Open Science, which requires analysing the structural and technological changes in the current scientific communication system. It is in this context that the Mertonian principles, in particular those connected with communality and organized scepticism, become important points of reference.
Considering these issues, this paper explores the applicability of (open) peer review to datasets produced and shared in the Social Sciences. The study is part of OpenUP (OPENing UP new methods, indicators and tools for peer review, dissemination of research results, and impact measurement), a European funded project that addresses the currently transforming science landscape and aims (1) to identify ground-breaking mechanisms, processes and tools for peer review for all types of research results (e.g., publications, data, software), (2) to explore innovative dissemination mechanisms with an outreach aim towards business and industry, education, and society as a whole, and (3) to analyse a set of novel indicators (i.e., altmetrics) that assess the impact of research results and correlate them to channels of dissemination.
Since OpenUP employs a user-centered, evidence-based approach, it not only engages all stakeholders (researchers, publishers, funders, institutions, industry, and the general public) in ad hoc workshops, conferences and training, but also tests the achieved results in a set of seven pilots. These pilots relate to the project’s three pillars (innovative peer review, dissemination and impact measurement) and are applied to specific research areas and communities: arts and humanities, social sciences, energy and life sciences.
Within the overall objective of the project, this paper presents the methodology used in developing the pilot, which allows us to reconstruct the context of data sharing and evaluation in the Social Sciences. Based on this analysis, the selection criteria for a research community to be involved in the pilot are identified, together with the specific issues to be investigated, taking into account the perspectives of both data providers and users. The analysis is carried out in the light of the Mertonian principles, and in particular of the issues connected with their applicability to the sharing and evaluation of research data.

Silvia Scalzini (LUISS Guido Carli, Lider Lab – Dirpolis Scuola Superiore Sant’Anna)
Who owns my ideas? A path between law and ethics around the concept of “scientific authorship” in the open science era

A scientific work, to whatever branch of knowledge it belongs, is the result of eureka moments, dedication, and deep knowledge of the subject. One of the thorniest problems is the correct attribution of scientific credit to the rightful authors and the due recognition of scientific authorship.
This difficulty arises from several causes. First, a scientific work is not limited to the final article or paper: it also involves measurements, experiments, code, brainstorming and so forth. Such elements cannot easily be partitioned and attributed to specific individuals. Moreover, research misconduct and questionable and/or irresponsible research practices are often the cause of the wrong attribution of a scientific work. The conducts in question can be classified along a spectrum that ranges from cases of extreme gravity (such as plagiarism) to various shades of dishonesty. Suffice it to mention the cases in which ideas, findings and new results are attributed to people other than the young researcher who conceived them. It also happens that the names of the collaborators on a scientific work are ranked arbitrarily on the paper on the basis of various criteria, rather than according to the authors’ effective scientific contribution. Further examples in this vein could be given. The phenomenon is exacerbated in the “publish or perish” framework.
Copyright law protects the expression of ideas, not ideas themselves. Moreover, a work must be original in order to be protectable by copyright. Therefore copyright law is often unable to regulate “scientists’ debts of ideas” (M. Bertani, Diritto d’autore e connessi, in L.C. Ubertazzi (a cura di), La Proprietà Intellettuale, Giappichelli, 2011, p. 276), also for reasons of legal certainty and because of issues related to the effectiveness of protection.
In some cases, scientific communities regulate themselves through social norms (e.g. rules about the ranking of names in publications) and codes of conduct, such as research integrity guidelines. The latter aim to recommend, and educate to, the principles, ethical values and professional standards that govern responsible and correct research conduct (see, for instance, the Research Integrity Guidelines of the Research Ethics and Bioethics Committee of the Italian National Research Council (CNR)).
In addition, open science could facilitate the correct attribution of scientific authorship by disseminating ideas and working papers to a larger scientific community.
Nevertheless, the boundaries of what we may call “scientific authorship” are still blurred, and the related conducts lie in a grey area between serious misconduct and the physiological evolution of science.
This paper therefore aims to reflect on the intersecting and overlapping notions and interpretations that revolve around the concept of “scientific authorship”, between law and ethics.