From MFKP_wiki


Selection: with tag publication-bias [56 articles] 


Fears rise for US climate report as Trump officials take reins

Nature, Vol. 548, No. 7665. (1 August 2017), pp. 15-16,


Officials at the US Environmental Protection Agency are consulting global-warming sceptics as they weigh up a technical review. ...


Big names in statistics want to shake up much-maligned P value

Nature, Vol. 547. (26 July 2017), pp. 16-17,


One of scientists’ favourite statistics — the P value — should face tougher standards, say leading researchers. [Excerpt] Science is in the throes of a reproducibility crisis, and researchers, funders and publishers are increasingly worried that the scholarly literature is littered with unreliable results. Now, a group of 72 prominent researchers is targeting what they say is one cause of the problem: weak statistical standards of evidence for claiming new discoveries. [\n] In many disciplines the significance of findings is judged by ...
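The proposal reported in this news piece is commonly summarized as lowering the default significance threshold for claims of new discoveries from 0.05 to 0.005. Under a true null hypothesis, the false-positive rate of a well-calibrated test simply equals the chosen threshold, which a quick simulation can illustrate (a minimal sketch, not from the article; it assumes z-tests on standard-normal null data):

```python
import math
import random

def null_p_value(n, rng):
    """Two-sided p-value of a z-test for mean 0, applied to n draws
    from N(0, 1), i.e. data for which the null hypothesis is true."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

rng = random.Random(42)
pvals = [null_p_value(30, rng) for _ in range(20000)]
# fraction of null experiments declared "significant" at each threshold
rates = {alpha: sum(p < alpha for p in pvals) / len(pvals)
         for alpha in (0.05, 0.005)}
for alpha, rate in rates.items():
    print(f"threshold {alpha}: false-positive rate ~ {rate:.3f}")
```

The stricter threshold reduces false discoveries roughly tenfold in this idealized setting; the debate in the article is about the costs (e.g. larger required samples), not this arithmetic.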


Reviewers are blinkered by bibliometrics

Nature, Vol. 544, No. 7651. (26 April 2017), pp. 411-412,


[Excerpt] [...] Although journal impact factors (JIFs) were developed to assess journals and say little about any individual paper, reviewers routinely justify their evaluations on the basis of where candidates have published. [...] As economists who study science and innovation, we see engrained processes working against cherished goals. Scientists we interview routinely say that they dare not propose bold projects for funding in part because of expectations that they will produce a steady stream of papers in journals with high impact ...


Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results

PLOS ONE, Vol. 6, No. 11. (2 November 2011), e26828,


The widespread reluctance to share published research data is often hypothesized to be due to the authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically. We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence ...


Communication: science censorship is a global issue

Nature, Vol. 542, No. 7640. (8 February 2017), p. 165,


[Excerpt] [...] Regrettably, suppression of public scientific information is already the norm, or is being attempted, in many countries [...]. We fear that such gagging orders could encourage senior bureaucrats to use funding as a tool with which to rein in academic freedoms. [...] The response of scientists to this type of coercion has been to share scientific information widely and openly using such legal means as social media to defend facts and transparency [...] ...


When a preprint becomes the final paper



A geneticist's decision not to publish his finalized preprint in a journal gets support from scientists online. [Excerpt] Preprint papers posted on servers such as arXiv and bioRxiv are designed to get research results out for discussion before they are formally peer reviewed and published in journals. But for some scientists, the term is now a misnomer — their preprint papers will never be submitted for formal publication. [...] One of the major services of traditional journals is that papers are peer ...


Are conservation biologists working too hard?

Biological Conservation, Vol. 166 (October 2013), pp. 186-190,


[Highlights] [::] We analyze the work habits of conservation biologists contributing to Biological Conservation. [::] Conservation scientists conduct a substantial amount of work on weekends and after office hours. [::] There are geographical differences in the tendency to work on weekends or after office hours. [::] Over time there has been a gradual increase in the tendency to conduct work on weekends. [Abstract] The quintessential scientist is exceedingly hardworking and antisocial, one who would spend countless evenings and weekends buried under her/his microscopes and manuscripts. In an ...


The mismeasurement of science

Current Biology, Vol. 17, No. 15. (7 August 2007), pp. R583-R585,


[Excerpt:Impact factors and citations] Crucially, impact factors are distorted by positive feedback — many citations are not based on reading the paper but by reading other papers, particularly reviews. One study even suggested that, of cited articles, only some 20% had actually been read. [...] Nevertheless, citations are now being used to make quantitative comparisons between scientists. [...] [Changes in behaviour] Unfortunately, the use of these measures is having damaging effects on perceptions and on behaviour; these I list below. Please note that ...


The politics of publication

Nature, Vol. 422, No. 6929. (20 March 2003), pp. 259-261,


Authors, reviewers and editors must act to protect the quality of research. Listen. All over the world scientists are fretting. [Excerpt] The decision about publication of a paper is the result of interaction between authors, editors and reviewers. Scientists are increasingly desperate to publish in a few top journals and are wasting time and energy manipulating their manuscripts and courting editors. As a result, the objective presentation of work, the accessibility of articles and the quality of research itself are being compromised. ...


Lost in publication: how measurement harms science

Ethics in Science and Environmental Politics, Vol. 8 (03 June 2008), pp. 9-11,


Measurement of scientific productivity is difficult. The measures used (impact factor of the journal, citations to the paper being measured) are crude. But these measures are now so universally adopted that they determine most things that matter: tenure or unemployment, a postdoctoral grant or none, success or failure. As a result, scientists have been forced to downgrade their primary aim from making discoveries to publishing as many papers as possible—and trying to work them into high impact factor journals. Consequently, scientific ...


Take the time and effort to correct misinformation

Nature, Vol. 540, No. 7632. (6 December 2016), p. 171,


Scientists should challenge online falsehoods and inaccuracies — and harness the collective power of the Internet to fight back, argues Phil Williamson. [Excerpt] [...] Most researchers who have tried to engage online with ill-informed journalists or pseudoscientists will be familiar with Brandolini’s law (also known as the Bullshit Asymmetry Principle): the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. Is it really worth taking the time and effort to challenge, correct and clarify ...


Trusting others to ‘do the math’

Interdisciplinary Science Reviews, Vol. 40, No. 4. (2 October 2015), pp. 376-392,


Researchers effectively trust the work of others anytime they use software tools or custom software. In this article I explore this notion of trusting others, using Digital Humanities as a focus, and drawing on my own experience. Software is inherently flawed and limited, so its use in scholarship demands better practices and terminology to review research software and describe development processes. It is also important to make research software engineers and their work more visible, both for the purposes of ...


Software and scholarship

Interdisciplinary Science Reviews, Vol. 40, No. 4. (2 October 2015), pp. 342-348,


[excerpt] The thematic focus of this issue is to examine what happens where software and scholarship meet, with particular reference to digital work in the humanities. Despite some seven decades of existence, Digital Humanities continues to struggle with the implications, in the academic ecosystem, of its position between engineering and art. [...] [\n] [...] [\n] I will end with my own reflection on this topic of evaluation. Peer review of scholarly works of software continues to pose a particularly vexed challenge ...


Ethics among scholars in academic publishing

In Proceedings of the 2012 Information Systems Educators Conference (2012), 1948


This paper offers a survey of the contemporary and common-place ethical breaches concerning authorship, research, and publishing in today’s scholarly production, as juxtaposed with some of the predominant standards and guidelines that have been developed to direct academic publishing practices. While the paper may suggest the need for an updated and comprehensive set of guidelines for multiple discipline areas, the purpose here is to prepare the theoretical framework for a future computing discipline-specific study of ethical authorship and related concepts in ...


Programmers, professors, and parasites: credit and co-authorship in computer science

Science and Engineering Ethics, Vol. 15, No. 4. (2009), pp. 467-489,


This article presents an in-depth analysis of past and present publishing practices in academic computer science to suggest the establishment of a more consistent publishing standard. Historical precedent for academic publishing in computer science is established through the study of anecdotes as well as statistics collected from databases of published computer science papers. After examining these facts alongside information about analogous publishing situations and standards in other scientific fields, the article concludes with a list of basic principles that should be ...


Scientists behaving badly

Nature, Vol. 435, No. 7043. (9 June 2005), pp. 737-738,


To protect the integrity of science, we must look beyond falsification, fabrication and plagiarism, to a wider range of questionable research practices, argue Brian C. Martinson, Melissa S. Anderson and Raymond de Vries. [\n] Serious misbehaviour in research is important for many reasons, not least because it damages the reputation of, and undermines public support for, science. Historically, professionals and the public have focused on headline-grabbing cases of scientific misconduct, but we believe that researchers can no longer afford to ignore ...


The false academy: predatory publishing in science and bioethics

Medicine, Health Care and Philosophy (2016), pp. 1-8,


This paper describes and discusses the phenomenon ‘predatory publishing’, in relation to both academic journals and books, and suggests a list of characteristics by which to identify predatory journals. It also raises the question whether traditional publishing houses have accompanied rogue publishers upon this path. It is noted that bioethics as a discipline does not stand unaffected by this trend. Towards the end of the paper it is discussed what can and should be done to eliminate or reduce the effects ...


Measuring scientific impact beyond citation counts

D-Lib Magazine, Vol. 22, No. 9/10. (September 2016),


The measurement of scientific progress remains a significant challenge, exacerbated by the use of multiple different types of metrics that are often incorrectly used, overused, or even explicitly abused. Several metrics such as h-index or journal impact factor (JIF) are often used as a means to assess whether an author, article, or journal creates an "impact" on science. Unfortunately, external forces can be used to manipulate these metrics thereby diluting the value of their intended, original purpose. This work highlights these ...
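The h-index mentioned in this abstract has a simple operational definition: a scholar has index h if h of their papers have at least h citations each. A minimal sketch of the computation (not from the article):

```python
def h_index(citations):
    """h-index: the largest h such that at least h of the papers
    have h or more citations each."""
    h = 0
    # rank papers by citation count, most-cited first
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with >= 4 citations
print(h_index([100, 1, 1]))       # -> 1: one very cited paper is not enough
```

The second example hints at the kind of distortion the article discusses: the metric compresses a whole citation distribution into one number.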


The natural selection of bad science

Royal Society Open Science, Vol. 3, No. 9. (1 September 2016), 160384,


Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further ...


Does background matter? Disciplinary perspectives on sustainable forest management

Biodiversity and Conservation, Vol. 23, No. 14. (2014), pp. 3373-3389,


Although sustainable forest management (SFM) has become increasingly popular during recent decades, approaches towards it are still imprecise and lack consistency. Within this “chaos”, scientists are increasingly expected to further develop the concept across disciplinary boundaries, including normative statements relating to the future. However, we assume that disciplinary boundaries in the construction of SFM still exist due to prevalent interests and political intentions within scientific communities. Therefore, our aim is to analyse and explain qualitative differences in the construction of SFM ...


Hyperauthorship: a postmodern perversion or evidence of a structural shift in scholarly communication practices?

Journal of the American Society for Information Science and Technology, Vol. 52, No. 7. (2001), pp. 558-569,


Classical assumptions about the nature and ethical entailments of authorship (the standard model) are being challenged by developments in scientific collaboration and multiple authorship. In the biomedical research community, multiple authorship has increased to such an extent that the trustworthiness of the scientific communication system has been called into question. Documented abuses, such as honorific authorship, have serious implications in terms of the acknowledgment of authority, allocation of credit, and assigning of accountability. Within the biomedical world it has been proposed ...


Credit where credit is due? Regulation, research integrity and the attribution of authorship in the health sciences

Social Science & Medicine, Vol. 70, No. 9. (May 2010), pp. 1458-1465,


Despite attempts at clear direction in international, national and journal guidelines, attribution of authorship can be a confusing area for both new and established researchers. As journal articles are valuable intellectual property, authorship can be hotly contested. Individual authors' responsibilities for the integrity of article content have not been well explored. Semi-structured interviews (n = 17) were conducted with staff, student advocates and doctoral candidates working in health research in two universities in Australia. Stratified sampling ensured participants reflected a range of experience ...


Responsible authorship: why researchers must forgo honorary authorship

Accountability in Research, Vol. 18, No. 2. (9 March 2011), pp. 76-90,


Although widespread throughout the biomedical sciences, the practice of honorary authorship (the listing of authors who fail to merit inclusion as authors by authorship criteria) has received relatively little sustained attention. Is there something wrong with honorary authorship, or is it only a problem when used in conjunction with other unethical authorship practices like ghostwriting? Numerous sets of authorship guidelines discourage the practice, but its ubiquity throughout biomedicine suggests that there is a need to say more about honorary authorship. Despite its general ...


Academic authorship: who, why and in what order?

Health Renaissance, Vol. 11, No. 2. (19 June 2013),


We are frequently asked by our colleagues and students for advice on authorship for scientific articles. This short paper outlines some of the issues that we have experienced and the advice we usually provide. This editorial follows on from our work on submitting a paper [1] and also on writing an academic paper for publication [2]. We should like to start by noting that, in our view, there exist two separate, but related issues: (a) authorship and (b) order of authors. The issue of authorship centres on the notion of who can be ...


Opinion: science in the age of selfies

Proceedings of the National Academy of Sciences, Vol. 113, No. 34. (23 August 2016), pp. 9384-9387,


[Excerpt] [\n] [...] [\n] Here there is a paradox: Today, there are many more scientists, and much more money is spent on research, yet the pace of fundamental innovation, the kinds of theories and engineering practices that will feed the pipeline of future progress, appears, to some observers, including us, to be slowing [...]. Why might that be the case? [\n] One argument is that “theoretical models” may not even exist for some branches of science, at least not in the ...


Study contract concerning moral rights in the context of the exploitation of works through digital technology - Final report

No. ETD/99/B5-3000/E/28. (2000)


[Excerpt] According to the terms of Annexes III & IV of the contract n° ETD/99/B5-3000/E/28, the following parts of the study have been drafted : [::] analysis of the disparities in the legislation and case law of the several Member States concerning the protection of moral rights, in particular with respect to the specific characteristics of the digital exploitation of works [::] establishment of comparison tables describing the Member States’ applicable provisions in distinguishing between the several categories of works and the type of exploitation [::] analysis of ...


Journals and funders confront implicit bias in peer review

Science, Vol. 352, No. 6289. (26 May 2016), pp. 1067-1068,


[Excerpt] Deeply rooted assumptions creep into decision-making in unrecognized ways—even among the most well-intentioned peer-reviewers, journal editors, and science funders—and that can prevent the best science from being sponsored or published, experts said at a recent AAAS forum on implicit bias. [\n] [...] [\n] Unconscious assumptions about gender, ethnicity, disabilities, nationality, and institutions clearly limit the science and technology talent pool and undermine scientific innovation, said AAAS Board Chair Geraldine Richmond. [...] [\n] The problem of implicit bias is not only about fairness, said ...


Implicit bias

Science, Vol. 352, No. 6289. (26 May 2016), p. 1035,


[Excerpt] [...] To explore the extent of implicit bias in peer review, and what can be done to counter it, the American Association for the Advancement of Science (AAAS, the publisher of Science) recently convened a day-long forum of editors, publishers, funders, and experts on implicit bias in Washington, DC [...] [\n] [...] Scientific publishers such as the American Chemical Society (ACS) and the American Geophysical Union (AGU) find that female authors are published either at a rate proportional to that at which ...


Sailing from the seas of chaos into the corridor of stability: practical recommendations to increase the informational value of studies

Perspectives on Psychological Science, Vol. 9, No. 3. (1 May 2014), pp. 278-292,


Recent events have led psychologists to acknowledge that the inherent uncertainty encapsulated in an inductive science is amplified by problematic research practices. In this article, we provide a practical introduction to recently developed statistical tools that can be used to deal with these uncertainties when performing and evaluating research. In Part 1, we discuss the importance of accurate and stable effect size estimates as well as how to design studies to reach a corridor of stability around effect size estimates. In ...
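The "corridor of stability" idea in this abstract is that an effect-size estimate fluctuates wildly at small sample sizes and only settles into a narrow band around the true value as observations accumulate. A minimal simulation sketch (my own illustration under simple assumptions, a running mean of a unit-variance effect; not code from the article):

```python
import random

def running_estimates(true_effect, max_n, rng):
    """Running estimate of a mean effect as observations accumulate,
    sampling one observation at a time from N(true_effect, 1)."""
    total = 0.0
    out = []
    for n in range(1, max_n + 1):
        total += rng.gauss(true_effect, 1.0)
        out.append(total / n)  # estimate after n observations
    return out

rng = random.Random(7)
est = running_estimates(0.3, 1000, rng)
# early estimates wander widely; late ones stay near the true value 0.3
early_spread = max(est[:20]) - min(est[:20])
late_spread = max(est[-20:]) - min(est[-20:])
print(f"estimate at n=20:   {est[19]:+.2f}")
print(f"estimate at n=1000: {est[999]:+.2f}")
print(f"spread over first 20 n: {early_spread:.2f}; over last 20 n: {late_spread:.2f}")
```

Designing a study to "reach the corridor" then amounts to choosing a sample size at which this spread falls below a tolerable width.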


(INRMM-MiD internal record) List of keywords of the INRMM meta-information database - part 28

(February 2014)
Keywords: inrmm-list-of-tags   power-law   ppm   practice   pre-alpine   pre-print   precaution   precaution-principle   precipitation   precisely-wrong   precursor-research   predation   predator-satiation   predatory-publishers   prediction   prediction-bias   predictive-modelling   predictors   predisposition   premature-optimization   preparedness   preprints   prescribed-burn   presence-absence   presence-only   pressure-volume-curves   pressures   prestoea-montana   pretreatment   prey-predator   pricing   primary-productivity   principal-components-regression   prisoners-dilemma   pristiphora-abietina   probability-vs-possibility   problem-driven   processes   processing   production-rules   productivity   programming   progressive-learning   prolog   proportion   prosopis-alba   prosopis-glandulosa   prosopis-pallida   protected-areas   protected-species   protection   protective-forest   protocol-uncertainty   provenance   provisioning-services   pruning   prunus-avium   prunus-cerasifera   prunus-domestica   prunus-dulcis   prunus-fruticosa   prunus-ilicifolia   prunus-laurocerasus   prunus-mahaleb   prunus-malaheb   prunus-padus   prunus-salicina   prunus-serotina   prunus-spinosa   prunus-spp   prunus-tenella   pseudo-absences   pseudo-random   pseudoaraucaria-spp   pseudolarix-spp   pseudomonas-avellanae   pseudomonas-spp   pseudomonas-syringae   pseudotsuga   pseudotsuga-macrocarpa   pseudotsuga-menziesii   pseudotsuga-spp   psychology   pterocarpus-indicus   pterocarpus-officinalis   pterocarya-pterocarpa   public-domain   publication-bias   publication-delay   publication-errors   publish-or-perish   puccinia-coronata   pull-push-pest-control   pulp   punica-granatum   purdiaea-nutans   pyrenees-region   pyrolysis   pyrus-amygdaliformis   pyrus-browiczii  


List of indexed keywords within the transdisciplinary set of domains which relate to the Integrated Natural Resources Modelling and Management (INRMM). In particular, the list of keywords maps the semantic tags in the INRMM Meta-information Database (INRMM-MiD). [\n] The INRMM-MiD records providing this list are accessible by the special tag: inrmm-list-of-tags. ...


The unsung heroes of scientific software

Nature, Vol. 529, No. 7584. (4 January 2016), pp. 115-116,


Creators of computer programs that underpin experiments don’t always get their due — so the website Depsy is trying to track the impact of research code. [Excerpt] For researchers who code, academic norms for tracking the value of their work seem grossly unfair. They can spend hours contributing to software that underpins research, but if that work does not result in the authorship of a research paper and accompanying citations, there is little way to measure its impact. [\n] [...] Depsy’s creators hope that their ...


The oligopoly of academic publishers in the digital era

PLOS ONE, Vol. 10, No. 6. (10 June 2015), e0127502,


The consolidation of the scientific publishing industry has been the topic of much debate within and outside the scientific community, especially in relation to major publishers’ high profit margins. However, the share of scientific output published in the journals of these major publishers, as well as its evolution over time and across various disciplines, has not yet been analyzed. This paper provides such analysis, based on 45 million documents indexed in the Web of Science over the period 1973-2013. It shows ...



Statistics: P values are just the tip of the iceberg

Nature, Vol. 520, No. 7549. (28 April 2015), p. 612,


Ridding science of shoddy statistics will require scrutiny of every step, not merely the last one, say Jeffrey T. Leek and Roger D. Peng. [Excerpt] There is no statistic more maligned than the P value. Hundreds of papers and blogposts have been written about what some statisticians deride as 'null hypothesis significance testing' (NHST). NHST deems whether the results of a data analysis are important on the basis of whether a summary statistic (such as a P value) ...


The extent and consequences of p-hacking in science

PLoS Biology, Vol. 13, No. 3. (13 March 2015), e1002106,


A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the ...
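The mechanism this abstract defines (collecting or selecting data until nonsignificant results become significant) can be made concrete with a simulation of one common form, optional stopping: testing repeatedly as null data accumulate and stopping as soon as p < 0.05. This inflates the false-positive rate well above the nominal 5%. A hedged sketch (my illustration, assuming z-tests on standard-normal null data; not the text-mining method of the paper):

```python
import math
import random

def p_value(xs):
    """Two-sided z-test p-value for mean 0 with known unit variance."""
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def significant_with_peeking(rng, start=10, step=5, max_n=100, alpha=0.05):
    """Collect null data, re-test after every few new observations, and
    stop as soon as p < alpha -- the optional-stopping form of p-hacking."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(start)]
    while True:
        if p_value(xs) < alpha:
            return True          # a "significant" result from pure noise
        if len(xs) >= max_n:
            return False
        xs.extend(rng.gauss(0.0, 1.0) for _ in range(step))

rng = random.Random(1)
trials = 4000
hacked_rate = sum(significant_with_peeking(rng) for _ in range(trials)) / trials
print(f"false-positive rate with optional stopping: {hacked_rate:.2f}")
```

Each individual test is valid at the 5% level, yet the stop-when-significant rule pushes the overall false-positive rate several-fold higher, which is why repeated unplanned peeking leaves the distinctive p-value patterns the paper mines for.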


How NOT to Review a Paper: The Tools and Techniques of the Adversarial Reviewer

SIGMOD Rec., Vol. 37, No. 4. (March 2009), pp. 100-104,


There are several useful guides available for how to review a paper in Computer Science [10, 6, 12, 7, 2]. These are soberly presented, carefully reasoned and sensibly argued. As a result, they are not much fun. So, as a contrast, this note is a checklist of how not to review a paper. It details techniques that are unethical, unfair, or just plain nasty. Since in Computer Science we often present arguments about how an adversary would approach a particular problem, ...


The pleasure of publishing

eLife, Vol. 4 (06 January 2015), e05770,


When assessing manuscripts eLife editors look for a combination of rigour and insight, along with results and ideas that make other researchers think differently about their subject. [Excerpt] The senior editors at eLife are often asked: ‘Where is the bar for an eLife paper?’ Another frequent question is: ‘Why should I submit my best work to eLife?’ The second of these questions is not surprising because it is human nature to be wary of anything new and challenging. The first question has ...


Editorial
Basic and Applied Social Psychology, Vol. 37, No. 1. (2 January 2015), pp. 1-2,


[Excerpt] The Basic and Applied Social Psychology (BASP) 2014 Editorial emphasized that the null hypothesis significance testing procedure (NHSTP) is invalid, and thus authors would be not required to perform it (Trafimow, 2014). However, to allow authors a grace period, the Editorial stopped short of actually banning the NHSTP. The purpose of the present Editorial is to announce that the grace period is over. From now on, BASP is banning the NHSTP. With the banning of the NHSTP from BASP, what are ...



How many scientific papers are not original?

Proceedings of the National Academy of Sciences, Vol. 112, No. 1. (06 January 2015), pp. 6-7,


[Excerpt] Is plagiarism afflicting science? In PNAS, Citron and Ginsparg (1) count the number of authors who are submitting articles containing text already appearing elsewhere. They report disturbing numbers of authors resorting to copying, particularly in some countries where 15% of submissions are detected as containing duplicated material. I am on the editorial board of an Institute of Electrical and Electronic Engineers (IEEE) magazine, which also finds it useful to run all of the submissions through a plagiarism filter. What can ...


Why null results rarely see the light of day

Science, Vol. 345, No. 6200. (29 August 2014), p. 992,


A team at Stanford University reports online this week in Science that scientists are unlikely to even write up an experiment that produces so-called null results. A study of 221 survey-based experiments funded by the TESS (Time-sharing Experiments for the Social Sciences) program at the National Science Foundation has found that almost two-thirds of the experiments yielding null findings are stuck in a file drawer rather than being submitted to a journal, and only 21% are published. In contrast, 96% of ...


Academic urban legends

Social Studies of Science, Vol. 44, No. 4. (1 August 2014), pp. 638-654,


Many of the messages presented in respectable scientific publications are, in fact, based on various forms of rumors. Some of these rumors appear so frequently, and in such complex, colorful, and entertaining ways that we can think of them as academic urban legends. The explanation for this phenomenon is usually that authors have lazily, sloppily, or fraudulently employed sources, and peer reviewers and editors have not discovered these weaknesses in the manuscripts during evaluation. To illustrate this phenomenon, I draw upon ...


Interactive comment on "Perturbation experiments to investigate the impact of ocean acidification: approaches and software tools" by J.-P. Gattuso and H. Lavigne

Biogeosciences Discussions, Vol. 6 (2009), pp. C1071-C1073


[Excerpt] The referee wonders whether this manuscript should be published as a technical note rather than as a scientific article […][and] feels that the functions described are “black boxes”. We cannot disagree more with this statement as [the software tool] is free software, the source code of which is available to anyone (one just needs to download the package). Further, [the software tool] can be redistributed and/or modified under the terms of the GNU General Public License as published by the ...


Bibliometrics: the citation game

Nature, Vol. 510, No. 7506. (25 June 2014), pp. 470-471,


Jonathan Adams takes the measure of the uses and misuses of scholarly impact. ...


China's publication bazaar

Science, Vol. 342, No. 6162. (29 November 2013), pp. 1035-1039,


Science has exposed a thriving academic black market in China involving shady agencies, corrupt scientists, and compromised editors—many of them operating in plain view. The commodity: papers in journals indexed by Thomson Reuters' Science Citation Index, Thomson Reuters' Social Sciences Citation Index, and Elsevier's Engineering Index. ...


Supply and demand: apply market forces to peer review

Nature, Vol. 506, No. 7488. (19 February 2014), pp. 295-295,


[excerpt] [...] When it comes to the highly skilled service of peer reviewing, the supply is sufficiently high to keep the monetary value at zero. If, at a constant level of demand, the supply is reduced, then this price would go up. With an increased price, people could become professional reviewers to supplement their salary. [...] ...


One Tongue to Rule Them All?

Science, Vol. 343, No. 6168. (17 January 2014), pp. 250-251,


Montgomery examines the benefits and costs of English's position as the global language of science. ...


With Great Power Comes Great Responsibility: the Importance of Rejection, Power, and Editors in the Practice of Scientific Publishing

PLoS ONE, Vol. 8, No. 12. (30 December 2013), e85382,


Peer review is an important element of scientific communication but deserves quantitative examination. We used data from the manuscript handling service Manuscript Central for ten mid-tier ecology and evolution journals to test whether the number of external reviews completed improved citation rates for all accepted manuscripts. Contrary to a previous study examining this issue using resubmission data as a proxy for reviews, we show that citation rates of manuscripts do not correlate with the number of individuals that provided reviews. Importantly, externally-reviewed papers ...


What ranking journals has in common with astrology

Roars Transactions, a Journal on Research Policy and Evaluation, Vol. 1, No. 1. (2013),


[excerpt] Introduction. As scientists, we all send our best work to Science or Nature – or at least we dream of one day making a discovery we deem worthy of sending there. So obvious does this hierarchy in our journal landscape appear to our intuition, that when erroneous or fraudulent work is published in 'high-ranking' journals, we immediately wonder how this could have happened. Isn't work published there the best there is? Vetted by professional editors before being sent out to the most critical ...


How journal rankings can suppress interdisciplinary research: A comparison between Innovation Studies and Business & Management

Research Policy, Vol. 41, No. 7. (September 2012), pp. 1262-1282,


This study provides quantitative evidence on how the use of journal rankings can disadvantage interdisciplinary research in research evaluations. Using publication and citation data, it compares the degree of interdisciplinarity and the research performance of a number of Innovation Studies units with that of leading Business & Management Schools (BMS) in the UK. On the basis of various mappings and metrics, this study shows that: (i) Innovation Studies units are consistently more interdisciplinary in their research than Business & Management Schools; ...


'Conferring authorship': Biobank stakeholders' experiences with publication credit in collaborative research

PLoS ONE, Vol. 8, No. 9. (30 September 2013), e76686,


Multi-collaborator research is increasingly becoming the norm in the field of biomedicine. With this trend comes the imperative to award recognition to all those who contribute to a study; however, there is a gap in the current 'gold standard' in authorship guidelines with regards to the efforts of those who provide high quality biosamples and data, yet do not play a role in the intellectual development of the final publication. We carried out interviews with 36 individuals working in, or with ...


Secretive and Subjective, Peer Review Proves Resistant to Study

Science, Vol. 341, No. 6152. (20 September 2013), p. 1331,


At the International Congress on Peer Review and Biomedical Publication, efforts to explore the scientific literature have shifted away from peer review and into other areas, such as bias and authorship. With a dearth of available data and funding, large systematic studies of how peer review works, and how it doesn't, are not easy to get off the ground. ...

This page of the database may be cited as:
Integrated Natural Resources Modelling and Management - Meta-information Database.



Meta-information Database (INRMM-MiD).
This database integrates a dedicated meta-information database in CiteULike (the CiteULike INRMM Group) with the meta-information available in Google Scholar, CrossRef and DataCite. The Altmetric database with Article-Level Metrics is also harvested. Part of the provided semantic content (machine-readable) is made even human-readable thanks to the DCMI Dublin Core viewer. Digital preservation of the meta-information indexed within the INRMM-MiD publication records is implemented thanks to the Internet Archive.
Full-text and abstracts of the publications indexed by the INRMM meta-information database are copyrighted by the respective publishers/authors. They are subject to all applicable copyright protection. The conditions of use of each indexed publication are defined by its copyright owner. Please be aware that the indexed meta-information relies entirely on voluntary work and constitutes an incomplete and heterogeneous work-in-progress.
INRMM-MiD was experimentally established by the Maieutike Research Initiative in 2008 and then improved with the help of several volunteers (with a major technical upgrade in 2011). This new integrated interface has been operational since 2014.