
Selection: with tag peer-review [61 articles] 

 

Fears rise for US climate report as Trump officials take reins

  
Nature, Vol. 548, No. 7665. (1 August 2017), pp. 15-16, https://doi.org/10.1038/548015a

Abstract

Officials at the US Environmental Protection Agency are consulting global-warming sceptics as they weigh up a technical review. ...

 

Reviewers are blinkered by bibliometrics

  
Nature, Vol. 544, No. 7651. (26 April 2017), pp. 411-412, https://doi.org/10.1038/544411a

Abstract

[Excerpt] [...] Although journal impact factors (JIFs) were developed to assess journals and say little about any individual paper, reviewers routinely justify their evaluations on the basis of where candidates have published. [...] As economists who study science and innovation, we see engrained processes working against cherished goals. Scientists we interview routinely say that they dare not propose bold projects for funding in part because of expectations that they will produce a steady stream of papers in journals with high impact ...

 

How innovations thrive in GRASS GIS

  
In North Carolina GIS Conference, NCGIS2017 (2017)

Abstract

[Poster topic highlights] [::] Algorithms and models included in GRASS GIS remain available long term (Chemin et al., 2015). [::] Analytical tools are not limited to one domain but spread across many fields. [::] New tools can be built based on functionality or code of the existing ones regardless of the particular domain of problems they belong to. [::] Both the functionality and the code are evaluated by the community of users and developers in different fields and scales. [General GRASS GIS highlights] [::] The GRASS GIS development team ...

 

House bill no. 246, Indiana State Legislature, 1897

  
Proceedings of the Indiana Academy of Science, Vol. 45 (1935), pp. 206-210

Abstract

[Excerpt] This paper has grown out of a number of requests for information over a number of years, by students and others, concerning some supposed action taken by the Indiana State Legislature with regard to fixing the value of pi, that is, the result of dividing the length of the circumference of a circle by the length of its diameter, at a certain value that was different from the true value. Of course the interest in and wonder at such an action lies in the presumption of a ...

 

House bill no. 246 revisited

  
Proceedings of the Indiana Academy of Science, Vol. 84 (1974), pp. 374-399

Abstract

[Excerpt: Introduction] In the year 1966 the State of Indiana celebrated the Sesquicentennial of its admission into statehood, and the Indiana Academy of Science joined in this observance with a number of appropriate activities. Among these was a program of invited papers on the history of the various sciences and of mathematics in the state over the 150-year period. [\n] For a small number of persons the association of "Indiana" and "mathematics" immediately brings to mind the true story of the attempt in 1897 of the state legislature to pass ...

 

The importance of free and open source software and open standards in modern scientific publishing

  
Publications, Vol. 1, No. 2. (26 June 2013), pp. 49-55, https://doi.org/10.3390/publications1020049

Abstract

In this paper we outline the reasons why we believe a reliance on the use of proprietary computer software and proprietary file formats in scientific publication has negative implications for the conduct and reporting of science. There is increasing awareness and interest in the scientific community about the benefits offered by free and open source software. We discuss the present state of scientific publishing and the merits of advocating for a wider adoption of open standards in science, particularly where it ...

 

Post-normal institutional identities: quality assurance, reflexivity and ethos of care

  

Abstract

[Highlights] [::] Given the current crises of legitimacy and quality in mainstream science, institutions that produce and govern science and those that provide scientific advice to policy need to change their modus operandi; we advocate for an ethos of care. [::] Post-normal science and other frameworks of scientific knowledge production may inspire trustfulness in institutions that provide scientific advice to policy. [::] In Europe, the Joint Research Centre of the European Commission has the necessary scaffolding to advise policy in view of public interest, ...

 

When a preprint becomes the final paper

  

Abstract

A geneticist's decision not to publish his finalized preprint in a journal gets support from scientists online. [Excerpt] Preprint papers posted on servers such as arXiv and bioRxiv are designed to get research results out for discussion before they are formally peer reviewed and published in journals. But for some scientists, the term is now a misnomer — their preprint papers will never be submitted for formal publication. [...] One of the major services of traditional journals is that papers are peer ...

 

Are conservation biologists working too hard?

  
Biological Conservation, Vol. 166 (October 2013), pp. 186-190, https://doi.org/10.1016/j.biocon.2013.06.029

Abstract

[Highlights] [::] We analyze the work habits of conservation biologists contributing to Biological Conservation. [::] Conservation scientists conduct a substantial amount of work on weekends and after office hours. [::] There are geographical differences in the tendency to work on weekends or after office hours. [::] Over time there has been a gradual increase in the tendency to conduct work on weekends. [Abstract] The quintessential scientist is exceedingly hardworking and antisocial, and one who would spend countless evenings and weekends buried under her/his microscopes and manuscripts. In an ...

 

Five selfish reasons to work reproducibly

  
Genome Biology, Vol. 16, No. 1. (8 December 2015), 274, https://doi.org/10.1186/s13059-015-0850-7

Abstract

And so, my fellow scientists: ask not what you can do for reproducibility; ask what reproducibility can do for you! Here, I present five reasons why working reproducibly pays off in the long run and is in the self-interest of every ambitious, career-oriented scientist. [Excerpt] [::Reproducibility: what's in it for me?] In this article, I present five reasons why working reproducibly pays off in the long run and is in the self-interest of every ambitious, career-oriented scientist. [::] Reason number 1: reproducibility helps to avoid ...

 

The mismeasurement of science

  
Current Biology, Vol. 17, No. 15. (07 August 2007), pp. R583-R585, https://doi.org/10.1016/j.cub.2007.06.014

Abstract

[Excerpt: Impact factors and citations] Crucially, impact factors are distorted by positive feedback — many citations are not based on reading the paper but on reading other papers, particularly reviews. One study even suggested that, of cited articles, only some 20% had actually been read. [...] Nevertheless, citations are now being used to make quantitative comparisons between scientists. [...] [Changes in behaviour] Unfortunately, the use of these measures is having damaging effects on perceptions and on behaviour; these I list below. Please note that ...

 

Lost in publication: how measurement harms science

  
Ethics in Science and Environmental Politics, Vol. 8 (03 June 2008), pp. 9-11, https://doi.org/10.3354/esep00079

Abstract

Measurement of scientific productivity is difficult. The measures used (impact factor of the journal, citations to the paper being measured) are crude. But these measures are now so universally adopted that they determine most things that matter: tenure or unemployment, a postdoctoral grant or none, success or failure. As a result, scientists have been forced to downgrade their primary aim from making discoveries to publishing as many papers as possible—and trying to work them into high impact factor journals. Consequently, scientific ...

 

Is grey literature ever used? Using citation analysis to measure the impact of GESAMP, an international Marine scientific advisory body

  
Canadian Journal of Information and Library Science, Vol. 28, No. 1. (2004), pp. 45-65

Abstract

Citation analysis was used to measure the impact of GESAMP, the Joint Group of Experts on the Scientific Aspects of Marine Environmental Protection, which since 1969 has published reports for the United Nations and seven of its agencies. Web of Science was used to search for citations to 114 publications, of which 15 are journal articles or books. Citations to grey literature can be difficult to locate and interpret, but two-thirds of the 1436 citations, in 1178 citing papers, are to ...

 

Statistical analysis

  
In Science: editorial policies (2016)

Abstract

[Excerpt: Statistical analysis] Generally, authors should describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to verify the results. [::] Data pre-processing steps such as transformations, re-coding, re-scaling, normalization, truncation, and handling of below detectable level readings and outliers should be fully described; any removal or modification of data values must be fully acknowledged and justified. [::] [...] [::] The number of sampled units, N, upon which each reported statistic is based must be stated. [::] For continuous ...

 

Take the time and effort to correct misinformation

  
Nature, Vol. 540, No. 7632. (6 December 2016), pp. 171-171, https://doi.org/10.1038/540171a

Abstract

Scientists should challenge online falsehoods and inaccuracies — and harness the collective power of the Internet to fight back, argues Phil Williamson. [Excerpt] [...] Most researchers who have tried to engage online with ill-informed journalists or pseudoscientists will be familiar with Brandolini’s law (also known as the Bullshit Asymmetry Principle): the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. Is it really worth taking the time and effort to challenge, correct and clarify ...

 

Editorial: evidence-based guidelines for avoiding the most prevalent and serious APA error in journal article submissions - The citation error

  
Research in the Schools, Vol. 17, No. 2. (2010), pp. i-xxiv

Abstract

In a previous editorial, Onwuegbuzie, Combs, Slate, and Frels (2010) discussed the findings of Combs, Onwuegbuzie, and Frels (2010), who identified the 60 most common American Psychological Association (APA) errors—with the most common error being incorrect use of numbers that was committed by 57.3% of authors. However, they did not analyze citation errors, which stem from a failure “to make certain that each source referenced appears in both places [text and reference list] and that the text citation and reference list ...

 

Trusting others to ‘do the math’

  
Interdisciplinary Science Reviews, Vol. 40, No. 4. (2 October 2015), pp. 376-392, https://doi.org/10.1080/03080188.2016.1165454

Abstract

Researchers effectively trust the work of others anytime they use software tools or custom software. In this article I explore this notion of trusting others, using Digital Humanities as a focus, and drawing on my own experience. Software is inherently flawed and limited, so its use in scholarship demands better practices and terminology to review research software and describe development processes. It is also important to make research software engineers and their work more visible, both for the purposes of ...

 

Software and scholarship

  
Interdisciplinary Science Reviews, Vol. 40, No. 4. (2 October 2015), pp. 342-348, https://doi.org/10.1080/03080188.2016.1165456

Abstract

[excerpt] The thematic focus of this issue is to examine what happens where software and scholarship meet, with particular reference to digital work in the humanities. Despite some seven decades of existence, Digital Humanities continues to struggle with the implications, in the academic ecosystem, of its position between engineering and art. [...] [\n] [...] [\n] I will end with my own reflection on this topic of evaluation. Peer review of scholarly works of software continues to pose a particularly vexed challenge ...

 

Why policy needs philosophers as much as it needs science

  
The Guardian (13 October 2016), 57b3q

Abstract

[Excerpt] In a widely-discussed recent essay for the New Atlantis, the policy scholar Daniel Sarewitz argues that science is in deep trouble. While modern research remains wondrously productive, its results are more ambiguous, contestable and dubious than ever before. This problem isn’t caused by a lack of funding or of scientific rigour. Rather, Sarewitz argues that we need to let go of a longstanding and cherished cultural belief – that science consists of uniquely objective knowledge that can put an end to ...

 

The false academy: predatory publishing in science and bioethics

  
Medicine, Health Care and Philosophy (2016), pp. 1-8, https://doi.org/10.1007/s11019-016-9740-3

Abstract

This paper describes and discusses the phenomenon ‘predatory publishing’, in relation to both academic journals and books, and suggests a list of characteristics by which to identify predatory journals. It also raises the question whether traditional publishing houses have accompanied rogue publishers upon this path. It is noted that bioethics as a discipline does not stand unaffected by this trend. Towards the end of the paper it is discussed what can and should be done to eliminate or reduce the effects ...

 

Measuring scientific impact beyond citation counts

  
D-Lib Magazine, Vol. 22, No. 9/10. (September 2016), https://doi.org/10.1045/september2016-patton

Abstract

The measurement of scientific progress remains a significant challenge, exacerbated by the use of multiple different types of metrics that are often incorrectly used, overused, or even explicitly abused. Several metrics such as the h-index or the journal impact factor (JIF) are often used as a means to assess whether an author, article, or journal creates an "impact" on science. Unfortunately, external forces can be used to manipulate these metrics, thereby diluting the value of their intended, original purpose. This work highlights these ...
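
As a concrete reminder of what one of the metrics discussed here measures, the following minimal Python sketch (not part of the article; the function name and example data are illustrative) computes an author's h-index, i.e. the largest h such that at least h of the author's papers have h or more citations each:

    def h_index(citation_counts):
        """Largest h such that at least h papers have h or more citations each."""
        h = 0
        for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Illustrative example: five papers with these citation counts give h = 3.
    print(h_index([10, 8, 5, 3, 1]))  # prints 3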

 

Corporate culture has no place in academia

  
Nature, Vol. 538, No. 7623. (3 October 2016), pp. 7-7, https://doi.org/10.1038/538007a

Abstract

‘Academic capitalism’ contributed to the mishandling of the Macchiarini case by officials at the Karolinska Institute in Sweden, argues Olof Hallonsten. [Excerpt] [...] As academic capitalism spreads, universities abandon traditional meritocratic and collegial governance to hunt money, prestige and a stronger brand. [...] Yet this conduct goes against fundamental values of academia — the careful scrutiny of all claims, and of the research (and teaching) portfolios of those making such claims. This core principle in the self-organization of the academic system (studied ...

 

How to review a paper

  

Abstract

[Excerpt] As junior scientists develop their expertise and make names for themselves, they are increasingly likely to receive invitations to review research manuscripts. It’s an important skill and service to the scientific community, but the learning curve can be particularly steep. Writing a good review requires expertise in the field, an intimate knowledge of research methods, a critical mind, the ability to give fair and constructive feedback, and sensitivity to the feelings of authors on the receiving end. As a range ...

 

Why scientists must share their research code

  
Nature (13 September 2016), https://doi.org/10.1038/nature.2016.20504

Abstract

'Reproducibility editor' Victoria Stodden explains the growing movement to make code and data available to others. [Excerpt] [...] [::What does computational reproducibility mean?] It means that all details of computation — code and data — are made routinely available to others. If I can run your code on your data, then I can understand what you did. We need to expose all the steps that went into any discovery that relies on a computer. [::What’s the scientific value of running the same data with the ...

 

Scientific advances: fallacy of perfection harms peer review

  
Nature, Vol. 537, No. 7618. (31 August 2016), pp. 34-34, https://doi.org/10.1038/537034a

Abstract

[Excerpt] [...] The history of science has taught us that most progress has come from exploring flawed hypotheses and imperfect models. We must always strive for the better study, the better model, the better analysis. As experienced reviewers, however, we contend that seeking ultimate perfection is not the same as accepting nothing less here and now. Scientific progress depends on such compromise — provided that potential caveats are recognized. [\n] If a model is the most technically and ethically feasible approach available, ...

 

They write the right stuff

  
Fast Company, Vol. 6 (December 1996), 28121

Abstract

[Excerpt] As the 120-ton space shuttle sits surrounded by almost 4 million pounds of rocket fuel, exhaling noxious fumes, visibly impatient to defy gravity, its on-board computers take command. Four identical machines, running identical software, pull information from thousands of sensors, make hundreds of milli-second decisions, vote on every decision, check with each other 250 times a second. A fifth computer, with different software, stands by to take control should the other four malfunction. [\n] At T-minus 6.6 seconds, if the pressures, pumps, and temperatures are nominal, ...

 

Standards for reporting qualitative research

  
Academic Medicine, Vol. 89, No. 9. (September 2014), pp. 1245-1251, https://doi.org/10.1097/acm.0000000000000388

Abstract

[Purpose] Standards for reporting exist for many types of quantitative research, but currently none exist for the broad spectrum of qualitative research. The purpose of the present study was to formulate and define standards for reporting qualitative research while preserving the requisite flexibility to accommodate various paradigms, approaches, and methods. [Method] The authors identified guidelines, reporting standards, and critical appraisal criteria for qualitative research by searching PubMed, Web of Science, and Google through July 2013; reviewing the reference lists of retrieved sources; ...

 

(INRMM-MiD internal record) List of keywords of the INRMM meta-information database - part 24

  
(February 2014)
Keywords: inrmm-list-of-tags   overlapping-clustering   overspecialization   overview   overwhelming-uncertainty   oxalis-spp   ozone   p-value   pacific-islands   paleo-climate   paleo-data   paleobiogeography   paleobiology   paleobotany   paleoclimate-dynamics   paleoclimatic-models   paleoclimatology   paleoecology   paleoenvironment   paleohydrology   paleolithic   paliurus-spina-christi   palynology   pandanus-tectorius   panicum-spp   paper   papua-new-guinea   paradox   paragnetina   parallelism   paranthrene-tabaniformis   parasite   parasitism   parasitoid-recruitment   pareto-distribution   pareto-frontier   pareto-principle   parkinsonia-aculeata   parkinsonia-florida   parrotia-persica   parthenolecanium-corni   partial-open-loop-feedback-control   partial-protection   partial-uprooting   participation   participatory-modelling   particle-swarm-optimisation   particle-swarm-optimization   particulate-matter   partitioning   past-observations   pastoral-activities   pasture   patch-dynamics   paternity-analysis   pathogens   pattern   paulownia-tomentosa   payoff-vs-cost   pca   peak   peak-ground-acceleration   peatlands   pedogenesis-model   pedogenic-factors   peer-review   pellets   peloponnese   peltogyne-purpurea   percent   perl   permafrost   permanent-plot   persea-borbonia   perspective   perspective-article   peru   pesera   pesotum-synnemata   ph   phacidium-infestans   phaenops-spp   phaeoacremonium-aleophilum   phaeocryptopus-gaeumannii   phaeostigma-notata   pharmacology   phassus-excrescens   phellodendron-amurense   phenolic-compounds   phenolics   phenology   phenotypes-vs-genotypes   philadelphus-coronarius   philaenus-spumarius   phillyrea-latifolia   phloemyzus-passerinii  

Abstract

List of indexed keywords within the transdisciplinary set of domains which relate to the Integrated Natural Resources Modelling and Management (INRMM). In particular, the list of keywords maps the semantic tags in the INRMM Meta-information Database (INRMM-MiD). [\n] The INRMM-MiD records providing this list are accessible by the special tag: inrmm-list-of-tags ( http://mfkp.org/INRMM/tag/inrmm-list-of-tags ). ...

 

Reproducibility: a tragedy of errors

  
Nature, Vol. 530, No. 7588. (3 February 2016), pp. 27-29, https://doi.org/10.1038/530027a

Abstract

Mistakes in peer-reviewed papers are easy to find but hard to fix, report David B. Allison and colleagues. [Excerpt: Three common errors] As the influential twentieth-century statistician Ronald Fisher (pictured) said: “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.” [\n] [...] Frequent errors, once recognized, can be kept out of the literature with targeted education and policies. Three of the most common are ...

 

The case for open preprints in biology

  
PLoS Biology, Vol. 11, No. 5. (14 May 2013), e1001563, https://doi.org/10.1371/journal.pbio.1001563

Abstract

Biologists should submit their preprints to open servers, a practice common in mathematics and physics, to open and accelerate the scientific process. [Excerpt: Introduction] Public preprint servers allow authors to make manuscripts publicly available before, or in parallel to, submitting them to journals for traditional peer review. The rationale for preprint servers is fundamentally simple: to make the results of research available to the scientific community as soon as possible, instead of waiting until the peer-review process is fully completed. Sharing manuscripts using ...

 

Statistics: P values are just the tip of the iceberg

  
Nature, Vol. 520, No. 7549. (28 April 2015), pp. 612-612, https://doi.org/10.1038/520612a

Abstract

Ridding science of shoddy statistics will require scrutiny of every step, not merely the last one, say Jeffrey T. Leek and Roger D. Peng. [Excerpt] There is no statistic more maligned than the P value. Hundreds of papers and blogposts have been written about what some statisticians deride as 'null hypothesis significance testing' (NHST; see, for example, go.nature.com/pfvgqe). NHST deems whether the results of a data analysis are important on the basis of whether a summary statistic (such as a P value) ...

 

How NOT to Review a Paper: The Tools and Techniques of the Adversarial Reviewer

  
SIGMOD Rec., Vol. 37, No. 4. (March 2009), pp. 100-104, https://doi.org/10.1145/1519103.1519122

Abstract

There are several useful guides available for how to review a paper in Computer Science [10, 6, 12, 7, 2]. These are soberly presented, carefully reasoned and sensibly argued. As a result, they are not much fun. So, as a contrast, this note is a checklist of how not to review a paper. It details techniques that are unethical, unfair, or just plain nasty. Since in Computer Science we often present arguments about how an adversary would approach a particular problem, ...

 

Read before you cite!

  
Complex Systems, Vol. 14, No. 3. (2003), pp. 269-274

Abstract

We report a method of estimating what percentage of people who cited a paper had actually read it. The method is based on a stochastic modeling of the citation process that explains empirical studies of misprint distributions in citations (which we show follow a Zipf law). We estimate that only about 20% of citers read the original. ...
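
The reasoning becomes tangible with a toy "copy versus read" simulation. The Python sketch below is not the authors' stochastic model; the parameters READ_P and TYPO_P and the uniform copying rule are assumptions made purely for illustration. It shows how, when most citers copy references from other reference lists instead of reading the original, a handful of misprints gets duplicated many times, which is the kind of empirical signal the paper exploits:

    import random
    from collections import Counter

    READ_P = 0.2      # assumed fraction of citers who read the original
    TYPO_P = 0.05     # assumed chance that a reader introduces a fresh misprint
    N_CITERS = 10_000

    random.seed(42)
    citations = []        # each entry: 0 for a correct reference, or a misprint id
    next_typo_id = 1

    for _ in range(N_CITERS):
        if not citations or random.random() < READ_P:
            # Reader: cites the original directly, occasionally minting a new misprint.
            if random.random() < TYPO_P:
                citations.append(next_typo_id)
                next_typo_id += 1
            else:
                citations.append(0)
        else:
            # Copier: duplicates a randomly chosen earlier reference, typo and all.
            citations.append(random.choice(citations))

    misprints = Counter(c for c in citations if c != 0)
    print("distinct misprints:", len(misprints))
    print("most repeated misprints:", misprints.most_common(5))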

 

Verification of citations: fawlty towers of knowledge?

  
Interfaces, Vol. 38, No. 2. (April 2008), pp. 125-139, https://doi.org/10.1287/inte.1070.0317

Abstract

The prevalence of faulty citations impedes the growth of scientific knowledge. Faulty citations include omissions of relevant papers, incorrect references, and quotation errors that misreport findings. We discuss key studies in these areas. We then examine citations to “Estimating nonresponse bias in mail surveys,” one of the most frequently cited papers from the Journal of Marketing Research, to illustrate these issues. This paper is especially useful in testing for quotation errors because it provides specific operational recommendations on adjusting for nonresponse ...

 

Opinion: lay summaries needed to enhance science communication

  
Proceedings of the National Academy of Sciences, Vol. 112, No. 12. (24 March 2015), pp. 3585-3586, https://doi.org/10.1073/pnas.1500882112

Abstract

[Excerpt] At first blush, the notion of lay summaries seems a simple idea with admirable aims: Scientists write summaries of journal articles emphasizing the broad significance of research in accessible language. However, viewed from an ivory tower that has been besieged by an increasing amount of paperwork, scientists could easily regard lay summaries as just one more hurdle in peer-reviewed publishing, another administrative task to fit into an already busy agenda. [\n] But rather than an unrewarding burden, scientists (and journal publishers) ...

Visual summary

  • Figure: http://www.pnas.org/content/112/12/3585/F1.large.jpg
  • Source: http://dx.doi.org/10.1073/pnas.1500882112
  • Caption: A conceptual map depicts the pathways available for communicating research results between scientists and end users via different mechanisms (depicted by black dotted lines). Lay summaries of published articles would serve to enhance potential communication pathways (depicted by red solid lines) between scientists and the lay public, increase decision makers' access to information, and improve interdisciplinary communication.
 

Bars and medals

  
eLife, Vol. 4 (17 February 2015), e05787, https://doi.org/10.7554/elife.05787

Abstract

There can only be one gold medal in each event at the Olympics. In science, on the other hand, as Eve Marder explains, it is more important to recognize excellence in its many different forms than it is to identify a winner. [Excerpt] [...] While it is tempting to use the metaphor of the ‘bar’ as a way of asking whether a person is ‘good enough’ to merit promoting (or if a manuscript is ‘good enough’ to merit publishing), this facile comparison ...

 

The pleasure of publishing

  
eLife, Vol. 4 (06 January 2015), e05770, https://doi.org/10.7554/elife.05770

Abstract

When assessing manuscripts eLife editors look for a combination of rigour and insight, along with results and ideas that make other researchers think differently about their subject. [Excerpt] The senior editors at eLife are often asked: ‘Where is the bar for an eLife paper?’ Another frequent question is: ‘Why should I submit my best work to eLife?’ The second of these questions is not surprising because it is human nature to be wary of anything new and challenging. The first question has ...

 

Knowledge Freedom in computational science: a two stage peer-review process with KF eligibility access review

  

Abstract

Wide scale transdisciplinary modelling (WSTM) increasingly demands a focus on reproducible research and scientific knowledge freedom. Data and software freedom are essential aspects of knowledge freedom in computational science. Therefore, ideally published articles should also provide the readers with the data and source code of the described mathematical modelling. To maximise transparency, replicability, reproducibility and reusability, published data should be made available as open data while source code should be made available as free software. Here, a two-stage peer review process ...

 

The Open Science Peer Review Oath

  
F1000Research (12 November 2014), https://doi.org/10.12688/f1000research.5686.1

Abstract

One of the foundations of the scientific method is to be able to reproduce experiments and corroborate the results of research that has been done before. However, with the increasing complexities of new technologies and techniques, coupled with the specialisation of experiments, reproducing research findings has become a growing challenge. Clearly, scientific methods must be conveyed succinctly, and with clarity and rigour, in order for research to be reproducible. Here, we propose steps to help increase the transparency of the scientific ...

 

Outstanding reviewers for environmental modelling and software in 2007

  
Environmental Modelling & Software, Vol. 23, No. 12. (December 2008), 1343, https://doi.org/10.1016/j.envsoft.2008.06.005

Abstract

[Excerpt] In recognition of our reviewers and to encourage high standards of constructive assessment in the journal, the Editors of EMS have instituted ‘Outstanding Reviewer Awards.’ These awards will be made annually through consultation among the Editors. The criteria for awards are based on the constructiveness and depth of reviews, with some weight being given also to the number of reviews undertaken, as well as the turnaround time for the reviews. In any 1 year, a minimum of two reviews is required. ...

 

Outstanding reviewers for environmental modelling and software in 2008

  
Environmental Modelling & Software, Vol. 24, No. 10. (October 2009), pp. 1137-1138, https://doi.org/10.1016/j.envsoft.2009.03.005

Abstract

[Excerpt] Established in 2007, the ‘Outstanding Reviewer Awards’ are presented annually to recognize the dedication and efforts of our reviewers and to encourage high standards of constructive assessment in the journal. In 2008, 723 reviewers graciously offered their expertise to provide the editors with their professional advice on the scientific merit of works submitted to the journal. Ten awardees were shortlisted through consultation among the Editors, who considered the constructiveness and depth of reviews, the number of reviews performed and ...

 

Outstanding reviewers for Environmental Modelling and Software in 2009

  
Environmental Modelling & Software, Vol. 25, No. 10. (October 2010), 1063, https://doi.org/10.1016/j.envsoft.2010.02.006

Abstract

[Excerpt] Reviewers are key to the quality of Environmental Modelling and Software. They provide their advice, expertise and professional opinion on papers submitted to the journal to help ensure scientific rigour and validity in the works published. In 2009, 646 reviewers graciously provided their time and effort to the journal, and of these ten have been shortlisted to receive ‘Outstanding Reviewer Awards’. These awards are presented annually to recognize the dedication of those reviewers and to encourage high standards of constructive ...

 

Outstanding reviewers for Environmental Modelling and Software in 2013

  
Environmental Modelling & Software, Vol. 57 (July 2014), pp. iii-iv, https://doi.org/10.1016/s1364-8152(14)00131-5

Abstract

[Excerpt] Reviewers play a crucial role in helping the journal maintain its position as a leader in environmental modelling and software methodology by ensuring the high quality of its publications. We are grateful to each and every one of the 1152 reviewers who dedicated their time and expertise to the journal in 2013. In particular, we recognize the exceptional contributions made by our ‘Reviewer of the Year’ Gerry Laniak and the ten ‘Outstanding Reviewer Award’ recipients, listed below. These awardees were ...

 

Outstanding reviewers for Environmental Modelling and Software in 2010

  
Environmental Modelling & Software, Vol. 31 (May 2012), pp. 1-2, https://doi.org/10.1016/j.envsoft.2012.01.010

Abstract

[Excerpt] Environmental Modelling and Software’s high reputation would not be possible without the support from our reviewers who dedicate their expertise and time to help ensure the works published meet our high standard of scientific rigour and utility. In 2010, the journal was supported by 804 reviewers whose time and efforts are very much appreciated. While we are very grateful to every single one of our reviewers, we have shortlisted ten to receive Outstanding Reviewer Awards which recognize the considerable commitment ...

 

Academic urban legends

  
Social Studies of Science, Vol. 44, No. 4. (1 August 2014), pp. 638-654, https://doi.org/10.1177/0306312714535679

Abstract

Many of the messages presented in respectable scientific publications are, in fact, based on various forms of rumors. Some of these rumors appear so frequently, and in such complex, colorful, and entertaining ways that we can think of them as academic urban legends. The explanation for this phenomenon is usually that authors have lazily, sloppily, or fraudulently employed sources, and peer reviewers and editors have not discovered these weaknesses in the manuscripts during evaluation. To illustrate this phenomenon, I draw upon ...

 

Biodiversity data should be published, cited, and peer reviewed

  
Trends in Ecology & Evolution, Vol. 28, No. 8. (August 2013), pp. 454-461, https://doi.org/10.1016/j.tree.2013.05.002

Abstract

Knowledge depends on data and thus data quality. Data publication needs quality assurance standards like conventional publications. Peer review is the highest standard in scientific publications. Indicators for biodiversity data quality, including peer review, are proposed. Concerns over data quality impede the use of public biodiversity databases and subsequent benefits to society. Data publication could follow the well-established publication process: with automated quality checks, peer review, and editorial decisions. This would improve data accuracy, reduce the need for users to ‘clean’ ...

 

Supply and demand: apply market forces to peer review

  
Nature, Vol. 506, No. 7488. (19 February 2014), pp. 295-295, https://doi.org/10.1038/506295b

Abstract

[excerpt] [...] When it comes to the highly skilled service of peer reviewing, the supply is sufficiently high to keep the monetary value at zero. If, at a constant level of demand, the supply is reduced, then this price would go up. With an increased price, people could become professional reviewers to supplement their salary. [...] ...

 

Making Every Scientist a Research Funder

  
Science, Vol. 343, No. 6171. (07 February 2014), pp. 598-598, https://doi.org/10.1126/science.343.6171.598

Abstract

A radical proposal to revamp peer review would give scientists an even bigger role in deciding how to distribute U.S. research dollars—at a fraction of the current cost. ...

 

With Great Power Comes Great Responsibility: the Importance of Rejection, Power, and Editors in the Practice of Scientific Publishing

  
PLoS ONE, Vol. 8, No. 12. (30 December 2013), e85382, https://doi.org/10.1371/journal.pone.0085382

Abstract

Peer review is an important element of scientific communication but deserves quantitative examination. We used data from the manuscript handling service Manuscript Central for ten mid-tier ecology and evolution journals to test whether the number of external reviews completed improved citation rates for all accepted manuscripts. Contrary to a previous study examining this issue using resubmission data as a proxy for reviews, we show that citation rates of manuscripts do not correlate with the number of individuals that provided reviews. Importantly, externally-reviewed papers ...

 

Modeling, informatics, and the quest for reproducibility

  
Journal of Chemical Information and Modeling, Vol. 53, No. 7. (12 June 2013), pp. 1529-1530, https://doi.org/10.1021/ci400197w

Abstract

There is no doubt that papers published in the Journal of Chemical Information and Modeling, and related journals, provide valuable scientific information. However, it is often difficult to reproduce the work described in molecular modeling and cheminformatics papers. In many cases the software described in the paper is not readily available, in other cases the supporting information is not provided in an accessible format. To date, the major journals in the fields of molecular modeling and cheminformatics have not established guidelines ...

This page of the database may be cited as:
Integrated Natural Resources Modelling and Management - Meta-information Database. http://mfkp.org/INRMM/tag/peer-review



Meta-information Database (INRMM-MiD).
This database integrates a dedicated meta-information database in CiteULike (the CiteULike INRMM Group) with the meta-information available in Google Scholar, CrossRef and DataCite. The Altmetric database with Article-Level Metrics is also harvested. Part of the provided semantic content (machine-readable) is also made human-readable thanks to the DCMI Dublin Core viewer. Digital preservation of the meta-information indexed within the INRMM-MiD publication records is implemented thanks to the Internet Archive.
The library of INRMM-related publications may be quickly accessed by searching either within the whole INRMM meta-information database or only within the INRMM-MiD publication records.
Full-text and abstracts of the publications indexed by the INRMM meta-information database are copyrighted by the respective publishers/authors. They are subject to all applicable copyright protection. The conditions of use of each indexed publication are defined by its copyright owner. Please be aware that the indexed meta-information relies entirely on voluntary work and constitutes an incomplete and heterogeneous work in progress.
INRMM-MiD was experimentally established by the Maieutike Research Initiative in 2008 and then improved with the help of several volunteers (with a major technical upgrade in 2011). The current integrated interface has been operational since 2014.