
Selection: with tag research-metrics [99 articles] 

 

Countering European brain drain

  
Science, Vol. 356, No. 6339. (19 May 2017), pp. 695-696, https://doi.org/10.1126/science.aan3920

Abstract

[Excerpt] [...] Mobile European researchers who went to the United States were significantly more likely to report strong positive career effects than their mobile peers who moved within the European Union (EU) (up to twice as high) [...] In search of a possible “elite” brain drain from Europe, we examined return rates for a sample of Europeans pursuing Ph.D. degrees in economics in the United States (3). Those better students who received Ph.D. degrees from top U.S. institutes are more likely ...

 

Escape from the impact factor

  
Ethics in Science and Environmental Politics, Vol. 8, No. 1. (2008), pp. 5-7

Abstract

As Editor-in-Chief of the journal Nature, I am concerned by the tendency within academic administrations to focus on a journal’s impact factor when judging the worth of scientific contributions by researchers, affecting promotions, recruitment and, in some countries, financial bonuses for each paper. Our own internal research demonstrates how a high journal impact factor can be the skewed result of many citations of a few papers rather than the average level of the majority, reducing its value as an objective measure ...

 

A concise review on the role of author self-citations in information science, bibliometrics and science policy

  
Scientometrics, Vol. 67, No. 2. (2006), pp. 263-277, https://doi.org/10.1007/s11192-006-0098-9

Abstract

The objective of the present study is twofold: (1) to show the aims and means of quantitative interpretation of bibliographic features in bibliometrics and their re-interpretation in research policy, and (2) to summarise the state of the art in self-citation research. The authors describe three approaches to the role of author self-citations and possible conflicts arising from the different perspectives. From the bibliometric viewpoint we can conclude that there is no reason for condemning self-citations in general or for removing them from macro ...

 

Academia’s never-ending selection for productivity

  
Scientometrics, Vol. 103, No. 1. (15 February 2015), pp. 333-336, https://doi.org/10.1007/s11192-015-1534-5

Abstract

[Excerpt] Over the last decade, a debate has been emerging on “Academia’s obsession with quantity” (Lawrence 2007; Fischer et al. 2012a) and the subsequent Impact Factor Race, an unhealthy game played by scientists (Cherubini 2008; Brischoux and Cook 2009). Despite being widely despised by the scientific community (but see Loyola et al. 2012), the “publish or perish” dogma and the use of productivity indices (e.g., journal’s impact factor, number of published articles) to assess a researcher’s output seem to hold on, ...

 

Impact factors: no totum pro parte by skewness of citation

  
Cardiovascular Research, Vol. 61, No. 2. (01 February 2004), pp. 201-203, https://doi.org/10.1016/j.cardiores.2003.11.023

Abstract

Citation of the various papers published in one and the same journal is highly skewed. Journals with a high impact factor obtain this high value by frequent citation of only a limited number of their papers and, on the other hand, journals with low impact factors publish many papers that remain uncited [1]. Thus, mere publication of a paper in a given journal cannot be regarded as a quality marker of that particular paper [2]; it just means that the authors ...
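The skew argument is easy to make concrete. Below is a minimal numerical sketch (invented citation counts, not data from the article) comparing the mean citation rate, which is what an impact factor reports, with the median of the same distribution:

```python
# Toy skewed citation distribution (invented counts, illustration only):
# most papers are rarely cited, a handful are cited heavily.
citations = [0] * 60 + [1] * 20 + [2] * 10 + [5] * 6 + [40, 80, 120, 250]

mean = sum(citations) / len(citations)            # the IF-like statistic
median = sorted(citations)[len(citations) // 2]   # the "typical" paper
below = sum(c < mean for c in citations) / len(citations)

print(f"mean: {mean:.1f}  median: {median}  papers below the mean: {below:.0%}")
# -> mean: 5.6  median: 0  papers below the mean: 96%
```

Four highly cited papers carry most of the mean, so the journal-level average says little about any individual paper: the totum pro parte fallacy of the title.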

 

Attempts to manufacture scientific discovery

  
Nature, Vol. 94, No. 2358. (7 January 1915), pp. 512-512, https://doi.org/10.1038/094512a0

Abstract

[Excerpt] In an excellent article forming one of his admirable series of essays entitled “Science from an Easy-chair,” published in the Daily Telegraph of December 15, 1914, Sir Ray Lankester deals particularly with the case of the recent proposal that the Lister Institute should be handed over to the Medical Research Committee of the National Insurance Commission. The proposal was rejected on November 18 by the votes of the members; and Sir Ray Lankester preaches a useful sermon upon this text. ...

 

Corporate culture: protect idea factories

  
Nature, Vol. 543, No. 7646. (22 March 2017), pp. 491-491, https://doi.org/10.1038/543491a

Abstract

[Excerpt] It is unsurprising that universities have adopted corporate culture (Nature 540, 315 (2016); https://doi.org/10.1038/540315a), but surprising that they select such archaic models. Universities corporatize because they must raise funds through teaching, research and commercialization. [...] Universities are the only social institutions set up specifically to produce ideas, and this is their most valuable societal role. [...] Many universities have copied the manufacturing models of the 1950s. Power has shifted from academics to administrators. Academics are treated as interchangeable and replaceable, and performance ...

 

Post-normal institutional identities: quality assurance, reflexivity and ethos of care

  

Abstract

[Highlights] [::] Given the current crises of legitimacy and quality in mainstream science, institutions that produce and govern science and those that provide scientific advice to policy need to change their modi operandi; we advocate for an ethos of care. [::] Post-normal science and other frameworks of scientific knowledge production may inspire trustfulness in institutions that provide scientific advice to policy. [::] In Europe, the Joint Research Centre of the European Commission has the necessary scaffolding to advise policy in view of public interest, ...

 

A data citation roadmap for scientific publishers

  
bioRxiv (19 January 2017), 100784, https://doi.org/10.1101/100784

Abstract

This article presents a practical roadmap for scholarly publishers to implement data citation in accordance with the Joint Declaration of Data Citation Principles (JDDCP) [1], a synopsis and harmonization of the recommendations of major science policy bodies. It was developed by the Publishers Early Adopters Expert Group as part of the Data Citation Implementation Pilot (DCIP) project, an initiative of FORCE11.org and the NIH BioCADDIE program. The structure of the roadmap presented here follows the “life of a paper” workflow and includes the categories Pre-submission, Submission, Production, ...

 

Position paper for the endorsement of Free Software and Open Standards in Horizon 2020 and all publicly-funded research

  
In Free Software Foundation Europe (January 2017)

Abstract

The Free Software Foundation Europe (FSFE) is a charity that empowers users to control technology by advocating for Free Software. In a digital world, Free Software is the foundation of Open Knowledge, Open Innovation and Open Science. [\n] Software is an integral part of today’s society. Our daily interactions, transactions, education, communication channels, work and life environments rely heavily on software. "Free Software" refers to all programs distributed under terms and licences that allow users to run the software for any purpose, ...

 

When free software isn't (practically) superior

  
GNU Operating System (2011)

Abstract

[Excerpt] The Open Source Initiative's mission statement reads, “Open source is a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is better quality, higher reliability, more flexibility, lower cost, and an end to predatory vendor lock-in.” [\n] For more than a decade now, the Free Software Foundation has argued against this “open source” characterization of the free software movement. Free software advocates have primarily argued against this framing because ...

 

Five selfish reasons to work reproducibly

  
Genome Biology, Vol. 16, No. 1. (8 December 2015), 274, https://doi.org/10.1186/s13059-015-0850-7

Abstract

And so, my fellow scientists: ask not what you can do for reproducibility; ask what reproducibility can do for you! Here, I present five reasons why working reproducibly pays off in the long run and is in the self-interest of every ambitious, career-oriented scientist. [Excerpt] [::Reproducibility: what's in it for me?] In this article, I present five reasons why working reproducibly pays off in the long run and is in the self-interest of every ambitious, career-oriented scientist. [::] Reason number 1: reproducibility helps to avoid ...

 

The mismeasurement of science

  
Current Biology, Vol. 17, No. 15. (07 August 2007), pp. R583-R585, https://doi.org/10.1016/j.cub.2007.06.014

Abstract

[Excerpt:Impact factors and citations] Crucially, impact factors are distorted by positive feedback — many citations are not based on reading the paper but by reading other papers, particularly reviews. One study even suggested that, of cited articles, only some 20% had actually been read. [...] Nevertheless, citations are now being used to make quantitative comparisons between scientists. [...] [Changes in behaviour] Unfortunately, the use of these measures is having damaging effects on perceptions and on behaviour; these I list below. Please note that ...

 

The politics of publication

  
Nature, Vol. 422, No. 6929. (20 March 2003), pp. 259-261, https://doi.org/10.1038/422259a

Abstract

Authors, reviewers and editors must act to protect the quality of research. Listen. All over the world scientists are fretting. [Excerpt] The decision about publication of a paper is the result of interaction between authors, editors and reviewers. Scientists are increasingly desperate to publish in a few top journals and are wasting time and energy manipulating their manuscripts and courting editors. As a result, the objective presentation of work, the accessibility of articles and the quality of research itself are being compromised. ...

 

Lost in publication: how measurement harms science

  
Ethics in Science and Environmental Politics, Vol. 8 (03 June 2008), pp. 9-11, https://doi.org/10.3354/esep00079

Abstract

Measurement of scientific productivity is difficult. The measures used (impact factor of the journal, citations to the paper being measured) are crude. But these measures are now so universally adopted that they determine most things that matter: tenure or unemployment, a postdoctoral grant or none, success or failure. As a result, scientists have been forced to downgrade their primary aim from making discoveries to publishing as many papers as possible—and trying to work them into high impact factor journals. Consequently, scientific ...

 

Is grey literature ever used? Using citation analysis to measure the impact of GESAMP, an international marine scientific advisory body

  
Canadian Journal of Information and Library Science, Vol. 28, No. 1. (2004), pp. 45-65

Abstract

Citation analysis was used to measure the impact of GESAMP, the Joint Group of Experts on the Scientific Aspects of Marine Environmental Protection, which since 1969 has published reports for the United Nations and seven of its agencies. Web of Science was used to search for citations to 114 publications, of which 15 are journal articles or books. Citations to grey literature can be difficult to locate and interpret, but two-thirds of the 1436 citations, in 1178 citing papers, are to ...

 

Research software sustainability: report on a knowledge exchange workshop

  
(February 2016)

Abstract

[Excerpt: Executive summary] Without software, modern research would not be possible. Understandably, people tend to marvel at results rather than the tools used in their discovery, which means the fundamental role of software in research has been largely overlooked. But whether it is widely recognised or not, research is inexorably connected to the software that is used to generate results, and if we continue to overlook software we put at risk the reliability and reproducibility of the research itself. [\n] The adoption of software is accompanied by new risks - many of ...

 

Theory of citing

  
In Handbook of Optimization in Complex Networks, Vol. 57 (11 Sep 2012), pp. 463-505, https://doi.org/10.1007/978-1-4614-0754-6_16

Abstract

We present empirical data on misprints in citations to twelve high-profile papers. The great majority of misprints are identical to misprints in articles that earlier cited the same paper. The distribution of the numbers of misprint repetitions follows a power law. We develop a stochastic model of the citation process, which explains these findings and shows that about 70-90% of scientific citations are copied from the lists of references used in other papers. Citation copying can explain not only why some misprints become popular, but also why some ...
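The copying mechanism the authors describe can be sketched in a few lines (the structure and parameter values here are assumptions for illustration, not the authors' model code): each new citer either copies the reference string, misprint included, from a randomly chosen earlier citer, or cites from the source and occasionally introduces a fresh misprint.

```python
import random
from collections import Counter

random.seed(42)
P_COPY = 0.8           # assumed share of copied citations (the paper estimates 70-90%)
P_NEW_MISPRINT = 0.05  # assumed chance of a fresh typo when citing from the source

def simulate(n_citers):
    refs = []  # reference string used by each citer: "ok" or a misprint id
    for i in range(n_citers):
        if refs and random.random() < P_COPY:
            refs.append(random.choice(refs))  # copy an earlier reference verbatim
        elif random.random() < P_NEW_MISPRINT:
            refs.append(f"misprint-{i}")      # a new misprint enters the pool
        else:
            refs.append("ok")
    return refs

misprints = Counter(r for r in simulate(5000) if r != "ok")
print(misprints.most_common(5))  # a few early misprints dominate (heavy tail)
```

Because copying samples earlier references in proportion to how often they already occur, early misprints snowball; this rich-get-richer process is what produces the observed power law.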

 

It's impossible to conduct research without software, say 7 out of 10 UK researchers

  
Software and research, Vol. 5 (2014), 1536

Abstract

No one knows how much software is used in research. Look around any lab and you’ll see software – both standard and bespoke – being used by all disciplines and seniorities of researchers. Software is clearly fundamental to research, but we can’t prove this without evidence. And this lack of evidence is the reason why we ran a survey of researchers at 15 Russell Group universities to find out about their software use and background. [Excerpt: Headline figures] [::] 92% of academics use ...

 

Copyright contradictions in scholarly publishing

  
First Monday, Vol. 7, No. 11. (04 November 2002), 1006, https://doi.org/10.5210/fm.v7i11.1006

Abstract

This paper examines contradictions in how copyright works with the publishing of scholarly journals. These contradictions have to do with the protection of the authors’ interests and have become apparent with the rise of open access publishing as an alternative to the traditional commercial model of selling journal subscriptions. Authors may well be better served, as may the public which supports research, by open access journals because of their wider readership and early indications of greater scholarly impact. This paper reviews ...

 

Why linked data is not enough for scientists

  
Future Generation Computer Systems, Vol. 29, No. 2. (February 2013), pp. 599-611, https://doi.org/10.1016/j.future.2011.08.004

Abstract

[Abstract] Scientific data represents a significant portion of the linked open data cloud and scientists stand to benefit from the data fusion capability this will afford. Publishing linked data into the cloud, however, does not ensure the required reusability. Publishing has requirements of provenance, quality, credit, attribution and methods to provide the reproducibility that enables validation of results. In this paper we make the case for a scientific data publication model on top of linked data and introduce the notion of Research ...

 

Measuring scientific impact beyond citation counts

  
D-Lib Magazine, Vol. 22, No. 9/10. (September 2016), https://doi.org/10.1045/september2016-patton

Abstract

The measurement of scientific progress remains a significant challenge, exacerbated by the use of multiple different types of metrics that are often incorrectly used, overused, or even explicitly abused. Several metrics such as h-index or journal impact factor (JIF) are often used as a means to assess whether an author, article, or journal creates an "impact" on science. Unfortunately, external forces can be used to manipulate these metrics, thereby diluting the value of their intended, original purpose. This work highlights these ...

 

The natural selection of bad science

  
Royal Society Open Science, Vol. 3, No. 9. (01 September 2016), 160384, https://doi.org/10.1098/rsos.160384

Abstract

Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further ...
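The selection dynamic can be caricatured in a few lines of simulation (a toy model of my own construction under the paper's stated assumptions, not the authors' code): labs differ only in methodological effort, lower effort yields more publications, and each generation the least productive lab is replaced by a noisy copy of the most productive one.

```python
import random

random.seed(1)
labs = [random.uniform(0.2, 1.0) for _ in range(100)]  # methodological effort per lab

def n_papers(effort):
    # assumed trade-off: lower effort means more (and less reliable) papers
    return random.gauss(1.0 / effort, 0.3)

for generation in range(200):
    papers = [n_papers(e) for e in labs]
    best = labs[papers.index(max(papers))]  # the most productive lab is imitated
    mutant = best + random.gauss(0, 0.02)   # imitation with small mutation
    labs[papers.index(min(papers))] = min(1.0, max(0.05, mutant))

print(f"mean effort after selection: {sum(labs) / len(labs):.2f}")  # drifts low
```

No lab cheats in this sketch; rewarding sheer output is enough to erode rigor, which is the paper's central point.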

 

Open data: curation is under-resourced

  
Nature, Vol. 538, No. 7623. (05 October 2016), pp. 41-41, https://doi.org/10.1038/538041d

Abstract

[Excerpt] Science funders and researchers need to recognize the time, resources and effort required to curate open data [...]. There is no reliable business model to finance the curation and maintenance of data repositories. [...] Curation is not fully automated for most data types. This means that — in the life sciences, for example — many popular databases must resort to time-consuming manual curation to check data quality, reliability, provenance, format and metadata [...]. To make open data effective as a ...

 

Corporate culture has no place in academia

  
Nature, Vol. 538, No. 7623. (3 October 2016), pp. 7-7, https://doi.org/10.1038/538007a

Abstract

‘Academic capitalism’ contributed to the mishandling of the Macchiarini case by officials at the Karolinska Institute in Sweden, argues Olof Hallonsten. [Excerpt] [...] As academic capitalism spreads, universities abandon traditional meritocratic and collegial governance to hunt money, prestige and a stronger brand. [...] Yet this conduct goes against fundamental values of academia — the careful scrutiny of all claims, and of the research (and teaching) portfolios of those making such claims. This core principle in the self-organization of the academic system (studied ...

 

Who is accountable?

  
Nature, Vol. 450, No. 7166. (31 October 2007), pp. 1-1, https://doi.org/10.1038/450001a

Abstract

How the responsibilities of co-authors for a scientific paper's integrity could be made more explicit. ...

 

Authorship matters

  
Nature Materials, Vol. 7, No. 2. (01 February 2008), pp. 91-91, https://doi.org/10.1038/nmat2112

Abstract

Individual contributions should be carefully evaluated when compiling the author list of a scientific paper. ...

 

Why scientists must share their research code

  
Nature (13 September 2016), https://doi.org/10.1038/nature.2016.20504

Abstract

'Reproducibility editor' Victoria Stodden explains the growing movement to make code and data available to others. [Excerpt] [...] [::What does computational reproducibility mean?] It means that all details of computation — code and data — are made routinely available to others. If I can run your code on your data, then I can understand what you did. We need to expose all the steps that went into any discovery that relies on a computer. [::What’s the scientific value of running the same data with the ...

 

Archiving primary data: solutions for long-term studies

  
Trends in Ecology & Evolution, Vol. 30, No. 10. (2015), pp. 581-589, https://doi.org/10.1016/j.tree.2015.07.006

Abstract

The recent trend for journals to require open access to primary data included in publications has been embraced by many biologists, but has caused apprehension amongst researchers engaged in long-term ecological and evolutionary studies. A worldwide survey of 73 principal investigators (PIs) with long-term studies revealed positive attitudes towards sharing data with the agreement or involvement of the PI, and 93% of PIs have historically shared data. Only 8% were in favor of uncontrolled, open access to primary data while 63% ...

 

Opinion: science in the age of selfies

  
Proceedings of the National Academy of Sciences, Vol. 113, No. 34. (23 August 2016), pp. 9384-9387, https://doi.org/10.1073/pnas.1609793113

Abstract

[Excerpt] [\n] [...] [\n] Here there is a paradox: Today, there are many more scientists, and much more money is spent on research, yet the pace of fundamental innovation, the kinds of theories and engineering practices that will feed the pipeline of future progress, appears, to some observers, including us, to be slowing [...]. Why might that be the case? [\n] One argument is that “theoretical models” may not even exist for some branches of science, at least not in the ...

 

Beat it, impact factor! Publishing elite turns against controversial metric

  
Nature, Vol. 535, No. 7611. (8 July 2016), pp. 210-211, https://doi.org/10.1038/nature.2016.20224

Abstract

Senior staff at leading journals want to end inappropriate use of the measure. [Excerpt] [...] Calculated by various companies and promoted by publishers, journal impact factors (JIFs) are a measure of the average number of citations that articles published by a journal in the previous two years have received in the current year. [\n] They were designed to indicate the quality of journals, but researchers often use the metric to assess the quality of individual papers — and even, in some cases, their ...
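For reference, the two-year JIF described in the excerpt can be written out explicitly (the standard definition; the notation is mine):

```latex
\mathrm{JIF}_{Y} \;=\; \frac{C_{Y}\left(P_{Y-1} \cup P_{Y-2}\right)}{|P_{Y-1}| + |P_{Y-2}|}
% P_{Y-k}: citable items the journal published in year Y-k
% C_Y(S):  citations received during year Y by the items in S
```

Since this is an arithmetic mean over a heavily skewed citation distribution, a handful of articles can dominate the numerator, which is precisely why its use as a paper-level quality proxy is contested.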

 

The past, present and future of the PhD thesis

  
Nature, Vol. 535, No. 7610. (6 July 2016), pp. 7-7, https://doi.org/10.1038/535007a

Abstract

Writing a PhD thesis is a personal and professional milestone for many researchers. But the process needs to change with the times. [Excerpt] According to one of those often-quoted statistics that should be true but probably isn’t, the average number of people who read a PhD thesis all the way through is 1.6. And that includes the author. More interesting might be the average number of PhD theses that the typical scientist — and reader of Nature — has read from start ...

 

Badges to acknowledge open practices: a simple, low-cost, effective method for increasing transparency

  
PLoS Biology, Vol. 14, No. 5. (12 May 2016), e1002456, https://doi.org/10.1371/journal.pbio.1002456

Abstract

Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among ...

 

Software search is not a science, even among scientists

  
(8 May 2016)

Abstract

When they seek software for a task, how do people go about finding it? Past research found that searching the Web, asking colleagues, and reading papers have been the predominant approaches---but is it still true today, given the popularity of Facebook, Stack Overflow, GitHub, and similar sites? In addition, when users do look for software, what criteria do they use? And finally, if resources such as improved software catalogs were to be developed, what kind of information would people want in them? These questions motivated our cross-sectional survey ...

 

The pressure to publish pushes down quality

  
Nature, Vol. 533, No. 7602. (11 May 2016), pp. 147-147, https://doi.org/10.1038/533147a

Abstract

Scientists must publish less, says Daniel Sarewitz, or good research will be swamped by the ever-increasing volume of poor work. [Excerpt] [\n] [...] [\n] Indeed, the widespread availability of bibliometric data from sources such as Elsevier, Google Scholar and Thomson Reuters ISI makes it easy for scientists (with their employers looking over their shoulders) to obsess about their productivity and impact, and to compare their numbers with those of other scientists. [\n] And if more is good, then the trends for science are favourable. The ...

 

Promoting research resource identification at JCN

  
Journal of Comparative Neurology, Vol. 522, No. 8. (01 June 2014), pp. 1707-1707, https://doi.org/10.1002/cne.23585

Abstract

[Excerpt] [\n] [...] [\n] The attention of scientists, editors, and policymakers alike has turned recently to the issue of reproducibility in scientific research, focusing on research spanning from the pharmaceutical industry (Begley and Ellis, 2012) to the highest levels of government (Collins and Tabak, 2014; see also McNutt, 2014). While these commentaries point out that scientific misconduct is quite rare, they do point to a confluence of factors that hinder the reproducibility of scientific findings, including the identification of key reagents, such ...

 

The Resource Identification Initiative: a cultural shift in publishing

  
Neuroinformatics, Vol. 14, No. 2. (2016), pp. 169-182, https://doi.org/10.1007/s12021-015-9284-3

Abstract

A central tenet in support of research reproducibility is the ability to uniquely identify research resources, i.e., reagents, tools, and materials that are used to perform experiments. However, current reporting practices for research resources are insufficient to identify the exact resources that are reported or to answer basic questions such as “How did other studies use resource X?” To address this issue, the Resource Identification Initiative was launched as a pilot project to improve the reporting standards for research resources in ...

 

(INRMM-MiD internal record) List of keywords of the INRMM meta-information database - part 30

  
(February 2014)
Keywords: inrmm-list-of-tags   receptivity   record-to-update-or-delete   red-list   redd   redistributable-scientific-information   reference-manual   reforestation   refugia   regeneration   regional-climate   regional-climate-models   regional-scale   regression   regression-tree-analysis   regulating-services   reinforcement   reinforcement-learning   reinventing-weels   reiteration   relative-distance-similarity   relative-distance-similarity-ancillary   remote-sensing   renewable-energy   renewable-energy-directive   repeatability   repellent-species   replicability   reporting   representative-concentration-pathways   reproducibility   reproducible-research   reproduction   reproductive-effort   resampling   research-funding   research-funding-vs-public-outcome   research-management   research-metrics   research-team-size   reservoir-management   reservoir-services   resilience   resin   resistance   resources-exploitation   respiration   restoration   resurvey-of-semi-permanent   retraction   review   review-publication   review-scopus-european-biodiversity-indicators   revision-control-system   rewarding-best-research-practices   rhamnus-cathartica   rhamnus-catharticus   rhamnus-frangula   rhamnus-saxatilis   rhamnus-spp   rhizophora-apiculata   rhizophora-mangle   rhododendron   rhododendron-arboreum   rhododendron-ferrugineum   rhododendron-periclymenoides   rhododendron-ponticum   rhododendron-spp   rhododendron-viscosum   rhopalicus-tutela   rhus-spp   rhus-typhina   rhyacionia-buoliana   rhyacionia-frustrana   rhyssa-persuasoria   rhytisma   ribes-alpinum   ribes-rubrum   ribes-uva-crispa   ring-analysis   ring-width-chronologies   ringspot-virus   riparian-ecosystem   riparian-forest   riparian-zones   risk-analysis   risk-assessment   risk-reduction   river-flow   river-networks   river-restoration   roads   robert-hooke   robinia-pseudoacacia   robinia-spp   robust-modelling   rockfalls   rodent   romania   root-deterioration  

Abstract

List of indexed keywords within the transdisciplinary set of domains which relate to the Integrated Natural Resources Modelling and Management (INRMM). In particular, the list of keywords maps the semantic tags in the INRMM Meta-information Database (INRMM-MiD). [\n] The INRMM-MiD records providing this list are accessible by the special tag: inrmm-list-of-tags ( http://mfkp.org/INRMM/tag/inrmm-list-of-tags ). ...

 

The unsung heroes of scientific software

  
Nature, Vol. 529, No. 7584. (4 January 2016), pp. 115-116, https://doi.org/10.1038/529115a

Abstract

Creators of computer programs that underpin experiments don’t always get their due — so the website Depsy is trying to track the impact of research code. [Excerpt] For researchers who code, academic norms for tracking the value of their work seem grossly unfair. They can spend hours contributing to software that underpins research, but if that work does not result in the authorship of a research paper and accompanying citations, there is little way to measure its impact. [\n] [...] Depsy’s creators hope that their ...

 

Wikipedia ranking of world universities

  
(29 Nov 2015)

Abstract

We use the directed networks between articles of 24 Wikipedia language editions for producing the Wikipedia Ranking of World Universities (WRWU) using PageRank, 2DRank and CheiRank algorithms. This approach allows us to incorporate various cultural views on world universities using mathematical statistical analysis independent of cultural preferences. The Wikipedia ranking of the top 100 universities provides about 60 percent overlap with the Shanghai university ranking, demonstrating the reliable features of this approach. At the same time WRWU incorporates all knowledge accumulated at 24 Wikipedia editions giving stronger highlights for historically ...
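PageRank, the first algorithm named in the abstract, reduces to a short power iteration. The sketch below runs it on a made-up four-node link graph (the node names, the 0.85 damping factor and the graph itself are illustrative assumptions, not the WRWU data or code):

```python
# Minimal PageRank by power iteration on a toy directed link graph.
links = {                 # hypothetical "which article links to which" graph
    "UniA": ["UniB", "UniC"],
    "UniB": ["UniC"],
    "UniC": ["UniA"],
    "UniD": ["UniC"],
}
d = 0.85                  # conventional damping factor
n = len(links)
rank = {u: 1.0 / n for u in links}

for _ in range(50):       # iterate until the ranks (approximately) converge
    new = {u: (1.0 - d) / n for u in links}
    for u, outs in links.items():
        for v in outs:
            new[v] += d * rank[u] / len(outs)
    rank = new

for u, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{u}: {r:.3f}")  # UniC collects the most link mass
```

CheiRank applies the same iteration to the graph with all links reversed, and 2DRank combines the two, so nodes are rewarded for being communicative as well as popular.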

 

The oligopoly of academic publishers in the digital era

  
PLOS ONE, Vol. 10, No. 6. (10 June 2015), e0127502, https://doi.org/10.1371/journal.pone.0127502

Abstract

The consolidation of the scientific publishing industry has been the topic of much debate within and outside the scientific community, especially in relation to major publishers’ high profit margins. However, the share of scientific output published in the journals of these major publishers, as well as its evolution over time and across various disciplines, has not yet been analyzed. This paper provides such analysis, based on 45 million documents indexed in the Web of Science over the period 1973-2013. It shows ...

Visual summary

 

Which people use which scientific papers? An evaluation of data from F1000 and Mendeley

  
Journal of Informetrics, Vol. 9, No. 3. (July 2015), pp. 477-487, https://doi.org/10.1016/j.joi.2015.04.001

Abstract

[Highlights] [::] This study used the Mendeley API to download Mendeley counts for a comprehensive F1000Prime data set. [::] F1000Prime is a post-publication peer review system for papers from the biomedical area. [::] The F1000 papers are provided with tags from experts in this area which can characterise a paper more exactly (such as “good for teaching”). [::] Regression models with Mendeley counts as dependent variables have been calculated. [::] In the case of a well written article that provides a good overview of a topic, ...

 

Grant giving: global funders to focus on interdisciplinarity

  
Nature, Vol. 525, No. 7569. (16 September 2015), pp. 313-315, https://doi.org/10.1038/525313a

Abstract

Granting bodies need more data on how much they are spending on work that transcends disciplines, and to what end, explains Rick Rylance. [Excerpt] Three arguments are often made in favour of interdisciplinary research. [::] First, complex modern problems such as climate change and resource security are not amenable to single-discipline investigation; they often require many types of expertise across the biological, physical and social disciplines. [::] Second, discoveries are said to be more likely on the boundaries between fields, where the ...

 

Interdisciplinarity: how to catalyse collaboration

  
Nature, Vol. 525, No. 7569. (16 September 2015), pp. 315-317, https://doi.org/10.1038/525315a

Abstract

Turn the fraught flirtation between the social and biophysical sciences into fruitful partnerships with these five principles, urge Rebekah R. Brown, Ana Deletic and Tony H. F. Wong. [Excerpt] An urgent push to bridge the divide between the biophysical and the social sciences is crucial. It is the only way to drive global sustainable development that delivers social inclusion, environmental sustainability and economic prosperity [1]. Sustainability is the classic 'wicked' problem, characterized by poorly defined requirements, unclear boundaries and contested causes that no ...

 

Interdisciplinary research by the numbers

  
Nature, Vol. 525, No. 7569. (16 September 2015), pp. 306-307, https://doi.org/10.1038/525306a

Abstract

An analysis reveals the extent and impact of research that bridges disciplines. [Excerpt] Interdisciplinary work is considered crucial by scientists, policymakers and funders — but how widespread is it really, and what impact does it have? Scholars say that the concept is complex to define and measure, but efforts to map papers by the disciplines of the journals they appear in and by their citation patterns are — tentatively — revealing the growth and influence of interdisciplinary research. [\n][...] [Interdisciplinary research takes time to ...

Visual summary

 

Why interdisciplinary research matters

  
Nature, Vol. 525, No. 7569. (16 September 2015), pp. 305-305, https://doi.org/10.1038/525305a

Abstract

Scientists must work together to save the world. A special issue asks how they can scale disciplinary walls. [Excerpt] To solve the grand challenges facing society — energy, water, climate, food, health — scientists and social scientists must work together. But research that transcends conventional academic boundaries is harder to fund, do, review and publish — and those who attempt it struggle for recognition ...

 

Author sequence and credit for contributions in multiauthored publications

  
PLoS Biology, Vol. 5, No. 1. (16 January 2007), e18, https://doi.org/10.1371/journal.pbio.0050018

Abstract

A transparent, simple, and straightforward approach that is free from any arbitrary rank valuation is required to estimate the credit associated with the sequence of authors' names on multiauthored papers. [Excerpt] The increasing tendency across scientific disciplines to write multiauthored papers [1,2] makes the issue of the sequence of contributors' names a major topic both in terms of reflecting actual contributions and in a posteriori assessments by evaluation committees. Traditionally, the first author contributes most and also receives most of the credit, ...

 

Collective credit allocation in science

  
Proceedings of the National Academy of Sciences of the United States of America, Vol. 111, No. 34. (26 August 2014), pp. 12325-12330, https://doi.org/10.1073/pnas.1401992111

Abstract

[Significance] The increasing dominance of multiauthor papers is straining the credit system of science: although for single-author papers, the credit is obvious and undivided, for multiauthor papers, credit assignment varies from discipline to discipline. Consequently, each research field runs its own informal credit allocation system, which is hard to decode for outsiders. Here we develop a discipline-independent algorithm to decipher the collective credit allocation process within science, capturing each coauthor’s perceived contribution to a publication. The proposed method provides scientists and policy-makers ...

 

Competitive science: is competition ruining science?

  
Infection and Immunity, Vol. 83, No. 4. (01 April 2015), pp. 1229-1233, https://doi.org/10.1128/iai.02939-14

Abstract

Science has always been a competitive undertaking. Despite recognition of the benefits of cooperation and team science, reduced availability of funding and jobs has made science more competitive than ever. Here we consider the benefits of competition in providing incentives to scientists and the adverse effects of competition on resource sharing, research integrity, and creativity. The history of science shows that transformative discoveries often occur in the absence of competition, which only emerges once fields are established and goals are defined. ...

 

The future of science will soon be upon us

  
Nature, Vol. 524, No. 7564. (12 August 2015), pp. 137-137, https://doi.org/10.1038/524137a

Abstract

The European Commission has abandoned consideration of 'Science 2.0', finding it too ambitious. That was the wrong call, says Colin Macilwain. [Excerpt] As the staff of the European Commission head for the beaches this August, they have been asked to ponder the future of science. Research commissioner Carlos Moedas has announced his priorities as being “open science” and “open innovation”, and invited his team to report back with its ideas on how to achieve that. [\n] These goals sound laudable enough, but they're ...

This page of the database may be cited as:
Integrated Natural Resources Modelling and Management - Meta-information Database. http://mfkp.org/INRMM/tag/research-metrics



Meta-information Database (INRMM-MiD).
This database integrates a dedicated meta-information database in CiteULike (the CiteULike INRMM Group) with the meta-information available in Google Scholar, CrossRef and DataCite. The Altmetric database with Article-Level Metrics is also harvested. Part of the provided machine-readable semantic content is also made human-readable thanks to the DCMI Dublin Core viewer. Digital preservation of the meta-information indexed within the INRMM-MiD publication records is implemented thanks to the Internet Archive.
Full-text and abstracts of the publications indexed by the INRMM meta-information database are copyrighted by the respective publishers/authors and are subject to all applicable copyright protection. The conditions of use of each indexed publication are defined by its copyright owner. Please be aware that the indexed meta-information relies entirely on voluntary work and constitutes an incomplete, heterogeneous work in progress.
INRMM-MiD was experimentally established by the Maieutike Research Initiative in 2008 and then improved with the help of several volunteers (with a major technical upgrade in 2011). The current integrated interface has been operational since 2014.