From MFKP_wiki


Selection: with tag computational-science [177 articles] 

 

A general algorithm for computing distance transforms in linear time

  
In Mathematical Morphology and its Applications to Image and Signal Processing, Vol. 18 (2000), pp. 331-340, https://doi.org/10.1007/0-306-47025-x_36

Abstract

A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the computation per row (column) is independent of the computation of other rows (columns), the algorithm can be easily parallelized on shared memory computers. The algorithm can be used for the computation of the ...
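
As a hedged illustration of the two-phase structure described above (not the authors' own pseudocode, which also covers Euclidean and chessboard metrics), a minimal Python/NumPy sketch of the city-block case could look like this; the function name and the NumPy dependency are assumptions for the example.

    import numpy as np

    def l1_distance_transform(binary_image):
        """Two-phase city-block (L1) distance transform.

        Phase 1 processes each column with a forward and a backward scan;
        phase 2 processes each row the same way. Rows and columns are
        mutually independent, so both phases parallelise naturally.
        """
        img = np.asarray(binary_image, dtype=bool)
        rows, cols = img.shape
        inf = rows + cols  # larger than any possible L1 distance

        # Phase 1: per-column distance to the nearest feature pixel.
        g = np.where(img, 0, inf)
        for y in range(1, rows):            # forward (top-down) scan
            g[y] = np.minimum(g[y], g[y - 1] + 1)
        for y in range(rows - 2, -1, -1):   # backward (bottom-up) scan
            g[y] = np.minimum(g[y], g[y + 1] + 1)

        # Phase 2: per-row combination of the column distances.
        dt = g.copy()
        for x in range(1, cols):            # forward (left-right) scan
            dt[:, x] = np.minimum(dt[:, x], dt[:, x - 1] + 1)
        for x in range(cols - 2, -1, -1):   # backward (right-left) scan
            dt[:, x] = np.minimum(dt[:, x], dt[:, x + 1] + 1)
        return dt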

 

Ten simple rules for making research software more robust

  
PLOS Computational Biology, Vol. 13, No. 4. (13 April 2017), e1005412, https://doi.org/10.1371/journal.pcbi.1005412

Abstract

[Abstract] Software produced for research, published and otherwise, suffers from a number of common problems that make it difficult or impossible to run outside the original institution or even off the primary developer’s computer. We present ten simple rules to make such software robust enough to be run by anyone, anywhere, and thereby delight your users and collaborators. [Author summary] Many researchers have found out the hard way that there’s a world of difference between “works for me on my machine” and “works for ...

 

Multi-dimensional weighted median: the module "wmedian" of the Mastrave modelling library

  
In Semantic Array Programming with Mastrave - Introduction to Semantic Computational Modelling (2012)

Abstract

Weighted median (WM) filtering is a well known technique for dealing with noisy images and a variety of WM-based algorithms have been proposed as effective ways for reducing uncertainties or reconstructing degraded signals by means of available information with heterogeneous reliability. Here a generalized module for applying weighted median filtering to multi-dimensional arrays of information with associated multi-dimensional arrays of corresponding weights is presented. Weights may be associated to single elements or to groups of elements along given dimensions of the ...
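
The Mastrave module itself targets GNU Octave/MATLAB semantic array programming; purely as a hedged, language-agnostic illustration of the underlying computation (a single weighted median, without the multi-dimensional and grouped-weight generality of the module), a Python/NumPy sketch could be:

    import numpy as np

    def weighted_median(values, weights):
        """Smallest value v such that the total weight of elements <= v
        reaches at least half of the overall weight."""
        values = np.asarray(values, dtype=float)
        weights = np.asarray(weights, dtype=float)
        order = np.argsort(values)
        v, w = values[order], weights[order]
        cum = np.cumsum(w)
        return v[np.searchsorted(cum, 0.5 * cum[-1])]

    # With equal weights this reduces to the (lower) ordinary median;
    # heavily weighting one element pulls the result towards it.
    print(weighted_median([1, 2, 3, 4], [1, 1, 1, 1]))   # -> 2.0
    print(weighted_median([1, 2, 3, 4], [1, 1, 1, 10]))  # -> 4.0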

 

Running an open experiment: transparency and reproducibility in soil and ecosystem science

  
Environmental Research Letters, Vol. 11, No. 8. (01 August 2016), 084004, https://doi.org/10.1088/1748-9326/11/8/084004

Abstract

Researchers in soil and ecosystem science, and almost every other field, are being pushed—by funders, journals, governments, and their peers—to increase transparency and reproducibility of their work. A key part of this effort is a move towards open data as a way to fight post-publication data loss, improve data and code quality, enable powerful meta- and cross-disciplinary analyses, and increase trust in, and the efficiency of, publicly-funded research. Many scientists however lack experience in, and may be unsure of the benefits ...

 

The world's simplest impossible problem

  
MathWorks Technical Articles and Newsletters, Vol. 1 (1990), 92036v00

Abstract

If the average of two numbers is three, what are the numbers? The solution to this problem is not unique, and the problem is ill-defined, but that does not mean that MATLAB® cannot solve it. [\n] In this article from 1990, Cleve Moler explores this simple yet impossible problem and others like it using MATLAB to find answers with the fewest nonzero components and other “nice” solutions. ...
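
The article works in MATLAB; as a hedged restatement of the same idea in another environment, the minimum-norm answer to the underdetermined system (x1 + x2)/2 = 3 can be obtained from NumPy's least-squares routine (all names below are illustrative):

    import numpy as np

    # "The average of two numbers is three": one equation, two unknowns.
    A = np.array([[0.5, 0.5]])
    b = np.array([3.0])

    # lstsq returns the minimum-norm solution of an underdetermined system.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(x)  # -> [3. 3.]; the sparsest solution ("fewest nonzeros") is [6, 0]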

 

Rainbow color map critiques: an overview and annotated bibliography

  
MathWorks Technical Articles and Newsletters, Vol. 25 (2014), 92238v00

Abstract

A rainbow color map is based on the order of colors in the spectrum of visible light—the same colors that appear in a rainbow. Rainbow color maps commonly appear in data visualizations in many different scientific and engineering communities, and technical computing software often provides a rainbow color map as the default choice. Although rainbow color maps remain popular, they have a number of weaknesses when used for scientific visualization, and have been widely criticized. [\n] This paper summarizes the criticisms of ...
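
As a hedged practical illustration of the critique (not an example from the paper), the snippet below renders the same smooth data with a rainbow map and with a perceptually uniform map in Matplotlib; the data and the colormap names ('jet', 'viridis') are assumptions chosen for the comparison.

    import numpy as np
    import matplotlib.pyplot as plt

    # Smooth test data: any apparent banding under 'jet' is a colormap artifact.
    x, y = np.meshgrid(np.linspace(-3, 3, 300), np.linspace(-3, 3, 300))
    z = np.exp(-(x**2 + y**2) / 4)

    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, cmap in zip(axes, ["jet", "viridis"]):
        im = ax.imshow(z, cmap=cmap)
        ax.set_title(cmap)
        fig.colorbar(im, ax=ax)
    plt.show()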

 

Enhancing reproducibility for computational methods

  
Science, Vol. 354, No. 6317. (09 December 2016), pp. 1240-1241, https://doi.org/10.1126/science.aah6168

Abstract

Over the past two decades, computational methods have radically changed the ability of researchers from all areas of scholarship to process and analyze data and to simulate complex systems. But with these advances come challenges that are contributing to broader concerns over irreproducibility in the scholarly literature, among them the lack of transparency in disclosure of computational methods. Current reporting methods are often uneven, incomplete, and still evolving. We present a novel set of Reproducibility Enhancement Principles (REP) targeting disclosure challenges ...

 

It's impossible to conduct research without software, say 7 out of 10 UK researchers

  
Software and research, Vol. 5 (2014), 1536

Abstract

No one knows how much software is used in research. Look around any lab and you’ll see software – both standard and bespoke – being used by all disciplines and seniorities of researchers. Software is clearly fundamental to research, but we can’t prove this without evidence. And this lack of evidence is the reason why we ran a survey of researchers at 15 Russell Group universities to find out about their software use and background. [Excerpt: Headline figures] [::] 92% of academics use ...

 

LINPACK: users' guide

  
(1979)

Abstract

[Excerpt: Table of Contents] "R.T.F.M." - Anonymous [\n] [...] [Overview] LINPACK is a collection of Fortran subroutines which analyze and solve various systems of simultaneous linear algebraic equations. The subroutines are designed to be completely machine independent, fully portable, and to run at near optimum efficiency in most operating environments. [\n] Many of the subroutines deal with square coefficient matrices, where there are as many equations as unknowns. Some of the subroutines process rectangular coefficient matrices, where the system may be over- or underdetermined. Such systems ...
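
LINPACK itself is a Fortran library (its role is nowadays largely taken over by LAPACK); as a hedged modern analogue of the square-system case described above, solving Ax = b through NumPy's LAPACK bindings looks like this (the matrix and right-hand side are arbitrary example data):

    import numpy as np

    # A square system with as many equations as unknowns.
    A = np.array([[3.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([9.0, 8.0])

    x = np.linalg.solve(A, b)     # LU factorisation with partial pivoting
    print(x)                      # -> [2. 3.]
    print(np.allclose(A @ x, b))  # residual check -> True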

 

Trusting others to ‘do the math’

  
Interdisciplinary Science Reviews, Vol. 40, No. 4. (2 October 2015), pp. 376-392, https://doi.org/10.1080/03080188.2016.1165454

Abstract

Researchers effectively trust the work of others anytime they use software tools or custom software. In this article I explore this notion of trusting others, using Digital Humanities as a focus, and drawing on my own experience. Software is inherently flawed and limited, so its use in scholarship demands better practices and terminology to review research software and describe development processes. It is also important to make research software engineers and their work more visible, both for the purposes of ...

 

Software and scholarship

  
Interdisciplinary Science Reviews, Vol. 40, No. 4. (2 October 2015), pp. 342-348, https://doi.org/10.1080/03080188.2016.1165456

Abstract

[excerpt] The thematic focus of this issue is to examine what happens where software and scholarship meet, with particular reference to digital work in the humanities. Despite some seven decades of existence, Digital Humanities continues to struggle with the implications, in the academic ecosystem, of its position between engineering and art. [...] [\n] [...] [\n] I will end with my own reflection on this topic of evaluation. Peer review of scholarly works of software continues to pose a particularly vexed challenge ...

 

Ten steps to programming mastery

  
(2003)

Abstract

[Excerpt] Here are ten ways you can improve your coding. The overriding principle for improving your skill at coding, as at almost any endeavor, is to open your mind and then fill it with better knowledge. Improvement necessarily implies change, yet it is human nature to fear and resist change. But overcoming that fear and embracing change as a way of life will enable you to reach new levels of achievement. [...] [::Big Rule 1: Break your own habits] When you began coding, you were much less experienced ...

 

ePiX tutorial and reference manual

  
(2008)

Abstract

[Excerpt: Introduction] ePiX, a collection of batch utilities, creates mathematically accurate figures, plots, and animations containing LaTeX typography. The input syntax is easy to learn, and the user interface resembles that of LaTeX itself: You prepare a scene description in a text editor, then “compile” the input file into a picture. LaTeX- and web-compatible output types include a LaTeX picture-like environment written with PSTricks, tikz, or eepic macros; vector images (eps, ps, and pdf); and bitmapped images and movies (png, mng, and gif). [\n] ePiX’s strengths include: [::] Quality of ...

 

The hard road to reproducibility

  
Science, Vol. 354, No. 6308. (07 October 2016), p. 142, https://doi.org/10.1126/science.354.6308.142

Abstract

[Excerpt] [...] A couple years ago, we published a paper applying computational fluid dynamics to the aerodynamics of flying snakes. More recently, I asked a new student to replicate the findings of that paper, both as a training opportunity and to help us choose which code to use in future research. Replicating a published study is always difficult—there are just so many conditions that need to be matched and details that can't be overlooked—but I thought this case was relatively straightforward. ...

 

Academic authorship: who, why and in what order?

  
Health Renaissance, Vol. 11, No. 2. (19 June 2013), https://doi.org/10.3126/hren.v11i2.8214

Abstract

We are frequently asked by our colleagues and students for advice on authorship for scientific articles. This short paper outlines some of the issues that we have experienced and the advice we usually provide. This editorial follows on from our work on submitting a paper [1] and also on writing an academic paper for publication [2]. We should like to start by noting that, in our view, there exist two separate, but related issues: (a) authorship and (b) order of authors. The issue of authorship centres on the notion of who can be ...

 

Linking ecological information and radiative transfer models to estimate fuel moisture content in the Mediterranean region of Spain: solving the ill-posed inverse problem

  
Remote Sensing of Environment, Vol. 113, No. 11. (16 November 2009), pp. 2403-2411, https://doi.org/10.1016/j.rse.2009.07.001

Abstract

Live fuel moisture content (FMC) is a key factor required to evaluate fire risk and its operative and accurate estimation is essential for allocating pre-fire resources as a part of fire prevention. This paper presents an operative and accurate procedure to estimate FMC through MODIS (moderate resolution imaging spectrometer) data and simulation models. The new aspects of the method are its consideration of several ecological criteria to parameterize the models and consistently avoid simulating unrealistic spectra which might produce indetermination (ill-posed) ...

 

Filesystem Hierarchy Standard

  
(2015)

Abstract

This standard consists of a set of requirements and guidelines for file and directory placement under UNIX-like operating systems. The guidelines are intended to support interoperability of applications, system administration tools, development tools, and scripts as well as greater uniformity of documentation for these systems. ...

 

Unfalsifiability of security claims

  
Proceedings of the National Academy of Sciences, Vol. 113, No. 23. (07 June 2016), pp. 6415-6420, https://doi.org/10.1073/pnas.1517797113

Abstract

[Significance] Much in computer security involves recommending defensive measures: telling people how they should choose and maintain passwords, manage their computers, and so on. We show that claims that any measure is necessary for security are empirically unfalsifiable. That is, no possible observation contradicts a claim of the form “if you don’t do X you are not secure.” This means that self-correction operates only in one direction. If we are wrong about a measure being sufficient, a successful attack will demonstrate that ...

 

Gotchas in writing Dockerfile

  
(2014)

Abstract

[Excerpt: Why do we need to use Dockerfile?] A Dockerfile is not yet another shell. The Dockerfile has a special mission: automation of Docker image creation. [\n] Once you write build instructions into a Dockerfile, you can rebuild the same image simply with the docker build command. [\n] A Dockerfile is also useful for telling somebody else what job the container does. Your teammates can tell what the container is supposed to do just by reading the Dockerfile. They don’t need to log in to the ...

 

An introduction to Docker for reproducible research, with examples from the R environment

  
ACM SIGOPS Operating Systems Review, Vol. 49, No. 1. (2 Oct 2014), pp. 71-79, https://doi.org/10.1145/2723872.2723882

Abstract

As computational work becomes more and more integral to many aspects of scientific research, computational reproducibility has become an issue of increasing importance to computer systems researchers and domain scientists alike. Though computational reproducibility seems more straightforward than replicating physical experiments, the complex and rapidly changing nature of computer environments makes being able to reproduce and extend such work a serious challenge. In this paper, I explore common reasons that code developed for one research project cannot be successfully executed or extended by subsequent researchers. I review current ...

 

Using Docker to support reproducible research

  

Abstract

Reproducible research is a growing movement among scientists, but the tools for creating sustainable software to support the computational side of research are still in their infancy and are typically only being used by scientists with expertise in computer programming and system administration. Docker is a new platform developed for the DevOps community that enables the easy creation and management of consistent computational environments. This article describes how we have applied it to computational science and suggests that it could ...

 

Modelling as a discipline

  
International Journal of General Systems, Vol. 30, No. 3. (1 January 2001), pp. 261-282, https://doi.org/10.1080/03081070108960709

Abstract

Modelling is an essential and inseparable part of all scientific, and indeed all intellectual, activity. How then can we treat it as a separate discipline? The answer is that the professional modeller brings special skills and techniques to bear in order to produce results that are insightful, reliable, and useful. Many of these techniques can be taught formally, such as sophisticated statistical methods, computer simulation, systems identification, and sensitivity analysis. These are valuable tools, but they are not as important as ...

 

Software search is not a science, even among scientists

  
(8 May 2016)

Abstract

When they seek software for a task, how do people go about finding it? Past research found that searching the Web, asking colleagues, and reading papers have been the predominant approaches---but is it still true today, given the popularity of Facebook, Stack Overflow, GitHub, and similar sites? In addition, when users do look for software, what criteria do they use? And finally, if resources such as improved software catalogs were to be developed, what kind of information would people want in them? These questions motivated our cross-sectional survey ...

 

A (partial) introduction to software engineering practices and methods

  
(2010)

Abstract

[Excerpt: Introduction] Software engineering is concerned with all aspects of software production from the early stages of system specification through to maintaining the system after it has gone into use. [...] [\n] [...] As a discipline, software engineering has progressed very far in a very short period of time, particularly when compared to classical engineering fields (like civil or electrical engineering). In the early days of computing, not much more than 50 years ago, computerized systems were quite small. Most of the programming was done by scientists trying to ...

 

EucaTool®, a cloud computing application for estimating the growth and production of Eucalyptus globulus Labill. plantations in Galicia (NW Spain)

  
Forest Systems, Vol. 24, No. 3. (03 December 2015), eRC06, https://doi.org/10.5424/fs/2015243-07865

Abstract

[Aim of study] To present the software utilities and explain how to use EucaTool®, a free cloud computing application developed to estimate the growth and production of seedling and clonal blue gum (Eucalyptus globulus Labill.) plantations in Galicia (NW Spain). [Area of study] Galicia (NW Spain). [Material and methods] EucaTool® implements a dynamic growth and production model that is valid for clonal and non-clonal blue gum plantations in the region. The model integrates transition functions for dominant height (site index curves), number of ...

 

Tales of future weather

  
Nature Climate Change, Vol. 5, No. 2. (28 January 2015), pp. 107-113, https://doi.org/10.1038/nclimate2450

Abstract

Society is vulnerable to extreme weather events and, by extension, to human impacts on future events. As climate changes, weather patterns will change. The search is on for more effective methodologies to aid decision-makers both in mitigation to avoid climate change and in adaptation to changes. The traditional approach uses ensembles of climate model simulations, statistical bias correction, downscaling to the spatial and temporal scales relevant to decision-makers, and then translation into quantities of interest. The veracity of this approach cannot ...

 

Software Dependencies, Work Dependencies, and Their Impact on Failures

  
IEEE Transactions on Software Engineering, Vol. 35, No. 6. (November 2009), pp. 864-878, https://doi.org/10.1109/tse.2009.42

Abstract

Prior research has shown that customer-reported software faults are often the result of violated dependencies that are not recognized by developers implementing software. Many types of dependencies and corresponding measures have been proposed to help address this problem. The objective of this research is to compare the relative performance of several of these dependency measures as they relate to customer-reported defects. Our analysis is based on data collected from two projects from two independent companies. Combined, our data set encompasses eight ...

 

Realization of a scalable Shor algorithm

  
Science, Vol. 351, No. 6277. (04 March 2016), pp. 1068-1070, https://doi.org/10.1126/science.aad9480

Abstract

[Reducing quantum overhead] A quantum computer is expected to outperform its classical counterpart in certain tasks. One such task is the factorization of large integers, the technology that underpins the security of bank cards and online privacy. Using a small-scale quantum computer comprising five trapped calcium ions, Monz et al. implement a scalable version of Shor's factorization algorithm. With the function of ions being recycled and the architecture scalable, the process is more efficient than previous implementations. The approach thus provides the ...

 

License compatibility and relicensing

  
In Licenses (2016)

Abstract

If you want to combine two free programs into one, or merge code from one into the other, this raises the question of whether their licenses allow combining them. [\n] There is no problem merging programs that have the same license, if it is a reasonably behaved license, as nearly all free licenses are.(*) [\n] What then when the licenses are different? In general we say that several licenses are compatible if there is a way to merge code under those various licenses ...

 

Binless strategies for estimation of information from neural data

  
Physical Review E, Vol. 66, No. 5. (11 November 2002), 051903, https://doi.org/10.1103/physreve.66.051903

Abstract

We present an approach to estimate information carried by experimentally observed neural spike trains elicited by known stimuli. This approach makes use of an embedding of the observed spike trains into a set of vector spaces, and entropy estimates based on the nearest-neighbor Euclidean distances within these vector spaces [L. F. Kozachenko and N. N. Leonenko, Probl. Peredachi Inf. 23, 9 (1987)]. Using numerical examples, we show that this approach can be dramatically more efficient than standard bin-based approaches such as ...
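
As a hedged sketch of the nearest-neighbour principle the paper builds on (the Kozachenko-Leonenko entropy estimator for continuous samples, not the paper's full embedding of spike trains), a minimal Python version using SciPy could be:

    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import digamma, gammaln

    def kl_entropy(samples, k=1):
        """Kozachenko-Leonenko nearest-neighbour entropy estimate (nats).

        samples : (N, d) array of continuous observations.
        k       : which nearest neighbour to use (k = 1 is the classic case).
        """
        x = np.asarray(samples, dtype=float)
        if x.ndim == 1:
            x = x[:, None]
        n, d = x.shape
        # Distance from each point to its k-th nearest neighbour (itself excluded).
        r = cKDTree(x).query(x, k=k + 1)[0][:, -1]
        log_unit_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
        return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(r))

    # Sanity check: a standard 1-D Gaussian has entropy 0.5*ln(2*pi*e) ~ 1.42 nats.
    rng = np.random.default_rng(0)
    print(kl_entropy(rng.normal(size=5000)))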

 

A tutorial on independent component analysis

  
(11 Apr 2014)

Abstract

Independent component analysis (ICA) has become a standard data analysis technique applied to an array of problems in signal processing and machine learning. This tutorial provides an introduction to ICA based on linear algebra formulating an intuition for ICA from first principles. The goal of this tutorial is to provide a solid foundation on this advanced topic so that one might learn the motivation behind ICA, learn why and when to apply this technique and in the process gain an introduction to this exciting field of active research. [Excerpt: ...
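
The tutorial develops ICA from linear-algebra first principles; purely as a hedged usage-level illustration (not the tutorial's own code), scikit-learn's FastICA can recover two artificially mixed sources, up to the usual permutation and scaling ambiguities:

    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 8, 2000)
    # Two independent, non-Gaussian sources...
    sources = np.column_stack([np.sign(np.sin(3 * t)),  # square wave
                               np.sin(2 * t)])          # sinusoid
    # ...observed only through an unknown linear mixing.
    mixing = np.array([[1.0, 0.5],
                       [0.4, 1.0]])
    observed = sources @ mixing.T

    recovered = FastICA(n_components=2, random_state=0).fit_transform(observed)
    # 'recovered' matches 'sources' up to column permutation and rescaling.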

 

GNU Coding Standards

  
(2015)

Abstract

[Excerpt: About the GNU Coding Standards] The GNU Coding Standards were written by Richard Stallman and other GNU Project volunteers. Their purpose is to make the GNU system clean, consistent, and easy to install. This document can also be read as a guide to writing portable, robust and reliable programs. It focuses on programs written in C, but many of the rules and principles are useful even if you write in another programming language. The rules often state reasons for writing ...

 

The unsung heroes of scientific software

  
Nature, Vol. 529, No. 7584. (4 January 2016), pp. 115-116, https://doi.org/10.1038/529115a

Abstract

Creators of computer programs that underpin experiments don’t always get their due — so the website Depsy is trying to track the impact of research code. [Excerpt] For researchers who code, academic norms for tracking the value of their work seem grossly unfair. They can spend hours contributing to software that underpins research, but if that work does not result in the authorship of a research paper and accompanying citations, there is little way to measure its impact. [\n] [...] Depsy’s creators hope that their ...

 

Code complexity - Part II

  
GotW #21 (1997)

Abstract

The challenge: Take the three-line function from GotW #20 and make it strongly exception-safe. This exercise illustrates some important lessons about exception safety. [Excerpt: Exception Safety and Multiple Side Effects] In this case, it turned out to be possible in Attempt #3 to perform both side effects with essentially commit-or-rollback semantics (except for the stream issues). The reason it was possible is that there turned out to be a technique by which the two effects could be performed atomically... that is, all of ...

 

A computational framework for generalized moving windows and its application to landscape pattern analysis

  
International Journal of Applied Earth Observation and Geoinformation, Vol. 44 (February 2016), pp. 205-216, https://doi.org/10.1016/j.jag.2015.09.010

Abstract

[Highlights] [::] Moving window analysis is a prominent means of analyzing the spatial variability of landscape patterns at multiple scales. [::] A new computational framework is presented that overcomes technical and computational barriers to the use and implementation of moving windows based landscape pattern analysis of raster maps. [::] For a small window of 41 × 41 pixels, computation time was reduced by a factor 600 compared to the most commonly used software. These gains will be greater for larger windows. [::] The framework facilitates ...
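
The framework in the paper is an optimized implementation of its own; as a hedged baseline illustration of what a generalized moving window computes (and of the naive per-pixel re-evaluation cost that such frameworks avoid), SciPy's generic_filter can apply an arbitrary statistic over a square window. The raster, window size and statistic below are assumptions for the example.

    import numpy as np
    from scipy.ndimage import generic_filter

    rng = np.random.default_rng(0)
    raster = rng.integers(0, 5, size=(100, 100)).astype(float)  # class labels

    def shannon_diversity(window):
        """Shannon diversity of the class labels inside one moving window."""
        _, counts = np.unique(window, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()

    # Naive generalized moving window: the statistic is recomputed from
    # scratch for every pixel, which is exactly what efficient frameworks avoid.
    diversity_map = generic_filter(raster, shannon_diversity, size=41)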

 

The statistical crisis in science

  
American Scientist, Vol. 102, No. 6. (2014), 460, https://doi.org/10.1511/2014.111.460

Abstract

Data-dependent analysis—a “garden of forking paths”— explains why many statistically significant comparisons don't hold up. [Excerpt] There is a growing realization that reported “statistically significant” claims in scientific publications are routinely mistaken. Researchers typically express the confidence in their data in terms of p-value: the probability that a perceived result is actually the result of random variation. The value of p (for “probability”) is a way of measuring the extent to which a data set provides evidence against a so-called null hypothesis. ...
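
As a hedged numerical illustration of the point (a simulation written for this note, not an analysis from the article), trying several arbitrary, data-dependent comparisons on pure noise routinely yields at least one nominally "significant" p-value:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_datasets, n_comparisons, alpha = 1000, 10, 0.05

    false_positive_datasets = 0
    for _ in range(n_datasets):
        # Pure noise: both groups are drawn from the same distribution.
        a = rng.normal(size=(n_comparisons, 30))
        b = rng.normal(size=(n_comparisons, 30))
        p = stats.ttest_ind(a, b, axis=1).pvalue  # ten "forking path" comparisons
        false_positive_datasets += (p < alpha).any()

    # With ten independent looks, roughly 1 - 0.95**10 ~ 40% of pure-noise
    # datasets produce at least one p < 0.05.
    print(false_positive_datasets / n_datasets)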

 

Open Source software and GIS

  
In Open Source GIS: A GRASS GIS Approach, Vol. 773 (2008), pp. 1-6, https://doi.org/10.1007/978-1-4757-3578-9_1

Abstract

Over the past decade, Geographical Information Systems (GIS) have evolved from a highly specialized niche to a technology that affects nearly every aspect of our lives, from finding driving directions to managing natural disasters. While just a few years ago the use of GIS was restricted to a group of researchers, planners and government workers, now almost everybody can create customized maps or overlay GIS data. On the other hand, many complex problems related to urban and regional planning, environmental protection, or business management, require ...

 

Interactive comment (reply to Anonymous Referee 3) on Modelling soil erosion at European scale: towards harmonization and reproducibility - by Bosco et al

  
Natural Hazards and Earth System Sciences Discussions, Vol. 2 (2014), pp. C1786-C1795, https://doi.org/10.6084/m9.figshare.1379902

Abstract

Throughout the public discussion of our article Bosco et al. (Nat. Hazards Earth Syst. Sci. Discuss., 2, 2639-2680, 2014), the Anonymous Referee 3 provided (Nat. Hazards Earth Syst. Sci. Discuss., 2, C1592-C1594, 2014) a variety of insights. This work presents our replies to them. ...

 

Interactive comment (reply to Dino Torri) on Modelling soil erosion at European scale: towards harmonization and reproducibility - by Bosco et al

  
Natural Hazards and Earth System Sciences Discussions, Vol. 2 (2014), pp. C671-C688, https://doi.org/10.6084/m9.figshare.1379901

Abstract

During the public discussion of our article Bosco et al. (Nat. Hazards Earth Syst. Sci. Discuss., 2, 2639-2680, 2014), D. Torri provided numerous insights (Nat. Hazards Earth Syst. Sci. Discuss. 2, C528-C532, 2014). This work offers our replies to them. ...

 

What is the question?

  
Science, Vol. 347, No. 6228. (20 March 2015), pp. 1314-1315, https://doi.org/10.1126/science.aaa6146

Abstract

Over the past 2 years, increased focus on statistical analysis brought on by the era of big data has pushed the issue of reproducibility out of the pages of academic journals and into the popular consciousness (1). Just weeks ago, a paper about the relationship between tissue-specific cancer incidence and stem cell divisions (2) was widely misreported because of misunderstandings about the primary statistical argument in the paper (3). Public pressure has contributed to the massive recent adoption of reproducible research ...



 

Predictive modeling and analytics

  
(August 2014)

Abstract

This book is about predictive modeling. Yet, each chapter could easily be handled by an entire volume of its own. So one might think of this as a survey of predictive models, both statistical and machine learning. We define a predictive model as a statistical model or machine learning model used to predict future behavior based on past behavior. In order to use this book, the reader should have a basic understanding of statistics (statistical inference, models, tests, etc.)—this is an ...

 

A web accessible environmental model base: a tool for natural resources management

  
In MODSIM 1997 International Congress on Modelling and Simulation Proceedings (1997), pp. 657-663

Abstract

An environmental model base, accessible via the World-Wide Web, is presented. The information stored in this model base can be retrieved using various methods such as structured queries, full text search, exact and approximate keyword search. These search tools help the environmental modeller to find a suitable model to solve their modelling problem. The modeller is then presented with a set of information, regarding the models matching the query, including source and executable code, user manuals, and references to supplemental documentation. ...

 

On the role of scientific thought

  
In Selected Writings on Computing: A personal Perspective (1982), pp. 60-66, https://doi.org/10.1007/978-1-4612-5695-3_12

Abstract

Essentially, this essay contains nothing new; on the contrary, its subject matter is so old that sometimes it seems forgotten. It is written in an effort to undo some of the more common misunderstandings that I encounter (nearly daily) in my professional world of computing scientists, programmers, computer users and computer designers, and even colleagues engaged in educational politics. The decision to write this essay now was taken because I suddenly realized that my confrontation with this same pattern of misunderstanding ...

 

Modelling soil erosion at European scale: towards harmonization and reproducibility

  
Natural Hazards and Earth System Sciences, Vol. 15, No. 2. (4 February 2015), pp. 225-245, https://doi.org/10.5194/nhess-15-225-2015

Abstract

Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite the efforts, the ...

 

GRASS GIS manual: r.watershed

  
In GRASS Development Team: GRASS GIS 7.1svn Reference Manual (2014)

Abstract

r.watershed - Calculates hydrological parameters and RUSLE factors. ...

 

Rampant software errors undermine scientific results

  
F1000Research, Vol. 3 (11 December 2014), 303, https://doi.org/10.12688/f1000research.5930.1

Abstract

Errors in scientific results due to software bugs are not limited to a few high-profile cases that lead to retractions and are widely reported. Here I estimate that in fact most scientific results are probably wrong if data have passed through a computer, and that these errors may remain largely undetected. The opportunities for both subtle and profound errors in software and data management are boundless, yet they remain surprisingly underappreciated. ...

 

Minimal make - A minimal tutorial on make

  
(2014)

Abstract

[Excerpt] I would argue that the most important tool for reproducible research is not Sweave or knitr but GNU make. Consider, for example, all of the files associated with a manuscript. In the simplest case, I would have an R script for each figure plus a LaTeX file for the main text. And then a BibTeX file for the references. Compiling the final PDF is a bit of work: [::] Run each R script through R to produce the relevant figure. [::] Run latex and ...

 

Mathematical models for emerging disease

  
Science, Vol. 346, No. 6215. (12 December 2014), pp. 1294-1295, https://doi.org/10.1126/science.aaa3441

Abstract

It has been nearly 25 years since the publication of Infectious Diseases of Humans (1), the “vade mecum” of mathematical modeling of infectious disease; the proliferation of epidemiological careers that it initiated is now in its fourth generation. Epidemiological models have proved very powerful in shaping health policy discussions. The complex interactions that lead to pathogen (and pest) outbreaks make it necessary to use models to provide quantitative insights into the counterintuitive outcomes that are the rule of most nonlinear systems. ...

 

Facilitating reproducibility in scientific computing: principles and practice

  
In Reproducibility: Principles, Problems, Practices (2015)

Abstract

The foundation of scientific research is theory and experiment, carefully documented in open publications, in part so that other researchers can reproduce and validate the claimed findings. Unfortunately, the field of scientific and mathematical computing has evolved in ways that often do not meet these high standards. In published computational work, frequently there is no record of the workflow process that produced the published computational results, and in some cases, even the code is missing or has been changed significantly ...

 

Knowledge Freedom in computational science: a two stage peer-review process with KF eligibility access review

  

Abstract

Wide-scale transdisciplinary modelling (WSTM) increasingly demands a focus on reproducible research and scientific knowledge freedom. Data and software freedom are essential aspects of knowledge freedom in computational science. Therefore, ideally published articles should also provide the readers with the data and source code of the described mathematical modelling. To maximise transparency, replicability, reproducibility and reusability, published data should be made available as open data while source code should be made available as free software. Here, a two-stage peer review process ...


This page of the database may be cited as:
Integrated Natural Resources Modelling and Management - Meta-information Database. http://mfkp.org/INRMM/tag/computational-science



Meta-information Database (INRMM-MiD).
This database integrates a dedicated meta-information database in CiteULike (the CiteULike INRMM Group) with the meta-information available in Google Scholar, CrossRef and DataCite. The Altmetric database with Article-Level Metrics is also harvested. Part of the provided semantic (machine-readable) content is also made human-readable thanks to the DCMI Dublin Core viewer. Digital preservation of the meta-information indexed within the INRMM-MiD publication records is implemented thanks to the Internet Archive.
Full-text and abstracts of the publications indexed by the INRMM meta-information database are copyrighted by the respective publishers/authors. They are subject to all applicable copyright protection. The conditions of use of each indexed publication are defined by its copyright owner. Please be aware that the indexed meta-information relies entirely on voluntary work and constitutes an incomplete and non-homogeneous work-in-progress.
INRMM-MiD was experimentally established by the Maieutike Research Initiative in 2008 and then improved with the help of several volunteers (with a major technical upgrade in 2011). The current integrated interface has been operational since 2014.