
Selection: with tag computational-science [197 articles] 

 

Microsoft’s purchase of GitHub leaves some scientists uneasy

  
Nature, Vol. 558, No. 7710. (15 June 2018), pp. 353-353, https://doi.org/10.1038/d41586-018-05426-0

Abstract

They fear the online platform will become less open, but other researchers say the buyout could make GitHub more useful. [Excerpt] GitHub — a website that has become popular with scientists collaborating on research data and software — is to be acquired by Microsoft [...] [::Decentralized systems] Daniel Himmelstein, a data scientist at the University of Pennsylvania in Philadelphia, says that GitHub is problematic for researchers, but that this has nothing to do with the Microsoft acquisition. GitHub hosts repositories of code or data ...

 

Operating procedure for the production of the global human settlement layer from Landsat data of the epochs 1975, 1990, 2000, and 2014

  
Vol. 27741 EN (2016), https://doi.org/10.2788/253582

Abstract

A new global information baseline describing the spatial evolution of human settlements over the past 40 years is presented. It is the most spatially detailed global dataset dedicated to human settlements available today, and it offers the greatest temporal depth. The core processing methodology relies on a new supervised classification paradigm based on symbolic machine learning. The information is extracted from Landsat image records organized in four collections corresponding to the epochs 1975, 1990, 2000, and 2014. The experiment reported ...

 

Geostatistical tools to map the interaction between development aid and indices of need

  
No. 49. (2018)

Abstract

In order to meet and assess progress towards global sustainable development goals (SDGs), an improved understanding of geographic variation in population wellbeing indicators such as health status, wealth and access to resources is crucial, as the equitable and efficient allocation of international aid relies on knowing where funds are needed most. Unfortunately, in many low-income countries, detailed, reliable and timely information on the spatial distribution and characteristics of intended aid recipients is rarely available. Furthermore, lack of information on the past ...

 

Cross-validation strategies for data with temporal, spatial, hierarchical, or phylogenetic structure

  
Ecography, Vol. 40, No. 8. (1 August 2017), pp. 913-929, https://doi.org/10.1111/ecog.02881

Abstract

Ecological data often show temporal, spatial, hierarchical (random effects), or phylogenetic structure. Modern statistical approaches are increasingly accounting for such dependencies. However, when performing cross-validation, these structures are regularly ignored, resulting in serious underestimation of predictive error. One cause of the poor performance of uncorrected (random) cross-validation, often noted by modellers, is the presence of dependence structures in the data that persist as dependence structures in model residuals, violating the assumption of independence. Even more concerning, because often overlooked, is that structured data also ...
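
The remedy the paper examines is block cross-validation: folds are cut along the dependence structure rather than at random. A minimal sketch, assuming scikit-learn and purely synthetic data with hypothetical spatial block labels:

    # Block cross-validation sketch: leave whole spatial blocks out so test
    # folds are independent of training folds (hypothetical block labels).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = X[:, 0] + rng.normal(size=200)
    blocks = np.repeat(np.arange(10), 20)   # 10 hypothetical spatial blocks

    scores = cross_val_score(RandomForestRegressor(n_estimators=50),
                             X, y, cv=GroupKFold(n_splits=5), groups=blocks)
    print(scores.mean())  # typically lower than random CV on structured data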

 

Statistical modeling: the two cultures (with comments and a rejoinder by the author)

  
Statistical Science, Vol. 16, No. 3. (August 2001), pp. 199-231, https://doi.org/10.1214/ss/1009213726

Abstract

There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in ...

 

Stacked generalization

  
Neural Networks, Vol. 5, No. 2. (January 1992), pp. 241-259, https://doi.org/10.1016/s0893-6080(05)80023-1

Abstract

This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When ...
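
As a rough illustration of the scheme, assuming scikit-learn (whose StackingClassifier implements out-of-fold stacking in the spirit of this paper): the level-0 learners' cross-validated guesses become the inputs of a level-1 learner.

    # Stacked generalization sketch: two level-0 learners, one level-1 learner.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                    ("svm", SVC(probability=True))],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5)  # level-1 input = out-of-fold predictions, as in Wolpert's scheme
    print(cross_val_score(stack, X, y, cv=5).mean())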

 

Hierarchical Bayesian modeling

  
In Subjective and Objective Bayesian Statistics: Principles, Models, and Applications, Second Edition (25 November 2002), pp. 336-358, https://doi.org/10.1002/9780470317105.ch14
edited by S. James Press

Abstract

[Excerpt: Introduction] Hierarchical modeling is a widely used approach to building complex models by specifying a series of more simple conditional distributions. It naturally lends itself to Bayesian inference, especially using modern tools for Bayesian computation. In this chapter we first present essential concepts of hierarchical modeling, and then suggest its generality by presenting a series of widely used specific models. [...] [\n] [...] [Summary] In this chapter we have introduced hierarchical modeling as a very general approach to specifying complex models through a ...
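
A minimal worked sketch of the core idea (partial pooling), assuming a normal-normal model with known variances so the posterior is available in closed form; all numbers here are synthetic:

    # Hierarchical shrinkage sketch: group means are pulled toward the common
    # mean in proportion to their uncertainty (small groups shrink most).
    import numpy as np

    rng = np.random.default_rng(1)
    mu, tau, sigma = 0.0, 1.0, 2.0       # hyperprior mean/sd, within-group sd
    n_j = np.array([5, 20, 80])          # unequal group sizes
    theta = rng.normal(mu, tau, size=3)  # true group-level effects
    ybar = np.array([rng.normal(t, sigma / np.sqrt(n))
                     for t, n in zip(theta, n_j)])

    # Conjugate posterior mean: precision-weighted blend of data and prior.
    prec_data, prec_prior = n_j / sigma**2, 1 / tau**2
    post_mean = (prec_data * ybar + prec_prior * mu) / (prec_data + prec_prior)
    print(ybar, post_mean)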

 

Applied regression and multilevel/hierarchical models

  
(2006)

Abstract

Data Analysis Using Regression and Multilevel/Hierarchical Models is a comprehensive manual for the applied researcher who wants to perform data analysis using linear and nonlinear regression and multilevel models. The book introduces and demonstrates a wide variety of models, at the same time instructing the reader in how to fit these models using freely available software packages. The book illustrates the concepts by working through scores of real data examples that have arisen in the authors’ own applied research, with programming code provided for each one. Topics ...

 

Iterative random forests to discover predictive and stable high-order interactions

  
Proceedings of the National Academy of Sciences, Vol. 115, No. 8. (20 February 2018), pp. 1943-1948, https://doi.org/10.1073/pnas.1711236115

Abstract

[Significance] We developed a predictive, stable, and interpretable tool: the iterative random forest algorithm (iRF). iRF discovers high-order interactions among biomolecules with the same order of computational cost as random forests. We demonstrate the efficacy of iRF by finding known and promising interactions among biomolecules, of up to fifth and sixth order, in two data examples in transcriptional regulation and alternative splicing. [Abstract] Genomics has revolutionized biology, enabling the interrogation of whole transcriptomes, genome-wide binding sites for proteins, and many other molecular processes. However, ...
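
The published iRF iteratively reweights feature sampling by importance and then mines decision paths with random intersection trees; scikit-learn exposes neither step directly, so the sketch below only imitates the iterative importance-based screening on synthetic data:

    # Iterative screening sketch (a simplification of iRF's reweighting idea).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=40, n_informative=5,
                               random_state=0)
    keep = np.arange(X.shape[1])
    for it in range(3):
        rf = RandomForestClassifier(n_estimators=200, random_state=it)
        rf.fit(X[:, keep], y)
        imp = rf.feature_importances_
        # keep the top half (at least 5) of features by importance each round
        keep = keep[np.argsort(imp)[::-1][:max(5, len(keep) // 2)]]
    print("stable candidate features:", sorted(keep))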

 

Classification and interaction in random forests

  
Proceedings of the National Academy of Sciences, Vol. 115, No. 8. (20 February 2018), pp. 1690-1692, https://doi.org/10.1073/pnas.1800256115

Abstract

Suppose you are a physician with a patient whose complaint could arise from multiple diseases. To attain a specific diagnosis, you might ask yourself a series of yes/no questions depending on observed features describing the patient, such as clinical test results and reported symptoms. As some questions rule out certain diagnoses early on, each answer determines which question you ask next. With about a dozen features and extensive medical knowledge, you could create a simple flow chart to connect and order ...
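
The flow chart of yes/no questions is exactly a decision tree. A minimal sketch, assuming scikit-learn and its bundled diagnostic-style dataset:

    # Decision tree sketch: each internal node is one yes/no question on a
    # feature threshold, as in the diagnostic flow chart described above.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(data.data, data.target)
    print(export_text(tree, feature_names=data.feature_names.tolist()))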

 

Maxent is not a presence-absence method: a comment on Thibaud et al.

  
Methods in Ecology and Evolution, Vol. 5, No. 11. (November 2014), pp. 1192-1197, https://doi.org/10.1111/2041-210x.12252

Abstract

[Summary] [::1] Thibaud et al. (Methods in Ecology and Evolution 2014) present a framework for simulating species and evaluating the relative effects of factors affecting the predictions from species distribution models (SDMs). They demonstrate their approach by generating presence–absence data sets for different simulated species and analysing them using four modelling methods: three presence–absence methods and Maxent, which is a presence-background modelling tool. One of their results is striking: that their use of Maxent performs well in estimating occupancy probabilities and even ...

 

Inside-outside net: detecting objects in context with skip pooling and recurrent neural networks

  
In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016) (2016), pp. 2874-2883, https://doi.org/10.1109/CVPR.2016.314

Abstract

It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are ...

 

Speed/accuracy trade-offs for modern convolutional object detectors

  
In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) (2017), pp. 7310-7319, https://doi.org/10.1109/CVPR.2017.351

Abstract

The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed/memory/accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We ...

 

Software engineering for computational science: past, present, future

  
Computing in Science & Engineering (2018), pp. 1-1, https://doi.org/10.1109/mcse.2018.108162940

Abstract

While the importance of in silico experiments for the scientific discovery process increases, state-of-the-art software engineering practices are rarely adopted in computational science. To understand the underlying causes and to identify ways of improving the situation, we conduct a literature survey on software engineering practices in computational science. As a result of our survey, we identified 13 recurring key characteristics of scientific software development that can be divided into three groups: characteristics that result (1) from the ...

 

What can machine learning do? Workforce implications

  
Science, Vol. 358, No. 6370. (22 December 2017), pp. 1530-1534, https://doi.org/10.1126/science.aap8062

Abstract

Digital computers have transformed work in almost every sector of the economy over the past several decades (1). We are now at the beginning of an even larger and more rapid transformation due to recent advances in machine learning (ML), which is capable of accelerating the pace of automation itself. However, although it is clear that ML is a “general purpose technology,” like the steam engine and electricity, which spawns a plethora of additional innovations and capabilities (2), there is no ...

 

The drought code component of the Canadian forest fire behavior system

  
Vol. 1316 (1972)

Abstract

Development of the Drought Code component of the Canadian Forest Fire Behavior System is described. The scale of available moisture used in the original Stored Moisture Index developed for coastal British Columbia was transformed to one of cumulative drying and incorporated as a component of the National Index. Drought Code values are related to availability of surface water, and to fire behavior and effects. Procedures are developed for improving estimated starting values, taking into account the carry-over of drought from the ...

 

2017 hurricanes and aerosols simulation

  
In Scientific Visualization Studio (November 2017), 12772

Abstract

[Excerpt] Tracking aerosols over land and water from August 1 to November 1, 2017. Hurricanes and tropical storms are obvious from the large amounts of sea salt particles caught up in their swirling winds. The dust blowing off the Sahara, however, gets caught by water droplets and is rained out of the storm system. Smoke from the massive fires in the Pacific Northwest region of North America is blown across the Atlantic to the UK and Europe. This visualization is a ...

 

To model or not to model, that is no longer the question for ecologists

  
Ecosystems, Vol. 20, No. 2. (2017), pp. 222-228, https://doi.org/10.1007/s10021-016-0068-x

Abstract

Here, I argue that we should abandon the division between “field ecologists” and “modelers,” and embrace modeling and empirical research as two powerful and often complementary approaches in the toolbox of 21st century ecologists, to be deployed alone or in combination depending on the task at hand. As empirical research has the longer tradition in ecology, and modeling is the more recent addition to the methodological arsenal, I provide both practical and theoretical reasons for integrating modeling more deeply into ecosystem ...

 

Science of preparedness

  
Science, Vol. 357, No. 6356. (14 September 2017), pp. 1073-1073, https://doi.org/10.1126/science.aap9025

Abstract

Our hearts go out to those affected by hurricanes Harvey and Irma and by earlier monsoons across South Asia. These events are compelling reminders of the important role that science must play in preparing for disasters. But preparation is challenging, as reflected in the many facets of the “science of preparedness.” Certainly, modeling and forecasting storms are critical, but so are analyses of how agencies, communities, and individuals interact to understand and implement preparedness initiatives. [Excerpt] [...] Long-range estimates of the number ...

 

A general algorithm for computing distance transforms in linear time

  
In Mathematical Morphology and its Applications to Image and Signal Processing, Vol. 18 (2000), pp. 331-340, https://doi.org/10.1007/0-306-47025-x_36

Abstract

A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the computation per row (column) is independent of the computation of other rows (columns), the algorithm can be easily parallelized on shared memory computers. The algorithm can be used for the computation of the ...
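
The paper's exact linear-time algorithm is more involved; the sketch below is the classic two-scan chamfer approximation, which shares the forward/backward scan structure the abstract describes but is not the paper's exact transform:

    # Two-pass chamfer distance transform sketch (approximate, weights 1 and
    # sqrt(2)): a forward scan then a backward scan over the image.
    import numpy as np

    def chamfer_dt(binary):
        """Approximate distance to the nearest foreground (True) pixel."""
        d = np.where(binary, 0.0, 1e6)
        h, w = d.shape
        for i in range(h):               # forward: top-left to bottom-right
            for j in range(w):
                if i > 0:
                    d[i, j] = min(d[i, j], d[i-1, j] + 1)
                    if j > 0:
                        d[i, j] = min(d[i, j], d[i-1, j-1] + 1.4142)
                    if j < w - 1:
                        d[i, j] = min(d[i, j], d[i-1, j+1] + 1.4142)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i, j-1] + 1)
        for i in range(h - 1, -1, -1):   # backward: bottom-right to top-left
            for j in range(w - 1, -1, -1):
                if i < h - 1:
                    d[i, j] = min(d[i, j], d[i+1, j] + 1)
                    if j > 0:
                        d[i, j] = min(d[i, j], d[i+1, j-1] + 1.4142)
                    if j < w - 1:
                        d[i, j] = min(d[i, j], d[i+1, j+1] + 1.4142)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i, j+1] + 1)
        return d

    img = np.zeros((5, 7), dtype=bool)
    img[2, 3] = True
    print(np.round(chamfer_dt(img), 1))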

 

Ten simple rules for making research software more robust

  
PLOS Computational Biology, Vol. 13, No. 4. (13 April 2017), e1005412, https://doi.org/10.1371/journal.pcbi.1005412

Abstract

[Abstract] Software produced for research, published and otherwise, suffers from a number of common problems that make it difficult or impossible to run outside the original institution or even off the primary developer’s computer. We present ten simple rules to make such software robust enough to be run by anyone, anywhere, and thereby delight your users and collaborators. [Author summary] Many researchers have found out the hard way that there’s a world of difference between “works for me on my machine” and “works for ...

 

Multi-dimensional weighted median: the module "wmedian" of the Mastrave modelling library

  
In Semantic Array Programming with Mastrave - Introduction to Semantic Computational Modelling (2012)

Abstract

Weighted median (WM) filtering is a well known technique for dealing with noisy images and a variety of WM-based algorithms have been proposed as effective ways for reducing uncertainties or reconstructing degraded signals by means of available information with heterogeneous reliability. Here a generalized module for applying weighted median filtering to multi-dimensional arrays of information with associated multi-dimensional arrays of corresponding weights is presented. Weights may be associated to single elements or to groups of elements along given dimensions of the ...
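
For the one-dimensional base case, a weighted median is the smallest value at which the cumulative weight reaches half the total; the Mastrave module generalizes this to multi-dimensional arrays and weights along given dimensions. A minimal sketch:

    # Weighted median sketch: sort values, accumulate weights, stop at half.
    import numpy as np

    def weighted_median(values, weights):
        order = np.argsort(values)
        v, w = np.asarray(values)[order], np.asarray(weights)[order]
        cum = np.cumsum(w)
        return v[np.searchsorted(cum, 0.5 * cum[-1])]

    print(weighted_median([1, 2, 3, 10], [1, 1, 1, 5]))  # -> 10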

 

Running an open experiment: transparency and reproducibility in soil and ecosystem science

  
Environmental Research Letters, Vol. 11, No. 8. (01 August 2016), 084004, https://doi.org/10.1088/1748-9326/11/8/084004

Abstract

Researchers in soil and ecosystem science, and almost every other field, are being pushed—by funders, journals, governments, and their peers—to increase transparency and reproducibility of their work. A key part of this effort is a move towards open data as a way to fight post-publication data loss, improve data and code quality, enable powerful meta- and cross-disciplinary analyses, and increase trust in, and the efficiency of, publicly-funded research. Many scientists however lack experience in, and may be unsure of the benefits ...

 

The world's simplest impossible problem

  
MathWorks Technical Articles and Newsletters, Vol. 1 (1990), 92036v00

Abstract

If the average of two numbers is three, what are the numbers? The solution to this problem is not unique, and the problem is ill-defined, but that does not mean that MATLAB® cannot solve it. [\n] In this article from 1990, Cleve Moler explores this simple yet impossible problem and others like it using MATLAB to find answers with the fewest nonzero components and other “nice” solutions. ...
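
The example reproduces easily outside MATLAB. A minimal NumPy sketch: "the average of two numbers is three" is the underdetermined system (x1 + x2)/2 = 3, whose minimum-norm solution is [3, 3] and whose sparsest "nice" solution has a single nonzero component:

    # Underdetermined system sketch: one equation, two unknowns.
    import numpy as np

    A, b = np.array([[0.5, 0.5]]), np.array([3.0])
    x_min_norm = np.linalg.lstsq(A, b, rcond=None)[0]  # pinv solution: [3, 3]
    x_sparse = np.array([6.0, 0.0])                    # fewest nonzeros, by hand
    print(x_min_norm, A @ x_min_norm, A @ x_sparse)    # both satisfy A x = 3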

 

Rainbow color map critiques: an overview and annotated bibliography

  
MathWorks Technical Articles and Newsletters, Vol. 25 (2014), 92238v00

Abstract

A rainbow color map is based on the order of colors in the spectrum of visible light—the same colors that appear in a rainbow. Rainbow color maps commonly appear in data visualizations in many different scientific and engineering communities, and technical computing software often provides a rainbow color map as the default choice. Although rainbow color maps remain popular, they have a number of weaknesses when used for scientific visualization, and have been widely criticized. [\n] This paper summarizes the criticisms of ...
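
A minimal Matplotlib sketch of the criticism, contrasting the archetypal rainbow map ('jet') with a perceptually uniform default ('viridis') on a smooth field:

    # Colormap comparison sketch: the rainbow map introduces artificial bands
    # in a perfectly smooth bump; the uniform map does not.
    import matplotlib.pyplot as plt
    import numpy as np

    x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
    z = np.exp(-x**2 - y**2)
    fig, axes = plt.subplots(1, 2, figsize=(8, 3))
    for ax, cmap in zip(axes, ["jet", "viridis"]):
        im = ax.imshow(z, cmap=cmap)
        ax.set_title(cmap)
        fig.colorbar(im, ax=ax)
    plt.show()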

 

Enhancing reproducibility for computational methods

  
Science, Vol. 354, No. 6317. (09 December 2016), pp. 1240-1241, https://doi.org/10.1126/science.aah6168

Abstract

Over the past two decades, computational methods have radically changed the ability of researchers from all areas of scholarship to process and analyze data and to simulate complex systems. But with these advances come challenges that are contributing to broader concerns over irreproducibility in the scholarly literature, among them the lack of transparency in disclosure of computational methods. Current reporting methods are often uneven, incomplete, and still evolving. We present a novel set of Reproducibility Enhancement Principles (REP) targeting disclosure challenges ...

 

It's impossible to conduct research without software, say 7 out of 10 UK researchers

  
Software and research, Vol. 5 (2014), 1536

Abstract

No one knows how much software is used in research. Look around any lab and you’ll see software – both standard and bespoke – being used by all disciplines and seniorities of researchers. Software is clearly fundamental to research, but we can’t prove this without evidence. And this lack of evidence is the reason why we ran a survey of researchers at 15 Russell Group universities to find out about their software use and background. [Excerpt: Headline figures] [::] 92% of academics use ...

 

LINPACK: users' guide

  
(1979)

Abstract

[Excerpt: Table of Contents] "R.T.F.M." - Anonymous [\n] [...] [Overview] LINPACK is a collection of Fortran subroutines which analyze and solve various systems of simultaneous linear algebraic equations. The subroutines are designed to be completely machine independent, fully portable, and to run at near optimum efficiency in most operating environments. [\n] Many of the subroutines deal with square coefficient matrices, where there are as many equations as unknowns. Some of the subroutines process rectangular coefficient matrices, where the system may be over- or underdetermined. Such systems ...

 

Trusting others to ‘do the math’

  
Interdisciplinary Science Reviews, Vol. 40, No. 4. (2 October 2015), pp. 376-392, https://doi.org/10.1080/03080188.2016.1165454

Abstract

Researchers effectively trust the work of others anytime they use software tools or custom software. In this article I explore this notion of trusting others, using Digital Humanities as a focus, and drawing on my own experience. Software is inherently flawed and limited, so its use in scholarship demands better practices and terminology to review research software and describe development processes. It is also important to make research software engineers and their work more visible, both for the purposes of ...

 

Software and scholarship

  
Interdisciplinary Science Reviews, Vol. 40, No. 4. (2 October 2015), pp. 342-348, https://doi.org/10.1080/03080188.2016.1165456

Abstract

[Excerpt] The thematic focus of this issue is to examine what happens where software and scholarship meet, with particular reference to digital work in the humanities. Despite some seven decades of existence, Digital Humanities continues to struggle with the implications, in the academic ecosystem, of its position between engineering and art. [...] [\n] [...] [\n] I will end with my own reflection on this topic of evaluation. Peer review of scholarly works of software continues to pose a particularly vexed challenge ...

 

Ten steps to programming mastery

  
(2003)

Abstract

[Excerpt] Here are ten ways you can improve your coding. The overriding principle of improving your skill at coding, as with almost any endeavor, is to open your mind and then fill it with better knowledge. Improvement necessarily implies change, yet it is human nature to fear and resist change. But overcoming that fear and embracing change as a way of life will enable you to reach new levels of achievement. [...] [::Big Rule 1: Break your own habits] When you began coding, you were much less experienced ...

 

ePiX tutorial and reference manual

  
(2008)

Abstract

[Excerpt: Introduction] ePiX, a collection of batch utilities, creates mathematically accurate figures, plots, and animations containing LaTeX typography. The input syntax is easy to learn, and the user interface resembles that of LaTeX itself: You prepare a scene description in a text editor, then “compile” the input file into a picture. LaTeX- and web-compatible output types include a LaTeX picture-like environment written with PSTricks, tikz, or eepic macros; vector images (eps, ps, and pdf); and bitmapped images and movies (png, mng, and gif). [\n] ePiX’s strengths include: [::] Quality of ...

 

The hard road to reproducibility

  
Science, Vol. 354, No. 6308. (07 October 2016), pp. 142-142, https://doi.org/10.1126/science.354.6308.142

Abstract

[Excerpt] [...] A couple years ago, we published a paper applying computational fluid dynamics to the aerodynamics of flying snakes. More recently, I asked a new student to replicate the findings of that paper, both as a training opportunity and to help us choose which code to use in future research. Replicating a published study is always difficult—there are just so many conditions that need to be matched and details that can't be overlooked—but I thought this case was relatively straightforward. ...

 

Academic authorship: who, why and in what order?

  
Health Renaissance, Vol. 11, No. 2. (19 June 2013), https://doi.org/10.3126/hren.v11i2.8214

Abstract

We are frequently asked by our colleagues and students for advice on authorship for scientific articles. This short paper outlines some of the issues that we have experienced and the advice we usually provide. This editorial follows on from our work on submitting a paper [1] and also on writing an academic paper for publication [2]. We should like to start by noting that, in our view, there exist two separate, but related issues: (a) authorship and (b) order of authors. The issue of authorship centres on the notion of who can be ...

 

Linking ecological information and radiative transfer models to estimate fuel moisture content in the Mediterranean region of Spain: solving the ill-posed inverse problem

  
Remote Sensing of Environment, Vol. 113, No. 11. (16 November 2009), pp. 2403-2411, https://doi.org/10.1016/j.rse.2009.07.001

Abstract

Live fuel moisture content (FMC) is a key factor required to evaluate fire risk, and its operative and accurate estimation is essential for allocating pre-fire resources as a part of fire prevention. This paper presents an operative and accurate procedure to estimate FMC through MODIS (Moderate Resolution Imaging Spectroradiometer) data and simulation models. The new aspects of the method are its consideration of several ecological criteria to parameterize the models and consistently avoid simulating unrealistic spectra which might produce indetermination (ill-posed) ...

 

Filesystem Hierarchy Standard

  
(2015)

Abstract

This standard consists of a set of requirements and guidelines for file and directory placement under UNIX-like operating systems. The guidelines are intended to support interoperability of applications, system administration tools, development tools, and scripts as well as greater uniformity of documentation for these systems. ...

 

Unfalsifiability of security claims

  
Proceedings of the National Academy of Sciences, Vol. 113, No. 23. (07 June 2016), pp. 6415-6420, https://doi.org/10.1073/pnas.1517797113

Abstract

[Significance] Much in computer security involves recommending defensive measures: telling people how they should choose and maintain passwords, manage their computers, and so on. We show that claims that any measure is necessary for security are empirically unfalsifiable. That is, no possible observation contradicts a claim of the form “if you don’t do X you are not secure.” This means that self-correction operates only in one direction. If we are wrong about a measure being sufficient, a successful attack will demonstrate that ...

 

Gotchas in writing Dockerfile

  
(2014)

Abstract

[Excerpt: Why do we need to use Dockerfile?] A Dockerfile is not yet another shell. The Dockerfile has a special mission: automation of Docker image creation. [\n] Once you write build instructions into a Dockerfile, you can build the same image just with the docker build command. [\n] A Dockerfile is also useful for telling somebody else what job the container does. Your teammates can tell what the container is supposed to do just by reading the Dockerfile. They don’t need to log in to the ...

 

An introduction to Docker for reproducible research, with examples from the R environment

  
ACM SIGOPS Operating Systems Review, Vol. 49, No. 1. (2 Oct 2014), pp. 71-79, https://doi.org/10.1145/2723872.2723882

Abstract

As computational work becomes more and more integral to many aspects of scientific research, computational reproducibility has become an issue of increasing importance to computer systems researchers and domain scientists alike. Though computational reproducibility seems more straightforward than replicating physical experiments, the complex and rapidly changing nature of computer environments makes being able to reproduce and extend such work a serious challenge. In this paper, I explore common reasons that code developed for one research project cannot be successfully executed or extended by subsequent researchers. I review current ...

 

Using Docker to support reproducible research

  

Abstract

Reproducible research is a growing movement among scientists, but the tools for creating sustainable software to support the computational side of research are still in their infancy and are typically only being used by scientists with expertise in computer programming and system administration. Docker is a new platform developed for the DevOps community that enables the easy creation and management of consistent computational environments. This article describes how we have applied it to computational science and suggests that it could ...

 

Modelling as a discipline

  
International Journal of General Systems, Vol. 30, No. 3. (1 January 2001), pp. 261-282, https://doi.org/10.1080/03081070108960709

Abstract

Modelling is an essential and inseparable part of all scientific, and indeed all intellectual, activity. How then can we treat it as a separate discipline? The answer is that the professional modeller brings special skills and techniques to bear in order to produce results that are insightful, reliable, and useful. Many of these techniques can be taught formally, such as sophisticated statistical methods, computer simulation, systems identification, and sensitivity analysis. These are valuable tools, but they are not as important as ...

 

Software search is not a science, even among scientists

  
(8 May 2016)

Abstract

When they seek software for a task, how do people go about finding it? Past research found that searching the Web, asking colleagues, and reading papers have been the predominant approaches — but is it still true today, given the popularity of Facebook, Stack Overflow, GitHub, and similar sites? In addition, when users do look for software, what criteria do they use? And finally, if resources such as improved software catalogs were to be developed, what kind of information would people want in them? These questions motivated our cross-sectional survey ...

 

A (partial) introduction to software engineering practices and methods

  
(2010)

Abstract

[Excerpt: Introduction] Software engineering is concerned with all aspects of software production from the early stages of system specification through to maintaining the system after it has gone into use. [...] [\n] [...] As a discipline, software engineering has progressed very far in a very short period of time, particularly when compared to classical engineering fields (like civil or electrical engineering). In the early days of computing, not much more than 50 years ago, computerized systems were quite small. Most of the programming was done by scientists trying to ...

 

EucaTool®, a cloud computing application for estimating the growth and production of Eucalyptus globulus Labill. plantations in Galicia (NW Spain)

  
Forest Systems, Vol. 24, No. 3. (03 December 2015), eRC06, https://doi.org/10.5424/fs/2015243-07865

Abstract

[Aim of study] To present the software utilities and explain how to use EucaTool®, a free cloud computing application developed to estimate the growth and production of seedling and clonal blue gum (Eucalyptus globulus Labill.) plantations in Galicia (NW Spain). [Area of study] Galicia (NW Spain). [Material and methods] EucaTool® implements a dynamic growth and production model that is valid for clonal and non-clonal blue gum plantations in the region. The model integrates transition functions for dominant height (site index curves), number of ...

 

Tales of future weather

  
Nature Climate Change, Vol. 5, No. 2. (28 January 2015), pp. 107-113, https://doi.org/10.1038/nclimate2450

Abstract

Society is vulnerable to extreme weather events and, by extension, to human impacts on future events. As climate changes, weather patterns will change. The search is on for more effective methodologies to aid decision-makers both in mitigation to avoid climate change and in adaptation to changes. The traditional approach uses ensembles of climate model simulations, statistical bias correction, downscaling to the spatial and temporal scales relevant to decision-makers, and then translation into quantities of interest. The veracity of this approach cannot ...

 

Software Dependencies, Work Dependencies, and Their Impact on Failures

  
IEEE Transactions on Software Engineering, Vol. 35, No. 6. (November 2009), pp. 864-878, https://doi.org/10.1109/tse.2009.42

Abstract

Prior research has shown that customer-reported software faults are often the result of violated dependencies that are not recognized by developers implementing software. Many types of dependencies and corresponding measures have been proposed to help address this problem. The objective of this research is to compare the relative performance of several of these dependency measures as they relate to customer-reported defects. Our analysis is based on data collected from two projects from two independent companies. Combined, our data set encompasses eight ...

 

Realization of a scalable Shor algorithm

  
Science, Vol. 351, No. 6277. (4 March 2016), pp. 1068-1070, https://doi.org/10.1126/science.aad9480

Abstract

[Reducing quantum overhead] A quantum computer is expected to outperform its classical counterpart in certain tasks. One such task is the factorization of large integers, the technology that underpins the security of bank cards and online privacy. Using a small-scale quantum computer comprising five trapped calcium ions, Monz et al. implement a scalable version of Shor's factorization algorithm. With the function of ions being recycled and the architecture scalable, the process is more efficient than previous implementations. The approach thus provides the ...
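
As a purely classical illustration of the number theory the algorithm exploits (factoring N from the period r of a^x mod N; the quantum processor's only job is finding r efficiently), here the order is found by brute force for N = 15, a = 7:

    # Classical sketch of the order-finding reduction behind Shor's algorithm.
    from math import gcd

    N, a = 15, 7
    r = 1
    while pow(a, r, N) != 1:   # order of a modulo N (the "quantum" step)
        r += 1
    assert r % 2 == 0
    f1 = gcd(pow(a, r // 2) - 1, N)
    f2 = gcd(pow(a, r // 2) + 1, N)
    print(r, f1, f2)           # r = 4, factors 3 and 5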

 

License compatibility and relicensing

  
In Licenses (2016)

Abstract

If you want to combine two free programs into one, or merge code from one into the other, this raises the question of whether their licenses allow combining them. [\n] There is no problem merging programs that have the same license, if it is a reasonably behaved license, as nearly all free licenses are.(*) [\n] What then when the licenses are different? In general we say that several licenses are compatible if there is a way to merge code under those various licenses ...

 

Binless strategies for estimation of information from neural data

  
Physical Review E, Vol. 66, No. 5. (11 November 2002), 051903, https://doi.org/10.1103/physreve.66.051903

Abstract

We present an approach to estimate information carried by experimentally observed neural spike trains elicited by known stimuli. This approach makes use of an embedding of the observed spike trains into a set of vector spaces, and entropy estimates based on the nearest-neighbor Euclidean distances within these vector spaces [L. F. Kozachenko and N. N. Leonenko, Probl. Peredachi Inf. 23, 9 (1987)]. Using numerical examples, we show that this approach can be dramatically more efficient than standard bin-based approaches such as ...
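
A minimal sketch of the nearest-neighbour (Kozachenko-Leonenko style) entropy estimator the paper builds on, assuming SciPy; the constants follow the Kraskov et al. (2004) formulation with k = 1 and vary slightly between papers:

    # Binless (nearest-neighbour) differential entropy estimate, in nats.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import digamma, gammaln

    def kl_entropy(x, k=1):
        x = np.atleast_2d(x)
        n, d = x.shape
        r, _ = cKDTree(x).query(x, k=k + 1)  # first hit is the point itself
        eps = 2.0 * r[:, -1]                 # twice the k-NN distance
        log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # unit-ball volume
        return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(eps))

    sample = np.random.default_rng(0).normal(size=(5000, 1))
    # compare with the exact Gaussian entropy 0.5*ln(2*pi*e) ~ 1.419 nats
    print(kl_entropy(sample), 0.5 * np.log(2 * np.pi * np.e))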

 

A tutorial on independent component analysis

  
(11 Apr 2014)

Abstract

Independent component analysis (ICA) has become a standard data analysis technique applied to an array of problems in signal processing and machine learning. This tutorial provides an introduction to ICA based on linear algebra formulating an intuition for ICA from first principles. The goal of this tutorial is to provide a solid foundation on this advanced topic so that one might learn the motivation behind ICA, learn why and when to apply this technique and in the process gain an introduction to this exciting field of active research. [Excerpt: ...
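
A minimal sketch of ICA recovering two independent sources from their linear mixtures, assuming scikit-learn's FastICA (one of several ICA algorithms):

    # ICA sketch: unmix a sine and a square wave from two observed mixtures.
    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 8, 2000)
    s = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]  # independent sources
    x = s @ np.array([[1.0, 0.5], [0.5, 1.0]])        # observed mixtures
    s_hat = FastICA(n_components=2, random_state=0).fit_transform(x)
    print(np.corrcoef(s.T, s_hat.T).round(2))  # recovered up to order/scale/sign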

This page of the database may be cited as:
Integrated Natural Resources Modelling and Management - Meta-information Database. http://mfkp.org/INRMM/tag/computational-science



Meta-information Database (INRMM-MiD).
This database integrates a dedicated meta-information database in CiteULike (the CiteULike INRMM Group) with the meta-information available in Google Scholar, CrossRef and DataCite. The Altmetric database with Article-Level Metrics is also harvested. Part of the provided semantic (machine-readable) content is also made human-readable thanks to the DCMI Dublin Core viewer. Digital preservation of the meta-information indexed within the INRMM-MiD publication records is implemented thanks to the Internet Archive.
Full-text and abstracts of the publications indexed by the INRMM meta-information database are copyrighted by the respective publishers/authors. They are subject to all applicable copyright protection. The conditions of use of each indexed publication are defined by its copyright owner. Please be aware that the indexed meta-information relies entirely on voluntary work and constitutes an incomplete and heterogeneous work-in-progress.
INRMM-MiD was experimentally established by the Maieutike Research Initiative in 2008 and then improved with the help of several volunteers (with a major technical upgrade in 2011). The current integrated interface has been operational since 2014.