From MFKP_wiki


Selection: with tag modelling [at least 200 articles] 

 

A semi-automated approach for the generation of a new land use and land cover product for Germany based on Landsat time-series and Lucas in-situ data

  
Remote Sensing Letters, Vol. 8, No. 3. (02 December 2016), pp. 244-253, https://doi.org/10.1080/2150704x.2016.1249299

Abstract

Information on land cover and land use at high spatial resolutions is essential for advancing earth system science as well as for environmental monitoring to support decision-making and reporting processes. In view of this, we present the first version of the DFD Land Use and Land Cover Product for Germany, DFD-LULC_DE, for the year 2014, generated from 702 Landsat-7 and Landsat-8 scenes at 30 m resolution. The results were derived based on a fully automated preprocessing chain that integrates data acquisition, ...

 

Medium-range, monthly, and seasonal prediction for Europe and the use of forecast information

  
Journal of Climate, Vol. 19, No. 23. (December 2006), pp. 6025-6046, https://doi.org/10.1175/jcli3944.1

Abstract

Operational probabilistic (ensemble) forecasts made at ECMWF during the European summer heat wave of 2003 indicate significant skill on medium (3–10 day) and monthly (10–30 day) time scales. A more general “unified” analysis of many medium-range, monthly, and seasonal forecasts confirms a high degree of probabilistic forecast skill for European temperatures over the first month. The unified analysis also identifies seasonal predictability for Europe, which is not yet realized in seasonal forecasts. Interestingly, the initial atmospheric state appears to be important ...

 

A new European settlement map from optical remotely sensed data

  
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 9, No. 5. (May 2016), pp. 1978-1992, https://doi.org/10.1109/jstars.2015.2485662

Abstract

An application of a general methodology for processing very high-resolution imagery to produce a European Settlement Map (ESM) in support of policy-makers is presented. The process mapped around 10 million km2 of the European continent. The input image data are satellite SPOT-5/6 pan-sharpened multispectral images of 2.5- and 1.5-m spatial resolution, respectively. This is the first time that remote sensing technology demonstrates capability to produce a continental information layer using 2.5-m input images. Moreover, it is the highest resolution continental map ...

 

Operating procedure for the production of the global human settlement layer from Landsat data of the epochs 1975, 1990, 2000, and 2014

  
Vol. 27741 EN (2016), https://doi.org/10.2788/253582

Abstract

A new global information baseline describing the spatial evolution of human settlements over the past 40 years is presented. It is the most spatially detailed global dataset dedicated to human settlements available today, and it offers the greatest temporal depth. The core processing methodology relies on a new supervised classification paradigm based on symbolic machine learning. The information is extracted from Landsat image records organized in four collections corresponding to the epochs 1975, 1990, 2000, and 2014. The experiment reported ...

 

Land cover mapping from remotely sensed and auxiliary data for harmonized official statistics

  
ISPRS International Journal of Geo-Information, Vol. 7, No. 4. (21 April 2018), 157, https://doi.org/10.3390/ijgi7040157

Abstract

This paper describes a general framework alternative to the traditional surveys that are commonly performed to estimate, for statistical purposes, the areal extent of predefined land cover classes across Europe. The framework has been funded by Eurostat and relies on annual land cover mapping and updating from remotely sensed and national GIS-based data followed by area estimation. Map production follows a series of steps, namely data collection, change detection, supervised image classification, rule-based image classification, and map updating/generalization. Land cover area ...

 

Geostatistical tools to map the interaction between development aid and indices of need

  
No. 49. (2018)

Abstract

In order to meet and assess progress towards global sustainable development goals (SDGs), an improved understanding of geographic variation in population wellbeing indicators such as health status, wealth and access to resources is crucial, as the equitable and efficient allocation of international aid relies on knowing where funds are needed most. Unfortunately, in many low-income countries, detailed, reliable and timely information on the spatial distribution and characteristics of intended aid recipients is rarely available. Furthermore, lack of information on the past ...

 

Non-supervised method for early forest fire detection and rapid mapping

  
In Fifth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2017), Vol. 10444 (6 September 2017), 104440R, https://doi.org/10.1117/12.2280714

Abstract

Natural hazards are a challenge for society, and scientific efforts devoted to prevention and damage mitigation have increased considerably. The most important points for minimizing natural-hazard damage are monitoring and prevention. This work focuses particularly on forest fires, a phenomenon that depends on small-scale factors, with fire behavior strongly related to the local weather. Forecasting forest fire spread is a complex task because of the scale of the phenomena, the input data uncertainty and time constraints in ...

 

A global human settlement layer from optical HR/VHR RS data: concept and first results

  
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 6, No. 5. (October 2013), pp. 2102-2131, https://doi.org/10.1109/jstars.2013.2271445

Abstract

A general framework for processing high and very-high resolution imagery in support of a Global Human Settlement Layer (GHSL) is presented together with a discussion on the results of the first operational test of the production workflow. The test involved the mapping of 24.3 million km2 of the Earth surface spread in four continents, corresponding to an estimated population of 1.3 billion people in 2010. The resolution of the input image data ranges from 0.5 to 10 meters, collected by a ...

 

Cross-validation strategies for data with temporal, spatial, hierarchical, or phylogenetic structure

  
Ecography, Vol. 40, No. 8. (1 August 2017), pp. 913-929, https://doi.org/10.1111/ecog.02881

Abstract

Ecological data often show temporal, spatial, hierarchical (random effects), or phylogenetic structure. Modern statistical approaches are increasingly accounting for such dependencies. However, when performing cross-validation, these structures are regularly ignored, resulting in serious underestimation of predictive error. One cause of the poor performance of uncorrected (random) cross-validation, often noted by modellers, is dependence structures in the data that persist as dependence structures in model residuals, violating the assumption of independence. Even more concerning, because often overlooked, is that structured data also ...
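
As a hedged illustration of the point this abstract makes, the Python sketch below contrasts random k-fold cross-validation with leave-block-out cross-validation on synthetic, spatially blocked data; the data, the random-forest model and the block structure are invented for illustration and are not taken from the paper.

  import numpy as np
  from sklearn.ensemble import RandomForestRegressor
  from sklearn.model_selection import KFold, GroupKFold, cross_val_score

  rng = np.random.default_rng(0)
  n_sites, n_per_site = 20, 25
  site = np.repeat(np.arange(n_sites), n_per_site)      # spatial blocks (hypothetical)
  site_effect = rng.normal(0, 2, n_sites)[site]         # structure shared within blocks
  X = rng.normal(size=(site.size, 5))
  y = X[:, 0] + site_effect + rng.normal(0, 1, site.size)

  model = RandomForestRegressor(n_estimators=200, random_state=0)

  # Random k-fold ignores the block structure and tends to be optimistic.
  random_cv = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0))
  # Leave-block-out CV keeps whole sites together, giving a more honest error.
  block_cv = cross_val_score(model, X, y, cv=GroupKFold(5), groups=site)

  print("random CV R^2 :", random_cv.mean().round(2))
  print("blocked CV R^2:", block_cv.mean().round(2))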

 

Statistical modeling: the two cultures (with comments and a rejoinder by the author)

  
Statistical Science, Vol. 16, No. 3. (August 2001), pp. 199-231, https://doi.org/10.1214/ss/1009213726

Abstract

There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. Algorithmic modeling, both in ...

 

WHO child growth standards: length/height-for-age, weight-for-age, weight-for-length, weight-for-height and body mass index-for-age - Methods and development

  
(2006)

Abstract

[:Executive summary: Methods and development] In 1993 the World Health Organization (WHO) undertook a comprehensive review of the uses and interpretation of anthropometric references. The review concluded that the NCHS/WHO growth reference, which had been recommended for international use since the late 1970s, did not adequately represent early childhood growth and that new growth curves were necessary. The World Health Assembly endorsed this recommendation in 1994. In response WHO undertook the Multicentre Growth Reference Study (MGRS) between 1997 and 2003 to generate ...

 

Stacked generalization

  
Neural Networks, Vol. 5, No. 2. (January 1992), pp. 241-259, https://doi.org/10.1016/s0893-6080(05)80023-1

Abstract

This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When ...
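
A minimal sketch of the stacking idea described here, assuming scikit-learn: the out-of-fold guesses of two level-0 generalizers become the inputs of a level-1 (meta) learner, which learns how the base models err on unseen data. The dataset, base learners and meta-learner are arbitrary illustrative choices, not those of the paper.

  import numpy as np
  from sklearn.datasets import make_regression            # synthetic data for illustration
  from sklearn.linear_model import Ridge, LinearRegression
  from sklearn.tree import DecisionTreeRegressor
  from sklearn.model_selection import cross_val_predict

  X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

  level0 = [Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=5, random_state=0)]

  # Level-1 inputs are the out-of-fold guesses of the level-0 generalizers.
  Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in level0])
  meta = LinearRegression().fit(Z, y)

  # For new data: refit the base models on all data, then combine their guesses.
  for m in level0:
      m.fit(X, y)

  def stacked_predict(X_new):
      return meta.predict(np.column_stack([m.predict(X_new) for m in level0]))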

 

Creating spatial interpolation surfaces with DHS data

  
No. 11. (2015)

Abstract

Improved understanding of sub-national geographic variation and inequity in demographic and health indicators is increasingly recognized as central to meeting development goals. Data from DHS surveys are critical to monitoring progress in these indicators but are generally not used to support sub-national evaluation below the first-level administrative unit. This study explored the potential of geostatistical approaches for the production of interpolated surfaces from GPS cluster located survey data, and for the prediction of gridded surfaces at 5×5km resolution. The impact of DHS cluster displacement on these interpolated ...
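
The DHS report relies on model-based geostatistics; as a loose analogue only, the sketch below interpolates synthetic cluster-located observations onto a regular grid with a Gaussian process (Matérn covariance), returning both a predicted surface and an uncertainty surface. The coordinates, response variable and kernel settings are invented, and the scikit-learn tooling is a stand-in for the software actually used.

  import numpy as np
  from sklearn.gaussian_process import GaussianProcessRegressor
  from sklearn.gaussian_process.kernels import Matern, WhiteKernel

  rng = np.random.default_rng(1)
  # Hypothetical cluster coordinates and an observed indicator (e.g., a prevalence).
  coords = rng.uniform(0, 1, size=(150, 2))
  obs = np.sin(3 * coords[:, 0]) + 0.5 * coords[:, 1] + rng.normal(0, 0.1, 150)

  gp = GaussianProcessRegressor(kernel=Matern(nu=1.5) + WhiteKernel(), normalize_y=True)
  gp.fit(coords, obs)

  # Predict on a regular grid (a stand-in for a 5 km x 5 km raster), with uncertainty.
  gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
  grid = np.column_stack([gx.ravel(), gy.ravel()])
  mean, sd = gp.predict(grid, return_std=True)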

 

Assessing comorbidity and correlates of wasting and stunting among children in Somalia using cross-sectional household surveys: 2007 to 2010

  
BMJ Open, Vol. 6, No. 3. (09 March 2016), e009854, https://doi.org/10.1136/bmjopen-2015-009854

Abstract

[Objective] Wasting and stunting may occur together at the individual child level; however, their shared geographic distribution and correlates remain unexplored. Understanding shared and separate correlates may inform interventions. We aimed to assess the spatial codistribution of wasting, stunting and underweight and investigate their shared correlates among children aged 6–59 months in Somalia. [Setting] Cross-sectional nutritional assessment surveys were conducted using structured interviews among communities in Somalia biannually from 2007 to 2010. A two-stage cluster sampling methodology was used to select children aged ...

 

Hierarchical Bayesian modeling

  
In Subjective and Objective Bayesian Statistics: Principles, Models, and Applications, Second Edition (25 November 2002), pp. 336-358, https://doi.org/10.1002/9780470317105.ch14
edited by S. James Press

Abstract

[Excerpt: Introduction] Hierarchical modeling is a widely used approach to building complex models by specifying a series of more simple conditional distributions. It naturally lends itself to Bayesian inference, especially using modern tools for Bayesian computation. In this chapter we first present essential concepts of hierarchical modeling, and then suggest its generality by presenting a series of widely used specific models. [...] [\n] [...] [Summary] In this chapter we have introduced hierarchical modeling as a very general approach to specifying complex models through a ...
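
To make the "series of simple conditional distributions" idea concrete, here is a minimal sketch of a two-level normal model fitted by Gibbs sampling: group means are drawn from a common population level, so the posterior group estimates are partially pooled toward the overall mean. The data are synthetic, the within-group and between-group variances are assumed known, and the flat prior on the overall mean is an illustrative simplification.

  import numpy as np

  rng = np.random.default_rng(2)
  # Hypothetical grouped data: J groups, each with its own mean drawn from a common level.
  J, n_j = 8, 15
  true_theta = rng.normal(0, 1.5, J)
  y = [rng.normal(t, 1.0, n_j) for t in true_theta]
  ybar = np.array([g.mean() for g in y])

  sigma2, tau2 = 1.0, 1.5 ** 2          # assumed known for this sketch
  mu, theta = 0.0, ybar.copy()
  draws = []
  for _ in range(2000):
      # theta_j | mu : precision-weighted compromise between group mean and overall mean
      prec = n_j / sigma2 + 1.0 / tau2
      m = (n_j * ybar / sigma2 + mu / tau2) / prec
      theta = rng.normal(m, np.sqrt(1.0 / prec))
      # mu | theta : with a flat prior, the average of the group-level parameters
      mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
      draws.append(theta.copy())

  post_mean = np.mean(draws[500:], axis=0)   # partially pooled estimates, shrunk toward mu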

 

Mapping under-5 and neonatal mortality in Africa, 2000–15: a baseline analysis for the Sustainable Development Goals

  
The Lancet, Vol. 390, No. 10108. (November 2017), pp. 2171-2182, https://doi.org/10.1016/s0140-6736(17)31758-0
Keywords: africa   development   mapping   modelling   mortality  

Abstract

[Background] During the Millennium Development Goal (MDG) era, many countries in Africa achieved marked reductions in under-5 and neonatal mortality. Yet the pace of progress toward these goals substantially varied at the national level, demonstrating an essential need for tracking even more local trends in child mortality. With the adoption of the Sustainable Development Goals (SDGs) in 2015, which established ambitious targets for improving child survival by 2030, optimal intervention planning and targeting will require understanding of trends and rates of progress ...

 

Fine resolution mapping of population age-structures for health and development applications

  
Journal of The Royal Society Interface, Vol. 12, No. 105. (18 March 2015), 20150073, https://doi.org/10.1098/rsif.2015.0073

Abstract

The age-group composition of populations varies considerably across the world, and obtaining accurate, spatially detailed estimates of numbers of children under 5 years is important in designing vaccination strategies, educational planning or maternal healthcare delivery. Traditionally, such estimates are derived from population censuses, but these can often be unreliable, outdated and of coarse resolution for resource-poor settings. Focusing on Nigeria, we use nationally representative household surveys and their cluster locations to predict the proportion of the under-five population in 1 × ...

 

Applied regression and multilevel/hierarchical models

  
(2006)

Abstract

Data Analysis Using Regression and Multilevel/Hierarchical Models is a comprehensive manual for the applied researcher who wants to perform data analysis using linear and nonlinear regression and multilevel models. The book introduces and demonstrates a wide variety of models, at the same time instructing the reader in how to fit these models using freely available software packages. The book illustrates the concepts by working through scores of real data examples that have arisen in the authors’ own applied research, with programming code provided for each one. Topics ...

 

Spatially-explicit models of global tree density

  
Scientific Data, Vol. 3 (16 August 2016), 160069, https://doi.org/10.1038/sdata.2016.69

Abstract

Remote sensing and geographic analysis of woody vegetation provide means of evaluating the distribution of natural resources, patterns of biodiversity and ecosystem structure, and socio-economic drivers of resource utilization. While these methods bring geographic datasets with global coverage into our day-to-day analytic spheres, many of the studies that rely on these strategies do not capitalize on the extensive collection of existing field data. We present the methods and maps associated with the first spatially-explicit models of global tree density, which relied ...

 

Iterative random forests to discover predictive and stable high-order interactions

  
Proceedings of the National Academy of Sciences, Vol. 115, No. 8. (20 February 2018), pp. 1943-1948, https://doi.org/10.1073/pnas.1711236115

Abstract

[Significance] We developed a predictive, stable, and interpretable tool: the iterative random forest algorithm (iRF). iRF discovers high-order interactions among biomolecules with the same order of computational cost as random forests. We demonstrate the efficacy of iRF by finding known and promising interactions among biomolecules, of up to fifth and sixth order, in two data examples in transcriptional regulation and alternative splicing. [Abstract] Genomics has revolutionized biology, enabling the interrogation of whole transcriptomes, genome-wide binding sites for proteins, and many other molecular processes. However, ...
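
A deliberately crude sketch of the iterative idea only: refit a random forest and concentrate on the features the previous fit found important. The published iRF instead reweights the feature-sampling distribution across iterations and mines decision paths with random intersection trees to recover stable interactions; none of that is reproduced here, and the data, thresholds and number of iterations are arbitrary.

  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.ensemble import RandomForestClassifier

  X, y = make_classification(n_samples=600, n_features=30, n_informative=5, random_state=0)
  features = np.arange(X.shape[1])

  # Crude stand-in for iterative reweighting: refit, then keep the features the
  # previous forest ranked above average importance.
  for it in range(3):
      rf = RandomForestClassifier(n_estimators=300, random_state=it).fit(X[:, features], y)
      imp = rf.feature_importances_
      features = features[imp > imp.mean()]
      print(f"iteration {it}: kept features {features}")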

 

Classification and interaction in random forests

  
Proceedings of the National Academy of Sciences, Vol. 115, No. 8. (20 February 2018), pp. 1690-1692, https://doi.org/10.1073/pnas.1800256115

Abstract

Suppose you are a physician with a patient whose complaint could arise from multiple diseases. To attain a specific diagnosis, you might ask yourself a series of yes/no questions depending on observed features describing the patient, such as clinical test results and reported symptoms. As some questions rule out certain diagnoses early on, each answer determines which question you ask next. With about a dozen features and extensive medical knowledge, you could create a simple flow chart to connect and order ...

 

Maxent is not a presence-absence method: a comment on Thibaud et al

  
Methods in Ecology and Evolution, Vol. 5, No. 11. (November 2014), pp. 1192-1197, https://doi.org/10.1111/2041-210x.12252

Abstract

[Summary] [::1] Thibaud et al. (Methods in Ecology and Evolution 2014) present a framework for simulating species and evaluating the relative effects of factors affecting the predictions from species distribution models (SDMs). They demonstrate their approach by generating presence–absence data sets for different simulated species and analysing them using four modelling methods: three presence–absence methods and Maxent, which is a presence-background modelling tool. One of their results is striking: that their use of Maxent performs well in estimating occupancy probabilities and even ...

 

Inside-outside net: detecting objects in context with skip pooling and recurrent neural networks

  
In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016) (2016), pp. 2874-2883, https://doi.org/10.1109/CVPR.2016.314

Abstract

It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are ...

 

Speed/accuracy trade-offs for modern convolutional object detectors

  
In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) (2017), pp. 7310-7319, https://doi.org/10.1109/CVPR.2017.351

Abstract

The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed/memory/accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We ...

 

Legacy system anti-patterns and a pattern-oriented migration response

  
In Systems Engineering for Business Process Change (2000), pp. 239-250, https://doi.org/10.1007/978-1-4471-0457-5_19
edited by Peter Henderson

Abstract

Mature information systems grow old disgracefully as successive waves of hacking result in accidental architectures which resist the reflection of ongoing business process change. Such petrified systems are termed legacy systems. Legacy systems are simultaneously business assets and business liabilities. Their hard-won dependability and accurate reflection of tacit business knowledge prevent us from undertaking green-field development of replacement systems. Their resistance to the reflection of business process change prevents us from retaining them. Consequently, we are drawn in this paper to ...

 

The lack of a priori distinctions between learning algorithms

  
Neural Computation, Vol. 8, No. 7. (1 October 1996), pp. 1341-1390, https://doi.org/10.1162/neco.1996.8.7.1341

Abstract

This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower ...

 

Seeking for the rational basis of the Median Model: the optimal combination of multi-model ensemble results

  
Atmospheric Chemistry and Physics, Vol. 7, No. 24. (11 December 2007), pp. 6085-6098, https://doi.org/10.5194/acp-7-6085-2007

Abstract

In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. [\n] We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach ...
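
Operationally, the "Median Model" amounts to taking, grid cell by grid cell, the median of the fields produced by the ensemble members. A toy numpy sketch with a synthetic stack of model outputs (the number of models, grid size and lognormal fields are invented):

  import numpy as np

  # Stack of concentration fields from M dispersion models on a common grid (hypothetical).
  rng = np.random.default_rng(3)
  models = rng.lognormal(mean=0.0, sigma=1.0, size=(7, 100, 100))   # M x ny x nx

  median_model = np.median(models, axis=0)   # grid-cell-wise median across the ensemble
  mean_model = np.mean(models, axis=0)       # for comparison: the mean is pulled by outliers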

 

Integrated environmental modeling: a vision and roadmap for the future

  
Environmental Modelling & Software, Vol. 39 (January 2013), pp. 3-23, https://doi.org/10.1016/j.envsoft.2012.09.006

Abstract

Integrated environmental modeling (IEM) is inspired by modern environmental problems, decisions, and policies and enabled by transdisciplinary science and computer capabilities that allow the environment to be considered in a holistic way. The problems are characterized by the extent of the environmental system involved, dynamic and interdependent nature of stressors and their impacts, diversity of stakeholders, and integration of social, economic, and environmental considerations. IEM provides a science-based structure to develop and organize relevant knowledge and information and apply it to ...

 

An assessment of forest biomass maps in Europe using harmonized national statistics and inventory plots

  
Forest Ecology and Management, Vol. 409 (February 2018), pp. 489-498, https://doi.org/10.1016/j.foreco.2017.11.047

Abstract

[Highlights] [::] We assessed four biomass maps for Europe using harmonized biomass reference data. [::] The harmonized statistics were derived from ∼430,000 plots from 26 countries. [::] All maps overestimated at low biomass and underestimated at high biomass. [::] All maps had an overall negative bias (23–43 Mg ha−1 at national level). [::] The maps' relative errors were 29–40% at national level and 63–72% at cell level. [Abstract] Maps of aboveground forest biomass based on different input data and modelling approaches have been recently produced for Europe, opening up the ...

 

Rules of thumb for judging ecological theories

  
Trends in Ecology & Evolution, Vol. 19, No. 3. (March 2004), pp. 121-126, https://doi.org/10.1016/j.tree.2003.11.004

Abstract

An impressive fit to historical data suggests to biologists that a given ecological model is highly valid. Models often achieve this fit at the expense of exaggerated complexity that is not justified by empirical evidence. Because overfitted theories complement the traditional assumption that ecology is `messy', they generally remain unquestioned. Using predation theory as an example, we suggest that a fit-driven appraisal of model value is commonly misdirected; although fit to historical data can be important, the simplicity and generality of ...

 

The strategy of model building in population biology

  
American Scientist, Vol. 54, No. 4. (1966), pp. 421-431

Abstract

[Excerpt: Cluster of models] A mathematical model is neither an hypothesis nor a theory. Unlike the scientific hypothesis, a model is not verifiable directly by experiment. For all models are both true and false. Almost any plausible proposed relation among aspects of nature is likely to be true in the sense that it occurs (although rarely and slightly). Yet all models leave out a lot and are in that sense false, incomplete, inadequate. The validation of a model is not that it ...

 

Point process models for presence-only analysis

  
Methods in Ecology and Evolution, Vol. 6, No. 4. (1 April 2015), pp. 366-379, https://doi.org/10.1111/2041-210x.12352

Abstract

[::] Presence-only data are widely used for species distribution modelling, and point process regression models are a flexible tool that has considerable potential for this problem, when data arise as point events. [::] In this paper, we review point process models, some of their advantages and some common methods of fitting them to presence-only data. [::] Advantages include (and are not limited to) clarification of what the response variable is that is modelled; a framework for choosing the number and location of quadrature ...

 

To model or not to model, that is no longer the question for ecologists

  
Ecosystems, Vol. 20, No. 2. (2017), pp. 222-228, https://doi.org/10.1007/s10021-016-0068-x

Abstract

Here, I argue that we should abandon the division between “field ecologists” and “modelers,” and embrace modeling and empirical research as two powerful and often complementary approaches in the toolbox of 21st century ecologists, to be deployed alone or in combination depending on the task at hand. As empirical research has the longer tradition in ecology, and modeling is the more recent addition to the methodological arsenal, I provide both practical and theoretical reasons for integrating modeling more deeply into ecosystem ...

 

Describing wildland surface fuel loading for fire management: a review of approaches, methods and systems

  
International Journal of Wildland Fire, Vol. 22, No. 1. (2013), 51, https://doi.org/10.1071/wf11139

Abstract

Wildland fuelbeds are exceptionally complex, consisting of diverse particles of many sizes, types and shapes with abundances and properties that are highly variable in time and space. This complexity makes it difficult to accurately describe, classify, sample and map fuels for wildland fire research and management. As a result, many fire behaviour and effects software prediction systems use a generalised description of fuels to simplify data collection and entry into various computer programs. There are several major fuel description systems currently ...

 

The concept of potential natural vegetation: an epitaph?

  
Journal of Vegetation Science, Vol. 21, No. 6. (December 2010), pp. 1172-1178, https://doi.org/10.1111/j.1654-1103.2010.01218.x

Abstract

We discuss the usefulness of the concept of Potential Natural Vegetation (PNV), which describes the expected state of mature vegetation in the absence of human intervention. We argue that it is impossible to model PNV because of (i) the methodological problems associated with its definition and (ii) the issues related to ecosystem dynamics. We conclude that the approach to characterizing PNV is unrealistic and provides scenarios with limited predictive power. In places with a long-term human history, interpretations of PNV need ...

 

Science of preparedness

  
Science, Vol. 357, No. 6356. (14 September 2017), pp. 1073-1073, https://doi.org/10.1126/science.aap9025

Abstract

Our hearts go out to those affected by hurricanes Harvey and Irma and by earlier monsoons across South Asia. These events are compelling reminders of the important role that science must play in preparing for disasters. But preparation is challenging, as reflected in the many facets of the “science of preparedness.” Certainly, modeling and forecasting storms are critical, but so are analyses of how agencies, communities, and individuals interact to understand and implement preparedness initiatives. [Excerpt] [...] Long-range estimates of the number ...

 

Trends in extreme weather and climate events: issues related to modeling extremes in projections of future climate change

  
Bulletin of the American Meteorological Society, Vol. 81, No. 3. (1 March 2000), pp. 427-436, https://doi.org/10.1175/1520-0477(2000)081<0427:tiewac>2.3.co;2

Abstract

Projections of statistical aspects of weather and climate extremes can be derived from climate models representing possible future climate states. Some of the recent models have reproduced results previously reported in the Intergovernmental Panel on Climate Change (IPCC) Second Assessment Report, such as a greater frequency of extreme warm days and lower frequency of extreme cold days associated with a warmer mean climate, a decrease in diurnal temperature range associated with higher nighttime temperatures, increased precipitation intensity, midcontinent summer drying, decreasing ...

 

Resampling methods for meta-model validation with recommendations for evolutionary computation

  
Evolutionary Computation, Vol. 20, No. 2. (16 February 2012), pp. 249-275, https://doi.org/10.1162/evco_a_00069

Abstract

Meta-modeling has become a crucial tool in solving expensive optimization problems. Much of the work in the past has focused on finding a good regression method to model the fitness function. Examples include classical linear regression, splines, neural networks, Kriging and support vector regression. This paper specifically draws attention to the fact that assessing model accuracy is a crucial aspect in the meta-modeling framework. Resampling strategies such as cross-validation, subsampling, bootstrapping, and nested resampling are prominent methods for model validation and ...
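
A small sketch comparing two of the resampling strategies mentioned here, cross-validation and bootstrapping, for estimating the predictive error of a surrogate (meta-)model. The "expensive" function, the Gaussian-process surrogate and the number of bootstrap resamples are illustrative assumptions, not the paper's setup.

  import numpy as np
  from sklearn.gaussian_process import GaussianProcessRegressor
  from sklearn.model_selection import KFold, cross_val_score
  from sklearn.utils import resample
  from sklearn.metrics import mean_squared_error

  rng = np.random.default_rng(4)
  # Hypothetical expensive fitness function sampled at a small design.
  X = rng.uniform(-2, 2, size=(40, 2))
  y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.05, 40)

  # k-fold cross-validation estimate of the surrogate's predictive error.
  cv_mse = -cross_val_score(GaussianProcessRegressor(), X, y,
                            cv=KFold(5, shuffle=True, random_state=0),
                            scoring="neg_mean_squared_error").mean()

  # Bootstrap estimate: train on resampled designs, test on the left-out points.
  boot_mse = []
  for b in range(100):
      idx = resample(np.arange(len(y)), random_state=b)
      oob = np.setdiff1d(np.arange(len(y)), idx)
      if oob.size == 0:
          continue
      m = GaussianProcessRegressor().fit(X[idx], y[idx])
      boot_mse.append(mean_squared_error(y[oob], m.predict(X[oob])))

  print(f"5-fold CV MSE: {cv_mse:.4f}, bootstrap OOB MSE: {np.mean(boot_mse):.4f}")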

 

Combining multiple classifiers: an application using spatial and remotely sensed information for land cover type mapping

  
Remote Sensing of Environment, Vol. 74, No. 3. (December 2000), pp. 545-556, https://doi.org/10.1016/s0034-4257(00)00145-0

Abstract

This article discusses two new methods for increasing the accuracy of classifiers used in land cover mapping. The first method, called the product rule, is a simple and general method of combining two or more classification rules into a single rule. Stacked regression methods of combining classification rules are discussed and compared to the product rule. The second method of increasing classifier accuracy is a simple nonparametric classifier that uses spatial information for classification. Two data sets used for land cover mapping ...
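
A hedged sketch of a product-rule combination in the spirit described here: multiply the class-membership probabilities of two classifiers and take the class with the largest combined score. The dataset and the two component classifiers are arbitrary stand-ins, not those used in the paper.

  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split

  X, y = make_classification(n_samples=800, n_features=12, n_classes=3,
                             n_informative=6, random_state=0)
  X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

  clf_a = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
  clf_b = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

  # Product rule: multiply the per-class probabilities of the component classifiers
  # and pick the class with the largest combined score.
  p = clf_a.predict_proba(X_te) * clf_b.predict_proba(X_te)
  combined_pred = p.argmax(axis=1)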

 

Bagging ensemble selection for regression

  
In AI 2012: Advances in Artificial Intelligence, Vol. 7691 (2012), pp. 695-706, https://doi.org/10.1007/978-3-642-35101-3_59

Abstract

Bagging ensemble selection (BES) is a relatively new ensemble learning strategy. The strategy can be seen as an ensemble of the ensemble selection from libraries of models (ES) strategy. Previous experimental results on binary classification problems have shown that using random trees as base classifiers, BES-OOB (the most successful variant of BES) is competitive with (and in many cases, superior to) other ensemble learning strategies, for instance, the original ES algorithm, stacking with linear regression, random forests or boosting. Motivated by ...

 

Bagging ensemble selection

  
In AI 2011: Advances in Artificial Intelligence, Vol. 7106 (2011), pp. 251-260, https://doi.org/10.1007/978-3-642-25832-9_26

Abstract

Ensemble selection has recently appeared as a popular ensemble learning method, not only because its implementation is fairly straightforward, but also due to its excellent predictive performance on practical problems. The method has been highlighted in winning solutions of many data mining competitions, such as the Netflix competition, the KDD Cup 2009 and 2010, the UCSD FICO contest 2010, and a number of data mining competitions on the Kaggle platform. In this paper we present a novel variant: bagging ensemble selection. ...
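
A compact sketch of greedy ensemble selection from a library of fitted models, the procedure that bagging ensemble selection then wraps in a bagging loop: repeatedly add, with replacement, whichever library member most improves the averaged prediction on a validation set. The library composition, validation split and stopping rule are illustrative choices only.

  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.model_selection import train_test_split
  from sklearn.linear_model import LogisticRegression
  from sklearn.tree import DecisionTreeClassifier
  from sklearn.neighbors import KNeighborsClassifier
  from sklearn.naive_bayes import GaussianNB
  from sklearn.metrics import accuracy_score

  X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
  X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

  # A small library of fitted models (hypothetical choice of members).
  library = [m.fit(X_tr, y_tr) for m in (LogisticRegression(max_iter=1000),
                                         DecisionTreeClassifier(max_depth=4),
                                         KNeighborsClassifier(),
                                         GaussianNB())]
  probs = [m.predict_proba(X_val)[:, 1] for m in library]

  ensemble, current = [], np.zeros(len(y_val))
  for _ in range(10):
      # Score each candidate addition by the hold-out accuracy of the averaged ensemble.
      scores = [accuracy_score(y_val,
                               ((current + p) / (len(ensemble) + 1) > 0.5).astype(int))
                for p in probs]
      best = int(np.argmax(scores))
      ensemble.append(best)
      current += probs[best]
  print("selected model indices (with replacement):", ensemble)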

 

Simulating geographic transport network expansion through individual investments

  
In Spatial data analyses of urban land use and accessibility (2016), pp. 93-133

Abstract

This chapter introduces a GIS-based model that simulates the geographical expansion of transport networks by several decision makers with varying objectives. The model progressively adds extensions to a growing network by choosing the most attractive investments from a limited choice set. Attractiveness is defined as a function of variables in which revenue and broader societal benefits may play a role and can be based on empirically underpinned parameters that may differ according to private or public interests. The choice set is selected from an exhaustive set of ...

 

Modelling post-fire soil erosion hazard using ordinal logistic regression: a case study in South-eastern Spain

  
Geomorphology, Vol. 232 (March 2015), pp. 117-124, https://doi.org/10.1016/j.geomorph.2014.12.005

Abstract

[Highlights] [::] A method to identify most vulnerable areas towards soil erosion has been proposed. [::] Slope steepness, aspect and fire severity were the inputs. [::] The field data were successfully fit to the model in 60% of cases after 50 runs. [::] North-facing slopes were shown to be less prone to soil erosion than the rest. [Abstract] Treatments that minimize soil erosion after large wildfires depend, among other factors, on fire severity and landscape configuration so that, in practice, most of them are applied according to ...
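
A hedged sketch of an ordinal (proportional-odds) logistic regression with predictors named after those in the study (slope steepness, aspect, fire severity), fitted on synthetic data with statsmodels' OrderedModel. The variable construction and coefficients are invented, and this class is one possible tool, not necessarily the one used by the authors.

  import numpy as np
  import pandas as pd
  from statsmodels.miscmodels.ordinal_model import OrderedModel

  rng = np.random.default_rng(5)
  n = 300
  # Hypothetical plot-level predictors, named after those used in the study.
  df = pd.DataFrame({
      "slope": rng.uniform(0, 40, n),                 # degrees
      "north_facing": rng.integers(0, 2, n),          # 1 = north-facing aspect
      "fire_severity": rng.integers(1, 4, n),         # 1 = low ... 3 = high
  })
  latent = (0.06 * df.slope - 0.8 * df.north_facing
            + 0.7 * df.fire_severity + rng.logistic(size=n))
  df["erosion_class"] = pd.cut(latent, bins=[-np.inf, 1, 3, np.inf],
                               labels=["low", "moderate", "high"], ordered=True)

  model = OrderedModel(df["erosion_class"],
                       df[["slope", "north_facing", "fire_severity"]], distr="logit")
  result = model.fit(method="bfgs", disp=False)
  print(result.summary())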

 

New temperature-based models for predicting global solar radiation

  
Applied Energy, Vol. 179 (October 2016), pp. 437-450, https://doi.org/10.1016/j.apenergy.2016.07.006

Abstract

[Highlights] [::] New temperature-based models for estimating solar radiation are investigated. [::] The models are validated against 20-years measured data of global solar radiation. [::] The new temperature-based model shows the best performance for coastal sites. [::] The new temperature-based model is more accurate than the sunshine-based models. [::] The new model is highly applicable with weather temperature forecast techniques. [Abstract] This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at ...
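
For context on what a "temperature-based model" looks like, the sketch below implements the classical Hargreaves–Samani form, which estimates global solar radiation from the diurnal temperature range and extraterrestrial radiation. It is not the new model proposed in the paper; the coefficient values are the commonly cited textbook ones, and the example inputs are invented.

  import numpy as np

  def hargreaves_samani(t_max, t_min, ra, k_rs=0.17):
      """Classical temperature-based estimate of global solar radiation.

      t_max, t_min : daily maximum/minimum air temperature (deg C)
      ra           : extraterrestrial radiation (MJ m-2 day-1)
      k_rs         : empirical coefficient (~0.16 interior, ~0.19 coastal sites)
      """
      return k_rs * np.sqrt(t_max - t_min) * ra

  # Hypothetical day: 12 deg C diurnal range, Ra = 35 MJ m-2 day-1.
  rs = hargreaves_samani(t_max=30.0, t_min=18.0, ra=35.0)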

 

Archetypical patterns and trajectories of land systems in Europe

  
Regional Environmental Change (2015), pp. 1-18, https://doi.org/10.1007/s10113-015-0907-x

Abstract

Assessments of land-system change have dominantly focused on conversions among broad land-use categories, whereas intensity changes within these categories have received less attention. Considering that both modes of land change typically result in diverse patterns and trajectories of land-system change, there is a need to develop approaches to reduce this complexity. Using Europe as a case study, we applied a clustering approach based on self-organising maps and 12 land-use indicators to map (1) land-system archetypes for the year 2006, defined as ...

 

Linking plant strategies and plant traits derived by radiative transfer modelling

  
Journal of Vegetation Science (12 April 2017), https://doi.org/10.1111/jvs.12525

Abstract

[Question] Do spatial gradients of plant strategies correspond to patterns of plant traits obtained from a physically based model and hyperspectral imagery? It has previously been shown that reflectance can be used to map plant strategies according to the established CSR scheme. So far, these approaches have been based on empirical links and lacked transferability. Therefore, we test if physically based derivations of plant traits may help in finding gradients in traits that are linked to strategies. [Location] A raised bog and minerotrophic fen ...

 

Building Rothermel fire behaviour fuel models by genetic algorithm optimisation

  
International Journal of Wildland Fire, Vol. 24, No. 3. (2015), 317, https://doi.org/10.1071/wf14097

Abstract

A method to build and calibrate custom fuel models was developed by linking genetic algorithms (GA) to the Rothermel fire spread model. GA randomly generates solutions of fuel model parameters to form an initial population. Solutions are validated against observations of fire rate of spread via a goodness-of-fit metric. The population is selected for its best members, crossed over and mutated within a range of model parameter values, until a satisfactory fitness is reached. We showed that GA improved the performance ...
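
A toy sketch of the calibration loop described here: a simple real-coded genetic algorithm (selection, arithmetic crossover, Gaussian mutation) searches fuel-parameter values that minimize the error against observed rates of spread. A placeholder spread function stands in for the Rothermel model, and all parameter names, ranges and GA settings are invented.

  import numpy as np

  rng = np.random.default_rng(6)

  def toy_spread_model(params, wind):
      # Placeholder for the Rothermel model: rate of spread as a simple
      # function of two hypothetical fuel parameters and wind speed.
      load, sav = params
      return 0.05 * sav * np.sqrt(load) * (1 + 0.3 * wind)

  wind_obs = rng.uniform(0, 10, 30)
  ros_obs = toy_spread_model((1.2, 18.0), wind_obs) + rng.normal(0, 0.5, 30)

  def fitness(params):
      # Goodness of fit: negative RMSE against observed rates of spread.
      return -np.sqrt(np.mean((toy_spread_model(params, wind_obs) - ros_obs) ** 2))

  bounds = np.array([[0.1, 5.0], [5.0, 40.0]])            # allowed parameter ranges
  pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
  for gen in range(100):
      fit = np.array([fitness(p) for p in pop])
      parents = pop[np.argsort(fit)[-20:]]                # keep the best half
      children = parents[rng.integers(0, 20, 40)]
      children = (children + parents[rng.integers(0, 20, 40)]) / 2.0   # arithmetic crossover
      children += rng.normal(0, 0.05, children.shape) * (bounds[:, 1] - bounds[:, 0])  # mutation
      pop = np.clip(children, bounds[:, 0], bounds[:, 1])
  best = pop[np.argmax([fitness(p) for p in pop])]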

 

Building confidence in climate model projections: an analysis of inferences from fit

  
WIREs Clim Change, Vol. 8, No. 3. (1 May 2017), n/a, https://doi.org/10.1002/wcc.454

Abstract

Climate model projections are used to inform policy decisions and constitute a major focus of climate research. Confidence in climate projections relies on the adequacy of climate models for those projections. The question of how to argue for the adequacy of models for climate projections has not gotten sufficient attention in the climate modeling community. The most common way to evaluate a climate model is to assess in a quantitative way degrees of ‘model fit’; that is, how well model results ...

 

Detecting long-range correlations with detrended fluctuation analysis

  
Physica A: Statistical Mechanics and its Applications, Vol. 295, No. 3-4. (June 2001), pp. 441-454, https://doi.org/10.1016/s0378-4371(01)00144-3

Abstract

We examine the detrended fluctuation analysis (DFA), which is a well-established method for the detection of long-range correlations in time series. We show that deviations from scaling which appear at small time scales become stronger in higher orders of DFA, and suggest a modified DFA method to remove them. The improvement is necessary especially for short records that are affected by non-stationarities. Furthermore, we describe how crossovers in the correlation behavior can be detected reliably and determined quantitatively and show how ...
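
A minimal numpy implementation of first-order DFA, for reference: integrate the series, remove a linear trend within windows of increasing size, and read the scaling exponent from the log–log slope of the fluctuation function. The window sizes and the white-noise test series are arbitrary.

  import numpy as np

  def dfa(x, scales):
      """First-order detrended fluctuation analysis of a 1-D series."""
      y = np.cumsum(x - np.mean(x))                 # integrated profile
      F = []
      for s in scales:
          n_seg = len(y) // s
          segs = y[: n_seg * s].reshape(n_seg, s)
          t = np.arange(s)
          # Remove a least-squares linear trend from every segment.
          detrended = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
          F.append(np.sqrt(np.mean(np.square(detrended))))
      return np.array(F)

  rng = np.random.default_rng(7)
  x = rng.normal(size=4096)                             # white noise: expected alpha ~ 0.5
  scales = np.unique(np.logspace(1, 3, 15).astype(int))
  F = dfa(x, scales)
  alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent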

 

Statistical significance of seasonal warming/cooling trends

  
Proceedings of the National Academy of Sciences, Vol. 114, No. 15. (11 April 2017), pp. E2998-E3003, https://doi.org/10.1073/pnas.1700838114

Abstract

[Significance] The question whether a seasonal climatic trend (e.g., the increase of spring temperatures in Antarctica in the last decades) is of anthropogenic or natural origin is of great importance because seasonal climatic trends may considerably affect ecological systems, agricultural yields, and human societies. Previous studies assumed that the seasonal records can be treated as independent and are characterized by short-term memory only. Here we show that both assumptions, which may lead to a considerable overestimation of the trend significance, do not ...

This page of the database may be cited as:
Integrated Natural Resources Modelling and Management - Meta-information Database. http://mfkp.org/INRMM/tag/modelling


Publication metadata

BibTeX, RIS, RSS/XML feed, JSON, Dublin Core

Meta-information Database (INRMM-MiD).
This database integrates a dedicated meta-information database in CiteULike (the CiteULike INRMM Group) with the meta-information available in Google Scholar, CrossRef and DataCite. The Altmetric database with Article-Level Metrics is also harvested. Part of the provided machine-readable semantic content is also made human-readable thanks to the DCMI Dublin Core viewer. Digital preservation of the meta-information indexed within the INRMM-MiD publication records is provided by the Internet Archive.
Full-text and abstracts of the publications indexed by the INRMM meta-information database are copyrighted by the respective publishers/authors. They are subject to all applicable copyright protection. The conditions of use of each indexed publication are defined by its copyright owner. Please be aware that the indexed meta-information relies entirely on voluntary work and constitutes an incomplete and non-homogeneous work in progress.
INRMM-MiD was experimentally established by the Maieutike Research Initiative in 2008 and then improved with the help of several volunteers (with a major technical upgrade in 2011). This new integrated interface has been operational since 2014.