Publications

NIBIO's employees contribute to several hundred scientific articles and research reports every year. You can browse or search our collection, which contains references and links to these publications as well as other research and dissemination activities. The collection is continuously updated with new and historical material.

2022

Abstract

The process of creating terrain and landscape models is important in a variety of computer graphics and visualization applications, from films and computer games, via flight simulators and landscape planning, to scientific visualization and subsurface modelling. Interestingly, the modelling techniques used in this large range of application areas have started to merge in recent years. In this chapter we present two taxonomies of modelling methods. First, we present a data-oriented taxonomy, dividing modelling into three different scenarios: data-free, sparse-data and dense-data. Then we present a workflow-oriented taxonomy, dividing modelling into the separate stages needed to create a geological model. We begin by showing that new trends in geological modelling are converging with the modelling methods developed in computer graphics. We then introduce the process of geological modelling, followed by our two taxonomies with descriptions and comparisons of selected methods. Finally, we discuss the challenges and trends in geological modelling.
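
To make the data-free scenario above concrete: a classic computer-graphics example is procedural terrain synthesis, where a plausible heightfield is generated from noise alone, without any measured data. The Python sketch below is purely illustrative (spectral synthesis with a 1/f^beta power spectrum); it is not a method taken from the chapter.

import numpy as np

def fractal_heightfield(n=257, beta=2.0, seed=0):
    """Generate an n x n heightfield by spectral synthesis:
    white noise shaped with a 1/f^beta power spectrum (illustrative only)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    spectrum = np.fft.fft2(noise)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0  # avoid division by zero at the DC component
    spectrum *= 1.0 / f**(beta / 2.0)
    heights = np.real(np.fft.ifft2(spectrum))
    # normalise to [0, 1] for convenience
    heights -= heights.min()
    heights /= heights.max()
    return heights

if __name__ == "__main__":
    dem = fractal_heightfield()
    print(dem.shape, dem.min(), dem.max())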

Abstract

When you care about the data integrity of spatial data, you need to know about the limitations and weaknesses of using the simple feature datatype in your database. For instance, https://land.copernicus.eu/pan-european/corine-land-cover/clc2018 contains 2,377,772 simple features, among which we find 852 overlaps and 1,420 invalid polygons. For this test I used an ESRI FGDB file and GDAL for the import to PostGIS. We find such minor overlaps and gaps quite often, and they may not be visible to the human eye. The problem is that they cover up real errors and make it difficult to enforce database integrity constraints. Close parallel lines also seem to cause a Topology Exception in many spatial libraries. A core problem with simple features is that they do not contain information about the relations they have with neighbouring features, so the integrity of such relations is hard to constrain. Another problem is the mixing of old and new data in the payload from the client. This makes clients hard and expensive to create, because you need a full stack of spatial libraries and perhaps a complete, locked, exact snapshot of your database on the client side. In addition, a common line may differ from client to client depending on the spatial library, snapTo usage, tolerance values and transport formats. In 2022 many systems depend on live updates also for spatial data, so it is a big advantage to be able to provide simple and “secure” APIs with fast server-side integrity checks that can be used from a standard web browser. When these checks run on the server side, the same rules are enforced across different clients. Are there alternatives that can secure data integrity in a better way? Yes, for instance PostGIS Topology. The big difference is that PostGIS Topology has a more open structure, realized using standard relational database features. This lowers the complexity of the client and secures data integrity. In the talk “Use Postgis Topology to secure data integrity, simple API and clean up messy simple feature datasets.” we dive deeper into the details of PostGIS Topology. Building an API for clients may be possible using simple features, but it would require expensive computations to ensure topological integrity, and the problem of mixing new and old border parts cannot be solved without breaking the polygon up into logical parts. Another issue is attribute handling: if you place a surface partly overlapping another surface, should that influence the attributes of the new surface? We need to focus more on data integrity and on the complexity and cost of creating clients when using simple features, because the demand for spatial data updated in real time from many different clients in a secure and consistent way will increase. This will be the main focus of this talk. https://www.slideshare.net/laopsahl/dataintegrityriskswhenusingsimplefeaturepdf
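
As an illustration of the kind of server-side checks mentioned above, the Python sketch below runs basic validity and overlap queries against a PostGIS database. The table name clc2018, the geometry column geom, the key column gid and the connection settings are assumptions for the example, not details from the talk.

# Minimal sketch of server-side integrity checks with psycopg2 and PostGIS.
# Table "clc2018", columns "geom"/"gid" and the DSN are assumed for illustration.
import psycopg2

conn = psycopg2.connect("dbname=gis user=gis")  # assumed connection settings

with conn, conn.cursor() as cur:
    # Count invalid polygons (checks of this kind reveal the invalid
    # geometries mentioned in the abstract).
    cur.execute("""
        SELECT count(*)
        FROM clc2018
        WHERE NOT ST_IsValid(geom);
    """)
    print("invalid polygons:", cur.fetchone()[0])

    # Count pairs of features whose interiors overlap.
    cur.execute("""
        SELECT count(*)
        FROM clc2018 a
        JOIN clc2018 b
          ON a.gid < b.gid                -- "gid" is an assumed primary key
         AND a.geom && b.geom             -- bounding-box prefilter via spatial index
         AND ST_Overlaps(a.geom, b.geom);
    """)
    print("overlapping pairs:", cur.fetchone()[0])

conn.close()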

Abstract

Sustainable water resources management is rooted in reliable monitoring data and the full engagement of all institutions involved in the water sector. When competences and interests overlap, however, coordination may be difficult, hampering cooperative action. This is the case of Santa Cruz Island (Galápagos, Ecuador). A comprehensive assessment of water quality data (physico-chemical parameters, major elements, trace elements and coliforms) collected since 1985 revealed the need to optimize monitoring efforts in order to fill knowledge gaps and better target decision-making processes. A Water Committee (Comité de la gestión del Agua) was established to foster coordinated action among stakeholders and to pave the way for joint monitoring on the island that can optimize the efforts for water quality assessment and protection. Shared procedures for data collection, sample analysis, evaluation and data assessment through an open-access geodatabase were proposed and implemented for the first time as a prototype, in order to improve accountability and outreach towards civil society and water users. The overall results reveal the high potential of a well-structured and effective joint monitoring approach within a complex, multi-stakeholder framework.

2021

Abstract

There are neither volume nor velocity thresholds that define big data. Any data ranging from just beyond the capacity of a single personal computer to tera- and petabytes can be considered big data. Although it is common to use High Performance Computers (HPCs) and cloud facilities to process big data, migrating to such facilities is not always practical for various reasons, especially for small and medium-sized analyses. Personal computers at public institutions and private companies are often idle during parts of the day and the entire night. Exploiting such computational resources can partly alleviate the need for HPC and cloud services for the analysis of big data where HPC and cloud facilities are not immediate options. This is particularly relevant during testing and pilot applications before implementation on HPC or cloud computing. In this paper, we show a real case of using a local network of personal computers, running open-source software packages configured for distributed processing, to process remotely sensed big data. Sentinel-2 image time series are used to test the distributed system. The normalized difference vegetation index (NDVI) and the monthly median band values are the variables computed to test and evaluate the practicality and efficiency of the distributed cluster. The computational efficiency of the cluster under different cluster setups, data sources and data distributions is tested and evaluated. The results demonstrate that the proposed cluster of local computers is efficient and practical for processing remotely sensed data where a single personal computer cannot perform the computation. Careful configuration of the computers, the distributed framework and the data is important for optimizing the efficiency of such a system. If correctly implemented, the solution leads to efficient use of the computer facilities and allows the processing of big remote sensing data without migrating it to larger facilities such as HPC and cloud computing systems, except when going to production and large applications.
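
The abstract does not name the distributed framework, so the sketch below is only one plausible reading of the setup: Dask is assumed as the open-source distributed-processing package, the scheduler address is hypothetical, and random arrays stand in for the Sentinel-2 red and near-infrared bands.

# Illustrative sketch: NDVI computed lazily over chunked arrays on a Dask cluster.
# Dask, the scheduler address and the synthetic band arrays are assumptions,
# not details taken from the paper.
import dask.array as da
from dask.distributed import Client

# Client() starts a local cluster; to use idle office machines you would
# instead pass the address of a scheduler on the LAN, e.g.
# Client("tcp://192.168.1.10:8786") (hypothetical address).
client = Client()

# Stand-ins for Sentinel-2 red (B04) and near-infrared (B08) tiles; in practice
# these would be read from the image files and chunked so each worker
# processes one block at a time.
shape, chunks = (10980, 10980), (1098, 1098)
red = da.random.random(shape, chunks=chunks).astype("float32")
nir = da.random.random(shape, chunks=chunks).astype("float32")

# NDVI = (NIR - red) / (NIR + red), evaluated in parallel across the workers.
ndvi = (nir - red) / (nir + red)
print("mean NDVI:", ndvi.mean().compute())

client.close()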

Abstract

Rapid technological advances in airborne hyperspectral and lidar systems have paved the way for using machine learning algorithms to map urban environments. Both hyperspectral and lidar systems can discriminate among many significant urban structures and material properties that are not recognizable with conventional RGB cameras. In recent years, the fusion of hyperspectral and lidar sensors has overcome challenges related to the limits of active and passive remote sensing systems, providing promising results in urban land cover classification. This paper presents principles and key features of airborne hyperspectral imaging, lidar, and their fusion, as well as applications of these for urban land cover classification. In addition, machine learning and deep learning classification algorithms suitable for classifying individual urban classes such as buildings, vegetation, and roads are reviewed, focusing on the extracted features critical for the classification of urban surfaces, transferability, dimensionality, and computational expense.
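
As a purely illustrative sketch of the kind of feature-level fusion discussed in the review, the example below stacks per-pixel hyperspectral bands with a lidar-derived height feature and trains a random forest classifier on synthetic stand-in data; the review itself does not prescribe this particular pipeline, and all names and dimensions here are assumptions.

# Feature-level fusion sketch: hyperspectral bands + lidar height -> random forest.
# All data are synthetic stand-ins; class labels and dimensions are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_pixels, n_bands = 5000, 120                       # hypothetical labelled pixels / bands

hyperspectral = rng.random((n_pixels, n_bands))     # per-pixel reflectance spectra
lidar_height = rng.random((n_pixels, 1)) * 30.0     # normalized surface height (m)
labels = rng.integers(0, 3, n_pixels)               # 0=building, 1=vegetation, 2=road

# Feature-level fusion: concatenate both sources into one feature vector per pixel.
features = np.hstack([hyperspectral, lidar_height])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))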