
Publications

NIBIO's employees contribute to several hundred scientific articles and research reports every year. You can browse or search our collection, which contains references and links to these publications as well as to other research and dissemination activities. The collection is continuously updated with new and historical material.

2023


Abstract

Global warming necessitates urgent action to reduce carbon dioxide (CO2) emissions and remove CO2 from the atmosphere. Biochar, a type of carbonized biomass that can be produced from crop residues (CRs), offers a promising solution for carbon dioxide removal (CDR) when it is used to sequester photosynthetically fixed carbon that would otherwise have been returned to atmospheric CO2 through respiration or combustion. However, high-resolution spatially explicit maps of CR resources and their capacity for climate change mitigation through biochar production are currently lacking, with previous global studies relying on coarse (mostly country-scale) aggregated statistics. By developing a comprehensive high-spatial-resolution global dataset of CR production, we show that, globally, CRs generate around 2.4 Pg C annually. If 100% of these residues were utilized, the maximum theoretical technical potential for biochar production from CRs amounts to 1.0 Pg C year−1 (3.7 Pg CO2e year−1). The permanence of biochar differs across regions, with the fraction of initial carbon that remains after 100 years ranging from 60% in warm climates to nearly 100% in cryosols. Assuming that biochar is sequestered in soils close to the point of production, approximately 0.72 Pg C year−1 (2.6 Pg CO2e year−1) of the technical potential would remain sequestered after 100 years. However, when considering limitations on sustainable residue harvesting and competing livestock usage, the global biochar production potential decreases to 0.51 Pg C year−1 (1.9 Pg CO2e year−1), with 0.36 Pg C year−1 (1.3 Pg CO2e year−1) remaining sequestered after a century. Twelve countries have the technical potential to sequester over one-fifth of their current emissions as biochar from CRs, with Bhutan (68%) and India (53%) having the largest ratios.
The high-resolution maps of CR production and biochar sequestration potential provided here offer valuable insights and will support decision-making related to biochar production and investment in biochar production capacity.
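The Pg C to Pg CO2e conversions quoted in the abstract can be cross-checked with the molar-mass ratio of CO2 to carbon (44/12). A minimal sketch (the variable names are ours, not from the paper):

```python
# Cross-check the abstract's Pg C -> Pg CO2e conversions using the
# molar-mass ratio of CO2 (44 g/mol) to carbon (12 g/mol).
C_TO_CO2E = 44.0 / 12.0

def c_to_co2e(pg_c: float) -> float:
    """Convert petagrams of carbon to petagrams of CO2-equivalent."""
    return pg_c * C_TO_CO2E

figures = {
    "technical potential":          1.00,  # Pg C / yr
    "technical, after 100 years":   0.72,
    "constrained potential":        0.51,
    "constrained, after 100 years": 0.36,
}

for label, pg_c in figures.items():
    print(f"{label}: {pg_c:.2f} Pg C/yr = {c_to_co2e(pg_c):.1f} Pg CO2e/yr")
# The printed CO2e values match the abstract: 3.7, 2.6, 1.9 and 1.3.
```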

Abstract

In Norway we now have more up-to-date land resource maps (AR5), because the agricultural domain experts in the Norwegian municipalities have been given access to an easy-to-use client. The system consists of a simple web-browser client and a database built on PostGIS Topology. In this talk we will focus on what it is about PostGIS Topology that makes it easier to build user-friendly and secure tools for updating land resource maps such as AR5. We will also say a few words about the advantages for traceability and data security when using PostGIS Topology. In another project, where we run many ST_Intersection and ST_Difference operations on large Simple Feature layers covering all of Norway, we struggled for years with topology exceptions, wrong results and poor performance. Over the last two years we also tested JTS OverlayNG, but we still had problems. This year we are switching to PostGIS Topology, and the tests so far are very promising. We will also take a glance at this project in the talk. A PostGIS Topology database model normalizes the data related to borders and surfaces, which is not the case for Simple Features. The Simple Feature model can be compared to a database model without foreign keys between students and classes: a standard spreadsheet model, in which each student's name is duplicated in every class they attend. URLs related to this talk: https://gitlab.com/nibioopensource/pgtopo_update_gui https://gitlab.com/nibioopensource/pgtopo_update_rest https://gitlab.com/nibioopensource/pgtopo_update_sql https://gitlab.com/nibioopensource/resolve-overlap-and-gap
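The normalization point can be sketched in a few lines. This is a toy model in Python, not the actual PostGIS Topology schema: shared borders are stored once and referenced by id, like rows behind a foreign key, instead of each polygon carrying its own copy of the coordinates.

```python
# Simple-feature style: each polygon stores its own copy of the border,
# so the shared edge (1,0)-(1,1) is duplicated and can drift apart.
parcel_a_sf = [(0, 0), (1, 0), (1, 1), (0, 1)]
parcel_b_sf = [(1, 0), (2, 0), (2, 1), (1, 1)]

# Topology style: edges are stored once; faces are lists of edge ids.
edges = {
    1: [(0, 0), (1, 0)], 2: [(1, 0), (1, 1)], 3: [(1, 1), (0, 1)],
    4: [(0, 1), (0, 0)], 5: [(1, 0), (2, 0)], 6: [(2, 0), (2, 1)],
    7: [(2, 1), (1, 1)],
}
faces = {"A": [1, 2, 3, 4], "B": [5, 6, 7, 2]}  # edge 2 is shared

# Moving the shared border means updating one edge; both faces follow,
# and the common boundary cannot become inconsistent.
edges[2] = [(1.1, 0), (1.1, 1)]
assert edges[faces["A"][1]] == edges[faces["B"][3]]
```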


Abstract

Up-to-date and reliable information on land cover and land use is important in many aspects of human activity. Knowledge about the reference dataset (its coverage, nomenclature, thematic and geometric accuracy, and spatial resolution) is crucial for the appropriate selection of reference samples used in the classification process. In this study, we examined the impact of the selection and pre-processing of reference samples on classification accuracy. A classification based on the Random Forest algorithm was performed, first using reference samples selected automatically from the national databases, and then using pre-processed and verified reference samples. The verification procedure involved iterative analysis of histograms of spectral features derived from Sentinel-2 data for individual land cover classes. Verification of the reference samples improved the delineation accuracy of all land cover classes. The largest improvement was achieved for the woodland broadleaved and the non- and sparse vegetation classes, with overall accuracy increasing from 51% to 73% and from 33% to 74%, respectively. The second objective of this study was to derive the best possible land cover classification over a mountain area in Norway; we therefore examined whether use of a Digital Elevation Model (DEM) could improve the classification results. Classifications were carried out based on Sentinel-2 data alone and on a combination of Sentinel-2 and DEM data. Using the DEM, the accuracy of nine out of ten land cover classes improved. The largest improvement was achieved for classes located at higher altitudes: low vegetation and non- and sparse vegetation.
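The iterative histogram-based verification described above can be approximated in a few lines. This is a simplified stand-in, not the study's actual procedure: here outliers are dropped when a sample's band value lies more than k standard deviations from its class mean, repeating until the sample set stabilizes.

```python
import statistics

def verify_samples(samples, k=2.0, max_iter=5):
    """Iteratively drop samples whose reflectance lies more than k
    standard deviations from the class mean. A simplified stand-in
    for histogram-based verification of reference samples."""
    kept = list(samples)
    for _ in range(max_iter):
        mean = statistics.fmean(kept)
        sd = statistics.pstdev(kept)
        filtered = [s for s in kept if abs(s - mean) <= k * sd]
        if len(filtered) == len(kept):
            break  # converged: no more outliers
        kept = filtered
    return kept

# Hypothetical NIR reflectances for one class, with two mislabelled
# samples (0.12 and 0.95) that distort the class histogram.
band = [0.41, 0.43, 0.40, 0.42, 0.44, 0.12, 0.95]
print(verify_samples(band))  # -> [0.41, 0.43, 0.40, 0.42, 0.44]
```

Note that the two outliers are removed over successive iterations: the extreme 0.95 first, which tightens the statistics enough to expose 0.12 in the next pass.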

Abstract

The aim of the article is to assess whether agricultural landscapes play a role in the perception of Norway held by tourists and residents. An additional aim is to analyse whether the information accompanying images on social media indicates that the photographers have acknowledged the agricultural landscape. The authors used geotagged images uploaded to the image-sharing platform Flickr in their analyses. They selected photos taken within agricultural landscapes, inspected them, and categorized them according to extent and content. Additionally, they analysed the accompanying hashtags. The findings revealed that a large proportion of the photos contained agricultural landscapes, confirming the importance of the agricultural landscape for the visual perception of, and access to, Norwegian landscapes. At the same time, the scarcity of agriculture-related hashtags strengthened the authors' suspicion that this importance might not be widely recognized by the photographers. Thus, while agricultural landscapes are commonly considered primarily as landscapes of food production, the authors conclude that these landscapes also fulfil other functions and that their contribution to the perception of Norway is important. Additionally, many of the landscape elements seen and analysed in the sample of photos play a role in providing cultural ecosystem services.
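The hashtag analysis step can be sketched as a simple tally. The tag lists and the agricultural keyword set below are invented for illustration; the study worked with real Flickr metadata.

```python
from collections import Counter

# Hypothetical per-photo tag lists standing in for Flickr metadata.
photo_tags = [
    ["norway", "fjord", "travel"],
    ["norway", "hiking", "summer"],
    ["norway", "barn", "farm"],
    ["visitnorway", "mountains"],
]

# Illustrative keyword set for "agriculture-related" hashtags.
AGRI_TAGS = {"farm", "farming", "agriculture", "barn", "cattle", "harvest"}

counts = Counter(tag for tags in photo_tags for tag in tags)
agri_photos = sum(1 for tags in photo_tags if AGRI_TAGS & set(tags))
print(f"{agri_photos}/{len(photo_tags)} photos carry an agricultural tag")
```

A low ratio of agriculture-tagged photos relative to photos that actually depict agricultural landscapes is the kind of gap the authors interpret as a lack of recognition by the photographers.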

2022


Abstract

The process of creating terrain and landscape models is important in a variety of computer graphics and visualization applications, from films and computer games, via flight simulators and landscape planning, to scientific visualization and subsurface modelling. Interestingly, the modelling techniques used across this wide range of application areas have started to merge in recent years. This chapter presents two taxonomies of modelling methods. First, we present a data-oriented taxonomy, in which we divide modelling into three scenarios: data-free, sparse-data and dense-data. Then we present a workflow-oriented taxonomy, in which we divide modelling into the separate stages necessary for creating a geological model. We begin by showing that new trends in geological modelling are approaching the modelling methods developed in computer graphics. We then introduce the process of geological modelling, followed by our two taxonomies with descriptions and comparisons of selected methods. Finally, we discuss challenges and trends in geological modelling.

Abstract

If you care about the integrity of spatial data, you need to know the limitations and weaknesses of using the simple feature datatype in your database. For instance, https://land.copernicus.eu/pan-european/corine-land-cover/clc2018 contains 2,377,772 simple features, among which we find 852 overlaps and 1,420 invalid polygons. For this test I used the "ESRI FGDB" file and GDAL for import to PostGIS. We find such minor overlaps and gaps quite often, and they may not be visible to the human eye. The problem is that they cover up real errors and make it difficult to enforce database integrity constraints. Close parallel lines also seem to cause topology exceptions in many spatial libraries. A core problem with simple features is that they contain no information about their relations to neighbouring features, so the integrity of such relations is hard to constrain. Another problem is the mixing of old and new data in the payload from the client. This makes clients hard and expensive to build, because you need a full stack of spatial libraries and perhaps a complete, locked, exact snapshot of the database on the client side. Furthermore, a common line may differ from client to client depending on the spatial library, snap-to usage, tolerance values and transport formats. In 2022 many systems depend on live updates for spatial data too, so it is a big advantage to be able to provide simple, secure APIs with fast server-side integrity checks that can be used from a standard web browser. When these checks run on the server side, the same rules are enforced across all clients. Are there alternatives that can secure data integrity in a better way? Yes, for instance PostGIS Topology. The big difference is that PostGIS Topology has a more open structure, realized with standard relational database features. This lowers the complexity of the client and secures data integrity.
In the talk "Use Postgis Topology to secure data integrity, simple API and clean up messy simple feature datasets." we will dive deeper into the details of PostGIS Topology. Building an API for clients may be possible using simple features, but it would require expensive computations to ensure topological integrity, and the problem of mixing new and old border parts cannot be solved without breaking the polygon up into logical parts. Attribute handling is another issue: if you place a surface partly overlapping another surface, should that influence the attributes of the new surface? We need to focus more on data integrity and on the complexity and cost of creating clients with simple features, because the demand for spatial data updated in real time from many different clients, in a secure and consistent way, will only increase. This will be the main focus of the talk. https://www.slideshare.net/laopsahl/dataintegrityriskswhenusingsimplefeaturepdf
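One way a server can reject payloads that mix stale and current data is an optimistic version check on each referenced edge. The sketch below is a hypothetical API in plain Python, not the actual pgtopo_update_rest interface: the client sends only the edge it changed plus the version its edit was based on, and the server refuses the edit if that edge has changed since.

```python
# In-memory stand-in for the server's edge table: id -> (version, coords).
db_edges = {
    101: (3, [(0, 0), (1, 0)]),
    102: (7, [(1, 0), (1, 1)]),
}

def apply_edit(edge_id, based_on_version, new_coords):
    """Accept the edit only if the client based it on the current
    server-side version of the edge (optimistic concurrency)."""
    version, _ = db_edges[edge_id]
    if version != based_on_version:
        raise ValueError(f"edge {edge_id} changed on server; refetch")
    db_edges[edge_id] = (version + 1, new_coords)
    return version + 1

print(apply_edit(102, 7, [(1.05, 0), (1.05, 1)]))  # accepted -> prints 8
```

An edit based on a stale version (for example `apply_edit(101, 2, ...)` while the server holds version 3) raises instead of silently mixing old and new borders, so the same rule holds no matter which client sent the payload.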