SURE uses state-of-the-art algorithms from the fields of Photogrammetry and Computer Vision for transforming input images and/or LiDAR data into georeferenced Raster Products (DSM, True Ortho, etc.), Point Clouds and Meshes.
In order to allow for efficient and scalable processing, SURE divides the surface to be processed into tiles.
A DSM (Digital Surface Model) stores elevation data in a raster grid and can be represented as a height image: each pixel stores one elevation value corresponding to its (X,Y) position, making it a 2.5D surface representation. The resolution of the DSM (output GSD) coincides with the size of the pixel / raster cell, i.e. the distance between two consecutive observations in row/column direction.
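As an illustration (not SURE's actual implementation), rasterizing 3D points into such a 2.5D grid can be sketched as follows; the cell size `gsd`, the NODATA value and the keep-highest-point rule are assumptions for the example:

```python
import math

# Hypothetical sketch (not SURE's implementation): rasterize a small set of
# 3D points into a 2.5D DSM grid. Each cell keeps the highest Z that falls
# into it; empty cells stay at NODATA.
NODATA = -9999.0

def rasterize_dsm(points, origin_x, origin_y, gsd, cols, rows):
    """points: iterable of (x, y, z); gsd: cell size in ground units."""
    grid = [[NODATA] * cols for _ in range(rows)]
    for x, y, z in points:
        col = int(math.floor((x - origin_x) / gsd))
        row = int(math.floor((y - origin_y) / gsd))
        if 0 <= col < cols and 0 <= row < rows:
            # a DSM stores the visible surface, so the highest point wins
            grid[row][col] = max(grid[row][col], z)
    return grid

points = [(0.2, 0.3, 10.0), (0.4, 0.1, 12.5), (1.5, 0.5, 8.0)]
dsm = rasterize_dsm(points, 0.0, 0.0, 1.0, 3, 2)
# cell (0, 0) received two points, so its highest Z (12.5) is stored
```

Cells that receive no point remain at NODATA and are candidates for interpolation, which is the distinction the Metainformation layers described below make visible.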
As the name implies, the DSM represents, in addition to the exposed terrain, the surfaces of objects in the real landscape: vegetation, buildings and other structures, etc. This distinguishes it from a Digital Terrain Model (DTM), which represents only the ground/terrain, excluding the objects on top of it.
The DSM image is rendered from a bird's-eye view perspective and is in fact an orthographic projection with a pixel-to-pixel correspondence to the True Ortho.
SURE can store the DSM in two formats: tif (32 bit image) and las (point cloud). In the latter, each point corresponds to one raster cell (pixel). In addition to their coordinates, the points in the DSM las files also retain the RGB(I) fields as computed for the True Ortho tiles. See also Output Formats and Las Definition.
Although it can be viewed as a requirement for the True Ortho, the DSM carries value by itself, as it can be useful in a wide range of applications. Examples include: DTM production, building footprint extraction, LOD generation, etc.
Apart from the DSM product itself, SURE can generate additional raster layers which store DSM Metainformation. These items are written as tif images with a one-to-one correspondence to the DSM tif tiles, and they can only be generated if the DSM is also produced. The Metainformation layers also provide insights into the quality of the observations in the DSM.
DSM Height Colored
This image has its pixels color-coded according to their corresponding height values. The repeating color-band pattern facilitates the visualization of subtle height differences or slopes.
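A minimal sketch of such height color-coding, assuming a hypothetical band height and a simple blue-green-red ramp (the actual SURE color scheme may differ):

```python
# Hypothetical sketch of height color-coding with repeating bands: the
# elevation is wrapped with a modulo so the color cycle restarts every
# band_height metres, making small height differences visible as color steps.
def height_to_rgb(z, band_height=10.0):
    """Map an elevation to an (R, G, B) tuple; colors repeat every band."""
    t = (z % band_height) / band_height   # position within the band, 0..1
    # simple three-stop ramp: blue -> green -> red
    if t < 0.5:
        f = t / 0.5
        return (0, int(255 * f), int(255 * (1 - f)))   # blue to green
    f = (t - 0.5) / 0.5
    return (int(255 * f), int(255 * (1 - f)), 0)       # green to red

# heights 2.0 and 12.0 fall at the same position inside their bands,
# so they receive the same color:
assert height_to_rgb(2.0) == height_to_rgb(12.0)
```

The modulo is what produces the repetitive bands: a gentle slope crosses many band boundaries, so it shows up as closely spaced color stripes.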
DSM Point Color
Each pixel retains the RGB values of the original point that delivered the Z value at the respective position.
DSM Point Color Interpolated
Each pixel retains the RGB values of the original point that delivered the Z value at the respective position; interpolated pixels are rendered in black.
DSM Local Height Variation
This layer displays a measure of the local height variation across a 3x3 pixel mask. Small or nil values are expected in flat areas, whereas larger values indicate an abrupt change in elevation (e.g. near building edges).
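Such a layer can be sketched as a moving-window operation; the max-minus-min measure below is an assumption for illustration, not necessarily the exact statistic SURE computes:

```python
# Hypothetical sketch of a local height variation layer: for each pixel,
# take max - min of the elevations in its 3x3 neighborhood (border pixels
# use only the cells that exist). Flat areas yield 0; building edges yield
# large values.
def local_height_variation(dsm):
    rows, cols = len(dsm), len(dsm[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = [dsm[rr][cc]
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = max(window) - min(window)
    return out

flat = [[5.0, 5.0], [5.0, 5.0]]
edge = [[5.0, 5.0], [5.0, 15.0]]   # e.g. a building corner
assert local_height_variation(flat)[0][0] == 0.0
assert local_height_variation(edge)[0][0] == 10.0
```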
DSM Cell Point Count
The pixels in this image retain the number of original 3D points that were created inside the respective raster cells. The layer can be interpreted as a visibility measure, as it indicates how many times the input images could triangulate points at the respective pixels.
DSM Cell Standard Deviation
This product stores per pixel the standard deviation in terms of elevation for the population of points that were generated inside the respective raster cell. This can be a good indicator of the reliability of the height measurement that ended up in the DSM product.
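Both per-cell layers, the point count and the standard deviation, can be derived from the same grouping of points by raster cell. The following is a hypothetical sketch, not SURE's implementation:

```python
import math
from collections import defaultdict

# Hypothetical sketch of the per-cell statistics behind the Cell Point Count
# and Cell Standard Deviation layers: group the 3D points by raster cell,
# then report the population size and the spread of its Z values.
def cell_statistics(points, gsd=1.0):
    cells = defaultdict(list)               # (col, row) -> list of Z values
    for x, y, z in points:
        cells[(int(x // gsd), int(y // gsd))].append(z)
    stats = {}
    for cell, zs in cells.items():
        mean = sum(zs) / len(zs)
        std = math.sqrt(sum((z - mean) ** 2 for z in zs) / len(zs))
        stats[cell] = (len(zs), std)        # (point count, std deviation)
    return stats

points = [(0.1, 0.1, 10.0), (0.6, 0.4, 10.0), (0.2, 0.8, 16.0)]
stats = cell_statistics(points)
count, std = stats[(0, 0)]
# all three points fell into cell (0, 0); a large std would flag an
# unreliable height measurement for that cell
```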
DSM Model Count
This image stores, per pixel, the number of stereo models that generated the points from which the elevation measurement in the DSM was taken.
DSM Binary Mask
Every pixel where an actual height measurement was available is represented as white (255 or maximum intensity), whereas pixels that were either interpolated or left empty are represented as black (0 or minimum intensity).
DSM Distance Mask
For each raster cell of the DSM that does not contain a measurement (including interpolated pixels), the Euclidean distance to the closest pixel with a valid measurement (originating in a 3D point) is stored.
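The relationship between the Binary Mask and the Distance Mask can be sketched as follows; the brute-force nearest-seed search is an illustrative assumption (a real implementation would use an efficient distance transform):

```python
import math

# Hypothetical sketch relating the Binary Mask and Distance Mask layers:
# measured cells are white (255); for every other cell the Euclidean
# distance to the nearest measured cell is stored. Brute force is fine for
# a toy grid; production code would use a distance transform.
def masks(measured):
    rows, cols = len(measured), len(measured[0])
    binary = [[255 if measured[r][c] else 0 for c in range(cols)]
              for r in range(rows)]
    seeds = [(r, c) for r in range(rows) for c in range(cols) if measured[r][c]]
    dist = [[0.0 if measured[r][c] else
             min(math.hypot(r - sr, c - sc) for sr, sc in seeds)
             for c in range(cols)] for r in range(rows)]
    return binary, dist

measured = [[True, False, False],
            [False, False, False]]
binary, dist = masks(measured)
# binary[0][0] == 255; dist[0][2] == 2.0 (two cells right of the measurement)
```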
True Orthophotos (also known as True Ortho) show aerial images of the Earth's surface and the objects on it in an orthographic projection. The images have the same bird's-eye view perspective as the DSM, depicting instead the original color bands of the input images. Up to 4 channels are supported, usually the Red, Green, Blue and, if available, the Near-Infrared bands.
The main feature that sets it apart from the classical Orthophoto is the use of an accurate DSM, instead of a DTM, as the geometric basis for removing the perspective distortion of the aerial images. As a consequence, a number of properties emerge that distinguish the True Ortho from the aerial images and the classical Orthophoto:
Uniform scale across the entire image. The pixel size (resolution) remains constant with regard to the object space it covers and is identical to that of the underlying DSM.
Accurate 2D positions of objects - with the possibility of georeferencing them.
True horizontal distances, independent from where they are measured across the True Ortho tiles.
No displacement caused by relief or tall structures → no building lean and no occlusions caused by it.
Building footprints can be easily identified - consistent with rooftops, except for the eaves offset where eaves are present.
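The uniform-scale and true-distance properties above can be sketched as a fixed affine relation between pixel and world coordinates; the origin and GSD values below are made-up example numbers:

```python
# Hypothetical sketch of the uniform-scale property: with a constant GSD,
# pixel coordinates convert to world coordinates by a fixed affine relation,
# so horizontal distances are true wherever they are measured in the image.
def pixel_to_world(col, row, origin_x, origin_y, gsd):
    # origin at the upper-left corner; rows grow toward the south
    return (origin_x + (col + 0.5) * gsd, origin_y - (row + 0.5) * gsd)

def ground_distance(p1, p2, origin=(500000.0, 5400000.0), gsd=0.1):
    """Horizontal distance in ground units between two pixel positions."""
    x1, y1 = pixel_to_world(p1[0], p1[1], origin[0], origin[1], gsd)
    x2, y2 = pixel_to_world(p2[0], p2[1], origin[0], origin[1], gsd)
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

# 100 pixels apart at 0.1 m GSD correspond to 10 m on the ground,
# regardless of where in the True Ortho the measurement is taken
```

In a perspective aerial image the same pixel distance would correspond to different ground distances depending on terrain height and position in the frame; the True Ortho removes that dependence.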
Comparison between classical Ortho (left) and True Ortho (right) and the impact of Building Lean on the building footprints:
The True Ortho produced by SURE draws its strength from the methods of data processing: the precise Dense Image Matching and the efficient DSM filtering produce well-defined sharp edges, whereas the smart combination of images yields a crisp texture. Additionally, this product benefits from the Global Color Balancing feature, which evens out differences in color shades between the input images caused by changing atmospheric conditions, camera exposure and orientation, etc. The algorithm producing the True Ortho does not rely on cut lines or seam line editing, rendering the process fully automatic. All these features make it a fitting solution for representing densely urbanized areas as well as rural or natural landscapes.
True Orthophotos are the optimal base for mapping applications and can be adopted as the most intuitive background for GIS operations. In the SURE package, the True Ortho comes together with the DSM, so applications can benefit from the combination of elevation and spectral data to accurately represent real-world objects. Examples where this is particularly useful include Deep Learning based object detection, semantic segmentation, temporal analysis and others.
Meshes are an intuitive way of representing arbitrarily shaped surfaces. Complementary to the geometric features of the surfaces they represent, the mesh models include texture applied on their faces from the input images.
A DSM Mesh contains the DSM data converted into a triangle mesh. The triangle density is locally adapted to the geometry: areas of increased undulation are represented by smaller triangles, whereas flat areas are represented by larger triangles. This product is, by definition, a 2.5D Mesh, as it has the DSM elevation data as the source for the geometry of the surface.
In the case of nadir-only projects, the DSM Mesh can be the optimal solution for representing city models. In such circumstances, it is typically challenging to retrieve reliable data on building facades or, in general, on vertical surfaces. By its nature, the DSM Mesh closes all potential data gaps in these areas. Moreover, the texture from the input images is accurately applied to the entire Mesh surface, giving it a realistic aspect.
SURE exports the DSM Mesh in the formats osgb, cesium, i3s (slpk), obj, lod_dae (collada) and lod_obj. The LOD (Level-Of-Detail) structure of the corresponding formats enables performant loading and web streaming of large models. This optimizes the experience of working with scalable Mesh models in various applications: city modelling, infrastructure planning and optimization (5G network design), the insurance domain, etc.
3D Point Cloud
SURE Point Clouds represent an optimized outcome of Dense Image Matching. Each successfully matched pixel from the input images generates a 3D point, leading to a complete and dense surface reconstruction of the captured area. Subsequently, the point clouds undergo a filtering step, turning them into an accurate, light and noise reduced result.
The special algorithms employed ensure the representation of captured objects with high detail and sharp edges, in a true 3D manner. Not only ground and rooftop information are accurately preserved, but also building facades, as well as thin structures.
Data are stored in small, manageably sized las or laz files. The point clouds are colorized, as spectral information (typically RGB values) is transferred from the images that initially produced the points. Apart from coordinates, the points also store meta-information relating to the positioning quality.
All these features make this product ready to use for applications such as: web streaming of city point clouds datasets, object extraction and classification, infrastructure modelling, etc.
The 3D Mesh is an intuitive representation of the objects at a 1:1 scale. Triangle faces are arranged to describe the geometry of the captured surface, also carrying texture information from the input images.
SURE can generate the 3D Mesh from the filtered Dense Image Matching points, from LiDAR data only, or from a combination of both.
Given its 3D nature, the Mesh preserves details on building facades and features under bridges, trees or suspended structures. Straight edges, geometric accuracy and photo-consistent texture determine the realistic aspect of the surface model, when viewed from every angle.
The Mesh provides exceptional performance while loading or web streaming even large datasets. This is due to its optimized structure and organization into small files.
The various formats in which this product can be exported (osgb, cesium, slpk, lod_obj, lod_dae, obj and dae) allow for interaction with the models on Desktop applications and web-based 3D viewers.
This type of output facilitates various applications such as smart city planning, architectural visualization, cultural preservation, risk analysis, the insurance industry, segmentation for feature extraction, etc.