Glossary
This glossary provides a non-exhaustive list of terms used in the MapillaryJS documentation and codebase.
Camera Calibration
Camera calibration is the process of estimating the parameters of a camera model to approximate the physical (or virtual) camera that captured a set of images.
Camera Capture
A still image captured by a camera. It can be a photograph, but also a frame extracted from a video. The still images are used as background textures in the street imagery map.
Camera Controls
Specifies different modes for how the viewer's virtual camera is controlled through pointer, keyboard, or other modes of input. Custom camera controls allow the API user to freely move the viewer's camera and define the camera projection used.
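A minimal sketch of selecting a camera control mode when creating a viewer. It assumes the `CameraControls` enum and the `cameraControls` viewer option of the MapillaryJS 4.x API; the access token, container id, and image id below are placeholders.

```ts
import { CameraControls, Viewer } from "mapillary-js";

// Assumed 4.x option: cameraControls selects the control mode
// (Street, Earth, or Custom) of the viewer's virtual camera.
const viewer = new Viewer({
  accessToken: "<your access token>",
  container: "mly-container",
  imageId: "<your image id>",
  cameraControls: CameraControls.Earth,
});
```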
Camera Frame
A three-dimensional visual representation of the camera used to capture a still image. Different projection types and camera parameters can be visualized in different ways to indicate the underlying camera model and parameters.
Custom Renderer
A custom renderer can be implemented to superimpose any geo-anchored 3D content on the street-level imagery. You can render 3D models of any format and create animations in the undistorted 3D space of MapillaryJS. You can even create 3D content editor functionality directly in the viewer.
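A skeletal sketch of a custom renderer, assuming the renderer hook names (`onAdd`, `onReference`, `onRemove`, `render`) and the `addCustomRenderer` method of the MapillaryJS 4.x API; the WebGL drawing itself is omitted and the ids are placeholders.

```ts
import { RenderPass, Viewer } from "mapillary-js";

// Skeletal renderer object following the assumed custom renderer contract.
const flatRenderer = {
  id: "flat-renderer",
  renderPass: RenderPass.Opaque,
  onAdd(viewer, reference, context) {
    // Create buffers, shaders, and scene objects relative to `reference`.
  },
  onReference(viewer, reference) {
    // Re-anchor scene objects when the geographic reference changes.
  },
  onRemove(viewer, context) {
    // Dispose GPU resources.
  },
  render(context, viewMatrix, projectionMatrix) {
    // Draw geo-anchored content using the provided matrices.
  },
};

const viewer = new Viewer({
  accessToken: "<your access token>",
  container: "mly-container",
  imageId: "<your image id>",
});
viewer.addCustomRenderer(flatRenderer);
```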
Data Provider
Write a data provider to render your own 3D reconstruction data of any format in MapillaryJS. You can use the data provider API to provide data in the MapillaryJS ent format. The data can come from anywhere, e.g. service APIs, JSON files, or even be generated procedurally in the browser.
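A rough sketch of wiring a custom provider into the viewer, assuming the `DataProviderBase` class and the `dataProvider` viewer option of the MapillaryJS 4.x API; the class name is illustrative, the overridden methods are only hinted at in comments, and constructor details (such as a geometry provider argument) are not shown.

```ts
import { DataProviderBase, Viewer } from "mapillary-js";

// Sketch of a provider serving custom reconstruction data.
class ProceduralProvider extends DataProviderBase {
  // Override the data fetching methods of DataProviderBase to return
  // ent-format objects from your own source (a service API, JSON files,
  // or data generated procedurally in the browser).
}

const viewer = new Viewer({
  container: "mly-container",
  dataProvider: new ProceduralProvider(),
  imageId: "<an image id known to the provider>",
});
```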
Distortion
Distortion is a deviation from rectilinear projection, that is, a projection in which straight lines in a scene remain straight in the image.
Distort
When a scene of the real 3D world is captured with cameras, it is projected onto 2D textures. Depending on the camera, this process can introduce some errors. One of them is radial distortion, which causes straight lines in the real world to look bent in the 2D image.
Undistort
Using computer vision, it is possible to compensate for radial distortion.
Radial distortion is different for every camera. To reconstruct a good 3D model from 2D images we need to determine the distortion. Mapillary uses OpenSfM as the technology for 3D reconstruction, which calculates radial distortion parameters during the process. Using the calibration parameters, image textures can be undistorted on the fly in MapillaryJS. The result is that lines that are straight in the real world are also rendered straight in MapillaryJS. Another effect is that image borders are no longer straight after undistorting the image.
Besides making images look more realistic, undistortion also has several other benefits. In general, image pixels are now correctly related to 3D positions in the viewer. This results in better alignment between the pixels of different images and, therefore, smoother transitions and fewer artifacts when navigating between images.
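As an illustration, the sketch below applies and inverts a common two-parameter radial distortion model (as used by, for example, the OpenSfM perspective camera). In a real pipeline the parameters k1 and k2 come from calibration during reconstruction; the values are not defined here.

```ts
// Apply two-parameter radial distortion to normalized image plane
// coordinates (x, y) of an ideal pinhole camera.
function distort(x: number, y: number, k1: number, k2: number): [number, number] {
  const r2 = x * x + y * y;
  const factor = 1 + k1 * r2 + k2 * r2 * r2;
  return [factor * x, factor * y];
}

// Undistortion inverts that mapping. A simple fixed-point iteration
// works because the distortion is usually a small correction.
function undistort(xd: number, yd: number, k1: number, k2: number): [number, number] {
  let x = xd;
  let y = yd;
  for (let i = 0; i < 10; i++) {
    const r2 = x * x + y * y;
    const factor = 1 + k1 * r2 + k2 * r2 * r2;
    x = xd / factor;
    y = yd / factor;
  }
  return [x, y];
}
```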
Geographic Anchor
A geographic anchor identifies a geographic location and an orientation using latitude, longitude, altitude, and rotation data. 3D models or AR effects can be geo-anchored in the undistorted 3D space in MapillaryJS.
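A sketch of turning a geographic anchor into local coordinates, assuming the `geodeticToEnu` helper exported by mapillary-js with the argument order (lng, lat, alt, reference lng, reference lat, reference alt); the anchor and reference values, and the rotation field name, are made up for the example.

```ts
import { geodeticToEnu } from "mapillary-js";

// A geographic anchor: position plus an orientation (here a rotation
// around the up axis in degrees; the field name is illustrative).
const anchor = { lng: 12.9716, lat: 55.605, alt: 25.0, rotationZ: 90 };

// Hypothetical geographic reference of the viewer's local coordinate system.
const reference = { lng: 12.971, lat: 55.6045, alt: 0.0 };

// Assumed helper: converts the anchor to local east-north-up coordinates
// relative to the reference, so a 3D model can be placed in the
// undistorted 3D space.
const [east, north, up] = geodeticToEnu(
  anchor.lng, anchor.lat, anchor.alt,
  reference.lng, reference.lat, reference.alt);
```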
Image
The Image is the main MapillaryJS entity. An image consists of the texture of a camera capture, metadata associated with that camera capture, and artifacts derived from the camera capture itself or the group of adjacent camera captures.
Image Tile
2D world maps are divided into tile sets with different levels of detail. When zooming in on a specific city on the map, higher-resolution tiles are loaded and more details appear. In the same way, a high-resolution image can be tiled into smaller pieces. With image tiling it is possible to view every pixel and detail of the original photo without having to load every part of the image at once.
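A generic illustration of how a tile pyramid relates image size, level of detail, and tile count; it is not the exact tiling scheme used by MapillaryJS.

```ts
// At each level the image is halved in resolution, and each level is
// cut into fixed-size tiles (level 0 is full resolution).
function tileGrid(width: number, height: number, level: number, tileSize = 512) {
  const scale = 2 ** level;
  const w = Math.ceil(width / scale);
  const h = Math.ceil(height / scale);
  return {
    columns: Math.ceil(w / tileSize),
    rows: Math.ceil(h / tileSize),
  };
}

// An 8192x6144 capture needs a 16x12 grid of 512 px tiles at full
// resolution, but only a single tile at level 4.
tileGrid(8192, 6144, 0); // { columns: 16, rows: 12 }
tileGrid(8192, 6144, 4); // { columns: 1, rows: 1 }
```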
Street Imagery Map
A three-dimensional map where the primary navigation and point of view are from the street perspective. The map is visualized through textures and geo-spatial data. MapillaryJS is an example of a street imagery map.
Projection
When talking about projection, we usually refer to the case of an approximated ideal pinhole camera. The camera projects coordinates of a point in three-dimensional space onto its image plane. This mapping can be described by the camera matrix.
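A minimal worked example of pinhole projection: the focal lengths and principal point of the camera matrix map a 3D point given in camera coordinates (z pointing forward) to a pixel. The intrinsic values are illustrative.

```ts
// Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
function project(
  [x, y, z]: [number, number, number],
  fx: number, fy: number, cx: number, cy: number,
): [number, number] {
  const u = fx * (x / z) + cx;
  const v = fy * (y / z) + cy;
  return [u, v];
}

project([1, 0.5, 4], 1000, 1000, 640, 480); // -> [890, 605]
```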
Types
Many different projection models exist. The ones most commonly used with MapillaryJS are perspective, fisheye, and spherical (or equirectangular) projections.
Unprojection
Unprojection is the inverse of projection. It is a mapping from points on the two-dimensional image plane to the three-dimensional scene.
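A matching sketch of unprojection for the pinhole case: because a pixel only defines a ray, a depth value along the camera's forward axis is needed to recover a unique 3D point. The intrinsics are the same illustrative values as in the projection example above.

```ts
// Inverse of pinhole projection: x = (u - cx) / fx * depth, and
// likewise for y; z equals the chosen depth.
function unproject(
  [u, v]: [number, number],
  depth: number,
  fx: number, fy: number, cx: number, cy: number,
): [number, number, number] {
  const x = ((u - cx) / fx) * depth;
  const y = ((v - cy) / fy) * depth;
  return [x, y, depth];
}

unproject([890, 605], 4, 1000, 1000, 640, 480); // -> [1, 0.5, 4]
```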
Viewer
The Viewer object represents the visible, navigable, and interactive imagery component. When you integrate the MapillaryJS street imagery map into your application, you always instantiate a new viewer with a set of options. You use the viewer API to affect the viewer programmatically.
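A minimal example of instantiating a viewer and then using its API; the access token, container id, and image ids are placeholders, and the `moveTo` call assumes the MapillaryJS 4.x method for navigating to an image.

```ts
import { Viewer } from "mapillary-js";

// Instantiate the viewer with a set of options.
const viewer = new Viewer({
  accessToken: "<your access token>",
  container: "mly-container",
  imageId: "<the id of the image to show first>",
});

// Affect the viewer programmatically, e.g. navigate to another image.
viewer.moveTo("<another image id>").catch((error) => console.warn(error));
```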
Space
Real 3D World
The space that we live in. A scene of the real 3D world is captured with cameras when mapping.
Distorted 2D Projection
The projection of the camera captures. This is a two-dimensional space, the flat texture. Projection types for images uploaded to Mapillary are generally perspective, fisheye, or equirectangular. All projection types have some distortion. For perspective and fisheye images the distortion is radial. For equirectangular images the distortion comes from the representation itself.
Undistorted 3D Space
The rendered space in MapillaryJS where textures are undistorted according to their calibration parameters. This space is three-dimensional and its aim is to represent the real 3D world as accurately as possible. In this space, equirectangular (panoramic) images are rendered as spheres.