
dc.contributor.author       Eiderström Swahn, Linus
dc.contributor.author       Pohl, Pontus
dc.date.accessioned         2019-11-12T10:15:16Z
dc.date.available           2019-11-12T10:15:16Z
dc.date.issued              2019-11-12
dc.identifier.uri           http://hdl.handle.net/2077/62437
dc.description.abstract     Simultaneous Localization and Mapping (SLAM) is a technique frequently used in the area of self-driving cars for mapping and odometry. SLAM has traditionally been performed using laser-based range finders of the light detection and ranging (LIDAR) type. Due to the high cost of these sensors, there is currently a trend towards implementing visually based SLAM systems using cameras as sensory input. This thesis explores the possibility of integrating a visual-SLAM component into an automotive framework, as well as how this visual SLAM compares to LIDAR-based SLAM techniques. Using a state-of-the-art visual-SLAM algorithm, ORB-SLAM2, we implement and evaluate a modern visual-SLAM solution within the OpenDLV framework by performing a Design Science Research (DSR) study with the goal of implementing a microservice containing the ORB-SLAM2 algorithm inside OpenDLV. The software artifact resulting from the DSR study is then evaluated using the evaluation methodology included in the KITTI visual odometry benchmark. Based on the results of this evaluation, we conclude that the ORB-SLAM2 algorithm can successfully be integrated into the OpenDLV framework and that it is a possible replacement for LIDAR-based SLAM. [sv]
dc.language.iso             eng [sv]
dc.title                    Visual SLAM in an automotive context: [sv]
dc.type                     text
dc.setspec.uppsok           Technology
dc.type.uppsok              M2
dc.contributor.department   Göteborgs universitet/Institutionen för data- och informationsteknik [swe]
dc.contributor.department   University of Gothenburg/Department of Computer Science and Engineering [eng]
dc.type.degree              Student essay
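
The abstract above refers to the evaluation methodology of the KITTI visual odometry benchmark. As a rough, non-authoritative illustration of that kind of evaluation, the sketch below estimates the average relative translational drift between an estimated trajectory and ground truth over fixed-length segments. The file paths, the 100 m segment length, and the helper functions are assumptions made for this example; this is neither the official KITTI devkit nor the thesis implementation.

# Illustrative sketch (assumptions noted above): average relative
# translational drift of an estimated trajectory versus ground truth,
# in the spirit of the KITTI visual odometry evaluation.
import numpy as np

def load_poses(path):
    # KITTI pose files store one flattened 3x4 [R|t] matrix per line.
    poses = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            T = np.eye(4)
            T[:3, :4] = np.array(line.split(), dtype=float).reshape(3, 4)
            poses.append(T)
    return poses

def cumulative_distances(poses):
    # Distance travelled along the trajectory up to each frame.
    dists = [0.0]
    for a, b in zip(poses[:-1], poses[1:]):
        dists.append(dists[-1] + float(np.linalg.norm(b[:3, 3] - a[:3, 3])))
    return dists

def mean_translational_drift(gt, est, segment=100.0):
    # For each start frame, find the frame roughly `segment` metres further
    # along the ground truth and compare the relative motions.
    dists = cumulative_distances(gt)
    errors = []
    for i in range(len(gt)):
        j = next((k for k in range(i, len(gt)) if dists[k] - dists[i] >= segment), None)
        if j is None:
            break
        rel_gt = np.linalg.inv(gt[i]) @ gt[j]
        rel_est = np.linalg.inv(est[i]) @ est[j]
        err = np.linalg.inv(rel_est) @ rel_gt
        errors.append(float(np.linalg.norm(err[:3, 3])) / segment)
    return float(np.mean(errors)) if errors else float("nan")

if __name__ == "__main__":
    gt = load_poses("poses/00.txt")      # assumed ground-truth pose file
    est = load_poses("results/00.txt")   # assumed ORB-SLAM2 trajectory output
    print(f"average translational drift: {100.0 * mean_translational_drift(gt, est):.2f} %")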

