dc.contributor.author | PRADHAN, SHAMEER KUMAR | |
dc.contributor.author | TUNGAL, SAGAR | |
dc.date.accessioned | 2021-09-28T13:25:40Z | |
dc.date.available | 2021-09-28T13:25:40Z | |
dc.date.issued | 2021-09-28 | |
dc.identifier.uri | http://hdl.handle.net/2077/69724 | |
dc.description.abstract | Large volumes of data are generated by different systems. Intelligent systems such as autonomous driving systems use these large volumes of data to train their artificial intelligence models. However, good quality data is one of the foremost needs of any system to function in an effective and safe manner. Especially in critical systems such as those related to autonomous driving, data quality becomes sacrosanct, as faults in such systems could result in fatal accidents. In this thesis, a Design Science Research study is conducted to identify challenges related to the data quality of a distributed deep learning system. The challenges are identified by conducting interviews with five experts from the autonomous driving domain as well as through a literature review. The challenges and their severity are validated using a survey. After identification of the challenges, five artifact components are developed that relate to assessing and improving data quality. The artifact components include the Data Quality Workflow, List of Challenges, List of Data Quality Attributes, List of Data Quality Attribute Metrics, and Potential Solutions. The abstract artifact components and concrete implementations of those components are devised and validated using a second round of interviews. In the third iteration of this study, the final artifact components are validated through a focus group session with experts and a survey. Furthermore, the artifact also presents information on which challenges affect which data quality attributes. This association between challenges and attributes is also validated in the focus group session. The results show that most of the challenge-attribute associations presumed by the researchers of this thesis are valid. Similarly, the templates developed for the artifact components are regarded as appropriate as well. A contribution of this thesis towards the body of software engineering and requirements engineering research is the comprehensive and unified "Data Quality Assessment and Maintenance Framework" developed as a series of artifact components. This framework can be used by researchers and practitioners to improve processes related to data quality as well as to enhance the data quality of the systems they develop. | sv |
dc.language.iso | eng | sv |
dc.subject | Data quality | sv |
dc.subject | Data | sv |
dc.subject | Data quality attributes | sv |
dc.subject | Data quality challenges | sv |
dc.subject | Data quality workflow | sv |
dc.subject | Data quality assessment | sv |
dc.subject | Data quality maintenance | sv |
dc.subject | Design science research | sv |
dc.subject | Artifacts | sv |
dc.subject | Template | sv |
dc.subject | Deep learning | sv |
dc.subject | Distributed architecture | sv |
dc.subject | Distributed deep learning architecture | sv |
dc.subject | Advanced driver assistance systems | sv |
dc.title | Quality Attributes of Data in Distributed Deep Learning Architectures | sv |
dc.type | text | |
dc.setspec.uppsok | Technology | |
dc.type.uppsok | H2 | |
dc.contributor.department | Göteborgs universitet/Institutionen för data- och informationsteknik | swe |
dc.contributor.department | University of Gothenburg/Department of Computer Science and Engineering | eng |
dc.type.degree | Student essay | |