dc.description.abstract | This report describes the implementation and evaluation of two natural language
models using deep learning. More specifically, two recurrent neural network (RNN) models capable of generating news article headlines were implemented. One model focused on the
generation of random (unconditioned) headlines, and the other on the generation of headlines conditioned on a given news article. Both models were
then trained and evaluated on a data set of approximately 500,000 pairs of news
articles and their corresponding headlines. The task of summarizing large bodies of text into smaller ones, while maintaining the key points of the original text, has many applications. Quickly and automatically obtaining condensed versions of, for example, medical journals, scientific papers, and news articles can be of great value to the consumers of such content. The unconditioned model, implemented using a
multi-layer RNN consisting of LSTM cells, was able to produce headlines of moderate
plausibility, the majority of which were syntactically correct. The conditioned model was implemented using two RNNs consisting of GRU cells in an encoder-decoder network
with an attention mechanism, allowing the network to learn which words to focus on when generating a headline. Although the model managed to identify important keywords in the articles, it seldom managed to produce meaningful sentences from them. We conclude that the techniques and models described in this report can be used to generate plausible news headlines. However, for the purpose of generating
conditioned headlines, we believe that additional modifications are needed to obtain
satisfactory results. | sv |