Do Scaling Algorithms Preserve Word2Vec Semantics? A Case Study for Medical Entities

Publication Type: Conference Paper
Year of Publication: 2018
Authors: Wawrzinek, J., J. M. G. Pinto, P. Markiewka, and W.-T. Balke
Conference Name: 13th International Conference on Data Integration in the Life Sciences (DILS)
Date Published: 11/2018
Conference Location: Hannover, Germany
Abstract

The exponential increase of scientific publications in the biomedical field challenges access to scientific information, which is primarily encoded in semantic relationships between medical entities such as active ingredients, diseases, or genes. Neural language models such as Word2Vec offer new ways of automatically learning semantically meaningful entity relationships even from large text corpora; they scale well and deliver better accuracy than comparable approaches. Still, the models first have to be tuned by testing different training parameters. Arguably the most critical parameter is the number of training dimensions for the neural network, and testing different numbers of dimensions individually is time-consuming: a single training iteration on a large corpus usually takes hours or even days. In this paper we show a more efficient way to determine the optimal number of dimensions with respect to quality measures such as precision and recall. We show that the quality of results obtained with simpler, easier-to-compute scaling approaches such as MDS or PCA correlates strongly with the quality to be expected from Word2Vec trained with the same number of dimensions. The savings are even greater when, after the initial Word2Vec training, only a limited number of entities and their respective relations are of interest.
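As a rough illustration of the approach described above, the following sketch scales an already-trained Word2Vec model down to several candidate dimensionalities with PCA instead of retraining once per dimensionality. It is a minimal sketch under stated assumptions, not the paper's implementation: the file names, the entity list, and the `evaluate` function are illustrative placeholders, and gensim and scikit-learn are assumed to be installed.

```python
# Sketch: estimate how result quality varies with embedding dimensionality
# by scaling one trained Word2Vec model with PCA, rather than retraining
# Word2Vec once per candidate dimension (hours to days per run).
# Assumptions (not from the paper): a trained model in "model.bin", the
# medical entities of interest in "entities.txt", and a hypothetical
# `evaluate` function returning precision/recall for a set of vectors.

import numpy as np
from gensim.models import KeyedVectors
from sklearn.decomposition import PCA

model = KeyedVectors.load_word2vec_format("model.bin", binary=True)
entities = [e for e in open("entities.txt").read().split() if e in model]
vectors = np.array([model[e] for e in entities])

for k in (50, 100, 200, 300):
    # PCA requires k <= min(number of entities, original dimensionality).
    if k > min(vectors.shape):
        continue
    # Project the trained vectors down to k dimensions. Per the paper,
    # quality measured on these scaled vectors correlates strongly with
    # the quality of a Word2Vec model actually trained with k dimensions.
    # sklearn.manifold.MDS would be the alternative scaling approach.
    reduced = PCA(n_components=k).fit_transform(vectors)
    # precision, recall = evaluate(reduced, entities)  # hypothetical
```

Because scaling the restricted entity set takes seconds, the sweep over candidate dimensionalities can stand in for several full Word2Vec retraining runs.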

Attachment: Camera-Ready of DILS2018 Paper 7.pdf (862.63 KB)