dc.description.abstract | Representing text documents as vectors, i.e. in numerical form, has been a revolution in
natural language processing. Such representations place similar pieces of text close to each other
in the vector space, which makes it easy to classify them or to find similarities among them.
These vectors also capture the way words or parts of documents are used, which helps in finding
similarity even between pairs of words. While word2vec represents each individual word as a
vector, doc2vec takes this a step further by representing a whole sentence or document as a
single vector. Representing an entire document as one vector allows a substantial number of
words or sentences to be compared at a time, which can save computational power as well as
bandwidth. This relatively new doc2vec technique has not yet been applied to Bengali sentiment
analysis, and its feasibility for the task is also unknown. In this study, we have trained doc2vec
and word2vec models on a corpus of 10,500 Bengali documents. The corpus consists of three
classes of data differentiated by polarity, i.e. positive, negative, and neutral. We have then
employed several machine learning algorithms and compared their classification accuracy. To
evaluate the classifiers' performance, we have applied k-fold cross-validation, using both
document vectors obtained directly from the doc2vec model and TF-IDF-weighted averaged
document vectors derived from the word2vec model. | en_US |
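
The following is a minimal sketch, not taken from the thesis itself, of how the two document representations described in the abstract could be built with gensim 4.x and scikit-learn. The tiny placeholder corpus, the tokenisation, and the hyperparameters (vector_size=100, window=5, epochs=20) are illustrative assumptions rather than the authors' actual settings.

    # Sketch: doc2vec document vectors vs. TF-IDF-weighted averaged word2vec vectors.
    # Assumes gensim 4.x and scikit-learn; all names and parameters are illustrative.
    import numpy as np
    from gensim.models import Word2Vec
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Placeholder corpus: pre-tokenised Bengali documents with polarity labels.
    docs = [["ভাল", "লাগল"],          # "felt good"
            ["খারাপ", "অভিজ্ঞতা"]]     # "bad experience"
    labels = ["positive", "negative"]

    # --- Representation 1: document vectors taken directly from a trained doc2vec model ---
    tagged = [TaggedDocument(words=toks, tags=[i]) for i, toks in enumerate(docs)]
    d2v = Doc2Vec(tagged, vector_size=100, window=5, min_count=1, epochs=20)
    doc2vec_features = np.vstack([d2v.dv[i] for i in range(len(docs))])

    # --- Representation 2: TF-IDF-weighted average of word2vec word vectors ---
    w2v = Word2Vec(docs, vector_size=100, window=5, min_count=1, epochs=20)
    tfidf = TfidfVectorizer(analyzer=lambda toks: toks)   # documents are already tokenised
    tfidf_matrix = tfidf.fit_transform(docs)
    vocab = tfidf.vocabulary_                             # token -> column index

    def weighted_doc_vector(tokens, row):
        """Average a document's word vectors, weighted by their TF-IDF scores."""
        vecs, weights = [], []
        for tok in tokens:
            if tok in w2v.wv and tok in vocab:
                vecs.append(w2v.wv[tok])
                weights.append(row[0, vocab[tok]])
        if not vecs:
            return np.zeros(w2v.vector_size)
        return np.average(vecs, axis=0, weights=weights)

    word2vec_features = np.vstack(
        [weighted_doc_vector(toks, tfidf_matrix[i]) for i, toks in enumerate(docs)]
    )

Here the doc2vec vectors come straight from the trained model's document embeddings, whereas the word2vec representation averages each document's word vectors weighted by their TF-IDF scores, mirroring the two representations compared in the abstract.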
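
Similarly, a hedged sketch of the k-fold cross-validation step, assuming scikit-learn. The classifier shown (logistic regression) and k = 10 are assumptions for illustration only, since the abstract does not name the specific algorithms or the value of k.

    # Sketch: k-fold cross-validated accuracy for one document-vector representation.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    def evaluate(features, labels, k=10):
        """Return mean k-fold cross-validated accuracy for the given feature matrix."""
        clf = LogisticRegression(max_iter=1000)   # stand-in for any of the classifiers compared
        cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
        return cross_val_score(clf, features, labels, cv=cv, scoring="accuracy").mean()

    # On the full 10,500-document corpus, the two representations from the previous
    # sketch would be compared, e.g.:
    #   evaluate(doc2vec_features, labels, k=10)
    #   evaluate(word2vec_features, labels, k=10)

Running the same cross-validation protocol on both feature sets keeps the comparison fair: only the document representation changes, not the classifier or the data splits.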