Since you are using gensim, you should probably use its doc2vec implementation. doc2vec is an extension of word2vec to the phrase, sentence, and document level. It is a fairly straightforward extension, described here:
http://cs.stanford.edu/~quocle/paragraph_vector.pdf
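For instance, training a Doc2Vec model takes only a few lines. Below is a minimal sketch using the gensim 4.x API; the tiny corpus and the hyperparameters are placeholders for illustration, not recommendations:

```python
# Minimal Doc2Vec training sketch (gensim 4.x API).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    "the cat sat on the mat",
    "dogs chase cats",
    "word embeddings capture meaning",
]

# Each document gets a unique tag; gensim learns one vector per tag.
documents = [TaggedDocument(words=text.split(), tags=[i])
             for i, text in enumerate(corpus)]

model = Doc2Vec(documents, vector_size=100, window=5, min_count=1,
                epochs=20, workers=4)

# Vector for a training document, and inference for unseen text.
print(model.dv[0])
print(model.infer_vector("cats and dogs".split()))
```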
Gensim is nice because it is intuitive, fast, and flexible. What is great is that you can grab the pretrained word embeddings from the official word2vec page (GoogleNews-vectors-negative300.bin.gz), and the syn0 layer of gensim's Doc2Vec model is exposed so that you can seed the word embeddings with these high-quality vectors!
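As a hedged sketch of that seeding step: it assumes you have downloaded the GoogleNews file, and note that the exact home of intersect_word2vec_format has moved between gensim versions (on the model object in 3.x, on model.wv in recent releases):

```python
# Sketch: seed Doc2Vec's word weights (the syn0 layer in older gensim)
# with the pretrained 300-dim GoogleNews vectors before training.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

documents = [TaggedDocument(words=["some", "training", "text"], tags=[0])]

model = Doc2Vec(vector_size=300, min_count=1)  # must match the 300-dim file
model.build_vocab(documents)

# Overwrite the randomly initialized vectors for every vocabulary word
# that also appears in the GoogleNews file; lockf=1.0 lets those vectors
# keep updating during training (lockf=0.0 would freeze them).
model.wv.intersect_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True, lockf=1.0)

model.train(documents, total_examples=model.corpus_count,
            epochs=model.epochs)
```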
I think gensim is by far the easiest (and, so far, the best for me) tool for embedding sentences in a vector space.
There are other ways of getting sentence vectors than the one proposed in the paper by Le and Mikolov above. Socher and Manning from Stanford are certainly two of the most famous researchers working in this area. Their work is based on the principle of compositionality: the semantics of a sentence come from:

1. the semantics of its words
2. the rules for how these words interact and combine into phrases
They have proposed several such models (getting progressively more complex) for how to use composition to build sentence-level representations; a toy sketch of the composition idea follows the list below.
2011 - unfolding recursive autoencoder (URAE; comparatively simple, start here if interested)
2012 - matrix-vector neural network
2013 - neural tensor network
2015 - Tree LSTM
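To make the compositional principle concrete, here is a toy numpy sketch (not Socher's actual code; the composition matrix and word vectors are random stand-ins, where in practice they would be learned) of how a recursive network builds a phrase vector bottom-up along a parse tree:

```python
# Toy illustration of compositionality in recursive networks:
# a parent phrase vector is computed from its two children
# with a single shared weight matrix.
import numpy as np

d = 4                                # toy embedding dimension
rng = np.random.default_rng(0)
W = rng.standard_normal((d, 2 * d))  # composition matrix (learned in practice)
b = np.zeros(d)

def compose(left, right):
    """Parent = tanh(W [left; right] + b), as in a basic recursive net."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Word vectors for "very good movie" (random stand-ins here).
very, good, movie = (rng.standard_normal(d) for _ in range(3))

# Parse tree (very (good movie)): compose bottom-up along the tree.
good_movie = compose(good, movie)
sentence_vec = compose(very, good_movie)
print(sentence_vec)
```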
His papers are all available at socher.org. Some of these models come with code, but I would still recommend gensim's doc2vec. For one thing, the 2011 URAE is not particularly powerful. In addition, it comes pretrained with weights suited for paraphrasing news-style data. The code he provides does not allow you to retrain the network. You also cannot swap in different word vectors, so you are stuck with the pre-word2vec embeddings from Turian (2011). Those vectors are certainly not on the level of word2vec's or GloVe's.
I have not worked with the Tree LSTM yet, but it looks very promising!
tl;dr: Yes, use gensim's doc2vec. But other methods do exist!