How to vectorize labeled bigrams with scikit-learn?

I am teaching myself how to use scikit-learn, and I decided to try the second task but with my own corpus. I extracted a few bigrams manually, say:

training_data = [[('this', 'is'), ('is', 'a'), ('a', 'text'), 'POS'],
                 [('and', 'one'), ('one', 'more'), 'NEG'],
                 [('and', 'other'), ('one', 'more'), 'NEU']]

I would like to vectorize them into a format that can be fed to one of the classification algorithms provided by scikit-learn (SVC, multinomial naive Bayes, etc.). This is what I tried:

from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer(analyzer='word')

X = count_vect.transform(((' '.join(x) for x in sample)
                  for sample in training_data))

print(X.toarray())

The problem is that I don't know how to handle the labels (i.e. 'POS', 'NEG', 'NEU'). Do I also need to "vectorize" the labels before passing training_data to the classification algorithm, or can I just leave them as strings like 'POS'? Another problem is that I get the following error:

raise ValueError("Vocabulary wasn't fitted or is empty!")
ValueError: Vocabulary wasn't fitted or is empty!

Should I instead be converting training_data with DictVectorizer or sklearn-pandas, or is there a better approach?


You can feed the bigram tuples to CountVectorizer directly by overriding its preprocessor and tokenizer with identity functions, so each tuple is treated as a single token:

>>> training_data = [[('this', 'is'), ('is', 'a'),('a', 'text'), 'POS'],
                 [('and', 'one'), ('one', 'more'), 'NEG'],
                 [('and', 'other'), ('one', 'more'), 'NEU']]
>>> count_vect = CountVectorizer(preprocessor=lambda x:x,
                                 tokenizer=lambda x:x)
>>> X = count_vect.fit_transform(doc[:-1] for doc in training_data)

>>> print(count_vect.vocabulary_)
{('and', 'one'): 1, ('a', 'text'): 0, ('is', 'a'): 3, ('and', 'other'): 2, ('this', 'is'): 5, ('one', 'more'): 4}
>>> print(X.toarray())
[[1 0 0 1 0 1]
 [0 1 0 0 1 0]
 [0 0 1 0 1 0]]

Extract the labels separately:

y = [doc[-1] for doc in training_data] # ['POS', 'NEG', 'NEU']
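The string labels can be used as-is, but if you ever need integer labels (some tools outside scikit-learn require them), LabelEncoder does the mapping both ways. A minimal sketch using the same three labels:

```python
from sklearn.preprocessing import LabelEncoder

y = ['POS', 'NEG', 'NEU']

le = LabelEncoder()
# Classes are sorted alphabetically: ['NEG', 'NEU', 'POS']
y_encoded = le.fit_transform(y)
print(list(y_encoded))                    # [2, 0, 1]
print(list(le.inverse_transform([2])))    # ['POS']
```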

Then fit a classifier; scikit-learn classifiers accept string labels directly, so no extra encoding is needed:

from sklearn.svm import SVC

model = SVC()
model.fit(X, y)
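Putting the pieces together, here is a self-contained sketch of the whole flow, including classifying a new document (the new bigram pair below is made up for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

training_data = [[('this', 'is'), ('is', 'a'), ('a', 'text'), 'POS'],
                 [('and', 'one'), ('one', 'more'), 'NEG'],
                 [('and', 'other'), ('one', 'more'), 'NEU']]

# Identity preprocessor/tokenizer so CountVectorizer treats each
# bigram tuple as a single token instead of splitting strings.
count_vect = CountVectorizer(preprocessor=lambda x: x,
                             tokenizer=lambda x: x)
X = count_vect.fit_transform(doc[:-1] for doc in training_data)
y = [doc[-1] for doc in training_data]

model = SVC()
model.fit(X, y)

# For unseen data, reuse the *fitted* vectorizer with transform()
# (not fit_transform) so the feature columns line up with training.
new_doc = [('this', 'is'), ('a', 'text')]
X_new = count_vect.transform([new_doc])
print(model.predict(X_new))
```

Calling transform() on the fitted vectorizer is also the fix for the original error: transform() requires a vocabulary learned by a prior fit() or fit_transform() call.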
