I built a semi-supervised version of the NLTK Naive Bayes classifier in Python, based on EM (the Expectation-Maximization algorithm). However, in some EM iterations the computed log-likelihood decreases, even though EM guarantees that the log-likelihood is non-decreasing at each iteration, so I believe there is an error in my code. After analyzing the code carefully, I have no idea why this is happening. I would greatly appreciate it if anyone could spot the error in my code below:
(Reference: semi-supervised Naive Bayes)
The main loop of the EM algorithm:
```python
# Initial assumptions:
# Bernoulli NB: only feature presence (value 1) or absence (value None) is computed
# Initial data:
# C: classifier trained with labeled data
# labeled_data: an array of tuples (feature dict, label)
# features: dictionary that outputs the feature dictionary for a given document id

for iteration in range(1, self.maxiter):

    # Expectation: compute probabilities for each class for each unlabeled document.
    # An array of tuples (feature dictionary, probability dist) is built.
    unlabeled_data = [(features[id], C.prob_classify(features[id])) for id in U]

    # Maximization: given the probability distributions of the previous step,
    # update label and feature-label counts and rebuild classifier C.
    # gen_freqdists is a custom function, see below;
    # gen_probdists is the original NLTK function.
    l_freqdist_act, ft_freqdist_act, ft_values_act = self.gen_freqdists(labeled_data, unlabeled_data)
    l_probdist_act, ft_probdist_act = self.gen_probdists(l_freqdist_act, ft_freqdist_act,
                                                         ft_values_act, ELEProbDist)
    C = nltk.NaiveBayesClassifier(l_probdist_act, ft_probdist_act)

    # Compute the log-likelihood.
    # NLTK Naive Bayes classifier's prob_classify gives logprob(class) + logprob(doc|class).
    # For labeled data, sum the logprobs output by the classifier for the label;
    # for unlabeled data, sum the logprobs output by the classifier for each label.
    log_lh = sum([C.prob_classify(ftdic).prob(label)
                  for (ftdic, label) in labeled_data])
    log_lh += sum([C.prob_classify(ftdic).prob(label)
                   for (ftdic, ignore) in unlabeled_data
                   for label in l_freqdist_act.samples()])

    # Continue until convergence.
    if log_lh_old == "first":
        if self.debug: print "\tM: #iteration 1", log_lh, "(FIRST)"
        log_lh_old = log_lh
    else:
        log_lh_diff = log_lh - log_lh_old
        if self.debug: print "\tM: #iteration", iteration, log_lh_old, "->", log_lh, "(", log_lh_diff, ")"
        if log_lh_diff < self.log_lh_diff_min:
            break
        log_lh_old = log_lh
```
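One thing worth checking in a loop like this is the difference between summing probabilities and summing log-probabilities: NLTK probability distributions expose both `prob()` and `logprob()`, and a true log-likelihood (a sum of log-probabilities) is always non-positive when every probability is at most 1. A minimal illustration, without NLTK and with made-up per-document probabilities:

```python
import math

# Hypothetical per-document probabilities, standing in for what
# C.prob_classify(ftdic).prob(label) would return; each lies in (0, 1].
doc_probs = [0.9, 0.7, 0.95, 0.6]

# Summing raw probabilities gives a positive number bounded by the
# number of documents; it is not a log-likelihood.
sum_of_probs = sum(doc_probs)

# Summing log-probabilities gives the log-likelihood, which is
# always <= 0 when every probability is <= 1.
log_likelihood = sum(math.log(p) for p in doc_probs)

print(sum_of_probs)    # positive
print(log_likelihood)  # non-positive
```

So a quick sanity check on any computed `log_lh` is its sign: a sum of genuine log-probabilities cannot be positive.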
The custom gen_freqdists function used to create the necessary frequency distributions:
```python
def gen_freqdists(self, instances_l, instances_ul):
    l_freqdist = FreqDist()
```
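For context on what such a frequency distribution has to accumulate, here is a hypothetical sketch (not the author's function, which is cut off above) of combining hard label counts from labeled documents with fractional, probability-weighted counts from unlabeled ones, using a plain dict in place of NLTK's FreqDist:

```python
from collections import defaultdict

def soft_label_counts(labeled, unlabeled, labels):
    """Accumulate label counts: hard counts from labeled documents,
    fractional (soft) counts from the class distributions of unlabeled ones."""
    counts = defaultdict(float)
    for _featdict, label in labeled:
        counts[label] += 1.0              # hard count: one vote per document
    for _featdict, dist in unlabeled:
        for label in labels:
            # dist is a plain {label: probability} dict here, standing in
            # for an NLTK probability distribution's prob() method
            counts[label] += dist[label]  # soft count: P(label | doc)
    return dict(counts)

counts = soft_label_counts(
    labeled=[({}, "pos"), ({}, "neg")],
    unlabeled=[({}, {"pos": 0.8, "neg": 0.2})],
    labels=["pos", "neg"],
)
# counts["pos"] is about 1.8, counts["neg"] about 1.2
```

The key point is that each unlabeled document contributes its full posterior to every label, so the soft counts over all labels still sum to one vote per document.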