Nominally, the alpha for a weak classifier with zero error should be large, because it correctly classifies all training instances. I assume you evaluate alpha on all of the training data. If you instead evaluate it only on a sample drawn for this boosting round, alpha should be slightly smaller to account for the sample size, but the idea is the same.
In theory, that alpha diverges to infinity, since the log term blows up as the error rate goes to zero. In practice, the suggestion to check whether the error is zero and assign such alphas a very high value is reasonable, but an error rate of zero or near zero usually indicates overfitting (or simply too little training data to estimate a reliable alpha).
This is discussed in section 4.2 of Schapire and Singer's confidence-rated predictions version of AdaBoost. They suggest adding a small epsilon to the numerator and denominator for numerical stability:
alpha = 0.5 * Math.log((1 - errorRate + epsilon) / (errorRate + epsilon))
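To see the effect of the smoothing, here is a minimal sketch in Java (the class name, the helper `alpha`, and the choice of epsilon are all illustrative, not from any library): with epsilon in place, a zero-error round yields a large but finite alpha instead of an infinite one, and ordinary rounds are barely affected.

```java
public class AlphaSmoothing {
    // Smoothed alpha: epsilon keeps the value finite when
    // errorRate is exactly 0 (or 1), per the formula above.
    static double alpha(double errorRate, double epsilon) {
        return 0.5 * Math.log((1.0 - errorRate + epsilon) / (errorRate + epsilon));
    }

    public static void main(String[] args) {
        double eps = 1e-10; // illustrative choice; could be tied to 1/(training set size)

        // Zero-error round: large but finite instead of +Infinity.
        System.out.println(alpha(0.0, eps));

        // Typical round (30% error): epsilon changes almost nothing.
        System.out.println(alpha(0.3, eps));
    }
}
```

Note that without epsilon, `alpha(0.0, 0.0)` would be `Math.log(Infinity) / 2`, i.e. infinite, which then poisons the weight updates for the next round.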
In any case, this alpha should not be set to a small value; it should be large. Setting it to 1 makes sense only if the alphas across all boosting rounds are normalized so that, for example, they sum to roughly 1.