In my company, we have a product that does this, and it works well. I did most of the work on it, so I can give a brief introduction:
You need to divide the paragraph into sentences, and then divide each sentence into smaller sub-sentences, splitting on commas, hyphens, semicolons, colons, "and", "or", and so on. In some cases, each sub-sentence can express a completely different sentiment.
Some sub-sentences, even if they could be separated, should be kept together.
For example: "the product is amazing, great and fantastic."
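A rough sketch of the splitting step might look like the following. This is just an illustration, assuming NLTK is installed; the delimiter list and `split_into_clauses` helper are stand-ins, not our exact rules.

    # A minimal splitting sketch (illustrative delimiters, not our full rule set).
    import re
    import nltk  # requires: nltk.download('punkt')

    def split_into_clauses(paragraph):
        """Split a paragraph into sentences, then each sentence into sub-sentences."""
        clauses = []
        for sentence in nltk.sent_tokenize(paragraph):
            # Split on commas, semicolons, colons, hyphens, "and", "or".
            parts = re.split(r",|;|:|\s-\s|\band\b|\bor\b", sentence)
            clauses.extend(p.strip() for p in parts if p.strip())
        return clauses

    print(split_into_clauses("The battery life is great, but the screen is dull."))
    # ['The battery life is great', 'but the screen is dull.']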
We have developed a comprehensive set of rules (based on the POS tags of the words) for which kinds of sentences need to be split and which should not be.
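I can't share the actual rule set, but one hedged example of the kind of rule I mean: only split a fragment off if it carries its own verb, otherwise keep it glued to the previous clause (so "amazing, great and fantastic" stays together). The helper names here are hypothetical.

    # One illustrative POS-based rule, assuming NLTK's tagger
    # (requires: nltk.download('averaged_perceptron_tagger')).
    import nltk

    def has_verb(fragment):
        """True if the fragment contains a verb according to the POS tagger."""
        tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(fragment))]
        return any(tag.startswith("VB") for tag in tags)

    def merge_fragments(fragments):
        """Keep verb-less fragments attached to the preceding clause."""
        merged = []
        for frag in fragments:
            if merged and not has_verb(frag):
                merged[-1] = merged[-1] + ", " + frag
            else:
                merged.append(frag)
        return merged

    print(merge_fragments(["the product is amazing", "great", "fantastic"]))
    # ['the product is amazing, great, fantastic']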
As a first pass, you can use a bag-of-words approach: keep a list of positive and negative words/phrases and check each sub-sentence against it. At the same time, watch for negation words such as "no" and "not", which flip the polarity of the sentence.
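Something along these lines, where the word lists are tiny made-up examples (real lists are much larger and include phrases):

    # A minimal bag-of-words scorer with negation handling (illustrative lists).
    POSITIVE = {"amazing", "great", "fantastic", "good", "love"}
    NEGATIVE = {"terrible", "bad", "awful", "hate", "poor"}
    NEGATIONS = {"no", "not", "never", "n't"}

    def bag_of_words_polarity(clause):
        """Score a sub-sentence; a nearby negation word flips a word's polarity."""
        words = clause.lower().split()
        score = 0
        for i, word in enumerate(words):
            value = (word in POSITIVE) - (word in NEGATIVE)
            # Flip polarity if a negation word appears just before this word.
            if any(w in NEGATIONS for w in words[max(0, i - 2):i]):
                value = -value
            score += value
        return score  # > 0 positive, < 0 negative, 0 undecided

    print(bag_of_words_polarity("the screen is not good"))  # -1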
Even then, if you can't determine the sentiment, you can fall back to Naive Bayes. This approach is not very accurate on its own (about 60%), but if you apply it only to the sentences that don't pass through the first set of rules, you can easily reach 80-85% accuracy.
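A hedged sketch of that fallback with NLTK's built-in classifier; the training sentences here are made-up stand-ins for a real labelled corpus, and `word_features` is just the simplest possible feature extractor.

    # Naive Bayes fallback sketch using NLTK.
    import nltk

    def word_features(sentence):
        """Simple presence-of-word features."""
        return {word: True for word in sentence.lower().split()}

    train = [
        (word_features("the product is amazing great and fantastic"), "pos"),
        (word_features("i love how well it works"), "pos"),
        (word_features("the battery is terrible and dies quickly"), "neg"),
        (word_features("awful build quality do not buy"), "neg"),
    ]

    classifier = nltk.NaiveBayesClassifier.train(train)

    # Only call this for clauses the rule-based pass could not label.
    print(classifier.classify(word_features("the screen looks fantastic")))  # 'pos'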
The important parts are the list of positive/negative words and the way you split things up. If you want, you can go one level further with an HMM (hidden Markov model) or CRF (conditional random fields), but I'm not an NLP professional, so someone else will have to fill you in on that part.
For the curious: we implemented all of this in Python with NLTK and the Reverend Bayes module.
It's pretty simple and handles most sentences. You may run into problems, though, when trying to tag content from the web, since most people there don't write grammatically correct sentences. Sarcasm is also very hard to deal with.