RegEx Tokenizer for splitting text into words, numbers, and punctuation

What I want to do is split the text into its final elements.

For example:

    >>> from nltk.tokenize import regexp_tokenize
    >>> txt = "A sample sentences with digits like 2.119,99 or 2,99 are awesome."
    >>> regexp_tokenize(txt, pattern=r'(?:(?!\d)\w)+|\S+')
    ['A', 'sample', 'sentences', 'with', 'digits', 'like', '2.119,99', 'or', '2,99', 'are', 'awesome', '.']

You can see that everything works fine there. My problem: what happens if the number is at the end of the text?

    >>> txt = "Today it's 07.May 2011. Or 2.999."
    >>> regexp_tokenize(txt, pattern=r'(?:(?!\d)\w)+|\S+')
    ['Today', 'it', "'s", '07.May', '2011.', 'Or', '2.999.']

The result should be: ['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']

What do I need to do to get the result above?

1 answer

I created a pattern that tries to keep periods and commas when they occur inside words and numbers, while still splitting them off at the end. Hope this helps:

    >>> txt = "Today it's 07.May 2011. Or 2.999."
    >>> regexp_tokenize(txt, pattern=r'\w+([.,]\w+)*|\S+')
    ['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']
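A side note if you want the same behavior with the standard `re` module instead of NLTK: `re.findall` returns the *group* contents when a pattern contains a capturing group, so the `([.,]\w+)` group should be made non-capturing. A minimal sketch of that variant (the `tokenize` helper name is just for illustration):

```python
import re

# Same idea as the NLTK pattern above, but with a non-capturing group
# (?:...) so that re.findall returns whole matches, not group contents.
# \w+(?:[.,]\w+)*  -> a word/number, optionally continued by . or ,
#                     only when another word character follows
# \S+              -> any other run of non-whitespace (punctuation etc.)
PATTERN = r'\w+(?:[.,]\w+)*|\S+'

def tokenize(text):
    """Split text into words, numbers with internal . or , and punctuation."""
    return re.findall(PATTERN, text)

print(tokenize("Today it's 07.May 2011. Or 2.999."))
print(tokenize("A sample sentences with digits like 2.119,99 or 2,99 are awesome."))
```

Because `[.,]\w+` requires a word character after the separator, a period or comma at the very end of the text is no longer swallowed into the number and comes out as its own token.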
