As in this question, I'm interested in getting a large list of words by part of speech (a long list of nouns, a list of adjectives) that will be used programmatically elsewhere. This answer has a solution using a WordNet database (in SQL).
Is there any way to get such a list using the tools built into Python's NLTK? I could take a large pile of text, analyze it, and then save the nouns and adjectives. But given the dictionaries and other built-in tools, is there a smarter way to simply extract the words that are already present in the NLTK datasets, tagged as nouns/adjectives (or whatever)? The brute-force approach I have in mind is sketched below.
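For clarity, here is a minimal sketch of what I mean by analyzing a pile of text myself (the sample sentence and the `NN`/`JJ` tag prefixes are just placeholders; resource names may differ across NLTK versions):

```python
import nltk

# These resources may need to be downloaded once; names can vary by NLTK version.
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

text = "The quick brown fox jumps over the lazy dog."

# POS-tag the tokens and collect nouns (NN*) and adjectives (JJ*) by hand.
tagged = nltk.pos_tag(nltk.word_tokenize(text))
nouns = {word for word, tag in tagged if tag.startswith('NN')}
adjectives = {word for word, tag in tagged if tag.startswith('JJ')}

print(nouns)
print(adjectives)
```

This works, but it only finds words that happen to occur in whatever text I feed it, which is why I am hoping there is a way to pull word lists directly out of NLTK's bundled datasets instead.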
Thanks.
python machine-learning nltk
cforster