Creating a dictionary for each word in the file and counting the frequency of subsequent words

I'm trying to solve a difficult problem and I'm getting lost.

Here is what I have to do:

INPUT: a file
OUTPUT: a dictionary

Return a dictionary whose keys are all the words in the file (split on whitespace). The value for each word is a dictionary mapping each word that can follow the key to a count of the number of times it follows it. You should lowercase everything. Use strip and string.punctuation to strip the punctuation from the words.

Example:

    >>> # example.txt is a file containing: "The cat chased the dog."
    >>> with open('../data/example.txt') as f:
    ...     word_counts(f)
    {'the': {'dog': 1, 'cat': 1}, 'chased': {'the': 1}, 'cat': {'chased': 1}}

Here is what I have so far, trying to at least pull out the right words:

    def word_counts(f):
        i = 0
        orgwordlist = f.split()
        for word in orgwordlist:
            if i < len(orgwordlist) - 1:
                print orgwordlist[i]
                print orgwordlist[i+1]

    with open('../data/example.txt') as f:
        word_counts(f)

I think I need to somehow use the .count method and, in the end, stitch some dictionaries together, but I'm not sure how to count the second word for every first word.

I know I'm nowhere near solving the problem yet, but I'm trying to do it step by step. Any help is appreciated, even tips pointing in the right direction.

+7
python dictionary counter n-gram
5 answers

We can solve this in two passes:

  • in the first pass, we build a Counter and count the tuples of two consecutive words using zip(..); and
  • then we turn this Counter into a dictionary of dictionaries.

The result is the following code:

    from collections import Counter, defaultdict

    def word_counts(f):
        st = f.read().lower().split()    # all words, lowercased
        ctr = Counter(zip(st, st[1:]))   # count pairs of consecutive words
        dc = defaultdict(dict)
        for (k1, k2), v in ctr.items():  # unpack into a dict of dicts
            dc[k1][k2] = v
        return dict(dc)
+5

We can do this in one go:

  • Use a defaultdict as the counter.
  • Iterate over the bigrams, counting in place.

For brevity, we'll leave out the normalization and cleanup:

    >>> from collections import defaultdict
    >>> counter = defaultdict(lambda: defaultdict(int))
    >>> s = 'the dog chased the cat'
    >>> tokens = s.split()
    >>> from itertools import islice
    >>> for a, b in zip(tokens, islice(tokens, 1, None)):
    ...     counter[a][b] += 1
    ...
    >>> counter
    defaultdict(<function <lambda> at 0x102078950>, {'the': defaultdict(<class 'int'>, {'cat': 1, 'dog': 1}), 'dog': defaultdict(<class 'int'>, {'chased': 1}), 'chased': defaultdict(<class 'int'>, {'the': 1})})

And for more readable output:

    >>> {k: dict(v) for k, v in counter.items()}
    {'the': {'cat': 1, 'dog': 1}, 'dog': {'chased': 1}, 'chased': {'the': 1}}
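The normalization and cleanup that were left out can be sketched roughly as follows, using str.lower and string.punctuation as the problem statement asks (a minimal sketch, not part of the original answer):

```python
import string
from collections import defaultdict
from itertools import islice

def normalize(text):
    # Lowercase, split on whitespace, strip punctuation from each token.
    return [w.strip(string.punctuation) for w in text.lower().split()]

tokens = normalize("The cat chased the dog.")
counter = defaultdict(lambda: defaultdict(int))
for a, b in zip(tokens, islice(tokens, 1, None)):
    counter[a][b] += 1

print({k: dict(v) for k, v in counter.items()})
# {'the': {'cat': 1, 'dog': 1}, 'cat': {'chased': 1}, 'chased': {'the': 1}}
```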
+5

First, that's one brave cat, chasing a dog! Second, it's a bit tricky because we don't deal with this kind of parsing every day. Here is the code:

    k = "The cat chased the dog."
    sp = k.split()
    res = {}
    prev = ''
    for w in sp:
        word = w.lower().replace('.', '')
        if prev in res:
            if word in res[prev]:
                res[prev][word] += 1
            else:
                res[prev][word] = 1
        elif prev != '':
            res[prev] = {word: 1}
        prev = word
    print(res)
+2
source share

You can:

  • Create a list of the separated words;
  • Create word pairs using zip(list_, list_[1:]) or any method that iterates over pairs;
  • Build a dict mapping the first word of each pair to a list of the words that follow it;
  • Count the words in each list.

Like this:

    from collections import Counter

    s = "The cat chased the dog."
    li = [w.lower().strip('.,') for w in s.split()]    # list of the words
    di = {}
    for a, b in zip(li, li[1:]):                       # words by pairs
        di.setdefault(a, []).append(b)                 # words following the first
    di = {k: dict(Counter(v)) for k, v in di.items()}  # count the words

    >>> di
    {'the': {'dog': 1, 'cat': 1}, 'chased': {'the': 1}, 'cat': {'chased': 1}}

If you have a file, just read it into a single string and proceed the same way.
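For example, a sketch of the same approach wrapped to take an open file (the path is the one from the question and only illustrative):

```python
from collections import Counter

def word_counts(f):
    # Read the whole file into one string, then proceed as before.
    li = [w.lower().strip('.,') for w in f.read().split()]
    di = {}
    for a, b in zip(li, li[1:]):
        di.setdefault(a, []).append(b)
    return {k: dict(Counter(v)) for k, v in di.items()}

# with open('../data/example.txt') as f:
#     print(word_counts(f))
```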


Alternatively, you could

  • Do the same first two steps;
  • Use a defaultdict with Counter as the factory.

Like this:

    from collections import Counter, defaultdict

    s = "The cat chased the dog."
    li = [w.lower().strip('.,') for w in s.split()]
    dd = defaultdict(Counter)
    for a, b in zip(li, li[1:]):
        dd[a][b] += 1

    >>> dict(dd)
    {'the': Counter({'dog': 1, 'cat': 1}), 'chased': Counter({'the': 1}), 'cat': Counter({'chased': 1})}

Or

    >>> {k: dict(v) for k, v in dd.items()}
    {'the': {'dog': 1, 'cat': 1}, 'chased': {'the': 1}, 'cat': {'chased': 1}}
+1

I think this is a one-pass solution without importing defaultdict. It also handles punctuation. I tried to optimize it for large files and for re-reading files.

    from itertools import islice

    class defaultdictint(dict):
        def __missing__(self, k):
            r = self[k] = 0
            return r

    class defaultdictdict(dict):
        def __missing__(self, k):
            r = self[k] = defaultdictint()
            return r

    # characters to keep; everything else is stripped out
    keep = set('1234567890abcdefghijklmnopqrstuvwxyz '
               'ABCDEFGHIJKLMNOPQRSTUVWXYZ')

    def count_words(file):
        d = defaultdictdict()
        with open(file, "r") as f:
            for line in f:
                line = ''.join(filter(keep.__contains__, line)).strip().lower().split()
                for one, two in zip(line, islice(line, 1, None)):
                    d[one][two] += 1
        return d

    print(count_words("example.txt"))

will output:

 {'chased': {'the': 1}, 'cat': {'chased': 1}, 'the': {'dog': 1, 'cat': 1}} 
0
