Imagine we have a piece of code that splits a large input into smaller chunks and runs some processing on each chunk.
def node_cut(input_file):
    NODE_LENGTH = 500
    count_output = 0
    node_list = []
    for line in input_file.readlines():
        if len(node_list) >= NODE_LENGTH:
            count_output += 1
            return (node_list, count_output)
            node_list = []
        node, t = line.split(',')
        node_list.append(node)

if __name__ == '__main__':
    input_data = open('all_nodes.txt', 'r')
    node_list, count_output = node_cut(input_data)
    some_process(node_list)
node_cut returns the first chunk of data, but then the for loop stops and the rest of the big file is never processed. How can I make the function hand back each chunk while the loop keeps going?
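A minimal sketch of one common fix: replace return with yield, turning node_cut into a generator that hands back each full chunk and then resumes the loop where it left off. (The node_length parameter and the use of _ for the discarded field are my naming choices; some_process is assumed to be the caller's own function, as in the original.)

```python
def node_cut(input_file, node_length=500):
    # Generator version: 'yield' returns the current chunk to the
    # caller, then resumes the for loop on the next iteration.
    node_list = []
    for line in input_file:
        node, _ = line.split(',')   # assumes one comma per line, as in the original
        node_list.append(node)
        if len(node_list) >= node_length:
            yield node_list
            node_list = []          # start a fresh chunk
    if node_list:                   # emit the final, possibly short, chunk
        yield node_list
```

Usage would then iterate over the generator instead of calling it once, e.g.:

```python
with open('all_nodes.txt') as input_data:
    for count_output, node_list in enumerate(node_cut(input_data), start=1):
        some_process(node_list)
```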