The dataset has two columns, user_id and site_name; it records every site name that each user has viewed.
toy_dict = {'site_name': {0: u'\u4eac\u4e1c\u7f51\u4e0a\u5546\u57ce', 1: u'\u963f\u91cc\u4e91', 2: u'\u6dd8\u5b9d\u7f51', 3: u'\u624b\u673a\u6dd8\u5b9d\u7f51', 4: u'\u6211\u4eec\u7684\u70b9\u5fc3\u7f51', 5: u'\u8c46\u74e3\u7f51', 6: u'\u9ad8\u5fb7\u5730\u56fe', 7: u'\u817e\u8baf\u7f51', 8: u'\u70b9\u5fc3', 9: u'\u767e\u5ea6', 10: u'\u641c\u72d7', 11: u'\u8c37\u6b4c', 12: u'AccuWeather\u6c14\u8c61\u9884\u62a5', 13: u'\u79fb\u52a8\u68a6\u7f51', 14: u'\u817e\u8baf\u7f51', 15: u'\u641c\u72d7\u7f51', 16: u'360\u624b\u673a\u52a9\u624b', 17: u'\u641c\u72d0', 18: u'\u767e\u5ea6'}, 'user_id': {0: 37924550, 1: 37924550, 2: 37924550, 3: 37924550, 4: 37924550, 5: 37924550, 6: 37924550, 7: 37924550, 8: 37924551, 9: 37924551, 10: 37924551, 11: 37924551, 12: 37924551, 13: 37924552, 14: 45285152, 15: 45285153, 16: 45285153, 17: 45285153, 18: 45285153}}
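To make the goal concrete, here is a minimal sketch of loading data of this shape into pandas and computing each user's degree (number of sites viewed). The tiny stand-in dictionary below is hypothetical, just to keep the example short; in practice it would be the full toy_dict above.

```python
import pandas as pd

# Hypothetical stand-in with the same two-column shape as toy_dict.
toy = {
    'site_name': {0: 'siteA', 1: 'siteB', 2: 'siteA', 3: 'siteC'},
    'user_id':   {0: 1, 1: 1, 2: 1, 3: 2},
}
df = pd.DataFrame(toy)

# Degree of a user = number of rows (site views) with that user_id.
degrees = df.groupby('user_id').size()
print(degrees.to_dict())  # {1: 3, 2: 1}
```

These per-user counts are exactly what the randomization below must preserve.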
Now I want to construct a randomized network that preserves degrees: a user with n sites in the observed network must also have n sites in the randomized network.
Python's numpy.random.shuffle is inefficient when the amount of data is large.
I am currently using the following Python script (the loop body reconstructs the shuffle I intend; unused imports removed):

import pandas as pd
import numpy as np

df = pd.DataFrame(toy_dict)
for i in range(10):
    # Shuffle site_name while leaving user_id fixed, so each user
    # keeps the same number of sites as in the observed network.
    shuffled = df['site_name'].values.copy()
    np.random.shuffle(shuffled)
    df['site_name_shuffled_%d' % i] = shuffled
I am wondering whether a Monte Carlo algorithm in Python could reduce the computational complexity.
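For comparison, one cheap degree-preserving randomization is a single vectorized permutation of the site_name column. This is only a sketch with made-up stand-in data (not the Monte Carlo method asked about), but it shows the invariant: user_id rows are untouched, so every user keeps exactly as many sites as before.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical stand-in data; in practice df would come from toy_dict.
df = pd.DataFrame({
    'user_id':   [1, 1, 1, 2, 2, 3],
    'site_name': ['a', 'b', 'c', 'a', 'd', 'e'],
})

# Permute the site_name column in one vectorized call; the user_id
# column is unchanged, so per-user degrees are preserved.
randomized = df.assign(site_name=rng.permutation(df['site_name'].values))

# Per-user degree is identical to the observed network.
assert randomized.groupby('user_id').size().equals(df.groupby('user_id').size())
```

Note that a single global permutation can assign the same site to one user twice; a Monte Carlo edge-swap scheme would avoid that, at the cost of more bookkeeping.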