Fix invalid json escape

KISSmetrics generates invalid JSON strings that I need to parse. I get a lot of errors like:

 ERROR 2013-03-04 04:31:12,253 Invalid \escape: line 1 column 132 (char 132): {"search engine":"Google","_n":"search engine hit","_p":"z392cpdpnm6silblq5mac8kiugq=","search terms":"happy new year animation 1920\303\2271080 hd","_t":1356390128}
 ERROR 2013-03-04 04:34:19,153 Invalid \escape: line 1 column 101 (char 101): {"search engine":"Google","_n":"ad campaign hit","_p":"byskpczsw6sorbmzqi0tk1uimgw=","search terms":"\331\203\330\261\330\252\331\207 \331\201\331\212\330\257\331\212\330\244\331\211 \330\256\331\212\331\204\330\247\330\255\331\211 \331\203\331\210\330\261\330\257\331\211","_t":1356483052}

My code is:

 for line in lines:
     try:
         data = self.clean_data(json.loads(line))
     except ValueError, e:
         logger.error('%s: %s' % (e.message, line))

Example raw data:

 {"search engine":"Google","_n":"search engine hit","_p":"kvceh84hzbhywcnlivv+hdztizw=","search terms":"military sound effects programs","_t":1356034177} 
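For context, a minimal reproduction using values from the lines above: the well-formed line parses fine, while a line containing an octal escape raises the same `ValueError` seen in the log:

```python
import json

good = r'{"search terms":"military sound effects programs","_t":1356034177}'
bad = r'{"search terms":"happy new year animation 1920\303\2271080 hd"}'

print(json.loads(good)["_t"])  # 1356034177 -- valid JSON parses normally
try:
    json.loads(bad)
except ValueError as e:  # json.JSONDecodeError is a ValueError subclass
    print("failed:", e)  # Invalid \escape: ...
```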

Is there any way to clean up this messy JSON and parse it? Thank you for your help.

+6
3 answers

Your input contains octal escape sequences, which are indeed invalid JSON. Replace them with the decoded bytes using a regular expression:

 import re

 invalid_escape = re.compile(r'\\[0-7]{1,3}')  # up to 3 digits for byte values up to FF

 def replace_with_byte(match):
     return chr(int(match.group(0)[1:], 8))

 def repair(brokenjson):
     return invalid_escape.sub(replace_with_byte, brokenjson)

This makes your input work:

 >>> data1 = r"""{"search engine":"Google","_n":"search engine hit","_p":"z392cpdpnm6silblq5mac8kiugq=","search terms":"happy new year animation 1920\303\2271080 hd","_t":1356390128}"""
 >>> json.loads(repair(data1))
 {u'_n': u'search engine hit', u'search terms': u'happy new year animation 1920\xd71080 hd', u'_p': u'z392cpdpnm6silblq5mac8kiugq=', u'_t': 1356390128, u'search engine': u'Google'}
 >>> print json.loads(repair(data1))['search terms']
 happy new year animation 1920×1080 hd
 >>> data2 = r"""{"search engine":"Google","_n":"ad campaign hit","_p":"byskpczsw6sorbmzqi0tk1uimgw=","search terms":"\331\203\330\261\330\252\331\207 \331\201\331\212\330\257\331\212\330\244\331\211 \330\256\331\212\331\204\330\247\330\255\331\211 \331\203\331\210\330\261\330\257\331\211","_t":1356483052}"""
 >>> json.loads(repair(data2))
 {u'_n': u'ad campaign hit', u'search terms': u'\u0643\u0631\u062a\u0647 \u0641\u064a\u062f\u064a\u0624\u0649 \u062e\u064a\u0644\u0627\u062d\u0649 \u0643\u0648\u0631\u062f\u0649', u'_p': u'byskpczsw6sorbmzqi0tk1uimgw=', u'_t': 1356483052, u'search engine': u'Google'}
 >>> print json.loads(repair(data2))['search terms']
 كرته فيديؤى خيلاحى كوردى
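The session above is Python 2, where the repaired string is a byte string that decodes directly as UTF-8. A hedged adaptation for Python 3 (names are illustrative): `chr()` now yields text code points rather than bytes, so after substituting the escapes you round-trip the result through Latin-1 to recover the intended UTF-8 text:

```python
import json
import re

# Octal escapes such as \303 are invalid JSON; rewrite each one as the
# character whose code point equals that byte value, then parse.
invalid_escape = re.compile(r'\\[0-7]{1,3}')

def repair(brokenjson):
    return invalid_escape.sub(
        lambda m: chr(int(m.group(0)[1:], 8)), brokenjson)

raw = r'{"search terms":"happy new year animation 1920\303\2271080 hd"}'
terms = json.loads(repair(raw))["search terms"]
# The repaired code points are really UTF-8 bytes in disguise;
# re-encode as Latin-1 and decode as UTF-8 to get the real text.
print(terms.encode("latin-1").decode("utf-8"))
# happy new year animation 1920×1080 hd
```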
+11

Consider cjson for this exact scenario (https://pypi.python.org/pypi/python-cjson).

It appears to handle the escaped octals (and it is fast).

+1

I had a similar problem, and just replacing the json library with yaml solved it. (YAML is largely a superset of JSON.)

Example:

 import yaml

 obj = yaml.load(json_string)  # instead of json.loads(json_string)
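A minimal self-contained sketch, assuming PyYAML is installed; `yaml.safe_load` is preferable to `yaml.load`, which can construct arbitrary Python objects from untrusted input. (Whether a given YAML parser tolerates the octal escapes above is worth verifying against your own data.)

```python
import yaml  # PyYAML, assumed installed

# YAML parsers accept JSON documents, so well-formed lines parse directly.
json_string = '{"search engine": "Google", "_t": 1356034177}'
obj = yaml.safe_load(json_string)
print(obj["search engine"])  # Google
```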
0
