Handling JSON arrays in an array with jq

Environment: jq 1.5, Windows 64-bit.

I have the following JSON:

 {
   "unique": 1924,
   "coordinates": [
     {
       "time": "2015-01-25T00:00:01.683",
       "xyz": [
         { "z": 4, "y": 2, "x": 1, "id": 99 },
         { "z": 9, "y": 9, "x": 8, "id": 100 },
         { "z": 9, "y": 6, "x": 10, "id": 101 }
       ]
     },
     {
       "time": "2015-01-25T00:00:02.790",
       "xyz": [
         { "z": 0, "y": 3, "x": 7, "id": 99 },
         { "z": 4, "y": 6, "x": 2, "id": 100 },
         { "z": 2, "y": 9, "x": 51, "id": 101 }
       ]
     }
   ]
 }

And I would like to convert it to this CSV format using jq:

 unique,time,id,x,y,z
 1924,"2015-01-25T00:00:01.683",99,1,2,4
 1924,"2015-01-25T00:00:01.683",100,8,9,9

(etc.)

I tried several things, for example:

 jq -r '{unique: .unique, coordinates: .coordinates[].xyz[] | [.id, .x, .y, .z], time: .coordinates.[].time} | flatten | @csv' 

which gave me the desired values, but with every row multiplied: each unique row appears four times, once for each of id, x, y and z.

Indexing the array at a fixed position, e.g.

 jq -r '{unique: .unique, coordinates: .coordinates[0].xyz[] | [.id, .x, .y, .z], time: .coordinates.[0].time} | flatten | @csv' 

gives me only the first element of the coordinates array, but naturally I would like all of them.

1 answer

The first step is to flatten the input into row objects.

 [{ unique } + (.coordinates[] | { time } + .xyz[])] 

This will give an array of row objects:

 [
   { "unique": 1924, "time": "2015-01-25T00:00:01.683", "id": 99,  "x": 1,  "y": 2, "z": 4 },
   { "unique": 1924, "time": "2015-01-25T00:00:01.683", "id": 100, "x": 8,  "y": 9, "z": 9 },
   { "unique": 1924, "time": "2015-01-25T00:00:01.683", "id": 101, "x": 10, "y": 6, "z": 9 },
   { "unique": 1924, "time": "2015-01-25T00:00:02.790", "id": 99,  "x": 7,  "y": 3, "z": 0 },
   ...
 ]
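Two jq features do the work here: `{ unique }` is shorthand for `{unique: .unique}`, and `+` merges two objects (keys from the right-hand side win on conflict). A minimal demonstration with illustrative values:

```shell
# {key} shorthand picks a field from the input; + merges objects.
echo '{"a": 1, "b": 2}' | jq -c '{ a } + {"c": 3}'
# → {"a":1,"c":3}
```

Because `.coordinates[]` and `.xyz[]` are generators, the addition runs once per combination, producing one merged object per xyz entry.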

Then it's just a matter of converting these rows to CSV:

 ["unique","time","id","x","y","z"] as $fields | $fields, (.[] | [.[$fields[]]]) | @csv 
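Putting both steps together into a single invocation (the input here is the sample JSON from the question, fed via a heredoc; in practice you would pass a filename instead):

```shell
# Flatten to row objects, then emit a header row followed by one CSV row
# per xyz entry. [.[$fields[]]] collects the field values in header order.
jq -r '
  [{ unique } + (.coordinates[] | { time } + .xyz[])]
  | (["unique","time","id","x","y","z"] as $fields
     | $fields,
       (.[] | [ .[$fields[]] ]))
  | @csv
' <<'EOF'
{ "unique": 1924,
  "coordinates": [
    { "time": "2015-01-25T00:00:01.683",
      "xyz": [ {"z":4,"y":2,"x":1,"id":99}, {"z":9,"y":9,"x":8,"id":100}, {"z":9,"y":6,"x":10,"id":101} ] },
    { "time": "2015-01-25T00:00:02.790",
      "xyz": [ {"z":0,"y":3,"x":7,"id":99}, {"z":4,"y":6,"x":2,"id":100}, {"z":2,"y":9,"x":51,"id":101} ] }
  ] }
EOF
# → "unique","time","id","x","y","z"
# → 1924,"2015-01-25T00:00:01.683",99,1,2,4
# → ... (one line per xyz entry)
```

Note that `@csv` quotes strings (including the header names) but leaves numbers bare; if you need an unquoted header, emit it separately with `join(",")`.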
