You can do it in Hive as follows:
First, you need a JSON SerDe (Serializer/Deserializer). The most complete one I've seen is https://github.com/rcongiu/Hive-JSON-Serde/ ; the SerDe from Peter Sankauskas can't handle JSON this complex. As of this writing, you will need to build the SerDe with Maven and put the resulting JAR somewhere your Hive session can reach it.
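If you're building it yourself, a minimal sketch (assuming Git and Maven are installed; the exact JAR name under target/ may vary with the version you check out):

    git clone https://github.com/rcongiu/Hive-JSON-Serde.git
    cd Hive-JSON-Serde
    mvn package   # should produce json-serde-*-jar-with-dependencies.jar under target/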
Next, you will need to modify your JSON. The reason is that Hive has a strongly typed view of arrays, so mixing integers and other types in the same array won't work. Consider transitioning to a structure like this:
{"str": { n1 : 1, n2 : 134, n3 : 61, s1: "Matt", st1: {"type":"registered","app":491,"value":423,"value2":12344}, ar1: ["application"], ar2: [], s2: "49:0" } }
Then you will need to put each JSON record on a single line. I'm not sure whether this is a quirk of Hive or of the SerDe, but it is required.
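If your JSON is pretty-printed, one quick way to flatten it (a rough sketch, assuming a single record per file; the file names are placeholders):

    tr -d '\n' < pretty.json > data.json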
Then copy the data file to HDFS.
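For example (the target directory matches the LOCATION used in the table definition below; note that the -p flag may not exist on very old Hadoop versions):

    hadoop fs -mkdir -p /hdfs/path/to/file
    hadoop fs -put data.json /hdfs/path/to/file/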
Now you are ready to define the table and query it:
    ADD JAR /path/to/jar/json-serde-1.1.2-jar-with-dependencies.jar;

    CREATE EXTERNAL TABLE json (
      str struct<
        n1: int, n2: int, n3: int,
        s1: string,
        st1: struct<type: string, app: int, value: int, value2: int>,
        ar1: array<string>,
        ar2: array<string>,
        s2: string
      >
    )
    ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
    LOCATION '/hdfs/path/to/file';
With this, you can run interesting nested queries, for example:
select str.st1.type from json;
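Array fields can be addressed with standard Hive indexing as well, e.g. to grab the first element of ar1 (an illustrative query that follows from the table definition above):

    select str.ar1[0] from json;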
Last but not least, since this is so specific to Hive, it would be helpful to update the question and its tags.