If you use SQLLine, use USE!
If you are going to run queries that write output, you need to specify the exact schema to use, which is done with the USE schema command. Note that you cannot write to the root workspace, since it is not writable. Make sure you create the corresponding directories on your file system and use a matching storage configuration; an example configuration is given below. Once it is in place, you can create CSV from Java through the JDBC driver, or from a tool such as Pentaho. You can also submit queries through the REST API at localhost:8047/query. The query that creates CSV under /out/data/csv is given below, after the configuration example.
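For the REST route, Drill accepts an HTTP POST to http://localhost:8047/query.json with a JSON body of the form {"queryType": "SQL", "query": "..."}. A minimal sketch of building that body in Java follows; the hand-rolled escaping (quotes and backslashes only) is an illustration, and a real client should use a JSON library instead.

```java
// Sketch: build the JSON body for a POST to http://localhost:8047/query.json.
// Drill's REST API expects {"queryType": "SQL", "query": "<sql>"}.
public class DrillRestPayload {
    public static String buildQueryPayload(String sql) {
        // Minimal escaping for illustration only; prefer a JSON library.
        String escaped = sql.replace("\\", "\\\\").replace("\"", "\\\"");
        return "{\"queryType\": \"SQL\", \"query\": \"" + escaped + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(buildQueryPayload(
            "CREATE TABLE fs.csvOut.mycsv_out AS SELECT * FROM fs.`my_records_in.json`"));
    }
}
```

The resulting string can be sent with any HTTP client (java.net.http.HttpClient, curl, and so on) with Content-Type: application/json.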
Storage configuration
{
  "type": "file",
  "enabled": true,
  "connection": "file:///",
  "config": null,
  "workspaces": {
    "root": {
      "location": "/out",
      "writable": false,
      "defaultInputFormat": null
    },
    "jsonOut": {
      "location": "/out/data/json",
      "writable": true,
      "defaultInputFormat": "json"
    },
    "csvOut": {
      "location": "/out/data/csv",
      "writable": true,
      "defaultInputFormat": "csv"
    }
  },
  "formats": {
    "json": {
      "type": "json",
      "extensions": [ "json" ]
    },
    "csv": {
      "type": "text",
      "extensions": [ "csv" ],
      "delimiter": ","
    }
  }
}
Query
USE fs.csvOut;
ALTER SESSION SET `store.format`='csv';
CREATE TABLE fs.csvOut.mycsv_out AS SELECT * FROM fs.`my_records_in.json`;
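The same three statements can be run from Java. A sketch, assuming a Drillbit is running on localhost and the Drill JDBC driver (drill-jdbc-all) is on the classpath; the connection URL and table names mirror the example above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateCsvViaJdbc {
    // The statements from the query above, in execution order.
    static String[] ctasStatements() {
        return new String[] {
            "USE fs.csvOut",
            "ALTER SESSION SET `store.format`='csv'",
            "CREATE TABLE fs.csvOut.mycsv_out AS SELECT * FROM fs.`my_records_in.json`"
        };
    }

    public static void main(String[] args) throws Exception {
        // "jdbc:drill:drillbit=localhost" connects directly to a Drillbit;
        // use "jdbc:drill:zk=<zk-host>" to discover it through ZooKeeper.
        try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost");
             Statement stmt = conn.createStatement()) {
            for (String sql : ctasStatements()) {
                stmt.execute(sql);
            }
        }
    }
}
```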
This will produce at least one CSV file, and possibly several with differing headers, in /out/data/csv/mycsv_out.
Each file name will match the following pattern:
\d+_\d+_\d+.csv
Note: while the query result can be read as a single CSV when only one file is produced, multiple resulting CSVs cannot simply be concatenated, because the headers may differ between files. If that is a concern, write the output as JSON instead and convert it to CSV later in code, with Drill, or with another tool.