What is the best way to query Elasticsearch from Python?

There are libraries for this, such as pyes and pyelasticsearch. The pyelasticsearch website looks good, and pyes takes a different approach but also looks good.

On the other hand, this code works, and it is very simple.

    import urllib2 as urllib
    import json
    import pprint

    query = {
        "from": 0,
        "size": 10,
        "query": {
            "field": {
                "name": "david"
            }
        },
        "sort": [
            {"name": "asc"},
            {"lastName": "asc"}
        ]
    }

    query = json.dumps(query)
    response = urllib.urlopen('http://localhost:9200/users/users/_search', query)
    result = json.loads(response.read())
    pprint.pprint(result)

So I am thinking of using this simple code instead of learning the tricks of a library.

2 answers

There is nothing wrong with your approach of using the REST API directly to interact with Elasticsearch.

pyes and the other libraries provide a wrapper around the REST API, so you can write Python code instead of constructing JSON requests by hand.
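To illustrate what such a wrapper saves you, here is a minimal sketch that builds the same search body from Python arguments rather than hand-written JSON. The helper name `build_search_query` is hypothetical, not part of pyes or pyelasticsearch; it only mirrors the kind of query construction those libraries do for you internally.

```python
import json

def build_search_query(field, value, sort_fields, start=0, size=10):
    # Assemble the Elasticsearch query body as a plain Python dict;
    # client libraries build structures like this for you internally.
    return {
        "from": start,
        "size": size,
        "query": {"field": {field: value}},
        "sort": [{f: "asc"} for f in sort_fields],
    }

body = build_search_query("name", "david", ["name", "lastName"])
# json.dumps(body) produces the same request body as the hand-written
# JSON in the question, ready to POST to the _search endpoint.
print(json.dumps(body))
```

With a real client library the call is typically a single method taking similar keyword arguments, and the library handles serialization, headers, and the HTTP round trip.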


Keep in mind that the code snippet shown in your question will not work in Python 3. You need to encode the query string and add a Content-Type header to the request. In Python 3, do the following:

    from urllib.request import urlopen, Request
    import json
    import pprint

    query = {
        "from": 0,
        "size": 10,
        "query": {
            "field": {
                "name": "david"
            }
        },
        "sort": [
            {"name": "asc"},
            {"lastName": "asc"}
        ]
    }

    # encode your JSON string
    query = json.dumps(query).encode("utf-8")

    # add a Content-Type header
    request = Request(
        'http://localhost:9200/users/users/_search',
        data=query,
        headers={'Content-Type': 'application/json'}
    )
    response = urlopen(request)
    result = json.loads(response.read())
    pprint.pprint(result)
