Since there are no answers yet and someone may still come across this: I had the same problem, and here is the solution.

Use an edgeNGram token filter. You need to change both the index settings and the mapping.

Here is an example of the settings:
```json
"settings": {
  "index": {
    "analysis": {
      "analyzer": {
        "ngram_analyzer": {
          "type": "custom",
          "stopwords": "_none_",
          "filter": ["standard", "lowercase", "asciifolding", "word_delimiter", "no_stop", "ngram_filter"],
          "tokenizer": "standard"
        },
        "default": {
          "type": "custom",
          "stopwords": "_none_",
          "filter": ["standard", "lowercase", "asciifolding", "word_delimiter", "no_stop"],
          "tokenizer": "standard"
        }
      },
      "filter": {
        "no_stop": {
          "type": "stop",
          "stopwords": "_none_"
        },
        "ngram_filter": {
          "type": "edgeNGram",
          "min_gram": "2",
          "max_gram": "20"
        }
      }
    }
  }
}
```
Of course, you should adapt the analyzers to your own use case. You may want to leave the default analyzer untouched, or instead add the ngram filter to it so that you don't have to change the mappings; that latter choice means every field in your index will get the ngram filter applied.
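To make it clearer what the `ngram_filter` above actually does, here is a minimal Python sketch (not Elasticsearch code) of edge n-gram generation with `min_gram=2` and `max_gram=20`: for each token, it emits every prefix between those two lengths, which is what makes prefix/autocomplete matching work at query time.

```python
def edge_ngrams(token, min_gram=2, max_gram=20):
    """Return all prefixes of `token` with length min_gram..max_gram,
    mimicking Elasticsearch's edgeNGram token filter."""
    return [token[:n] for n in range(min_gram, min(len(token), max_gram) + 1)]

# Indexing the token "patient" stores these terms, so a search for
# a prefix such as "pat" will match the document.
print(edge_ngrams("patient"))  # ['pa', 'pat', 'pati', 'patie', 'patien', 'patient']
```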
And for the mapping:
```json
"mappings": {
  "patient": {
    "properties": {
      "name": {
        "type": "string",
        "analyzer": "ngram_analyzer"
      },
      "address": {
        "type": "string",
        "analyzer": "ngram_analyzer"
      }
    }
  }
}
```
Declare `ngram_analyzer` on each field you want to autocomplete. The queries in your question should then work. If you ended up using something else, I'd be glad to hear about it.
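With the settings and mapping above in place, a plain match query against one of the ngram-analyzed fields should return prefix matches; for example (the search term "pat" is just illustrative):

```json
{
  "query": {
    "match": {
      "name": "pat"
    }
  }
}
```

Because the `name` field was indexed through `ngram_analyzer`, the stored terms include the prefixes of each word, so this query matches documents whose name contains a word starting with "pat", e.g. "patient".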