Table indexes for columns of type text[] (array)

I have a PostgreSQL table with text[] (array) columns defined on it. I use these columns to search for specific records in the database as follows:

 select obj
 from business
 where (
         ('street' = ANY (address_line_1) and 'a_city' = ANY (city) and 'a_state' = ANY (state))
         or ('street' = ANY (address_line_1) and '1234' = ANY (zip_code))
       )
   and (
         'a_business_name' = ANY (business_name)
         or 'a_website' = ANY (website_url)
         or array['123'] && phone_numbers
       )

The problem I ran into is that with about 1 million records the query becomes very slow. My question is simple: are there different types of indexes for array columns? If so, does anyone know the best type of index to create in this case?

Just in case, this is the EXPLAIN ANALYZE output:

 Seq Scan on business  (cost=0.00..207254.51 rows=1 width=32) (actual time=18850.462..18850.462 rows=0 loops=1)
   Filter: (('a'::text = ANY (address_line_1)) AND (('a'::text = ANY (business_name)) OR ('a'::text = ANY (website_url)) OR ('{123}'::text[] && phone_numbers)) AND ((('a'::text = ANY (city)) AND ('a'::text = ANY (state))) OR ('1234'::text = ANY (zip_code))))
   Rows Removed by Filter: 900506
 Total runtime: 18850.523 ms

Thanks in advance!

1 answer

You can use a GIN index to work with arrays efficiently.
Use it in combination with the array operators.

For instance:

 CREATE INDEX business_address_line_1_idx ON business USING GIN (address_line_1); 

Do this for every array column involved in the conditions.
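
For example, covering the other columns referenced in your query would look like this (a sketch; the index names are arbitrary, and I'm assuming the columns are all text[]):

 CREATE INDEX business_city_idx          ON business USING GIN (city);
 CREATE INDEX business_state_idx         ON business USING GIN (state);
 CREATE INDEX business_zip_code_idx      ON business USING GIN (zip_code);
 CREATE INDEX business_business_name_idx ON business USING GIN (business_name);
 CREATE INDEX business_website_url_idx   ON business USING GIN (website_url);
 CREATE INDEX business_phone_numbers_idx ON business USING GIN (phone_numbers);

Note that GIN indexes support the array operators &&, @> and <@, but not 'x' = ANY (col) directly; rewriting those conditions in operator form, e.g. address_line_1 @> ARRAY['street'], lets the planner use the index. Run EXPLAIN ANALYZE again afterwards to confirm the seq scan is gone.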

You should also consider normalizing your schema. Splitting the multi-valued columns into separate tables (1:n or n:m) may serve you better. It often does in the long run, even if it seems like more work at first.
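
As a sketch of what that normalization could look like for one of the columns (the table and column names here are hypothetical, and I'm assuming business has a primary key id):

 CREATE TABLE business_phone (
     business_id bigint NOT NULL REFERENCES business (id),
     phone_number text NOT NULL
 );

 CREATE INDEX business_phone_number_idx ON business_phone (phone_number);

With this layout a plain B-tree index serves equality lookups on phone_number, and the query becomes an ordinary join instead of an array containment test.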

