You didn't say very much about how you set up your PostgreSQL instance or how you ran your queries. It's not unusual to see more than a 50% speed improvement in a PostgreSQL query through tuning and/or restating the query in a form which optimizes better.
Just this week there was a report at work which someone had written using Java and multiple queries in a way which, based on how far it had gotten in four hours, was going to take roughly a month to complete. (It needed to hit five different tables, each with hundreds of millions of rows.) I rewrote it using several CTEs and a window function so that it ran in less than ten minutes and produced the desired results straight out of the query. That's a 4400x speedup.
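The actual report isn't shown here, but a sketch of the general shape of that kind of rewrite (hypothetical table and column names): pre-aggregate in a CTE, then let a window function do the ranking in a single pass instead of issuing a query per group.

WITH order_totals AS (
  SELECT customer_id,
         date_trunc('month', ordered_at) AS month,
         sum(amount) AS total
    FROM orders
   GROUP BY customer_id, date_trunc('month', ordered_at)
)
SELECT customer_id, month, total,
       rank() OVER (PARTITION BY month ORDER BY total DESC) AS month_rank
  FROM order_totals
 ORDER BY month, month_rank;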
Perhaps the best answer to your question has nothing to do with the technical details of how searches can be done in each product, but more to do with ease of use for your particular use case. Clearly you were able to find the fast way to search with Solr with less trouble than with PostgreSQL, and it may not come down to anything more than that.
I'm including a short example of how text searches on multiple criteria can be done in PostgreSQL, and how a few small tweaks can make a big performance difference. To keep it quick and simple I just loaded War and Peace in text form into a test database, with each "document" being a single text line. Similar techniques can be used for arbitrary fields using hstore or JSON columns if the data must be loosely defined. Where there are separate columns with their own indexes, the benefits of using indexes tend to be much bigger.
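Roughly, the setup looks like this (column names here are placeholders; only the war_and_peace table name and the tsv column appear in the queries below):

CREATE TABLE war_and_peace (
  lineno   serial PRIMARY KEY,
  linetext text NOT NULL,
  tsv      tsvector
);

-- Load the book one line per row (e.g. with COPY), then populate the tsvector column.
UPDATE war_and_peace
  SET tsv = to_tsvector('english', linetext);

-- One index of each type; to time them separately, drop or disable one while testing the other.
CREATE INDEX war_and_peace_tsv_gist ON war_and_peace USING gist (tsv);
CREATE INDEX war_and_peace_tsv_gin  ON war_and_peace USING gin  (tsv);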
After setting up the indexing, here are a few searches with row counts and timings using both types of indexes:
-- Find lines with "gentlemen".
EXPLAIN ANALYZE
SELECT * FROM war_and_peace
  WHERE tsv @@ to_tsquery('english', 'gentlemen');
84 lines, gist: 2.006 ms, gin: 0.194 ms
-- Find lines with "ladies".
EXPLAIN ANALYZE
SELECT * FROM war_and_peace
  WHERE tsv @@ to_tsquery('english', 'ladies');
184 lines, gist: 3.549 ms, gin: 0.328 ms
-- Find lines with "ladies" and "gentlemen".
EXPLAIN ANALYZE
SELECT * FROM war_and_peace
  WHERE tsv @@ to_tsquery('english', 'ladies & gentlemen');
1 line, gist: 0.971 ms, gin: 0.104 ms
Now, since the GIN index was about 10 times faster than the GiST index, you may wonder why anyone would use GiST for indexing text data. The answer is that GiST is generally faster to maintain; so if your text data is highly volatile, the GiST index might win on overall load, while the GIN index would win if you are only interested in search time or for a read-mostly workload.
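One way to get a feel for that maintenance trade-off on your own data is to time the same bulk change with only one index in place at a time (psql shown; the WHERE clause is just an arbitrary way to touch a subset of rows):

\timing on
-- With only the GiST index present:
UPDATE war_and_peace
  SET tsv = to_tsvector('english', linetext)
  WHERE lineno % 10 = 0;
-- Then rebuild with only the GIN index present and repeat the same UPDATE.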
Without an index the above queries take anywhere from 17.943 ms to 23.397 ms, since they must scan the entire table and check each row for a match.
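Those unindexed timings are easy to reproduce without permanently losing anything, since DDL is transactional in PostgreSQL (the index names are the ones from the sketch above):

BEGIN;
DROP INDEX war_and_peace_tsv_gist, war_and_peace_tsv_gin;
EXPLAIN ANALYZE
SELECT * FROM war_and_peace
  WHERE tsv @@ to_tsquery('english', 'ladies & gentlemen');
ROLLBACK;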
The GIN-indexed search for rows with both "ladies" and "gentlemen" is more than 172 times faster than a table scan in exactly the same database. Obviously the benefits of indexing would be more dramatic with bigger documents than were used for this test.
The setup is, of course, a one-time thing. With a trigger to maintain the tsv column, any changes made are instantly searchable without redoing any of the setup.
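A minimal version of such a trigger, using the built-in helper function (the linetext column name is the placeholder from the sketch above):

CREATE TRIGGER war_and_peace_tsv_update
  BEFORE INSERT OR UPDATE ON war_and_peace
  FOR EACH ROW
  EXECUTE PROCEDURE
    tsvector_update_trigger(tsv, 'pg_catalog.english', linetext);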
With a slow PostgreSQL query, if you show the table structure (including indexes), the problem query, and the output of running EXPLAIN ANALYZE on that query, someone can almost always spot the problem and suggest how to get it to run faster.
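In psql that information is quick to gather; substituting your own table and query, something like:

\d war_and_peace
EXPLAIN ANALYZE
SELECT * FROM war_and_peace
  WHERE tsv @@ to_tsquery('english', 'ladies & gentlemen');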
UPDATE (December 9 '16)
I did not mention what I was using to get the earlier timings, but based on the date it would probably have been the 9.2 major release. I happened across this old thread and tried it again on the same hardware using version 9.6.1, to see whether any of the intervening performance tuning helps with this example. The queries with only one argument only improved in performance by about 2%, but searching for lines with both "ladies" and "gentlemen" roughly doubled in speed to 0.053 ms (i.e., 53 microseconds) when using the GIN (inverted) index.