The problem is mostly solved after some digging in oracle_enhanced_adapter.rb .
It boiled down to three things: too many tables in the local schema (many EBA_%, EVT_%, EMP_%, and SMP_% tables had been created there coincidentally at some point), archive tables being included in the dump, and the query against the data dictionary taking 14 seconds to complete.
To speed things up, I did three things:
- Threw out all unnecessary tables (about 250 out of 500)
- Excluded archive tables from schema dump
- Cached the result of the long data-dictionary query in a temporary table
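The caching step is really just memoization: run the slow dictionary query once and reuse the result on every later call. A minimal sketch of that pattern, with a made-up stand-in for the slow query:

```ruby
# Minimal memoization sketch. The block passed to the constructor stands
# in for the expensive data-dictionary lookup; it runs only on the first
# call to fetch, and later calls reuse the cached result.
class DictionaryCache
  def initialize(&slow_query)
    @slow_query = slow_query
    @cache = nil
  end

  def fetch
    @cache ||= @slow_query.call  # only the first call pays the cost
  end
end

calls = 0
cache = DictionaryCache.new { calls += 1; %w[users orders] }
cache.fetch  # runs the slow query
cache.fetch  # served from the cache; calls is still 1
```

The temporary table below is the same idea pushed into the database: the join over `all_indexes` is computed once, and `indexes` reads the precomputed rows.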
This brought the migration/schema dump time for the remaining 350 tables down from about 90 minutes to 15 seconds. Fast enough.
My code is below (for inspiration, not copy-and-paste: it is quite specific to my database, but you should get the idea). You need to create the temporary table manually. That takes me about 2 or 3 minutes, still too long to regenerate with each migration, but the data is pretty static anyway =)
```ruby
module ActiveRecord
  module ConnectionAdapters
    class OracleEnhancedAdapter
      # List tables owned by the current session user, skipping archive
      # tables (names starting with A_).
      def tables(name = nil)
        select_all("select lower(table_name) from all_tables " \
                   "where owner = sys_context('userenv','session_user') " \
                   "and table_name not like 'A!_%' escape '!'").inject([]) do |tabs, t|
          tabs << t.to_a.first.last
        end
      end

      # TODO: think of some way to automatically create the rails_temp_index_table
      #
      # Table created by:
      #   create table rails_temp_index_table as
      #   SELECT lower(i.index_name) as index_name, i.uniqueness,
      #          lower(c.column_name) as column_name, i.table_name
      #   FROM all_indexes i, user_ind_columns c
      #   WHERE c.index_name = i.index_name
      #     AND i.owner = sys_context('userenv','session_user')
      #     AND NOT EXISTS (SELECT uc.index_name FROM user_constraints uc
      #                     WHERE uc.constraint_type = 'P'
      #                       AND uc.index_name = i.index_name);
      def indexes(table_name, name = nil) #:nodoc:
        result = select_all(<<-SQL, name)
          SELECT index_name, uniqueness, column_name
          FROM rails_temp_index_table
          WHERE table_name = '#{table_name.to_s.upcase}'
          ORDER BY index_name
        SQL

        current_index = nil
        indexes = []

        result.each do |row|
          if current_index != row['index_name']
            indexes << IndexDefinition.new(table_name, row['index_name'],
                                           row['uniqueness'] == "UNIQUE", [])
            current_index = row['index_name']
          end
          indexes.last.columns << row['column_name']
        end

        indexes
      end
    end
  end
end
```
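The grouping in `indexes` relies on the rows arriving ordered by index name: whenever the name changes, a new index definition is started, and each row appends its column to the current one. A self-contained sketch of just that logic, with stub rows and an `IndexDef` struct standing in for Rails' `IndexDefinition`:

```ruby
# Stand-in for ActiveRecord's IndexDefinition, for illustration only.
IndexDef = Struct.new(:table, :name, :unique, :columns)

# Group rows ordered by index_name into index definitions, the same way
# the adapter method does: a name change starts a new definition, and
# every row contributes its column to the most recent one.
def group_indexes(table_name, rows)
  current = nil
  defs = []
  rows.each do |row|
    if current != row['index_name']
      defs << IndexDef.new(table_name, row['index_name'],
                           row['uniqueness'] == 'UNIQUE', [])
      current = row['index_name']
    end
    defs.last.columns << row['column_name']
  end
  defs
end

rows = [
  { 'index_name' => 'ix_email', 'uniqueness' => 'UNIQUE',    'column_name' => 'email' },
  { 'index_name' => 'ix_name',  'uniqueness' => 'NONUNIQUE', 'column_name' => 'last_name' },
  { 'index_name' => 'ix_name',  'uniqueness' => 'NONUNIQUE', 'column_name' => 'first_name' },
]
defs = group_indexes('users', rows)
# defs[0] is a unique index on [email];
# defs[1] is a non-unique index on [last_name, first_name]
```

This is also why the `ORDER BY index_name` in the query matters: without it, rows for the same index could be split across several definitions.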