1) Should I accept these disparities as the way of the world, or is there a reason for them?
Variations in speed may be related to other processes competing for disk I/O, so you end up waiting for resources. If it is a production server, and not an isolated test server, then of course some other processes will be requesting disk access.
2) I felt nervous about lumping the ~185000 row inserts into one query. Is there any reason I should avoid using one query for these inserts? I've not worked with this amount of data being saved at one time before.
You should also split the inserts into groups of X rows and run each group as a single transaction.
It is difficult to determine the value of X in any way other than experimentally.
Grouping the inserts into a transaction ensures that the data is written (committed) to disk only once per transaction, not after every individual insert (autocommit).
This is good for disk I/O, but grouping too many inserts into one transaction can hurt available memory: if the amount of uncommitted data grows too large for the available memory, the DBMS will start writing data to its internal log (on disk).
Thus, X depends on the number of inserts, the amount of data associated with each insert, the memory allowed per user/session, and much more.
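As a rough sketch of the idea (assuming MySQL, a two-column numeric TSV file of rows to load, credentials in ~/.my.cnf, and placeholder table/column names), each group of X rows is wrapped in its own transaction:

```bash
X=1000                               # group size "X" -- tune experimentally
split -l "$X" rows.tsv /tmp/chunk_   # one file per group of X rows
for f in /tmp/chunk_*; do
    {
        echo "START TRANSACTION;"
        # turn each TSV line into an INSERT; the whole group commits at once
        awk -F'\t' '{ printf "INSERT INTO measurements (sensor_id, value) VALUES (%s, %s);\n", $1, $2 }' "$f"
        echo "COMMIT;"
    } | mysql mydb
done
```

Increasing X means fewer commits (fewer forced flushes to disk) at the cost of more uncommitted data held per transaction, which is exactly the trade-off described above.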
There are some interesting (free) tools from Percona that help track database activity.
You can also watch vmstat: watch -n.5 'vmstat'
Observe how much data is written to disk, and how that changes, under normal production activity.
Then run your script and wait until you notice a step in the number of bytes written to disk. If the write rate steps up to a roughly constant level (above normal use), the DBMS is caching uncommitted data and swapping it out to disk; if the writes come in rhythmic bursts, it is writing only at commits.
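A minimal monitoring sketch, assuming a Linux host with vmstat and watch available; the column to keep an eye on is "bo" (blocks written to block devices):

```bash
# refresh a one-shot vmstat view every half second, as suggested above
watch -n.5 'vmstat'

# alternatively, have vmstat itself sample once per second; per-interval
# figures make a step change in the "bo" column easier to spot
vmstat 1
```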