In most cases, the performance hits (usually taken on a collision) are amortized over all calls, so for most realistic use you won't get O(n) on every call. In fact, the only case where you take an O(n) hit on every call is the pathological one where every key's hash collides with an existing key's hash (i.e. the worst possible, or most unfortunate, use of the hash table).
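To make the pathological case concrete, here is a hypothetical sketch (the BadKey class is illustrative, not from your code) of a key type whose hash always collides, forcing every dict operation to scan the entire collision chain:

    # Illustrative worst case: every key hashes to the same value,
    # so each insert/lookup must compare against all prior entries.
    class BadKey:
        def __init__(self, n):
            self.n = n
        def __hash__(self):
            return 1                       # constant hash: all keys collide
        def __eq__(self, other):
            return isinstance(other, BadKey) and self.n == other.n

    d = {BadKey(i): i for i in range(1000)}  # each insert is O(n), O(n^2) overall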
If, for example, you know your set of keys in advance and you know they won't produce hash collisions (i.e. all their hashes are unique), then you won't suffer the collision case. The other major O(n) operation is resizing the hash table, but how often that happens depends on the implementation (growth factor / hash function / collision-resolution scheme, etc.), and it will also vary from run to run depending on the input data set.
In any case, you can avoid sudden slowdowns if you can pre-populate the dict with all the keys; the values can simply be set to None and filled in with their real values later. The only noticeable performance hit should then come when "priming" the dict with the keys up front, and subsequent value insertion should take constant time.
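For example, a minimal sketch, assuming the key set is known up front (the key names are hypothetical):

    # Prime the dict once with all known keys; the table is sized up front,
    # so later assignments only overwrite existing slots.
    known_keys = ["alpha", "beta", "gamma"]   # hypothetical key set
    d = dict.fromkeys(known_keys)             # every value starts as None
    d["alpha"] = 42                           # constant-time update later on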
A completely different question is how you intend to read/query the structure. Do you need to attach individual values and access them by key? Does it need to be ordered? Perhaps a set would be more appropriate than a dict, since you don't actually need a key:value mapping.
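If membership testing is all you need, a set drops the value storage entirely; a quick sketch with hypothetical keys:

    seen = {"alpha", "beta", "gamma"}   # hypothetical keys, no values attached
    print("alpha" in seen)              # average O(1), same hashing machinery as dict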
Update:
Based on your description in the comments, this is starting to sound more like a job for a database, even if you are working with a temporary data set. You could use an in-memory relational database (e.g. with SQLite). Additionally, you could use an ORM such as SQLAlchemy to interact with the database in a more Pythonic way, without having to write SQL.
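A minimal sketch of the SQLAlchemy route, assuming a hypothetical Record model (the table and column names are illustrative, not taken from your code):

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.orm import declarative_base, Session

    Base = declarative_base()

    class Record(Base):                  # hypothetical mapped table
        __tablename__ = "records"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite:///:memory:")  # in-memory SQLite database
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(Record(id=1, name="alpha"))
        session.commit()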
It even sounds like you may already be reading the data from a database to begin with, so perhaps you can lean on it further?
Storing/querying/updating a huge number of typed, uniquely keyed records is exactly what an RDBMS is specialized for, backed by decades of development and research. Using the in-memory version of an existing relational database (such as SQLite's) is likely to be the more pragmatic and sustainable choice.
Try Python's built-in sqlite3 module, and use the in-memory version by passing ":memory:" as the database file path when connecting:
    import sqlite3

    con = sqlite3.connect(":memory:")
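From there, the usual DB-API calls work on the connection; a small sketch continuing from con (the table name and rows are illustrative, not from your data):

    con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
    con.executemany("INSERT INTO records VALUES (?, ?)",
                    [(1, "alpha"), (2, "beta")])   # hypothetical rows
    for row in con.execute("SELECT name FROM records WHERE id = ?", (1,)):
        print(row)                                 # -> ('alpha',)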