About your two questions:
I can tell new developers that these structures are not normalized, but they can reply that it is faster. How do I counter that? Should I even counter it? Is this really how others build their databases?!
It may be faster, but that is not a given: when you decide to add extra information to a table (extra fields, in your case), you also pay a performance penalty, because the table becomes larger. That can mean more data moving between the server and clients, and more data to load into (or evict from) memory. Also, a field added to speed up queries is likely to carry one or more indexes, which again costs performance on every UPDATE and INSERT.

The main point, though, is what I hinted at in my comment: "cached" and "pre-computed" values make the system more fragile with respect to data integrity. Are you sure that "event_creator_id" always points to the real creator, even after someone changes the original value? If it does, that guarantee comes at a cost, both computationally (every table holding a copy must be updated when the creator changes) and in actual development and testing effort (are you sure nobody forgot to propagate a change into the pre-computed fields?).
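To make the fragility concrete, here is a minimal sketch (the schema and names `users`, `events`, `creator_name` are assumptions for illustration, not taken from your actual design) showing how a denormalized "cached" copy silently goes stale the moment someone updates the original value and forgets to propagate it:

```python
import sqlite3

# Hypothetical schema: "events" caches the creator's name instead of
# always joining to "users" by id.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE events (
        id INTEGER PRIMARY KEY,
        creator_id INTEGER REFERENCES users(id),
        creator_name TEXT            -- denormalized "cached" copy
    );
    INSERT INTO users  VALUES (1, 'alice');
    INSERT INTO events VALUES (10, 1, 'alice');
""")

# Someone renames the user but forgets to update the cached copy:
con.execute("UPDATE users SET name = 'alice-smith' WHERE id = 1")

real = con.execute(
    "SELECT u.name FROM events e JOIN users u ON u.id = e.creator_id"
).fetchone()[0]
cached = con.execute("SELECT creator_name FROM events").fetchone()[0]
print(real, cached)  # the cached copy no longer matches the source of truth
```

Nothing in the database errors out here; the inconsistency is only visible if someone happens to compare the two values, which is exactly what makes this class of bug expensive to find.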
The same applies to aggregate values, such as a "discounted price" or running totals, and changes to the source data are probably far more frequent than changes to the "event creator" information. Again, is there a proper cache-invalidation mechanism in place to ensure that total sales are recomputed whenever someone completes a sale? What about a returned item? Has anyone counted the cost of guaranteeing that integrity?
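If you do keep a pre-computed total, the maintenance mechanism must cover every path that changes the source data, sales and returns alike. A sketch of what that discipline looks like (schema and trigger names are assumed for illustration), using a trigger as the invalidation mechanism:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales  (product TEXT, qty INTEGER);  -- negative qty = return
    CREATE TABLE totals (product TEXT PRIMARY KEY, total INTEGER);
    INSERT INTO totals VALUES ('widget', 0);

    -- The maintenance mechanism: fires on every sale AND every return.
    -- Forget one path (say, DELETE or UPDATE on sales) and the cached
    -- total silently drifts from the truth.
    CREATE TRIGGER sales_ins AFTER INSERT ON sales
    BEGIN
        UPDATE totals SET total = total + NEW.qty
        WHERE product = NEW.product;
    END;
""")

con.execute("INSERT INTO sales VALUES ('widget', 5)")
con.execute("INSERT INTO sales VALUES ('widget', -2)")  # a returned item
total = con.execute(
    "SELECT total FROM totals WHERE product = 'widget'"
).fetchone()[0]
print(total)  # 5 - 2 = 3
```

Note that this trigger only handles INSERT; a complete solution would also need triggers for UPDATE and DELETE on `sales`, which is precisely the hidden development and testing cost the paragraph above is asking about.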
Running totals and other derived values should be implemented as views instead, so that any caching is done by the actual DBMS machinery, which knows how to handle it correctly.
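A minimal sketch of that approach (table and view names are assumptions): the totals are exposed through a VIEW, so they are always derived from the source rows at query time and can never go stale, no matter how the underlying data changes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (product TEXT, qty INTEGER);

    -- Derived value lives in a view, not in a cached column:
    CREATE VIEW product_totals AS
        SELECT product, SUM(qty) AS total
        FROM sales
        GROUP BY product;
""")

con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("widget", 5), ("widget", -2), ("gadget", 7)])

rows = dict(con.execute("SELECT product, total FROM product_totals"))
print(rows)  # widget: 3, gadget: 7 -- always consistent with "sales"
```

If a plain view later proves too slow, a materialized view (on DBMSes that support it) gives you the cached copy while leaving the refresh bookkeeping to the database engine rather than to application code.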
Is there a rule of thumb, or a set of principles, that I can use to say "oh, it will be slower, but only by 1%, so it's okay to do it like that", etc.?
A database (or, arguably, any computing system) should be "correct first", and only then made "fast enough, second". Trading correctness for speed is a decision you should not make while designing the database unless you already know that timeliness matters more than correctness; that is, unless your requirements explicitly state that serving wrong or outdated information is less harmful than a slow response.
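Part of "fast enough, second" is measuring before denormalizing. A minimal sketch (the schema, row counts, and the helper `lookup` are all illustrative assumptions): time the normalized join first, and only if the number actually violates a stated response-time requirement is trading correctness for speed even worth discussing:

```python
import sqlite3
import timeit

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE events (id INTEGER PRIMARY KEY, creator_id INTEGER);
""")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(i, f"user{i}") for i in range(1000)])
con.executemany("INSERT INTO events VALUES (?, ?)",
                [(i, i % 1000) for i in range(10000)])

def lookup():
    # The normalized query a cached creator_name column would replace.
    return con.execute(
        "SELECT u.name FROM events e "
        "JOIN users u ON u.id = e.creator_id WHERE e.id = 42"
    ).fetchone()[0]

# Measure first; compare against the actual requirement, not a hunch.
elapsed = timeit.timeit(lookup, number=1000)
print(f"{elapsed:.4f} s for 1000 joined lookups")
```

On any modern machine an indexed join like this runs in microseconds, which is usually the end of the "but it's faster" argument.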
In other words: designing a table with redundant cached information is yet another example of premature optimization, and it should be avoided at all costs.
See also this - especially the answers.