An important principle of denormalization is that it should not come at the expense of your normalized data. You should always start with a schema that accurately describes your data. That means putting different kinds of information in different tables, and declaring as many constraints on your data as you reasonably can.
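As a sketch of what that looks like in practice, here is a minimal normalized schema with hypothetical customers and orders tables; the names and columns are illustrative, but the point is that each kind of information lives in its own table and the constraints (NOT NULL, UNIQUE, foreign keys, CHECK) state what we know about the data.

    CREATE TABLE customers (
        customer_id  bigint PRIMARY KEY,
        name         varchar(200) NOT NULL,
        email        varchar(200) NOT NULL UNIQUE
    );

    CREATE TABLE orders (
        order_id     bigint PRIMARY KEY,
        customer_id  bigint NOT NULL REFERENCES customers (customer_id),
        ordered_at   timestamp NOT NULL,
        total_cents  integer NOT NULL CHECK (total_cents >= 0)
    );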
All of these goals typically make your queries a little longer, since you have to join tables together to get the information you need, but with well-chosen table and column names that extra length should not hurt readability.
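Continuing the hypothetical schema above, the join that pulls the customer's name onto each order is a line or two longer than a single-table query, but with clear names it stays easy to read:

    SELECT o.order_id, o.ordered_at, c.name, c.email
    FROM orders AS o
    JOIN customers AS c ON c.customer_id = o.customer_id
    WHERE o.ordered_at >= DATE '2024-01-01';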
More importantly, these goals can affect performance, so you should monitor your actual workload to make sure the database is keeping up. If almost all of your queries return quickly and you have plenty of CPU headroom for more, you're done.
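How you monitor depends on your database. As one hedged example, on PostgreSQL with the pg_stat_statements extension enabled you can ask which statements account for the most total time (the column names below are from PostgreSQL 13 and later):

    SELECT query, calls, mean_exec_time, total_exec_time
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;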
If you find that writes are taking too long, denormalizing your data is the last thing you should do. It makes the database work harder to stay consistent, since every write turns into extra reads followed by still more writes. Instead, look at your indexes. Do you have indexes on columns you rarely query? Do you have the indexes needed to verify integrity during updates?
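For example, on PostgreSQL the statistics views can show indexes that are almost never read but still have to be maintained on every write; the index name in the DROP statement is hypothetical.

    -- Indexes with few or no scans are candidates for removal.
    SELECT schemaname, relname, indexrelname, idx_scan
    FROM pg_stat_user_indexes
    ORDER BY idx_scan ASC
    LIMIT 20;

    -- Dropping an index nobody queries makes every write to that table cheaper.
    DROP INDEX IF EXISTS orders_rarely_used_idx;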
If read queries are your bottleneck, then again start by looking at your indexes. Do you need to add an index or two to avoid table scans? If you simply cannot avoid scanning a table, is there anything you could do to make each row smaller, for example by shrinking the length of a varchar column, or by splitting rarely requested columns into a separate table that is joined in only when those columns are needed?
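Against the hypothetical schema above, both ideas look roughly like this: an index that turns a full-table scan into an index lookup, and a side table for a bulky, rarely requested column.

    -- Avoid scanning all of orders to find one customer's history.
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);

    -- Keep the hot table narrow by moving a rarely read, bulky column aside.
    CREATE TABLE order_notes (
        order_id  bigint PRIMARY KEY REFERENCES orders (order_id),
        notes     text
    );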
If there is one particular slow query that always uses the same join, that query may benefit from denormalization. First, make sure that reads of these tables greatly outnumber writes. Decide which columns you need to copy from one table into the other. You may want to give those columns slightly different names, so that it is obvious they are denormalized. Then change the write logic to update both the source table used in the join and the denormalized fields.
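As a sketch against the hypothetical schema above, suppose the slow query always joins orders to customers just to get the customer's name. The column name is an assumption, and the UPDATE ... FROM syntax is PostgreSQL-style.

    -- A clearly labelled denormalized copy of customers.name.
    ALTER TABLE orders ADD COLUMN denorm_customer_name varchar(200);

    -- Backfill once from the source table.
    UPDATE orders o
    SET denorm_customer_name = c.name
    FROM customers c
    WHERE c.customer_id = o.customer_id;

    -- The write path now touches both tables whenever the name changes.
    UPDATE customers SET name = 'New Name' WHERE customer_id = 42;
    UPDATE orders SET denorm_customer_name = 'New Name' WHERE customer_id = 42;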
It is important to note that you do not delete the old table. The trouble with denormalized data is that, while it speeds up the particular query it was designed for, it tends to complicate everything else. In particular, writes have to do more work to keep the data consistent, whether that means copying data from table to table, adding extra subqueries to check that the data is correct, or jumping through other kinds of hoops. By keeping the original table, you leave all of your old constraints in place, so at least the source columns are always valid. If you ever find that the denormalized columns have drifted out of sync, you can fall back to the original, slower query, which is still correct, and then work out how to repair the denormalized data.
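Keeping the source table is what makes drift detectable and repairable. Continuing the hypothetical denormalized column above, PostgreSQL-style SQL for both steps might look like this:

    -- Find rows where the denormalized copy no longer matches the source.
    SELECT o.order_id
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.denorm_customer_name IS DISTINCT FROM c.name;

    -- Repair by re-copying from the source table, whose constraints still hold.
    UPDATE orders o
    SET denorm_customer_name = c.name
    FROM customers c
    WHERE c.customer_id = o.customer_id
      AND o.denorm_customer_name IS DISTINCT FROM c.name;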