What changes require dependent assemblies to be redeployed?

At my workplace we deploy an internal application by replacing only the changed assemblies (not my idea).

We can determine which assemblies we need to deploy by checking whether the source files compiled into each assembly have changed. In most cases we do not need to redeploy assemblies that merely depend on an assembly that changed. However, we have found some cases where dependent assemblies must be redeployed even though their own source files did not change.

So far, we know that any of the following changes in an assembly require all dependent assemblies to be recompiled and redeployed:

  • Changing the value of a constant (const values are compiled into callers; see the sketch after this list)
  • Changing the definition of an enum (the order or underlying values of its members)
  • Changing the return type of a method when callers use var (sometimes)
  • Moving a class to a different existing namespace
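
To make the first two items concrete, here is a minimal sketch (the assembly, class, and member names are hypothetical). Because C# bakes const values and enum member values into the IL of every assembly that references them, redeploying only the library leaves stale literals in the callers:

  // --- LibraryAssembly (hypothetical), version 1 ---
  namespace LibraryAssembly
  {
      public static class Limits
      {
          // A const is embedded as a literal into every assembly
          // that references it at compile time.
          public const int MaxItems = 10;
      }

      public enum Status
      {
          // Enum members compile down to their underlying numeric values
          // in calling assemblies, so inserting or reordering members
          // changes what already-compiled callers mean.
          Unknown,   // 0
          Active,    // 1
          Disabled   // 2
      }
  }

  // --- ClientAssembly (hypothetical), compiled against version 1 ---
  namespace ClientAssembly
  {
      using LibraryAssembly;

      public class Consumer
      {
          public bool IsFull(int count)
          {
              // Compiles to "count >= 10", not to a field lookup, so if the
              // library later ships MaxItems = 20 and only the library is
              // redeployed, this code still compares against 10.
              return count >= Limits.MaxItems;
          }

          public bool IsActive(Status status)
          {
              // Compiles to a comparison against the literal 1; if a new
              // member is later inserted before Active, this check silently
              // matches the wrong value until ClientAssembly is rebuilt.
              return status == Status.Active;
          }
      }
  }

By contrast, a static readonly field is read at run time, so changing its value does not require redeploying callers; that is one common way to avoid this particular trap.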

Are there any other cases that we are missing? I am also open to arguments for why this whole approach is wrong (although it has been used for years).

Edit: To be clear, we always recompile everything, but we only deploy the assemblies whose source files have been modified.

So anything that breaks compilation will be picked up (renaming a method, etc.), since it requires changes in the calling code.

2 answers

First, where I work we have sometimes deployed only a few assemblies of the application instead of the full application. However, this is by no means the norm, and it was ONLY done in our test environments, when a developer had just (as in, within the last few minutes) published the entire site and only needed to make a small tweak. Even then, as soon as the developer is satisfied, they go ahead and do a complete recompilation and redeployment.

The final push to testing is always based on a complete recompile and deploy. The push toward production and, ultimately, the production release itself are based on that full copy.

Besides repeatability, one reason is that you really cannot be 100% sure that a person has not missed something in the comparison. Also, the amount of time it takes to deploy 100 assemblies versus 5 is trivial and, frankly, not worth the human time spent trying to figure out what really changed.

Quite frankly, the list you have, combined with Oded's answer, should be enough to convince others of the potential for failure. The fact that you have already run into problems because of this ill-founded approach should be a big enough warning flag that it does not continue.

In the end, it really comes down to a question of professionalism. Standardization and repeatability of the process of moving code from development, through the various hoops, and ultimately into production are extremely important for building robust, mission-critical applications. If your deployment process is failure-prone because shortcuts are taken that carry these kinds of risks, it calls into question the quality of the code being produced.


Here is another one:

Changing the default values of optional parameters.

Default values are compiled directly into the assemblies that call the method (when the caller does not specify the argument):

  public void MyOptMethod(int optInt = 5) {}

Calling code such as:

  theClass.MyOptMethod(); 

is effectively compiled as:

  theClass.MyOptMethod(5); 

If you change the method to:

  public void MyOptMethod(int optInt = 10) {} 

then you will need to recompile all dependent assemblies if you want them to use the new default value.


Additional changes that will require recompiling dependent assemblies (thanks, Polynomial):

  • Changes to generic type parameter constraints (see the sketch after this list)
  • Changes to method names (especially problematic when reflection is used, since even private methods may be looked up)
  • Changes to exception handling (a different exception type being thrown)
  • Changes to threading behavior
  • Etc., etc., etc.
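
As a hedged sketch of the first item (hypothetical names again): the CLR checks generic constraints when a method is actually instantiated, so tightening a constraint in the library can turn an already-compiled, previously valid call into a runtime failure if only the library is redeployed.

  // --- LibraryAssembly (hypothetical), version 1 ---
  namespace LibraryAssembly
  {
      public static class Repository
      {
          // Version 1: any reference type is accepted.
          // Version 2 tightens this to "where T : class, IDisposable".
          public static void Save<T>(T item) where T : class
          {
              // ... persist the item ...
          }
      }
  }

  // --- ClientAssembly (hypothetical), compiled against version 1 ---
  namespace ClientAssembly
  {
      using LibraryAssembly;

      public class ReportSaver
      {
          public void SaveReport(string report)
          {
              // Fine when compiled against version 1: string is a class.
              // If only version 2 of the library is redeployed, string no
              // longer satisfies the constraint, and the runtime rejects the
              // instantiation (typically with a TypeLoadException) instead of
              // the problem being caught at build time.
              Repository.Save(report);
          }
      }
  }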

So: always recompile (and redeploy) everything.

