Question about managing application builds

I am currently working on a fairly large project with a team distributed throughout the United States. Developers regularly commit code to the source repository. We have the following application builds (all managed automatically, with no manual steps):

  • Continuous integration build: a monitor watches the source repository for updates; when new code is committed, it builds the application and runs our unit test suite. If errors occur, the team receives email notifications.
  • Daily build: developers use this build to verify their bug fixes or new code on a real application server; if everything succeeds, the developer can mark the issue resolved.
  • Weekly build: testers verify the resolved-bug queue against this build. It is a more stable testing environment.
  • Current release build: used for demonstrations and as an open test platform for potential new users.
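The CI step above could be sketched roughly as follows. This is a minimal illustration, not the poster's actual monitor; the class and method names (`CiMonitor`, `runBuildAndUnitTests`, `emailTeam`) are my own assumptions, with the build runner and mail gateway left as stubs:

```java
// Hypothetical sketch of the CI monitor described above: poll the
// repository, and when the head revision moves, run the build and unit
// tests, emailing the team on failure.
public class CiMonitor {

    private String lastBuiltRevision = null;

    // True when the repository head differs from what we last built.
    public boolean needsBuild(String headRevision) {
        return headRevision != null && !headRevision.equals(lastBuiltRevision);
    }

    // One polling cycle: build and test only when new code has arrived.
    public String poll(String headRevision) {
        if (!needsBuild(headRevision)) {
            return "idle";
        }
        lastBuiltRevision = headRevision;
        boolean ok = runBuildAndUnitTests(headRevision);
        if (!ok) {
            emailTeam("Build of revision " + headRevision + " failed");
            return "failed";
        }
        return "passed";
    }

    // Stubs standing in for the real build runner and mail gateway.
    protected boolean runBuildAndUnitTests(String revision) { return true; }
    protected void emailTeam(String message) { System.out.println("MAIL: " + message); }
}
```

In a real monitor, `poll` would be driven by a timer or a repository hook; the point here is only the "build on change, notify on failure" shape.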

Each build refreshes its associated database. This cleans out the data and verifies that all database changes shipped with the new code are pulled in. One concern I hear from our testers is that the weekly build database should be pre-populated with specific expected test data rather than the more general data the developers work with. That seems like a legitimate concern/need, and it is something we are working on.
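One way to address the testers' request is to choose the seed data set per build environment when the database is refreshed. A minimal sketch, assuming a simple environment-to-scripts mapping; all environment and script names below are hypothetical and do not reflect the poster's actual setup:

```java
import java.util.List;
import java.util.Map;

// Sketch: pick a seed data set per build environment, so the weekly
// (tester) build loads curated fixtures rather than the generic data
// developers work with. All names here are illustrative.
public class DatabaseSeeder {

    private static final Map<String, List<String>> SEED_SETS = Map.of(
        "integration", List.of("minimal-schema-only.sql"),
        "daily",       List.of("generic-dev-data.sql"),
        "weekly",      List.of("tester-fixtures.sql", "known-bug-cases.sql"),
        "release",     List.of("demo-data.sql")
    );

    // Returns the ordered list of seed scripts for an environment;
    // unknown environments get an empty list rather than failing.
    public static List<String> seedScriptsFor(String environment) {
        return SEED_SETS.getOrDefault(environment, List.of());
    }
}
```

The database refresh step would then run `seedScriptsFor(env)` after applying schema changes, so each build line keeps its own expected data.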

I am laying out what we do to see whether the SO community spots any gaps or problems in our process. Everything seems to work well, but it FEELS as if it could be better. Your thoughts?

java build-process testing application-server
3 answers

The next step would be that once the release build passes its tests (a smoke test, say), it qualifies as a good build (a "golden build", say), and you use some kind of tagging mechanism to mark all the artifacts (code, setup scripts, makefiles, installers, etc.) that went into creating that golden image. A golden build may or may not later become a release candidate.

Perhaps you are already doing this; since you did not mention it, I have added what I observed.
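The tagging idea above could be as simple as recording a checksum manifest of every artifact in the golden image. A rough sketch, assuming the class name and manifest shape are my own invention rather than any particular tool's format:

```java
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

// Sketch: identify a "golden build" by recording a SHA-256 checksum of
// every artifact (code, setup scripts, makefiles, installers) that went
// into it, so the exact inputs can be audited or reproduced later.
public class GoldenBuildManifest {

    // Hex-encoded SHA-256 digest of a single artifact's bytes.
    public static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(data)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // Maps artifact name to checksum; TreeMap keeps the manifest in a
    // stable sorted order, so two runs over the same inputs compare equal.
    public static Map<String, String> build(Map<String, byte[]> artifacts) throws Exception {
        Map<String, String> manifest = new TreeMap<>();
        for (Map.Entry<String, byte[]> entry : artifacts.entrySet()) {
            manifest.put(entry.getKey(), sha256Hex(entry.getValue()));
        }
        return manifest;
    }
}
```

In practice you would also tag the source revision itself (for example with a version-control tag), but a manifest like this covers generated artifacts that live outside the repository.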


That is almost the way we do it. The testers' own database is only reset on request. If we refreshed it automatically every week, then:

  • we would lose evidence of bug symptoms; if a bug is found but a developer only looks at it a few weeks later (or just after the weekend), all traces of that bug may have disappeared.
  • testers may be in the middle of a large test case (one lasting more than a day, for example).
  • we have tons of unit tests that run against a database, which is refreshed (automatically, of course) every time the integration build runs.

Yours faithfully,
Stein


I think you have a good, comprehensive process, provided it keeps up with how often your customers want to see updates. One possible drawback I see is that it seems you cannot get a critical fix for a customer-reported bug into production in less than a week, since your test builds are weekly and the testers then need time to verify the fix.

If you would like to look at this differently, have a look at this article on continuous deployment; the concept may be a little hard to accept at first, but it definitely has some potential.

