How do you manage the underlying codebase for a versioned API?

I have read about REST API versioning strategies, and something none of them address is how you manage the underlying codebase.

Suppose we make some breaking changes to an API - for example, we change the Customer resource so that it returns separate forename and surname fields instead of a single name field. (In this example I will use the URL versioning solution because it makes the concepts easy to follow, but the question applies equally to content negotiation or custom HTTP headers.)

Now we have an endpoint at http://api.mycompany.com/v1/customers/{id} and another, incompatible endpoint at http://api.mycompany.com/v2/customers/{id}. We are still releasing security patches and bug fixes for API v1, but new feature development now focuses on v2. How do we write, test, and deploy changes to our API server? I see at least two solutions:

  • Branch or tag the v1 codebase in source control. v1 and v2 are developed and deployed independently of each other, with fixes cherry-picked across branches where necessary to apply the same fix to both versions - similar to how you manage codebases for native applications when developing a new major version while still maintaining the previous one.

  • Make the codebase itself aware of the API versions, so you end up with a single codebase that contains both the v1 customer representation and the v2 customer representation. Treat versioning as part of the solution architecture rather than a deployment problem - perhaps using some combination of namespaces and routing to make sure each request is handled by the correct version of the code.
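To make the second option concrete, here is a minimal sketch (in Python, with made-up handler and store names - the question itself does not prescribe any particular stack) of how a routing table could keep both client views in one codebase:

```python
# Sketch of version-aware routing in a single codebase.
# Each API version registers its own view of the Customer resource;
# a tiny router dispatches on the version segment of the URL path.

def customer_v1(customer):
    # v1 view: a single combined "name" field.
    return {"id": customer["id"],
            "name": f"{customer['forename']} {customer['surname']}"}

def customer_v2(customer):
    # v2 view: separate forename/surname fields.
    return {"id": customer["id"],
            "forename": customer["forename"],
            "surname": customer["surname"]}

# Version -> handler table. Adding v3 is one more entry;
# retiring v1 means deleting one entry (plus its tests).
ROUTES = {
    ("v1", "customers"): customer_v1,
    ("v2", "customers"): customer_v2,
}

def handle(path, store):
    # path looks like "/v1/customers/42"
    _, version, resource, obj_id = path.split("/")
    handler = ROUTES[(version, resource)]
    return handler(store[int(obj_id)])
```

In a real service the table would be built by a web framework's routing layer, but the shape of the problem is the same: one codebase, one dispatch point per version.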

The obvious advantage of the branching model is that it is trivial to retire old API versions - just stop deploying the corresponding branch/tag - but if you are running several versions at once, you can end up with a really convoluted branch structure and deployment pipeline. The unified codebase model avoids that problem, but (I think?) makes it much harder to remove deprecated resources and endpoints from the codebase when they are no longer needed. I know this is probably subjective and there is unlikely to be a single right answer, but I am curious how organizations that maintain complex APIs across several versions solve this problem.

+76
rest versioning
Apr 25 '15 at 23:08
3 answers

I have used both of the strategies you mention. Of the two, I favor the second approach, as it is simpler in the use cases that support it. That is, if the versioning needs are simple, go with the simpler software design:

  • A small number of changes, low-complexity changes, or a low-frequency change schedule.
  • Changes that are largely orthogonal to the rest of the codebase: the public API can coexist peacefully with the rest of the stack without requiring “excessive” (for whatever definition of that term you choose to adopt) branching in the code.

I found it quite manageable to remove deprecated versions under this model:

  • Good test coverage meant that ripping out a retired API and its associated backing code ensured no (well, minimal) regressions
  • A good naming strategy (package names including the API version, or, slightly uglier, API versions in method names) made it easy to locate the relevant code
  • Cross-cutting concerns are harder; modifications to underlying backend systems in order to support multiple APIs have to be weighed carefully. At some point, the cost of versioning the backend (see the comment about “excessive” above) outweighs the benefit of a single codebase.

The first approach is certainly simpler from the perspective of reducing conflict between coexisting versions, but the overhead of maintaining separate systems tends to outweigh the benefit of reducing that conflict. That said, it was dead easy to stand up a new public API stack and start iterating on a separate API branch. Of course, generational loss set in almost immediately, and the branches devolved into a mess of merges, merge-conflict resolutions, and other such fun.

A third approach works at the architectural level: adopt a variant of the Facade pattern and abstract your APIs into public-facing, versioned layers, each of which talks to the appropriate Facade instance, which in turn talks to the backend through its own set of APIs. Your Facade (I used an Adapter in my previous project) becomes its own package, self-contained and testable, and it lets you migrate front-end APIs independently of the backend and of each other.
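A minimal sketch of this facade arrangement (Python is used purely for illustration, and the class and record names are hypothetical - the answer does not name a specific implementation):

```python
# Sketch of the facade approach: each public API version gets its own
# facade over a single backend service, so the versions can evolve
# independently of the backend and of each other.

class CustomerBackend:
    """The backend speaks its own internal model, unaware of API versions."""
    def __init__(self, records):
        self._records = records

    def fetch(self, customer_id):
        return self._records[customer_id]

class CustomerFacadeV1:
    """Public v1 representation: a single combined name field."""
    def __init__(self, backend):
        self._backend = backend

    def get_customer(self, customer_id):
        rec = self._backend.fetch(customer_id)
        return {"id": rec["id"],
                "name": f"{rec['forename']} {rec['surname']}"}

class CustomerFacadeV2:
    """Public v2 representation: separate forename/surname fields."""
    def __init__(self, backend):
        self._backend = backend

    def get_customer(self, customer_id):
        rec = self._backend.fetch(customer_id)
        return {"id": rec["id"],
                "forename": rec["forename"],
                "surname": rec["surname"]}
```

Because each facade is its own self-contained unit, retiring v1 means deleting CustomerFacadeV1 and its tests without touching the backend or v2.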

This works well if your API versions expose the same resources but with different structural representations, as in your fullname versus forename/surname example. It gets a little harder if they come to rely on different backend computations, as in: “My backend service returned incorrectly calculated compound interest that has been exposed in public API v1. Our customers have already patched around this incorrect behavior. Therefore, I cannot update that calculation in the backend and have it apply to v2; I now have to fork my interest-calculation code.” Luckily, such cases tend to be infrequent: in practice, consumers of RESTful APIs favor accurate resource representations over bug-for-bug backward compatibility, even across breaking changes to a theoretically idempotent GET resource.

I will be interested to hear what you end up deciding.

+34
May 26 '15 at 18:31

For me, the second approach is better. I have used it for SOAP web services and plan to use it for REST as well.

As you write it, the codebase should be version-aware, but the compatibility logic can live in a separate layer. In your example, the codebase would produce a resource representation (JSON or XML) with forename and surname fields, and the compatibility layer would transform it so that only a single name field appears instead.

The codebase should implement only the latest version, say v3. The compatibility layer should convert requests and responses between the latest version, v3, and the supported older versions, such as v1 and v2. The compatibility layer can have a separate adapter for each supported version, and the adapters can be connected in a chain.

For example:

Client v1 request: v1-to-v2 adapter ---> v2-to-v3 adapter ----> codebase

Client v2 request: v1-to-v2 adapter (skipped) ---> v2-to-v3 adapter ----> codebase

For responses, the adapters simply work in the opposite direction. If you are using Java EE, you could use the servlet filter chain as the adapter chain.
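Here is a rough Python sketch of such a response-adapter chain (the v3 title field and all function names are invented for illustration; the answer's actual suggestion for Java EE is servlet filters):

```python
# Sketch of a chained-adapter compatibility layer. The codebase implements
# only the latest version (a hypothetical v3 that adds a "title" field);
# each adapter converts between two adjacent versions, and a response to
# an older client passes through every adapter between v3 and that client.

def codebase_get_customer(customer_id, store):
    # The only real implementation: the v3 representation.
    rec = store[customer_id]
    return {"id": rec["id"], "title": rec["title"],
            "forename": rec["forename"], "surname": rec["surname"]}

def v3_to_v2(response):
    # v2 clients do not know about the title field.
    response = dict(response)
    response.pop("title")
    return response

def v2_to_v1(response):
    # v1 clients expect a single combined name field.
    return {"id": response["id"],
            "name": f"{response['forename']} {response['surname']}"}

# Response adapters per client version, applied newest-to-oldest.
ADAPTERS = {"v3": [], "v2": [v3_to_v2], "v1": [v3_to_v2, v2_to_v1]}

def get_customer(version, customer_id, store):
    response = codebase_get_customer(customer_id, store)
    for adapt in ADAPTERS[version]:
        response = adapt(response)
    return response
```

Note how dropping v1 support amounts to deleting v2_to_v1 and the "v1" entry in the table, exactly as the answer describes.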

Removing a version is easy: delete the corresponding adapter and its test code.

+9
Apr 7 '16 at 21:06

Branching looks much better to me, and I used that approach in my own case.

Yes, as you already mentioned, backporting bug fixes will take some effort - but at the same time, supporting multiple versions in a single codebase (with routing and all the rest of it) will require at least as much effort, while also making the system more complex and monstrous, with multiple branches of logic inside (at some point in the versioning you will probably end up with a huge case statement dispatching to version modules full of duplicated code, or even worse with if (version == 2) then ... checks scattered around). Also remember that for regression purposes you still have to keep the tests branched as well.
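For illustration, a hypothetical Python sketch of the scattered version checks this answer warns about (all names invented) - the smell is that the same check reappears in every layer that touches the resource:

```python
# The "branches of logic inside a single codebase" smell: version checks
# scattered through shared code instead of cleanly separated version
# modules (or, per this answer, separate source-control branches).

def render_customer(customer, version):
    body = {"id": customer["id"]}
    if version == 1:
        body["name"] = f"{customer['forename']} {customer['surname']}"
    else:
        body["forename"] = customer["forename"]
        body["surname"] = customer["surname"]
    return body

def validate_customer(payload, version):
    # ...and the same check shows up again in validation, persistence,
    # and the (necessarily duplicated) test suites.
    if version == 1:
        return "name" in payload
    return "forename" in payload and "surname" in payload
```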

Regarding the retention policy: I would keep at most two versions behind the current one, deprecating support for the older ones - that creates some motivation for users to move on.

+2
Jun 02 '15 at 17:50


