Where to store translations in cloud applications?

I am currently building an application on an architecture running in the Amazon cloud (several web servers with PHP 5.3, load balancing, PostgreSQL).

A key feature of my (PHP 5) application is that everything in the interface must be translated into different languages, so there will be many strings, each represented by a "token" that has to be translated.

My question is: where do you store these translations?

  • Save translations in files on local (web servers) drives?
  • Save translations in files on a central repository?
  • Keep translations in the database?
  • Somewhere else?

Additional information: regardless of where the translations are stored, there will be a caching layer in front of them (e.g. Redis), so the files / database will not be hit on every page view.

Each of the above solutions has pros and cons, and after many discussions within my team we have not found a solution that everyone is happy with.

Some of our thoughts:

  • Files are easier to maintain (update translations by overwriting files)
  • DB-Tables are more flexible (create a good translation interface around translation data)
  • DB tables are stored only once, so I think they are cheaper than many file copies in the cloud (we pay for storage and data transfer).
  • Central file storage might be a bottleneck

So what is your position?

1 answer

I would do both: keep the master copy of the language data in the database, which makes it easy to build a management application around it, and generate local files (or another local storage approach) from it for actual runtime use. Constantly reading language data from the database is wasteful, because language data is usually fairly static.
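A minimal sketch of the "generate local files from the master database copy" step. The directory layout and the `token => text` shape of the array are assumptions, not from the original; the `$strings` array would be read from the master translations table:

```php
<?php
// Sketch: dump one language's translations (token => text) into a PHP
// array file that each web server can include() cheaply at runtime.
// $strings would come from the master translations table in the DB.
function export_language($lang, array $strings, $dir)
{
    // var_export() emits valid PHP source that parses back into the array.
    $code = "<?php\nreturn " . var_export($strings, true) . ";\n";
    // LOCK_EX keeps readers from seeing a half-written file mid-update.
    file_put_contents("$dir/$lang.php", $code, LOCK_EX);
}
```

At runtime a server would simply do `$t = include "/var/app/lang/de.php";` and look tokens up in the returned array, with no database round trip.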

If you want the application to scale, you should build on at least three tiers. The decision at which tier to store a given piece of data should always be based on where it can be fetched most efficiently:

  • Static or effectively static data (languages, configuration, skins, ...) should be stored locally to guarantee the fastest possible access. You will need a way to build updated data and synchronize it across several servers (unless you only use local caches). Approaches include: rsync, unison, Redis replication, version control systems, ...

  • Dynamic but cacheable data should live in a shared cache, since it is typically rebuilt often and therefore benefits from being shared between servers.

  • The database should only be hit when you cannot avoid it (e.g. on a cache miss or when the cache is stale).
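The three tiers above can be sketched as a single lookup function. The cache and database callables are hypothetical stand-ins (in production they might wrap phpredis and PDO); the fallthrough order is the point:

```php
<?php
// Sketch of the tiered lookup: local file first, shared cache second,
// database last. $cache and $db are stand-in callables; a cache miss
// is signalled by returning false, mirroring Redis::get() semantics.
function lookup($token, array $local, $cache, $db)
{
    if (isset($local[$token])) {           // tier 1: local file, fastest
        return $local[$token];
    }
    $hit = call_user_func($cache, $token); // tier 2: shared cache
    if ($hit !== false) {
        return $hit;
    }
    return call_user_func($db, $token);    // tier 3: DB, last resort
}
```

Only tokens that miss both the local array and the shared cache ever reach the database, which is what keeps the database out of the per-page hot path.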

I would not worry too much about IO access costs. Scaling the database server or a central middle tier will be far more expensive than IO. And if IO still worries you, choose a local storage solution that relies heavily on RAM, so you can avoid reading from disk entirely and gain yet another performance boost.
