NLS is a recurring requirement, and people who ask for NLS functionality often don't realize the complexity involved. NLS is usually divided into (at least) two areas:
- NLS of the user interface, and
- NLS of the data (the content).
In your case, a content-based website, you could even split that second area further into:
- data created by the site provider, and
- data created by the users.
For the UI NLS you can use the .resx mechanism that Mehrdad mentioned, but be aware that every localization effort then means touching the source (i.e. the .resx files).
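For illustration, here is a minimal sketch of reading a UI text through System.Resources.ResourceManager (assuming C#; "MyWebApp.Labels" and "PageTitle" are made-up names, not from the original code):

    using System.Globalization;
    using System.Reflection;
    using System.Resources;

    // Hypothetical resource file "Labels.resx", with e.g. "Labels.de.resx" for German.
    var resources = new ResourceManager("MyWebApp.Labels",
                                        Assembly.GetExecutingAssembly());

    // GetString looks up the resources for the given culture and falls back
    // to the neutral Labels.resx when no translation exists.
    string title = resources.GetString("PageTitle",
                                       CultureInfo.GetCultureInfo("de-DE"));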
When I had to develop a multi-language web application, I therefore decided to handle the NLS requirement in data and created a couple of NLS-specific tables that mirror the UI (by the way, that was the motivation for writing graspx: it extracts all visible texts from an aspx source, such as Label.Text etc.). A separate application exports the UI definitions so translators can do their work; the main application has an import function for the translated texts.
The data model is quite simple: Page - PageItems - PageItemTexts (the last with a link to the language).
The same model can be applied to the content: instead of Pages and PageItems you simply have ContentItems, which hold only a PK and an identifier, plus a table with the ContentItem texts linked to the language.
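As a rough illustration (not the original schema; class and column names are invented here), the two models could look like this in C#:

    // Hypothetical C# mirror of the tables described above.

    // UI texts: Page -> PageItem -> PageItemText (one row per language)
    public class Page         { public int PageId { get; set; }     public string Name { get; set; } }
    public class PageItem     { public int PageItemId { get; set; } public int PageId { get; set; } public string ControlKey { get; set; } }
    public class PageItemText { public int PageItemId { get; set; } public string LanguageCode { get; set; } public string Text { get; set; } }

    // Content texts: ContentItem is only a PK plus an identifier;
    // the actual text lives in ContentItemText, one row per language.
    public class ContentItem     { public int ContentItemId { get; set; } public string Identifier { get; set; } }
    public class ContentItemText { public int ContentItemId { get; set; } public string LanguageCode { get; set; } public string Text { get; set; } }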
In addition, you can define some kind of language fallback chain, so that text that has not yet been translated is shown in the original language or in some other (closely related) language.
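A minimal sketch of such a fallback lookup, assuming the translated texts have been loaded into a dictionary keyed by language code (the method name and the "[missing translation]" placeholder are just illustrative):

    using System.Collections.Generic;

    public static class NlsFallback
    {
        // Walks a fallback chain (e.g. "fr-BE", "fr", "en") and returns the first
        // translation found; the last entry would normally be the original language.
        public static string ResolveText(
            IDictionary<string, string> textsByLanguage,   // language code -> translated text
            params string[] languageChain)
        {
            foreach (var language in languageChain)
            {
                if (textsByLanguage.TryGetValue(language, out var text))
                    return text;
            }
            return "[missing translation]";                // or log / throw, as you prefer
        }
    }

Called as NlsFallback.ResolveText(texts, "fr-BE", "fr", "en"), it returns the Belgian French text if present, then plain French, then English.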
The displayed language can be selected based on what the browser supplies (HTTP_ACCEPT_LANGUAGE), but the user should be able to override it (for example via a combobox). The selected language can then be stored in a session variable, in a cookie, or in the database (for registered users).
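A sketch of that selection logic for classic ASP.NET (System.Web), assuming a code-behind class and an invented "Language" session key:

    using System.Web.UI;

    public partial class MyPage : Page   // hypothetical code-behind class
    {
        // An explicit user choice (stored in the session) wins; otherwise use the
        // first language the browser advertises via HTTP_ACCEPT_LANGUAGE.
        private string GetCurrentLanguage()
        {
            if (Session["Language"] is string chosen)
                return chosen;

            var browserLanguages = Request.UserLanguages;  // e.g. { "de-DE", "en;q=0.8" }
            if (browserLanguages != null && browserLanguages.Length > 0)
                return browserLanguages[0].Split(';')[0];

            return "en";                                   // hypothetical site default
        }

        // In the combobox's SelectedIndexChanged handler you would then do e.g.:
        // Session["Language"] = languageDropDown.SelectedValue;
    }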