How to get around the Lotus Notes 60 GB database barrier

Are there any ways to get around the upper size limit of a Lotus Notes database? We are compiling a database that is steadily approaching 60 gigabytes. Thank you very much if you can offer any advice.

+4
10 answers

Even if you could find a way to overcome the 64 GB limit, it would not be the recommended solution. Splitting the application into multiple databases is much better if you want to keep performance up and your Domino server stable. If you think you need everything in one database just to be able to search it, look up domain search and multi-database search in the Domino Administrator help.

  • Perhaps some parts of the data are “old” and can be placed in one or more archive databases? (See the sketch after this list.)

  • Perhaps you have a lot of large attachments and you can store them in a series of attachment databases?

  • Maybe you have a lot of complex views that can be optimized or eliminated, saving a lot of space while still keeping everything in one database? (Remove sorting on columns where it is not needed; user-sortable “click on the column heading to sort” columns are a surefire way to increase the size of the view index.)
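
For the archiving approach, here is a minimal LotusScript sketch, assuming an archive database named archive.nsf on the same server and a two-year cutoff; both are placeholders you would adapt to your application:

    Sub ArchiveOldDocs
        Dim session As New NotesSession
        Dim db As NotesDatabase
        Dim archive As NotesDatabase
        Dim coll As NotesDocumentCollection
        Dim doc As NotesDocument
        Dim nextDoc As NotesDocument

        Set db = session.CurrentDatabase
        ' "archive.nsf" is a placeholder; point this at your real archive database
        Set archive = session.GetDatabase(db.Server, "archive.nsf")

        ' Collect every document created more than two years ago
        Set coll = db.Search({@Created < @Adjust(@Today; -2; 0; 0; 0; 0; 0)}, Nothing, 0)

        Set doc = coll.GetFirstDocument
        While Not doc Is Nothing
            Set nextDoc = coll.GetNextDocument(doc)
            Call doc.CopyToDatabase(archive)   ' copy first ...
            Call doc.Remove(True)              ' ... then delete the original
            Set doc = nextDoc
        Wend
    End Sub

Run something like this from a scheduled agent; the copy-then-delete pattern is the whole trick, and the same idea extends to tiered 1-, 2- and 5-year archives.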

+8

I assume your database is large because of attachments. In that case, look at DAOS - it stores all file attachments in the file system (it is server functionality, transparent to clients and existing applications).

As a bonus, it detects duplicate attachments and stores each one only once.
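
For reference, once DAOS is enabled in the server document (it requires transaction logging), converting an existing database is typically a console job; mydb.nsf is a placeholder, and the exact flags are worth verifying in the Administrator help for your version:

    load compact mydb.nsf -c -daos on
    tell daosmgr status

The copy-style compact moves each attachment out of the NSF and into the DAOS repository; the second command reports DAOS status.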

More details here: http://www.ibm.com/developerworks/lotus/library/domino-green/

+7

Just a shot in the dark:

Use DB2 as the storage method (NSFDB2) instead of standard NSF files?

+3

I assume that 80-90% of this space is taken up by file attachments. My suggestion is to move all attachments to a file share, if everyone can access that resource, or to an FTP server that everyone can connect to.

This is not ideal, because security becomes a problem: you now have to manage credentials in both the Notes database and the external file store. However, it can be worth the effort from a Notes administrator's point of view.

In the Notes documents, simply provide a link to the file. If users add these files through a Notes form, you could add some background code that extracts the file from the document after it is saved and replaces it with a link to the file; a sketch of that idea follows.
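
A minimal LotusScript sketch of that post-save step, assuming the attachments live in a rich text field named "Body" and that \\fileserver\attachments is a share everyone can read (both names are placeholders):

    %Include "lsconst.lss"   ' defines EMBED_ATTACHMENT

    Sub DetachAndLink(doc As NotesDocument)
        Dim rtitem As Variant
        Dim target As String

        Set rtitem = doc.GetFirstItem("Body")
        If rtitem Is Nothing Then Exit Sub
        If IsEmpty(rtitem.EmbeddedObjects) Then Exit Sub

        Forall obj In rtitem.EmbeddedObjects
            If obj.Type = EMBED_ATTACHMENT Then
                target = "\\fileserver\attachments\" & doc.UniversalID & "_" & obj.Name
                Call obj.ExtractFile(target)   ' copy the attachment out to the share
                Call rtitem.AddNewLine(1)
                Call rtitem.AppendText("File moved to: " & target)
                Call obj.Remove                ' drop the attachment from the document
            End If
        End Forall

        Call doc.Save(True, False)
    End Sub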

+3

64 GB is not really an absolute limit; you can go above it. I have seen 80 GB and even close to 100 GB, although once you are past 64 GB you can run into problems at any time. The limit is not really Notes itself but the underlying file system (I have seen it hit on the AS/400). The great thing about Notes is that even after a major crash you can still access all the documents and pull them out into a new copy of the database, even if you can no longer open the views in the client.

Regular archiving is best. If something is a file attachment more than two years old, it does not have to be in the main system; keep just a brief summary and a link. You can even have a 5-year archive, a 2-year archive, a 1-year archive, and so on. The data will keep accumulating and has to be managed, no matter which platform you use to store it.

+3

If the problem really is large file attachments, I would certainly recommend that you look into implementing DAOS on your server/database. It is only available with Domino server 8.5 and later. Separately, if your database contains more than 100,000 documents, you should seriously consider splitting the data into several NSFs: with that many documents you need to be very careful about your view design, search code, and so on.

Some documented successes with DAOS: http://www.edbrill.com/ebrill/edbrill.nsf/dx/yet-another-daos-success-story-from-darren-duke?opendocument&comments

+2

If your database is getting up to 60 GB, Domino may not be the right solution, and you may need to switch to a relational database. Otherwise, you need to archive documents or split them across multiple databases. Although you can reach 60 GB, you should not: the impact on an active database is significant. For static databases it is less of a problem.

+1

I would also look at removing unnecessary views and their indexes. View indexes can take up 80-90% of the disk space. If you cannot delete them, simplify their sort orders and formulas and remove unnecessary column sorting options. I cut 50 gigabytes down to 25 gigabytes with a few simple changes like this, and almost no one noticed.

+1

One approach could be, for once, to start with the users. Do all of them really need constant access to all of this data? If not, it is time to split or archive. If so, there is probably a flaw in the application's design.

Technically, I would add to the previous answers a suggestion to look at the many parameters of compact. Quick and dirty: drop all the view indexes, but remember to rebuild at least the one for the default view if you don't want your users to riot. See updall.
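
A sketch of what that looks like on the server console; mydb.nsf is a placeholder, and the flags are worth double-checking in the Domino Administrator help for your version:

    load compact mydb.nsf -c -D
    load updall mydb.nsf -R

Here -D discards the built view indexes during a copy-style compact, and updall -R rebuilds the used views afterwards.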

+1

One more thing to check: make sure you have ticked

[x] Use LZ1 compression for attachments

in the database properties.
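
As far as I know (verify in the Administrator help), that checkbox only affects attachments saved after it is set; to compress existing attachments, a copy-style compact with the LZ1 option should do it (mydb.nsf is a placeholder):

    load compact mydb.nsf -c -ZU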

0
