Django Unit Testing takes a very long time to create a test database

For some time, my unit tests have taken far longer than expected. I have tried to debug this several times without much success, since the delay occurs before my tests even start to run. This makes anything remotely close to test-driven development impractical (maybe my expectations are too high), so I want to see if I can fix it once and for all.

When starting a test run, there is a delay of 70 to 80 seconds before the tests actually begin executing. For example, if I run the tests for a small app (using time python manage.py test myapp), I get:

    <... bunch of unimportant print messages I print from my settings>
    Creating test database for alias 'default'...
    ......
    ----------------------------------------------------------------
    Ran 6 tests in 2.161s

    OK
    Destroying test database for alias 'default'...

    real    1m21.612s
    user    1m17.170s
    sys     0m1.400s

About 1m18s of the 1m21s total elapses between the

 Creating test database for alias 'default'... 

and

 ....... 

lines. In other words, the tests themselves take less than 3 seconds, but initializing the database seems to take about 1m18s.

I have about 30 applications, most of which have 1 to 3 database models, which should give an idea of the size of the project. I use SQLite for unit testing and have implemented some of the suggested improvements. I can't publish the entire settings file, but I am happy to add any information that is required.

I use a custom test runner:

    from django.test.runner import DiscoverRunner
    from django.conf import settings


    class ExcludeAppsTestSuiteRunner(DiscoverRunner):
        """Override the default django 'test' command, exclude from testing
        apps which we know will fail."""

        def run_tests(self, test_labels, extra_tests=None, **kwargs):
            if not test_labels:
                # No app names specified on the command line, so we run all
                # tests, but remove those which we know are troublesome.
                test_labels = (
                    'app1',
                    'app2',
                    ...
                )
            print('Testing: ' + str(test_labels))
            return super(ExcludeAppsTestSuiteRunner, self).run_tests(
                test_labels, extra_tests, **kwargs)

and in my settings:

 TEST_RUNNER = 'config.test_runner.ExcludeAppsTestSuiteRunner' 

I also tried using django-nose with django-nose-exclude.

I have read a lot about how to speed up the tests themselves, but have not found anything about optimizing or avoiding the database initialization. I have seen suggestions to avoid testing against the database entirely, but I cannot, or do not know how to, avoid it completely.

Please let me know if

  • This is normal and expected, or
  • Not expected (and, hopefully, pointers on how to fix it)

Again, I am not asking how to speed up the tests themselves, but the initialization (or overhead). I would like the example above to run in 10 seconds instead of 80.

Thank you very much

I ran the tests (for one app) with --verbosity 3 and found that almost all of the time is spent on migrations:

    Rendering model states... DONE (40.500s)
    Applying authentication.0001_initial... OK (0.005s)
    Applying account.0001_initial... OK (0.022s)
    Applying account.0002_email_max_length... OK (0.016s)
    Applying contenttypes.0001_initial... OK (0.024s)
    Applying contenttypes.0002_remove_content_type_name... OK (0.048s)
    Applying s3video.0001_initial... OK (0.021s)
    Applying s3picture.0001_initial... OK (0.052s)
    ... Many more like this

I squashed all my migrations, but it is still slow.

+29
python django django-unittest django-nose
Apr 7 '16 at 21:58
4 answers

The final solution that solved my problem was to force Django to skip migrations during testing, which can be done from the settings like this:

    import sys

    TESTING = 'test' in sys.argv[1:]

    if TESTING:
        print('=========================')
        print('In TEST Mode - Disabling Migrations')
        print('=========================')

        class DisableMigrations(object):

            def __contains__(self, item):
                return True

            def __getitem__(self, item):
                return "notmigrations"

        MIGRATION_MODULES = DisableMigrations()

or use https://pypi.python.org/pypi/django-test-without-migrations

My whole test suite now takes about 1 minute, and a small app about 5 seconds.

In my case, migrations are not needed for testing, since I update the tests as I migrate and do not use migrations to add data. This will not work for everyone.
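One caveat worth knowing: the snippet returns a fake module name, which works on older Django versions, but on Django 1.9 and later the per-app value in MIGRATION_MODULES must be None instead. A minimal sketch of that variant:

```python
import sys

# Variant for Django >= 1.9: returning None for every app tells Django
# there are no migrations, so tables are created directly from the
# current model definitions.
TESTING = 'test' in sys.argv[1:]


class DisableMigrations:
    """Claim that every app is covered, then resolve each app's
    migration module to None so no migrations are actually run."""

    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return None


if TESTING:
    MIGRATION_MODULES = DisableMigrations()
```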

+24
May 11 '16 at

Summary

Use pytest!

Steps

  • pip install pytest-django
  • Run pytest --nomigrations instead of ./manage.py test

Result

  • ./manage.py test takes 2 min 11.86 s
  • pytest --nomigrations takes 2.18 s

Advice

  • You can create a file called pytest.ini in the project root directory and specify default command-line options and/or Django settings.

      # content of pytest.ini
      [pytest]
      addopts = --nomigrations
      DJANGO_SETTINGS_MODULE = yourproject.settings

    Now you can run the tests by typing just pytest, which saves a little typing.

  • You can also speed up subsequent tests by adding --reuse-db to the default command line options.

      [pytest]
      addopts = --nomigrations --reuse-db

    However, once your database schema has been modified, you must run pytest --create-db once to force the test database to be re-created.

  • If you need to enable gevent monkey patching during testing, you can create a file called pytest in the project root directory with the following contents, make it executable (chmod +x pytest), and run ./pytest instead of pytest:

      #!/usr/bin/env python
      # -*- coding: utf-8 -*-
      # content of pytest
      from gevent import monkey
      monkey.patch_all()

      import os
      os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings")

      from django.db import connection
      connection.allow_thread_sharing = True

      import re
      import sys
      from pytest import main

      if __name__ == '__main__':
          sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
          sys.exit(main())

    You can create a test_gevent.py file to verify that the gevent monkey patching succeeded:

      # -*- coding: utf-8 -*-
      # content of test_gevent.py
      import time

      from django.test import TestCase
      from django.db import connection
      import gevent


      def f(n):
          cur = connection.cursor()
          cur.execute("SELECT SLEEP(%s)", (n,))
          cur.execute("SELECT %s", (n,))
          cur.fetchall()
          connection.close()


      class GeventTestCase(TestCase):
          longMessage = True

          def test_gevent_spawn(self):
              timer = time.time()
              d1, d2, d3 = 1, 2, 3
              t1 = gevent.spawn(f, d1)
              t2 = gevent.spawn(f, d2)
              t3 = gevent.spawn(f, d3)
              gevent.joinall([t1, t2, t3])
              cost = time.time() - timer
              self.assertAlmostEqual(cost, max(d1, d2, d3), delta=1.0,
                                     msg='gevent spawn not working as expected')


+20
Sep 30 '16 at 3:57

Use ./manage.py test --keepdb if there are no changes in the migration files.

+7
Sep 18 '17 at 13:43

Initializing the database really does take too much time...

I have a project with about the same number of models/tables (about 77) and about 350 tests, and it takes 1m9s to run everything, on a Vagrant machine with 2 CPUs and 2 GB of RAM allocated. I also use py.test with the pytest-xdist plugin to run several tests in parallel.

Another thing you can do is tell Django to reuse the test database and only re-create it when the schema changes. You can also use SQLite so that the tests run against an in-memory database. Both approaches are explained here: https://docs.djangoproject.com/en/dev/topics/testing/overview/#the-test-database
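A minimal sketch of the in-memory SQLite suggestion for a settings module; the project name and the non-test engine are placeholders:

```python
import sys


def databases_for(argv):
    """Return a Django DATABASES dict, swapping in an in-memory SQLite
    database whenever 'test' appears among the command-line arguments."""
    if 'test' in argv[1:]:
        return {
            'default': {
                'ENGINE': 'django.db.backends.sqlite3',
                # ':memory:' keeps the whole test database in RAM.
                'NAME': ':memory:',
            }
        }
    # Normal (non-test) database; engine and name are placeholders.
    return {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'myproject',
        }
    }


DATABASES = databases_for(sys.argv)
```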

EDIT: If none of the above options works, another option is to have your unit tests inherit from Django's SimpleTestCase, or to use a custom test runner that doesn't create a database, as described in this answer: django unit tests without db.

Then you can just mock Django's calls to the database, using a library like this one: https://github.com/stphivos/django-mock-queries

That way, you can run your unit tests quickly and let your CI server worry about running the integration tests that require a database, before merging your code into a stable dev/master branch that is not your production branch.
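The same idea can be sketched without any third-party package, using the standard library's unittest.mock in place of django-mock-queries (count_active and the fake user objects are illustrative):

```python
from unittest import mock


def count_active(users):
    """Code under test: counts users flagged as active in any iterable,
    so it works on a real queryset or on plain mock objects."""
    return sum(1 for user in users if user.is_active)


# Stand-ins for model instances: no database connection is ever touched.
fake_users = [
    mock.Mock(is_active=True),
    mock.Mock(is_active=False),
    mock.Mock(is_active=True),
]

assert count_active(fake_users) == 2
```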

+2
Apr 7 '16 at 22:20


