This is possible by running a Celery worker inside a Django test case.
Background
The in-memory Django test database is SQLite. As stated on the description page for in-memory SQLite databases, "[a]ll database connections sharing a database in memory should be in the same process." This means that as long as Django keeps the test database in memory and Celery runs in a separate process, it is fundamentally impossible for Celery and Django to share the test database.
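This constraint can be seen with Python's standard sqlite3 module alone. The following sketch (independent of Django; the URI name testdb is arbitrary) shows that a plain ':memory:' database is private to the connection that created it, while a shared-cache in-memory database is visible to other connections in the same process:

```python
import sqlite3

# A plain ':memory:' database is private to the connection that created it.
private = sqlite3.connect(':memory:')
private.execute('CREATE TABLE t (x INTEGER)')

other = sqlite3.connect(':memory:')
# 'other' has its own empty database and cannot see table 't'.
tables = other.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables)  # []

# A shared-cache in-memory database is visible to every connection
# in the same process that opens the same URI.
uri = 'file:testdb?mode=memory&cache=shared'
first = sqlite3.connect(uri, uri=True)
first.execute('CREATE TABLE t (x INTEGER)')

second = sqlite3.connect(uri, uri=True)
shared_tables = second.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(shared_tables)  # [('t', )]
```

Even the shared-cache variant only works within a single process, which is why the worker must run as a thread rather than a separate process.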
However, with celery.contrib.testing.worker.start_worker, you can start the Celery worker in a separate thread of the same process. Such a worker can access the in-memory database.
This assumes that Celery is already set up with the Django project in the usual way.
Decision
Since the Django/Celery combination relies on cross-thread communication, only tests that do not run in isolated transactions will work. The test case must inherit directly from SimpleTestCase (or its Django REST Framework equivalent, APISimpleTestCase) and set the class attribute allow_database_queries to True.
The key is to start the Celery worker in the setUpClass method of the TestCase and shut it down in tearDownClass. The central function is celery.contrib.testing.worker.start_worker(app), which requires an instance of the current Celery application, presumably obtained from mysite.celery.app, and returns a Python context manager with __enter__ and __exit__ methods that must be called in setUpClass and tearDownClass, respectively. There is probably a way to avoid entering the context manager manually, using a decorator or something similar, but I could not figure one out. The following is an example tests.py file:
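One way to avoid calling __enter__ and __exit__ by hand is to combine contextlib.ExitStack with unittest's addClassCleanup (available since Python 3.8). The sketch below uses a hypothetical stand-in context manager, fake_worker, rather than the real start_worker(app), so it runs without Celery:

```python
import contextlib
import unittest


@contextlib.contextmanager
def fake_worker():
    # Hypothetical stand-in for start_worker(app); it only
    # illustrates the enter/exit lifecycle, not a real worker.
    state = {'running': True}
    try:
        yield state
    finally:
        state['running'] = False


class WorkerLifecycleExample(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        # ExitStack enters the context manager; addClassCleanup
        # guarantees it is exited after the last test in the class,
        # so no explicit tearDownClass is needed.
        stack = contextlib.ExitStack()
        cls.worker = stack.enter_context(fake_worker())
        cls.addClassCleanup(stack.close)

    def test_worker_is_running(self):
        self.assertTrue(self.worker['running'])
```

With the real worker, stack.enter_context(start_worker(app)) would take the place of fake_worker().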
```python
from celery.contrib.testing.worker import start_worker
from django.test import SimpleTestCase

from mysite.celery import app


class BatchSimulationTestCase(SimpleTestCase):
    allow_database_queries = True

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        # Start a Celery worker in a separate thread of this process.
        cls.celery_worker = start_worker(app)
        cls.celery_worker.__enter__()

    @classmethod
    def tearDownClass(cls):
        super().tearDownClass()
        # Shut the in-process Celery worker down again.
        cls.celery_worker.__exit__(None, None, None)
```
For some reason, the worker startup code tries to use a task named 'celery.ping', possibly to produce better error messages in the event of a worker failure. Even passing perform_ping_check=False as a keyword argument to start_worker still checks for its existence. The task it is looking for is celery.contrib.testing.tasks.ping, but this task is not registered by default. It can be made visible to the worker by adding celery.contrib.testing to INSTALLED_APPS in settings.py. However, that only makes it visible to the worker, not to the code that creates the worker; that code executes assert 'celery.ping' in app.tasks, which still fails. Commenting that assertion out makes everything work, but modifying an installed library is not a good solution. I am probably doing something wrong, but the workaround I settled on is to copy this simple task somewhere app.autodiscover_tasks() can pick it up, for example into celery.py:
```python
@app.task(name='celery.ping')
def ping():
    # type: () -> str
    """Simple task that just returns 'pong'."""
    return 'pong'
```
With this setup, there is no need to start a separate Celery process for testing. The Celery worker is launched as a separate thread inside the Django test process, so it can see any in-memory database, including the default in-memory test database. start_worker accepts options to control the worker, but by default it starts a single worker.
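The reason a threaded worker can see the in-memory test database can be illustrated without Celery at all: a thread in the same process can open its own connection to a shared in-memory SQLite database created by the main thread. This is a minimal stdlib-only sketch (the URI name memdb is arbitrary):

```python
import sqlite3
import threading

uri = 'file:memdb?mode=memory&cache=shared'

# Stands in for the in-memory test database created by the main thread.
main_conn = sqlite3.connect(uri, uri=True)
main_conn.execute('CREATE TABLE results (value INTEGER)')


def worker():
    # Simulates the in-process worker thread: it opens its own
    # connection to the same shared in-memory database and writes to it.
    conn = sqlite3.connect(uri, uri=True)
    conn.execute('INSERT INTO results VALUES (42)')
    conn.commit()
    conn.close()


thread = threading.Thread(target=worker)
thread.start()
thread.join()

# The main thread sees the row written by the worker thread.
rows = main_conn.execute('SELECT value FROM results').fetchall()
print(rows)  # [(42,)]
```

Had the worker been a separate process instead of a thread, its connection would have opened a different, empty in-memory database.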