I work with a service-oriented architecture that has several Celery workers (call them worker1, worker2, and worker3). All three workers are separate entities (i.e., separate code bases, separate repositories, separate Celery instances, separate machines), and none of them is part of the Django application.
Communicating with each of these three workers is a Django-based, MySQL-backed RESTful API.
In development, these services all live on a Vagrant box, each acting as a separate machine running off a distinct port. We have one RabbitMQ broker for all Celery tasks.
A typical path through these services might look something like this: worker1 receives a message from a device, does some processing, and queues a task on worker2, which does further processing and makes a POST to the API, which writes to the MySQL database and triggers a task on worker3, which does some other processing and makes another POST to the API, resulting in another MySQL record.
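To make the flow concrete, here is a minimal, Celery-free model of that path: each "worker" is a plain function and a list stands in for the MySQL rows created by the two API POSTs. All names (`worker1_task`, `api_post`, etc.) are illustrative, not the real code.

```python
# Hypothetical model of the pipeline: worker1 -> worker2 -> API POST
# -> worker3 -> API POST. FAKE_DB stands in for MySQL.

FAKE_DB = []  # rows "written" by the RESTful API

def api_post(payload):
    """Stand-in for a POST to the Django API that writes to MySQL."""
    FAKE_DB.append(payload)
    return payload

def worker3_task(record):
    # some other processing, then another POST to the API
    return api_post({"stage": "worker3", "source": record})

def worker2_task(message):
    # further processing, then a POST to the API ...
    record = api_post({"stage": "worker2", "source": message})
    # ... which (via the API) triggers a task on worker3
    return worker3_task(record)

def worker1_task(device_message):
    # initial processing, then queue a task on worker2
    processed = device_message.upper()
    return worker2_task(processed)

worker1_task("reading from device")
print(len(FAKE_DB))  # 2 records: one per API POST
```

An end-to-end test would need to assert that both records exist after sending a single message to worker1, which is exactly what the questions below are about.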
The services communicate well, but it's very annoying to test this flow by hand every time we make a change to any service. I'd really like to have full integration tests (i.e., starting with a message sent to worker1 and following the whole chain), but I'm not sure where to start. The main issues I'm facing are:
If I queue something up on worker1, how can I tell when the whole flow has finished? How can I make reasonable assertions about the results when I don't even know whether the results have arrived yet?
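One pattern I've considered, since the chain emits no completion signal, is polling for the expected end state with a timeout. A generic sketch (in a real test the predicate would GET the API and check for the final record worker3 creates; the helper name is mine):

```python
import time

def wait_for(predicate, timeout=30.0, interval=0.5):
    """Poll `predicate` until it returns a truthy value or `timeout`
    seconds elapse. Returns the truthy value, else raises TimeoutError.

    In an integration test, `predicate` might query the API and check
    that the MySQL record created by worker3 exists.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("flow did not complete within %.1fs" % timeout)

# Toy usage: a "flow" that completes after a short delay.
start = time.monotonic()
done = wait_for(lambda: time.monotonic() - start > 0.2, timeout=5, interval=0.05)
print(done)  # True
```

Is this kind of polling a reasonable approach, or is there a better way to get a completion signal out of the chain (e.g., via a Celery result backend)?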
How do I handle database setup/teardown? I want to delete all entries made during a test at the end of each test, but if I start the test from outside the Django application, I'm not sure how to clear the database efficiently. Manually dropping and re-creating it after every test seems like too much overhead.
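One lighter-weight option I've thought about is truncating only the tables the tests touch, rather than rebuilding the database. A sketch using sqlite3 as a stand-in for MySQL (table names are hypothetical; with MySQL you'd use `TRUNCATE TABLE` and may need to disable foreign key checks first):

```python
import sqlite3

TEST_TABLES = ["worker2_records", "worker3_records"]  # hypothetical names

def clear_test_tables(conn, tables):
    """Delete every row the test run created, without touching the schema."""
    for table in tables:
        conn.execute("DELETE FROM %s" % table)  # MySQL: TRUNCATE TABLE ...
    conn.commit()

conn = sqlite3.connect(":memory:")
for table in TEST_TABLES:
    conn.execute("CREATE TABLE %s (id INTEGER PRIMARY KEY, payload TEXT)" % table)

# Simulate records written during a test ...
conn.execute("INSERT INTO worker2_records (payload) VALUES ('from worker2')")
conn.execute("INSERT INTO worker3_records (payload) VALUES ('from worker3')")

# ... then tear down cheaply instead of re-creating the database.
clear_test_tables(conn, TEST_TABLES)
remaining = sum(
    conn.execute("SELECT COUNT(*) FROM %s" % t).fetchone()[0] for t in TEST_TABLES
)
print(remaining)  # 0
```

An external test runner could also shell out to Django's `manage.py flush` to empty all tables, but I don't know whether either approach is considered good practice here.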
python django integration celery
user1427661