Last week I installed RabbitMQ and Celery on my production system, after testing everything on my local development machine, where it worked fine.
I get the feeling that my tasks are not being executed in production, since about 1,200 tasks are still sitting in the queue.
My setup runs on CentOS 5.4, with both the celeryd and celerybeat daemons, and the site served via WSGI. I import the tasks in my WSGI module.
When I run /etc/init.d/celeryd start, I get the following response:
[root@myvm myproject]
celeryd-multi v2.3.1
> Starting nodes...
> w1.myvm.centos01: OK
When I run /etc/init.d/celerybeat start, I get the following response
[root@myvm myproject]
Starting celerybeat...
So, judging by that output, both daemons appear to start successfully. However, when I inspect the queues, they keep growing faster than tasks are being consumed.
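For context, this is how I'm checking the backlog: `rabbitmqctl list_queues name messages` prints one queue per line. A minimal sketch that turns that output into a dict (the helper name and the sample numbers are mine, not from the real box):

```python
def parse_queue_depths(rabbitmqctl_output):
    """Parse `rabbitmqctl list_queues name messages` output into
    a {queue_name: message_count} dict."""
    depths = {}
    for line in rabbitmqctl_output.splitlines():
        parts = line.split()
        if len(parts) != 2:
            continue  # skip the "Listing queues ..." banner and "...done."
        name, count = parts
        try:
            depths[name] = int(count)
        except ValueError:
            continue  # not a "<name> <count>" data line
    return depths


# Hypothetical output resembling what I see on the server:
sample = "Listing queues ...\ncelery\t1200\n...done.\n"
print(parse_queue_depths(sample))  # {'celery': 1200}
```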
However, if I start the workers manually through Django's manage.py, with ./manage.py celeryd and ./manage.py celerybeat, the tasks are executed.
My /etc/default/celeryd:
CELERYD_CHDIR="/www/myproject/"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
CELERYD_OPTS="--time-limit=300 --concurrency=8"
CELERY_CONFIG_MODULE="celeryconfig"
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
export DJANGO_SETTINGS_MODULE="settings"
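For reference, the %n placeholder in CELERYD_LOG_FILE and CELERYD_PID_FILE is expanded by celeryd-multi to the node name, so each worker gets its own log and pid file. A minimal sketch of that substitution (the helper function is mine, for illustration only):

```python
def expand_node_template(template, node_name):
    """Substitute %n with the node name, as celeryd-multi does
    for per-node log and pid file paths."""
    return template.replace("%n", node_name)


# The node started above is w1.myvm.centos01, so its log lands at:
print(expand_node_template("/var/log/celery/%n.log", "w1.myvm.centos01"))
# /var/log/celery/w1.myvm.centos01.log
```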
My /etc/default/celerybeat:
CELERYD_CHDIR="/www/myproject/"
export DJANGO_SETTINGS_MODULE="settings"
CELERYD="/www/myproject/manage.py celeryd"
CELERYBEAT="/www/myproject/manage.py celerybeat"
CELERYBEAT_OPTS="--schedule=/var/run/celerybeat-schedule"
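One thing worth ruling out when daemons report "OK" but nothing gets consumed: the unprivileged CELERYD_USER ("celery" here) must be able to write the pid files, the log files, and the --schedule file. A quick permission sanity check (the helper is hypothetical, not part of Celery; run it as the daemon user):

```python
import os


def writable_by_daemon(path):
    """Return True if `path` is writable by the current user, falling
    back to its parent directory when the file does not exist yet."""
    target = path if os.path.exists(path) else os.path.dirname(path)
    return os.access(target, os.W_OK)


# As the `celery` user, check e.g.:
# writable_by_daemon("/var/run/celerybeat-schedule")
# writable_by_daemon("/var/log/celery/w1.myvm.centos01.log")
```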
The /etc/init.d/celeryd and /etc/init.d/celerybeat files are the generic init scripts shipped with Celery.
Any ideas why the tasks queue up but are never executed when started from the init scripts?