Celery (Redis) results backend not working
I have a Django web application, and I'm using Celery for some asynchronous task processing.
For Celery I use RabbitMQ as the broker and Redis as the result backend.
RabbitMQ and Redis run on the same Ubuntu 14.04 server, hosted on a local virtual machine.
The Celery workers run on a remote machine (Windows 10); no worker runs on the Django server.
I have three problems (I think they are related somehow!):
```text
reject requeue=False: [WinError 10061] No connection could be made
because the target machine actively refused it
```
I'm also confused about my setup, and I don't know exactly where this problem could come from!
So here is my setup so far:

my_app/settings.py
```python
# region Celery Settings
CELERY_CONCURRENCY = 1
CELERY_ACCEPT_CONTENT = ['json']
# CELERY_RESULT_BACKEND = 'redis://:C@pV@lue2016@cvc.ma:6379/0'
BROKER_URL = 'amqp://soufiaane:C@pV@lue2016@cvc.ma:5672/cvcHost'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACKS_LATE = True
CELERYD_PREFETCH_MULTIPLIER = 1

CELERY_REDIS_HOST = 'cvc.ma'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_RESULT_BACKEND = 'redis'
CELERY_RESULT_PASSWORD = "C@pV@lue2016"
REDIS_CONNECT_RETRY = True

AMQP_SERVER = "cvc.ma"
AMQP_PORT = 5672
AMQP_USER = "soufiaane"
AMQP_PASSWORD = "C@pV@lue2016"
AMQP_VHOST = "/cvcHost"

CELERYD_HIJACK_ROOT_LOGGER = True
CELERY_HIJACK_ROOT_LOGGER = True
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
# endregion
```
my_app/celery_settings.py
```python
from __future__ import absolute_import
import os

import django
from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_app.settings')
django.setup()

app = Celery('CapValue',
             broker='amqp://soufiaane:C@pV@lue2016@cvc.ma/cvcHost',
             backend='redis://:C@pV@lue2016@cvc.ma:6379/0')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
```
my_app/__init__.py
```python
from __future__ import absolute_import

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery_settings import app as celery_app
```
my_app/email/tasks.py
```python
from __future__ import absolute_import
from my_app.celery_settings import app


# here I only define the task skeleton, because I'm executing this task on remote workers!
@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
    try:
        print("x")
    except Exception as exc:
        self.retry(exc=exc)
```
On the worker side I have a file "tasks.py" which contains the actual implementation of the task:

Worker tasks.py
```python
from __future__ import absolute_import

from celery import Celery
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

app = Celery('CapValue',
             broker='amqp://soufiaane:C@pV@lue2016@cvc.ma/cvcHost',
             backend='redis://:C@pV@lue2016@cvc.ma:6379/0')


@app.task(name='email_task', bind=True, max_retries=3, default_retry_delay=1)
def email_task(self, job, email):
    try:
        """ The actual implementation of the task """
    except Exception as exc:
        self.retry(exc=exc)
```
What I did notice is:

- When I change the broker settings on the worker to a wrong password, I get a "could not connect to broker" error.
- When I change the result backend settings on the worker to a wrong password, it runs normally as if everything were OK.

What could be causing these problems?
EDIT

On my Redis server, I have already enabled remote connections:

/etc/redis/redis.conf

```text
...
bind 0.0.0.0
...
```
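To confirm that the `bind` change actually took effect as seen from the worker machine, a plain TCP reachability check is enough. This is a hypothetical helper (not part of the question's code) using only the standard library; the host and ports are the ones from the question:

```python
import socket


def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Run these from the Windows worker, e.g.:
#   can_connect("cvc.ma", 6379)   # Redis
#   can_connect("cvc.ma", 5672)   # RabbitMQ
```

If this returns False from the worker, the `[WinError 10061] ... actively refused` message is a plain TCP-level refusal (firewall or bind address), not an authentication failure.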
My guess is that your problem is with the password.
Your password contains the `@` character, which is also the separator between the credentials and the host in the broker/backend URL, so the URLs may not be parsed correctly.
The workers are in the PENDING state because they cannot connect to the broker correctly.
From the Celery documentation:
http://docs.celeryproject.org/en/latest/userguide/tasks.html#pending
PENDING
Task is waiting for execution or unknown. Any task id that is not known is implied to be in the pending state.
Try changing the password to one without special characters (or percent-encode it) and check whether the problem goes away.
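One way to test this theory without changing the password is to percent-encode it before building the URLs, so the `@` in the password no longer collides with the `user:password@host` separator. A minimal standard-library sketch, using the credentials from the question:

```python
from urllib.parse import quote

# '@' inside the password is ambiguous in a URL; percent-encoding
# turns it into '%40', which is unambiguous.
password = "C@pV@lue2016"
encoded = quote(password, safe="")

broker_url = "amqp://soufiaane:{}@cvc.ma:5672/cvcHost".format(encoded)
backend_url = "redis://:{}@cvc.ma:6379/0".format(encoded)
print(backend_url)
```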
I had a setup where the "single instance" servers (a dev server and a localhost server) worked, but it failed when the Redis server was a separate machine. The Celery tasks ran fine, but the results were never retrieved. I got the following error when trying to fetch a task result:
```text
Error 111 connecting to localhost:6379. Connection refused.
```
What made it work was simply adding this setting to Django:

```python
CELERY_RESULT_BACKEND = 'redis://10.10.10.10:6379/0'
```

It seems that if this parameter is not present, it defaults to localhost when fetching task results.
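The fallback is visible in the URL itself: a backend string like `redis` carries no host component at all, so the client has nothing to connect to except its default (localhost). A small standard-library illustration (not Celery's actual URL parser, but it behaves the same way for this case):

```python
from urllib.parse import urlparse

# A bare 'redis://' has an empty network location, so hostname is None;
# a full URL names the host explicitly.
for url in ("redis://", "redis://10.10.10.10:6379/0"):
    parsed = urlparse(url)
    print(url, "->", parsed.hostname)
```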