Feature Request
1. Timed tasks
2. Online packaging and deployment
3. Log monitoring and alerting
I know what you said, but that method requires manually packaging the project into an egg and then uploading it. Can't it package the source project into an egg and upload it automatically?
Ok, I would try to figure out a better way to eggify local projects.
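For context, eggifying a local project essentially means generating a setup.py (if none exists) and calling setuptools' bdist_egg, which is the approach scrapyd-client takes. Below is a minimal sketch, assuming the project root contains scrapy.cfg; the eggify helper and its arguments are hypothetical, not ScrapydWeb's actual implementation.

```python
# Minimal sketch of auto-eggifying a Scrapy project directory.
# Assumptions: setuptools is installed and the directory is a valid
# Scrapy project; eggify() and its arguments are hypothetical names.
import glob
import os
import subprocess
import sys
import tempfile

SETUP_PY = """\
from setuptools import setup, find_packages

setup(
    name='project',
    version='1.0',
    packages=find_packages(),
    entry_points={'scrapy': ['settings = %(settings)s']},
)
"""

def eggify(project_dir, settings_module):
    """Build an egg from a Scrapy project and return the egg's path."""
    setup_path = os.path.join(project_dir, 'setup.py')
    if not os.path.exists(setup_path):
        with open(setup_path, 'w') as f:
            f.write(SETUP_PY % {'settings': settings_module})
    dist_dir = tempfile.mkdtemp(prefix='eggs-')
    # Equivalent to: python setup.py clean -a bdist_egg -d <dist_dir>
    subprocess.check_call(
        [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', dist_dir],
        cwd=project_dir)
    return glob.glob(os.path.join(dist_dir, '*.egg'))[0]

# Usage: egg = eggify('/path/to/myproject', 'myproject.settings')
# The resulting egg can then be uploaded to Scrapyd (addversion.json).
```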
You can refer to https://github.com/Gerapy/Gerapy
v0.9.9: Add auto eggifying
Wow, that was a fast response! Looking forward to timed tasks as well.
1. Allow selecting multiple crawlers when setting a timed task, so that after choosing the time point, all selected crawlers start at each period. Imagine a project with 100 crawlers; this feature would really help with convenience and management.
2. As above, since there are a hundred crawlers, add custom labels, so I can filter by label to view the running status and timed tasks of a specific category of crawlers.
Thanks!
> Allow selecting multiple crawlers when setting a timed task, so that after choosing the time point, all selected crawlers start at each period. Imagine a project with 100 crawlers; this feature would really help with convenience and management.
You mean there are 100 spiders in a project and you want to schedule some of them to run periodically?
> As above, since there are a hundred crawlers, add custom labels, so I can filter by label to view the running status and timed tasks of a specific category of crawlers.
What about labeling some related jobs with a specific jobid?
> You mean there are 100 spiders in a project and you want to schedule some of them to run periodically?
No, I mean selecting multiple spiders at once to set a single timed task.
> What about labeling some related jobs with a specific jobid?
Labels would be better, because jobs can be visualized according to my own classification.
OK, I would take it into account when implementing this feature. Thanks for your advice!
v1.0.0rc1: Add Email Notice, with multiple triggers provided, including:
- ON_JOB_RUNNING_INTERVAL
- ON_JOB_FINISHED
- When the count of a specific kind of log (['CRITICAL', 'ERROR', 'WARNING', 'REDIRECT', 'RETRY', 'IGNORE']) reaches its threshold; in the meanwhile, you can ask ScrapydWeb to stop/forcestop the current job automatically.
Get it via the `pip install scrapydweb==1.0.0rc1` command.
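For reference, here is a hypothetical excerpt of a scrapydweb settings file wiring up these triggers. Only ON_JOB_RUNNING_INTERVAL and ON_JOB_FINISHED come from the announcement above; every other setting name is an assumption for illustration, so check the shipped default settings for the authoritative ones.

```python
# Hypothetical scrapydweb settings excerpt; names other than
# ON_JOB_RUNNING_INTERVAL and ON_JOB_FINISHED are assumptions.
ENABLE_EMAIL = True  # assumed name

# Send a notice every N seconds while a job is running (0 to disable).
ON_JOB_RUNNING_INTERVAL = 3600

# Send a notice when a job finishes.
ON_JOB_FINISHED = True

# When the count of a given log level reaches its threshold, send a
# notice; optionally stop/forcestop the job at the same time.
LOG_ERROR_THRESHOLD = 10        # assumed name
LOG_ERROR_TRIGGER_STOP = True   # assumed name
```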
Email sample:
ERROR in utils: !!!!! ConnectionError HTTPConnectionPool(host='127.0.0.1', port=6800): Max retries exceeded with url: /jobs (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f6111cae9b0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Please open a new issue with details.
This project is pretty good. If you add timed tasks, it will be even more complete. Keep it up!
timed task +1
v1.2.0: Support Timer Tasks to schedule a spider run periodically
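Under the hood, a timer task boils down to firing Scrapyd's schedule.json API on a schedule. Below is a minimal standalone sketch using APScheduler; the node URL, project, and spider names are placeholders, and ScrapydWeb's actual implementation differs.

```python
# Minimal sketch: periodically POST to Scrapyd's schedule.json API.
# SCRAPYD, 'myproject', and 'myspider' below are placeholders.
import requests
from apscheduler.schedulers.blocking import BlockingScheduler

SCRAPYD = 'http://127.0.0.1:6800'

def run_spider(project, spider):
    # Scrapyd responds with a jobid, e.g. {"status": "ok", "jobid": "..."}
    resp = requests.post('%s/schedule.json' % SCRAPYD,
                         data={'project': project, 'spider': spider})
    print(resp.json())

scheduler = BlockingScheduler()
# Fire every day at 02:30, like the cron entry "30 2 * * *".
scheduler.add_job(run_spider, 'cron', hour=2, minute=30,
                  args=['myproject', 'myspider'])
scheduler.start()
```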
While adding a timer task, I have to choose a version, but actually I only want to use the latest version so that if the project is updated, I don't need to update my task accordingly.
OK, it will be fixed in a future release.
For the time being, go to the Jobs page and click either the multinode or the Start button as a workaround.
Or modify the code below in scrapydweb/scrapydweb/templates/scrapydweb/schedule.html (line 826 in 560e998):

vm.versions = obj.versions;

to:

vm.versions = ['default: the latest version'].concat(obj.versions);
It works, thanks
The modification has been committed.
For log categorization, can stats for the same spider distributed across different Scrapyd nodes be aggregated? @my8100
@heave-Rother
For the time being, you can switch to a specific page of a neighboring node with the help of Node Scroller and Node Skipping.
If that cannot satisfy your need, could you draw a picture to show me your idea?
@my8100 ok,
- What would you use the checkboxes in the drop-down list for? Did you notice the checkboxes for nodes on the Servers page?
- Stats aggregation will be followed up in PR #72.
@my8100
Yes, I want to choose which servers to include in the statistics.
@heave-Rother
I see. Thanks for your suggestion.
Please add a Docker image.
@seozed
Check out the docker image created by @luzihang123.
Thanks to the author; this is the best crawler cluster management platform I have found. A few requests:
1. Add a description to each node, for easier reference.
2. Send alert messages via SMS.
3. How can distributed crawlers based on scrapy-redis be configured and started?
@devxiaosong
Replied in #107.
Please add a short tutorial on how to switch from the Flask development server to a production server with HTTPS enabled using Let's Encrypt. It would be much appreciated.
Please try it out and share your result. See line 116 in 6b9663b, and scrapydweb/scrapydweb/default_settings.py, lines 82 to 92 in 6b9663b.
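For anyone landing here later, a hedged sketch of what enabling HTTPS in the settings might look like with certificates issued by Let's Encrypt (certbot). The setting names are assumptions based on the referenced default_settings.py block, and example.com is a placeholder domain.

```python
# Assumed setting names for illustration; check the referenced
# default_settings.py block for the authoritative ones.
ENABLE_HTTPS = True
# certbot writes certificates under /etc/letsencrypt/live/<domain>/
CERTIFICATE_FILEPATH = '/etc/letsencrypt/live/example.com/fullchain.pem'
PRIVATEKEY_FILEPATH = '/etc/letsencrypt/live/example.com/privkey.pem'
```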
Can Scrapyd nodes be dynamically added/removed on the page?
Editing Scrapyd servers via GUI is not supported.