my8100/scrapydweb

Feature Request | 功能需求

Closed this issue · 35 comments

1. Timed tasks
2. Online packaging and deployment
3. Log monitoring and alerting

@tuchief

2. Online packaging and deployment

Have you tried the Projects > Deploy page?

I know what you mean, but that method requires packaging the project into an egg manually and then uploading it. Can't it package the source project into an egg and upload it automatically?


Ok, I would try to figure out a better way to eggify local projects.

v0.9.9: Add auto eggifying
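
For reference, eggifying a Scrapy project boils down to generating a setup.py with a "scrapy" entry point (the same kind scrapyd-deploy generates) and running bdist_egg. Below is a minimal standalone sketch, not ScrapydWeb's actual implementation; the project name and settings module are placeholders:

import glob
import os
import subprocess
import sys

# Template of the setup.py that eggifying needs; the 'scrapy' entry point
# tells Scrapyd where to find the project settings module.
SETUP_PY = """from setuptools import setup, find_packages
setup(
    name='demo',  # placeholder project name
    version='1.0',
    packages=find_packages(),
    entry_points={'scrapy': ['settings = demo.settings']},  # placeholder module
)
"""

def eggify(project_dir):
    # Write a setup.py only if the project does not have one yet.
    setup_path = os.path.join(project_dir, 'setup.py')
    if not os.path.exists(setup_path):
        with open(setup_path, 'w') as f:
            f.write(SETUP_PY)
    # Build the egg; setuptools puts it in <project_dir>/dist/
    subprocess.check_call([sys.executable, 'setup.py', 'bdist_egg'], cwd=project_dir)
    return glob.glob(os.path.join(project_dir, 'dist', '*.egg'))[0]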

Wow, that was a fast response! Looking forward to timed tasks as well.

  1. Allow selecting multiple crawlers when setting a timed task: after picking the time point, you can choose several crawlers to start at each period.
    Imagine a project with 100 crawlers; this feature would really help with convenience and management.
  2. As above, since there are a hundred crawlers, could you add custom labels? Then I could filter by label and view the running status, and the timed tasks, of a specified category of crawlers.
    Thanks!

Allow selecting multiple crawlers when setting a timed task: after picking the time point, you can choose several crawlers to start at each period.
Imagine a project with 100 crawlers; this feature would really help with convenience and management.

You mean there are 100 spiders in a project and you want to schedule some of them to run periodically?

As above, since there are a hundred crawlers, could you add custom labels? Then I could filter by label and view the running status, and the timed tasks, of a specified category of crawlers.

What about labeling some related jobs with a specific jobid?

You mean there are 100 spiders in a project and you want to schedule some of them to run periodically?

No, I mean selecting multiple spiders at once to set up a single timed task.

What about labeling some related jobs with a specific jobid?

Labels would be better, since jobs could be visualized according to one's own classification.

OK, I'll take it into account when implementing this feature. Thanks for your advice!
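
For anyone who wants to try the jobid approach right away: Scrapyd's schedule.json endpoint accepts a custom jobid parameter, so a label can be embedded in it when starting related spiders. A minimal sketch (the project, spider, and label names are made up):

import requests

SCRAPYD = 'http://127.0.0.1:6800'  # assumption: a local Scrapyd on the default port

def schedule_with_label(project, spider, label):
    # Embedding the label in the jobid makes related jobs easy to group
    # on the Jobs page; jobid is a documented schedule.json parameter.
    data = {'project': project, 'spider': spider, 'jobid': '%s_%s' % (label, spider)}
    return requests.post(SCRAPYD + '/schedule.json', data=data).json()

for spider in ['spider_a', 'spider_b']:  # start several related spiders at once
    print(schedule_with_label('myproject', spider, label='news'))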

v1.0.0rc1: Add Email Notice, with multiple triggers provided, including:

  • ON_JOB_RUNNING_INTERVAL

  • ON_JOB_FINISHED

  • When the count of a specific kind of log reaches its threshold (one of ['CRITICAL', 'ERROR', 'WARNING', 'REDIRECT', 'RETRY', 'IGNORE']); at the same time, you can ask ScrapydWeb to stop/forcestop the current job automatically.

Get it via the pip install scrapydweb==1.0.0rc1 command.
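
These triggers map onto options in the ScrapydWeb settings file. As a rough sketch only (apart from ON_JOB_RUNNING_INTERVAL and ON_JOB_FINISHED named above, the exact key names and defaults are assumptions; check the scrapydweb_settings file of your version):

# Email a snapshot periodically while a job is running; 0 disables it.
ON_JOB_RUNNING_INTERVAL = 3600  # seconds
# Email once a job is finished.
ON_JOB_FINISHED = True
# Per-kind log thresholds, one set per kind in the list above; these key
# names are an assumption based on the pattern, verify them in your version.
LOG_ERROR_THRESHOLD = 10          # trigger after 10 ERROR lines; 0 disables
LOG_ERROR_TRIGGER_STOP = True     # also ask Scrapyd to stop the job
LOG_ERROR_TRIGGER_FORCESTOP = False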

Email sample:


ERROR in utils: !!!!! ConnectionError HTTPConnectionPool(host='127.0.0.1', port=6800): Max retries exceeded with url: /jobs (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f6111cae9b0>: Failed to establish a new connection: [Errno 111] Connection refused',))


Please open a new issue with details.
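
For context, that traceback means ScrapydWeb could not reach any Scrapyd service at 127.0.0.1:6800. A quick probe of Scrapyd's standard daemonstatus.json endpoint can confirm whether Scrapyd itself is up:

import requests

try:
    r = requests.get('http://127.0.0.1:6800/daemonstatus.json', timeout=5)
    print(r.json())  # e.g. {'status': 'ok', 'running': 0, 'pending': 0, ...}
except requests.exceptions.ConnectionError:
    # Scrapyd is not listening: start it, or fix the SCRAPYD_SERVERS setting.
    print('Scrapyd is unreachable on 127.0.0.1:6800')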

This project is pretty good. If you can add timed tasks, it will be a much more complete project.

Keep it up!

timed task +1

v1.2.0: Support Timer Tasks to schedule a spider run periodically
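
Under the hood, a timer task amounts to calling Scrapyd's schedule.json on a schedule. For readers who want to see the idea in isolation, here is a standalone sketch using APScheduler; in practice you would configure this through the Timer Tasks page, and the project, spider, and cron values below are examples:

import requests
from apscheduler.schedulers.blocking import BlockingScheduler

def run_spider():
    # Equivalent to clicking Run Spider: POST to Scrapyd's schedule.json.
    requests.post('http://127.0.0.1:6800/schedule.json',
                  data={'project': 'myproject', 'spider': 'myspider'})

scheduler = BlockingScheduler()
scheduler.add_job(run_spider, 'cron', hour=3, minute=0)  # every day at 03:00
scheduler.start()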

While adding a timer task, I have to choose a version, but actually I only want to use the latest version so that if the project is updated, I don't need to update my task accordingly.

OK, it will be fixed in a future release.
For the time being, go to the Jobs page and click either the multinode or the Start button as a workaround.

Or modify the code below to vm.versions = ['default: the latest version'].concat(obj.versions);

scrapydweb/scrapydweb/templates/scrapydweb/schedule.html, line 826 in 560e998:

    vm.versions = obj.versions;

It works, thanks

The modification has been committed.

For log categorization, can stats of the same spiders distributed across different Scrapyd servers be aggregated? @my8100

@heave-Rother
For the time being, you can switch to a specific page of the neighboring node
with the help of Node Scroller and Node Skipping.

If that cannot satisfy your need, could you draw a picture to show me your idea?

@heave-Rother

  1. What would you use the checkboxes in the drop-down list for?
    Did you notice the checkboxes for nodes on the Servers page?
  2. Stats aggregation will be followed up in PR #72.

@my8100
Yes, I want to choose which servers to include in the statistics.

@heave-Rother
I see. Thanks for your suggestion.

Please add a Docker image.


@seozed
Check out the docker image created by @luzihang123.

my8100/logparser#15 (comment)

Thanks to the author; this is the best crawler-cluster management platform I have found. A few requests:
1. Add a description to each node, to make them easier to identify.
2. Send alert messages via SMS.
3. How can distributed crawlers based on scrapy-redis be configured and started?

@devxiaosong
Replied in #107.

Please add a short tutorial on how to switch from the Flask development server to a production server with HTTPS enabled using Let's Encrypt. It would be much appreciated.


Please try it out and share your result.

logger.info("For running Flask in production, check out http://flask.pocoo.org/docs/1.0/deploying/")

############################## ScrapydWeb #####################################
# The default is False, set it to True and add both CERTIFICATE_FILEPATH and PRIVATEKEY_FILEPATH
# to run ScrapydWeb in HTTPS mode.
# Note that this feature is not fully tested, please leave your comment here if ScrapydWeb
# raises any exception at startup: https://github.com/my8100/scrapydweb/issues/18
ENABLE_HTTPS = False
# e.g. '/home/username/cert.pem'
CERTIFICATE_FILEPATH = ''
# e.g. '/home/username/cert.key'
PRIVATEKEY_FILEPATH = ''
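
With a certificate issued by certbot for Let's Encrypt, the two paths above would typically point into certbot's standard live directory, e.g. (replace example.com with your own domain):

ENABLE_HTTPS = True
CERTIFICATE_FILEPATH = '/etc/letsencrypt/live/example.com/fullchain.pem'
PRIVATEKEY_FILEPATH = '/etc/letsencrypt/live/example.com/privkey.pem'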

Can Scrapyd nodes be dynamically added/removed on the page?


Editing Scrapyd servers via GUI is not supported.