evilcos/crawlers

Images cannot be downloaded.

Closed this issue · 0 comments

The command line always shows output like this (changing the website URL in rosi.py makes no difference):
"[scrapy.core.scraper] WARNING: Dropped: Item contains no images []  # added a print and it was empty
{'image_urls': [u'/templets/default/logo.png',
u'/templets/default/s1.gif',
u'/templets/default/s2.gif',
u'/templets/default/s3.gif',
u'/templets/default/s4.gif',
u'/templets/default/s5.gif',
u'http://tu.jinyemimi.com/disi/777/x.jpg',
u'/images/rs.png',
u'/templets/default/mail.gif',
u'http://icon.51.la/icon_0.gif']}"
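Looking at that dropped list again, most of the URLs are site-relative (no scheme or host), so I suspect the pipeline cannot fetch them as-is. A minimal sketch of the absolutizing step I have in mind, using the stdlib `urljoin` (inside a spider this would be `response.urljoin`); `page_url` here is a made-up example, not the real site:

```python
from urllib.parse import urljoin

# Hypothetical page URL; in the spider this would be response.url.
page_url = "http://www.example.com/gallery/1.html"

raw_urls = [
    "/templets/default/logo.png",              # relative: needs the page URL
    "http://tu.jinyemimi.com/disi/777/x.jpg",  # already absolute: unchanged
]

# Join relative paths against the page URL; absolute URLs pass through.
image_urls = [urljoin(page_url, u) for u in raw_urls]
print(image_urls)
```

If this guess is right, doing the join in `parse()` before populating `image_urls` should let the pipeline download them.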

And when I add a print test before "yield scrapy.Request(image_url.strip())", it doesn't work.
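My current guess at why the item ends up with no images: scrapy.Request refuses URLs without a scheme, so every relative entry fails before it ever reaches the downloader. A rough stdlib imitation of that check (the real validation lives inside scrapy.Request; the error message below is copied from what Scrapy raises, as far as I can tell):

```python
from urllib.parse import urlparse

def check_request_url(url: str) -> None:
    # Rough imitation of the scheme check scrapy.Request performs;
    # a URL like '/templets/default/logo.png' has no scheme and
    # therefore can never be scheduled for download.
    if not urlparse(url).scheme:
        raise ValueError(f"Missing scheme in request url: {url}")

check_request_url("http://tu.jinyemimi.com/disi/777/x.jpg")  # passes
try:
    check_request_url("/templets/default/logo.png")
except ValueError as exc:
    rejected = str(exc)
    print("rejected:", rejected)
```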
In the official guideline I found a description of this:

  1. When the item reaches the FilesPipeline, the URLs in the file_urls field are scheduled for download using the standard Scrapy scheduler and downloader (which means the scheduler and downloader middlewares are reused), but with a higher priority, processing them before other pages are scraped. The item remains “locked” at that particular pipeline stage until the files have finished downloading (or fail for some reason).
  2. When the files are downloaded, another field (files) will be populated with the results. This field will contain a list of dicts with information about the downloaded files, such as the downloaded path, the original scraped url (taken from the file_urls field), and the file checksum. The files in the list of the files field will retain the same order of the original file_urls field. If some file failed downloading, an error will be logged and the file won’t be present in the files field.

But I have no idea how to debug this, so could you please give me some advice? I just want to figure it out.