kezhenxu94/house-renting

Error when running scrapy crawl lianjia

wzzju opened this issue · 1 comment

wzzju commented

After switching to the house-renting/crawler directory and running scrapy crawl lianjia, it fails with the following error:
2018-05-31 20:03:03 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: house_renting)
2018-05-31 20:03:03 [scrapy.utils.log] INFO: Overridden settings: {'AUTOTHROTTLE_MAX_DELAY': 10, 'NEWSPIDER_MODULE': 'house_renting.spiders', 'CONCURRENT_REQUESTS_PER_DOMAIN': 1, 'AUTOTHROTTLE_TARGET_CONCURRENCY': 2.0, 'SPIDER_MODULES': ['house_renting.spiders'], 'AUTOTHROTTLE_START_DELAY': 10, 'RETRY_TIMES': 3, 'BOT_NAME': 'house_renting', 'DOWNLOAD_TIMEOUT': 30, 'COOKIES_ENABLED': False, 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1 Safari/605.1.15 ', 'TELNETCONSOLE_ENABLED': False, 'COMMANDS_MODULE': 'house_renting.commands', 'AUTOTHROTTLE_ENABLED': True, 'DOWNLOAD_DELAY': 5, 'AUTOTHROTTLE_DEBUG': True}
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 11, in <module>
    sys.exit(execute())
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 149, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 156, in _run_command
    cmd.run(args, opts)
  File "/home/yuchen/House/house-renting/crawler/house_renting/commands/crawl.py", line 17, in run
    self.crawler_process.crawl(spider_name, **opts.spargs)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 167, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 195, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 200, in _create_crawler
    return Crawler(spidercls, self.settings)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 52, in __init__
    self.extensions = ExtensionManager.from_crawler(self)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/extensions/memusage.py", line 16, in <module>
    from scrapy.mail import MailSender
  File "/usr/local/lib/python2.7/dist-packages/scrapy/mail.py", line 22, in <module>
    from twisted.internet import defer, reactor, ssl
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/ssl.py", line 230, in <module>
    from twisted.internet._sslverify import (
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/_sslverify.py", line 15, in <module>
    from OpenSSL._util import lib as pyOpenSSLlib
ImportError: No module named _util
I have already run pip install -r requirements.txt. How can I fix this?
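
The failing import can be reproduced outside Scrapy to confirm the culprit is the installed pyOpenSSL rather than the project code. A minimal sketch, assuming the same Python 2.7 environment that runs the crawl:

```python
# Minimal reproduction of the failing import, independent of Scrapy.
# OpenSSL._util only exists in pyOpenSSL 0.14 and later (the
# cryptography-based rewrite); older distro-packaged releases such as
# 0.13 predate it, which matches "No module named _util".
import OpenSSL
print(OpenSSL.__version__)

from OpenSSL._util import lib as pyOpenSSLlib  # the import that fails above
print('OpenSSL._util imported OK')
```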

wzzju commented

This issue has been solved; running the following command fixed it:
sudo pip install pyopenssl --user --upgrade
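
A quick way to check that the upgrade took effect in the interpreter Scrapy actually uses is to re-run the imports from the original traceback (a sketch; note that combining sudo with --user can install into root's home rather than yours, so if the error persists, dropping one of the two flags may help):

```python
# Post-upgrade check: the imports from the original traceback
# should now succeed without an ImportError.
import OpenSSL
from OpenSSL._util import lib as pyOpenSSLlib  # failed before the upgrade
from twisted.internet import ssl               # the Twisted import chain

print('pyOpenSSL version: ' + OpenSSL.__version__)
```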