mzitu | 👧 Beautiful photo set crawler | Crawler library
kandi X-RAY | mzitu Summary
👧 Beautiful photo set crawler
Top functions reviewed by kandi - BETA
- Crawl URLs
- Create folder
- Saves pic_src to file
- Returns a list of urls
- Delete all empty folders (see the sketch below)
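As an illustration of the last of these, a minimal sketch of removing empty folders under a download root (the function name and traversal are assumptions, not the repository's code):

import os

def delete_empty_folders(root):
    # Walk bottom-up so a parent that becomes empty once its children
    # are removed can be deleted in the same pass.
    for dirpath, _, _ in os.walk(root, topdown=False):
        if dirpath != root and not os.listdir(dirpath):
            os.rmdir(dirpath)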
mzitu Key Features
mzitu Examples and Code Snippets
Community Discussions
Trending Discussions on mzitu
QUESTION
I'm a beginner at web spiders and I've been quite confused these days using aiohttp. Here is my code:
...ANSWER
Answered 2019-Mar-05 at 16:05

The first issue is that you are using Pokémon exception handling: you really don't want to catch them all. Catch specific exceptions only, or at the very least catch only Exception, and make sure to re-raise asyncio.CancelledError (you don't want to block task cancellations). Log or print the exceptions that do get raised so you can further clean up your handler. As a quick fix, I replaced your try: ... except: continue blocks with:
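(The replacement snippet itself is not preserved here; below is a minimal sketch consistent with that advice, with fetch, session, and url as hypothetical names.)

import asyncio
import logging

import aiohttp

logger = logging.getLogger(__name__)

async def fetch(session, url):
    # Catch specific, expected failures instead of a bare `except:`.
    try:
        async with session.get(url) as response:
            return await response.text()
    except asyncio.CancelledError:
        # Never swallow cancellation; re-raise so the task can stop.
        raise
    except aiohttp.ClientError as exc:
        # Log the failure so the handler can be narrowed further later.
        logger.warning("request to %s failed: %s", url, exc)
        return None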
QUESTION
# -*- coding: utf-8 -*-
from scrapy import Request
from scrapy_redis.spiders import RedisSpider

from scrapy_redis_slaver.items import MzituSlaverItem


class MzituSpider(RedisSpider):
    name = 'mzitu'
    redis_key = 'mzitu:start_urls'  # get start urls from redis

    def __init__(self, *args, **kwargs):
        super(MzituSpider, self).__init__(*args, **kwargs)
        self.item = MzituSlaverItem()

    def parse(self, response):
        max_page = response.xpath(
            "descendant::div[@class='main']/div[@class='content']"
            "/div[@class='pagenavi']/a[last()-1]/span/text()").extract_first(default="N/A")
        max_page = int(max_page)
        name = response.xpath(
            "./*//div[@class='main']/div[1]/h2/text()").extract_first(default="N/A")
        # name: the gallery's title
        # url: the gallery's first-page url
        # item_id: the gallery's id
        # max_page: the gallery's page count
        self.item['name'] = name
        self.item['url'] = response.url
        item_id = response.url.split('/')[-1]
        self.item['item_id'] = item_id
        for num in range(1, max_page + 1):  # this loop turns the pages
            # page_url is the address of each picture page
            page_url = response.url + '/' + str(num)
            yield Request(page_url, callback=self.img_url,
                          meta={"name": name,
                                "item_id": item_id,
                                "max_page": max_page})

    def img_url(self, response):
        # extract a single picture's url from the response
        img_url = response.xpath(
            "descendant::div[@class='main-image']/descendant::img/@src").extract_first()
        key = '{}:{}:images'.format(response.meta['name'], response.meta['item_id'])
        # add the picture url to a redis set
        self.server.sadd(key, img_url)
        # get the current size of that redis set
        len_redis_img_list = self.server.scard(key)
        if len_redis_img_list == response.meta['max_page']:
            self.item['img_urls'] = self.server.smembers(key)
            print("yield item", response.meta['item_id'])
            yield self.item
            # Expected: when len_redis_img_list equals max_page, the item is
            # yielded once. Actual: the item is yielded max_page times (far
            # too many).
...ANSWER
Answered 2018-Aug-19 at 10:41

I'm having trouble understanding your crawler as well. Your current loop goes like this:
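(The walkthrough itself is not preserved here. One likely culprit in the posted spider is the single self.item shared by every concurrent callback, so each finished gallery mutates and re-yields the same object. A minimal sketch of that fix, assuming one item per gallery carried through meta; these are hypothetical replacement methods for the MzituSpider above, not the original answer's code.)

def parse(self, response):
    # Build a fresh item per gallery instead of mutating the single
    # self.item shared by concurrent callbacks. Items are picklable,
    # so they survive the scrapy-redis request queue.
    item = MzituSlaverItem()
    item['name'] = response.xpath(
        "./*//div[@class='main']/div[1]/h2/text()").extract_first(default="N/A")
    item['url'] = response.url
    item['item_id'] = response.url.split('/')[-1]
    max_page = int(response.xpath(
        "descendant::div[@class='main']/div[@class='content']"
        "/div[@class='pagenavi']/a[last()-1]/span/text()").extract_first(default="0"))
    for num in range(1, max_page + 1):
        yield Request(response.url + '/' + str(num), callback=self.img_url,
                      meta={"item": item, "max_page": max_page})

def img_url(self, response):
    item = response.meta["item"]
    key = '{}:{}:images'.format(item['name'], item['item_id'])
    self.server.sadd(key, response.xpath(
        "descendant::div[@class='main-image']/descendant::img/@src").extract_first())
    if self.server.scard(key) == response.meta["max_page"]:
        item['img_urls'] = self.server.smembers(key)
        yield item  # fires once, when the redis set reaches max_page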
QUESTION
scrapy:
...ANSWER
Answered 2018-Aug-17 at 18:04

This has to do with the fact that scrapy-redis uses its own scheduler class, which serializes/deserializes all requests through redis before pushing them on to the downloader (it keeps its queue in redis). There is no "easy" way around this, as it is basically the core scrapy-redis functionality. My advice is to not put too much runtime-sensitive stuff into meta, as that is generally not the best idea in scrapy anyway.
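A minimal sketch of that advice, assuming the problem was a live object in meta (the spider and callback names here are hypothetical): keep meta to plain, picklable values and rebuild anything stateful inside the callback.

from scrapy import Request
from scrapy_redis.spiders import RedisSpider

class MetaSafeSpider(RedisSpider):  # hypothetical example spider
    name = 'meta_safe'
    redis_key = 'meta_safe:start_urls'

    def parse(self, response):
        # scrapy-redis pickles every request into its redis queue, so
        # meta must survive a serialize/deserialize round trip:
        # pass only plain values such as strings and numbers.
        yield Request(response.url + '/2', callback=self.parse_page,
                      meta={"item_id": response.url.split('/')[-1]})

    def parse_page(self, response):
        # Rebuild anything stateful (db handles, sessions, parsed trees)
        # here instead of carrying live objects through meta.
        self.logger.info("got page for item %s", response.meta["item_id"])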
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install mzitu
You can use mzitu like any standard Python library. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changing the system Python.