scrapy-elasticsearch | a Scrapy pipeline that sends items to Elasticsearch | Continuous Deployment library
kandi X-RAY | scrapy-elasticsearch Summary
Scrapy-ElasticSearch is a pipeline which allows Scrapy objects to be sent directly to ElasticSearch.
Top functions reviewed by kandi - BETA
- Initialize the connection.
- Add an item to Elasticsearch.
- Generate a unique key for the given item.
- Get the unique key from ELAST_UNIQ_KEY.
scrapy-elasticsearch Key Features
scrapy-elasticsearch Examples and Code Snippets
Community Discussions
Trending Discussions on scrapy-elasticsearch
QUESTION
I am crawling websites using Scrapy. I want to store the data from each crawl directly in Elasticsearch. I was able to find a pipeline written just for this:
https://github.com/jayzeng/scrapy-elasticsearch/blob/master/scrapyelasticsearch/scrapyelasticsearch.py
Elasticsearch applies a custom mapping to fields if not specified otherwise. I created an index on my localhost with a custom mapping (code attached below).
The index is successfully created and the mapping is applied. However, when I try to store data in that particular index, no document is added to it. If I instead specify an index that was not created with a custom mapping, documents are added to it.
Code for custom mapping:
...ANSWER
Answered 2019-Oct-14 at 04:19
After looking into this for a couple of hours, I finally realized what the problem was. The item I was indexing was a custom object I had created, and therefore it was NOT JSON serializable by default. I simply typecast it into a dict and it worked like a charm.
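The fix described above, converting the non-serializable custom object to a plain dict before serializing, can be sketched as follows. The ProductItem class here is a hypothetical stand-in for the asker's custom item; for an actual scrapy.Item, dict(item) achieves the same conversion.

```python
import json


class ProductItem:
    """Hypothetical custom item class; instances are NOT JSON serializable."""
    def __init__(self, title, url):
        self.title = title
        self.url = url


item = ProductItem("Sample", "https://example.com")

# json.dumps(item) would raise:
#   TypeError: Object of type ProductItem is not JSON serializable
# Converting the object to a plain dict first fixes it
# (for a scrapy.Item, dict(item) does the same job).
doc = json.dumps(vars(item))
print(doc)  # {"title": "Sample", "url": "https://example.com"}
```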
QUESTION
I want to use the scrapy-elasticsearch pipeline in my Scrapy project. The project has several different items/models, which are stored in a MySQL server. In addition, I want to index ONE of these items in an Elasticsearch server.
In the documentation, however, I only find how to index all defined items, as in the settings.py example below.
...ANSWER
Answered 2019-Jun-04 at 08:00
The current implementation does not support sending only some items.
You could create a subclass of the original pipeline and override the process_item method to do what you want.
If you have the time, you could also send a pull request upstream with a proposal to allow filtering items before sending them to Elasticsearch.
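The suggested subclass could look roughly like this. It is a minimal sketch: the real ElasticSearchPipeline ships with scrapy-elasticsearch, so small stand-in classes are used here to keep the filtering idea self-contained, and the item class names (ArticleItem, CommentItem) are made up.

```python
class ElasticSearchPipeline:
    """Stand-in for scrapyelasticsearch's pipeline class."""
    def __init__(self):
        self.indexed = []          # the real class buffers and bulk-sends to ES

    def process_item(self, item, spider):
        self.indexed.append(item)  # pretend this is "index into Elasticsearch"
        return item


class ArticleItem(dict):           # the ONE item type we want indexed
    pass


class CommentItem(dict):           # stored only in MySQL, never indexed
    pass


class FilteredElasticSearchPipeline(ElasticSearchPipeline):
    def process_item(self, item, spider):
        # Only hand ArticleItems to the parent (indexing) pipeline;
        # everything else passes through untouched for later pipelines.
        if isinstance(item, ArticleItem):
            return super().process_item(item, spider)
        return item


pipeline = FilteredElasticSearchPipeline()
pipeline.process_item(ArticleItem(title="a"), spider=None)
pipeline.process_item(CommentItem(body="b"), spider=None)
print(len(pipeline.indexed))  # 1 -- only the ArticleItem was indexed
```

In a real project you would subclass the actual pipeline class and register the subclass in ITEM_PIPELINES instead of the original.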
QUESTION
So guys, for the past 18 hours I've desperately been trying to find a workaround for a bug in my code, and I think it's time for me to seek some help.
I'm building a web scraper whose goal is to download a page, grab anchor texts, internal links, and the referrer URL, and save the data to a DB. Here's the relevant part of my Scrapy code:
...ANSWER
Answered 2017-Dec-13 at 01:55
Okay, after consuming 9 cups of coffee and banging my head against the wall for 20 hours, I was able to fix the issue. It's so simple I'm almost ashamed to post it here, but here goes nothing:
When I first got the error yesterday, I tried decoding the referrer like this
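The answer is cut off above, but the general pattern it alludes to is worth noting: Scrapy returns HTTP header values as bytes, so a referrer read from response.request.headers must be decoded before it is stored or mixed with str values. A hedged, minimal sketch (the raw value stands in for what headers.get('Referer') would return):

```python
# Scrapy header values are bytes; mixing bytes and str raises TypeError.
raw = b"https://example.com/page"  # stands in for headers.get('Referer')

# Decode defensively: the Referer header may be absent (None).
referrer = raw.decode("utf-8") if raw is not None else None
print(referrer)  # https://example.com/page
```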
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install scrapy-elasticsearch
You can use scrapy-elasticsearch like any standard Python library. You will need a development environment with a Python distribution (including header files), a compiler, pip, and git installed. Make sure that pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.
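The setup described above might look like this on a Unix-like system with Python 3 on the PATH. Note the distribution name on PyPI is ScrapyElasticSearch (per the project README), which differs from the repository name; verify against the README for your version.

```shell
python3 -m venv .venv                                 # create an isolated environment
source .venv/bin/activate                             # activate it for this shell
python -m pip install --upgrade pip setuptools wheel  # keep build tooling current
pip install ScrapyElasticSearch                       # install the pipeline package
```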