scrapy-splash | Scrapy+Splash for JavaScript integration | Scraper library

by scrapy-plugins · Python · Version: 0.9.0 · License: BSD-3-Clause

kandi X-RAY | scrapy-splash Summary

scrapy-splash is a Python library typically used in Automation and Scraper applications, alongside tools like Selenium and PhantomJS. scrapy-splash has no bugs and no reported vulnerabilities, it has a build file available, it has a Permissive License, and it has high support. You can install it with 'pip install scrapy-splash' or download it from GitHub or PyPI.

Scrapy+Splash for JavaScript integration

Support

scrapy-splash has a highly active ecosystem.
It has 2900 stars, 443 forks, and 122 watchers.
It had no major release in the last 12 months.
There are 60 open issues and 187 have been closed. On average, issues are closed in 101 days. There are 16 open pull requests and 0 closed requests.
It has a negative sentiment in the developer community.
The latest version of scrapy-splash is 0.9.0.

Quality

              scrapy-splash has 0 bugs and 0 code smells.

Security

              scrapy-splash has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              scrapy-splash code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              scrapy-splash is licensed under the BSD-3-Clause License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

scrapy-splash releases are available to install and integrate.
A deployable package is available on PyPI.
              Build file is available. You can build the component from source.
              scrapy-splash saves you 858 person hours of effort in developing the same functionality from scratch.
              It has 2281 lines of code, 181 functions and 25 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed scrapy-splash and discovered the below as its top functions. This is intended to give you an instant insight into the functionality scrapy-splash implements, and to help you decide if it suits your requirements.
            • Process request
            • Convert headers to unicode
            • Gets the slot key for the request or response
            • Returns whether http auth is enabled
            • Set the download slot
            • Process a Splash request
            • Get cookies from request
            • Send a debug message to the logger
            • Convert a cookie to a dictionary
            • Process a response
            • Logs cookies
            • Add har_cookies to a cookie jar
• Convert a har cookie object into a Cookie object
            • Load response from JSON
• Return the Splash request options
• Convert headers to scrapy
            • Override robots txt middleware

            scrapy-splash Key Features

            No Key Features are available at this moment for scrapy-splash.

            scrapy-splash Examples and Code Snippets

            Run scrapy splash as a script
Python · 4 lines · License: Strong Copyleft (CC BY-SA 4.0)
# shell: find the IP address of the running Splash container
docker inspect --format '{{ .NetworkSettings.IPAddress }}' $(docker ps -q)

# Scrapy setting: point the spider at the Splash instance
'SPLASH_URL': 'http://0.0.0.0:8050'
            
            Why does scrapy_splash CrawlSpider take the same amount of time as scrapy with Selenium?
Python · 34 lines · License: Strong Copyleft (CC BY-SA 4.0)
# global_state.py

GLOBAL_STATE = {"counter": 0}

# middleware.py

from global_state import GLOBAL_STATE

class SeleniumMiddleware:

    def process_request(self, request, spider):
        GLOBAL_STATE["counter"] += 1
        # snippet truncated in the original; .get(request.url) is the likely call
        self.driver.get(request.url)
            Scraping images in a dynamic, JavaScript webpage using Scrapy and Splash
Python · 8 lines · License: Strong Copyleft (CC BY-SA 4.0)
            
            
            
            >>> response.css('meta[itemprop=image]::attr(content)').get()
            'https://arbstorage.mncdn.com/ilanfotograflari/2021/06/23/17753653/3c57b95d-9e76-42fd-b418-f81d85389529_image_for_silan_17753653_1920x1080.jp
            Scrapy-splash Can't find image source url
Python · 6 lines · License: Strong Copyleft (CC BY-SA 4.0)
            _mkt_imageDir = /BASE_IMAGES_URL=(.*?);/.test(document.cookie) && RegExp.$1 || 'https://static.zara.net/photos/';
            
            "originalUrl":"/us/en/fitted-houndstooth-blazer-p07808160.html?v1=108967877&v2=1718115",
            Why Splash+Scrapy add html header to json response
Python · 3 lines · License: Strong Copyleft (CC BY-SA 4.0)
import json

# the JSON payload is wrapped in <html><body><pre> by the renderer
json_response = response.xpath('html/body/pre/text()').get()
json_response = json.loads(json_response)
            
            How to enable overwriting output files in scrapy settings.py?
Python · 16 lines · License: Strong Copyleft (CC BY-SA 4.0)
                FEEDS = {"overwrite": True}
            
            FEEDS = {
                "quotes_splash.json": {
                    "format": "json",
                    "overwrite": True
                     }
                }
            
              File "/home/andylu/.virtualenvs/scrapy_course/li
            How to get data from a later function in scrapy
Python · 7 lines · License: Strong Copyleft (CC BY-SA 4.0)
async def parse_page(self, response):
    ...
    for link in links:
        request = response.follow(link)
        # download the page inline instead of yielding the request,
        # so its result is available in this same callback
        response = await self.crawler.engine.download(request, self)
        urls.append(response.css('a::attr(href)').get())
            
            Attempting login with Scrapy-Splash
Python · 9 lines · License: Strong Copyleft (CC BY-SA 4.0)
req = FormRequest.from_response(
    response,
    formid='login-form',
    formdata={
        'username': 'not real',
        'password': 'login data'},
    clickdata={'type': 'submit'}
)
            Please tell me what's wrong with the scrapy splash code
Python · 4 lines · License: Strong Copyleft (CC BY-SA 4.0)
lists = response.css('#recent_list_box > li').getAll()   # wrong: no such method

lists = response.css('#recent_list_box > li').getall()   # correct: .getall() is lowercase
            
            scrapy xpath selectors return none
Python · 3 lines · License: Strong Copyleft (CC BY-SA 4.0)
            (//span[@class='tv-widget-fundamentals__value apply-overflow-tooltip'])[2]
            (//span[@class='tv-widget-fundamentals__value apply-overflow-tooltip'])[4]
            

            Community Discussions

            QUESTION

            CrawlSpider with Splash, only first link is crawled & processed
            Asked 2021-May-23 at 10:57

            I am using Scrapy with Splash. Here is what I have in my spider:

            ...

            ANSWER

            Answered 2021-May-23 at 10:57

I ditched the CrawlSpider and converted it to a regular spider, and things are working fine now.
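For reference, a minimal sketch of that kind of conversion (not the asker's actual spider; the pagination selector assumes a quotes.toscrape-style page): instead of CrawlSpider rules, a plain Spider issues SplashRequests explicitly.

import scrapy
from scrapy_splash import SplashRequest

class RegularSpider(scrapy.Spider):
    name = "regular_splash"
    start_urls = ["http://quotes.toscrape.com/js/"]

    def start_requests(self):
        # render every page through Splash instead of relying on CrawlSpider rules
        for url in self.start_urls:
            yield SplashRequest(url, self.parse, args={"wait": 1})

    def parse(self, response):
        for quote in response.css("div.quote span.text::text").getall():
            yield {"quote": quote}
        # follow pagination explicitly, again through Splash
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield SplashRequest(
                response.urljoin(next_page), self.parse, args={"wait": 1}
            )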

            Source https://stackoverflow.com/questions/67611127

            QUESTION

            Scrapy-splash Can't find image source url
            Asked 2021-May-19 at 09:47

I am trying to scrape a product page from ZARA, like this one: https://www.zara.com/us/en/fitted-houndstooth-blazer-p07808160.html?v1=108967877&v2=1718115

            My scrapy-splash container is running. In the shell I fetch the page

            ...

            ANSWER

            Answered 2021-May-16 at 19:42

            I have only started looking into web scraping in the last week, so I am not sure if I can be much help, but I did find something.

            The source code showed this in the script at the top:

            Source https://stackoverflow.com/questions/67533691

            QUESTION

            Passing variable to SplashRequest callback function in Scrapy
            Asked 2021-May-04 at 14:39

            I have a mini project where there is a list of URLs on the first page and then I have to follow each URL in these list of URLs and open each URL with SplashRequest because I need the returned page to be rendered along with its JavaScript component.

            Now, I'm very new to all of these web scraping and scrapy-splash but basically I'm currently stuck because I'm trying to figure out how to pass a variable to the callback function when using SplashRequest. Basically, I have no idea how to pass a variable to our callback function below:

            ...

            ANSWER

            Answered 2021-May-04 at 14:39

Found the answer to this one myself: apparently SplashRequest also takes meta as an argument, just like response.follow, so the mechanism for passing variables to the callback function is exactly the same as with a normal Scrapy request.
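A minimal sketch of what that looks like (hypothetical spider and selectors, shown only to illustrate the meta mechanism):

import scrapy
from scrapy_splash import SplashRequest

class ListSpider(scrapy.Spider):
    name = "list"
    start_urls = ["http://example.com/list"]

    def parse(self, response):
        for url in response.css("a.item::attr(href)").getall():
            # meta is forwarded to the callback, just like with scrapy.Request
            yield SplashRequest(
                response.urljoin(url),
                callback=self.parse_detail,
                args={"wait": 2},
                meta={"source_page": response.url},
            )

    def parse_detail(self, response):
        source_page = response.meta["source_page"]
        yield {"url": response.url, "source_page": source_page}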

            Source https://stackoverflow.com/questions/67384289

            QUESTION

            How To Run Selenium-scrapy in parallel
            Asked 2021-Feb-09 at 01:10

I'm trying to scrape a JavaScript website using Scrapy and Selenium. I open the site with Selenium and a Chrome driver, scrape all the links to the different listings from the current page using Scrapy, and store them in a list (this has been the best approach so far, as trying to follow links with SeleniumRequest and calling back to a parse-new-page function caused a lot of errors). Then I loop through the list of URLs, open each one in the Selenium driver, and scrape the info from the pages. So far this scrapes 16 pages/minute, which is not ideal given the number of listings on this site. I would ideally have the Selenium drivers opening links in parallel, like the following implementations:

            How can I make Selenium run in parallel with Scrapy?

            https://gist.github.com/miraculixx/2f9549b79b451b522dde292c4a44177b

However, I can't figure out how to implement parallel processing in my selenium-scrapy code.

            ...

            ANSWER

            Answered 2021-Feb-09 at 01:10

            The following sample program creates a thread pool with only 2 threads for demo purposes and then scrapes 4 URLs to get their titles:
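The answer's code isn't reproduced on this page, but a minimal sketch of the same idea (2 worker threads, one Chrome driver per task for simplicity; a real pool would reuse thread-local drivers) might look like:

from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

URLS = [
    "https://example.com/a",  # placeholder URLs
    "https://example.com/b",
    "https://example.com/c",
    "https://example.com/d",
]

def fetch_title(url):
    # one driver per task keeps the sketch simple; reuse drivers in practice
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        return driver.title
    finally:
        driver.quit()

with ThreadPoolExecutor(max_workers=2) as pool:
    for title in pool.map(fetch_title, URLS):
        print(title)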

            Source https://stackoverflow.com/questions/66056697

            QUESTION

scrapy javascript with splash won't render page
            Asked 2020-Dec-14 at 18:07

I want to crawl this page. I followed this post to crawl it, but it didn't render the webpage.
How can I fix it?
I use this:

            ...

            ANSWER

            Answered 2020-Dec-14 at 18:07

I found this post and fixed the render issue,

and this post to load the ld-json.

            Source https://stackoverflow.com/questions/65262149

            QUESTION

            Storing responses as files using Scrapy Splash
            Asked 2020-Oct-26 at 08:14

I'm creating my first Scrapy project with Splash and working with the test data from http://quotes.toscrape.com/js/. I want to store the quotes of each page as a separate file on disk (in the code below I first try to store the entire page). I have the code below, which worked when I was not using SplashRequest, but with the new code nothing is stored on disk when I 'Run and debug' in Visual Studio Code. Also, self.log does not write to my Visual Studio Code terminal window. I'm new to Splash, so I'm sure I'm missing something, but what?

            Already checked here and here.

            ...

            ANSWER

            Answered 2020-Oct-19 at 09:09
            Problem

The JavaScript on the website you wish to scrape isn't being executed.

            Solution

Increase the SplashRequest wait time to allow the JavaScript to execute.

            Example
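The answer's own example was not captured on this page; below is a minimal sketch of the fix, assuming the standard SplashRequest 'wait' argument, with the rendered page written to disk as the question intended:

import scrapy
from scrapy_splash import SplashRequest

class QuotesSpider(scrapy.Spider):
    name = "quotes_js"
    start_urls = ["http://quotes.toscrape.com/js/"]

    def start_requests(self):
        for url in self.start_urls:
            # ask Splash to wait so the page's JavaScript can run
            yield SplashRequest(url, self.parse, args={"wait": 3})

    def parse(self, response):
        # the body now contains the rendered DOM and can be stored
        filename = "quotes.html"
        with open(filename, "wb") as f:
            f.write(response.body)
        self.log(f"Saved file {filename}")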

            Source https://stackoverflow.com/questions/64350943

            QUESTION

How to get dynamically-loaded content from this website using scrapy-splash?
            Asked 2020-Oct-06 at 08:45

I'm trying to get data from this website using scrapy-splash but I'm not able to extract it. I want data about each real estate listing, like href, price, etc. Here is my code:

in settings.py:

            ...

            ANSWER

            Answered 2020-Oct-06 at 08:45

What I would do instead is this:

Send a request to https://www.metrocuadrado.com/results/_next/static/chunks/commons.8afec6af6d5add2097bf.js; in the response you'll find an API key if you search for "X-Api-Key". It can be extracted easily with a regex, something like: re.findall(r'"X-Api-Key":"(\w+)"').

Then, once you've extracted the API key, send a request to https://www.metrocuadrado.com/rest-search/search?seo=/bodega/arriendo&from=0&size=50, which is the website's hidden API. To get a valid response you have to attach the header, like this:
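A minimal sketch of those two steps with the requests package (the URLs, header name, and regex come from the answer above; the rest is an assumption):

import re
import requests

JS_URL = ("https://www.metrocuadrado.com/results/_next/static/"
          "chunks/commons.8afec6af6d5add2097bf.js")
API_URL = ("https://www.metrocuadrado.com/rest-search/search"
           "?seo=/bodega/arriendo&from=0&size=50")

# step 1: pull the API key out of the JavaScript bundle
js = requests.get(JS_URL).text
api_key = re.findall(r'"X-Api-Key":"(\w+)"', js)[0]

# step 2: call the hidden API with the key attached as a header
data = requests.get(API_URL, headers={"X-Api-Key": api_key}).json()
print(data)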

            Source https://stackoverflow.com/questions/64216201

            QUESTION

            Attempting login with Scrapy-Splash
            Asked 2020-Sep-23 at 07:35

Since I am not able to log in to https://www.duif.nl/login, I tried many different methods, like Selenium, with which I successfully logged in but didn't manage to start crawling.

Now I've tried my luck with scrapy-splash, but I can't log in :(

If I render the login page with Splash, I see the following picture:

Well, there should be a login form, with username and password, but Scrapy can't see it?

I've been sitting in front of that login form for a week now and I'm losing my will to live..

My last question didn't even get one answer, so now I'm trying again.

Here is the HTML code of the login form:

When I log in manually, I get redirected to "/login?returnUrl=", where I only have these form_data:

            My Code

            ...

            ANSWER

            Answered 2020-Sep-23 at 07:35
• I don't think using Splash here is the way to go, as even with a normal Request the form is there: response.xpath('//form[@id="login-form"]')
• There are multiple forms available on the page, so you have to specify which form you want to base your FormRequest.from_response on. It's best to specify the clickdata as well (so it goes to 'Login', not to 'forgot password'). In summary, it would look something like the sketch below:
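Combining both points, a sketch of what that spider could look like (the formid, formdata, and clickdata come from the snippet earlier on this page; the callback is a placeholder):

import scrapy
from scrapy import FormRequest

class DuifSpider(scrapy.Spider):
    name = "duif"
    start_urls = ["https://www.duif.nl/login"]

    def parse(self, response):
        # a plain Request is enough: the form is in the static HTML
        yield FormRequest.from_response(
            response,
            formid="login-form",
            formdata={"username": "not real", "password": "login data"},
            clickdata={"type": "submit"},
            callback=self.after_login,
        )

    def after_login(self, response):
        # continue crawling as a logged-in user
        self.logger.info("Logged in, landed on %s", response.url)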

            Source https://stackoverflow.com/questions/64006899

            QUESTION

            scrapy response is nothing like the page source
            Asked 2020-Aug-08 at 21:57

I am trying to use the scrapy shell to get into "ykc1.greatwestlife.com", which should be a public website. Though there is plenty of content when I look at the page source manually, I cannot get a correct response using Scrapy.

scrapy shell response result

Do I need to use scrapy-splash in this case? Any ideas? Thanks

            ...

            ANSWER

            Answered 2020-Aug-08 at 21:57

            You can actually see the two back-to-back requests, caused by

            Source https://stackoverflow.com/questions/63290461

            QUESTION

How can I scrape all the information from a page that uses JavaScript to expand the content
Asked 2020-Aug-03 at 09:27

I am trying to scrape a page that has a list of elements and, at the bottom, an expand button that extends the list. It uses an onclick event to expand, and I don't know how to activate it. I'm trying to use scrapy-splash, since I read it might work, but I can't make it function properly.

What I am currently trying to do is something like this

            ...

            ANSWER

            Answered 2020-Aug-03 at 09:27

It's not necessary to use Splash if you look at the network tab of Chrome DevTools: the site is making an HTTP GET request with some parameters. This is called re-engineering the HTTP requests and is preferable to using Splash/Selenium, particularly if you're scraping a lot of data.

When re-engineering a request, I copy the request ("Copy as cURL") and paste it into curl.trillworks.com. This gives me nicely formatted headers, parameters, and cookies for that particular request. I usually play about with the HTTP request using the requests Python package. In this case, the simplest HTTP request is one where you just have to pass the parameters and not the headers.

If you look on the right-hand side you have headers and parameters. Using the requests package, I figured out that you only need to pass the page parameters to get the information you need.
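A generic sketch of that workflow with the requests package (the endpoint and parameter names are hypothetical, since the real ones aren't shown on this page):

import requests

# hypothetical endpoint and parameter, standing in for the re-engineered ones
url = "https://example.com/api/listings"
params = {"page": 1}

# pass only the parameters; in this case the headers weren't needed
resp = requests.get(url, params=params)
resp.raise_for_status()
for item in resp.json():
    print(item)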

            Source https://stackoverflow.com/questions/63221933

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install scrapy-splash

            You can install using 'pip install scrapy-splash' or download it from GitHub, PyPI.
            You can use scrapy-splash like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
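After installing, scrapy-splash is enabled through your project's Scrapy settings. The configuration below follows the project's README (the Splash URL assumes a Splash instance running locally, e.g. in Docker):

# settings.py
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'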

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

Install

• PyPI

pip install scrapy-splash

• Clone (HTTPS)

https://github.com/scrapy-plugins/scrapy-splash.git

• GitHub CLI

gh repo clone scrapy-plugins/scrapy-splash

• Clone (SSH)

git@github.com:scrapy-plugins/scrapy-splash.git
