autotrader | A set of scripts for you to trade any financial product | Cryptocurrency library
kandi X-RAY | autotrader Summary
- Create the plot
- Compute a DataRange2D
- Add LineTool to input plot
- Bollinger Band
- Compute standard deviation
- Update the plot data
- Update plot data
- Updates the plot
autotrader Key Features
autotrader Examples and Code Snippets
Trending Discussions on autotrader
QUESTION
Hi everyone, the script below uses Selenium, but it is extremely slow and not feasible for a large number of URLs. Can anyone tell me how to convert it into a fast BS4 script, and can Beautiful Soup scrape "Click To Show" buttons? Thank you everyone for helping me!
from selenium import webdriver
import time
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
chrome_path = r"C:\Users\lenovo\Downloads\chromedriver_win32 (5)\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.maximize_window()
driver.implicitly_wait(10)
driver.get("https://www.autotrader.ca/a/ram/1500/hamilton/ontario/19_12052335_/?showcpo=ShowCpo&ncse=no&ursrc=pl&urp=2&urm=8&sprx=-2")
wait =WebDriverWait(driver,30)
# close the cookie pop-up, then reveal the phone number via the "Click to show" link
driver.find_element_by_xpath('//button[@class="close-button"]').click()
option = wait.until(EC.element_to_be_clickable((By.XPATH,"//a[text()= 'Click to show']")))
driver.execute_script("arguments[0].scrollIntoView(true);",option)
option.click()
time.sleep(10)
# read the listing title and the card containing the phone number
Name = driver.find_element_by_xpath('//p[@class="hero-title"]')
Number = driver.find_element_by_xpath('//div[@class="card-body"]')
print(Name.text,Number.text)
ANSWER
Answered 2021-Oct-09 at 04:21
You don't really need to use Selenium here; you can simply use requests, as the phone number you're looking for is in the HTML (just not visible).
If you click on "view page source" in your browser you can Ctrl+F for the phone number, so you don't need to emulate a browser and button clicking - everything is already there!
Now let's see how we can scrape this data just by using requests (or any other HTTP client like httpx or aiohttp):
import requests
import re

url = "https://www.autotrader.ca/a/ram/1500/hamilton/ontario/19_12052335_/?showcpo=ShowCpo&ncse=no&ursrc=pl&urp=2&urm=8&sprx=-2"
# We need to pretend that our request is coming from a web browser to get around
# anti-bot protection, by setting the User-Agent header to a web browser's string.
# In this case we use a Windows Chrome user agent string (you can find these online).
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}
# here we make the request for the HTML page
response = requests.get(url, headers=headers)
# now we can use regex patterns to find the phone number and the description
phone_number = re.findall(r'"phoneNumber":"([\d-]+)"', response.text)
# ['905-870-7127']
description = re.findall(r'"description":"(.+?)"', response.text)
# ['2011 Ram 1500 Sport Crew Cab v8 5.7L - Fully loaded, Crew cab, leather heated/air-conditioned seats, heated leather steering wheel, 5’7 ft box w/ tonneau cover.']
Regex patterns are a bit of work to wrap your head around at first. I suggest googling "regex python tutorial" if you want to learn more, but I can explain the pattern we're using here: we want to capture everything in double quotes that follows the "phoneNumber":" string and is either a digit (marked as \d) or a dash (marked as simply -).
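To make the capture group concrete, here is a quick demonstration of the same pattern on a tiny sample string (the sample is built from the phone number shown in the output above):

import re

sample = '{"phoneNumber":"905-870-7127"}'
# findall returns only the captured group, i.e. the digits and dashes inside the quotes
print(re.findall(r'"phoneNumber":"([\d-]+)"', sample))  # ['905-870-7127']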
This requests script only takes a few seconds to complete and uses almost no computing resources. However, one thing to watch out for when using an HTTP client rather than Selenium browser emulation is bot blocking, which often requires quite a bit of development work to get around - though the performance gains are really worth it!
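Since the original question mentions running this over a large number of URLs, here is a minimal sketch of how the same requests approach might be looped over several listings (the URL list is a placeholder, not from the original post):

import re
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}
urls = [
    "https://www.autotrader.ca/a/ram/1500/hamilton/ontario/19_12052335_/?showcpo=ShowCpo&ncse=no&ursrc=pl&urp=2&urm=8&sprx=-2",
    # ... more listing URLs ...
]
for url in urls:
    response = requests.get(url, headers=headers)
    phones = re.findall(r'"phoneNumber":"([\d-]+)"', response.text)
    descriptions = re.findall(r'"description":"(.+?)"', response.text)
    # take the first match on each page, if any
    print(url, phones[:1], descriptions[:1])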
QUESTION
Hi everyone, I am trying to scrape the name and phone number from this website, but it is not clicking and copying the "Click to Show" element required to see the phone number. Also, after this, how can I add multiple (100+) URLs in a loop, and can I achieve the same with bs4, as it would be faster?
from selenium import webdriver
chrome_path = r"C:\Users\lenovo\Downloads\chromedriver_win32 (5)\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get("https://www.autotrader.ca/a/ram/1500/hamilton/ontario/19_12052335_/?showcpo=ShowCpo&ncse=no&ursrc=pl&urp=2&urm=8&sprx=-2")
driver.find_element_by_xpath('//p[@class="hero-title"]').text
'2011 Ram 1500 Crew Cab Sport'
driver.find_element_by_xpath('//a[@class="link ng-star-inserted"]').click()
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element ... is not clickable at point (1079, 593). Other element would receive the click: ...
ANSWER
Answered 2021-Oct-08 at 06:01
Regarding Click to Show: you need to close the Cookie settings pop-up and then perform scrollIntoView to click on the element. I was able to click on Click to Show with the code below:
from selenium import webdriver
import time
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(executable_path="path to chromedriver.exe")
driver.maximize_window()
driver.implicitly_wait(10)
driver.get("https://www.autotrader.ca/a/ram/1500/hamilton/ontario/19_12052335_/?showcpo=ShowCpo&ncse=no&ursrc=pl&urp=2&urm=8&sprx=-2")
wait =WebDriverWait(driver,30)
wait.until(EC.element_to_be_clickable((By.XPATH,"//button[@class='close-button']"))).click()
option = wait.until(EC.element_to_be_clickable((By.XPATH,"//a[text()= 'Click to show']")))
driver.execute_script("arguments[0].scrollIntoView(true);",option)
option.click()
time.sleep(10)
Regarding doing the same thing with multiple URLs:
You can try like below:
urls = ['url1','url2']
for url in urls:
    driver.get(url)
    ...
If you want this in BeautifulSoup, you need to raise this question under the beautifulsoup tag. Right now you have tagged python, selenium and web-scraping.
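Building on the multiple-URL loop shown above together with the click sequence from this answer, a rough sketch of handling several listing URLs in one browser session might look like this (the urls list is a placeholder; the try/except allows for pages where the cookie pop-up does not reappear):

from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

driver = webdriver.Chrome(executable_path="path to chromedriver.exe")
driver.maximize_window()
driver.implicitly_wait(10)
wait = WebDriverWait(driver, 30)

urls = ['url1', 'url2']  # placeholder list of listing URLs
for url in urls:
    driver.get(url)
    # close the cookie pop-up if it appears (often only on the first page load)
    try:
        wait.until(EC.element_to_be_clickable((By.XPATH, "//button[@class='close-button']"))).click()
    except Exception:
        pass
    # reveal the phone number, then read the title and the card text
    option = wait.until(EC.element_to_be_clickable((By.XPATH, "//a[text()= 'Click to show']")))
    driver.execute_script("arguments[0].scrollIntoView(true);", option)
    option.click()
    name = driver.find_element_by_xpath('//p[@class="hero-title"]').text
    number = driver.find_element_by_xpath('//div[@class="card-body"]').text
    print(name, number)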
QUESTION
Hi, I need help with the issue below. I have a table whose data is updated every minute, and I have a trigger set on this table.
CREATE DEFINER=`root`@`localhost` TRIGGER `gsdatatabs_AFTER_UPDATE`
AFTER UPDATE ON `gsdatatabs`
FOR EACH ROW BEGIN
  IF NEW.CAMARILLA = 'B' or NEW.CAMARILLA = 'S' then
    UPDATE gsdatatabs SET ALERT = NEW.LTP;
  END IF;
END
Below is my table structure
Columns:
SCRIP varchar(45)
LTP float
OHL varchar(45)
ORB15 varchar(45)
ORB30 varchar(45)
PRB varchar(45)
CAMARILLA varchar(45)
ALERT float
I am trying to update the ALERT column with the value from LTP when the CAMARILLA value is 'B' or 'S'. In the backend, the data in the CAMARILLA column is updated every minute.
Currently, while updating from the backend, I am getting this error:
Error: Can't update table 'gsdatatabs' in stored function/trigger because it is already used by statement which invoked this stored function/trigger.
at Packet.asError (C:\Users\sprasadswain\Documents\googleSheet\AutoTrader\Server\node_modules\mysql2\lib\packets\packet.js:722:17)
at Query.execute (C:\Users\sprasadswain\Documents\googleSheet\AutoTrader\Server\node_modules\mysql2\lib\commands\command.js:28:26)
at Connection.handlePacket (C:\Users\sprasadswain\Documents\googleSheet\AutoTrader\Server\node_modules\mysql2\lib\connection.js:456:32)
at PacketParser.onPacket (C:\Users\sprasadswain\Documents\googleSheet\AutoTrader\Server\node_modules\mysql2\lib\connection.js:85:12)
at PacketParser.executeStart (C:\Users\sprasadswain\Documents\googleSheet\AutoTrader\Server\node_modules\mysql2\lib\packet_parser.js:75:16)
at Socket. (C:\Users\sprasadswain\Documents\googleSheet\AutoTrader\Server\node_modules\mysql2\lib\connection.js:92:25)
at Socket.emit (events.js:315:20)
Kindly guide
ANSWER
Answered 2021-Sep-01 at 15:33
It seems that you need a BEFORE UPDATE trigger that sets NEW.alert directly, rather than an AFTER UPDATE trigger that runs another UPDATE on the same table:
CREATE DEFINER=`root`@`localhost` TRIGGER `gsdatatabs_BEFORE_UPDATE`
BEFORE UPDATE ON `gsdatatabs`
FOR EACH ROW
SET NEW.alert = CASE WHEN NEW.camarilla IN ('B', 'S')
-- AND NEW.alert IS NULL
THEN NEW.ltp
ELSE NEW.alert
END;
QUESTION
I am really new to OpenCV and I was wondering why my debug string for an empty matrix runs even though I have a png in my directory. I can confirm that I do indeed have an image with the given name in the specified directory.
relevant code:
cv::Mat imgTrainingNumbers;
imgTrainingNumbers = cv::imread("C:/Users/.../source/repos/AutoTrader/training_chars2.png");
if (imgTrainingNumbers.empty()) { // if unable to open image
std::cout << "error: image not read from file\n\n"; // show error message on command line
return(0); // and exit program
}
ANSWER
Answered 2021-May-16 at 03:04
It is possible that the image you are using has corrupted data. The imread() function will not return anything to your imgTrainingNumbers matrix if:
a. you have not specified the path correctly
b. the image is not in a proper format / is corrupted
c. there is some linking issue
Replace the image with something else to test the theory.
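As a rough way to test the first two possibilities, here is a minimal sketch in Python (the language used elsewhere on this page) with the cv2 bindings; the path is copied from the question, including its elided "..." portion, so replace it with your real path:

import os
import cv2

path = "C:/Users/.../source/repos/AutoTrader/training_chars2.png"  # path from the question; fill in the elided part
if not os.path.exists(path):
    print("path does not exist - check the directory and file name")  # cause (a)
else:
    img = cv2.imread(path)
    if img is None:  # in Python, cv2.imread returns None when the file cannot be decoded
        print("file exists but could not be decoded - possibly corrupted or an unsupported format")  # cause (b)
    else:
        print("image loaded:", img.shape)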
QUESTION
I am trying to navigate to the next page on a website; normally this works for me, but I am struggling at the moment. Currently, with this line of code Set nextPageElement = HTML.getElementsByClassName("paginationMini--right__active")(0)
I can loop X amount of times, however it is NOT changing the page; the page always remains page 1, so if I state 3 pages it pulls the same data off page 1 THREE times, when it should change the page 3 times.
I have tried several variations and have left a few commented out in the code below. All of the attempts end after the first page; the line of code above is the only one that loops the code 3 times, but it does not change the page. I have always used this code, so I do know that it works. Please could someone point out the correct class?
'Searches Number of Pages entered in Sheet20
If pageNumber >= Replace(Worksheets("Sheet20").Range("J9").Value, "", "+") Then Exit Do
On Error Resume Next
Set nextPageElement = HTML.getElementsByClassName("paginationMini--right__active")(0) ' THIS LINE
'Set nextPage = HTML.getElementsByClassName("pagination--ul")(0).getElementsByClassName("pagination--li")(0).getElementsByTagName("a")(0)
'Set nextPage = HTML.querySelector(".pagination--ul > li.pagination--li > a")
'Set nextPage = HTML.getElementsByClassName("pagination--ul")(0).getElementsByClassName("pagination--li")(0).getElementsByClassName("paginationMini--right__active")(0)
'Set nextPageElement = HTML.getElementsByClassName("paginationMini--ul")(0).getElementsByTagName("li")(2).getElementsByTagName("a")(0)
'Set nextPageElement = HTML.getElementsByClassName("paginationMini")(0).getElementsByTagName("li")(2).getElementsByTagName("a")(0)
If nextPageElement Is Nothing Then Exit Do
nextPageElement.Click 'next web page
Do While objIE.Busy = True Or objIE.readyState <> 4
Loop
Set Html = objIE.document
pageNumber = pageNumber + 1
ANSWER
Answered 2021-Apr-02 at 04:39
Try this way to grab content from the next pages. The links connected to the next pages are invalid ones; when you click on the next-page links, they get redirected to some other URL. However, the following is one of the easy ways to get things done:
Sub FetchNextPageContent()
    Dim IE As Object, post As Object, Url$, I&
    Set IE = CreateObject("InternetExplorer.Application")
    Url = "https://www.autotrader.co.uk/car-search?sort=relevance&postcode=W1K%203RA&radius=1500&include-delivery-option=on&page="
    For I = 1 To 5
        IE.Visible = True
        IE.navigate Url & I
        While IE.Busy = True Or IE.readyState < 4: DoEvents: Wend
        For Each post In IE.document.getElementsByClassName("search-page__result")
            With post.getElementsByClassName("listing-fpa-link")
                If .Length Then Debug.Print .Item(0).getAttribute("href")
            End With
        Next post
    Next I
End Sub
If clicking on the next page button is what you want to stick with, the following should do that:
Sub FetchNextPageContent()
    Dim IE As Object, post As Object, Url$, I&, nextPage As Object
    Dim Html As HTMLDocument
    Set IE = CreateObject("InternetExplorer.Application")
    Url = "https://www.autotrader.co.uk/car-search?sort=relevance&postcode=W1K%203RA&radius=1500&include-delivery-option=on&page=1"
    IE.Visible = True
    IE.navigate Url
    Do
        While IE.Busy = True Or IE.readyState < 4: DoEvents: Wend
        Set Html = IE.document
        For Each post In Html.getElementsByClassName("search-page__result")
            With post.getElementsByClassName("listing-fpa-link")
                If .Length Then Debug.Print .Item(0).getAttribute("href")
            End With
        Next post
        Set nextPage = Html.querySelector("a.pagination--right__active")
        If Not nextPage Is Nothing Then
            nextPage.Click
            Application.Wait Now + TimeValue("00:00:05")
        Else
            Exit Do
        End If
    Loop
End Sub
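For readers working in Python rather than VBA, a rough equivalent of the page-parameter technique might look like the sketch below. It reuses the URL and class names from the VBA sub above; as noted earlier on this page, a plain HTTP client may still run into anti-bot blocking.

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}
base = "https://www.autotrader.co.uk/car-search?sort=relevance&postcode=W1K%203RA&radius=1500&include-delivery-option=on&page="
for page in range(1, 6):
    soup = BeautifulSoup(requests.get(base + str(page), headers=headers).text, "html.parser")
    # each search result card contains a link to the full listing
    for result in soup.select(".search-page__result"):
        link = result.select_one("a.listing-fpa-link")
        if link is not None:
            print(link.get("href"))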
QUESTION
I am having difficulty trying to pull the href from a website. I have been stuck on it for a few days now. As the image below shows, I can get all the other required information. I have tried several variations of the class, as well as trying to get it via the a tag, but I cannot work it out.
This is my latest attempt, and I still cannot work it out.
Question: can someone please point out the correct class?
If element.getElementsByClassName("product-card ")(0).getElementsByClassName("listing-fpa-link")(0) Is Nothing Then
    wsSheet.Cells(sht.Cells(sht.Rows.Count, "A").End(xlUp).Row + 1, "A").Value = "-"
Else
    HtmlText = element.getElementsByClassName("product-card")(0).getElementsByClassName("listing-fpa-link")(0).href
    wsSheet.Cells(sht.Cells(sht.Rows.Count, "A").End(xlUp).Row + 1, "A").Value = HtmlText
End If
[Rendered text of the pasted listing snippet: 12 • Good price • No admin fees • Finance available • £6,607 • Renault Clio 0.9 TCe Play (s/s) 5dr • £550 OF EXTRAS • BLUETOOTH • 2018 (68 reg) • Hatchback • 61,671 miles • 0.9L • 76PS • Manual • Petrol • 1 owner • ULEZ • Carbase Bristol • See all 780 cars • 4.5 (5409 reviews) • bristol (77 miles)]
As always thanks in advance.
ANSWER
Answered 2021-Apr-01 at 14:48
It's OK, I have fixed the issue. I changed the parent class to Set elements = HTML.getElementsByClassName("search-page__result") and then changed my code to:
If element.getElementsByClassName("js-click-handler listing-fpa-link tracking-standard-link")(0) Is Nothing Then
    wsSheet.Cells(sht.Cells(sht.Rows.Count, "A").End(xlUp).Row + 1, "A").Value = "-"
Else
    HtmlText = element.getElementsByClassName("js-click-handler listing-fpa-link tracking-standard-link")(0).href
    wsSheet.Cells(sht.Cells(sht.Rows.Count, "A").End(xlUp).Row + 1, "A").Value = HtmlText
End If
QUESTION
Using Selenium, Python, and Pandas to scrape autotrader.co.uk, I'd like a table of stats for the vehicles listed, but for some reason it's proving more difficult than I thought...
Full code here: pastebin link
It seems like sometimes the 'title' and 'price' elements are not recognised, even though it's the exact same code in the HTML:
Working item's HTML (row index 1):
£16,500
Non-working HTML (row index 2):
£12,995
Element selector:
data['Price'] = listing.find_elements_by_css_selector('section.product-card-pricing')[0].text
ANSWER
Answered 2021-Jan-21 at 11:17
Getting the sub-element of listing with that CSS selector wouldn't work here. I'd also add a WebDriverWait for the cookie pop-up that appears.
data['Price'] = listing.find_element_by_xpath(".//section[@class='product-card-pricing']").text
#print(data['Price'])
data['Title'] = listing.find_element_by_xpath(".//h3[@class='product-card-details__title']").text
#print(data['Title'])
Outputs
£10,500
Land Rover Range Rover Evoque 2.2 ED4 Pure Tech 2WD 5dr
£16,500
Land Rover Range Rover Evoque 2.2 SD4 Pure Tech AWD 5dr
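The cookie wait mentioned in the answer might look something like the sketch below; the button locator is a hypothetical placeholder (inspect the page for the real one), and driver is assumed to be the WebDriver instance from the original script.

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# dismiss the cookie banner once, right after loading the search page
try:
    WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.XPATH, "//button[contains(., 'Accept')]"))  # hypothetical locator
    ).click()
except Exception:
    pass  # no banner appeared within the timeout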
QUESTION
I couldn't find an easy way to do this, and none of the complex ways worked. Can you help?
I have a dataframe resulting from a web scrape. In it I have a data['Milage'] column that contains values like '80,000 miles'. Obviously that's a string, so I'm looking for a way to erase all content that isn't numeric and convert the string to straight numbers: '80,000 miles' -> '80000'.
I tried the following:
data['Milage'] = data['Milage'].str[1:].astype(int)
I have no idea what the code above does; I took it from another post on here. But I get the following error message:
File "autotrader.py", line 73, in
data['Milage'] = data['Milage'].str[1:].astype(int)
AttributeError: 'str' object has no attribute 'str'
The other solution I tried was this:
data['Milage'] = str(data['Milage']).extract('(\d+)').astype(int)
And the resulting error is as follows:
File "autotrader.py", line 73, in
data['Milage'] = str(data['Milage']).extract('(\d+)').astype(int)
AttributeError: 'str' object has no attribute 'extract'
I would appreciate any help! Thank you
ANSWER
Answered 2020-Dec-08 at 13:08
After some testing, the problem was that data is a dictionary; you need to build df as a DataFrame first.
I think you need to remove the non-numeric characters and convert to integers:
df['Milage'] = df['Milage'].str.replace(r'\D', '', regex=True).astype(int)
print(df['Milage'])
0 70000
1 69186
2 46820
3 54000
4 83600
5 139000
6 62000
7 51910
8 86000
9 38000
10 65000
11 119000
12 49500
13 60000
14 35000
15 57187
16 45050
17 80000
18 84330
19 85853
Name: Milage, dtype: int32
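Since the root cause was that data is a plain dictionary, here is a minimal sketch of building the DataFrame first and then applying the cleaning step (the sample rows are placeholders shaped like the scraped values):

import pandas as pd

data = {'Milage': ['70,000 miles', '69,186 miles', '46,820 miles']}  # placeholder scrape output
df = pd.DataFrame(data)
df['Milage'] = df['Milage'].str.replace(r'\D', '', regex=True).astype(int)
print(df['Milage'].tolist())  # [70000, 69186, 46820]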
QUESTION
I need your help to get the "Description" content of this URL using BeautifulSoup in Python (as shown below).
I have tried the code below, but it returns None only!
import requests as rq
from bs4 import BeautifulSoup

# url is the listing URL from the question (not shown in the post)
hdr = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'}
page = rq.get(url, headers=hdr)
soup = BeautifulSoup(page.content, "html.parser")
description = soup.find('div', {'class': 'force-wrapping ng-star-inserted'})
Thanks
ANSWER
Answered 2020-Dec-07 at 07:14
I tried it and saw that soup doesn't have the class force-wrapping ng-star-inserted, because you fetched the source of the site, which is different from what you see in the dev tools. To see the source of the site you can press Ctrl+U; there you can see that the description is in a meta tag whose name is description. So what you need to do is find this tag and take its content. For example:
res = soup.find('meta', {"name":"description"})
print(res['content'])
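Putting that together with the question's request code, a complete sketch might look like this; url stands for the listing URL from the question, which is not shown in the post:

import requests as rq
from bs4 import BeautifulSoup

hdr = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'}
url = "..."  # the listing URL from the question (not shown in the post)
page = rq.get(url, headers=hdr)
soup = BeautifulSoup(page.content, "html.parser")
# the description is stored in <meta name="description" content="..."> in the page source
res = soup.find('meta', {"name": "description"})
if res is not None:
    print(res['content'])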
QUESTION
I am trying to scrape car prices from this website:
To get car prices you have to fill out the form, and I have to choose from dropdowns using Selenium. I am using this code to choose from the dropdowns:
# Imports
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# driver setup implied by the imports above (not shown in the original post)
driver = webdriver.Chrome(ChromeDriverManager().install())

year_dropdown = Select(WebDriverWait(driver, 5)
    .until(EC.element_to_be_clickable((By.ID, "j_id_3q-carInfoForm-year-selectOneMenu"))))
year_dropdown.select_by_value('2015')
ANSWER
Answered 2020-Nov-03 at 17:01
I resolved the issue by using a real ChromeDriver. I was using the webdriver-manager package, and when I removed it and downloaded a real ChromeDriver, the issue was gone.
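A minimal sketch of the fix described here, pointing Selenium at a locally downloaded ChromeDriver instead of the manager package (the executable path and page URL are placeholders; the dropdown ID is taken from the question):

from selenium import webdriver
from selenium.webdriver.support.ui import Select, WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

driver = webdriver.Chrome(executable_path="path/to/chromedriver.exe")  # placeholder path to the downloaded driver
driver.get("...")  # placeholder for the form page URL from the question
year_dropdown = Select(WebDriverWait(driver, 5)
    .until(EC.element_to_be_clickable((By.ID, "j_id_3q-carInfoForm-year-selectOneMenu"))))
year_dropdown.select_by_value('2015')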
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Install autotrader
You can use autotrader like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.