finviz | Stock quotes and company data from Finviz | Business library
kandi X-RAY | finviz Summary
Stock quotes and company data from Finviz.
Community Discussions
Trending Discussions on finviz
QUESTION
Hi everyone. I am working on a Python project that uses Selenium to scrape data, and I have to scrape the data every 5 minutes. The problem is that Selenium with ChromeDriver is very slow: a full run takes at least 30 minutes, so I can't refresh the data every 5 minutes. If you have experience in this area, please help me. If you can suggest other approaches (for example Beautiful Soup), I will be very happy. Note: the site I want to get data from renders its content with JavaScript. This is my source code; I am testing it.
...ANSWER
Answered 2021-Jun-08 at 09:24
There seems to be an API on the nasdaq site that you can query (found using the browser's network tools), so there isn't really any need to use Selenium for this. Here is an example that gets the data using requests.
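A minimal sketch of that approach with requests; the exact endpoint path and parameters here are assumptions for illustration (check the browser's network tab for the real ones):

```python
import requests

def nasdaq_quote_url(symbol: str) -> str:
    # Hypothetical endpoint shape found via the browser's network tab;
    # the path and query parameters are assumptions for illustration.
    return f"https://api.nasdaq.com/api/quote/{symbol}/info?assetclass=stocks"

def fetch_quote(symbol: str) -> dict:
    # Many JSON APIs reject requests without a browser-like User-Agent
    headers = {"User-Agent": "Mozilla/5.0"}
    resp = requests.get(nasdaq_quote_url(symbol), headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()
```

Calling fetch_quote on a 5-minute schedule is then a single HTTP request per symbol, far faster than driving a browser.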
QUESTION
I hacked together the code below.
...ANSWER
Answered 2021-May-29 at 16:12
You need to store all the sublists of data per ticker in its own list instead of blending them all together. Then you can use itertools.chain.from_iterable to make one flat list per ticker, take every even item as a key and every odd item as a value in a dictionary, and put the final dict for each ticker into a larger list. That list can then be turned into a DataFrame.
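A minimal sketch of that approach, with invented sample data standing in for the scraped sublists:

```python
from itertools import chain

import pandas as pd

# Hypothetical scraped data: one list of sublists per ticker,
# alternating metric names and values.
per_ticker = {
    "AAPL": [["P/E", "28.5"], ["EPS", "6.05"]],
    "MSFT": [["P/E", "35.1"], ["EPS", "9.65"]],
}

rows = []
for ticker, sublists in per_ticker.items():
    flat = list(chain.from_iterable(sublists))   # one flat list per ticker
    record = dict(zip(flat[0::2], flat[1::2]))   # even items -> keys, odd -> values
    record["Ticker"] = ticker
    rows.append(record)                          # one dict per ticker

df = pd.DataFrame(rows)
```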
QUESTION
This is my code without concurrent.futures:
...ANSWER
Answered 2021-May-08 at 04:03
Not if you're using the process pool, no. Each pool worker runs in its own process, with a completely separate Python interpreter and a completely separate memory space. concurrent.futures tries to shuffle over the things you need, but there is no live connection.
If you need to ship results back, you might consider using a multiprocessing.Queue. That knows how to marshal things across the process boundary.
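A minimal sketch of shipping results back through a multiprocessing.Queue; the worker's "scraped data" is a stand-in, and on Windows or macOS you would also wrap the launching code in an "if __name__ == '__main__':" guard:

```python
import multiprocessing as mp

def worker(symbols, queue):
    # Runs in a separate process with its own memory space;
    # results must be sent back explicitly through the queue.
    for sym in symbols:
        queue.put((sym, len(sym)))  # stand-in for real scraped data

queue = mp.Queue()
proc = mp.Process(target=worker, args=(["AAPL", "MSFT"], queue))
proc.start()
results = [queue.get() for _ in range(2)]  # blocks until items arrive
proc.join()
```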
QUESTION
https://finviz.com/screener.ashx?v=152&f=cap_midover&c=1,16,17,18,65
I want to scrape the data from the website above using VBA so that I can obtain the 5 columns I want (Ticker, EPS, EPS this Y, EPS next Y, Price). There are 99 pages to loop through and each page has 20 tickers, which means I need to scrape almost 2000 rows of data. I'm able to do this using Power Query, but it takes around 3 minutes to refresh the data that way.
I'm not sure whether using VBA to scrape the data would speed up the refresh, so I would like some help. I'm new to VBA, and my code below gives me the whole web page as output (not what I want) and doesn't loop through the pages from 1 to 99.
...ANSWER
Answered 2020-Dec-30 at 09:23
It's my fourth day learning VBA, so don't expect much. I also have no idea how to loop through the different pages and get the data into your sheet, so this isn't going to solve your whole problem, but I'll still propose my thought.
If you are going to make a separate sheet for each page, you can use the code below to delete the junk content you don't need. The junk should be limited to a particular range, so you can delete it after it comes into the sheet. This code won't get all the pages into different sheets; once you get the paging working, the cleanup can be done next.
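As a reference point for the looping part (sketched in Python rather than VBA), the finviz screener appears to paginate via an r row-offset parameter, so the 99 page URLs can be generated up front. The offset pattern here is an assumption based on the URL in the question:

```python
BASE = "https://finviz.com/screener.ashx?v=152&f=cap_midover&c=1,16,17,18,65"

def page_urls(pages: int, per_page: int = 20):
    # Page 1 appears to start at row 1, page 2 at row 21, and so on,
    # with 20 tickers per page (an assumption from inspecting the site).
    return [f"{BASE}&r={n * per_page + 1}" for n in range(pages)]

urls = page_urls(99)
```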
QUESTION
Most websites ask you to accept cookies and a privacy policy when you load them; I think it's mainly in the EU. I'm struggling with how to reuse the cookies so I don't have to keep clicking "Accept all" every time I load up Chrome.
The way I'm thinking about it: if I click "Accept all" the first time and save the cookie, I can write code that fetches the cookie file, so the browser knows I already accepted the website's cookies and the prompt doesn't pop up again.
The website I'm using for this example is https://finviz.com/
...ANSWER
Answered 2021-Apr-03 at 15:18
It is at least complicated to write an app that listens for cookies being set, copies them to a file, and puts them back when the browser is restarted. The same applies if you want to save the cookies manually.
But if you did that, deleting the cookies would be unnecessary in the first place, so you could simply allow cookies in your browser's settings.
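That said, a common workaround with Selenium is to save driver.get_cookies() to a file with pickle and re-add each cookie with driver.add_cookie() on the next run. A minimal sketch of the save/load half; the cookie contents below are invented and the driver calls are shown only in comments:

```python
import pickle
from pathlib import Path

COOKIE_FILE = Path("finviz_cookies.pkl")

def save_cookies(cookies, path=COOKIE_FILE):
    # With Selenium: save_cookies(driver.get_cookies()) after clicking "Accept all"
    path.write_bytes(pickle.dumps(cookies))

def load_cookies(path=COOKIE_FILE):
    # With Selenium: for c in load_cookies(): driver.add_cookie(c)
    # (add_cookie only works after driver.get() on the cookie's domain)
    return pickle.loads(path.read_bytes()) if path.exists() else []

# Invented cookie dict in the shape Selenium returns
save_cookies([{"name": "consent", "value": "accepted", "domain": "finviz.com"}])
restored = load_cookies()
```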
QUESTION
I've been trying to get the information below, using the formula below, but it shows as loading indefinitely:
...ANSWER
Answered 2021-Mar-31 at 13:24
I believe your goal is as follows.
- You want to retrieve the values Energy | Oil & Gas E&P | USA from https://finviz.com/quote.ashx?t=OVV.
- You want to retrieve the values Technology | Software - Infrastructure | United Kingdom from https://finviz.com/quote.ashx?t=MIME.
- You want to achieve this using IMPORTXML and Google Apps Script.
QUESTION
So finviz changed their website, making it more difficult to extract data from their site. I am trying to get the date from the analyst ratings table, but I get a series of numbers instead.
...ANSWER
Answered 2021-Mar-22 at 09:03
44218 is the same as Jan-22-21.
Why? Because spreadsheet dates are stored as serial numbers.
Solution: format 44218 as a date from the menu.
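In code, the conversion is just an offset from the spreadsheet epoch; Excel and Google Sheets effectively count days from 1899-12-30. A sketch in Python:

```python
from datetime import datetime, timedelta

# Spreadsheet serial dates count days from an epoch; Excel and Google
# Sheets effectively use 1899-12-30 as day 0.
EPOCH = datetime(1899, 12, 30)

def serial_to_date(serial: int) -> datetime:
    return EPOCH + timedelta(days=serial)

serial_to_date(44218)  # -> 2021-01-22, i.e. Jan-22-21
```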
QUESTION
I have a Date column like this
...ANSWER
Answered 2021-Feb-24 at 10:32
With the given date/time format, you can:
- split on the space between date and time
- put the second-to-last element into column "date" and forward-fill the gaps
- put the last element into column "time"
EX:
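The original snippet isn't shown here, but the steps above can be sketched with pandas (the sample data is invented):

```python
import pandas as pd

# Rows that start a new day carry a date; later rows have only a time
df = pd.DataFrame({"Date": ["Jan-22-21 09:30AM", "10:15AM",
                            "Jan-23-21 08:00AM", "09:45AM"]})

parts = df["Date"].str.split(" ")
df["date"] = parts.str[-2]        # NaN where the row has only a time
df["date"] = df["date"].ffill()   # forward-fill the gaps
df["time"] = parts.str[-1]        # last element is always the time
```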
QUESTION
I have a function that scrapes data from Finviz, and part of the function compiles a list of metrics. If a website address does not exist as the function iterates through a list of stocks, it still creates a row in the DataFrame that includes the name of the stock but none of the metrics. I would like it so that if the website cannot be found, the row is either deleted at the end of the loop or isn't included at all. Any insight into how I can adjust this function would be much appreciated.
...ANSWER
Answered 2021-Feb-03 at 23:30
A simple solution would be to store the stocks for which an exception occurred and delete those rows after looping through. The deletion happens after the for-loop to avoid changing the DataFrame while iterating over it.
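A sketch of that pattern with an invented DataFrame, where a missing metric stands in for "the request raised an exception":

```python
import pandas as pd

# Hypothetical DataFrame built while scraping; rows for tickers whose
# finviz page failed to load have only the Ticker filled in.
df = pd.DataFrame({"Ticker": ["AAPL", "FAKE", "MSFT"],
                   "P/E": ["28.5", None, "35.1"]})

failed = []
for ticker in df["Ticker"]:
    # In the real function this would be: except on the failed request
    if df.loc[df["Ticker"] == ticker, "P/E"].isna().any():
        failed.append(ticker)

# Delete after the loop, never while iterating over the DataFrame
df = df[~df["Ticker"].isin(failed)].reset_index(drop=True)
```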
QUESTION
I am scraping several financial metrics from Finviz using a for loop that iterates through a list of stock symbols. I am running into an issue with the empty values ('-') on Finviz: they cause problems when subsetting the data down the line, because '-' is recognized as a string rather than a float like the values I am trying to subset. I would like to nullify these values and have been trying to use the replace function from the Pandas module, but haven't had any luck. Ideally it would nullify the values as it iterates through the second for loop, rather than fixing the entire list afterwards. Code is shown below:
...ANSWER
Answered 2021-Feb-03 at 00:27
df.replace() is not an in-place operation. You need df = df.replace(...), or pass inplace=True.
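For example, replacing the '-' placeholders with NaN so the column can become numeric (sample data invented):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Ticker": ["AAPL", "MSFT"], "P/E": ["28.5", "-"]})

# replace() returns a new DataFrame; reassign it (or use inplace=True)
df = df.replace("-", np.nan)
df["P/E"] = df["P/E"].astype(float)  # now safe to convert and subset
```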
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported