praw | Python package that allows for simple access to Reddit's API | REST library
kandi X-RAY | praw Summary
PRAW, an acronym for "Python Reddit API Wrapper", is a Python package that allows for simple access to Reddit's API.
Top functions reviewed by kandi - BETA
- Update the user's preferences
- Make an HTTP request
- Make a PATCH request
- Make a request
- Update the user's settings
- Create or update a subreddit
- Return the amount of time remaining on a rate limit
- Post to a subreddit
- Add text content
- Creates a new collection
- Update the rule
- Create a new multireddit
- Create a new rule
- Add an emoji
- Update the user's permissions
- Invite a forum
- Deprecated
- Add a link
- Run checks
- Return a generator of ModNotes
- Update the settings
- Create a new ModNote
- Update a flair template
- Create a draft
- Send a message to a subreddit
- Update this thread
praw Key Features
praw Examples and Code Snippets
reddit.domain("imgur.com").controversial("week")
reddit.multireddit("samuraisam", "programming").controversial("day")
reddit.redditor("spez").controversial("month")
reddit.redditor("spez").comments.controversial("year")
reddit.redditor("sp
reddit.subreddit("test").flair.templates
script/
- main.py
- image.png
reddit.subreddit('').submit_image(title, image_path="image.png")
reddit = praw.Reddit(user_agent='Comment Extraction (by /u/guy_asking_on_stackoverflow)',
                     client_id=sec.reddit_client_id, client_secret=sec.reddit_client_secret,
                     password=sec.reddit_password, username=sec.reddit_username)
while True:
    try:
        my_reddit_comments = api.search_comments(filter=['id', 'author', 'body', 'subreddit'], limit=100000)
        data = pd.DataFrame(k.d_ for k in my_reddit_comments)
        break
    except Exception:
        print('Vahid is speaking: Max Retries reached, retrying...')
from contextlib import suppress

with suppress(Exception):
    while True:
        for item in reddit.inbox.unread(limit=None):
            if item.body.startswith("+withdraw"):
                print(item.author.name + " requested a withdraw")
                command = item.body
                command_split = command.split()
os.environ["HTTP_PROXY"] = "http://proxy.host.com:8080"
print("Request page with IP:", requests.get("http://icanhazip.com", timeout=1.5).text.strip())
data = {
    "ID": post,
    "Date_utc": post.created_utc,
    "Upvotes": post.ups,
    "Number of comments": post.num_comments,
    "Subthread name": post.title,
}
writer.writerow(data)
import csv

data = {
    'ID': post,
    'Date_utc': post.created_utc,
    'Upvotes': post.ups,
    'Number of comments': post.num_comments,
    'Subthread name': post.title
}
with open('pdata.csv', 'a', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=data.keys())
    writer.writerow(data)
testPosts = list(reddit.subreddit("test").top("day", limit=50))
redditdevPosts = list(reddit.subreddit("redditdev").top("day", limit=50))

switch = False
if not switch:
    for c, item in enumerate(testPosts):
        submissions.append(item)
def submissions(subreddit):
    stuff = []
    for submission in subreddit:
        # Do stuff with each submission, collecting the results
        stuff.append(submission)
    return stuff

submissions(reddit.subreddit("subname").new(limit=100))
submissions(reddit.subreddit("subname").hot(limit=100))
Community Discussions
Trending Discussions on praw
QUESTION
I'm stuck on this part. I'm extracting data from Reddit using PRAW, and I need to push all the data I extract into a dictionary and then store the dict data in a PostgreSQL database. The for-loop works and extracts all the values I need, but at the end only the last one is inserted into the dict. I tried using a dict of lists, but the same values are repeated several times. How can I insert all the data into my dict? I also tested other solutions I found here, but just got an error. Here's my code:
...ANSWER
Answered 2022-Apr-02 at 22:47This may still not be exactly what you are trying to achieve, but here is an attempt at something that I think does what you want:
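For illustration, a minimal sketch of the pattern the answer describes (the posts table, its columns, and the subreddit name are assumptions): build up a list of rows inside the loop, then insert them all in one call:

import psycopg2

rows = []
for post in reddit.subreddit("test").hot(limit=100):
    # Append one tuple per submission instead of overwriting a single dict.
    rows.append((post.id, post.title, post.ups, post.num_comments))

# Table name "posts" and its columns are assumptions for this sketch.
conn = psycopg2.connect("dbname=mydb user=me")
with conn, conn.cursor() as cur:
    cur.executemany(
        "INSERT INTO posts (id, title, upvotes, num_comments) VALUES (%s, %s, %s, %s)",
        rows,
    )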
QUESTION
Trying to post to a subreddit that requires flairs
...ANSWER
Answered 2022-Mar-21 at 13:29You can find the available flair ids through the subreddit's flair templates.
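As a sketch (the subreddit name is a placeholder), PRAW's user-selectable link flair templates each carry an id you can pass to submit():

subreddit = reddit.subreddit("some_subreddit")  # placeholder name
templates = list(subreddit.flair.link_templates.user_selectable())
for template in templates:
    print(template["flair_template_id"], template["flair_text"])

# Pass the chosen template's id when submitting:
subreddit.submit("My title", url="https://example.com",
                 flair_id=templates[0]["flair_template_id"])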
QUESTION
I'm working on a misinformation project and I want to scrape a couple of quarantined subreddits (r/russia specifically).
When I follow the guidelines posted in the PRAW docs, I get a prawcore.exceptions.Forbidden: received 403 HTTP response error. I saw a couple of solutions from 3+ years ago about manually adding the subreddit in the browser and using quaran.opt_in(), but no luck. Below is a code snippet:
ANSWER
Answered 2022-Mar-19 at 17:03To scrape quarantined subreddits, your client cannot be read-only. You can make the client fully authorized by also providing the account username and password.
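A sketch of that (the credentials and user agent are placeholders); once the client is authorized, the quarantine opt-in lives on subreddit.quaran:

import praw

reddit = praw.Reddit(client_id="...", client_secret="...",  # placeholders
                     username="...", password="...",
                     user_agent="quarantine-scraper by u/your_username")

subreddit = reddit.subreddit("russia")
subreddit.quaran.opt_in()  # opt in once before reading the quarantined subreddit
for post in subreddit.new(limit=10):
    print(post.title)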
QUESTION
I am trying to make a cryptocurrency tipbot for Reddit in Python. I am trying to see if I can communicate with BitcoinAPI and return things like the balance. My bot should have commented the balance but just commented this:
...ANSWER
Answered 2022-Mar-05 at 22:23You're passing the function get_balance itself to item.reply. You need to call get_balance and pass its result to item.reply, e.g.:
item.reply(get_balance())
QUESTION
Python 3.10.2 with sqlite3 on Windows 10. Relatively new to Python, but I was quite experienced with Perl around 1997-2005.
Banged my head against this for three days. Attacked from lots of angles. No success. I'm asking for guidance, because I don't see myself progressing on my own at this point without help flipping on the light-switch.
My overall application utilizes PRAW to scrape a subreddit and put the Submissions into a table. Then, we go through that reddit table one row at a time, scanning the selftext column for any URLs. Disregard some. Clean up others. Insert the ones we want into another table. At a later point, I'll then go through that table, downloading each URL.
My problem is that if I run the below code with the INSERT commented out, my print(dl_lnk[0]) line prints out the results of all (currently) 1,400 rows from the reddit table. But if I activate the INSERT line, it only seems to process the first row in the table. I can assume this to be the case, because the print line only shows a few lines and they are all regarding the same user and the same/similar URL.
I don't understand why this is. I don't think it's because of an error in SQL (though there seem to be fewer options for accessing SQL exceptions in-code than I used to have in Perl). But I also don't see anything about my flow logic that would make it process just one row when an SQL INSERT happens but process all of them when it's commented out.
...ANSWER
Answered 2022-Feb-25 at 17:33You should use different cursors for the SELECT and INSERT queries. When you use the same cursor, the INSERT resets the cursor, so you can't fetch the remaining rows.
Alternatively, you could use cursorObj.fetchall() to get all the results of the SELECT query as a list and loop through that, rather than looping through the cursor itself. If there are lots of rows, this will use lots of memory, while looping through the cursor is incremental; but 1,400 rows may not be a problem.
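A minimal sketch of the two-cursor fix (the database path, schema, and the extract_urls helper are assumptions for illustration):

import sqlite3

conn = sqlite3.connect("app.db")  # placeholder path
read_cur = conn.cursor()
write_cur = conn.cursor()

read_cur.execute("SELECT id, selftext FROM reddit")  # assumed columns
for row_id, selftext in read_cur:
    # Writing through a second cursor leaves the SELECT's position intact.
    for url in extract_urls(selftext):  # extract_urls is a hypothetical helper
        write_cur.execute("INSERT INTO downloads (url) VALUES (?)", (url,))
conn.commit()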
QUESTION
I want to use the API through a proxy in PRAW, but I have no idea how to do it. Any help?
I read through https://praw.readthedocs.io/en/stable/getting_started/configuration.html#using-an-http-or-https-proxy-with-praw and, as I understand it, if I want to use the API through a proxy I must set it before running the script on the command line?
Isn't there a possibility to set the proxy in the Python code itself before authorizing through the password flow?
...ANSWER
Answered 2022-Feb-25 at 01:09It seems the solution inside the code is to set the proxy environment variables before constructing the Reddit instance:
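A sketch of that (the proxy address and credentials are placeholders); PRAW issues its HTTP traffic through the requests library, which honors these environment variables:

import os
import praw

os.environ["HTTP_PROXY"] = "http://proxy.host.com:8080"   # placeholder proxy
os.environ["HTTPS_PROXY"] = "http://proxy.host.com:8080"

reddit = praw.Reddit(client_id="...", client_secret="...",  # placeholders
                     username="...", password="...",
                     user_agent="proxy example by u/your_username")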
QUESTION
This code is supposed to get some information about a post, like its title, upvotes, etc., and then write it to a CSV file. But when I run it I get this error:
...ANSWER
Answered 2022-Feb-11 at 15:45You are using a dict writer, which accepts dicts. You are passing a string to it.
You should construct a dict, and write that:
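A sketch of that fix, mirroring the snippet shown earlier on this page (the field names come from that snippet; the subreddit is a placeholder):

import csv

fieldnames = ['ID', 'Date_utc', 'Upvotes', 'Number of comments', 'Subthread name']
with open('pdata.csv', 'a', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    for post in reddit.subreddit("test").hot(limit=25):
        writer.writerow({
            'ID': post,
            'Date_utc': post.created_utc,
            'Upvotes': post.ups,
            'Number of comments': post.num_comments,
            'Subthread name': post.title,
        })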
QUESTION
I'm trying to fetch posts from two different subreddits and create a list of that day's top posts (ordered from most upvoted to least upvoted) that alternates between subreddits. Here is my code:
...ANSWER
Answered 2022-Feb-02 at 00:50This would give you an alternating list of posts in submissions, depending on which subreddit you would like to have first.
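A sketch of one way to do it, following the snippet shown earlier on this page: zip the two result lists and append a post from each subreddit in turn:

testPosts = list(reddit.subreddit("test").top("day", limit=50))
redditdevPosts = list(reddit.subreddit("redditdev").top("day", limit=50))

submissions = []
for test_post, dev_post in zip(testPosts, redditdevPosts):
    submissions.append(test_post)
    submissions.append(dev_post)

Swapping the order of the two appends changes which subreddit comes first; zip also stops at the shorter list, which keeps the alternation strict.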
QUESTION
I'm trying to retrieve messages from Reddit subreddits using PRAW. It works properly in most cases, but I'm getting the following error that the message is too long. I'm using pyTelegramBotAPI.
Snippet:
...ANSWER
Answered 2021-Dec-03 at 18:38One Telegram message may contain no more than 4096 characters; anything longer has to be split into follow-up messages (the remainder). Add this code to your message_handler:
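A sketch of that splitting (the bot object and chat id come from the surrounding handler, which is not shown here):

MAX_LEN = 4096  # Telegram's per-message character limit

def send_long_message(bot, chat_id, text):
    # Send 4096-character chunks; the remainder goes into follow-up messages.
    for start in range(0, len(text), MAX_LEN):
        bot.send_message(chat_id, text[start:start + MAX_LEN])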
QUESTION
I'm trying to use the Reddit API via PRAW (https://praw.readthedocs.io/en/stable/) with Django, and am thinking of using the functools lru_cache decorator to implement some kind of caching, so I can cache the results of similar API calls and reduce the overall calls being made. I've never done anything like this, so I've been mainly following examples of implementing the @lru_cache decorator.
I have 3 files that are primarily involved with the API calls / display here. I have:
account.html
...ANSWER
Answered 2021-Oct-26 at 22:51PRAW returns generator objects that are lazily evaluated. You want to evaluate them inside your cached function. Otherwise, after the generator is exhausted, you can't get the results again.
So the working version should look like this:
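A sketch of the pattern (the function name, its arguments, and the module-level reddit instance are assumptions): force the lazy generator into a list inside the cached function, so the cache stores concrete results rather than a one-shot generator:

from functools import lru_cache

@lru_cache(maxsize=32)
def top_posts(subreddit_name, limit=10):
    # list() evaluates PRAW's lazy ListingGenerator here, so the cache
    # stores concrete results rather than an exhausted generator.
    return list(reddit.subreddit(subreddit_name).top(limit=limit))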
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install praw
You can use praw like any standard Python library. You will need a development environment consisting of a Python distribution including header files, a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid making changes to the system.
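For example, in a fresh virtual environment (praw is published on PyPI under that name):

python -m venv venv
source venv/bin/activate   # on Windows: venv\Scripts\activate
python -m pip install --upgrade pip setuptools wheel
pip install praw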
Support