pywikibot | Python library that interfaces with the MediaWiki API | REST library
kandi X-RAY | pywikibot Summary
A Python library that interfaces with the MediaWiki API. This is a mirror from gerrit.wikimedia.org. Do not submit any patches here. See https://www.mediawiki.org/wiki/Developer_account for contributing.
Top functions reviewed by kandi - BETA
- Implements the upload method.
- Finds claims.
- Processes disambiguation only.
- Replaces links in new pages.
- Edits a page.
- Moves the text to a specific category.
- Removes templates from the page.
- Checks if an image has duplicates.
- Submits the request.
- Cleans up links in text.
pywikibot Key Features
pywikibot Examples and Code Snippets
Community Discussions
Trending Discussions on pywikibot
QUESTION
I have been running a pywikibot on Marathi Wikipedia for almost a month now. The only task of this bot is find-and-replace. You can find overall details of pywikibot at: pywikibot. You can find the details of that particular find-and-replace operation at replace.py and fixes.py, and even further examples of fixes here.
The following is a part of my source code. When running the bot on Marathi Wikipedia, I am facing a difficulty because of the Marathi language's script. All of the replacements are going fine, but one is not. For illustration, I will use English words instead of Marathi.
The first part ("fix") of the following code searches for "{{PAGENAME}}" and replaces it with "{{subst:PAGENAME}}". The msg parameter is the edit summary.
The second fix, "man", finds "man" and replaces it with "gent". The problem is that it is also replacing "human" with "hugent", "craftsmanship" with "craftsgentship", and so on.
...ANSWER
Answered 2022-Apr-04 at 04:51
You want occurrences of 'man', but only by itself - in other words, only if it is not preceded or followed by other letters or symbols that would be part of a word.
I don't know if Marathi contains symbols like '-' that could be part of a word, for example 'He was a real man-child', in which case you may or may not want to replace it.
In English, since you're using regex, you can do this:
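The answer's code snippet was not captured here. A minimal sketch of the idea using `\b` word boundaries (in Python 3, `\w` and `\b` are Unicode-aware by default, so the same anchors also work for Devanagari text):

```python
import re

text = "A man, a human, and craftsmanship."

# \b matches a word boundary, so 'man' is replaced only when it stands
# alone, not inside 'human' or 'craftsmanship'.
result = re.sub(r"\bman\b", "gent", text)
print(result)  # A gent, a human, and craftsmanship.
```

In a replace.py fixes dictionary, the replacement pair would then be written as `(r'\bman\b', 'gent')` rather than `('man', 'gent')`.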
QUESTION
In my GitHub Actions unit test I am running some pywikibot code (pywikibot 6.6.3) that sometimes fails because the site is not responsive or is misconfigured. The log report used to show the error messages after a few minutes.
Now the code runs for some two hours or more with hints such as:
...ANSWER
Answered 2021-Dec-18 at 17:27
The style of setting the variables is:
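The answer's snippet was not captured. As a sketch, pywikibot reads these settings from user-config.py (or they can be assigned on pywikibot.config before connecting); `max_retries`, `retry_wait`, and `retry_max` are the config names I believe control retry behavior, but verify them against your pywikibot version:

```python
# user-config.py (pywikibot configuration file) -- a sketch, not the
# originally posted answer.
max_retries = 3    # give up after 3 failed API attempts instead of retrying for hours
retry_wait = 5     # seconds to wait before the first retry
retry_max = 60     # cap on the exponentially growing wait between retries
```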
QUESTION
I am trying to extract links from the summary section of a Wikipedia page. I tried the methods below.
This URL extracts all the links of the Deep learning page:
https://en.wikipedia.org/w/api.php?action=query&prop=links&titles=Deep%20learning
For extracting the links associated with any individual section, I can filter based on the section id. For example, for the Definition section of the same page I can use this URL: https://en.wikipedia.org/w/api.php?action=parse&prop=links&page=Deep%20learning&section=1
For the Overview section of the same page I can use this URL: https://en.wikipedia.org/w/api.php?action=parse&prop=links&page=Deep%20learning&section=2
But I am unable to figure out how to extract only the links from the summary section.
I even tried using pywikibot to extract linked pages and adjusting the plnamespace variable, but couldn't get links for only the summary section.
ANSWER
Answered 2021-Jun-04 at 13:32
You need to use https://en.wikipedia.org/w/api.php?action=parse&prop=links&page=Deep%20learning&section=0
Note that this also includes links in the {{machine learning bar}} and {{Artificial intelligence|Approaches}} templates (shown at the right of the screen), however.
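As a small aside, building the request with urllib.parse instead of hand-concatenating the query string avoids the kind of `&sect`-entity mangling seen in the question. A sketch (the lead "summary" section is section 0):

```python
from urllib.parse import urlencode

# Build the parse-API request for the lead ("summary") section, which
# the MediaWiki parse API addresses as section 0.
params = {
    "action": "parse",
    "prop": "links",
    "page": "Deep learning",
    "section": 0,
    "format": "json",
}
url = "https://en.wikipedia.org/w/api.php?" + urlencode(params)
print(url)
```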
QUESTION
I have a list of strings called cities, where each string is a city name that is also the title of a Wikipedia page. For each city, I'm getting the Wikipedia page and then looking at its text content:
...ANSWER
Answered 2021-Feb-11 at 06:29
re.sub("'", "\'", city) does not do anything:
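The answer's demonstration was not captured. The point is that in a regular string literal "\'" is simply "'", so the pattern and the replacement are the same character; a sketch of the no-op and of an actual escape:

```python
import re

city = "O'Fallon"

# "\'" is not an escape that survives into the string: Python reads it
# as a plain apostrophe, so pattern and replacement are identical.
assert "\'" == "'"
assert re.sub("'", "\'", city) == city  # nothing changes

# To actually insert a backslash before the apostrophe, write it explicitly:
escaped = city.replace("'", "\\'")
print(escaped)  # O\'Fallon
```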
QUESTION
I am trying to get a list of all of Kurt Cobain's quotes from the MediaWiki API. I have:
https://en.wikiquote.org/w/api.php?format=json&action=query&srsearch=Kurt+Cobain&list=search
But it doesn't seem to give me any of his quotes as shown here... nor does it provide a format that is easy to parse.
How do I get a list of all of his quotes using the API? If possible, I would also like to include the source - e.g. From an interview on MTV with Zeca Camargo, 1993-01-21, Rio de Janeiro, Brazil.
Would prefer the API directly but an answer with pywikibot is also good.
...ANSWER
Answered 2020-Nov-02 at 17:10
There is no structured data, such as templates, from which to get the quotes. All you can do is retrieve quotes from the plain wikitext via regex, something like:
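The regex itself was not captured. A sketch under the assumption that, as on typical wikiquote pages, quotes are top-level `*` bullets and their sources are `**` sub-bullets (the sample wikitext below is illustrative, not actual page content):

```python
import re

# Illustrative sample in the assumed wikiquote layout.
wikitext = """\
* The duty of youth is to challenge corruption.
** As quoted in an interview, 1993.
* Wanting to be someone else is a waste of the person you are.
"""

# Match lines starting with exactly one '*' (the (?!\*) lookahead
# rejects '**' source sub-bullets).
quotes = re.findall(r"^\*(?!\*)\s*(.+)$", wikitext, flags=re.MULTILINE)
for q in quotes:
    print(q)
```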
QUESTION
At Pywikibot's MediaWiki talk page this question was asked some two years ago already.
The answers there were along the lines of "you shouldn't", and that maxthrottle isn't the right parameter for that.
For intranet use cases the throttle is mostly counterproductive. Especially when testing the automation, the throttle kicks in no matter how low the number of API accesses is. So I'd rather switch it off or set it to a reasonable time of a few milliseconds instead of the default 10 seconds.
How can the throttle be set to a different time?
...ANSWER
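The answer body was not captured here. A plausible sketch, assuming pywikibot reads its throttle settings from user-config.py; `put_throttle` and `minthrottle` are the config names I believe apply, so verify them against your pywikibot version:

```python
# user-config.py (pywikibot configuration file) -- a sketch, not the
# originally posted answer. put_throttle is the delay between page saves.
put_throttle = 0.5   # seconds between write operations (default is 10)
minthrottle = 0      # lower bound used by the adaptive read throttle
```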
Answered 2020-Mar-27 at 11:47
QUESTION
The MediaWiki API has an edit function, which is available within pywikibot. According to https://doc.wikimedia.org/pywikibot/master/api_ref/pywikibot.site.html the function is called with a page parameter:
...ANSWER
Answered 2020-Mar-27 at 10:05
The following code works:
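The working code itself was not captured. A hypothetical sketch of the usual pattern: rather than calling the site's edit method directly, assign Page.text and call Page.save(), which performs the edit API call for you (assumes pywikibot is installed and the site is logged in):

```python
def save_page(site, title, new_text, summary):
    """Hypothetical helper: overwrite a page's text and save it.

    Page.save() drives the site's edit endpoint, so there is rarely a
    need to call the lower-level APISite edit method yourself.
    """
    import pywikibot  # imported lazily so this sketch stays self-contained
    page = pywikibot.Page(site, title)
    page.text = new_text
    page.save(summary=summary)
```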
QUESTION
I am using the following code to get the backlinks of a page on Wikipedia.
...ANSWER
Answered 2020-Mar-24 at 16:15
By default, .backlinks() includes backlinks of redirected pages.
While this is sometimes a desired feature, it causes the error in your case.
"Dibenzocycloheptene" is a backlink of "Cyproheptadine", but "Dibenzocycloheptene" is also a redirect to "Dibenzosuberane" which is again a redirect to "Dibenzocycloheptene". This is a circle and thus pywikibot throws an error.
You can solve this problem by setting .backlinks(follow_redirects=False)
. Then backlinks of redirects will not be included in your list.
As circular redirects are quite rare, you could also solve this problem at the source: go to Wikipedia and cut the circle by removing the redirect link on "Dibenzocycloheptene".
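A hypothetical helper showing the fix in context (assumes pywikibot is installed; the function name is mine, not part of the library):

```python
def direct_backlinks(title):
    """Hypothetical helper: list backlinks without following redirects.

    follow_redirects=False skips backlinks-of-redirects, which avoids
    the infinite loop caused by circular redirects like the one above.
    """
    import pywikibot  # imported lazily so this sketch stays self-contained
    site = pywikibot.Site("en", "wikipedia")
    page = pywikibot.Page(site, title)
    return [p.title() for p in page.backlinks(follow_redirects=False)]
```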
QUESTION
I am looking to parse a Wikipedia talk page (e.g., https://en.wikipedia.org/wiki/Talk:Elon_Musk). I would like to loop through the texts by contributor/editor, but I am not sure how to do it. For now, I have the following code:
...ANSWER
Answered 2020-Mar-09 at 10:34
I don't know about pywikibot, but you can do this via the normal API. This will fetch the revisions: https://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Talk:Elon%20Musk&rvlimit=500&rvprop=timestamp|user|comment|ids
Then you can pass the revision ids to get the change in each edit, e.g. https://en.wikipedia.org/w/api.php?action=compare&fromrev=944235185&torev=944237256
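The two requests above can be assembled programmatically, which keeps the pipe-separated rvprop list and the revision ids out of hand-built query strings. A sketch (no network call is made here):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

# Step 1: list revisions of the talk page (user, timestamp, comment, ids).
revisions_url = API + "?" + urlencode({
    "action": "query",
    "prop": "revisions",
    "titles": "Talk:Elon Musk",
    "rvlimit": 500,
    "rvprop": "timestamp|user|comment|ids",
    "format": "json",
})

# Step 2: diff two revision ids returned by step 1.
compare_url = API + "?" + urlencode({
    "action": "compare",
    "fromrev": 944235185,
    "torev": 944237256,
    "format": "json",
})
print(revisions_url)
print(compare_url)
```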
QUESTION
I am using pywikibot in Python to get all revisions of a Wikipedia page.
import pywikibot as pw
wikiPage='Narthaki'
page = pw.Page(pw.Site('en'), wikiPage)
revs = page.revisions(content=True)
How do I know which of the revisions were reverts? I see from https://xtools.wmflabs.org/articleinfo/en.wikipedia.org/Narthaki that the page has one revert edit. Not sure how to get more information about this from the revision object.
Request your help. Many thanks!
...ANSWER
Answered 2020-Mar-02 at 05:20
You can compare the text of revisions directly, or look for revisions that have the same sha1 hash:
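The answer's snippet was not captured. A self-contained sketch of the sha1 approach: a revision whose sha1 matches an earlier revision (very likely) restored that earlier state, i.e. it is a revert. With pywikibot, the sha1 of each item from page.revisions() is, I believe, available as rev.sha1; the sample history below is made up:

```python
def find_reverts(revisions):
    """Return (revert_id, restored_id) pairs from (revid, sha1) history.

    revisions: list of (revid, sha1) pairs in chronological order.
    A revision reusing a previously seen sha1 restores that earlier
    page state, so we flag it as a revert to that revision.
    """
    seen = {}       # sha1 -> revid that first produced this content
    reverts = []
    for revid, sha1 in revisions:
        if sha1 in seen:
            reverts.append((revid, seen[sha1]))
        else:
            seen[sha1] = revid
    return reverts

# Made-up history: revision 3 restores the content of revision 1.
history = [(1, "aaa"), (2, "bbb"), (3, "aaa"), (4, "ccc")]
print(find_reverts(history))  # [(3, 1)]
```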
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install pywikibot
You can use pywikibot like any standard Python library. You will need a development environment consisting of a Python distribution including header files, a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.