Popular New Releases in Bilibili
Bilibili-Evolved
v2.1.7
DisableWinTracking
Bili.Uwp
四月更新 Pro Max (April Update Pro Max)
bilibili-helper-o
Bilibili Helper v1.2.30
BILIBILI-HELPER
BILIBILI-HELPER-v1.0.3
Popular Libraries in Bilibili
by the1812 typescript
12269 MIT
A powerful Bilibili enhancement script
by talkgo go
10226 MIT
Weekly Go Online Meetup via Bilibili | Go Night Reading | Go-related technical topics shared through bilibili live streams, with daily discussion of programming topics on WeChat/Telegram/Slack.
by 10se1ucgo python
5089 LGPL-3.0
Uses some known methods that attempt to minimize tracking in Windows 10
by Richasy csharp
4927 MIT
A Bilibili client adapted to the new system UI
by HotBitmapGG java
4405 WTFPL
An unofficial bilibili client for android http://www.jianshu.com/p/f69a55b94c05 -- this project is no longer maintained!
by wbt5 python
4015 GPL-2.0
Fetches the real streaming URLs (live sources) and danmaku from 58 live-streaming platforms, including Douyu, Huya, Bilibili, Douyin, and Kuaishou; the live sources can be played in players such as PotPlayer and flv.js.
by bilibili-helper javascript
3547 MPL-2.0
A helper tool for Bilibili (bilibili.com) that can replace the player, push notifications, and perform some quick actions.
by JunzhouLiu java
3343 MIT
An automatic check-in and coin-dropping tool for Bilibili that easily earns 65 experience points per day; supports daily automatic coin-dropping, exchanging silver melon seeds for coins, claiming VIP member benefits, and month-end VIP self-recharge. Come and reach Lv6 with me!
by RayWangQvQ csharp
2887 MIT
A Bilibili task tool written in .NET 5 that runs daily tasks automatically: daily login, watching, sharing, and dropping coins on videos to earn the full daily-task experience and level up to Level 6 with ease; scheduled collection of VIP member benefits and automatic month-end self-recharge; and features such as Tianxuan Moment (天选时刻) lottery draws.
Trending New libraries in Bilibili
by Richasy csharp
4927 MIT
A Bilibili client adapted to the new system UI
by JunzhouLiu java
3343 MIT
An automatic check-in and coin-dropping tool for Bilibili that easily earns 65 experience points per day; supports daily automatic coin-dropping, exchanging silver melon seeds for coins, claiming VIP member benefits, and month-end VIP self-recharge. Come and reach Lv6 with me!
by RayWangQvQ csharp
2887 MIT
A Bilibili task tool written in .NET 5 that runs daily tasks automatically: daily login, watching, sharing, and dropping coins on videos to earn the full daily-task experience and level up to Level 6 with ease; scheduled collection of VIP member benefits and automatic month-end self-recharge; and features such as Tianxuan Moment (天选时刻) lottery draws.
by nilaoda csharp
2368 MIT
Bilibili Downloader. A command-line Bilibili downloader.
by srcrs java
1896 Apache-2.0
Automatically completes Bilibili daily tasks: dropping coins, liking, live-stream check-in, automatically exchanging silver melon seeds for coins, sending out gifts that are about to expire, and manga app check-in.
by 19PDP c++
1422
Course videos, slides, and source code: Hou Jie's C++ series; NTU Kuo Yen-Fu's MATLAB.
by MoyuScript python
1241 GPL-3.0
An API wrapper module for Bilibili.
by HaujetZhao python
1127 MPL-2.0
Your most handy video processing software
by happy888888 python
1106 NOASSERTION
A Bilibili helper: 1. daily coin-dropping, watching, and sharing of videos (to reach level 6 quickly), check-ins (live + manga), dynamic lotteries, discipline-committee voting, live-stream idling (small hearts), Tianxuan Moment, and other daily operations (cloud functions + Actions + Docker, multi-account). 2. A downloader (CLI) for manga, videos, bangumi, and music. 3. A Python implementation of uploading videos, articles, music, and dynamics.
Top Authors in Bilibili
1. 5 Libraries (481 stars)
2. 5 Libraries (1095 stars)
3. 5 Libraries (1187 stars)
4. 4 Libraries (43 stars)
5. 4 Libraries (1023 stars)
6. 4 Libraries (99 stars)
7. 4 Libraries (236 stars)
8. 4 Libraries (493 stars)
9. 4 Libraries (154 stars)
10. 3 Libraries (145 stars)
Trending Kits in Bilibili
No Trending Kits are available at this moment for Bilibili
Trending Discussions on Bilibili
How to insert JSON containing escape codes into a JSONB column in PostgreSQL using GORM
Required item not in soup object - BeautifulSoup Python
How to get each one of the objects from an array which includes three objects?
Using wininet to download deflate XML on Windows MSVC, but gets broken data
Merge one audio file and one image file to create a video with ffmpeg
Web scraping multiple pages in python and writing it into a csv file
Why can youtube-dl only download the first episode of a video on bilibili?
QUESTION
How to insert JSON containing escape codes into a JSONB column in PostgreSQL using GORM
Asked 2021-Jul-08 at 18:16
I'm trying to store JSON bytes in PostgreSQL, but there's a problem: \u0000 cannot be converted to text. As you can see below, the JSON contains escape sequences such as \u0000, which PostgreSQL seems to interpret as Unicode characters rather than plain JSON string content.
err := raws.SaveRawData(data, url)
// if there is "\u0000" in the bytes
if err.Error() == "ERROR: unsupported Unicode escape sequence (SQLSTATE 22P05)" {
    // try to remove \u0000, but it does not work
    data = bytes.Trim(data, "\u0000")
    e := raws.SaveRawData(data, url) // save the data again
    if e != nil {
        return e // returns the same error
    }
    return nil
}
The original API data can be accessed here; it contains \u0000:
{
  "code": 0,
  "message": "0",
  "ttl": 1,
  "data": {
    "bvid": "BV1jb411C7m3",
    "aid": 42443484,
    "videos": 1,
    "tid": 172,
    "tname": "手机游戏",
    "copyright": 1,
    "pic": "http://i0.hdslb.com/bfs/archive/c76ee4798bf2ba0efc8449bcb3577d508321c6c5.jpg",
    "title": "冰塔:我连你的大招都敢硬抗,所以告诉我谁才是生物女王?!单s冰塔怒砍档案女王巴德尔,谁,才是生物一姐?(手动滑稽)",
    "pubdate": 1549100438,
    "ctime": 1549100438,
    "desc": "bgm:逮虾户\n今天先水一期冰塔的,明天再水\\u0000绿塔的,后天就可以下红莲啦,计划通嘿嘿嘿(º﹃º )",
    "desc_v2": [
      {
        "raw_text": "bgm:逮虾户\n今天先水一期冰塔的,明天再水\\u0000绿塔的,后天就可以下红莲啦,计划通嘿嘿嘿(º﹃º )",
        "type": 1,
        "biz_id": 0
      }
    ],
    "state": 0,
    "duration": 265,
    "rights": {
      "bp": 0,
      "elec": 0,
      "download": 1,
      "movie": 0,
      "pay": 0,
      "hd5": 0,
      "no_reprint": 1,
      "autoplay": 1,
      "ugc_pay": 0,
      "is_cooperation": 0,
      "ugc_pay_preview": 0,
      "no_background": 0,
      "clean_mode": 0,
      "is_stein_gate": 0
    },
    "owner": {
      "mid": 39699039,
      "name": "明眸-雅望",
      "face": "http://i0.hdslb.com/bfs/face/240f74f8706955119575ea6c6cb1d31892f93800.jpg"
    },
    "stat": {
      "aid": 42443484,
      "view": 1107,
      "danmaku": 7,
      "reply": 22,
      "favorite": 5,
      "coin": 4,
      "share": 0,
      "now_rank": 0,
      "his_rank": 0,
      "like": 10,
      "dislike": 0,
      "evaluation": "",
      "argue_msg": ""
    },
    "dynamic": "#崩坏3#",
    "cid": 74479750,
    "dimension": {
      "width": 1280,
      "height": 720,
      "rotate": 0
    },
    "no_cache": false,
    "pages": [
      {
        "cid": 74479750,
        "page": 1,
        "from": "vupload",
        "part": "冰塔:我连你的大招都敢硬抗,所以告诉我谁才是生物女王?!单s冰塔怒砍档案女王巴德尔,谁,才是生物一姐?(手动滑稽)",
        "duration": 265,
        "vid": "",
        "weblink": "",
        "dimension": {
          "width": 1280,
          "height": 720,
          "rotate": 0
        }
      }
    ],
    "subtitle": {
      "allow_submit": false,
      "list": []
    },
    "user_garb": {
      "url_image_ani_cut": ""
    }
  }
}
The struct used for saving is:
type RawJSONData struct {
    ID        uint64         `gorm:"primarykey" json:"id"`
    CreatedAt time.Time      `json:"-"`
    DeletedAt gorm.DeletedAt `json:"-" gorm:"index"`
    Data      datatypes.JSON `json:"data"`
    URL       string         `gorm:"index" json:"url"`
}
datatypes.JSON is from gorm.io/datatypes. It seems to be just json.RawMessage, which is (an extension of?) a []byte. I use PostgreSQL's JSONB type to store this data.
Table:
create table raw_json_data
(
    id         bigserial not null constraint raw_json_data_pke primary key,
    created_at timestamp with time zone,
    deleted_at timestamp with time zone,
    data       jsonb,
    url        text
);
ANSWER
Answered 2021-Jul-08 at 16:05
Well, I solved it by:
str := string(data)
str = strings.ReplaceAll(str, `\u0000`, "")
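Put together, a minimal sketch of how that fix slots into the save path; this assumes SaveRawData wraps a GORM Create call, and the helper name sanitizeJSON is illustrative rather than part of the original code:

package raws

import "strings"

// sanitizeJSON strips the literal six-character escape sequence \u0000 from a
// raw JSON payload. PostgreSQL's jsonb rejects that escape with "unsupported
// Unicode escape sequence (SQLSTATE 22P05)" because jsonb cannot represent a
// NUL character. The earlier bytes.Trim attempt could not work: Trim only
// removes bytes from the two ends of a slice, never from the middle.
func sanitizeJSON(data []byte) []byte {
    return []byte(strings.ReplaceAll(string(data), `\u0000`, ""))
}

The call site then becomes SaveRawData(sanitizeJSON(data), url), and the retry-on-error branch from the question is no longer needed.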
QUESTION
Required item not in soup object - BeautifulSoup Python
Asked 2021-Jun-13 at 07:24
So I want to extract "bilibili-player-video-info-people-number" from this link: https://www.bilibili.com/video/BV1a44y167wK. When I create my BeautifulSoup object and search it, this class is not there. Is it due to the parser? I tried lxml and html5lib, but neither did any better.
<span class="bilibili-player-video-info-people-number">585</span>
That's the full element that I want to extract - the number updates every minute to show how many people are viewing currently.
import time
from bs4 import BeautifulSoup
from selenium import webdriver
import re
import html5lib

driver = webdriver.Chrome(r'C:\Users\Rob\Downloads\chromedriver.exe')

driver.get('https://www.bilibili.com/video/BV1a44y167wK')

content = driver.page_source.encode('utf-8').strip()
soup = BeautifulSoup(content, 'html5lib')

viewers = soup.findAll('span', class_='bilibili-player-video-info-people-text')

print(viewers[0])
print(viewers[0]) returns an out-of-range error, as there is nothing in the viewers list.
Thank you!
ANSWER
Answered 2021-Jun-13 at 07:24
Almost the entire site is rendered by JavaScript, so bs4 is useless unless the element you want is in the requested HTML. In your case, it's not.
However, there's an API endpoint that you can query that carries this data (and much more).
With a bit of regex and requests you can get the online count (of viewers).
Here's how:
import re

import requests

with requests.Session() as connection:
    page_url = "https://www.bilibili.com/video/BV1a44y167wK"
    page = connection.get(page_url).text
    cid = re.search(r"cid\":(\d+),\"page", page).group(1)
    aid = re.search(r"aid\":(\d+),", page).group(1)
    url = f"https://api.bilibili.com/x/player/v2?cid={cid}&aid={aid}&bvid={page_url.rsplit('/', 1)[-1]}"
    print(connection.get(url).json()["data"]["online_count"])
Output (note: it might change, as viewers come and go):
562
QUESTION
How to get each one of the objects from an array which includes three objects?
Asked 2020-Dec-11 at 05:19
I put three arrays in an object, and I want to get each of them in order to set values. I tried Object.keys(obj).forEach(function(key){ to traverse it; it gets the values of the object, but I cannot get at the third level of items. I also tried const iterate = (obj), but it does not work well. Can the iterate function get each item and then set its value, or should .forEach be used to get it?
var iconsData = {
    iconArraya: [
        {
            name: 'bilibili',
            image: 'https://i.ibb.co/R9HMTyR/1-5.png',
            host: ['www.bilibili.com'],
            popup: function (text) {
                open('https://search.bilibili.com/live?keyword=' + encodeURIComponent(text));
            }
        }
    ],
    iconArrayb: [
        {
            name: 'open',
            image: 'https://i.ibb.co/R9HMTyR/1-5.png',
            host: [''],
            popup: function (text) {
                if (text.indexOf("http://") == 0 || text.indexOf("https://") == 0)
                    window.open(text, "_blank");
                else window.open("http://" + text, "_blank");
            }
        }
    ],
    iconArrayc: [
        {
            name: 'copy',
            image: 'https://i.ibb.co/R9HMTyR/1-5.png',
            host: [''],
            popup: function (text) {
                if (text.indexOf("http://") == 0 || text.indexOf("https://") == 0)
                    window.open(text, "_blank");
                else window.open("http://" + text, "_blank");
            }
        }
    ],

    hostCustomMap: {}
}

Object.keys(iconsData).forEach((key, index) => {
    Object.keys(iconsData[key]).forEach((keya, index) => {
        iconsData[key][keya].host.forEach(function (host) { // The console shows an Error
            iconsData.hostCustomMap[host] = iconsData[key][keya].custom;
        });
    });
});

var text = GM_getValue('search');
if (text && window.location.host in iconsData.hostCustomMap) {
    keyword.beforeCustom(iconsData.hostCustomMap[window.location.host]);
}
var iconArray =
{
    icona: document.createElement('div'),
    iconb: document.createElement('div'),
    iconc: document.createElement('div')
}

Object.keys(iconsData).forEach((key, indexa) => {
    Object.keys(iconsData[key]).forEach((keya, indexb) => {
        Object.keys(iconsData[key][keya]).forEach(function (obj) {
            var img = document.createElement('img');
            img.setAttribute('src', obj.image);
            img.setAttribute('alt', obj.name);
            img.setAttribute('title', obj.name);
            img.addEventListener('mouseup', function () {
                keyword.beforePopup(obj.popup);
            });
            img.setAttribute('style', '' +
                'cursor:pointer!important;' +
                'display:inline-block!important;' +
                'width:22px!important;' +
                'height:22px!important;' +
                'border:0!important;' +
                'background-color:rgba(255,255,255,1)!important;' +
                'padding:0!important;' +
                'margin:0!important;' +
                'margin-right:5px!important;' +
                '');
            Object.keys(iconArray).forEach((keyb, indexc) => {
                if (indexc = indexa) {
                    iconArray[keyb].appendChild(img);
                    console.log(indexc, indexa)
                }
            });
        });
    });
});

Object.getOwnPropertyNames(iconArray).forEach(function (key) {
    iconArray[key].setAttribute('style', '' +
        'display:none!important;' +
        'position:absolute!important;' +
        'padding:0!important;' +
        'margin:0!important;' +
        'font-size:13px!important;' +
        'text-align:left!important;' +
        'border:0!important;' +
        'background:transparent!important;' +
        'z-index:2147483647!important;' +
        '');
});

Object.getOwnPropertyNames(iconArray).forEach(function (key) {
    document.documentElement.appendChild(iconArray[key]);
});

document.addEventListener('mousedown', function (e) {
    if (e.target == iconArray || (e.target.parentNode && e.target.parentNode == iconArray)) {
        e.preventDefault();
    }
});

document.addEventListener("selectionchange", function () {
    if (!window.getSelection().toString().trim()) {
        iconArray.icona.style.display = 'none';
        iconArray.iconb.style.display = 'none';
        iconArray.iconc.style.display = 'none';
    }
});

document.addEventListener('mouseup', function (e) {
    if (e.target == iconArray || (e.target.parentNode && e.target.parentNode == iconArray)) {
        e.preventDefault();
        return;
    }
    var text = window.getSelection().toString().trim();
    var url = text.match(/(https?:\/\/(\w[\w-]*\.)+[A-Za-z]{2,4}(?!\w)(:\d+)?(\/([\x21-\x7e]*[\w\/=])?)?|(\w[\w-]*\.)+(com|cn|org|net|info|tv|cc|gov|edu)(?!\w)(:\d+)?(\/([\x21-\x7e]*[\w\/=])?)?)/i);
    if (url && iconArray.iconb.style.display == 'none') {
        iconArray.iconb.style.top = e.pageY + 40 + 'px';
        if (e.pageX - 70 < 10)
            iconArray.iconb.style.left = '10px';
        else
            iconArray.iconb.style.left = e.pageX - 70 + 'px';
        iconArray.iconb.style.display = 'block';
    } else if (text.length >= 30) {
        iconArray.iconc.style.top = e.pageY + 40 + 'px';
        if (e.pageX - 70 < 10)
            iconArray.iconc.style.left = '10px';
        else
            iconArray.iconc.style.left = e.pageX - 70 + 'px';
        iconArray.iconc.style.display = 'block';
    } else if (!text) {
        iconArray.icona.style.display = 'none';
        iconArray.iconb.style.display = 'none';
        iconArray.iconc.style.display = 'none';
    } else if (text && iconArray.icona.style.display == 'none') {
        iconArray.icona.style.top = e.pageY + 40 + 'px';
        if (e.pageX - 70 < 10)
            iconArray.icona.style.left = '10px';
        else
            iconArray.icona.style.left = e.pageX - 70 + 'px';
        iconArray.icona.style.display = 'block';
    }
});
ANSWER
Answered 2020-Dec-10 at 19:34
Object.keys(object) returns an array of the property names of the provided object, so in your case it will return [iconArraya, iconArrayb, iconArrayc]. If you want to do it this way, you'll have to reference the object: for each value in the array you need to do object[value]. By the way, this is called bracket notation (you can look it up if you're unfamiliar with it); it is the same as using a dot, like object.value. Here's how:
[iconArraya, iconArrayb, iconArrayc].forEach(property => {
    object[property];
})
You can also access the values by using Object.values(object). Here is how:
Object.values(object).forEach(value => {
    // here you don't have to reference the object; you have direct access to the value
})
Or use a for...in loop; since a for...in loop gives you the property name, you will also have to reference the object. Here's how:
for (property in object) {
    object[property];
    // you can do whatever you want with the values, even change them
}
QUESTION
Using wininet to download deflate XML on Windows MSVC, but gets broken data
Asked 2020-Jul-30 at 06:03
This code downloads the deflated XML document https://api.bilibili.com/x/v1/dm/list.so?oid=162677333 and saves it to temp.Z, which however seems broken. Why is that?
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <wininet.h>

#pragma comment(linker, "/entry:\"mainCRTStartup\"")
#pragma comment(lib, "wininet.lib")

char *download(char *link, int *size)
{
    int prealloc_size = 100000;
    char *buf = malloc(prealloc_size);
    DWORD num;
    HINTERNET hinet;
    HINTERNET hurl;
    *size = 0;

    hinet = InternetOpen("Microsoft Internet Explorer",
        INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, INTERNET_INVALID_PORT_NUMBER);
    hurl = InternetOpenUrl(hinet, link, NULL, 0, INTERNET_FLAG_NEED_FILE, 0);

    while (TRUE == InternetReadFile(hurl, buf + *size, 1024, &num) && num > 0)
    {
        *size += num;

        if (*size + 1024 > prealloc_size)
        {
            prealloc_size += prealloc_size / 2;
            buf = realloc(buf, prealloc_size);
        }
    }

    InternetCloseHandle(hurl);
    InternetCloseHandle(hinet);
    return buf;
}

int main(void)
{
    char *link = "https://api.bilibili.com/x/v1/dm/list.so?oid=162677333";
    FILE *f = fopen("temp.Z", "wb");
    int siz;
    char *dat = download(link, &siz);

    fwrite(dat, 1, siz, f);
    fclose(f);
    free(dat);

    return 0;
}
I tried Fiddler and it gets the same data; however, Fiddler can decode it and says it is deflate.
ANSWER
Answered 2020-Jul-30 at 06:03
It is something between deflate, zlib and gzip; I don't know exactly. But I can decode it now. Just use zlib, with inflateInit2(&strm, -MAX_WBITS) instead of inflateInit(&strm).
Yes, the data is actually fine. But why did I think it was broken? Because my archive manager doesn't decode it! Anyway, I need to call zlib on my own. I have suggested that the archive manager developers add this feature, which would be useful, no?
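For reference, a minimal decoding sketch under that assumption (the response body is raw, headerless deflate), using only standard zlib calls; the function name inflate_raw is made up for illustration, and you link against zlib yourself:

#include <stdlib.h>
#include <string.h>
#include <zlib.h>

/* Inflate a buffer of raw deflate data (no zlib/gzip header). Returns a
   malloc'd buffer and stores its length in *out_len, or NULL on error.
   Error handling and realloc checks are kept minimal for brevity. */
unsigned char *inflate_raw(const unsigned char *src, size_t src_len, size_t *out_len)
{
    z_stream strm;
    memset(&strm, 0, sizeof(strm));

    /* Negative window bits tell zlib to expect headerless (raw) deflate. */
    if (inflateInit2(&strm, -MAX_WBITS) != Z_OK)
        return NULL;

    size_t cap = src_len * 4 + 1024;
    unsigned char *out = malloc(cap);
    strm.next_in = (Bytef *)src;
    strm.avail_in = (uInt)src_len;

    int ret = Z_OK;
    while (ret != Z_STREAM_END)
    {
        if (strm.total_out + 4096 > cap)
        {
            cap += cap / 2; /* grow the output buffer as needed */
            out = realloc(out, cap);
        }
        strm.next_out = out + strm.total_out;
        strm.avail_out = (uInt)(cap - strm.total_out);

        ret = inflate(&strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END)
        {
            inflateEnd(&strm); /* bad or truncated stream */
            free(out);
            return NULL;
        }
    }

    *out_len = strm.total_out;
    inflateEnd(&strm);
    return out;
}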
QUESTION
Merge one audio file and one image file to create a video with ffmpeg
Asked 2020-Jul-24 at 17:51
I first tried:
ffmpeg -y -i image.jpg -i audio.mp3 -c:a copy output.mp4
but when I uploaded to video sharing websites (bilibili.com), it says "no video track", so I tried:
ffmpeg -r 1 -loop 1 -i image.jpg -i audio.mp3 -acodec copy -r 1 -shortest output.mp4
The file was successfully uploaded, but when I watched it on the website, the image disappeared and the video turned grey. I merged 6 videos and only one of them plays back normally. (This is the URL to the video: click here; if you can visit the website you'll see what I mean.)
What should I do?
ANSWER
Answered 2020-Jul-24 at 17:51
Problems with your command #2:
- Frame rate is too low. Most players are unable to play 1 fps. Use 10 fps or higher for output, or set the input -framerate to 10 fps or higher.
- Chroma subsampling. Most players can only play 4:2:0, so use the format filter to force it to 4:2:0.
- MP3 in MP4. Some players are unable to play MP3 in MP4. For highest compatibility use AAC. ffmpeg will choose AAC by default for MP4.
- Faststart (optional). Add -movflags +faststart so MP4 can begin playback faster.
Command:
ffmpeg -framerate 1 -loop 1 -i image.jpg -i audio.mp3 -vf format=yuv420p -r 10 -shortest -movflags +faststart output.mp4
QUESTION
Web scraping multiple pages in python and writing it into a csv file
Asked 2020-Jun-04 at 03:18
I am new to web scraping and I am trying to scrape all the video links from each page of this specific site and write them into a csv file. For starters I am trying to scrape the URLs from this site and go through all 19 pages. The problem I'm encountering is that the same 20 video links are being written 19 times (because I'm trying to go through all 19 pages), instead of getting (around) 19 distinct sets of URLs.
import requests
from bs4 import BeautifulSoup
from csv import writer

def make_soup(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    return soup

def scrape_url():
    for video in soup.find_all('a', class_='img-anchor'):
        link = video['href'].replace('//','')
        csv_writer.writerow([link])

with open("videoLinks.csv", 'w') as csv_file:
    csv_writer = writer(csv_file)
    header = ['URLS']
    csv_writer.writerow(header)

    url = 'https://search.bilibili.com/all?keyword=%E3%82%A2%E3%83%8B%E3%82%B2%E3%83%A9%EF%BC%81%E3%83%87%E3%82%A3%E3%83%89%E3%82%A5%E3%83%BC%E3%83%BC%E3%83%B3'
    soup = make_soup(url)

    lastButton = soup.find_all(class_='page-item last')
    lastPage = lastButton[0].text
    lastPage = int(lastPage)
    #print(lastPage)

    page = 1
    pageExtension = ''

    scrape_url()

    while page < lastPage:
        page = page + 1
        if page == 1:
            pageExtension = ''
        else:
            pageExtension = '&page='+str(page)
        #print(url+pageExtension)
        fullUrl = url+pageExtension
        make_soup(fullUrl)
        scrape_url()
Any help is much appreciated and I decided to code this specific way so that I can better generalize this throughout the BiliBili site.
A screenshot is linked below showing how the first link repeats a total of 19 times:
ANSWER
Answered 2020-Jun-04 at 03:16
Try
soup = make_soup(fullUrl)

in the last-but-one line, so that scrape_url() parses the newly fetched page instead of reusing the soup from page 1 (note the variable is fullUrl, not fullurl).
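A minimal sketch of the fixed loop, using the same names as the question's script (the key point is that scrape_url() reads the module-level soup, so soup must be rebound on every iteration):

page = 1
scrape_url()  # page 1 was already fetched above

while page < lastPage:
    page = page + 1
    fullUrl = url + '&page=' + str(page)
    soup = make_soup(fullUrl)  # rebind soup so scrape_url() sees the new page
    scrape_url()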
QUESTION
Why can youtube-dl only download the first episode of a video on bilibili?
Asked 2020-Mar-06 at 20:17
I can download the first episode of a series.
youtube-dl https://www.bilibili.com/video/av90163846?p=1
Now I want to download all episodes of the series.
for i in $(seq 1 55)
do
    youtube-dl https://www.bilibili.com/video/av90163846?p=$i
done
All episodes other than the first fail to download, and each produces the same error info, as below:
[BiliBili] 90163846: Downloading webpage
[BiliBili] 90163846: Downloading video info page
[download] 【合集300集全】地道美音 美国中小学教学 自然科学 社会常识-90163846.flv has already been downloaded
Please give it a try and check what happens; how can this be fixed?
@Christos Lytras, a strange thing happens with your code:
for i in $(seq 1 55)
do
    youtube-dl https://www.bilibili.com/video/av90163846?p=$i -o "%(title)s-%(id)s-$i.%(ext)s"
done
It can indeed download videos from bilibili, but all of the downloaded files have different names and the same content; every one is identical to the first episode. Give it a try and you will see.
ANSWER
Answered 2020-Mar-06 at 20:17
This error occurs because youtube-dl ignores URI parameters after ? for the filename, so the next file it tries to download has the same name as the previous one, and it fails because a file with that name already exists. The solution is to use the --output template filesystem option to set a filename that includes an index, using the variable i.
-o, --output TEMPLATE            Output filename template, see the "OUTPUT
                                 TEMPLATE" for all the info
The -o option allows users to indicate a template for the output file names. The basic usage is not to set any template arguments when downloading a single file, like in youtube-dl -o funny_video.flv "https://some/video". However, it may contain special sequences that will be replaced when downloading each video. The special sequences may be formatted according to python string formatting operations. For example, %(NAME)s or %(NAME)05d. To clarify, that is a percent symbol followed by a name in parentheses, followed by formatting operations. Allowed names along with sequence type are:
- id (string): Video identifier
- title (string): Video title
- url (string): Video URL
- ext (string): Video filename extension
...
For your case, to use the i in the output filename, you can use something like this:
for i in $(seq 1 55)
do
    youtube-dl https://www.bilibili.com/video/av90163846?p=$i -o "%(title)s-%(id)s-$i.%(ext)s"
done
which will use the title, the id, the i variable for indexing, and the ext for the video extension.
You can check the Output Template variables for more options defining the filename.
UPDATE
Apparently, bilibili.com has some JavaScript involved in setting up the video player and fetching the video files. There is no way for you to extract the whole playlist using youtube-dl. I suggest you use Annie, which supports Bilibili playlists out of the box. It has installers for all major operating systems, and you can use it like this to download the whole playlist:
annie -p https://www.bilibili.com/video/av90163846
Or, if you want to download only up to video 55, you can use the -end 55 CLI option like this:
annie -end 55 -p https://www.bilibili.com/video/av90163846
You can use the -start, -end or -items option to specify the download range of the list:
-start
        Playlist video to start at (default 1)
-end
        Playlist video to end at
-items
        Playlist video items to download. Separated by commas like: 1,5,6,8-10
For bilibili playlists only:
-eto
        File name of each bilibili episode doesn't include the playlist title
If you only want to get information about a playlist without downloading any files, use the -i command-line option like this:
annie -i -p https://www.bilibili.com/video/av90163846
will output something like this:
 Site:      哔哩哔哩 bilibili.com
 Title:     【合集300集全】地道美音 美国中小学教学 自然科学 社会常识 P1 【001】Parts of Plants
 Type:      video
 Streams:   # All available quality
 [64]  -------------------
     Quality:         高清 720P
     Size:            308.24 MiB (323215935 Bytes)
     # download with: annie -f 64 ...

 [32]  -------------------
     Quality:         清晰 480P
     Size:            201.57 MiB (211361230 Bytes)
     # download with: annie -f 32 ...

 [16]  -------------------
     Quality:         流畅 360P
     Size:            124.75 MiB (130809508 Bytes)
     # download with: annie -f 16 ...


 Site:      哔哩哔哩 bilibili.com
 Title:     【合集300集全】地道美音 美国中小学教学 自然科学 社会常识 P2 【002】Life Cycle of a Plant
 Type:      video
 Streams:   # All available quality
 [64]  -------------------
     Quality:         高清 720P
     Size:            227.75 MiB (238809781 Bytes)
     # download with: annie -f 64 ...

 [32]  -------------------
     Quality:         清晰 480P
     Size:            148.96 MiB (156191413 Bytes)
     # download with: annie -f 32 ...

 [16]  -------------------
     Quality:         流畅 360P
     Size:            94.82 MiB (99425641 Bytes)
     # download with: annie -f 16 ...
Community Discussions contain sources that include Stack Exchange Network
Tutorials and Learning Resources in Bilibili
Tutorials and Learning Resources are not available at this moment for Bilibili