Explore all Cloud Storage open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Cloud Storage

minio

Bugfix release

rclone

rclone v1.58.0

flysystem

3.0.0

formidable

3.1.4

s3fs-fuse

Release version 1.91

Popular Libraries in Cloud Storage

minio

by minio · Go

32602 stars · AGPL-3.0

High Performance, Kubernetes Native Object Storage

rclone

by rclone · Go

32354 stars · MIT

"rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Yandex Files

flysystem

by thephpleague · PHP

12433 stars · MIT

Abstraction for local and remote filesystems

boto

by boto · Python

6468 stars · NOASSERTION

For the latest version of boto, see https://github.com/boto/boto3 -- Python interface to Amazon Web Services

Dropbox-Uploader

by andreafabrizi · Shell

6264 stars · GPL-3.0

Dropbox Uploader is a BASH script which can be used to upload, download, list or delete files from Dropbox, an online file sharing, synchronization and backup service.

formidable

by node-formidable · JavaScript

6155 stars · MIT

The most used, flexible, fast and streaming parser for multipart form data. Supports uploading to serverless environments, AWS S3, Azure, GCP or the filesystem. Used in production.

s3fs-fuse

by s3fs-fuse · C++

5927 stars · GPL-2.0

FUSE-based file system backed by Amazon S3

aws-sdk-php

by aws · PHP

5535 stars · Apache-2.0

Official repository of the AWS SDK for PHP (@awsforphp)

filestash

by mickael-kerjean · Go

4846 stars · AGPL-3.0

🦄 A modern web client for SFTP, S3, FTP, WebDAV, Git, Minio, LDAP, CalDAV, CardDAV, Mysql, Backblaze, ...

Trending New libraries in Cloud Storage

Walrus

by Clivern · Go

435 stars · MIT

🔥 Fast, Secure and Reliable System Backup, Set up in Minutes.

s3viewer

by SharonBrizinov · Python

352 stars · MIT

Publicly Open Amazon AWS S3 Bucket Viewer

Gokapi

by Forceu · Go

245 stars · AGPL-3.0

Lightweight self-hosted Firefox Send alternative without public upload. AWS S3 supported.

arkb

by textury · TypeScript

171 stars · MIT

Deploy your files to the Arweave network.

amazon-s3-find-and-forget

by awslabs · Python

169 stars · Apache-2.0

Amazon S3 Find and Forget is a solution to handle data erasure requests from data lakes stored on Amazon S3, for example, pursuant to the European General Data Protection Regulation (GDPR)

festin

by cr0hn · Python

162 stars · BSD-3-Clause

FestIn - S3 Bucket Weakness Discovery

nestjs-multer-extended

by jeffminsungkim · TypeScript

152 stars · MIT

💪 Extended MulterModule for NestJS with flexible S3 upload and helpful features

oss-spring-boot-starter

by pig-mesh · Java

130 stars · NOASSERTION

A general-purpose file storage utility class compatible with the S3 protocol

rssfs

by dertuxmalwieder · Go

115 stars · MIT-0

The RSS file system (Git mirror)

Top Authors in Cloud Storage

1. aws-samples · 23 Libraries · 967 stars

2. awslabs · 18 Libraries · 2268 stars

3. GoogleCloudPlatform · 12 Libraries · 3655 stars

4. minio · 11 Libraries · 37645 stars

5. Files-com · 8 Libraries · 102 stars

6. dropbox · 7 Libraries · 1005 stars

7. beyondstorage · 6 Libraries · 362 stars

8. aos-dev · 5 Libraries · 111 stars

9. simalexan · 5 Libraries · 38 stars

10. Azure · 5 Libraries · 549 stars

Trending Kits in Cloud Storage

No Trending Kits are available at this moment for Cloud Storage

Trending Discussions on Cloud Storage

Upload a modified XML file to Google Cloud Storage after editing it with ElementTree (Python)

Google Cloud Function succeeds, but not showing expected output

Does Hub support integrations for MinIO, AWS, and GCP? If so, how does it work?

Compress & Upload large videos to Google cloud storage using Flutter/Dart

How to grant access to google cloud storage buckets based on users auth uid

How to delete a Cloud Storage file after it has been downloaded?

How to ensure access the right backend M3U8 file in origin cluster mode

Terraform Firebase Web Application

Access specific folder in GCS bucket according to user, using Workload Identity Federation

Debugging a Google Dataflow Streaming Job that does not work as expected

QUESTION

Upload a modified XML file to Google Cloud Storage after editing it with ElementTree (Python)

Asked 2022-Mar-25 at 17:35

I've modified a piece of code for merging two or more XML files into one. I got it working locally without using or storing files on Google Cloud Storage.

I'd like to use it via Cloud Functions, which seems to work mostly fine, apart from uploading the final XML file to Google Cloud Storage.

1import os
2import wget
3import logging
4
5from io import BytesIO
6from google.cloud import storage
7from xml.etree import ElementTree as ET
8
9def merge(event, context):
10    client = storage.Client()
11    bucket = client.get_bucket('mybucket')
12    test1 = bucket.blob("xml-file1.xml")
13    inputxml1 = test1.download_as_string()
14    root1 = ET.fromstring(inputxml1)
15    test2 = bucket.blob("xml-file2.xml")
16    inputxml2 = test2.download_as_string()
17    root2 = ET.fromstring(inputxml2)
18    copy_files = [e for e in root1.findall('./SHOPITEM')]
19    src_files = set([e.find('./SHOPITEM') for e in copy_files])
20    copy_files.extend([e for e in root2.findall('./SHOPITEM') if e.find('./CODE').text not in src_files])
21    files = ET.Element('SHOP')
22    files.extend(copy_files)
23    blob = bucket.blob("test.xml")
24    blob.upload_from_string(files)
25

I've tried the functions .write and .tostring, but without success.

ANSWER

Answered 2022-Mar-25 at 17:35

Sorry for the incomplete question. I've already found a solution, and I can't recall the error message I got. Here is my solution:

1import os
2import wget
3import logging
4
5from io import BytesIO
6from google.cloud import storage
7from xml.etree import ElementTree as ET
8
9def merge(event, context):
10    client = storage.Client()
11    bucket = client.get_bucket('mybucket')
12    test1 = bucket.blob("xml-file1.xml")
13    inputxml1 = test1.download_as_string()
14    root1 = ET.fromstring(inputxml1)
15    test2 = bucket.blob("xml-file2.xml")
16    inputxml2 = test2.download_as_string()
17    root2 = ET.fromstring(inputxml2)
18    copy_files = [e for e in root1.findall('./SHOPITEM')]
19    src_files = set([e.find('./SHOPITEM') for e in copy_files])
20    copy_files.extend([e for e in root2.findall('./SHOPITEM') if e.find('./CODE').text not in src_files])
21    files = ET.Element('SHOP')
22    files.extend(copy_files)
23    blob = bucket.blob("test.xml")
24    blob.upload_from_string(ET.tostring(files, encoding='UTF-8', xml_declaration=True, method='xml').decode('UTF-8'), content_type='application/xml')
25

Source https://stackoverflow.com/questions/71591801

QUESTION

Google Cloud Function succeeds, but not showing expected output

Asked 2022-Mar-23 at 02:20

I am testing out a Cloud Function and have things set up, but the output is not populating correctly (the output is not being saved into Cloud Storage and my print statements are not appearing). Here are my code and requirements below. I have set up the Cloud Function to run with an HTTP request trigger type, unauthenticated invocations, and a runtime service account that has write access to Cloud Storage. I have verified that I am calling the correct entry point.

logs

2022-03-22T18:52:02.749482564Z test-example vczj9p85h5m2 Function execution started
2022-03-22T18:52:04.148507183Z test-example vczj9p85h5m2 Function execution took 1399 ms.
Finished with status code: 200

main.py

import requests
from google.cloud import storage
import json

def upload_to_gsc(data):
    print("saving to cloud storage")
    client = storage.Client(project="my-project-id")
    bucket = client.bucket("my-bucket-name")
    blob = bucket.blob("subfolder/name_of_file")
    blob.upload_from_string(data)
    print("data uploaded to cloud storage")

def get_pokemon(request):
    url = "https://pokeapi.co/api/v2/pokemon?limit=100&offset=200"
    data = requests.get(url).json()
    output = [i.get("name") for i in data["results"]]
    data = json.dumps(output)
    upload_to_gsc(data=data)
    print("saved data!")

requirements.txt

google-cloud-storage
requests==2.26.0

ANSWER

Answered 2022-Mar-23 at 01:23

As @dko512 mentioned in comments, issue was resolved by recreating and redeploying the Cloud Function.

Posting the answer as community wiki for the benefit of the community that might encounter this use case in the future.

Feel free to edit this answer for additional information.

Source https://stackoverflow.com/questions/71577728

QUESTION

Does Hub support integrations for MinIO, AWS, and GCP? If so, how does it work?

Asked 2022-Mar-19 at 16:28

I was taking a look at Hub—the dataset format for AI—and noticed that hub integrates with GCP and AWS. I was wondering if it also supported integrations with MinIO.

I know that Hub allows you to directly stream datasets from cloud storage to ML workflows but I’m not sure which ML workflows it integrates with.

I would like to use MinIO over S3 since my team has a self-hosted MinIO instance (aka it's free).

ANSWER

Answered 2022-Mar-19 at 16:28

Hub allows you to load data from anywhere. Hub works locally, on Google Cloud, MinIO, AWS as well as Activeloop storage (no servers needed!). So, it allows you to load data and directly stream datasets from cloud storage to ML workflows.

You can find more information about storage authentication in the Hub docs.

Then, Hub allows you to stream data to PyTorch or TensorFlow with simple dataset integrations as if the data were local since you can connect Hub datasets to ML frameworks.
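
For reference, here is a minimal sketch of what that wiring can look like with a self-hosted MinIO endpoint. This is not from the answer: the dataset path, access keys, and endpoint URL are placeholders, and the exact `creds` keys accepted by `hub.load` should be checked against the Hub docs for your version.

import hub  # Activeloop Hub (2.x)

# Point Hub at a MinIO bucket by handing it the S3-compatible endpoint of the
# MinIO instance; everything below is illustrative.
ds = hub.load(
    "s3://my-bucket/my-dataset",
    creds={
        "aws_access_key_id": "minio-access-key",
        "aws_secret_access_key": "minio-secret-key",
        "endpoint_url": "http://minio.internal:9000",
    },
)

# Stream the dataset straight into a PyTorch DataLoader; nothing is downloaded
# up front.
dataloader = ds.pytorch(num_workers=2, batch_size=16)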

Source https://stackoverflow.com/questions/71539946

QUESTION

Compress & Upload large videos to Google cloud storage using Flutter/Dart

Asked 2022-Feb-20 at 03:41

There are a couple of notable packages on pub.dev that offer video compression, namely video compress and light compressor. I've tried them, and other sketchy packages, and none work well once a video gets to around 300MB: they crash or have other issues on various platforms and hardware. The GitHub commits and support are concerning as well for the video compression packages I've seen on pub.dev, with PRs not being pulled in and issues, some quite serious for recent Android APK updates, not being resolved in a timely manner. So not something I want in my dependency stack.

I am uploading to Google Cloud Storage using FlutterFire. While my code does upload using a FirebaseStorage upload task, it does not have any ability to compress on the client side or handle background uploading when the app is closed.

So, currently on the server side, I have a GCF that triggers when a file is uploaded. Then I use Node.js ffmpeg, which is baked into GCFs, to compress server-side and convert to H264. Finally, I delete the original large uploaded video and save the compressed video to storage.

This solution works, but depending on a user's connection and whether they are on wifi, can take an awful long time and when it fails or the user closes the app, my current solution is useless.

I wish there was a solid native library on Android and iOS, that I could tap into, to confidently perform compression and conversion from any format to H264 and also allow uploading, whether my app is closed or in the background, to GC storage. Any thoughts? I wish this was standard in FlutterFire's cloud storage handling!

I have yet to test flutter_ffmpeg, but only because some have said it runs so slowly on client. So again, Flutter/Dart can access natively written code, but I don't know where to start on Android/iOS to do this the right way. And I understand this is what some of the packages are doing, but they do not work with large videos, so I'm hoping someone can point me in the right direction on Android and iOS.

My code for handling upload tasks to GC storage.

Future<void> uploadTask({
  required WidgetRef ref,
  required File file,
  required String objectPath,
  SettableMetadata? metaData,
  bool deleteAfterUpload = false,
  bool displayProgressNotification = false,
}) async {
  String filePath = file.path;
  filename = basename(file.path);

  /// Remove any instances of '//' from the path.
  final String path = '$storagePath/$objectPath'.replaceAll('//', '/');
  UploadTask task = storage.ref(path).putFile(
        File(filePath),
        metaData,
      );

  /// Store UploadTask in StateNotifierProvider to monitor progress.
  ref.read(uploadingStateProvider.notifier).myUploadTask(task);
}

ANSWER

Answered 2022-Feb-20 at 03:41

I did resolve, to some degree, my original post's questions and frustrations by using the ffmpeg_kit_flutter_full_gpl package on the client side, and then ffmpeg again in GCF on the server side. In summary:

  • Within 60 seconds, I can now compress a 2 minute video by 90% before uploading to firebase storage.
  • Using onFinalize via GCF on the server side I run ffmpeg again on the uploaded video and gain another 77% reduction in file size on the server side without any loss in video quality.
  • My solution does not yet upload while the app is closed.
  • On the client side, this solution requires setting the camera ResolutionPreset to high (720p), rather than max, which can be a minimum of 1080p, and setting the ffmpeg -preset veryfast rather than the medium default.

Camera & ffmpeg solution settings:

Transcoding results stats for 2 minute video:

  • Before transcode: 255MB
  • After client side transcode: 25MB (90% decrease in size before upload)
  • Time to transcode: 60 seconds
  • onFinalized GCF ffmpeg transcode: 19MB (77% reduction in size)
  • In total, a 93% reduction in size while keeping high-quality 720p video.

flutter_ffmpeg is archived, the new ffmpeg flutter package is ffmpeg_kit_flutter.

That being said, I used ffmpeg_kit_flutter to build my solution on the client side, rather than the server side, and transcode the video before uploading.

Cons:

  1. Doubled my app size to use ffmpeg, because I needed access to both lame and x264 so I had to install the full-gpl package to gain access to these libraries.
  2. A two minute video can take up to 60 seconds to transcode.

The pros:

  1. Low bandwidth connections will operate much better after a video is reduced in size by 90%.
  2. Large videos will transcode and ffmpegkit does not crash like other flutter packages I've tried.
  3. The second pass with ffmpeg on GCF gains another 77% reduction in size, taking a video of hundreds of MBs down to just 10-20 MB max for eventual delivery.
  4. Costs lower on the front and back end.

So, you'll have to decide if the pros outweigh the cons and if 720p is high enough quality for playback. For me, 720p looks perfect for video playback on a mobile phone, and 1080p or higher was big-time overkill.

I've provided sample code (not full classes) to give anyone looking to implement my solution a try. It became very important, due to the amount of time to transcode, to display a progress meter so the user does not give up on the process. You'll see my simple solution to displaying transcoding progress.

pubspec.yaml

dependencies:
  camera: ^0.9.4+12
  flutter_riverpod: ^1.0.3
  ffmpeg_kit_flutter_full_gpl: ^4.5.1
  wakelock: ^0.5.6

Riverpod StateNotifier classes to send progress updates to the UI for both transcoding and uploading to Firebase Storage. This assumes familiarity with Riverpod and StateNotifiers.

@immutable
class TranscodeUploadMessage {
  const TranscodeUploadMessage({
    required this.id,
    required this.statusTitle,
    required this.statusMessage,
    required this.uploadPercentage,
    required this.isRunning,
    required this.completed,
    required this.showSpinner,
    required this.showPercentage,
    required this.showError,
  });

  final int id;
  final String statusTitle;
  final String statusMessage;
  final String uploadPercentage;
  final bool isRunning;
  final bool completed;
  final bool showSpinner;
  final bool showPercentage;
  final bool showError;

  TranscodeUploadMessage copyWith({
    int? id,
    String? statusTitle,
    String? statusMessage,
    String? uploadPercentage,
    bool? isRunning,
    bool? completed,
    bool? showSpinner,
    bool? showPercentage,
    bool? showError,
  }) {
    return TranscodeUploadMessage(
      id: id ?? this.id,
      statusTitle: statusTitle ?? this.statusTitle,
      statusMessage: statusMessage ?? this.statusMessage,
      uploadPercentage: uploadPercentage ?? this.uploadPercentage,
      isRunning: isRunning ?? this.isRunning,
      completed: completed ?? this.completed,
      showSpinner: showSpinner ?? this.showSpinner,
      showPercentage: showPercentage ?? this.showPercentage,
      showError: showError ?? this.showError,
    );
  }
}

class TranscodeUploadMessageNotifier
    extends StateNotifier<List<TranscodeUploadMessage>> {
  TranscodeUploadMessageNotifier() : super([]);

  /// Since our state is immutable, we are not allowed to do
  /// `state.add(message)`. Instead, we should create a new list of messages which
  /// contains the previous items and the new one.
  ///
  /// Using Dart's spread operator here is helpful!
  void set(TranscodeUploadMessage message) {
    state = [...state, message];
  }

  /// Our state is immutable. So we're making a new list instead of changing
  /// the existing list.
  void remove(int id) {
    state = [
      for (final message in state)
        if (message.id != id) message,
    ];
  }

  /// Update message. Since our state is immutable, we need to make a copy of
  /// the message. We're using our `copyWith` method implemented before to help
  /// with that.
  void update(TranscodeUploadMessage messageUpdated) {
    state = [
      for (final message in state)
        if (message.id == messageUpdated.id)

          /// Use copyWith to update a message
          message.copyWith(
            statusTitle: messageUpdated.statusTitle,
            statusMessage: messageUpdated.statusMessage,
            uploadPercentage: messageUpdated.uploadPercentage,
            isRunning: messageUpdated.isRunning,
            completed: messageUpdated.completed,
            showSpinner: messageUpdated.showSpinner,
            showPercentage: messageUpdated.showPercentage,
            showError: messageUpdated.showError,
          )
        else

          /// other messages, which there are not any at this time, are not
          /// modified
          message,
    ];
  }
}

/// Using StateNotifierProvider to allow the UI to interact with our
/// TranscodeUploadMessageNotifier class.
final transcodeMessageProvider = StateNotifierProvider.autoDispose<
    TranscodeUploadMessageNotifier, List<TranscodeUploadMessage>>((ref) {
  return TranscodeUploadMessageNotifier();
});

ffmpegkit package running ffmpeg commands and sending transcoding statistics to the StateNotifier. (Missing quite a bit of code, but pseudocode to demonstrate.)

/// By default, set to video ffmpeg command.
String ffmpegCommand = '-i $messageUri '
    '-acodec aac '
    '-vcodec libx264 '
    '-f mp4 -preset veryfast '
    '-movflags frag_keyframe+empty_moov '
    '-crf 23 $newMessageUri';

if (_recordingType == RecordingType.audio) {
  ffmpegCommand = '-vn '
      '-i $messageUri '
      '-y '
      '-acodec libmp3lame '
      '-f '
      'mp3 '
      '$newMessageUri';
}

/// Set the initial state notifier as we start transcoding.
ref
    .read(transcodeMessageProvider.notifier)
    .set(const TranscodeUploadMessage(
      id: 1,
      statusTitle: 'Transcoding Recording',
      statusMessage: 'Your recording is being transcoded '
          'before upload. Please do not navigate away from this screen.',
      uploadPercentage: '0%',
      isRunning: true,
      completed: false,
      showSpinner: false,
      showPercentage: false,
      showError: false,
    ));

await FFmpegKit.executeAsync(
  ffmpegCommand,
  (Session session) async {
    final ReturnCode? returnCode = await session.getReturnCode();

    if (ReturnCode.isSuccess(returnCode)) {
      /// Transcoding is complete, now display uploading message
      /// and spinner at 0%.
      ref
          .read(transcodeMessageProvider.notifier)
          .update(const TranscodeUploadMessage(
            id: 1,
            statusTitle: 'Uploading Recording',
            statusMessage:
                'Your recording is now being '
                'uploaded. Please do not navigate away from this screen.',
            uploadPercentage: '0%',
            isRunning: true,
            completed: false,
            showSpinner: true,
            showPercentage: true,
            showError: false,
          ));

      /// Upload the now transcoded video/audio to cloud storage.
      /// Use FlutterFire Firebase Storage tasks to get upload
      /// progress. Your Firebase Storage function can also
      /// reuse the transcodeMessageProvider to send UI state
      /// updates for the upload, which will happen very quickly
      /// even on slow connections now that the recording size
      /// is dramatically reduced.
      await uploadRecordingToFirebaseCloudStorage(ref);
    } else if (ReturnCode.isCancel(returnCode)) {
      // Do something if canceled
    } else {
      // Do something with the error
    }
  },
  (Log log) => debugPrint(log.getMessage()),
  (Statistics statistic) {
    /// Statistics provides a running transcoding progress meter.
    int completePercentage = (statistic.getTime() * 100) ~/ _duration!;
    ref
        .read(transcodeMessageProvider.notifier)
        .update(TranscodeUploadMessage(
          id: 1,
          statusTitle: 'Transcoding Recording',
          statusMessage: 'Your recording is being '
              'transcoded. Please do not navigate away from this screen.',
          uploadPercentage: '$completePercentage%',
          isRunning: true,
          completed: false,
          showSpinner: true,
          showPercentage: true,
          showError: false,
        ));
  }).then((Session session) {
    debugPrint(
        'Async FFmpeg process started with sessionId ${session.getSessionId()}.');
  }).catchError((error) async {
    debugPrint('transcoding error: $error');
  });

Use a Riverpod Consumer to watch the StateNotifier and display the updated state in your UI.

Consumer(
    builder: (context, watch, child) {
      final List<TranscodeUploadMessage> messages =
          ref.watch(transcodeMessageProvider);
      if (messages.isEmpty) {
        return const SizedBox.shrink();
      }

      final message = messages[0];

      if (message.isRunning ||
          message.completed ||
          message.showError) {
        // Display widgets with StateNotifier data
      }

      return const SizedBox.shrink();
    },
)

Source https://stackoverflow.com/questions/71027684

QUESTION

How to grant access to google cloud storage buckets based on users auth uid

Asked 2022-Feb-07 at 22:55

I am running a cloud function that saves images like this:

1//Pseudocode
2const admin = await import("firebase-admin");
3const bucket = admin.storage().bucket();
4const file = bucket.file('myName');
5const stream = file.createWriteStream({ resumable: false });
6...
7

After the images are uploaded, I get the publicUrl like so

file.publicUrl()

and store it in an Object.

This object then gets stored to firestore.

When I now copy this URL from the object (the URL structure looks like this): https://storage.googleapis.com/new-project.appspot.com/ZWHpYGSQWYXLlUcAwkRFQLC0u7s1/f2a48bdc-7eb3-4174-9f6e-3fd963003bd7/177373254.png

and paste it into a browser address bar, I get an error:

<Error>
  <Code>AccessDenied</Code>
  <Message>Access denied.</Message>
  <Details>Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.</Details>
</Error>

even with the test rules:

service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read: if true;
      allow write: if false;
    }
  }
}

I have read some issues here on Stack Overflow, and it seems like this is because I am using Google Cloud Storage buckets and not Firebase Storage buckets (I thought they were the same),

but I am very confused about how to write rules in this case so that only authenticated Firebase users can read files.

Any help is highly appreciated.

ANSWER

Answered 2022-Feb-07 at 22:55

The bucket is shared between Firebase and Cloud Storage, but the way you access the bucket is quite different.

When you access the bucket through a Firebase SDK, or through a download URL generated by a Firebase SDK, your access goes through a Firebase layer. This layer enforces the security rules (for the SDK), and grants temporary read-only access to the download URL.

When you access the bucket through a Cloud SDK, or a signed URL generated by a Cloud SDK, your access does not go through any Firebase layer, and thus Firebase security rules have no effect on this access.

The public URL you have is just a way to identify a file in your Cloud Storage bucket. It does not have any implied access permissions. You will need to make sure the IAM properties for your bucket allow the access you want the user to have to the file.
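
As a point of comparison, here is a minimal sketch of generating a time-limited signed URL instead of relying on the public URL. It uses the google-cloud-storage Python client for illustration (the question's code is Node.js); the bucket and object names are copied from the URL in the question, and the credentials used must be able to sign (for example a service account key).

from datetime import timedelta
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("new-project.appspot.com")
blob = bucket.blob("ZWHpYGSQWYXLlUcAwkRFQLC0u7s1/f2a48bdc-7eb3-4174-9f6e-3fd963003bd7/177373254.png")

# V4 signed URL: anyone holding this link can read the object, but only until
# it expires, so the bucket itself can stay private.
signed_url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="GET",
)
print(signed_url)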

Source https://stackoverflow.com/questions/71026323

QUESTION

How to delete a Cloud Storage file after it has been downloaded?

Asked 2022-Feb-03 at 15:57

I have a Cloud Function that deletes the file from Cloud Storage

Function:

1const { Storage } = require("@google-cloud/storage");
2const storage = new Storage();
3
4let handler = async (file, context) => {
5    console.log(`  Event: ${context.eventId}`);
6    console.log(`  Event Type: ${context.eventType}`);
7    console.log(`  Bucket: ${file.bucket}`);
8    console.log(`  File: ${file.name}`);
9
10    let bucket = storage.bucket(file.bucket);
11    let bucketFile = bucket.file(file.name);
12
13    bucketFile.delete();
14};
15

I want to trigger this function after any file in the bucket is downloaded.

I've looked at the Cloud Storage triggers,

google.storage.object.finalize (default)

google.storage.object.delete

google.storage.object.archive

google.storage.object.metadataUpdate

but they don't work the way I want them to. Anyone have any suggestions?

Edit:

If I need to explain a little more what I want to do; Whether registered in our system or not, users are given links with some of their data openly to the public. Since these links may contain sensitive data, I would like to be granted a one-time download right. After a single download, the data needs to be permanently deleted.

ANSWER

Answered 2022-Feb-03 at 13:47

You can catch the get event if you activate the audit logs on Cloud Storage (be careful: audit logs on Cloud Storage can generate a large volume of logs and can be costly).

When the audit logs are activated, you can filter on that method name:

methodName: "storage.objects.get"

Of course you can add other filters on Cloud Logging, like the resourceName to filter on the bucket and/or the file prefix/suffix (use the =~ for the regex expression)

When your filter is OK and you get only the entries that you expect, create a sink to PubSub, then a Push PubSub subscription to invoke your piece of code (on Cloud Functions or on Cloud Run).

You will receive the Cloud Logging JSON entry, get the resourceName in the JSON to know which file has been downloaded, and then delete it.
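
To make that last step concrete, here is a minimal sketch of the receiving side, written in Python as a Pub/Sub-triggered Cloud Function. It assumes the Cloud Logging sink from the answer is publishing the audit entries to the subscribed topic; the exact layout of protoPayload.resourceName should be verified against your exported entries.

import base64
import json

from google.cloud import storage

storage_client = storage.Client()

def delete_downloaded_object(event, context):
    """Triggered by a Pub/Sub push/pull message carrying the exported log entry."""
    entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    # Audit entries typically carry a resourceName such as
    # "projects/_/buckets/MY_BUCKET/objects/path/to/file".
    resource_name = entry["protoPayload"]["resourceName"]
    bucket_part, object_name = resource_name.split("/objects/", 1)
    bucket_name = bucket_part.rsplit("/", 1)[-1]

    # Delete the object that was just downloaded (one-time download semantics).
    storage_client.bucket(bucket_name).blob(object_name).delete()
    print(f"Deleted gs://{bucket_name}/{object_name} after download")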

Source https://stackoverflow.com/questions/70970632

QUESTION

How to ensure access the right backend M3U8 file in origin cluster mode

Asked 2022-Jan-31 at 16:53

From SRS how to transmux HLS wiki, we know SRS generate the corresponding M3U8 playlist in hls_path, here is my config file:

1http_server {
2    enabled         on;
3    listen          8080;
4    dir             ./objs/nginx/html;
5}
6vhost __defaultVhost__ {
7    hls {
8        enabled         on;
9        hls_path        /data/hls-records;
10        hls_fragment    10;
11        hls_window      60;
12    }
13}
14

With a single SRS server, every client playing the HLS stream accesses the same push SRS server, and that's fine. But in origin cluster mode there are many SRS servers, and each stream lives on one of them. When a client plays an HLS stream, we can't guarantee it reaches the right origin SRS server (which causes a 404 HTTP status code if the stream doesn't exist there). This is unlike RTMP and HTTP-FLV streams, for which SRS uses the coworker HTTP-API feature to redirect the client to the right origin SRS.

To fix this issue, I can think of two solutions:

  • Use a specialized backend HLS segment SRS server:
    Don't generate the M3U8 on the origin SRS servers; every stream is forwarded to this SRS server, all M3U8 playlists are generated on it, and all HLS requests are proxied to it (using nginx). The cons of this solution: it is limited to one instance, with no scaling ability and a single-node risk.

The origin srs.conf forward config looks like this:

vhost same.vhost.forward.srs.com {
    # forward stream to other servers.
    forward {
        enabled on;
        destination 192.168.1.120:1935;
    }
}

where 192.168.1.120 is the backend hls segment SRS server.

  • Use cloud storage such as NFS/K8s PV/a distributed file system:
    Mount the cloud storage as a local folder on every SRS server; whichever SRS server a stream lands on, the M3U8 file and ts segments are written to the same shared storage, so the HTTP server can serve them as static files for HLS requests. From my tests, if the cloud storage write speed is reliable, this is a good solution. But if the network shakes or the write speed is slower than the receive speed, it blocks the other coroutines and makes SRS behave abnormally.

The hls_path config then looks like this:

vhost __defaultVhost__ {
    hls {
        enabled         on;
        hls_path        /shared_storage/hls-records;
        hls_fragment    10;
        hls_window      60;
    }
}

Here 'shared_storage' means an NFS/CephFS/PV mount point.

In my view, the above solutions do not fundamentally resolve the access issue. Is there a better, more reliable production-grade solution for this case?

ANSWER

Answered 2022-Jan-31 at 16:53

As you use OriginCluster, you must have lots of streams to serve, with lots of encoders publishing streams to your media servers. The keys to solving the problem:

  1. Never use a single server; use a cluster for elasticity, because you might get many more streams in the future. So forward is not good, because you must configure a special set of streams to forward to, similar to a manual hash algorithm.
  2. Besides bandwidth, disk IO is also a bottleneck. You definitely need a high-performance network storage cluster. But be careful: never let SRS write directly to that storage, as it will block the SRS coroutines.

So the best solution, as I know, is to:

  1. Use the SRS Origin Cluster to write HLS to your local disk (a RAM disk is even better), to make sure disk IO never blocks the SRS coroutines (driven by state-threads network IO).
  2. Use a network storage cluster to store the HLS files, for example cloud storage like AWS S3, or NFS/K8s PV/a distributed file system. Use nginx or a CDN to deliver the HLS.

Now the problem is: How to move data from memory/disk to a network storage cluster?

You must build a service, by Python or Go:

  • Use on_hls callback, to notify your service to move the HLS files.
  • Use on_publish callback, to notify your service to start FFmpeg to convert RTMP to HLS.

Note that FFmpeg should pull stream from SRS edge, never from origin server directly.
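
For illustration, here is a minimal sketch of such a service in Python with Flask and google-cloud-storage. The bucket name is a placeholder, and the on_hls payload fields used below ("file", "m3u8", "m3u8_url") are assumptions that should be checked against your SRS version's HTTP callback documentation, as should the expected "0" success response.

from flask import Flask, request
from google.cloud import storage

app = Flask(__name__)
bucket = storage.Client().bucket("my-hls-bucket")  # placeholder bucket name

@app.route("/api/v1/hls", methods=["POST"])
def on_hls():
    payload = request.get_json()

    # Assumed fields: "file" is the local .ts segment SRS just finished writing,
    # "m3u8" is the local playlist path, "m3u8_url" is its public-facing path.
    local_ts = payload["file"]
    bucket.blob(local_ts.lstrip("./")).upload_from_filename(local_ts)

    if payload.get("m3u8"):
        playlist_key = payload.get("m3u8_url", payload["m3u8"]).lstrip("./")
        bucket.blob(playlist_key).upload_from_filename(payload["m3u8"])

    # SRS treats a "0" response from an HTTP callback as success (assumption;
    # verify against your SRS version).
    return "0", 200

With something like this in place, SRS only ever writes to local or RAM disk, the service moves the files asynchronously, and nginx or a CDN serves the playlist and segments from the shared storage.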

Source https://stackoverflow.com/questions/70405004

QUESTION

Terraform Firebase Web Application

Asked 2022-Jan-31 at 09:01

I have some trouble with this terraform file I wrote to define a Firebase application in my org account:

1terraform {
2  required_providers {
3    google = {
4      source = "hashicorp/google"
5      version = "3.86.0"
6    }
7  }
8}
9
10provider "google-beta" {
11  credentials = file("service-account-credentials.json")
12  project     = var.gcp_project_id
13  region      = var.region
14  zone        = var.zone
15}
16
17resource "google_project" "default" {
18    provider = google-beta
19
20    project_id = var.gcp_project_id
21    name       = "Optic OTP API"
22    org_id     = var.gcp_organization_id
23}
24
25resource "google_firebase_project" "default" {
26    provider = google-beta
27    project  = google_project.default.project_id
28}
29
30resource "google_firebase_web_app" "basic" {
31    provider = google-beta
32    project = google_project.default.project_id
33    display_name = "Optic OTP API"
34
35    depends_on = [google_firebase_project.default]
36}
37
38data "google_firebase_web_app_config" "basic" {
39  provider   = google-beta
40  web_app_id = google_firebase_web_app.basic.app_id
41}
42
43resource "google_storage_bucket" "default" {
44    provider = google-beta
45    name = "firebase-optic-storage"
46}
47
48resource "google_storage_bucket_object" "default" {
49    provider = google-beta
50    bucket = google_storage_bucket.default.name
51    name = "firebase-config.json"
52
53    content = jsonencode({
54        appId              = google_firebase_web_app.basic.app_id
55        apiKey             = data.google_firebase_web_app_config.basic.api_key
56        authDomain         = data.google_firebase_web_app_config.basic.auth_domain
57        databaseURL        = lookup(data.google_firebase_web_app_config.basic, "database_url", "")
58        storageBucket      = lookup(data.google_firebase_web_app_config.basic, "storage_bucket", "")
59        messagingSenderId  = lookup(data.google_firebase_web_app_config.basic, "messaging_sender_id", "")
60        measurementId      = lookup(data.google_firebase_web_app_config.basic, "measurement_id", "")
61    })
62}
63

I followed the official terraform plugin documentation here

I'm using a Service Account created in the company GCP org with the Firebase Service Management Service Agent role:


But when I run terraform plan I get

Error when reading or editing Storage Bucket "firebase-optic-storage": googleapi: Error 403: XXX does not have storage.buckets.get access to the Google Cloud Storage bucket.

Even if the service account’s role has it!

$ gcloud projects get-iam-policy optic-web-otp

# returns
bindings:
- members:
  - serviceAccount:XXX
  role: roles/firebase.managementServiceAgent
- members:
  - serviceAccount:XXX
  role: roles/firebase.sdkAdminServiceAgent
- members:
  - serviceAccount:XXX
  role: roles/firebase.sdkProvisioningServiceAgent
- members:
  - user:MY-EMAIL
  role: roles/owner
etag:
version: 1

(The XXX is the right service account identifier)

Do you have any hints on how to check what is missing from my Service Account?

ANSWER

Answered 2022-Jan-31 at 09:01

If the roles that you listed are the only ones that your account has, then you lack roles that allow you to access Cloud Storage. The command you used to check the roles doesn't give you the correct information.

The correct solution (described in this answer) would be to run this:

gcloud projects get-iam-policy <your project name> \
  --flatten="bindings[].members" \
  --format='table(bindings.role)' \
  --filter="bindings.members:<your account name>"

If you don't see any of these roles:

  • roles/storage.objectAdmin
  • roles/storage.admin
  • roles/storage.objectCreator

(all described here), you won't be able to create any buckets/objects.

In this case, add these roles to your service account and try again.

For example:

gcloud projects add-iam-policy-binding optic-web-otp \
    --member=user:my-user@example.com --role=roles/storage.admin
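Since the rest of this setup is already managed through Terraform, the same binding could alternatively be declared in the configuration rather than applied with gcloud. A minimal sketch, assuming a hypothetical var.service_account_email variable (declared alongside the existing variables) that holds the service account's email:

# Grant the Terraform service account storage admin rights on the project.
# var.service_account_email is an assumed variable, not part of the original config.
resource "google_project_iam_member" "terraform_sa_storage_admin" {
  provider = google-beta
  project  = var.gcp_project_id
  role     = "roles/storage.admin"
  member   = "serviceAccount:${var.service_account_email}"
}

Note that whoever applies this binding must themselves be allowed to set the project's IAM policy (for example, the project owner listed in the policy output above).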

Source https://stackoverflow.com/questions/69447788

QUESTION

Access specific folder in GCS bucket according to user, using Workload Identity Federation

Asked 2022-Jan-28 at 18:52

I have an external identity provider that supports OpenID Connect (OIDC) and want to access Google Cloud Storage (GCS) directly, using a short-lived access token. So I'm using workload identity federation in order to provide a credential from my external identity provider and get a federated token in exchange.

I have created the workload identity pool and provider and connected a service account to it, which has write access to a certain bucket in GCS.

How can I differentiate access to a specific folder in the bucket according to the token provided by my external identity provider? For example, userA should have access only to folderA in the bucket. Can I do this using one service account?

Any help would be highly appreciated.

ANSWER

Answered 2022-Jan-28 at 18:52

Folders don't exist in Cloud Storage; it is blob storage, and all objects are stored at the bucket level. For human readability and representation, the / character is used as the folder separator, by convention.

Therefore, because directories don't exist, you can't grant any permissions on them. The finest granularity is the bucket.


In your use case, you can't grant write access at the folder level, but you can create one bucket per user and grant the impersonated service account access on that bucket.
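A minimal Terraform sketch of the bucket-per-user approach described above; the bucket name, location, and the var.user_a_service_account_email variable are illustrative assumptions, not values taken from the question:

variable "user_a_service_account_email" {
  type        = string
  description = "Email of the service account impersonated by user A (assumed)"
}

# One bucket dedicated to a single user. Repeat this pattern per user,
# typically with a separate impersonated service account for each one.
resource "google_storage_bucket" "user_a" {
  name     = "my-app-user-a"   # bucket names must be globally unique
  location = "EU"
}

# Grant user A's service account object access on that bucket only.
resource "google_storage_bucket_iam_member" "user_a_object_admin" {
  bucket = google_storage_bucket.user_a.name
  role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${var.user_a_service_account_email}"
}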

Source https://stackoverflow.com/questions/70897139

QUESTION

Debugging a Google Dataflow Streaming Job that does not work expected

Asked 2022-Jan-26 at 19:14

I am following this tutorial on migrating data from an Oracle database to a Cloud SQL PostgreSQL instance.

I am using the Google-provided streaming template Datastream to PostgreSQL.

At a high level this is what is expected:

  1. Datastream exports the backfill and change data, in Avro format, from the source Oracle database into the specified Cloud Storage bucket location.
  2. This triggers the Dataflow job, which picks up the Avro files from this Cloud Storage location and inserts them into the PostgreSQL instance.

When the Avro files are uploaded into the Cloud Storage location, the job is indeed triggered, but when I check the target PostgreSQL database, the required data has not been populated.

When I check the job logs and worker logs, there are no error logs. When the job is triggered, these are the logs that are emitted:

StartBundle: 4
Matched 1 files for pattern gs://BUCKETNAME/ora2pg/DEMOAPP_DEMOTABLE/2022/01/11/20/03/7e13ac05aa3921875434e51c0c0c63aaabced31a_oracle-backfill_336860711_1_0.avro
FinishBundle: 5

Does anyone know what the issue is? Is it a configuration issue? If needed I will post the required configurations.

If not, could someone advise me on how to properly debug this particular Dataflow job? Thanks.

EDIT 1:

When checking the step info for the steps in the pipeline, I found the following:

Below are all the steps in the pipeline: [screenshot of the pipeline steps]

The first step (DatastreamIO) seems to work as expected, with the correct number of elements (2) in the "Output collection". [screenshot]

However, in the second step, these 2 elements are not found in the Output collection. On further inspection, it appears the elements are dropped in the following step (Format to Postgres DML > Format to Postgres DML > Map): [screenshot]

EDIT 2:

[Screenshot of the Cloud Worker logs for the above step]

EDIT 3:

I individually built and deployed the template from source in order to debug this issue. I found that the code works up to the following line in DatabaseMigrationUtils.java:

return KV.of(jsonString, dmlInfo);

Where the jsonString variable contains the dataset read from the .avro file.

But the code does not progress beyond this and seems to abruptly stop without any errors being thrown.

ANSWER

Answered 2022-Jan-26 at 19:14

This answer is accurate as of 19th January 2022.

Upon manually debugging this Dataflow job, I found that the issue is that the job looks for a schema with the exact same name as the value passed for the databaseName parameter, and there is no other input parameter for the job through which a schema name could be passed. Therefore, for this job to work, the tables have to be created/imported into a schema with the same name as the database.

However, as @Iñigo González said, this Dataflow template is currently in Beta and seems to have some bugs: I ran into another issue as soon as this one was resolved, which required me to change the source code of the Dataflow template job itself and build a custom Docker image for it.

Source https://stackoverflow.com/questions/70703277

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Cloud Storage

Tutorials and Learning Resources are not available at this moment for Cloud Storage
