gupload | Upload files with gRPC and/or TLS library
kandi X-RAY | gupload Summary
gupload is an experiment to verify how uploading files via gRPC compares to a barebones HTTP2 server. See sending-files-via-grpc to check the experiment.

Use serve to start a server (either gRPC- or HTTP2-based) and upload to upload a file to a given address (either via gRPC or HTTP2). The HTTP2 versions of both the server and the client require certificates / private keys; this is needed for a well-formed TLS connection between them. The server takes both (certificate and private key), while the client takes just the certificate. I've already created some certificates at ./certs so you can just reference them as you wish (you can regenerate them at any time with make certs). gRPC is the default mechanism for both clients and servers, i.e., to use it you should not specify --http2.
Community Discussions
Trending Discussions on gupload
QUESTION
The response headers for an object stored in a Google Cloud Storage bucket include Google-specific headers like x-guploader-uploadid, x-goog-generation, and x-goog-storage-class. The GCP console doesn't appear to have a way to disable these headers. Is there some other way built into GCS to remove the Google response headers?
ANSWER
Answered 2022-Mar-02 at 02:34
There are two APIs: XML and JSON.
AFAIK only the XML API provides those extension headers. The JSON API does not. You cannot control which headers are returned in HTTP responses. Some of those headers can be used in request header match conditions.
The headers are documented here:
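As an illustration (not from the original answer), a minimal Python sketch with a placeholder bucket and object name can make those headers visible; it confirms they are present in the response, but there is no setting that strips them on the GCS side:

```python
import requests

# Placeholder public object; substitute your own bucket and object name.
url = "https://storage.googleapis.com/example-bucket/example-object.txt"

# A HEAD request is enough to inspect the response headers.
resp = requests.head(url)
resp.raise_for_status()

# Print the Google-specific headers the question asks about.
for name, value in resp.headers.items():
    if name.lower().startswith(("x-goog-", "x-guploader-")):
        print(f"{name}: {value}")
```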
QUESTION
I have a static web site hosted on a bucket in Google Cloud Storage with Google Cloud CDN in front of it. That site has many pictures with inspirational quotes on them.
PageSpeed Insights is saying they need an efficient cache policy:
but I am setting a cache time of one year (Cache-Control:"Cache-Control:public,max-age=31536000") on all my images when I use gsutil to upload them to my bucket.
If I go to the bucket and check, it says they are properly set:
and if I check my settings for Google Cloud CDN, also set to one year across the board:
But PageSpeed Insights says:
Serve static assets with an efficient cache policy 15 resources found A long cache lifetime can speed up repeat visits to your page.
I check the actual Response headers delivered via Chrome Dev Tools and they look ok:
...ANSWER
Answered 2022-Jan-25 at 19:42
The issue appears to be that you've written the phrase Cache-Control in the Cache-Control configuration field, causing it to appear twice in the response header and making it invalid. Presumably setting it to simply public, max-age=31536000 will make the resulting header valid.
Don't worry about the request Cache-Control headers; you can't control them anyway. They're probably set that way because Dev Tools usually turns caching off.
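Not from the original answer, but as a minimal sketch of the corrected setting using the google-cloud-storage Python client (bucket and object names are placeholders; the same value can also be set with gsutil setmeta):

```python
from google.cloud import storage

# Placeholder bucket and object names; adjust to your own site assets.
client = storage.Client()
bucket = client.bucket("my-static-site-bucket")
blob = bucket.blob("images/quote-001.png")

# Set only the header *value*, not the literal text "Cache-Control:" again.
blob.cache_control = "public, max-age=31536000"
blob.patch()  # push the updated metadata to GCS

print(blob.cache_control)
```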
QUESTION
I have a Cloud Run container that fetches a CSV file from a public Firebase Storage URL. The fetch is performed with the Python requests module (i.e. requests.get()).
Sometimes, apparently randomly, I get a 503 status code and the response has length zero:
...ANSWER
Answered 2021-Oct-20 at 00:53
The Cloud Storage HTTP Status 503 means that Cloud Storage encountered an internal error. This usually means that you can try again later (e.g. wait ten seconds) and the request will succeed, provided that the original request is valid.
The recommended solution is to try again using truncated exponential backoff.
Reference: 503 - Service Unavailable
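A minimal sketch of that approach, assuming a placeholder URL and retry limits chosen only for illustration:

```python
import random
import time

import requests

# Placeholder URL; substitute the public Firebase Storage download URL.
CSV_URL = "https://firebasestorage.googleapis.com/v0/b/example-bucket/o/data.csv?alt=media"

def fetch_with_backoff(url, max_retries=5, max_sleep=32.0):
    """Retry transient 5xx responses with truncated exponential backoff."""
    resp = None
    for attempt in range(max_retries):
        resp = requests.get(url)
        if resp.status_code < 500:
            return resp
        # Sleep 2^attempt seconds plus jitter, capped at max_sleep.
        time.sleep(min(2 ** attempt + random.random(), max_sleep))
    resp.raise_for_status()  # give up: surface the last 5xx as an exception
    return resp

csv_text = fetch_with_backoff(CSV_URL).text
```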
[UPDATE - see comments below]
If one instance works correctly and another identical instance consistently fails, then you have a service problem. You will need to open a support case with Google Cloud. Collect your logs so that you can clearly document what you described in the comments and Google knows what to diagnose.
QUESTION
I have created a custom template which reads from BigQuery using the ReadFromBigQuery I/O connector. I use it like this:
ANSWER
Answered 2021-Sep-03 at 03:57
I believe the problem is that both pipelines are performing a BigQuery export into the same temporary directory, and they're stepping on each other's toes. Can you try providing a separate GCS location to each of your ReadFromBigQuery transforms? You'd do something like this:
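The original code isn't preserved on this page, but a rough sketch of the idea, with placeholder queries and bucket paths, might look like this (ReadFromBigQuery accepts a gcs_location argument for its export directory):

```python
import apache_beam as beam
from apache_beam.io import ReadFromBigQuery
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder queries and GCS paths, not taken from the original template.
with beam.Pipeline(options=PipelineOptions()) as p:
    orders = p | "ReadOrders" >> ReadFromBigQuery(
        query="SELECT * FROM `my-project.shop.orders`",
        use_standard_sql=True,
        gcs_location="gs://my-temp-bucket/bq-export/orders",  # distinct temp dir
    )
    users = p | "ReadUsers" >> ReadFromBigQuery(
        query="SELECT * FROM `my-project.shop.users`",
        use_standard_sql=True,
        gcs_location="gs://my-temp-bucket/bq-export/users",  # distinct temp dir
    )
```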
QUESTION
I made an app with Python and Streamlit and I added the Drive API. I have all the code as I found on the official Google page, and at first it works.
I keep a .csv on Google Drive and, as I cannot save files in Heroku, I save it to Drive and then download it every time I need it in the app. At first the download code works and the .csv is downloaded correctly, but after some uploads and downloads the download code shows this error:
...ANSWER
Answered 2021-Aug-17 at 19:40
When you get a response from the service, it's always a good idea to first check the response code before you try to use the data you expect to have in that response.
If you have a look at the response objects, you can see the status is 200 when it works and 403 when it doesn't. 403 means "Forbidden": the server doesn't return the data you expect, which is why there is no content-disposition header and your regex fails.
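A minimal sketch of that check, assuming a requests-based download with a placeholder URL (the app's actual code isn't shown here):

```python
import re

import requests

# Placeholder download URL; substitute the one your app builds for the Drive file.
url = "https://drive.google.com/uc?export=download&id=FILE_ID"

resp = requests.get(url)

if resp.status_code != 200:
    # 403 (Forbidden) and other errors won't carry the expected headers.
    raise RuntimeError(f"Download failed with HTTP {resp.status_code}")

disposition = resp.headers.get("content-disposition", "")
match = re.search(r'filename="(.+?)"', disposition)
filename = match.group(1) if match else "download.csv"
```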
QUESTION
I have a Python Apache Beam streaming pipeline running in Dataflow. It's reading from PubSub and writing to GCS. Sometimes I get errors like "Error in _start_upload while inserting file ...", which comes from:
...ANSWER
Answered 2021-Jun-14 at 18:49
In a streaming pipeline, Dataflow retries work items running into errors indefinitely. The code itself does not need to have retry logic.
QUESTION
I am trying to upload a file to Google Drive using the HTTP-based protocol of the Google Drive API.
Request:
...ANSWER
Answered 2021-Jun-06 at 12:18
In your request body, I think that "parentID": "[\'16zJngsKtpLtlFe-WTo8LOCCQ2k-uqkZI\']", is required to be modified. Also, there is no folder property in the file metadata. So, how about the following modification?
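The suggested modification itself isn't preserved on this page; as a rough sketch with placeholder values (only the parent ID comes from the quoted body), the Drive v3 metadata would use a plain JSON array for parents and drop the folder key:

```python
import json

import requests

# Placeholder token and file name; the parent ID is the one quoted in the answer.
access_token = "ya29.PLACEHOLDER"
metadata = {
    "name": "example.txt",
    "parents": ["16zJngsKtpLtlFe-WTo8LOCCQ2k-uqkZI"],  # plain JSON array, not a quoted string
    # no "folder" key: the parent folder is identified only by its ID
}

# Metadata-only create shown for brevity; an actual upload would use the
# multipart or resumable upload endpoints.
resp = requests.post(
    "https://www.googleapis.com/drive/v3/files",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json; charset=UTF-8",
    },
    data=json.dumps(metadata),
)
print(resp.status_code, resp.json())
```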
QUESTION
This might be a duplicate but none of the previous answers match my conditions.
I installed gsutil as part of the google-cloud-sdk following https://cloud.google.com/sdk/docs/install. I could configure gcloud properly without errors.
Every time I try to use gsutil, for example with gsutil -D ls, I get
ANSWER
Answered 2021-May-31 at 20:27
After giving up on this, I decided to reinstall the whole google-cloud-sdk suite one last time, but this time using the snap version. Installing it via snap solved the issue for me. I think this points to some issue with my environment that was bypassed thanks to the snap containerization.
So no clear answer here, but if anyone is experiencing the same problem, giving snap a chance may solve the issue as it did for me.
QUESTION
I am building a file uploader which gives the user an option to upload files from their Google Drive. Google Picker is set up and working on the frontend (ReactJS), and I have the fileID and OAuth token given by Google Picker. I send these to the backend (Node.js), which uses the Google Drive API. I followed the documentation at https://developers.google.com/drive/api/v3/manage-downloads and put the OAuth token in the auth param of drive.files.get; now I get the following error:
...ANSWER
Answered 2020-Dec-17 at 10:48
- When you define const drive = google.drive({version: 'v3', auth: XXX}), you need to assign to auth the response of the function authorize(), as shown in the quickstart for the Drive API in node.js.
- Please follow the complete quickstart to obtain a valid authenticated client.
- If authenticating with an OAuth2 client is not what you want, there are also other options for creating valid credentials; see the google-api-nodejs-client library.
QUESTION
I have a backend GCS bucket behind a Google Cloud HTTP(S) load balancer with Cloud CDN enabled.
I'm trying to answer these questions based on response headers:
- was this response served from CDN
- if so which location/region
- was this a cache hit/miss
Here are the response headers. Based on cache-control, in theory, this should be cached. However, I don't see an indication that would verify the CDN works correctly. Similarly, all the other headers (x-goog-* and Server: UploadServer) seem to be coming from the GCS server, not the CDN.
ANSWER
Answered 2020-Aug-06 at 08:43
At the moment, you cannot answer the above questions just by looking at the headers on the client side.
One indication that a request was served from cache is the age header, which Cloud CDN appends to responses.
If you have enabled cache logging at the HTTP Load Balancer level, you can get all of the above information from the logs, more specifically from the fields:
- httpRequest.cacheHit, which indicates whether the request was served from the cache.
- jsonPayload.cacheId, which is the location and cache instance that the cached response was served from.
More detailed information on the above can be found here.
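As a hedged sketch of reading those fields with the google-cloud-logging Python client (the project ID, filter, and field handling are assumptions, not part of the original answer):

```python
from google.cloud import logging as cloud_logging

# Placeholder project ID and filter; point these at your load balancer's request log.
client = cloud_logging.Client(project="my-project")
log_filter = 'resource.type="http_load_balancer"'

for entry in client.list_entries(filter_=log_filter, max_results=20):
    http = entry.http_request or {}
    payload = getattr(entry, "payload", None)
    cache_id = payload.get("cacheId") if isinstance(payload, dict) else None
    print(entry.timestamp, http.get("requestUrl"),
          "cacheHit:", http.get("cacheHit"), "cacheId:", cache_id)
```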
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported