Explore all Object Storage open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Object Storage

marfs

MarFS 1.13 beta

mikeio

Second alpha release of MIKE IO 1.0

jupyterlab-s3-browser

0.11.1

laravel-openstack-swift

v2.1.1

ufile-csdk

v1.0.3-alpha

Popular Libraries in Object Storage

snakebite

by spotify (Python)

834 stars · Apache-2.0

A pure python HDFS client

ice_nine

by dkubb (Ruby)

290 stars · MIT

Deep Freeze Ruby Objects

gowfs

by vladimirvivien (Go)

128 stars · Apache-2.0

A Go client binding for Hadoop HDFS using WebHDFS.

CodeWars-7-kyu-Soluitions

by Automedon (JavaScript)

92 stars · MIT

CodeWars 7 kyu Solutions (Please leave a star thank you) Created by https://github.com/Automedon

django-storage-swift

by dennisv (Python)

81 stars · MIT

OpenStack Swift storage backend for Django

marfs

by mar-file-system (C)

74 stars · NOASSERTION

MarFS provides a scalable near-POSIX file system by using one or more POSIX file systems as a scalable metadata component and one or more data stores (object, file, etc) as a scalable data component.

mikeio

by DHI (Python)

69 stars · BSD-3-Clause

Read, write and manipulate dfs0, dfs1, dfs2, dfs3, dfsu and mesh files.

jupyterlab-s3-browser

by IBM (TypeScript)

54 stars · Apache-2.0

A JupyterLab extension for browsing S3-compatible object storage

distributed

by smallnest (Go)

43 stars · MIT

distributed synchronization primitives

Trending New Libraries in Object Storage

distributed

by smallnest (Go)

43 stars · MIT

distributed synchronization primitives

macos-tags

by dmkskn (Python)

10 stars · MIT

A python library to manipulate tags on macOS

dlock

by flowerinthenight (Go)

6 stars · MIT

Package for distributed locks.

cse110-w21-group18

by shravanhariharan2 (JavaScript)

4 stars

CSE 110 group repo

openstack-swift-sdk

by Trendyol (TypeScript)

4 stars · MIT

Openstack Swift SDK

-Fr-agile

by rudiejd (HTML)

3 stars

Group project for CSE 201

CSES

by vini7148 (C++)

2 stars

My solutions for CSES problem set (https://cses.fi/problemset/)

Software-eng-project-

by kuralovalin (CSS)

2 stars

CSE 343 Software engineering Group project "Memovercity"

Top Authors in Object Storage

1. Automedon (2 libraries, 112 stars)

2. eudisd (1 library, 2 stars)

3. mrkamel (1 library, 15 stars)

4. Percona-Lab (1 library, 15 stars)

5. b1ueshad0w (1 library, 4 stars)

6. iamprayush (1 library, 40 stars)

7. redcurrant (1 library, 10 stars)

8. shravanhariharan2 (1 library, 4 stars)

9. getsolus (1 library, 2 stars)

10. dkubb (1 library, 290 stars)



Trending Discussions on Object Storage

Different MinIO disk cache configuration for each bucket

I don't know how to store an array of objects with MySQL/Node correctly

Change Azure Storage Account TLS versions

functions and events in OCI

Cannot open Minio in browser after dockerizing it in Spring Boot App

Context of using Object Storage

Multipart upload with presigned urls - Scaleway S3-compatible object storage

Upload an image from input to server using JQuery/Ajax

How to use openstack on client side

How to call the right function (as a string) based on an argument?

QUESTION

Different MinIO disk cache configuration for each bucket

Asked 2022-Mar-29 at 07:44

I was given the task of setting up a simple object storage server or CDN, mainly for storing and serving (via API) PNG, TXT, and TIFF files; the server has both Docker and Kubernetes running. I am new to MinIO, but I managed to install a fresh MinIO instance, create multiple buckets, and set up their access policies and encryption. For disk caching I am able to apply one configuration across multiple buckets, but I cannot figure out how to apply a different configuration to each bucket, since the console only shows a single configuration.


My objectives are like this:

  • Bucket 1 - thematic data (PNG) - expiry after 60 days
  • Bucket 2 - topographic data (TIFF) - expiry after 30 days
  • Bucket 3 - metadata (TXT) - expiry after 1 day

I have already read the documentation but found no solution; any suggestion would be appreciated.

Thank you

ANSWER

Answered 2022-Mar-28 at 14:44

Cache configuration is part of a server deployment; within a single server you cannot apply different cache configurations to different buckets.
Note: caching is meant for gateway deployments that act as a relatively close consumption zone, counteracting latency for consumers.

If performance and caching requirements are really a must (for example, your distribution spans multiple regions), you can deploy an extra server that acts as a gateway. But you would end up with one gateway per desired configuration, and even then it is not per bucket.

Personally, I would go for a design with multiple server instances if differentiating caching behavior is really important.
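Separately from caching: the per-bucket expiry objectives in the question (60/30/1 days) can be covered by S3-style lifecycle rules, which are configured per bucket even on a single server. Below is a minimal sketch that builds one lifecycle document per bucket and applies it with boto3; the endpoint, credentials, and bucket names are placeholders, and actually applying the rules requires a reachable server.

```python
import json

def expiry_rule(rule_id, days):
    """One S3 lifecycle rule expiring every object after `days` days."""
    return {
        "ID": rule_id,
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # empty prefix: applies to the whole bucket
        "Expiration": {"Days": days},
    }

def lifecycle_config(days):
    """Full lifecycle configuration document for one bucket."""
    return {"Rules": [expiry_rule(f"expire-after-{days}d", days)]}

# Per-bucket retention from the question (bucket names are hypothetical).
BUCKET_EXPIRY_DAYS = {"thematic": 60, "topographic": 30, "metadata": 1}

def apply_policies(endpoint="http://127.0.0.1:9000",
                   access_key="minioadmin", secret_key="minioadmin"):
    """Apply one lifecycle configuration per bucket.

    Requires a reachable S3-compatible server and the boto3 package;
    the endpoint and credentials above are placeholders.
    """
    import boto3
    s3 = boto3.client("s3", endpoint_url=endpoint,
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key)
    for bucket, days in BUCKET_EXPIRY_DAYS.items():
        s3.put_bucket_lifecycle_configuration(
            Bucket=bucket, LifecycleConfiguration=lifecycle_config(days))

# Show the lifecycle document that would be sent for the 60-day bucket.
print(json.dumps(lifecycle_config(60), indent=2))
```

MinIO's mc client exposes the same per-bucket lifecycle management under its `mc ilm` subcommands, if a CLI workflow is preferred.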

Source https://stackoverflow.com/questions/70669014

QUESTION

I don't know how to store an array of objects with MySQL/Node correctly

Asked 2022-Mar-24 at 10:16

I am trying to make a table where I can register lists of items. Everything works, except storing the array of objects.

I tried a lot of things. With a BLOB column in MySQL I can't get the data back on the front end (Nuxt): .json() is not recognized with this method (see Convert Buffer Data).

I tried to store it as a JSON column, but that is not recognized by MySQL.

Now I store it in a TEXT column, but when I read the data back I only get [object Object], which I can't work with on either the front end or the back end.

I'd like to store something like

[
  {"quantity":"2","content":
    {"id":21,"reference":"LEG080276","designation":"cadre saillie mozaic blanc 2x6/8 modules 2x3 postes - prof 50mm","description":"sens de montage (horizontal)  nombre de modules (12)"}
  },
  {"quantity":"2","content":
    {"id":6,"reference":"LEG080260L","designation":"voyant 2 modules 0.03w pour support batibox 1 poste ","description":null}
  }
]

This is the route in Node.js; the list (array of objects) should be stored in content:

router.post('/list', function getApi(req, res) {
  const { user_id, to_uuid, content, message, list_name, status } = req.body;
  db.query(`INSERT INTO list (user_id, to_uuid, content, message, list_name, status) VALUES (?,?,?,?,?,?)`, [user_id, to_uuid, content, message, list_name, status], (err, results) => {
    if (err) {
      res.status(500).send(`Erreur lors de l'enregistrement de la liste`);
      console.log(err);
    }
    res.status(200).send('Liste Enregistrée');
  });
});

Does anyone have an idea?

Thanks in advance

ANSWER

Answered 2022-Mar-24 at 10:16

So I made it work!

I stored the stringified array of objects in a TEXT column in MySQL, and it works.

this.$axios.post('http://localhost:3000/api/list/', {
  'user_id': this.$auth.user[0].user_id,
  'to_uuid': this.$store.state.list.to_user.user_id,
  'content': JSON.stringify(this.$store.state.list.finalList),
  'message': this.message,
  'list_name': this.name,
  'status': "en attente"
})

This part is where I post the stringified array as content:

async created() {
  const listReceived = await this.$axios.$get(`/api/listReceived/${this.$auth.user[0].user_id}`)
  const usersList = await this.$axios.$get("/api/users")

  listReceived.forEach(element => {
    const userdata = usersList.filter((post) => post.user_id === element.user_id)
    const userSelected = {
      listId: element.list_id,
      listName: element.list_name,
      message: element.message,
      status: element.status,
      content: JSON.parse(element.content),
      nickname: userdata[0].nickname
    }
    this.finalList.push(userSelected)
  })
}

This part is where I get the stringified array back and JSON.parse the content.
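The pattern in this answer, serialize before INSERT and parse after SELECT, is language-agnostic. Here is a minimal sketch of the same round trip in Python, using an in-memory SQLite table as a stand-in for the MySQL TEXT column (table and column names are illustrative):

```python
import json
import sqlite3

# Stand-in for the MySQL `list` table: `content` is a plain TEXT column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE list (list_name TEXT, content TEXT)")

items = [
    {"quantity": "2", "content": {"id": 21, "reference": "LEG080276"}},
    {"quantity": "2", "content": {"id": 6, "reference": "LEG080260L"}},
]

# Serialize before INSERT (the JSON.stringify step)...
conn.execute("INSERT INTO list VALUES (?, ?)", ("demo", json.dumps(items)))

# ...and parse after SELECT (the JSON.parse step).
row = conn.execute("SELECT content FROM list WHERE list_name = 'demo'").fetchone()
restored = json.loads(row[0])

assert restored == items  # lossless round trip
print(restored[0]["content"]["reference"])  # -> LEG080276
```

The key point is that the TEXT column never holds `[object Object]`; that string appears only when an object is coerced to text without an explicit serialization step.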

Source https://stackoverflow.com/questions/71574446

QUESTION

Change Azure Storage Account TLS versions

Asked 2022-Feb-17 at 09:01

Has anyone got, or seen, a script that changes Azure Storage Account TLS versions in bulk? We have several hundred storage accounts that still have TLS 1.0 or 1.1 enabled, and we want to change them to 1.2. Because there are so many of them, clicking through manually is really not an option.

I have googled and tried to script it myself, but I am banging my head against the wall.

I have managed to loop through all my subscriptions and storage accounts and save each storage account name, resource group, and TLS version to a CSV, but now I need help with the next step: how can I change the TLS version to 1.2 wherever it is 1.0 or 1.1, using that data?

The line that changes the TLS version is (https://docs.microsoft.com/en-us/azure/storage/common/transport-layer-security-configure-minimum-version?tabs=powershell#configure-the-minimum-tls-version-for-a-storage-account):

Set-AzStorageAccount -ResourceGroupName $rgName `
    -Name $accountName `
    -MinimumTlsVersion TLS1_2

My current script

$Subscriptions = Get-AzSubscription
$data = foreach ($sub in $Subscriptions) {
    # suppress output on this line
    Write-Host Working with Subscription $sub.Name
    $null = Get-AzSubscription -SubscriptionName $sub.Name | Set-AzContext
    # let Select-Object output the objects that will be collected in variable $data
    Get-AzStorageAccount | Select-Object StorageAccountName, ResourceGroupName,
                                         @{Name = 'TLSVersion'; Expression = {$_.MinimumTlsVersion}}
}

# write a CSV file containing this data
$data | Export-Csv -Path C:\temp\data.csv -NoTypeInformation

Tips?

ANSWER

Answered 2022-Feb-17 at 09:01

• I appreciate the script you prepared for changing the TLS version of multiple storage accounts across subscriptions. I tested the relevant cmdlets against the storage accounts in my own (single) subscription and they worked well. Note that the script can only process subscriptions and resources your account has access to.

The minimum Azure Resource Manager role required for this script to succeed is Contributor on all of the subscriptions, so sign in with a user that has that role across every subscription you want to update.

• The script below changes the TLS version of the storage accounts in every subscription the signed-in user can access, and exports a CSV file listing the storage accounts that were processed:

$Subscriptions = Get-AzSubscription
$data = foreach ($sub in $Subscriptions) {
    Write-Host "Working with subscription $($sub.Name)"
    $null = Get-AzSubscription -SubscriptionName $sub.Name | Set-AzContext
    # set the minimum TLS version on every storage account, then record the result
    Get-AzStorageAccount |
        Set-AzStorageAccount -MinimumTlsVersion TLS1_2 |
        Select-Object StorageAccountName, ResourceGroupName,
                      @{Name = 'TLSVersion'; Expression = { $_.MinimumTlsVersion }}
}

# write a CSV file listing the accounts that were updated
$data | Export-Csv -Path C:\data.csv -NoTypeInformation

You can also configure an Azure Policy to enforce TLS 1.2 on storage accounts going forward, by using the JSON below as the policy rule in a policy assignment.

{
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.Storage/storageAccounts"
        },
        {
          "not": {
            "field": "Microsoft.Storage/storageAccounts/minimumTlsVersion",
            "equals": "TLS1_2"
          }
        }
      ]
    },
    "then": {
      "effect": "deny"
    }
  }
}
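If you would rather drive the same bulk update from Python, it helps to keep the selection step (pick only the accounts below TLS 1.2) as a pure function that can be tested without Azure credentials. The account records below are made up, and the SDK call in the comment is an assumption (it needs azure-mgmt-storage and a real credential):

```python
# Accounts with these minimum TLS settings still need the bump to TLS1_2.
OUTDATED_TLS = {"TLS1_0", "TLS1_1"}

def accounts_to_update(accounts):
    """accounts: iterable of (name, resource_group, minimum_tls_version)
    tuples. Return only the records still below TLS1_2."""
    return [a for a in accounts if a[2] in OUTDATED_TLS]

# Hypothetical inventory, e.g. loaded from the CSV the question produces.
inventory = [
    ("stlogs01", "rg-prod", "TLS1_0"),
    ("stimages", "rg-prod", "TLS1_2"),
    ("stbackup", "rg-dr", "TLS1_1"),
]

for name, rg, tls in accounts_to_update(inventory):
    print(f"would update {name} in {rg} (currently {tls})")
    # With the Azure SDK for Python this would become something like
    # (assumption: azure-mgmt-storage installed, `client` authenticated):
    #   client.storage_accounts.update(rg, name,
    #       {"minimum_tls_version": "TLS1_2"})
```

Filtering first also gives you a dry-run list you can review before touching several hundred accounts.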

Source https://stackoverflow.com/questions/71141765

QUESTION

functions and events in OCI

Asked 2022-Feb-12 at 00:17

I have a scenario where I want to run commands on an instance, using Functions, based on events in OCI (Oracle Cloud Infrastructure). The flow is: Object Storage object update/modification -> trigger event -> execute function -> run commands on specific instances using Run Command. Is this achievable? Currently I see that using the Run Command service requires OCI config files (a profile).

ANSWER

Answered 2022-Feb-12 at 00:17

To do this, you can use the OCI Java or Python SDK within the function to invoke the Run Command API.

See the API details under "Using the API" in the docs. You will also need to configure the Functions resource principal to have access to the Compute instance in question; see the docs for details on configuring the Functions resource principal.

Source https://stackoverflow.com/questions/70648435

QUESTION

Cannot open Minio in browser after dockerizing it in Spring Boot App

Asked 2022-Jan-15 at 12:04

I have a problem opening MinIO in the browser. I just created a Spring Boot app that uses it.

Here is my application.yaml file shown below.

server:
  port: 8085
spring:
  application:
    name: springboot-minio
minio:
  endpoint: http://127.0.0.1:9000
  port: 9000
  accessKey: minioadmin # Login Account
  secretKey: minioadmin # Login Password
  secure: false
  bucket-name: commons # Bucket Name
  image-size: 10485760 # Maximum size of picture file
  file-size: 1073741824 # Maximum file size

Here is my docker-compose.yaml file shown below.

version: '3.8'

services:
  minio:
    image: minio/minio:latest
    container_name: minio
    environment:
      MINIO_ROOT_USER: "minioadmin"
      MINIO_ROOT_PASSWORD: "minioadmin"
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001

I run it with these commands:

1) docker-compose up -d
2) docker ps -a
3) docker run minio/minio:latest

Here is the result:

C:\Users\host\IdeaProjects\SpringBootMinio>docker run minio/minio:latest
NAME:
  minio - High Performance Object Storage

DESCRIPTION:
  Build high performance data infrastructure for machine learning, analytics and application data workloads with MinIO

USAGE:
  minio [FLAGS] COMMAND [ARGS...]

COMMANDS:
  server   start object storage server
  gateway  start object storage gateway

FLAGS:
  --certs-dir value, -S value  path to certs directory (default: "/root/.minio/certs")
  --quiet                      disable startup information
  --anonymous                  hide sensitive information from logging
  --json                       output server logs and startup information in json format
  --help, -h                   show help
  --version, -v                print the version

VERSION:
  RELEASE.2022-01-08T03-11-54Z

When I open 127.0.0.1:9000 in the browser, the MinIO login page does not load.

How can I fix my issue?

ANSWER

Answered 2022-Jan-15 at 12:04

The MinIO documentation includes a MinIO Docker Quickstart Guide that has some recipes for starting the container. The important thing here is that you cannot just docker run minio/minio; it needs a command to run, probably server. This also needs to be translated into your Compose setup.

The first example on that page breaks down like so:

docker run                     \
  -p 9000:9000 -p 9001:9001    \  # publish ports
  -e "MINIO_ROOT_USER=..."     \  # set environment variables
  -e "MINIO_ROOT_PASSWORD=..." \
  quay.io/minio/minio          \  # image name
  server /data --console-address ":9001"  # command to run

That final command is important. In your example where you just docker run the image and get a help message, it's because you omitted the command. In the Compose setup you also don't have a command: line; if you look at docker-compose ps I expect you'll see the container is exited, and docker-compose logs minio will probably show the same help message.

You can include that command in your Compose setup with command::

version: '3.8'
services:
  minio:
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: "..."
      MINIO_ROOT_PASSWORD: "..."
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
    command: server /data --console-address :9001  # <-- add this

Source https://stackoverflow.com/questions/70720375

QUESTION

Context of using Object Storage

Asked 2022-Jan-11 at 09:13

Can someone please give me a solid use case (in terms of performance, scalability, or reliability) where object storage should be used over a block storage file system?

I am confused because our PM wants us to use MinIO in a company project, but didn't give any reasons why we should use it or any obvious pros.

I have read the posts below, but I don't think they resolve my question.
What is Object storage really?
Difference between Object Storage And File Storage

ANSWER

Answered 2022-Jan-11 at 09:13

Block storage is a traditional disk. In the old days, you would walk down to the computer store, buy a Hard Disk Drive that has a specific amount of storage (eg 1TB), plug it into the computer and then use it for storage. If it runs out of disk space, you had to delete files or buy additional disks.

This type of storage is called 'Block Storage' because the disk is divided into many blocks. The Operating System is responsible for managing what data is stored in each block and it also maintains a directory of those files. You'll see terms like FAT32 or exFAT for these storage methods.

Computers expect to have this type of directly attached disk storage. It's where the computer keeps its operating system, applications, files, etc. It's the C: and D: drives you see on Windows computers. When using services like Amazon EC2, you can attach Block Storage by using the Amazon Elastic Block Store service (Amazon EBS). Even though storage is virtual (meaning you don't need to worry about the physical disks), you still need to specify the size of the disk because it is pretending to be a traditional disk drive. Therefore, you can run out of space on these disks (but it is fairly easy to expand their size).

Next comes network-attached storage. This is the type of storage that companies give employees where they can save their documents on the network instead of a local disk (eg the H: drive). The beauty of network-attached storage is that it doesn't look like blocks on a disk -- instead, the computer just says "save this file" or "open this file". The request goes across the network to a file server, which is responsible for storing the actual data on the disk. This is a much more efficient way to store data since it is centralized rather than on everybody's computer and it is much easier to back up. However, your company still needs to have disk drives that store the actual data.

Then comes object storage, which has become popular with the Cloud. You can store files in Amazon S3 (or MinIO, which is S3-compatible) without worrying about hard disks and backups -- they are the job of the 'cloud'. You simply store data and somebody else worries about how that data is stored. It is typically charged via pay-as-you-go, so instead of buying expensive hard disks up-front, you just pay for the amount of storage used. The data is typically automatically replicated between multiple disks so it can survive the failure of disk drives and even data centers. You can think of cloud-based object storage as unlimited in size. (It isn't actually unlimited, but it acts like it is.)

Services like S3 and MinIO also do more than simply store the data. They can make the objects available via the Internet without having to run a web server. They can have fine-grained permissions to control who (and what) can access the data. Amazon S3 is very 'close' to other AWS services, making it very fast to use data in Amazon EC2, Amazon EMR, Amazon RDS, etc. It's even possible to use query engines like Amazon Athena that allow you to run SQL commands to query data stored in Amazon S3 without needing to load it into a database. You can choose different storage classes to reduce cost while trading-off access speed (like the old days of tape backup). So think of Object Storage as 'intelligent storage' that is much more capable than a dumb disk drive.

Bottom line: Computers expect to have block storage to boot and run apps, but block storage isn't a great way to manage the storage of large amounts of data. Object Storage in the Cloud is as simple as uploading and downloading data without having to worry about how it is stored and how to manage it -- that is the cloud's job. You get to spend your time adding value to your company rather than managing storage on disk drives.
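The difference in access models described above can be made concrete with a toy sketch (everything here is illustrative, not a real storage driver): block storage hands out numbered fixed-size blocks that the caller must manage, while object storage deals in whole named objects.

```python
class BlockDevice:
    """Block storage: a flat array of fixed-size blocks. The caller (the
    file system) decides which blocks hold which file's data."""
    def __init__(self, blocks, block_size=512):
        self.block_size = block_size
        self.data = bytearray(blocks * block_size)

    def write_block(self, index, payload):
        assert len(payload) <= self.block_size
        start = index * self.block_size
        self.data[start:start + len(payload)] = payload

    def read_block(self, index):
        start = index * self.block_size
        return bytes(self.data[start:start + self.block_size])


class ObjectStore:
    """Object storage: whole named objects in and out. No blocks, no
    directories, no free-space management visible to the caller."""
    def __init__(self):
        self._objects = {}

    def put_object(self, key, body):
        self._objects[key] = bytes(body)

    def get_object(self, key):
        return self._objects[key]


# Block style: the caller must track that this data lives in block 7.
disk = BlockDevice(blocks=16)
disk.write_block(7, b"quarterly numbers")

# Object style: the store tracks everything; the caller just names the object.
s3ish = ObjectStore()
s3ish.put_object("reports/q1.txt", b"quarterly numbers")
print(s3ish.get_object("reports/q1.txt"))
```

Real object stores like S3 and MinIO layer durability, replication, permissions, and HTTP access on top of this put/get model, which is what the answer means by 'intelligent storage'.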

Source https://stackoverflow.com/questions/70662392

QUESTION

Multipart upload with presigned urls - Scaleway S3-compatible object storage

Asked 2021-Dec-24 at 21:44

I’m trying to get multipart upload working on Scaleway Object Storage (S3-compatible) with presigned URLs, but I’m getting 403 errors on the preflight request generated by the browser, even though my CORS settings seem correctly set (basically wildcards on allowed headers and origins).

The error comes with a 403 status code and is as follows:

<?xml version='1.0' encoding='UTF-8'?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><RequestId>...</RequestId></Error>

I’m stuck on this one for a while now. I tried to copy the preflight request from my browser to reproduce it elsewhere and tweak it a little. Removing the query params from the URL of the preflight request makes the request successful (it returns a 200 with the Access-Control-Allow-* response headers correctly set), but this is obviously not the browser's behavior...

This Doesn’t work (secrets, keys and names have been changed)

curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png?AWSAccessKeyId=XXXXXXXXXXXXXXXXXXXX&Expires=1638217988&Signature=NnP1XLlcvPzZnsUgDAzm1Uhxri0%3D&partNumber=1&uploadId=OWI1NWY5ZGrtYzE3MS00MjcyLWI2NDAtNjFkYTM1MTRiZTcx' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'

This Works (secrets, keys and names have been changed)

curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'

The url comes from the aws-sdk and is generated this way :

const S3Client = new S3({
  credentials: {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
  endpoint: `https://s3.${env.SCW_REGION}.scw.cloud`,
})

S3Client.getSignedUrlPromise('uploadPart', {
    Bucket: bucket,
    Key: key,
    UploadId: multipartUpload.UploadId,
    PartNumber: idx + 1,
})

and used this way in frontend:

// url being the url generated in backend as demonstrated above
const response = await fetch(url, {
  method: 'PUT',
  body: filePart,
  signal: abortController.signal,
})

If anyone can give me a hand with this, that would be great!

ANSWER

Answered 2021-Dec-01 at 20:49

As it turns out, Scaleway Object Storage is not fully S3-compatible in this case.
Here is a workaround:

  • Install the aws4 library to sign requests easily (or follow this Scaleway doc to sign your request manually)
  • Form your request exactly as stated in this other Scaleway doc (this is where the aws-sdk behavior differs: it generates a URL with AWSAccessKeyId, Expires and Signature query params, which make the Scaleway API fail. The Scaleway API wants only partNumber and uploadId.)
  • Return the generated url and headers to the frontend
// Backend code
const signedRequest = aws4.sign(
  {
    method: 'PUT',
    path: `/${key}?partNumber=${idx + 1}&uploadId=${
      multipartUpload.UploadId
    }`,
    service: 's3',
    region: env.SCW_REGION,
    host: `${bucket}.s3.${env.SCW_REGION}.scw.cloud`,
  },
  {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
)

return {
  url: `https://${signedRequest.host}${signedRequest.path}`,
  headers: Object.keys(signedRequest.headers).map((key) => ({
    key,
    value: signedRequest.headers[key] as string,
  })),
}

And then in frontend:

// Frontend code
const headers = signedRequest.headers.reduce<Record<string, string>>(
  (acc, h) => ({ ...acc, [h.key]: h.value }),
  {},
)

const response = await fetch(signedRequest.url, {
  method: 'PUT',
  body: filePart,
  headers,
  signal: abortController.signal,
})

Scaleway knows about this issue, as I discussed it directly with their support team, and they are putting effort into being as compliant as possible with S3. The issue might be fixed by the time you read this. Thanks to them for the really quick response time and for taking this seriously.
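To make the difference concrete: the presigned URL from aws-sdk carries AWSAccessKeyId, Expires and Signature query parameters, while Scaleway's multipart API accepts only partNumber and uploadId. A small stdlib Python sketch of filtering a part URL down to those two params (the URL below is illustrative; note that dropping the signature params does not authorize the request by itself, which is why the workaround re-signs the request with aws4 headers):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# The only query params Scaleway's multipart API expects on a part upload
KEEP = {"partNumber", "uploadId"}

def strip_extra_params(url: str) -> str:
    """Drop all query params except partNumber and uploadId."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in KEEP]
    return urlunsplit(parts._replace(query=urlencode(kept)))

url = ("https://bucket.s3.fr-par.scw.cloud/object.png"
       "?AWSAccessKeyId=XXX&Expires=123&Signature=abc"
       "&partNumber=1&uploadId=uid")
print(strip_extra_params(url))
# https://bucket.s3.fr-par.scw.cloud/object.png?partNumber=1&uploadId=uid
```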

Source https://stackoverflow.com/questions/70169992

QUESTION

Upload an image from input to server using JQuery/Ajax

Asked 2021-Dec-16 at 20:25

I have an Input like this:
<input type="file" accept="image/*">

Now I want to send the image to the server (I guess AJAX is the way to go?).
From the server I want to save the image to AWS S3 storage (not actually my problem).
The question is: how do I send the image to PHP in a way that I can later store it in an object storage?

ANSWER

Answered 2021-Dec-16 at 20:25

This code was copied from the following web page: https://www.w3schools.com/PHP/php_file_upload.asp

Note that it's much harder to do this with AJAX/jQuery, so you can use this plain-form approach instead.

First check your php.ini file (e.g. C:/php-install-path/php.ini on Windows) and search for the following line:

file_uploads = On

It may appear as

file_uploads = Off

so you need to turn it to On. Then restart your web server so the change takes effect.

Next, create the form.

<!DOCTYPE html>
<html>
<body>

<form action="upload.php" method="post" enctype="multipart/form-data">
  Select image to upload:
  <input type="file" name="fileToUpload" id="fileToUpload">
  <input type="submit" value="Upload Image" name="submit">
</form>

</body>
</html>

The form submits to a PHP file, since it is the PHP server side that receives the uploaded file.

For the PHP file, put code like this:

<?php
$target_dir = "uploads/";
$target_file = $target_dir . basename($_FILES["fileToUpload"]["name"]);
$uploadOk = 1;
$imageFileType = strtolower(pathinfo($target_file, PATHINFO_EXTENSION));

// Check if image file is an actual image or fake image
if (isset($_POST["submit"])) {
  $check = getimagesize($_FILES["fileToUpload"]["tmp_name"]);
  if ($check !== false) {
    echo "File is an image - " . $check["mime"] . ".";
    $uploadOk = 1;
  } else {
    echo "File is not an image.";
    $uploadOk = 0;
  }
}

// Check if file already exists
if (file_exists($target_file)) {
  echo "Sorry, file already exists.";
  $uploadOk = 0;
}

// Check file size
if ($_FILES["fileToUpload"]["size"] > 500000) {
  echo "Sorry, your file is too large.";
  $uploadOk = 0;
}

// Allow certain file formats
if ($imageFileType != "jpg" && $imageFileType != "png" && $imageFileType != "jpeg"
    && $imageFileType != "gif") {
  echo "Sorry, only JPG, JPEG, PNG & GIF files are allowed.";
  $uploadOk = 0;
}

// Check if $uploadOk is set to 0 by an error
if ($uploadOk == 0) {
  echo "Sorry, your file was not uploaded.";
// if everything is ok, try to upload file
} else {
  if (move_uploaded_file($_FILES["fileToUpload"]["tmp_name"], $target_file)) {
    echo "The file " . htmlspecialchars(basename($_FILES["fileToUpload"]["name"])) . " has been uploaded.";
  } else {
    echo "Sorry, there was an error uploading your file.";
  }
}
?>

Bonus: If you want to create a function for this, you can.

<?php
function uploadFile($names, $button) {
  $file = $_FILES[$names];
  $target_dir = "uploads/";
  $target_file = $target_dir . basename($file["name"]);
  $uploadOk = 1;
  $imageFileType = strtolower(pathinfo($target_file, PATHINFO_EXTENSION));

  // Check if image file is an actual image or fake image
  if (!empty($button)) {
    $check = getimagesize($file["tmp_name"]);
    if ($check !== false) {
      echo "File is an image - " . $check["mime"] . ".";
      $uploadOk = 1;
    } else {
      echo "File is not an image.";
      $uploadOk = 0;
    }
  }

  // Check if file already exists
  if (file_exists($target_file)) {
    echo "Sorry, file already exists.";
    $uploadOk = 0;
  }

  // Check file size
  if ($file["size"] > 500000) {
    echo "Sorry, your file is too large.";
    $uploadOk = 0;
  }

  // Allow certain file formats
  if ($imageFileType != "jpg" && $imageFileType != "png" && $imageFileType != "jpeg"
      && $imageFileType != "gif") {
    echo "Sorry, only JPG, JPEG, PNG & GIF files are allowed.";
    $uploadOk = 0;
  }

  // Check if $uploadOk is set to 0 by an error
  if ($uploadOk == 0) {
    echo "Sorry, your file was not uploaded.";
  // if everything is ok, try to upload file
  } else {
    if (move_uploaded_file($file["tmp_name"], $target_file)) {
      echo "The file " . htmlspecialchars(basename($file["name"])) . " has been uploaded.";
    } else {
      echo "Sorry, there was an error uploading your file.";
    }
  }
}
?>

Then include or require the file in the PHP file that's receiving the file upload.

<?php
include_once("file_upload_fn.php");
uploadFile("fileToUpload", $_POST['submit']);
?>

There you go. That's how you use PHP to upload an image.
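For comparison, the same validation rules (extension allowlist and size cap) are easy to express in any server language. A minimal Python sketch of the checks, with a hypothetical `check_upload` helper that is not part of the original answer:

```python
import os
from typing import List

ALLOWED_EXTS = {"jpg", "jpeg", "png", "gif"}
MAX_BYTES = 500_000  # same 500 KB cap as the PHP snippet

def check_upload(filename: str, size: int) -> List[str]:
    """Return a list of validation errors; an empty list means the upload is OK."""
    errors = []
    ext = os.path.splitext(filename)[1].lstrip(".").lower()
    if ext not in ALLOWED_EXTS:
        errors.append("only JPG, JPEG, PNG & GIF files are allowed")
    if size > MAX_BYTES:
        errors.append("file is too large")
    return errors

print(check_upload("photo.png", 120_000))  # []
print(check_upload("movie.avi", 900_000))  # both errors
```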

Source https://stackoverflow.com/questions/70384416

QUESTION

How to use openstack on client side

Asked 2021-Dec-07 at 15:06

I'm trying to build an upload/download service for files on my website, using object storage from OpenStack. The thing is, I have no problem doing it via PHP and the OpenStack PHP SDK, but when I try to do it from JavaScript I can't find a good SDK or method. I'm not using Node; I have a PHP server and a JavaScript client. I would like to upload or download files directly from the JavaScript client, without the file passing through the PHP server. I managed to create OpenStack tokens with the PHP SDK; maybe I could send those to the JavaScript client so it can authenticate? It's been one week of searching with no solution...

ANSWER

Answered 2021-Dec-07 at 15:06

OpenStack has an S3 plugin that can ease your search for a library/SDK.

Otherwise, you should forge a temporary URL server-side; I'm sure your PHP library has tooling for this. The URL can then be used client-side to PUT the file.

The temporary URL is forged in a way that grants temporary write-only access for the upload. The same kind of URL can also grant read-only access to objects.

So either the client asks your PHP server for a place to upload to and gets the URL back, or the client simply sends the upload request to your PHP server, which forges the link and redirects the request to that URL.
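For reference, Swift's TempURL middleware computes the signature as an HMAC-SHA1 over the HTTP method, the expiry timestamp, and the object path. A minimal Python sketch of forging such a URL server-side (the account/container/object path and the secret key below are made up):

```python
import hmac
import hashlib
import time
from typing import Optional

def swift_temp_url(method: str, path: str, key: str, ttl: int,
                   now: Optional[int] = None) -> str:
    """Forge a Swift TempURL: HMAC-SHA1 over 'METHOD\\nexpires\\npath'."""
    expires = int(time.time() if now is None else now) + ttl
    message = f"{method}\n{expires}\n{path}".encode()
    sig = hmac.new(key.encode(), message, hashlib.sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# Hypothetical account/container/object and secret key:
url = swift_temp_url("PUT", "/v1/AUTH_account/container/photo.png", "secret", 3600)
print(url)
```

The resulting path plus query string is appended to the cluster's host, and the client can PUT the file to it until the expiry passes.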

Source https://stackoverflow.com/questions/70262339

QUESTION

How to call the right function (as a string) based on an argument?

Asked 2021-Nov-30 at 09:49

I have a class which is intended to create an IBM Cloud Object Storage object. There are two functions I can use for initialization: resource() and client(). In the __init__ function there is an object_type parameter which decides which function to call.

class ObjectStorage:

    def __init__(self, object_type: str, endpoint: str, api_key: str, instance_crn: str, auth_endpoint: str):

        valid_object_types = ("resource", "client")
        if object_type not in valid_object_types:
            raise ValueError("Object initialization error: Status must be one of %r." % valid_object_types)

        method_type = getattr(ibm_boto3, object_type)()
        self._conn = method_type(
                    "s3",
                    ibm_api_key_id = api_key,
                    ibm_service_instance_id= instance_crn,
                    ibm_auth_endpoint = auth_endpoint,
                    config=Config(signature_version="oauth"),
                    endpoint_url=endpoint,
        )

    @property
    def connect(self):
        return self._conn

If I run this, I receive the following error:

TypeError: client() missing 1 required positional argument: 'service_name'

If I use this in a simple function and call it by using ibm_boto3.client() or ibm_boto3.resource(), it works like a charm.

def get_cos_client_connection():
    COS_ENDPOINT = "xxxxx"
    COS_API_KEY_ID = "yyyyy"
    COS_INSTANCE_CRN = "zzzzz"
    COS_AUTH_ENDPOINT = "----"
    cos = ibm_boto3.client("s3",
            ibm_api_key_id=COS_API_KEY_ID,
            ibm_service_instance_id=COS_INSTANCE_CRN,
            ibm_auth_endpoint=COS_AUTH_ENDPOINT,
            config=Config(signature_version="oauth"),
            endpoint_url=COS_ENDPOINT
    )
    return cos

cos = get_cos_client_connection()

It looks like it calls the client function on this line, but I am not sure why:

method_type = getattr(ibm_boto3, object_type)()

I tried using:

method_type = getattr(ibm_boto3, lambda: object_type)()

but it was a silly move.

The client function looks like this btw:

def client(*args, **kwargs):
    """
    Create a low-level service client by name using the default session.

    See :py:meth:`ibm_boto3.session.Session.client`.
    """
    return _get_default_session().client(*args, **kwargs)

which refers to:

def client(self, service_name, region_name=None, api_version=None,
           use_ssl=True, verify=None, endpoint_url=None,
           aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None,
           ibm_api_key_id=None, ibm_service_instance_id=None, ibm_auth_endpoint=None,
           auth_function=None, token_manager=None,
           config=None):
    return self._session.create_client(
        service_name, region_name=region_name, api_version=api_version,
        use_ssl=use_ssl, verify=verify, endpoint_url=endpoint_url,
        aws_access_key_id=aws_access_key_id,
        aws_secret_access_key=aws_secret_access_key,
        aws_session_token=aws_session_token,
        ibm_api_key_id=ibm_api_key_id, ibm_service_instance_id=ibm_service_instance_id,
        ibm_auth_endpoint=ibm_auth_endpoint, auth_function=auth_function,
        token_manager=token_manager, config=config)

Same goes for resource()

ANSWER

Answered 2021-Nov-30 at 09:49

If you look at the stack trace, it will probably point to this line:

method_type = getattr(ibm_boto3, object_type)()

And not the line after it, where you actually call it. The reason is simple: those last two parentheses () mean you're calling the function you just retrieved via getattr, immediately, and with no arguments.
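The same getattr-then-call distinction can be seen with the standard library alone; here is a minimal illustration, with math.sqrt standing in for the ibm_boto3 factories:

```python
import math

name = "sqrt"

# getattr alone retrieves the attribute: here, the sqrt function itself.
fn = getattr(math, name)

# Appending () would call whatever getattr returned, immediately and
# with no arguments -- which is what the failing line did.
# fn() raises TypeError; fn(16.0) is the intended, deferred call.
result = fn(16.0)
print(result)  # 4.0
```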

So simply do this:

method_type = getattr(ibm_boto3, object_type)

That way method_type is the method from the ibm_boto3 module you're interested in, not the result of calling it. You can confirm this by debugging with import pdb; pdb.set_trace() and inspecting it, or by simply adding a print statement:

method_type = getattr(ibm_boto3, object_type)
print(method_type)
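Putting the fix into the class: look the factory up without calling it, then call it once with the real arguments. Below is a runnable sketch of that pattern; since ibm_boto3 may not be installed, a SimpleNamespace stub stands in for it (the stub's factories and their return values are purely illustrative):

```python
from types import SimpleNamespace

# Stand-in for ibm_boto3: an object exposing callable client/resource
# factories, so the sketch runs without the real package.
ibm_boto3_stub = SimpleNamespace(
    client=lambda service, **kwargs: ("client", service, kwargs),
    resource=lambda service, **kwargs: ("resource", service, kwargs),
)

class ObjectStorage:
    def __init__(self, object_type: str, endpoint: str):
        valid_object_types = ("resource", "client")
        if object_type not in valid_object_types:
            raise ValueError(
                "Object initialization error: Status must be one of %r."
                % (valid_object_types,)
            )
        # No trailing (): method_type is the factory, not its result.
        method_type = getattr(ibm_boto3_stub, object_type)
        # Call the factory once, with the arguments it actually expects.
        self._conn = method_type("s3", endpoint_url=endpoint)

    @property
    def connect(self):
        return self._conn

storage = ObjectStorage("client", "https://example.test")
print(storage.connect[0])  # client
```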

Source https://stackoverflow.com/questions/70167132

Community Discussions contain sources that include Stack Exchange Network
