Explore all AWS open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in AWS

localstack: LocalStack release 0.14.2
cortex: v0.42.0
aws-sdk-go: Release v1.43.45
aws-sdk-js: Release v2.1120.0
troposphere: Release 4.0.1

Popular Libraries in AWS

localstack

by localstack (Python)

Stars: 40029 | License: NOASSERTION

💻 A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline!

og-aws

by open-guides (Shell)

Stars: 29988 | License: CC-BY-4.0

📙 Amazon Web Services - a practical guide

aws-cli

by aws (Python)

Stars: 12274 | License: NOASSERTION

Universal Command Line Interface for Amazon Web Services

awesome-aws

by donnemartin (Python)

Stars: 9608 | License: NOASSERTION

A curated list of awesome Amazon Web Services (AWS) libraries, open source repos, guides, blogs, and other resources. Featuring the Fiery Meter of AWSome.

amplify-js

by aws-amplify (TypeScript)

Stars: 8675 | License: Apache-2.0

A declarative JavaScript library for application development using cloud services.

cortex

by cortexlabs (Go)

Stars: 7642 | License: Apache-2.0

Production infrastructure for machine learning at scale

aws-sdk-go

by aws (Go)

Stars: 7580 | License: Apache-2.0

AWS SDK for the Go programming language.

boto3

by boto (Python)

Stars: 7181 | License: Apache-2.0

AWS SDK for Python

aws-sdk-js

by aws (JavaScript)

Stars: 7006 | License: Apache-2.0

AWS SDK for JavaScript in the browser and Node.js

Trending New libraries in AWS

eks-distro

by aws (Shell)

Stars: 1100 | License: Apache-2.0

Amazon EKS Distro (EKS-D) is a Kubernetes distribution based on and used by Amazon Elastic Kubernetes Service (EKS) to create reliable and secure Kubernetes clusters.

external-secrets

by external-secrets (Go)

Stars: 914 | License: Apache-2.0

External Secrets Operator reads information from a third-party service like AWS Secrets Manager and automatically injects the values as Kubernetes Secrets.

aws-solutions-constructs

by awslabs (TypeScript)

Stars: 738 | License: Apache-2.0

The AWS Solutions Constructs Library is an open-source extension of the AWS Cloud Development Kit (AWS CDK) that provides multi-service, well-architected patterns for quickly defining solutions

akri

by deislabs (Rust)

Stars: 632 | License: MIT

A Kubernetes Resource Interface for the Edge

100DaysOfAWS

by sarthaksavvy (Shell)

Stars: 565

aws-perspective

by awslabs (JavaScript)

Stars: 496 | License: Apache-2.0

AWS Perspective is a solution to visualize AWS Cloud workloads. Using Perspective you can build, customize, and share detailed architecture diagrams of your workloads based on live data from AWS. Perspective works by maintaining an inventory of the AWS resources across your accounts and regions, mapping relationships between them and displaying them in the Web User Interface (Web UI). When you need to make changes to a resource, Perspective saves time by providing a link to the resource in the AWS Console.

aws-security-reference-architecture-examples

by aws-samples (Python)

Stars: 481 | License: NOASSERTION

Example solutions demonstrating how to implement patterns within the AWS Security Reference Architecture guide using CloudFormation and Customizations for AWS Control Tower.

Qv2ray

by Shadowsocks-NET (C++)

Stars: 438 | License: GPL-3.0

A cross platform connection manager for V2Ray and other backends.

ecs-plugin

by docker-archive (Go)

Stars: 423 | License: Apache-2.0

See http://github.com/docker/compose-cli

Top Authors in AWS

1. aws-samples: 405 Libraries, 20379 stars
2. awslabs: 153 Libraries, 29910 stars
3. aws: 86 Libraries, 58337 stars
4. amazon-archives: 64 Libraries, 9170 stars
5. dabit3: 40 Libraries, 2604 stars
6. aws-cloudformation: 40 Libraries, 4988 stars
7. pahud: 26 Libraries, 341 stars
8. cloudyr: 23 Libraries, 621 stars
9. iann0036: 23 Libraries, 3051 stars
10. guardian: 17 Libraries, 248 stars

Trending Kits in AWS

No Trending Kits are available at this moment for AWS

Trending Discussions on AWS

Python/Docker ImportError: cannot import name 'json' from itsdangerous

Docker push to AWS ECR hangs immediately and times out

What is jsconfig.json

Error: While updating laravel 8 to 9. Script @php artisan package:discover --ansi handling the post-autoload-dump event returned with error code 1

Python Selenium AWS Lambda Change WebGL Vendor/Renderer For Undetectable Headless Scraper

AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'>

Terraform AWS Provider Error: Value for unconfigurable attribute. Can't configure a value for "acl": its value will be decided automatically

How can I get output from boto3 ecs execute_command?

AWS Graphql lambda query

'AmplifySignOut' is not exported from '@aws-amplify/ui-react'

QUESTION

Python/Docker ImportError: cannot import name 'json' from itsdangerous

Asked 2022-Mar-31 at 12:49

I am trying to get a Flask and Docker application to work, but when I run it with docker-compose up from my Visual Studio terminal it fails with ImportError: cannot import name 'json' from itsdangerous. I have looked for possible solutions, but there are not many available right now, here or elsewhere. The only two I could find suggest changing the installed versions of MarkupSafe and itsdangerous: https://serverfault.com/questions/1094062/from-itsdangerous-import-json-as-json-importerror-cannot-import-name-json-fr and a GitHub issue that essentially recommends the same MarkupSafe and itsdangerous changes, https://github.com/aws/aws-sam-cli/issues/3661. I have also tried creating a virtual environment named veganetworkscriptenv to install the packages, but that failed as well. I am currently using Flask 2.0.0 and Docker 5.0.0, and the error occurs on line eight of vegamain.py.

Here is the full ImportError that I get when I try and run the program:

veganetworkscript-backend-1  | Traceback (most recent call last):
veganetworkscript-backend-1  |   File "/app/vegamain.py", line 8, in <module>
veganetworkscript-backend-1  |     from flask import Flask
veganetworkscript-backend-1  |   File "/usr/local/lib/python3.9/site-packages/flask/__init__.py", line 19, in <module>
veganetworkscript-backend-1  |     from . import json
veganetworkscript-backend-1  |   File "/usr/local/lib/python3.9/site-packages/flask/json/__init__.py", line 15, in <module>
veganetworkscript-backend-1  |     from itsdangerous import json as _json
veganetworkscript-backend-1  | ImportError: cannot import name 'json' from 'itsdangerous' (/usr/local/lib/python3.9/site-packages/itsdangerous/__init__.py)
veganetworkscript-backend-1 exited with code 1

Here are my requirements.txt, vegamain.py, Dockerfile, and docker-compose.yml files:

requirements.txt:

Flask==2.0.0
Flask-SQLAlchemy==2.4.4
SQLAlchemy==1.3.20
Flask-Migrate==2.5.3
Flask-Script==2.0.6
Flask-Cors==3.0.9
requests==2.25.0
mysqlclient==2.0.1
pika==1.1.0
wolframalpha==4.3.0

vegamain.py:

# Veganetwork (C) TetraSystemSolutions 2022
# all rights are reserved.
#
# Author: Trevor R. Blanchard Feb-19-2022-Jul-30-2022
#

# get our imports in order first
from flask import Flask  # <-- error occurs here!!!

# start the application through flask.
app = Flask(__name__)

# if set to true will return only a "Hello World" string.
Debug = True

# start a route to the index part of the app in flask.
@app.route('/')
def index():
    if (Debug == True):
        return 'Hello World!'
    else:
        pass

# start the flask app here --->
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

Dockerfile:

FROM python:3.9
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app

docker-compose.yml:

version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python vegamain.py'
    ports:
      - 8004:5000
    volumes:
      - .:/app
    depends_on:
      - db

#  queue:
#    build:
#      context: .
#      dockerfile: Dockerfile
#    command: 'python -u consumer.py'
#    depends_on:
#      - db

  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33069:3306

How exactly can I fix this code? Thank you!

ANSWER

Answered 2022-Feb-20 at 12:31

I was facing the same issue while running Docker containers with Flask.

I downgraded Flask to 1.1.4 and markupsafe to 2.0.1, which solved my issue.

Check this for reference.
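If you would rather stay on Flask 2.0.0, a commonly reported alternative is to pin the transitive dependencies whose newer releases removed the APIs Flask 2.0 imports. This is a sketch based on the general cause of this error, not something verified against this exact project:

# requirements.txt (sketch): keep Flask 2.0.0 but pin its transitive dependencies
Flask==2.0.0
itsdangerous==2.0.1   # itsdangerous 2.1 removed the `json` module that flask.json imports
markupsafe==2.0.1     # markupsafe 2.1 removed `soft_unicode`, used by older Jinja2

After changing requirements.txt, rebuild the image (for example with docker-compose build --no-cache) so the pinned versions are actually installed.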

Source https://stackoverflow.com/questions/71189819

QUESTION

Docker push to AWS ECR hangs immediately and times out

Asked 2022-Mar-30 at 07:53

I'm trying to push my first docker image to ECR. I've followed the steps provided by AWS and things seem to be going smoothly until the final push, which immediately times out. Specifically, I pass my AWS ECR credentials to docker and get a "login succeeded" message. I then tag the image, which also works. When pushing to the ECR repo I get no error message, just the following:

The push refers to repository [xxxxxxxxxxx.dkr.ecr.ca-central-1.amazonaws.com/reponame]
714c1b96dd83: Retrying in 1 second 
d2cdc77dd068: Retrying in 1 second 
30aad807caf5: Retrying in 1 second 
0559774c4ea2: Retrying in 1 second 
285b8616682f: Retrying in 1 second 
4aeea0ec2b15: Waiting 
1b1312f842d8: Waiting 
c310009e0ef3: Waiting 
a48777e566d3: Waiting 
2a0c9f28029a: Waiting 
EOF

It tries a bunch of times and then exits with no message. Any idea what's wrong?

ANSWER

Answered 2022-Jan-02 at 14:23

I figured out my issue. I wasn't using the correct credentials. I had a personal AWS account as my default credentials and needed to add my work profile to my credentials.

EDIT
If you have multiple AWS profiles, you can specify the profile name when logging in with Docker, as below (assuming you have already run aws configure --profile someprofile at an earlier time):

aws ecr get-login-password --region us-east-1 --profile someprofile | docker login ....
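The same login can also be scripted from Python with boto3. This is a sketch; the profile name, region, and the use of the Docker CLI here are assumptions, not part of the original answer:

import base64
import subprocess

import boto3

# Use the work profile explicitly instead of the default credentials.
session = boto3.Session(profile_name="someprofile", region_name="us-east-1")
ecr = session.client("ecr")

# ECR returns a base64-encoded "username:password" token plus the registry endpoint.
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
registry = auth["proxyEndpoint"]

# Equivalent of `aws ecr get-login-password ... | docker login ...`.
subprocess.run(
    ["docker", "login", "--username", username, "--password-stdin", registry],
    input=password.encode(),
    check=True,
)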

Source https://stackoverflow.com/questions/70452836

QUESTION

What is jsconfig.json

Asked 2022-Mar-29 at 17:49

If I search for this question on the internet, I only get links to the VS Code website and some blogs that implement it.

I want to know whether jsconfig.json is specific to VS Code or to JavaScript/webpack.

What happens if we deploy the application on AWS / Heroku, etc.? Do we have to make changes?

ANSWER

Answered 2021-Aug-06 at 04:10

This is definitely specific to VSCode.

The presence of jsconfig.json file in a directory indicates that the directory is the root of a JavaScript Project. The jsconfig.json file specifies the root files and the options for the features provided by the JavaScript language service.

Check more details here: https://code.visualstudio.com/docs/languages/jsconfig

You don't need this file when deploying to AWS/Heroku. You can exclude it from your commits if you are using a git repo (i.e., add jsconfig.json to your .gitignore); this keeps your project IDE-independent.
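For illustration, a minimal jsconfig.json might look like the following (the specific option values are examples, not taken from the question):

{
  "compilerOptions": {
    "module": "esnext",
    "baseUrl": "./src",
    "checkJs": true
  },
  "exclude": ["node_modules", "dist"]
}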

Source https://stackoverflow.com/questions/68675994

QUESTION

Error: While updating laravel 8 to 9. Script @php artisan package:discover --ansi handling the post-autoload-dump event returned with error code 1

Asked 2022-Mar-29 at 06:51

Nothing to install, update or remove
Generating optimized autoload files
Class App\Helpers\Helper located in C:/wamp64/www/vuexylaravel/app\Helpers\helpers.php does not comply with psr-4 autoloading standard. Skipping.
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi

   Error 

  Undefined constant Illuminate\Http\Request::HEADER_X_FORWARDED_ALL
  at C:\wamp64\www\vuexylaravel\vendor\fideloper\proxy\config\trustedproxy.php:48
     44▕      * - 'HEADER_X_FORWARDED_AWS_ELB' (If you are using AWS Elastic Load Balancer)
     45▕      *
     46▕      * @link https://symfony.com/doc/current/deployment/proxies.html
     47▕      */
  ➜  48▕     'headers' => Illuminate\Http\Request::HEADER_X_FORWARDED_ALL,
     49▕
     50▕ ];
     51▕

  1   C:\wamp64\www\vuexylaravel\vendor\laravel\framework\src\Illuminate\Support\ServiceProvider.php:138
      require()

  2   C:\wamp64\www\vuexylaravel\vendor\fideloper\proxy\src\TrustedProxyServiceProvider.php:28
      Illuminate\Support\ServiceProvider::mergeConfigFrom("C:\wamp64\www\vuexylaravel\vendor\fideloper\proxy\config\trustedproxy.php", "trustedproxy")
Script @php artisan package:discover --ansi handling the post-autoload-dump event returned with error code 1

ANSWER

Answered 2022-Feb-13 at 17:35

If you are upgrading your Laravel 8 project to Laravel 9 by importing your existing application code into a totally new Laravel 9 application skeleton, you may need to update your application's "trusted proxy" middleware.

Within your app/Http/Middleware/TrustProxies.php file, update use Fideloper\Proxy\TrustProxies as Middleware to use Illuminate\Http\Middleware\TrustProxies as Middleware.

Next, within app/Http/Middleware/TrustProxies.php, you should update the $headers property definition:

// Before...

protected $headers = Request::HEADER_X_FORWARDED_ALL;

// After...

protected $headers =
    Request::HEADER_X_FORWARDED_FOR |
    Request::HEADER_X_FORWARDED_HOST |
    Request::HEADER_X_FORWARDED_PORT |
    Request::HEADER_X_FORWARDED_PROTO |
    Request::HEADER_X_FORWARDED_AWS_ELB;

Then run composer update.

Make sure you are using PHP 8.0.

Source https://stackoverflow.com/questions/71103241

QUESTION

Python Selenium AWS Lambda Change WebGL Vendor/Renderer For Undetectable Headless Scraper

Asked 2022-Mar-21 at 20:19
Concept:

Using AWS Lambda functions with Python and Selenium, I want to create an undetectable headless Chrome scraper by passing a headless Chrome test. I check the undetectability of my headless scraper by opening the test page and taking a screenshot. I ran this test in a local IDE and on a Lambda server.


Implementation:

I will be using a python library called selenium-stealth and will follow their basic configuration:

stealth(driver,
        languages=["en-US", "en"],
        vendor="Google Inc.",
        platform="Win32",
        webgl_vendor="Intel Inc.",
        renderer="Intel Iris OpenGL Engine",
        fix_hairline=True,
        )

I implemented this configuration on a Local IDE as well as an AWS Lambda Server to compare the results.


Local IDE:

Found below are the test results running on a local IDE: [screenshot omitted]


Lambda Server:

When I run this on a Lambda server, both the WebGL Vendor and Renderer are blank, as shown below:

[screenshot omitted]

I even tried to manually change the WebGL Vendor/Renderer using the following JavaScript command:

driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {"source": "WebGLRenderingContext.prototype.getParameter = function(parameter) {if (parameter === 37445) {return 'VENDOR_INPUT';}if (parameter === 37446) {return 'RENDERER_INPUT';}return getParameter(parameter);};"})

Then I thought maybe something was wrong with the parameter number, so I ran the command without the if statement, but the same thing happened: it worked in my local IDE but had no effect on the AWS Lambda server.

Simply Put:

Is it possible to set the WebGL Vendor/Renderer on AWS Lambda? From my attempts, it seems there is no way. I have also submitted this issue on the selenium-stealth GitHub repository.

ANSWER

Answered 2021-Dec-18 at 02:01
WebGL

WebGL is a cross-platform, open web standard for a low-level 3D graphics API based on OpenGL ES, exposed to ECMAScript via the HTML5 Canvas element. WebGL at its core is a Shader-based API using GLSL, with constructs that are semantically similar to those of the underlying OpenGL ES API. It follows the OpenGL ES specification, with some exceptions for memory-managed languages such as JavaScript. WebGL 1.0 exposes the OpenGL ES 2.0 feature set; WebGL 2.0 exposes the OpenGL ES 3.0 API.

Now, with the availability of selenium-stealth, building an undetectable scraper with a Selenium-driven, ChromeDriver-initiated browsing context has become much easier.


selenium-stealth

selenium-stealth is a Python package to prevent detection. It tries to make Python Selenium more stealthy. However, as of now selenium-stealth only supports Selenium with Chrome.

  • Code Block:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium_stealth import stealth

options = Options()
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
s = Service('C:\\BrowserDrivers\\chromedriver.exe')
driver = webdriver.Chrome(service=s, options=options)

# Selenium Stealth settings
stealth(driver,
      languages=["en-US", "en"],
      vendor="Google Inc.",
      platform="Win32",
      webgl_vendor="Intel Inc.",
      renderer="Intel Iris OpenGL Engine",
      fix_hairline=True,
  )

driver.get("https://bot.sannysoft.com/")
  • Browser Screenshot: [image of the bot.sannysoft.com results omitted]

    You can find a detailed relevant discussion in Can a website detect when you are using Selenium with chromedriver?


    Changing WebGL Vendor/Renderer in AWS Lambda

    AWS Lambda enables us to deliver compressed WebGL websites to end users. When requested webpage objects are compressed, the transfer size is reduced, leading to faster downloads, lower cloud storage fees, and lower data transfer fees. Improved load times also directly influence the viewer experience and retention, which helps in improving website conversion and discoverability. Using WebGL, websites are more immersive while still being accessible via a browser URL. Through this technique, AWS Lambda can automatically compress the objects uploaded to S3.


    Background on compression and WebGL

    HTTP compression is a capability that can be built into web servers and web clients to improve transfer speed and bandwidth utilization. This capability is negotiated between the server and the client using an HTTP header which may indicate that a resource being transferred, cached, or otherwise referenced is compressed. AWS Lambda on the server-side supports Content-Encoding header.

    On the client-side, most browsers today support brotli and gzip compression through HTTP headers (Accept-Encoding: deflate, br, gzip) and can handle server response headers. This means browsers will automatically download and decompress content from a web server at the client-side, before rendering webpages to the viewer.


    Conclusion

    Due to this constraint you may not be able to change the WebGL Vendor/Renderer in AWS Lambda; doing so could directly affect how webpages are rendered to viewers and become a bottleneck in UX.
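    To compare what the local IDE run and the Lambda run actually report, a small sketch (assuming the driver object from the snippet above; this is not part of the original answer) can read the vendor/renderer directly instead of relying on screenshots:

    vendor_renderer = driver.execute_script("""
        const gl = document.createElement('canvas').getContext('webgl');
        if (!gl) { return ['no webgl context', 'no webgl context']; }
        const info = gl.getExtension('WEBGL_debug_renderer_info');
        if (!info) { return ['debug info unavailable', 'debug info unavailable']; }
        return [gl.getParameter(info.UNMASKED_VENDOR_WEBGL),
                gl.getParameter(info.UNMASKED_RENDERER_WEBGL)];
    """)
    print(vendor_renderer)  # e.g. ['Intel Inc.', 'Intel Iris OpenGL Engine'] when the override works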


    tl; dr

    You can find a couple of relevant detailed discussions linked in the original answer (see the source below).

    Source https://stackoverflow.com/questions/70265306

    QUESTION

    AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'>

    Asked 2022-Feb-25 at 13:18

    I was using pyspark on AWS EMR (4 r5.xlarge as 4 workers, each has one executor and 4 cores), and I got AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'. Below is a snippet of the code that threw this error:

    search = SearchEngine(db_file_dir = "/tmp/db")
    conn = sqlite3.connect("/tmp/db/simple_db.sqlite")
    pdf_ = pd.read_sql_query('''select zipcode, lat, lng,
                            bounds_west, bounds_east, bounds_north, bounds_south from
                            simple_zipcode''', conn)
    brd_pdf = spark.sparkContext.broadcast(pdf_)
    conn.close()


    @udf('string')
    def get_zip_b(lat, lng):
        pdf = brd_pdf.value
        out = pdf[(np.array(pdf["bounds_north"]) >= lat) &
                  (np.array(pdf["bounds_south"]) <= lat) &
                  (np.array(pdf['bounds_west']) <= lng) &
                  (np.array(pdf['bounds_east']) >= lng) ]
        if len(out):
            min_index = np.argmin( (np.array(out["lat"]) - lat)**2 + (np.array(out["lng"]) - lng)**2)
            zip_ = str(out["zipcode"].iloc[min_index])
        else:
            zip_ = 'bad'
        return zip_

    df = df.withColumn('zipcode', get_zip_b(col("latitude"), col("longitude")))

    Below is the traceback, where line 102, in get_zip_b refers to pdf = brd_pdf.value:

    21/08/02 06:18:19 WARN TaskSetManager: Lost task 12.0 in stage 7.0 (TID 1814, ip-10-22-17-94.pclc0.merkle.local, executor 6): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 605, in main
        process()
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 597, in process
        serializer.dump_stream(out_iter, outfile)
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 223, in dump_stream
        self.serializer.dump_stream(self._batched(iterator), stream)
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 141, in dump_stream
        for obj in iterator:
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/serializers.py", line 212, in _batched
        for item in iterator:
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 450, in mapper
        result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 450, in <genexpr>
        result = tuple(f(*[a[o] for o in arg_offsets]) for (arg_offsets, f) in udfs)
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/worker.py", line 90, in <lambda>
        return lambda *a: f(*a)
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/util.py", line 121, in wrapper
        return f(*args, **kwargs)
      File "/mnt/var/lib/hadoop/steps/s-1IBFS0SYWA19Z/Mobile_ID_process_center.py", line 102, in get_zip_b
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 146, in value
        self._value = self.load_from_path(self._path)
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 123, in load_from_path
        return self.load(f)
      File "/mnt/yarn/usercache/hadoop/appcache/application_1627867699893_0001/container_1627867699893_0001_01_000009/pyspark.zip/pyspark/broadcast.py", line 129, in load
        return pickle.load(file)
    AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '/mnt/miniconda/lib/python3.9/site-packages/pandas/core/internals/blocks.py'>

    Some observations and thought process:

    1, After doing some search online, the AttributeError in pyspark seems to be caused by mismatched pandas versions between driver and workers?

    2, But I ran the same code on two different datasets: one worked without any errors but the other didn't, which seems strange and non-deterministic, and suggests the errors may not be caused by mismatched pandas versions; otherwise, neither dataset would have succeeded.

    3, I then ran the same code on the successful dataset again, but this time with different spark configurations: setting spark.driver.memory from 2048M to 4192m, and it threw AttributeError.

    4, In conclusion, I think the AttributeError has something to do with driver. But I can't tell how they are related from the error message, and how to fix it: AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks'.

    ANSWER

    Answered 2021-Aug-26 at 14:53

    I had the same error using pandas 1.3.2 in the server while 1.2 in my client. Downgrading pandas to 1.2 solved the problem.
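    A quick way to confirm such a mismatch (a sketch, assuming an active SparkSession named spark) is to compare the pandas version on the driver with the versions reported by the executors:

    import pandas as pd

    # pandas version on the driver
    print("driver:", pd.__version__)

    # distinct pandas versions seen by executor tasks (one task per partition)
    executor_versions = (
        spark.sparkContext
        .parallelize(range(8), 8)
        .map(lambda _: __import__("pandas").__version__)
        .distinct()
        .collect()
    )
    print("executors:", executor_versions)

    If the two differ, aligning the pandas version across the driver and worker nodes (or downgrading as described above) should resolve the unpickling error.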

    Source https://stackoverflow.com/questions/68625748

    QUESTION

    Terraform AWS Provider Error: Value for unconfigurable attribute. Can't configure a value for "acl": its value will be decided automatically

    Asked 2022-Feb-15 at 13:50

    Just today, whenever I run terraform apply, I see an error something like this: Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.

    It was working yesterday.

    Following is the command I run: terraform init && terraform apply

    Following is the list of initialized provider plugins:

    - Finding latest version of hashicorp/archive...
    - Finding latest version of hashicorp/aws...
    - Finding latest version of hashicorp/null...
    - Installing hashicorp/null v3.1.0...
    - Installed hashicorp/null v3.1.0 (signed by HashiCorp)
    - Installing hashicorp/archive v2.2.0...
    - Installed hashicorp/archive v2.2.0 (signed by HashiCorp)
    - Installing hashicorp/aws v4.0.0...
    - Installed hashicorp/aws v4.0.0 (signed by HashiCorp)

    Following are the errors:

    Acquiring state lock. This may take a few moments...
    Releasing state lock. This may take a few moments...
    ╷
    │ Error: Value for unconfigurable attribute
    │ 
    │   with module.ssm-parameter-store-backup.aws_s3_bucket.this,
    │   on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this":
    │    1: resource "aws_s3_bucket" "this" {
    │ 
    │ Can't configure a value for "lifecycle_rule": its value will be decided
    │ automatically based on the result of applying this configuration.
    ╵
    ╷
    │ Error: Value for unconfigurable attribute
    │ 
    │   with module.ssm-parameter-store-backup.aws_s3_bucket.this,
    │   on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this":
    │    1: resource "aws_s3_bucket" "this" {
    │ 
    │ Can't configure a value for "server_side_encryption_configuration": its
    │ value will be decided automatically based on the result of applying this
    │ configuration.
    ╵
    ╷
    │ Error: Value for unconfigurable attribute
    │ 
    │   with module.ssm-parameter-store-backup.aws_s3_bucket.this,
    │   on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 3, in resource "aws_s3_bucket" "this":
    │    3:   acl    = "private"
    │ 
    │ Can't configure a value for "acl": its value will be decided automatically
    │ based on the result of applying this configuration.
    ╵
    ERRO[0012] 1 error occurred:
            * exit status 1

    My code is as follows:

    resource "aws_s3_bucket" "this" {
      bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
      acl    = "private"

      server_side_encryption_configuration {
        rule {
          apply_server_side_encryption_by_default {
            kms_master_key_id = data.aws_kms_key.s3.arn
            sse_algorithm     = "aws:kms"
          }
        }
      }

      lifecycle_rule {
        id      = "backups"
        enabled = true

        prefix = "backups/"

        transition {
          days          = 90
          storage_class = "GLACIER_IR"
        }

        transition {
          days          = 180
          storage_class = "DEEP_ARCHIVE"
        }

        expiration {
          days = 365
        }
      }

      tags = {
        Name        = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
        Environment = var.environment
      }
    }

    ANSWER

    Answered 2022-Feb-15 at 13:49

    The Terraform AWS Provider has been upgraded to version 4.0.0, which was published on 10 February 2022.

    Major changes in the release include:

    • Version 4.0.0 of the AWS Provider introduces significant changes to the aws_s3_bucket resource.
    • Version 4.0.0 of the AWS Provider will be the last major version to support EC2-Classic resources as AWS plans to fully retire EC2-Classic Networking. See the AWS News Blog for additional details.
    • Version 4.0.0 and 4.x.x versions of the AWS Provider will be the last versions compatible with Terraform 0.12-0.15.

    The reason for this change by Terraform is as follows: To help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the aws_s3_bucket resource have become read-only. Configurations dependent on these arguments should be updated to use the corresponding aws_s3_bucket_* resource. Once updated, new aws_s3_bucket_* resources should be imported into Terraform state.

    So, I updated my code accordingly by following the guide here: Terraform AWS Provider Version 4 Upgrade Guide | S3 Bucket Refactor

    The new working code looks like this:

    resource "aws_s3_bucket" "this" {
      bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"

      tags = {
        Name        = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
        Environment = var.environment
      }
    }

    resource "aws_s3_bucket_acl" "this" {
      bucket = aws_s3_bucket.this.id
      acl    = "private"
    }

    resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
      bucket = aws_s3_bucket.this.id

      rule {
        apply_server_side_encryption_by_default {
          kms_master_key_id = data.aws_kms_key.s3.arn
          sse_algorithm     = "aws:kms"
        }
      }
    }

    resource "aws_s3_bucket_lifecycle_configuration" "this" {
      bucket = aws_s3_bucket.this.id

      rule {
        id     = "backups"
        status = "Enabled"

        filter {
          prefix = "backups/"
        }

        transition {
          days          = 90
          storage_class = "GLACIER_IR"
        }

        transition {
          days          = 180
          storage_class = "DEEP_ARCHIVE"
        }

        expiration {
          days = 365
        }
      }
    }

    If you don't want to upgrade your Terraform AWS Provider version to 4.0.0, you can use the existing or older version by specifying it explicitly in the code as below:

    terraform {
      required_version = "~> 1.0.11"
      required_providers {
        aws  = "~> 3.73.0"
      }
    }

    Source https://stackoverflow.com/questions/71078462

    QUESTION

    How can I get output from boto3 ecs execute_command?

    Asked 2022-Jan-13 at 19:35

    I have an ECS task running on Fargate on which I want to run a command in boto3 and get back the output. I can do so in the awscli just fine.

    ➜ aws ecs execute-command --cluster cluster1 \
        --task abc \
        --container container1 \
        --interactive \
        --command 'echo hi'

    The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.

    Starting session with SessionId: ecs-execute-command-0f913e47ae7801aeb
    hi

    Exiting session with sessionId: ecs-execute-command-0f913e47ae7801aeb.

    But I cannot work out how to get the output of the same command in boto3.

    ecs = boto3.client("ecs")
    ssm = boto3.client("ssm")
    exec_resp = ecs.execute_command(
        cluster=self.cluster,
        task=self.task,
        container=self.container,
        interactive=True,
        command="echo hi",
    )
    s_active = ssm.describe_sessions(
        State="Active",
        Filters=[
            {
                "key": "SessionId",
                "value": exec_resp["session"]["sessionId"],
            },
        ],
    )
    # Here I get the document for the active session.
    doc_active = ssm.get_document(Name=s_active["Sessions"][0]["DocumentName"])
    # Now I wait for the session to finish.
    s_history = {}
    done = False
    while not done:
        s_history = ssm.describe_sessions(
            State="History",
            Filters=[
                {
                    "key": "SessionId",
                    "value": exec_resp["session"]["sessionId"],
                },
            ],
        )
        done = len(s_history["Sessions"]) > 0
    doc_history = ssm.get_document(Name=s_history["Sessions"][0]["DocumentName"])

    Now the session is terminating and I get another document back, but there still doesn't seem to be output anywhere. Has anybody gotten output from this? How?


    For anybody arriving seeking a similar solution, I have created a tool for making this task simple. It is called interloper. This is mostly thanks to the excellent answer by Andrey.

    ANSWER

    Answered 2022-Jan-04 at 23:43

    Ok, basically by reading the ssm session manager plugin source code I came up with the following simplified reimplementation that is capable of just grabbing the command output: (you need to pip install websocket-client construct)

    import json
    import uuid

    import boto3
    import construct as c
    import websocket

    ecs = boto3.client("ecs")
    ssm = boto3.client("ssm")
    exec_resp = ecs.execute_command(
        cluster=self.cluster,
        task=self.task,
        container=self.container,
        interactive=True,
        command="ls -la /",
    )

    session = exec_resp['session']
    connection = websocket.create_connection(session['streamUrl'])
    try:
        init_payload = {
            "MessageSchemaVersion": "1.0",
            "RequestId": str(uuid.uuid4()),
            "TokenValue": session['tokenValue']
        }
        connection.send(json.dumps(init_payload))

        AgentMessageHeader = c.Struct(
            'HeaderLength' / c.Int32ub,
            'MessageType' / c.PaddedString(32, 'ascii'),
        )

        AgentMessagePayload = c.Struct(
            'PayloadLength' / c.Int32ub,
            'Payload' / c.PaddedString(c.this.PayloadLength, 'ascii')
        )

        while True:
            response = connection.recv()

            message = AgentMessageHeader.parse(response)

            if 'channel_closed' in message.MessageType:
                raise Exception('Channel closed before command output was received')

            if 'output_stream_data' in message.MessageType:
                break

    finally:
        connection.close()

    payload_message = AgentMessagePayload.parse(response[message.HeaderLength:])

    print(payload_message.Payload)

    Source https://stackoverflow.com/questions/70367030

    QUESTION

    AWS Graphql lambda query

    Asked 2022-Jan-09 at 17:12

    I am not using AWS AppSync for this app. I have created a GraphQL schema and written my own resolvers, with a separate Lambda function for each create and query operation. I used the DynamoDB single-table design and its global secondary indexes.

    Creating a Book item works fine for me. In DynamoDB, the table looks as shown in the screenshot (not reproduced here).

    I am having issues with the GraphQL query responses. After getting the Items from the DynamoDB table, I have to map over them and reshape them to match the GraphQL type, which does not feel like an efficient way to do it; I don't know the best way to query the data. I am also getting null for both the author and authors queries.

    This is my gitlab-branch.

    This is my GraphQL schema:

    import { gql } from 'apollo-server-lambda';

    const typeDefs = gql`
      enum Genre {
        adventure
        drama
        scifi
      }

      enum Authors {
        AUTHOR
      }

      # Root Query - all the queries supported by the schema

      type Query {
        """
        All Authors query
        """
        authors(author: Authors): [Author]
        books(book: String): [Book]
      }

      # Root Mutation - all the mutations supported by the schema
      type Mutation {
        createBook(input: CreateBook!): Book
      }

      """
      One Author can have many books
      """
      type Author {
        id: ID!
        authorName: String
        book: [Book]!
      }

      """
      Book Schema
      """
      type Book {
        id: ID!
        name: String
        price: String
        publishingYear: String
        publisher: String
        author: [Author]
        description: String
        page: Int
        genre: [Genre]
      }

      input CreateBook {
        name: String
        price: String
        publishingYear: String
        publisher: String
        author: [CreateAuthor]
        description: String
        page: Int
        genre: [Genre]
      }

      input CreateAuthor {
        authorName: String!
      }
    `;
    export default typeDefs;

    This is how I create the Book item:

    import AWS from 'aws-sdk';
    import { v4 } from 'uuid';
    import { CreateBook } from '../../generated/schema';

    async function createBook(_: unknown, { input }: { input: CreateBook }) {
      const dynamoDb = new AWS.DynamoDB.DocumentClient();
      const id = v4();

      const authorsName =
        input.author &&
        input.author.map(function (item) {
          return item['authorName'];
        });

      const params = {
        TableName: process.env.ITEM_TABLE ? process.env.ITEM_TABLE : '',
        Item: {
          PK: `AUTHOR`,
          SK: `AUTHORS#${id}`,
          GSI1PK: `BOOKS`,
          GSI1SK: `BOOK#${input.name}`,
          name: input.name,
          author: authorsName,
          price: input.price,
          publishingYear: input.publishingYear,
          publisher: input.publisher,
          page: input.page,
          description: input.description,
          genre: input.genre,
        },
      };

      await dynamoDb.put(params).promise();

      return {
        ...input,
        id,
      };
    }

    export default createBook;

    This is how I query all the books:

    import AWS from 'aws-sdk';

    async function books(_: unknown, input: { book: string }) {
      const dynamoDb = new AWS.DynamoDB.DocumentClient();

      const params = {
        TableName: process.env.ITEM_TABLE ? process.env.ITEM_TABLE : '',
        IndexName: 'GSI1',
        KeyConditionExpression: 'GSI1PK = :hkey',
        ExpressionAttributeValues: {
          ':hkey': `${input.book}`,
        },
      };

      const { Items } = await dynamoDb.query(params).promise();

      const allBooks = // NEED TO MAP THE FUNCTION THEN RETURN THE DATA BASED ON GRAPHQL QUERIES
        Items &&
        Items.map((i) => {
          const genre = i.genre.filter((i) => i);
          return {
            name: i.name,
            author: i.author,
            genre,
          };
        });

      return allBooks;
    }

    export default books;

    This is my author query, together with an image of the console result (screenshot not reproduced here):

    import AWS from 'aws-sdk';
    import { Author, Authors } from '../../generated/schema';

    async function authors(
      _: unknown,
      input: { author: Authors }
    ): Promise<Author> {
      const dynamoDb = new AWS.DynamoDB.DocumentClient();

      const params = {
        TableName: process.env.ITEM_TABLE ? process.env.ITEM_TABLE : '',
        KeyConditionExpression: 'PK = :hkey',
        ExpressionAttributeValues: {
          ':hkey': `${input.author}`,
        },
      };

      const { Items } = await dynamoDb.query(params).promise();

      console.log({ Items }); // I can see the data, but I don't know how to return it in the shape of the type below without using a map function

      // type Author {
      //   id: ID!
      //   authorName: String
      //   book: [Book]!
      // }

      return Items; // returns null in the GraphQL playground
    }

    export default authors;

    Edit: current resolver map

    // resolver map - src/resolvers/index.ts
    const resolvers = {
      Query: {
        books,
        authors,
        author,
        book,
      },
      Mutation: {
        createBook,
      },
    };

    ANSWER

    Answered 2022-Jan-09 at 17:06

    TL;DR You are missing some resolvers. Your query resolvers are trying to do the job of the missing resolvers. Your resolvers must return data in the right shape.

    In other words, your problems are with configuring Apollo Server's resolvers. Nothing Lambda-specific, as far as I can tell.

    Write and register the missing resolvers.

    GraphQL doesn't know how to "resolve" an author's books, for instance. Add an Author { books(parent) } entry to Apollo Server's resolver map. The corresponding resolver function should return a list of book objects (i.e. [Book]), as your schema requires. Apollo's docs have a similar example you can adapt.

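    For example, the registration itself is just another entry in the resolver map. This is a sketch only; getAuthorBooks is a placeholder data-access helper (the same name used in the map further down) that you would implement against your table:

    // Placeholder data-access helper - implement against your DynamoDB table.
    declare function getAuthorBooks(author: { id: string }): Promise<unknown[]>;

    // Field-level resolver registration: tells Apollo Server how to resolve Author.books.
    const resolvers = {
      Author: {
        books: (parent: { id: string }) => getAuthorBooks(parent), // must return a list of Book objects
      },
    };
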
    Here's a refactored author query, commented with the resolvers that will be called:

    # illustrative query - each field is handled by the resolver noted alongside it
    query {
      author(id: "1") {       # Query { author } resolver
        authorName
        books {               # Author { books(parent) } resolver
          name
          authors {           # Book { author(parent) } resolver
            id
          }
        }
      }
    }

    Apollo Server uses the resolver map during query execution to decide which resolvers to call for a given query field. It's not a coincidence that the map looks like your schema. Resolver functions are called with parent, args, context and info arguments, which give your functions the context to fetch the right records from the data source.

    // resolver map - passed to the Apollo Server constructor
    const resolvers = {
      Query: {
        books,
        authors,
        author,
        book,
      },

      Author: {
        books(parent) { return getAuthorBooks(parent); }, // parent is the author - resolver should return a list of books
      },

      Book: {
        authors(parent) { return getBookAuthors(parent); }, // parent is the book - resolver should return a list of authors
      },
    };
    Your query resolvers are trying to do too much work.

    It's not the author query resolver's job to resolve all the child fields. Apollo Server will call multiple resolvers multiple times during query execution:

    You can think of each field in a GraphQL query as a function or method of the previous type which returns the next type. In fact, this is exactly how GraphQL works. Each field on each type is backed by a function called the resolver which is provided by the GraphQL server developer. When a field is executed, the corresponding resolver is called to produce the next value

    Apollo Server calls this the resolver chain. The books(parent) resolver will be invoked with Author as its parent argument. You can use the author id to look up her books.
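
    Concretely, a field-level resolver for Author.books might look something like the sketch below. This is illustrative only: the GSI key shape (AUTHOR#${id}) and the id mapping are assumptions and would have to match how books are actually keyed to their authors in the table.

    import AWS from 'aws-sdk';

    // Hypothetical Author.books field resolver: given a resolved Author, return that author's books.
    async function authorBooks(parent: { id: string }) {
      const dynamoDb = new AWS.DynamoDB.DocumentClient();

      const params = {
        TableName: process.env.ITEM_TABLE ? process.env.ITEM_TABLE : '',
        IndexName: 'GSI1',                        // index name taken from the question
        KeyConditionExpression: 'GSI1PK = :hkey',
        ExpressionAttributeValues: {
          ':hkey': `AUTHOR#${parent.id}`,         // assumed key shape - adjust to the real table layout
        },
      };

      const { Items } = await dynamoDb.query(params).promise();

      // Return plain objects shaped like the Book type; unmapped fields resolve to null.
      return (Items || []).map((item) => ({
        id: item.SK,          // assumed: the sort key doubles as the book id
        name: item.name,
        price: item.price,
        genre: item.genre,
      }));
    }

    export default authorBooks;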

    Your resolver return values must be consistent with the schema.

    Make sure your resolvers are returning data in the shape required by the schema. Your author resolver is apparently returning a map {Items: [author-record]}, but your schema says it needs to be a list.

    (If I were you, I would change the author query signature from author(PK: String, SK: String): [Author] to something more caller-friendly, like author(id: ID): Author. Return an object, not a list. Hide the DynamoDB implementation details in the resolver function. Apollo Server has an ID scalar type that is serialised as a String.)
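
    A sketch of what that caller-friendly shape could look like (illustrative only; the actual DynamoDB lookup is omitted):

    import { gql } from 'apollo-server-lambda';

    // Schema fragment: the query takes an ID and returns a single Author.
    const typeDefs = gql`
      type Query {
        author(id: ID!): Author
      }
    `;

    const resolvers = {
      Query: {
        // Look up one author by id and return a single object, not a list.
        author: async (_: unknown, { id }: { id: string }) => {
          // fetch the author record for `id` from DynamoDB here (implementation omitted)
          return { id, authorName: 'Example Author', book: [] };
        },
      },
    };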

    Source https://stackoverflow.com/questions/70577447

    QUESTION

    'AmplifySignOut' is not exported from '@aws-amplify/ui-react'

    Asked 2021-Dec-19 at 14:09

    I've run into this issue today, and it only started today. I ran the usual sequence of installs and pushes to build the app:

    npx create-react-app exampleapp
    npm start
    amplify init
    amplify add api
    Amplify push
    npm install aws-amplify @aws-amplify/ui-react
    amplify add auth
    amplify push

    I make my changes to index.js and App.js as usual.

    index.js:

    import React from 'react';
    import ReactDOM from 'react-dom';
    import './index.css';
    import App from './App';
    import reportWebVitals from './reportWebVitals';
    import Amplify from 'aws-amplify';
    import aws_exports from './aws-exports'

    Amplify.configure(aws_exports);

    ReactDOM.render(
      <React.StrictMode>
        <App />
      </React.StrictMode>,
      document.getElementById('root')
    );

    reportWebVitals();

    App.js:

    import React from 'react';
    import './App.css';
    import { withAuthenticator, AmplifySignOut, Authenticator } from '@aws-amplify/ui-react';
    import { Amplify, Auth } from 'aws-amplify';
    import awsExports from './aws-exports';

    import awsconfig from './aws-exports';

    Amplify.configure(awsconfig);
    Auth.configure(awsconfig);

    function App() {
       return (
        <div>
          <h1>Help!</h1>
          <AmplifySignOut />
        </div>
       );
    }

    export default withAuthenticator(App);

    If I add AmplifySignOut it throws the error: 'AmplifySignOut' is not exported from '@aws-amplify/ui-react'

    If I remove AmplifySignOut, then the login appears but it has no formatting as per the Amazon Authentication style (orange button etc.).

    I can add import '@aws-amplify/ui-react/styles.css'; and I get some styling back, but I really need things back to how they were working. Any help would be appreciated!

    ANSWER

    Answered 2021-Nov-20 at 19:28

    I am following along with the Amplify tutorial and hit this roadblock as well. It looks like they just upgraded the React components from 1.2.5 to 2.0.0: https://github.com/aws-amplify/docs/pull/3793

    Downgrading ui-react to 1.2.5 brings back the AmplifySignOut and other components used in the tutorials.

    in package.json:

    "dependencies": {
        "@aws-amplify/ui-react": "^1.2.5",
       ...
    }

    Alternatively, you'll need to look into the version 2 docs to find suitable replacements: https://ui.docs.amplify.aws/components/authenticator
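
    For example, in ^2.x the withAuthenticator higher-order component passes signOut and user props to the wrapped component instead of exposing an <AmplifySignOut /> component, so App.js could be rewritten roughly like this (a minimal sketch, assuming @aws-amplify/ui-react ^2.x):

    import React from 'react';
    import '@aws-amplify/ui-react/styles.css';
    import { withAuthenticator } from '@aws-amplify/ui-react';
    import { Amplify } from 'aws-amplify';
    import awsconfig from './aws-exports';

    Amplify.configure(awsconfig);

    // In ui-react v2 the HOC injects `signOut` and `user` props.
    function App({ signOut, user }) {
      return (
        <div>
          <h1>Hello {user && user.username}</h1>
          <button onClick={signOut}>Sign out</button>
        </div>
      );
    }

    export default withAuthenticator(App);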

    Source https://stackoverflow.com/questions/70036160

    Community Discussions contain sources that include Stack Exchange Network

    Tutorials and Learning Resources in AWS

    Tutorials and Learning Resources are not available at this moment for AWS
