Explore all Load Balancing open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Load Balancing

ingress-nginx: NGINX Ingress Controller v1.2.0

bfe: BFE v1.5.0

metallb: metallb-chart-0.12.1

aws-load-balancer-controller: v2.3.1

glb-director

Popular Libraries in Load Balancing

ingress-nginx
by kubernetes | Go | 12540 stars | Apache-2.0
NGINX Ingress Controller for Kubernetes

bfe
by bfenetworks | Go | 5230 stars | Apache-2.0
A modern layer 7 load balancer from Baidu

metallb
by metallb | Go | 4637 stars | Apache-2.0
A network load-balancer implementation for Kubernetes using standard routing protocols

aws-load-balancer-controller
by kubernetes-sigs | Go | 2678 stars | Apache-2.0
A Kubernetes controller for Elastic Load Balancers

glb-director
by github | C | 2065 stars | NOASSERTION
GitHub Load Balancer Director and supporting tooling.

voyager
by voyagermesh | Shell | 1305 stars | Apache-2.0
🚀 Secure L7/L4 (HAProxy) Ingress Controller for Kubernetes

haproxy-ingress
by jcmoraisjr | Go | 831 stars | Apache-2.0
HAProxy Ingress

kube-vip
by kube-vip | Go | 777 stars | Apache-2.0
Kubernetes Control Plane Virtual IP and Load-Balancer

openelb
by kubesphere | Go | 747 stars | Apache-2.0
Load Balancer Implementation for Bare Metal Kubernetes Cluster

Trending New libraries in Load Balancing

kube-vip
by kube-vip | Go | 777 stars | Apache-2.0
Kubernetes Control Plane Virtual IP and Load-Balancer

kube-vip
by plunder-app | Go | 289 stars | Apache-2.0
Kubernetes Control Plane Virtual IP and Load-Balancer

CVE-2019-19781
by jas502n | Python | 76 stars
Citrix ADC Remote Code Execution

CVE-2020-8193
by jas502n | Python | 72 stars
Citrix ADC Vulns

pie-smart-pools
by pie-dao | TypeScript | 52 stars
Pie balancer smart pool controllers

devlab-eks-alb-2048game
by starchx | Shell | 50 stars
DevLab prepared for AWS Submit

nginx-ingress-operator
by nginxinc | Go | 49 stars | Apache-2.0
NGINX Ingress Operator for NGINX and NGINX Plus Ingress Controllers

kube-karp
by immanuelfodor | Shell | 46 stars | MIT
☸ Add a floating virtual IP to Kubernetes cluster nodes for load balancing easily.


Top Authors in Load Balancing

1. slauger: 3 Libraries, 11 stars
2. github: 2 Libraries, 2160 stars
3. m-lab: 2 Libraries, 12 stars
4. openshift: 2 Libraries, 58 stars
5. jas502n: 2 Libraries, 148 stars
6. fonoster: 2 Libraries, 6 stars
7. mesosphere: 2 Libraries, 463 stars
8. kubesphere: 2 Libraries, 1470 stars
9. aswinkarthik: 2 Libraries, 8 stars
10. onestraw: 2 Libraries, 325 stars


Trending Kits in Load Balancing

No Trending Kits are available at this moment for Load Balancing

Trending Discussions on Load Balancing

list of map required for loadbalancer rules specs

How to run Jupyter, locally, connecting to Google Cloud VM using only internal IP address?

Files needed to run a container docker-compose command

NiFi Cluster Docker Load Balancing configuration

When to enable application load balancers on AWS

Code works, but running it enough times the average load exceeds 50%

IAP User is able to access a Cloud Run without permission

How can I deploy Node JS app along with dist folder for production in Kubernetes?

How to configure GKE Autopilot w/Envoy & gRPC-Web

Azure Load Balancing Solutions. Direct Traffic to Specific VMs

QUESTION

list of map required for loadbalancer rules specs

Asked 2022-Apr-01 at 07:17

I have a Terraform tfvars.json file like the one below:

{
  "loadbalancer_rule": {
    "patterns_default_loadbalancer_rule": {
      "backend_address_pool_id": null,
      "lb_rule_specs": {
        "name": "test2",
        "protocol": "tcp",
        "frontend_port": "8080",
        "backend_port": "8081",
        "frontend_ip_configuration_name": "projectname-lb-nic"
      },
      "load_distribution": "",
      "loadbalancer_id": null,
      "probe_id": "",
      "resource_group_name": null
    }
  }
}

The main.tf is like below:

variable "loadbalancer_rule" {
  description = "Map of loadbalancer-rule objects"
  type        = any
  default     = null
}

module "loadbalancer_rule" {
  for_each = coalesce(var.loadbalancer_rule, {})
  source   = "../loadbalancer-rule/azurerm"
  version  = "7.0.0-2-1.0"

  backend_address_pool_id = try(each.value.backend_address_pool_id, null)
  lb_rule_specs           = try(each.value.lb_rule_specs, null)
  load_distribution       = try(each.value.load_distribution, "")
  loadbalancer_id         = try(each.value.loadbalancer_id, null)
  probe_id                = try(each.value.probe_id, "")
  resource_group_name     = var.environment_resource_groups
}
40

The main.tf of module itself is like below:

resource "azurerm_lb_rule" "lb_rule" {
  count                          = length(var.lb_rule_specs)
  name                           = var.lb_rule_specs[count.index]["name"]
  resource_group_name            = var.resource_group_name
  loadbalancer_id                = var.loadbalancer_id
  frontend_ip_configuration_name = var.lb_rule_specs[count.index]["frontend_ip_configuration_name"]
  protocol                       = var.lb_rule_specs[count.index]["protocol"]
  frontend_port                  = var.lb_rule_specs[count.index]["frontend_port"]
  backend_port                   = var.lb_rule_specs[count.index]["backend_port"]
  probe_id                       = var.probe_id
  load_distribution              = var.load_distribution
  backend_address_pool_id        = var.backend_address_pool_id
}

And Variables.tf like below:

variable "lb_rule_specs" {
  description = "Load balancer rules specifications"
  type        = list(map(string))
}

variable "resource_group_name" {
  description = "Name of the resource group"
  type        = string
}

variable "loadbalancer_id" {
  description = "ID of the load balancer"
  type        = string
}

variable "backend_address_pool_id" {
  description = "Backend address pool id for the load balancer"
  type        = string
}

variable "probe_id" {
  description = "ID of the loadbalancer probe"
  type        = string
  default     = ""
}

variable "load_distribution" {
  description = "Specifies the load balancing distribution type to be used by the Load Balancer. Possible values are: Default – The load balancer is configured to use a 5 tuple hash to map traffic to available servers. SourceIP – The load balancer is configured to use a 2 tuple hash to map traffic to available servers. SourceIPProtocol – The load balancer is configured to use a 3 tuple hash to map traffic to available servers. Also known as Session Persistence, where the options are called None, Client IP and Client IP and Protocol respectively."
  type        = string
  default     = ""
}
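As an aside, the Default / SourceIP / SourceIPProtocol values in that variable description correspond to hashing 5, 2, or 3 fields of each connection. A minimal Python sketch of the idea (illustrative only; the backend addresses and hash function are made up for the example, this is not Azure's actual algorithm):

```python
import hashlib

BACKENDS = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]

def pick_backend(backends, fields):
    # Stable hash over the chosen connection fields; the flow "sticks"
    # to whichever backend the hash maps it to.
    digest = hashlib.sha256("|".join(fields).encode()).digest()
    return backends[int.from_bytes(digest[:8], "big") % len(backends)]

# SourceIP mode (2-tuple): only source and destination IP are hashed,
# so every connection from one client lands on the same backend.
two_tuple_a = pick_backend(BACKENDS, ("203.0.113.7", "10.0.0.100"))
two_tuple_b = pick_backend(BACKENDS, ("203.0.113.7", "10.0.0.100"))
assert two_tuple_a == two_tuple_b

# Default mode (5-tuple): the ephemeral source port joins the hash, so
# a new connection from the same client may land elsewhere.
five_tuple = pick_backend(
    BACKENDS, ("203.0.113.7", "50123", "10.0.0.100", "8080", "tcp")
)
```

Session persistence is simply the choice of leaving the client-varying fields out of the hash.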

I tried removing the braces, but I honestly couldn't figure out what the issue is. If the tfvars file were in plain .tf (HCL) format this would be easier to reason about; the JSON form totally confuses me.

I am getting error like below:

│ Error: Invalid value for module argument
│
│   on loadbalancer_rule.tf line 13, in module "loadbalancer_rule":
│   13:   lb_rule_specs = try(each.value.lb_rule_specs, null)
│
│ The given value is not suitable for child module variable "lb_rule_specs"
│ defined at .terraform/modules/loadbalancer_rule/variables.tf:1,1-25: list
│ of map of string required.

Need some help to resolve the error.

ANSWER

Answered 2022-Apr-01 at 07:16

Your lb_rule_specs variable is declared as list(map(string)), but you are passing it a plain map(string).

Assuming that everything else works, to address your error it should be:

lb_rule_specs = [try(each.value.lb_rule_specs, null)]
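Alternatively, the type mismatch could be fixed at the source by declaring lb_rule_specs as a JSON array in the tfvars file, so each module call already receives a list. A sketch based on the tfvars shown in the question (note the brackets around the rule object):

```json
{
  "loadbalancer_rule": {
    "patterns_default_loadbalancer_rule": {
      "backend_address_pool_id": null,
      "lb_rule_specs": [
        {
          "name": "test2",
          "protocol": "tcp",
          "frontend_port": "8080",
          "backend_port": "8081",
          "frontend_ip_configuration_name": "projectname-lb-nic"
        }
      ],
      "load_distribution": "",
      "loadbalancer_id": null,
      "probe_id": "",
      "resource_group_name": null
    }
  }
}
```

With that shape, the existing lb_rule_specs = try(each.value.lb_rule_specs, null) line could stay unchanged.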

Source https://stackoverflow.com/questions/71702960

QUESTION

How to run Jupyter, locally, connecting to Google Cloud VM using only internal IP address?

Asked 2022-Mar-11 at 17:41

I configured a Compute Engine instance with only an internal IP (10.X.X.10). I can SSH into it via gcloud with IAP tunneling, and access and copy files to storage via Private Google Access; the VPC was set up with no conflicting IP ranges:

gcloud compute ssh --zone "us-central1-c" "vm_name"  --tunnel-through-iap --project "projectXXX"

Now I want to open Jupyter notebook without creating an external IP in the VM.

Identity-Aware Proxy (IAP) is working well, Private Google Access also. After that I enabled a NAT Gateway, that generated an external IP (35.X.X.155).

I configured Jupyter by running jupyter notebook --generate-config and set up a password (stored as a SHA hash).

Now I run Jupyter by typing this on gcloud SSH:

gcloud compute ssh --zone "us-central1-c" "vm_name"  --tunnel-through-iap --project "projectXXX"
python /usr/local/bin/jupyter-notebook --ip=0.0.0.0 --port=8080 --no-browser &

Then, in the URL Jupyter prints, I replace http://instance-XXX/?token=abcd with http://35.X.X.155/?token=abcd.

But the external IP is not accessible in the browser over either HTTP or HTTPS. Note that I'm not considering Network Load Balancing, because it's not necessary here.

Ping 35.X.X.155 works perfectly

I also tried jupyter notebook --gateway-url=http://NAT-gateway:8888 without success


Look at this as an alternative to a bastion (VM with external IP)

Any ideas on how to solve this issue ?

UPDATE: Looks like I have to find a way to SSH into the NAT Gateway.

ANSWER

Answered 2022-Mar-11 at 17:41

What you are trying to do can be accomplished using IAP for TCP forwarding, and there is no need to use NAT at all in this scenario. Here are the steps to follow:

  1. Ensure you have ports 22 and 8080 allowed in the project's firewall:

gcloud compute firewall-rules list

NAME                         NETWORK  DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
allow-8080-ingress-from-iap  default  INGRESS    1000      tcp:8080        False
allow-ssh-ingress-from-iap   default  INGRESS    1000      tcp:22          False

  2. On your endpoint's gcloud CLI, log in to GCP and set the project to where the instance is running:

gcloud config set project $GCP_PROJECT_NAME

  3. Check if you already have SSH keys generated in your system:

ls -1 ~/.ssh/*

#=>

/. . ./id_rsa
/. . ./id_rsa.pub


If you don't have any, you can generate them with the command: ssh-keygen -t rsa -f ~/.ssh/id_rsa -C id_rsa

  4. Add the SSH keys to your project's metadata:

gcloud compute project-info add-metadata \
--metadata ssh-keys="$(gcloud compute project-info describe \
--format="value(commonInstanceMetadata.items.filter(key:ssh-keys).firstof(value))")
$(whoami):$(cat ~/.ssh/id_rsa.pub)"

#=>

Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME].

  5. Assign the iap.tunnelResourceAccessor role to the user:

gcloud projects add-iam-policy-binding $GCP_PROJECT_NAME \
    --member=user:$USER_ID \
    --role=roles/iap.tunnelResourceAccessor

  6. Start an IAP tunnel pointing to your instance:port and bind it to your desired localhost port (in this case, 9000):

gcloud compute start-iap-tunnel $INSTANCE_NAME 8080 \
    --local-host-port=localhost:9000

#=>

Testing if tunnel connection works.
Listening on port [9000].


At this point, you should be able to access your Jupyter Notebook at http://127.0.0.1:9000?token=abcd.

Note: start-iap-tunnel is not a one-shot command; it must be issued and kept running for as long as you want to access your Jupyter Notebook.

Source https://stackoverflow.com/questions/71429262

QUESTION

Files needed to run a container docker-compose command

Asked 2022-Mar-05 at 11:37

I have a hard time understanding which files I need in order to run my Rails app's containers on an empty instance.

I have a docker-compose.prod.yml that I want to run:

version: "3.8"

services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-default}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-default}
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  app:
    image: "username/repo:${WEB_TAG:-latest}"
    depends_on:
      - db
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec puma -C config/puma.rb"
    volumes:
      - .:/myapp
      - public-data:/myapp/public
      - tmp-data:/myapp/tmp
      - log-data:/myapp/log
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-default}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-default}
      POSTGRES_HOST: ${POSTGRES_HOST:-default}
  web:
    build: nginx
    volumes:
      - public-data:/myapp/public
      - tmp-data:/myapp/tmp
    ports:
      - "80:80"
    depends_on:
      - app
volumes:
  public-data:
  tmp-data:
  log-data:
  db-data:

So, on the instance I have the docker-compose.prod.yml file. Since I pass variables for the environment and the image tag, I also created an .env file holding those variables. Finally, since I build the nginx image, I have a folder with its Dockerfile:

FROM arm64v8/nginx

# Clear out the default configs in the include directory
RUN rm -f /etc/nginx/conf.d/*

# Copy the Nginx configuration file into the container
ADD nginx.conf /etc/nginx/myapp.conf

# Start Nginx once the build completes
CMD /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/myapp.conf

and config file nginx.conf

user  root;
worker_processes  1;

events {
    worker_connections  512;
}

# Socket connection
http {
  upstream myapp {
    server unix:///myapp/tmp/sockets/puma.sock;
  }
  server { # simple load balancing
    listen 80;
    server_name localhost;

    # Trying to write logs here causes an error
    #root /myapp/public;
    access_log /var/log/nginx/access.log;
    error_log  /var/log/nginx/error.log;

    location / {
      proxy_pass http://myapp;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
    }
  }
}

So: docker-compose.prod.yml, the nginx directory with those two files, and the .env file.

When I run docker-compose -f docker-compose.prod.yml --env-file .env run app rake db:create db:migrate, it downloads the postgres and app images, but once it starts running the rake db:create db:migrate tasks, I get this error:

Status: Downloaded newer image for user/repo:48
Creating rails-app_db_1 ... done
Creating rails-app_app_run ... done
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
/usr/local/bundle/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
(See full trace by running task with --trace)

But then, when I add the Rakefile, it keeps asking for other dependent files, so either I need the whole project itself (cloned from my repo on GitHub) or I am doing this wrong.

Any ideas on what files I need, or whether I need to change the commands, are welcome! Thank you.

ANSWER

Answered 2022-Mar-05 at 11:37

When your docker-compose.yml file says

volumes:
  - .:/myapp

It means the contents of the /myapp directory in your image (probably the entire application) are ignored and replaced with whatever's in your host directory. So with this setup you need to copy the entire application source code to the target system, or it won't work.
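For context: the application image would normally bake the source tree in at build time, which is exactly what that bind mount then hides at run time. A minimal sketch of such an app Dockerfile, with the base image and layout assumed rather than taken from the question:

```dockerfile
# Hypothetical sketch of the app image build; adjust names to your project.
FROM ruby:3.1
WORKDIR /myapp

# Install gems first so this layer caches independently of code changes
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Bake the application code into the image; a .:/myapp bind mount at run
# time replaces this entire directory with the host's contents.
COPY . .
```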

But also:

volumes:
  - public-data:/myapp/public

Your application's static assets are being stored in a Docker named volume. This is hard to transfer between systems, and it will ignore any changes in the underlying image.


I'd update a couple of things in this setup, both to avoid most of the volumes and to make things a little easier to work with.

Docker has an internal networking layer, and you can communicate between containers using their Compose service names as host names. (See Networking in Compose in the Docker documentation.) That means you can set the Nginx reverse proxy to talk to the Rails app over normal HTTP/TCP:

upstream myapp {
  # host:port only (no http:// scheme inside an upstream block);
  # the port comes from your config.ru/puma.rb file
  server app:9292;
}

This eliminates the need for the tmp-data volume.
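For the TCP upstream to work, Puma also has to listen on a TCP port instead of the unix socket. A sketch of the corresponding config/puma.rb line (port 9292 is the assumption carried over from the comment above; your file may differ):

```ruby
# config/puma.rb (fragment): bind to a TCP port reachable over the Compose
# network, instead of binding to unix:///myapp/tmp/sockets/puma.sock
port ENV.fetch("PORT", 9292)
```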

In the same way you're building your application code into an image, you can also build the static assets into an image. Update the Nginx image Dockerfile to look like:

# Artificial build stage to include app; https://stackoverflow.com/a/69303997
ARG appimage
FROM ${appimage} AS app

FROM arm64v8/nginx

# Replace the Nginx configuration
RUN rm -f /etc/nginx/conf.d/*
COPY nginx.conf /etc/nginx/nginx.conf

# Copy in the static assets
COPY --from=app /myapp/public /myapp/public

# Use the default CMD from the base image; no need to rewrite it

This removes the need for the public-data volume.

In your main application image, you should declare an ENTRYPOINT and CMD so you don't need to repeat the long-winded command: override. For the ENTRYPOINT, I'd suggest a shell script:

#!/bin/sh
# entrypoint.sh

# Remove a stale pid file
rm -f tmp/pids/server.pid

# Run the main container CMD under Bundler
exec bundle exec "$@"

Make sure this file is executable (chmod +x entrypoint.sh) and add it to your repository, maybe in the top-level directory next to your Dockerfile and Gemfile. In the Dockerfile, declare this script as the ENTRYPOINT, and make Puma be the CMD:

# ENTRYPOINT must use JSON-array syntax (Dockerfiles don't allow trailing
# comments on an instruction, so these notes go on their own lines)
ENTRYPOINT ["./entrypoint.sh"]
# CMD could use shell syntax instead
CMD ["puma", "-C", "config/puma.rb"]
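To see how the two combine (a local sketch, no Docker needed): the container runtime appends the CMD words to the ENTRYPOINT as arguments, and the exec "$@" line hands control over to that command:

```shell
# Sketch of the ENTRYPOINT/CMD handoff, runnable in any POSIX shell.
cat > /tmp/entrypoint-demo.sh <<'EOF'
#!/bin/sh
rm -f /tmp/demo-server.pid   # stand-in for the stale-pid cleanup
exec "$@"                    # replace this shell with the CMD
EOF
chmod +x /tmp/entrypoint-demo.sh

# Docker would effectively run: ./entrypoint.sh puma -C config/puma.rb
# Substituting a harmless command for puma:
/tmp/entrypoint-demo.sh echo "starting the server"   # prints "starting the server"
```

If the ENTRYPOINT were written in shell form instead, Docker would wrap it in /bin/sh -c and the CMD arguments would never reach "$@", which is why the JSON-array syntax is required there.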

The only volume mounts we haven't touched are the database storage and the log directory. For the database storage, a Docker named volume could be appropriate, since you never need to look at the files directly. A named volume is substantially faster on some platforms (macOS and some cases of Windows), but is harder to transfer between systems. Conversely, for the logs, I'd use a bind mount, since you generally do want to read them directly from the host system.

This reduces the docker-compose.yml file to:

version: "3.8"
services:
  db: { unchanged: from the original question }
  app:
    image: "username/repo:${WEB_TAG:-latest}"
    depends_on:
      - db
    volumes:
      - ./tmp/log:/myapp/log
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-default}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-default}
      POSTGRES_HOST: ${POSTGRES_HOST:-default}
  web:
    image: "username/web:${WEB_TAG:-latest}"
    ports:
      - "80:80"
    depends_on:
      - app

We've removed almost all of the volumes:, and all of the references to host directories except for the database storage and a directory for the log output. We've also removed the command: override, since it just repeats what's in the Dockerfile. (In other similar SO questions, I might remove unnecessary networks:, container_name:, and hostname: declarations, along with obsolete links: and expose: options.)

If you've done this, you need a way to build your images and push them to the repository. You can have a second Compose file that only describes how to build the images:

# docker-compose.override.yml
# in the same directory as docker-compose.yml
version: '3.8'
services:
  app:
    build: .
  web:
    build:
      context: ./nginx
      args:
        appimage: username/repo:${WEB_TAG:-latest}

Compose's one shortcoming in this situation is that it doesn't know that one image depends on the other. This means you need to manually build the application image first:

export WEB_TAG=20220305
docker-compose build app
docker-compose build
docker-compose push

This seems like a lot of setup. But having done this, the only thing we need to copy to the new system is the docker-compose.yml file and the production .env settings.

# on the local system
scp docker-compose.yml .env there:

# on the remote system
export WEB_TAG=20220305
docker-compose run app \
  rake db:create db:migrate
docker-compose up -d

Docker will automatically pull the images from the repository if it doesn't have a copy locally. (It's helpful, and required in contexts like Kubernetes, to use a unique image tag per build; docker system prune can clean up old unused images.) If you give the files their default names docker-compose.yml and .env then you don't need to mention them on the command line.

Source https://stackoverflow.com/questions/71359102

QUESTION

NiFi Cluster Docker Load Balancing configuration

Asked 2022-Feb-22 at 12:08

I would like to configure load balancing in the docker-compose.yml file for a NiFi cluster deployed via Docker containers. The current docker-compose parameters for load balancing are as follows (for each of the three NiFi nodes):

      # load balancing
      - NIFI_CLUSTER_LOAD_BALANCE_PORT=6342
      - NIFI_CLUSTER_LOAD_BALANCE_HOST=node.name
      - NIFI_CLUSTER_LOAD_BALANCE_CONNECTIONS_PER_NODE=4
      - NIFI_CLUSTER_LOAD_BALANCE_MAX_THREADS=8

But when I try to use load balancing on queues, I can choose all the parameters there without any error, yet load balancing does not work: everything is done on the primary node (I run GetSFTP on the primary node only, but want to process the data on all three nodes afterwards). Also, the NiFi cluster is configured to work with SSL.

Thanks in advance!

ANSWER

Answered 2022-Feb-22 at 12:08

I had to open the load-balancing port in my compose file, and I also had to specify the hostname in each node's compose file.

Here is my compose file for basic clustering:

version: "3.3"
services:
  nifi_service:
    container_name: "nifi_service"
    image: "apache/nifi:1.11.4"
    hostname: "APPTHLP7"
    environment:
        - TZ=Europe/Istanbul
        - NIFI_CLUSTER_IS_NODE=true
        - NIFI_CLUSTER_NODE_PROTOCOL_PORT=8088
        - NIFI_ZK_CONNECT_STRING=172.16.2.238:2181,172.16.2.240:2181,172.16.2.241:2181
    ports:
        - "8080:8080"
        - "8088:8088"
        - "6342:6342"
    volumes:
        - /home/my/nifi-conf:/opt/nifi/nifi-current/conf
    networks:
      - my_network
    restart: unless-stopped
networks:
  my_network:
    external: true

Please note that you also have to configure the load-balancing strategy on the downstream connection in your flow.

Source https://stackoverflow.com/questions/71177161

QUESTION

When to enable application load balancers on AWS

Asked 2022-Feb-13 at 15:15

I have an app launched on AWS Elastic Beanstalk at the moment. AWS automatically enables an application load balancer, which is a significant cost driver for my application. I only have 20 users at the moment, so the load on my application is quite low. When is a good time to enable load balancing?

ANSWER

Answered 2022-Feb-13 at 15:15

Use a single-instance environment in Elastic Beanstalk if you don't want to use a load balancer yet.

Ref: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-types.html#single-instance-environ

Quote:

Single-instance environment

A single-instance environment contains one Amazon EC2 instance with an Elastic IP address. A single-instance environment doesn't have a load balancer, which can help you reduce costs compared to a load-balanced, scalable environment. Although a single-instance environment does use the Amazon EC2 Auto Scaling service, settings for the minimum number of instances, maximum number of instances, and desired capacity are all set to 1. Consequently, new instances are not started to accommodate increasing load on your application.

Use a single-instance environment if you expect your production application to have low traffic or if you are doing remote development. If you're not sure which environment type to select, you can pick one and, if required, you can switch the environment type later. For more information, see Changing environment type.

Source https://stackoverflow.com/questions/71101848

QUESTION

Code works, but running it enough times the average load exceeds 50%

Asked 2022-Jan-09 at 17:26

I am learning Python on Coursera, and in the optional OOP section there was a question about coding a server. I am getting the right answer, but when I run it 4-5 times, the result sometimes exceeds 50%, which should not happen.

The last line, print(l.avg_load()), should print a value below 50%, which it usually does, but when I run it enough times it sometimes shows a value above 50%.

#!/usr/bin/env python
# coding: utf-8

# # Assessment - Object-oriented programming

# In this exercise, we'll create a few classes to simulate a server that's taking connections from the outside and then a load balancer that ensures that there are enough servers to serve those connections.
# <br><br>
# To represent the servers that are taking care of the connections, we'll use a Server class. Each connection is represented by an id, that could, for example, be the IP address of the computer connecting to the server. For our simulation, each connection creates a random amount of load in the server, between 1 and 10.
# <br><br>
# Run the following code that defines this Server class.

# In[36]:


#Begin Portion 1#
import random

class Server:
    def __init__(self):
        """Creates a new server instance, with no active connections."""
        self.connections = {}

    def add_connection(self, connection_id):
        """Adds a new connection to this server."""
        connection_load = random.random()*10+1

        # Add the connection to the dictionary with the calculated load
        self.connections[connection_id] = connection_load

    def close_connection(self, connection_id):
        """Closes a connection on this server."""
        # Remove the connection from the dictionary
        del self.connections[connection_id]

    def load(self):
        """Calculates the current load for all connections."""
        total = 0
        # Add up the load for each of the connections
        for load in self.connections.values():
            total += load
        return total

    def __str__(self):
        """Returns a string with the current load of the server"""
        return "{:.2f}%".format(self.load())

#End Portion 1#


# Now run the following cell to create a Server instance and add a connection to it, then check the load:

# In[37]:


server = Server()
server.add_connection("192.168.1.1")

print(server.load())


# After running the above code cell, if you get a **<font color=red>NameError</font>** message, be sure to run the Server class definition code block first.
#
# The output should be 0. This is because some things are missing from the Server class. So, you'll need to go back and fill in the blanks to make it behave properly.
# <br><br>
# Go back to the Server class definition and fill in the missing parts for the `add_connection` and `load` methods to make the cell above print a number different than zero. As the load is calculated randomly, this number should be different each time the code is executed.
# <br><br>
# **Hint:** Recall that you can iterate through the values of your connections dictionary just as you would any sequence.

# Great! If your output is a random number between 1 and 10, you have successfully coded the `add_connection` and `load` methods of the Server class. Well done!
# <br><br>
# What about closing a connection? Right now the `close_connection` method doesn't do anything. Go back to the Server class definition and fill in the missing code for the `close_connection` method to make the following code work correctly:

# In[38]:


server.close_connection("192.168.1.1")
print(server.load())


# You have successfully coded the `close_connection` method if the cell above prints 0.
# <br><br>
# **Hint:** Remember that `del dictionary[key]` removes the item with key *key* from the dictionary.

# Alright, we now have a basic implementation of the server class. Let's look at the basic LoadBalancing class. This class will start with only one server available. When a connection gets added, it will randomly select a server to serve that connection, and then pass on the connection to the server. The LoadBalancing class also needs to keep track of the ongoing connections to be able to close them. This is the basic structure:

# In[60]:


#Begin Portion 2#
class LoadBalancing:
    def __init__(self):
        """Initialize the load balancing system with one server"""
        self.connections = {}
        self.servers = [Server()]

    def add_connection(self, connection_id):
        """Randomly selects a server and adds a connection to it."""
        server = random.choice(self.servers)

        # Add the connection to the dictionary with the selected server
        self.connections[connection_id] = server
        # Add the connection to the server
        self.connections[connection_id].add_connection(connection_id)

        self.ensure_availability()

    def close_connection(self, connection_id):
        """Closes the connection on the server corresponding to connection_id."""

        # Find out the right server
        term = self.connections[connection_id]

        # Close the connection on the server
        term.close_connection(connection_id)

        # Remove the connection from the load balancer
        del self.connections[connection_id]

    def avg_load(self):
        """Calculates the average load of all servers"""
        la = 0
        a = len(self.servers)
        # Sum the load of each server and divide by the amount of servers
        for i in self.connections.values():
            la = la + i.load()

        return la/a

    def ensure_availability(self):
        """If the average load is higher than 50, spin up a new server"""
        if self.avg_load() > 50:
            self.servers.append(Server())

    def __str__(self):
        """Returns a string with the load for each server."""
        loads = [str(server) for server in self.servers]
        return "[{}]".format(",".join(loads))
#End Portion 2#


# As with the Server class, this class is currently incomplete. You need to fill in the gaps to make it work correctly. For example, this snippet should create a connection in the load balancer, assign it to a running server and then the load should be more than zero:

# In[61]:


l = LoadBalancing()
l.add_connection("fdca:83d2::f20d")
print(l.avg_load())


# After running the above code, the output is 0. Fill in the missing parts for the `add_connection` and `avg_load` methods of the LoadBalancing class to make this print the right load. Be sure that the load balancer now has an average load more than 0 before proceeding.

# What if we add a new server?

# In[62]:


l.servers.append(Server())
print(l.avg_load())


# The average load should now be half of what it was before. If it's not, make sure you correctly fill in the missing gaps for the `add_connection` and `avg_load` methods so that this code works correctly.
# <br><br>
# **Hint:** You can iterate through all the servers in the *self.servers* list to get the total server load amount and then divide by the length of the *self.servers* list to compute the average load amount.

# Fantastic! Now what about closing the connection?

# In[63]:


l.close_connection("fdca:83d2::f20d")
print(l.avg_load())


# Fill in the code of the LoadBalancing class to make the load go back to zero once the connection is closed.
# <br><br>
# Great job! Before, we added a server manually. But we want this to happen automatically when the average load is more than 50%. To make this possible, fill in the missing code for the `ensure_availability` method and call it from the `add_connection` method after a connection has been added. You can test it with the following code:

# In[64]:


for connection in range(20):
    l.add_connection(connection)
print(l)


# The code above adds 20 new connections and then prints the loads for each server in the load balancer. If you coded correctly, new servers should have been added automatically to ensure that the average load of all servers is not more than 50%.
# <br><br>
# Run the following code to verify that the average load of the load balancer is not more than 50%.

# In[65]:


print(l.avg_load())


# Awesome! If the average load is indeed less than 50%, you are all done with this assessment.

ANSWER

Answered 2022-Jan-09 at 17:26

When you add_connection() in your load balancer, you are doing:

self.connections[connection_id] = server

So each connection is assigned a server.
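To make the double counting concrete, here is a small self-contained illustration (the connection loads are fixed at 30 and 15 instead of being drawn randomly, so the numbers are easy to follow): when two connections live on the same server, summing `i.load()` over `self.connections.values()` counts that server's whole load twice.

```python
class Server:
    """Minimal stand-in for the assessment's Server class."""
    def __init__(self):
        self.connections = {}

    def add_connection(self, connection_id, load):
        self.connections[connection_id] = load

    def load(self):
        return sum(self.connections.values())


server = Server()
server.add_connection("a", 30)
server.add_connection("b", 15)

# The load balancer's dict maps every connection id to its server,
# so the same Server object appears once per connection:
connections = {"a": server, "b": server}
servers = [server]

# Summing over connections counts the whole server once per connection:
per_connection_total = sum(s.load() for s in connections.values())  # 90
# Summing over servers counts each server exactly once:
per_server_total = sum(s.load() for s in servers)                   # 45
```

With one server and two connections, the per-connection sum reports 90 while the real total load is 45, so dividing either by `len(self.servers)` makes the buggy average twice the true one here.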

Then when you calculate the average load you do:

for i in self.connections.values():
    la = la + i.load()

So that sums up the load for each "connection".

But each connection is a server and a server can appear many times in self.connections because you are assigning servers to connections not connections to servers.

Essentially, you have 20 connections but likely closer to 10 servers at the moment, so every server is likely double counted and your load is therefore incorrect.
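A minimal sketch of that double counting (using a hypothetical ToyServer class, not the assessment's exact code): two connections land on the same server, so iterating over the connections dict visits that server twice.

```python
# Hypothetical minimal stand-in for the assessment's Server class.
class ToyServer:
    def __init__(self):
        self.connections = {}   # connection_id -> load

    def load(self):
        return sum(self.connections.values())

server = ToyServer()
connections = {}                # connection_id -> server, as in the question
for cid in ("a", "b"):
    server.connections[cid] = 10    # fixed load instead of random
    connections[cid] = server       # the same server object is stored twice

servers = [server]

# Buggy average: iterates connections, so the server's total of 20 is counted twice.
buggy_avg = sum(s.load() for s in connections.values()) / len(servers)
# Correct average: iterate the servers themselves.
correct_avg = sum(s.load() for s in servers) / len(servers)
print(buggy_avg, correct_avg)   # 40.0 20.0
```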

This likely fixes your issue, but I think your strategy of adding servers to connections, rather than the other way around, might not be the right way to go:

def avg_load(self):
    """Calculates the average load of all servers"""
    server_count = len(self.servers)
    total_load = sum(s.load() for s in self.servers)
    return total_load / server_count
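As a quick sanity check of that server-based average (again a stripped-down sketch with a fixed load instead of a random one, not the assessment's code): appending an idle server halves the value, which is exactly what the assessment's "add a new server" cell expects.

```python
# Hypothetical minimal stand-ins for the assessment's classes.
class MiniServer:
    def __init__(self):
        self.connections = {}   # connection_id -> load

    def load(self):
        return sum(self.connections.values())

class MiniBalancer:
    def __init__(self):
        self.servers = [MiniServer()]

    def avg_load(self):
        # Average over servers, not over connections.
        return sum(s.load() for s in self.servers) / len(self.servers)

lb = MiniBalancer()
lb.servers[0].connections["fdca:83d2::f20d"] = 8.0
print(lb.avg_load())             # 8.0 with one server
lb.servers.append(MiniServer())  # an idle server joins
print(lb.avg_load())             # 4.0 -- the average halves
```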

Source https://stackoverflow.com/questions/70643371

QUESTION

IAP User is able to access a Cloud Run without permission

Asked 2022-Jan-05 at 20:58

Steps to reproduce this:

  • create a Cloud Run service with the "Require authentication" option
  • set up ingress options to be "Allow internal traffic and traffic from Cloud Load Balancing"
  • expose the service using an External Load Balancer with IAP enabled
  • give the user the role "IAP-Secured Web App User" for the backend service

The user will be able to access the Cloud Run service without explicit permission.

You can follow this tutorial for a working example: hodo.dev/posts/post-30-gcp-cloudrun-iap/

Is this a bug or is the expected behavior?

If this is expected, then where is this implicit user permission documented?

ANSWER

Answered 2022-Jan-05 at 20:58

Google's Identity-Aware Proxy (IAP) acts as a front end for access to back-end systems. For certain back ends, when a request is received by IAP, IAP does the work of validating that the user is authorized to make the final request. This implies that if a request goes directly to the backend, the backend is responsible for approval. However, if the request is routed through IAP, the approval responsibility has been delegated to IAP. As such, the requesting user is able to access the backend's services (e.g., Cloud Run) without needing explicit Cloud Run permission, because we have defined that IAP can make the decision, and Cloud Run trusts that IAP's decision is sufficient.

Source https://stackoverflow.com/questions/70533598

QUESTION

How can I deploy Node JS app along with dist folder for production in Kubernetes?

Asked 2021-Dec-16 at 11:16

The Node.js app that I'm trying to deploy to Kubernetes uses Express.js as its backend framework. The repository is managed via Bitbucket. The application is a microservice, and the pipeline manifest file for building the Docker image is written this way:

options:
  docker: true
image: node:14.17.0
pipelines:
  branches:
    test:
      - step:
          services:
           - docker
          name: Build and push new docker image
          deployment: dev
          script:
            - yarn install
            - yarn build
            - yarn test
            - yarn lint
            - make deploy.dev
            - docker login -u $DOCKER_HUB_USERNAME -p $DOCKER_HUB_PASSWORD
            - docker build -t testapp/helloapp:latest -f ./Dockerfile .
            - docker push testapp/helloapp
          caches:
            - docker # adds docker layer caching

The K8s cluster is hosted in the cloud but does not have an internal load balancer of its own. The K8s cluster version is v1.22.4, and MetalLB v0.11.0 is configured to serve the load-balancing purpose. To expose the K8s service, Cloudflare Tunnel is configured as a K8s deployment.

So, this is the manifest file set-up used for building the Docker image. The pipeline deploys successfully and in the Kubernetes part, this is the service and deployment manifest:

apiVersion: v1
kind: Service
metadata:
 name: helloapp
 labels:
   app: helloapp
spec:
 type: NodePort
 ports:
 - port: 5000
   targetPort: 7000
   protocol: TCP
   name: https
 selector:
   app: helloapp

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapp
  labels:
    app: helloapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloapp
  template:
    metadata:
      labels:
        app: helloapp
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: helloapp
        image: testapp/helloapp:latest

Also, here is the Dockerfile snippet to give more clarity on what I have been doing:

FROM node:14.17.0

WORKDIR /app

COPY package.json /app

RUN npm install

COPY . /app

CMD node app.js

EXPOSE 8100

Just to give some context: the service and deployment work fine, with no CrashLoopBackOff or any other errors. My doubt here is that there is a dist directory which is not getting deployed to Docker Hub, as it is generated during npm build. How can I deploy the app along with the dist directory without having to worry about security risks? Any feedback and suggestions on where I could add a script to pull in the dist directory would be appreciated.

ANSWER

Answered 2021-Dec-16 at 11:16

Eventually, I was able to resolve the issue. It was trivial yet bothersome: the Dockerfile was missing a script, i.e., npm run build. So here is the final Dockerfile I used to build the dist directory along with the other requirements:

FROM node:14.17.0

WORKDIR /app

COPY package.json /app

RUN npm install

RUN npm run build   # <------ the missing script

COPY . /app

CMD node app.js

EXPOSE 8100

This way, the entire dist directory gets built inside the container. Also, I removed all the .env dependencies from the dist directory and stored them as a Kubernetes secret in base64 format.
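For reference, the "base64 format" mentioned above is simply how the data: fields of a Kubernetes Secret are encoded. A small sketch (the variable names and values here are hypothetical):

```python
import base64

# Encode .env-style values the way a Kubernetes Secret's data: fields expect.
env = {"DB_HOST": "db.internal", "DB_PASSWORD": "s3cret"}  # hypothetical values

secret_data = {k: base64.b64encode(v.encode()).decode() for k, v in env.items()}
print(secret_data["DB_PASSWORD"])   # czNjcmV0

# Kubernetes decodes these back before handing them to the container.
assert base64.b64decode(secret_data["DB_PASSWORD"]).decode() == "s3cret"
```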

Source https://stackoverflow.com/questions/70249466

QUESTION

How to configure GKE Autopilot w/Envoy & gRPC-Web

Asked 2021-Dec-14 at 20:31

I have an application running on my local machine that uses React -> gRPC-Web -> Envoy -> Go app and everything runs with no problems. I'm trying to deploy this using GKE Autopilot and I just haven't been able to get the configuration right. I'm new to all of GCP/GKE, so I'm looking for help to figure out where I'm going wrong.

I was following this doc initially, even though I only have one gRPC service: https://cloud.google.com/architecture/exposing-grpc-services-on-gke-using-envoy-proxy

From what I've read, GKE Autopilot mode requires using External HTTP(s) load balancing instead of Network Load Balancing as described in the above solution, so I've been trying to get that to work. After a variety of attempts, my current strategy has an Ingress, BackendConfig, Service, and Deployment. The deployment has three containers: my app, an Envoy sidecar to transform the gRPC-Web requests and responses, and a cloud SQL proxy sidecar. I eventually want to be using TLS, but for now, I left that out so it wouldn't complicate things even more.

When I apply all of the configs, the backend service shows one backend in one zone and the health check fails. The health check is set for port 8080 and path /healthz, which is what I think I've specified in the deployment config, but I'm suspicious because when I look at the details of the envoy-sidecar container, it shows the readiness probe as: http-get HTTP://:0/healthz headers=x-envoy-livenessprobe:healthz. Does ":0" just mean it's using the default address and port for the container, or does it indicate a config problem?

I've been reading various docs and just haven't been able to piece it all together. Is there an example somewhere that shows how this can be done? I've been searching and haven't found one.

My current configs are:

1apiVersion: networking.k8s.io/v1
2kind: Ingress
3metadata:
4  name: grammar-games-ingress
5  #annotations:
6    # If the class annotation is not specified it defaults to &quot;gce&quot;.
7    # kubernetes.io/ingress.class: &quot;gce&quot;
8    # kubernetes.io/ingress.global-static-ip-name: &lt;IP addr&gt;
9spec:
10  defaultBackend:
11    service:
12      name: grammar-games-core
13      port:
14        number: 80
15---
16apiVersion: cloud.google.com/v1
17kind: BackendConfig
18metadata:
19  name: grammar-games-bec
20  annotations:
21    cloud.google.com/neg: '{&quot;ingress&quot;: true}'
22spec:
23  sessionAffinity:
24    affinityType: &quot;CLIENT_IP&quot;  
25  healthCheck:
26    checkIntervalSec: 15
27    port: 8080
28    type: HTTP
29    requestPath: /healthz
30  timeoutSec: 60
31---
32apiVersion: v1
33kind: Service
34metadata:
35  name: grammar-games-core
36  annotations:
37    cloud.google.com/neg: '{&quot;ingress&quot;: true}'
38    cloud.google.com/app-protocols: '{&quot;http&quot;:&quot;HTTP&quot;}'
39    cloud.google.com/backend-config: '{&quot;default&quot;: &quot;grammar-games-bec&quot;}'
40spec:
41  type: ClusterIP
42  selector:
43    app: grammar-games-core
44  ports:
45  - name: http
46    protocol: TCP
47    port: 80
48    targetPort: 8080
49---
50apiVersion: apps/v1
51kind: Deployment
52metadata:
53  name: grammar-games-core
54spec:
55  # Two replicas for right now, just so I can see how RPC calls get directed.
56  # replicas: 2
57  selector:
58    matchLabels:
59      app: grammar-games-core
60  template:
61    metadata:
62      labels:
63        app: grammar-games-core
64    spec:
65      serviceAccountName: grammar-games-core-k8sa
66      containers:
67      - name: grammar-games-core
68        image: gcr.io/grammar-games/grammar-games-core:1.1.2
69        command:
70          - &quot;/bin/grammar-games-core&quot;
71        ports:
72        - containerPort: 52001
73        env:
        - name: GAMESDB_USER
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: username
        - name: GAMESDB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: password
        - name: GAMESDB_DB_NAME
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: db-name
        - name: GRPC_SERVER_PORT
          value: '52001'
        - name: GAMES_LOG_FILE_PATH
          value: ''
        - name: GAMESDB_LOG_LEVEL
          value: 'debug'
        resources:
          requests:
            # The proxy's memory use scales linearly with the number of active
            # connections. Fewer open connections will use less memory. Adjust
            # this value based on your application's requirements.
            memory: "2Gi"
            # The proxy's CPU use scales linearly with the amount of IO between
            # the database and the application. Adjust this value based on your
            # application's requirements.
            cpu: "1"
        readinessProbe:
          exec:
            command: ["/bin/grpc_health_probe", "-addr=:52001"]
          initialDelaySeconds: 5
      - name: cloud-sql-proxy
        # It is recommended to use the latest version of the Cloud SQL proxy.
        # Make sure to update on a regular schedule!
        image: gcr.io/cloudsql-docker/gce-proxy:1.24.0
        command:
          - "/cloud_sql_proxy"

          # If connecting from a VPC-native GKE cluster, you can use the
          # following flag to have the proxy connect over private IP:
          # - "-ip_address_types=PRIVATE"

          # Replace DB_PORT with the port the proxy should listen on.
          # Defaults: MySQL: 3306, Postgres: 5432, SQL Server: 1433
          - "-instances=grammar-games:us-east1:grammar-games-db=tcp:3306"
        securityContext:
          # The default Cloud SQL proxy image runs as the
          # "nonroot" user and group (uid: 65532) by default.
          runAsNonRoot: true
        # Resource configuration depends on an application's requirements. You
        # should adjust the following values based on what your application
        # needs. For details, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
        resources:
          requests:
            # The proxy's memory use scales linearly with the number of active
            # connections. Fewer open connections will use less memory. Adjust
            # this value based on your application's requirements.
            memory: "2Gi"
            # The proxy's CPU use scales linearly with the amount of IO between
            # the database and the application. Adjust this value based on your
            # application's requirements.
            cpu: "1"
      - name: envoy-sidecar
        image: envoyproxy/envoy:v1.20-latest
        ports:
        - name: http
          containerPort: 8080
        resources:
          requests:
            cpu: 10m
            ephemeral-storage: 256Mi
            memory: 256Mi
        volumeMounts:
        - name: config
          mountPath: /etc/envoy
        readinessProbe:
          httpGet:
            port: http
            httpHeaders:
            - name: x-envoy-livenessprobe
              value: healthz
            path: /healthz
            scheme: HTTP
      volumes:
      - name: config
        configMap:
          name: envoy-sidecar-conf
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-sidecar-conf
data:
  envoy.yaml: |
    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 8080
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              access_log:
              - name: envoy.access_loggers.stdout
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
              codec_type: AUTO
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                - name: http
                  domains:
                  - "*"
                  routes:
                  - match:
                      prefix: "/grammar_games_protos.GrammarGames/"
                    route:
                      cluster: grammar-games-core-grpc
                  cors:
                    allow_origin_string_match:
                    - prefix: "*"
                    allow_methods: GET, PUT, DELETE, POST, OPTIONS
                    allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                    max_age: "1728000"
                    expose_headers: custom-header-1,grpc-status,grpc-message
              http_filters:
              - name: envoy.filters.http.health_check
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck
                  pass_through_mode: false
                  headers:
                  - name: ":path"
                    exact_match: "/healthz"
                  - name: "x-envoy-livenessprobe"
                    exact_match: "healthz"
              - name: envoy.filters.http.grpc_web
              - name: envoy.filters.http.cors
              - name: envoy.filters.http.router
                typed_config: {}
      clusters:
      - name: grammar-games-core-grpc
        connect_timeout: 0.5s
        type: LOGICAL_DNS
        lb_policy: ROUND_ROBIN
        http2_protocol_options: {}
        load_assignment:
          cluster_name: grammar-games-core-grpc
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 0.0.0.0
                    port_value: 52001
        health_checks:
        - timeout: 1s
          interval: 10s
          unhealthy_threshold: 2
          healthy_threshold: 2
          grpc_health_check: {}
    admin:
      access_log_path: /dev/stdout
      address:
        socket_address:
          address: 127.0.0.1
          port_value: 8090

ANSWER

Answered 2021-Oct-14 at 22:35

Here is some documentation about Setting up HTTP(S) Load Balancing with Ingress. This tutorial shows how to run a web application behind an external HTTP(S) load balancer by configuring the Ingress resource.

Related to creating an HTTP load balancer on GKE using Ingress, I found two threads where the created instances are marked as unhealthy.

In the first one, they mention the need to manually create a firewall rule that allows the HTTP load balancer's IP ranges to reach the health check port.
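
As a sketch of such a firewall rule: Google Cloud documents 130.211.0.0/22 and 35.191.0.0/16 as the source ranges its health checkers use, so a rule along these lines should open the serving port (the rule name and network here are placeholders; this is an illustrative command, not part of the original answer):

```shell
# Allow Google Cloud health checkers to reach the serving port (8080 here).
gcloud compute firewall-rules create allow-lb-health-checks \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8080 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16
```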

In the second one, they mention that the Pod’s spec must also include containerPort. Example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grammar-games-ingress
  #annotations:
    # If the class annotation is not specified it defaults to "gce".
    # kubernetes.io/ingress.class: "gce"
    # kubernetes.io/ingress.global-static-ip-name: <IP addr>
spec:
  defaultBackend:
    service:
      name: grammar-games-core
      port:
        number: 80
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: grammar-games-bec
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  healthCheck:
    checkIntervalSec: 15
    port: 8080
    type: HTTP
    requestPath: /healthz
  timeoutSec: 60
---
apiVersion: v1
kind: Service
metadata:
  name: grammar-games-core
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/app-protocols: '{"http":"HTTP"}'
    cloud.google.com/backend-config: '{"default": "grammar-games-bec"}'
spec:
  type: ClusterIP
  selector:
    app: grammar-games-core
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grammar-games-core
spec:
  # Two replicas for right now, just so I can see how RPC calls get directed.
  # replicas: 2
  selector:
    matchLabels:
      app: grammar-games-core
  template:
    metadata:
      labels:
        app: grammar-games-core
    spec:
      serviceAccountName: grammar-games-core-k8sa
      containers:
      - name: grammar-games-core
        image: gcr.io/grammar-games/grammar-games-core:1.1.2
        command:
          - "/bin/grammar-games-core"
        ports:
        - containerPort: 52001
        env:
        - name: GAMESDB_USER
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: username
        - name: GAMESDB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: password
        - name: GAMESDB_DB_NAME
          valueFrom:
            secretKeyRef:
              name: gamesdb-config
              key: db-name
        - name: GRPC_SERVER_PORT
          value: '52001'
        - name: GAMES_LOG_FILE_PATH
          value: ''
        - name: GAMESDB_LOG_LEVEL
          value: 'debug'
        resources:
          requests:
            # The proxy's memory use scales linearly with the number of active
            # connections. Fewer open connections will use less memory. Adjust
            # this value based on your application's requirements.
            memory: "2Gi"
            # The proxy's CPU use scales linearly with the amount of IO between
            # the database and the application. Adjust this value based on your
            # application's requirements.
            cpu: "1"
        readinessProbe:
          exec:
            command: ["/bin/grpc_health_probe", "-addr=:52001"]
          initialDelaySeconds: 5
      - name: cloud-sql-proxy
        # It is recommended to use the latest version of the Cloud SQL proxy.
        # Make sure to update on a regular schedule!
        image: gcr.io/cloudsql-docker/gce-proxy:1.24.0
        command:
          - "/cloud_sql_proxy"

          # If connecting from a VPC-native GKE cluster, you can use the
          # following flag to have the proxy connect over private IP:
          # - "-ip_address_types=PRIVATE"

          # Replace DB_PORT with the port the proxy should listen on.
          # Defaults: MySQL: 3306, Postgres: 5432, SQL Server: 1433
          - "-instances=grammar-games:us-east1:grammar-games-db=tcp:3306"
        securityContext:
          # The default Cloud SQL proxy image runs as the
          # "nonroot" user and group (uid: 65532) by default.
          runAsNonRoot: true
        # Resource configuration depends on an application's requirements. You
        # should adjust the following values based on what your application
        # needs. For details, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
        resources:
          requests:
            # The proxy's memory use scales linearly with the number of active
            # connections. Fewer open connections will use less memory. Adjust
            # this value based on your application's requirements.
            memory: "2Gi"
            # The proxy's CPU use scales linearly with the amount of IO between
            # the database and the application. Adjust this value based on your
            # application's requirements.
            cpu: "1"
      - name: envoy-sidecar
        image: envoyproxy/envoy:v1.20-latest
        ports:
        - name: http
          containerPort: 8080
        resources:
          requests:
            cpu: 10m
            ephemeral-storage: 256Mi
            memory: 256Mi
        volumeMounts:
        - name: config
          mountPath: /etc/envoy
        readinessProbe:
          httpGet:
            port: http
            httpHeaders:
            - name: x-envoy-livenessprobe
              value: healthz
            path: /healthz
            scheme: HTTP
      volumes:
      - name: config
        configMap:
          name: envoy-sidecar-conf
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-sidecar-conf
data:
  envoy.yaml: |
    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 8080
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              access_log:
              - name: envoy.access_loggers.stdout
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
              codec_type: AUTO
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                - name: http
                  domains:
                  - "*"
                  routes:
                  - match:
                      prefix: "/grammar_games_protos.GrammarGames/"
                    route:
                      cluster: grammar-games-core-grpc
                  cors:
                    allow_origin_string_match:
                    - prefix: "*"
                    allow_methods: GET, PUT, DELETE, POST, OPTIONS
                    allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                    max_age: "1728000"
                    expose_headers: custom-header-1,grpc-status,grpc-message
              http_filters:
              - name: envoy.filters.http.health_check
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck
                  pass_through_mode: false
                  headers:
                  - name: ":path"
                    exact_match: "/healthz"
                  - name: "x-envoy-livenessprobe"
                    exact_match: "healthz"
              - name: envoy.filters.http.grpc_web
              - name: envoy.filters.http.cors
              - name: envoy.filters.http.router
                typed_config: {}
      clusters:
      - name: grammar-games-core-grpc
        connect_timeout: 0.5s
        type: LOGICAL_DNS
        lb_policy: ROUND_ROBIN
        http2_protocol_options: {}
        load_assignment:
          cluster_name: grammar-games-core-grpc
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 0.0.0.0
                    port_value: 52001
        health_checks:
        - timeout: 1s
          interval: 10s
          unhealthy_threshold: 2
          healthy_threshold: 2
          grpc_health_check: {}
    admin:
      access_log_path: /dev/stdout
      address:
        socket_address:
          address: 127.0.0.1
          port_value: 8090
---
# Pod spec snippet from the referenced thread, showing containerPort:
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
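
Once the manifests are applied, the Envoy health-check filter can be probed by hand to confirm it answers the way the BackendConfig expects. A manual check might look like this (assuming kubectl access to the cluster; the deployment name comes from the manifest above):

```shell
# Forward the Envoy sidecar's port locally, then probe /healthz.
kubectl port-forward deployment/grammar-games-core 8080:8080 &
sleep 2
curl -i http://localhost:8080/healthz -H "x-envoy-livenessprobe: healthz"
# A 200 response means Envoy's gRPC health check against the app is passing.
```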

Source https://stackoverflow.com/questions/69560536

QUESTION

Azure Load Balancing Solutions. Direct Traffic to Specific VMs

Asked 2021-Dec-14 at 05:34

We are having difficulties choosing a load balancing solution (Load Balancer, Application Gateway, Traffic Manager, Front Door) for IIS websites on Azure VMs. The simple use case where there are 2 identical sites is covered well: just use Azure Load Balancer or Application Gateway. However, when we want to update the websites and test those updates, we run into limitations of these load balancing solutions.

For example, if we would like to update IIS websites on VM1 and test those updates, the strategy would be:

  • Point the load balancer to VM2 only.
  • Update the IIS website on VM1.
  • Test the changes.
  • If all tests pass, point the load balancer to VM1 only while we update VM2.
  • Point the load balancer back to both VMs.

We would like to know the best solution for directing traffic to only one VM. So far, we see only one option: removing a VM from the backend address pool, then adding it back, and repeating the process for the other VMs. Surely there must be a better way to direct 100% of traffic to only one VM (or to specific VMs), right?

Update:

We ended up blocking the connection between the VMs and the load balancer by creating a Network Security Group rule with the Deny action on the load balancer service tag. Once we want that particular VM to be accessible again, we switch the NSG rule from Deny to Allow.

The downside of this approach is that it takes 1-3 minutes for the change to take effect (see the thread Continuous Delivery with Azure Load Balancer).

If anybody can think of a faster (or instantaneous) solution for this, please let me know.
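
The Deny/Allow toggle described in the update can be scripted with the Azure CLI. The resource group, NSG, and rule names below are placeholders:

```shell
# Take VM1 out of rotation: deny traffic from the load balancer.
az network nsg rule update \
  --resource-group my-rg \
  --nsg-name vm1-nsg \
  --name BlockAzureLoadBalancer \
  --access Deny

# ...deploy and test on VM1, then put it back in rotation:
az network nsg rule update \
  --resource-group my-rg \
  --nsg-name vm1-nsg \
  --name BlockAzureLoadBalancer \
  --access Allow
```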

ANSWER

Answered 2021-Nov-02 at 21:22

Without any Azure specifics, the usual pattern is to point the load balancer at a /status endpoint of your process and design the endpoint's behavior according to your needs, e.g.:

  • When a service is first deployed, its status is 'pending'.
  • When you deem it healthy, e.g. all tests pass, do a POST /status to update it.
  • The service then returns the status 'ok'.

Meanwhile, the load balancer polls the /status endpoint every minute and knows to mark down, and exclude from forwarding, any server not in the 'ok' state.
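
As a sketch of this pattern, here is a minimal /status endpoint using only Python's standard library. The state names and the in-process poller are illustrative, not tied to any particular load balancer:

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Service state: starts as "pending" until deployment tooling marks it ok.
STATE = {"status": "pending"}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/status":
            self.send_response(404)
            self.end_headers()
            return
        body = STATE["status"].encode()
        # 200 only when ready; 503 tells the load balancer to exclude us.
        self.send_response(200 if STATE["status"] == "ok" else 503)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Deployment tooling calls POST /status once all tests pass.
        if self.path == "/status":
            STATE["status"] = "ok"
            self.send_response(204)
        else:
            self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def probe(port):
    """One load-balancer poll: return (http_code, body) for GET /status."""
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/status") as r:
            return r.status, r.read().decode()
    except urllib.error.HTTPError as e:
        return e.code, e.read().decode()

server = HTTPServer(("127.0.0.1", 0), StatusHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

before = probe(port)   # excluded from rotation while status is "pending"
urllib.request.urlopen(
    urllib.request.Request(f"http://127.0.0.1:{port}/status", method="POST")
)
after = probe(port)    # back in rotation once status is "ok"
print(before, after)
```

Whether the poller keys off the HTTP status code (503 vs 200) or the response body ('pending' vs 'ok') depends on the load balancer, which is why the sketch exposes both.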

Some load balancers/gateways work best with HTTP status codes, whereas others can read the response text from the status endpoint. Pretty much all of them support this general behavior, though, so you should not need an expensive solution.

Source https://stackoverflow.com/questions/69688874

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Load Balancing

Tutorials and Learning Resources are not available at this moment for Load Balancing
