microservices-demo | Sample cloud-first application | GCP library
kandi X-RAY | microservices-demo Summary
- Sends an order confirmation email
- Sends a confirmation email
- Start the server
- Adds an email service servicer to the given server
- Check out a cart
- Add product to cart
- Set index
- Index server
- Adds a recommendations service servicer to the server
- Get a JSON logger
microservices-demo Examples and Code Snippets
# Local port to run Producer Tomcat
server.port=8080

# Kafka connection settings
spring.cloud.stream.kafka.binder.brokers=
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
spring.cloud.stream.kafka.binder.configuration.sasl.mechanism=PLAIN
spring.cloud.stream.kafka.binder.configuration.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="" password="";
spring.cloud.stream.kafka.binder.configuration.ssl.endpoint.identification.algorithm=https

# Topic to use in Kafka cluster to store incoming bookmarks events
spring.cloud.stream.source=output
spring.cloud.stream.bindings.output-out-0.destination=bookmarks
spring.cloud.stream.bindings.output.contentType=application/json
# Application name and port where the application Tomcat will be running
spring.application.name=kafkastream
server.port=8090

# Incoming Kafka topic for Kafka Streams
spring.cloud.stream.source=reduce
spring.cloud.stream.function.definition=reduce
spring.cloud.stream.bindings.reduce-in-0.destination=bookmarks

# Kafka consumer group id
spring.cloud.stream.kafka.streams.binder.applicationId=bookmarks

# Kafka connection settings
spring.cloud.stream.kafka.streams.binder.brokers=
spring.cloud.stream.kafka.streams.binder.configuration.security.protocol=SASL_SSL
spring.cloud.stream.kafka.streams.binder.configuration.sasl.mechanism=PLAIN
spring.cloud.stream.kafka.streams.binder.configuration.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="" password="";
spring.cloud.stream.kafka.streams.binder.configuration.ssl.endpoint.identification.algorithm=https

# Show Kafka where the local state store is running
spring.cloud.stream.kafka.streams.binder.configuration.application.server=localhost:${server.port}
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000
Sample Cats added
Web server listening at: http://0.0.0.0:3030
Browse your REST API at http://0.0.0.0:3030/explorer
Sample bears added
Web server listening at: http://0.0.0.0:3001
Browse your REST API at http://0.0.0.0:3001/explorer
swift build
cd $GOPATH/src/github.com/microservices-demo/payment/
go get -u github.com/FiloSottile/gvt
gvt restore
cd $GOPATH/src/github.com/microservices-demo/payment/paymentsvc/
go build -o payment
Trending Discussions on microservices-demo
QUESTION
I have configured linkerd with 2 clusters in GKE (a west and an east cluster) for multicluster purposes. I used this demo app provided by Google: https://github.com/GoogleCloudPlatform/microservices-demo
First I did it with Istio and everything worked out fine, but with linkerd it's different. As expected, the service exported from the east cluster to the west cluster gets the cluster name appended to it; e.g. in the west cluster you will get currencyservice-east.
The problem I think I am having is that the frontend in the west cluster keeps sending request to currencyservice instead of currencyservice-east.
I didn't have this problem with Istio because Istio uses the same service name across clusters. I am not a Go programmer, but I have googled everywhere to find out where the service name is defined in the frontend source code so I could change it, and I have not succeeded.
Another alternative would be for linkerd to keep the original service name when exporting it.
Please help me out.
ANSWER
Answered 2021-Nov-16 at 17:19
You can use a TrafficSplit on the source cluster to direct calls to currencyservice to currencyservice-east.
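A minimal sketch of such a TrafficSplit, assuming the demo runs in the default namespace and uses the SMI split.smi-spec.io/v1alpha2 API that linkerd supports (the resource name is illustrative):

apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: currencyservice-split   # illustrative name
  namespace: default            # assumes the demo's default namespace
spec:
  service: currencyservice          # apex service the frontend calls
  backends:
  - service: currencyservice-east   # mirrored service from the east cluster
    weight: 100                     # send all traffic to the east replica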
QUESTION
TRYING TO EDIT AS SUGGESTED:
STS crashes continuously; here is an example of the last logs in the projects folder:
...
...
...
!ENTRY org.eclipse.ui 4 0 2021-10-31 08:41:50.924
!MESSAGE Unhandled event loop exception
!STACK 0
org.eclipse.swt.SWTException: Failed to execute runnable (java.lang.OutOfMemoryError: Requested array size exceeds VM limit)
...
...
...
Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
!ENTRY org.eclipse.ui 4 0 2021-10-31 08:41:58.932
!MESSAGE Unhandled event loop exception
!STACK 0
org.eclipse.swt.SWTException: Failed to execute runnable (java.lang.OutOfMemoryError: Requested array size exceeds VM limit)
...
...
...
Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
!ENTRY org.eclipse.ui 4 0 2021-10-31 08:42:00.869
!MESSAGE Unhandled event loop exception
!STACK 0
org.eclipse.swt.SWTException: Failed to execute runnable (java.lang.OutOfMemoryError: Requested array size exceeds VM limit)
at org.eclipse.swt.SWT.error(SWT.java:4720)
...
...
...
Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
!ENTRY org.eclipse.core.jobs 4 2 2021-10-31 08:42:47.266
!MESSAGE An internal error occurred during: "Updating Maven Project".
!STACK 0
java.lang.OutOfMemoryError: Java heap space
!SESSION 2021-10-31 08:43:56.477 -----------------------------------------------
eclipse.buildId=4.5.1.202001211336-RELEASE
java.version=11.0.9
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=it_IT
Framework arguments: -product org.springframework.boot.ide.branding.sts4
Command-line arguments: -os win32 -ws win32 -arch x86_64 -product org.springframework.boot.ide.branding.sts4
!ENTRY org.eclipse.jface 2 0 2021-10-31 08:44:07.942
!MESSAGE Keybinding conflicts occurred. They may interfere with normal accelerator operation.
!SUBENTRY 1 org.eclipse.jface 2 0 2021-10-31 08:44:07.942
!MESSAGE A conflict occurred for CTRL+SHIFT+T:
Binding(CTRL+SHIFT+T,
ParameterizedCommand(Command(org.eclipse.jdt.ui.navigate.open.type,Open Type,
Open a type in a Java editor,
Category(org.eclipse.ui.category.navigate,Navigate,null,true),
org.eclipse.ui.internal.WorkbenchHandlerServiceHandler@2689b752,
,,true),null),
org.eclipse.ui.defaultAcceleratorConfiguration,
org.eclipse.ui.contexts.window,,,system)
Binding(CTRL+SHIFT+T,
ParameterizedCommand(Command(org.eclipse.lsp4e.symbolinworkspace,Go to Symbol in Workspace,
,
Category(org.eclipse.lsp4e.category,Language Servers,null,true),
org.eclipse.ui.internal.WorkbenchHandlerServiceHandler@84eafc2,
,,true),null),
org.eclipse.ui.defaultAcceleratorConfiguration,
org.eclipse.ui.contexts.window,,,system)
!ENTRY org.eclipse.egit.ui 2 0 2021-10-31 08:44:21.052
!MESSAGE Warning: The environment variable HOME is not set. The following directory will be used to store the Git
user global configuration and to define the default location to store repositories: 'C:\Users\franc'. If this is
not correct please set the HOME environment variable and restart Eclipse. Otherwise Git for Windows and
EGit might behave differently since they see different configuration options.
This warning can be switched off on the Team > Git > Confirmations and Warnings preference page.
!ENTRY org.eclipse.ui 2 2 2021-10-31 08:46:43.793
!MESSAGE Invalid property category path: org.springframework.ide.eclipse.beans.ui.properties.ProjectPropertyPage (bundle: org.springframework.ide.eclipse.xml.namespaces, propertyPage: org.springframework.ide.eclipse.beans.ui.namespaces.projectPropertyPage)
!ENTRY org.eclipse.core.jobs 4 2 2021-10-31 08:46:54.518
!MESSAGE An internal error occurred during: "Updating Maven Project".
!STACK 0
java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.HashMap.readObject(HashMap.java:1452)
...
...
...
Here's an example of one of the edited poms of the workspace:
<parent>
    <artifactId>microservices-demo</artifactId>
    <groupId>com.microservices.demo</groupId>
    <version>0.0.1-SNAPSHOT</version>
    <relativePath>../../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>kafka-producer</artifactId>
<dependencies>
    <dependency>
        <groupId>com.microservices.demo</groupId>
        <artifactId>app-config-data</artifactId>
    </dependency>
    <dependency>
        <groupId>com.microservices.demo</groupId>
        <artifactId>kafka-model</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>io.confluent</groupId>
        <artifactId>kafka-avro-serializer</artifactId>
    </dependency>
</dependencies>
It began when I was comparing and modifying Java, pom, and properties files outside STS using WinMerge.
ANSWER
Answered 2021-Oct-31 at 08:04
Delete the project (only from Eclipse; don't check the option to delete it from disk), then reimport it into the workspace. That solved the problem.
QUESTION
I am trying to run a test server on AWS using Terraform. When I run terraform apply, it throws an error saying Reference to undeclared resource. Below is my test server file inside Terraform.
test-server.tf
module "test-server" {
source = "./node-server"
ami-id = "Here ive given my ami_id"
key-pair = aws_key_pair.microservices-demo-key.key_name
name = "Test Server"
}
Below is my key pair file code.
key-pairs
resource "aws_key_pair" "microservcies-demo-key" {
key_name = "microservices-demo-key"
public_key = file("./microservices_demo.pem")
}
Error detail thrown by terraform:
Error: Reference to undeclared resource
on test-server.tf line 4, in module "test-server":
4: key-pair = aws_key_pair.microservices-demo-key.key_name
A managed resource "aws_key_pair" "microservices-demo-key" has not been declared in the root module.
Although I've declared the variables, it's still throwing the error. This is the image of the directory.
ANSWER
Answered 2021-Sep-16 at 17:07
You have a typo here:
resource "aws_key_pair" "microservcies-demo-key" {
Fix this name to be microservices-demo-key so that it matches the name you reference in test-server.tf.
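For clarity, the corrected resource block, identical except for the fixed resource label, would be:

resource "aws_key_pair" "microservices-demo-key" {
  key_name   = "microservices-demo-key"
  public_key = file("./microservices_demo.pem")
}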
QUESTION
I am creating a microservices application, for which we have a parent pom in the project. Under this we have multiple modules. I am using Java 11 and Spring Boot 2.5.4.
Below is my parent pom; all the modules get their versions from it. Its packaging is pom.
<modelVersion>4.0.0</modelVersion>
<modules>
    <module>twitter-to-kafka-service</module>
</modules>
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.5.4</version>
</parent>
<groupId>com.microservices.demo</groupId>
<artifactId>microservices-demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>microservices-demo</name>
<description>Microservices demo project for Spring Boot</description>
<packaging>pom</packaging>
<properties>
    <java.version>11</java.version>
    <spring-boot.version>2.5.4</spring-boot.version>
    <maven-compiler-plugin.version>3.8.1</maven-compiler-plugin.version>
    <twitter4j.version>4.0.7</twitter4j.version>
    <lombok.version>1.18.16</lombok.version>
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
            <version>${spring-boot.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <version>${spring-boot.version}</version>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.twitter4j</groupId>
            <artifactId>twitter4j-stream</artifactId>
            <version>${twitter4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>${lombok.version}</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>${maven-compiler-plugin.version}</version>
                <configuration>
                    <release>11</release>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <version>${spring-boot.version}</version>
            </plugin>
        </plugins>
    </pluginManagement>
</build>
Under this project I created one module, for which the pom is below.
<parent>
    <artifactId>microservices-demo</artifactId>
    <groupId>com.microservices.demo</groupId>
    <version>0.0.1-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>twitter-to-kafka-service</artifactId>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.twitter4j</groupId>
        <artifactId>twitter4j-stream</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
<properties>
    <maven.compiler.source>15</maven.compiler.source>
    <maven.compiler.target>15</maven.compiler.target>
</properties>
ANSWER
Answered 2021-Sep-04 at 20:42
You are using IntelliJ IDEA and did not update the pom information (the icon in the upper right with the small 'm' symbol). Therefore the dependencies are not resolved properly.
Besides, I suggest using a single-module project for Spring Boot applications if there is no real requirement to do otherwise.
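As a quick cross-check outside the IDE, a plain Maven build from the parent microservices-demo directory should resolve and build all modules (a sketch; -U forces dependency updates):

mvn -U clean install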
QUESTION
That is my server.go file, where I define my server struct and its methods.
package main
import (
"fmt"
"log"
"net/http"
"github.com/gorilla/mux"
)
type FrontendServer struct {
Router *mux.Router
}
func (f *FrontendServer) Run(addr string) {
log.Fatal(http.ListenAndServe(addr, f.Router))
}
// func (f *FrontendServer) InitializeRoutes() {
// f.Router.HandleFunc("/", f.handleHome)
// }
func (f *FrontendServer) HandleHome(w http.ResponseWriter, h *http.Request) {
fmt.Fprintf(w, "sadsadas")
}
Here is my main.go file, where I start my application.
package main
func main() {
server := FrontendServer{}
server.Router.HandleFunc("/", server.HandleHome)
server.Run(":8080")
}
When I run the app with go run *.go, it gives me the error below:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x123e0fa]
goroutine 1 [running]:
github.com/gorilla/mux.(*Router).NewRoute(...)
/Users/barisertas/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:279
github.com/gorilla/mux.(*Router).HandleFunc(0x0, 0x1305508, 0x1, 0xc000012d20, 0xffffffff)
/Users/barisertas/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:300 +0x3a
main.main()
/Users/barisertas/microservices-demo/frontend/main.go:5 +0x92
exit status 2
I capitalized the first letter of the method, and both files are in the same package. Is it because of gorilla/mux itself? In the import scope it is underlined in red, saying: could not import github.com/gorilla/mux (no required module provides package).
Here are my go.mod and go.sum files.
module github.com/bariis/microservices-demo/frontend
go 1.16
require github.com/gorilla/mux v1.8.0 // indirect
github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
ANSWER
Answered 2021-May-16 at 18:51
This line:
server := FrontendServer{}
creates a variable server of type FrontendServer. FrontendServer is a struct, and all its fields will have their zero value. FrontendServer.Router is a pointer, so it will be nil. A nil Router is not functional; calling any of its methods may panic, just as you experienced.
Initialize it properly:
server := FrontendServer{
Router: mux.NewRouter(),
}
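Folded back into main.go, the corrected file (a sketch assuming the FrontendServer type from server.go above) would look like this:

package main

import "github.com/gorilla/mux"

func main() {
	// Initialize the Router field explicitly; the zero value of
	// FrontendServer leaves Router as a nil pointer, and calling
	// HandleFunc on a nil *mux.Router panics.
	server := FrontendServer{
		Router: mux.NewRouter(),
	}
	server.Router.HandleFunc("/", server.HandleHome)
	server.Run(":8080")
}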
QUESTION
Dear StackOverflow community!
I am trying to run the https://github.com/GoogleCloudPlatform/microservices-demo locally on minikube, so I am following their development guide: https://github.com/GoogleCloudPlatform/microservices-demo/blob/master/docs/development-guide.md
After I successfully set up minikube (using the virtualbox driver, though I also tried hyperkit with the same results) and execute skaffold run, after some time it ends up with the following error:
Building [shippingservice]...
Sending build context to Docker daemon 127kB
Step 1/14 : FROM golang:1.15-alpine as builder
---> 6466dd056dc2
Step 2/14 : RUN apk add --no-cache ca-certificates git
---> Running in 0e6d2ab2a615
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/main: DNS lookup error
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/community: DNS lookup error
ERROR: unable to select packages:
git (no such package):
required by: world[git]
Building [recommendationservice]...
Building [cartservice]...
Building [emailservice]...
Building [productcatalogservice]...
Building [loadgenerator]...
Building [checkoutservice]...
Building [currencyservice]...
Building [frontend]...
Building [adservice]...
unable to stream build output: The command '/bin/sh -c apk add --no-cache ca-certificates git' returned a non-zero code: 1. Please fix the Dockerfile and try again..
The error message suggests that DNS does not work. I tried to add 8.8.8.8 to /etc/resolv.conf on the minikube VM, but it did not help. I've noticed that after I re-run skaffold run and it fails again, the content of /etc/resolv.conf returns to its original state, containing 10.0.2.3 as the only DNS entry. Reaching the outside internet and pinging 8.8.8.8 from within the minikube VM works.
Could you point me in a direction for how I can fix the problem, and for learning how DNS inside minikube/Kubernetes works? I've heard that DNS problems inside a Kubernetes cluster are something you run into frequently.
Thanks for your answers!
Best regards, Richard
ANSWER
Answered 2021-May-14 at 11:16
Tried it with the docker driver, i.e. minikube start --driver=docker, and it works. Thanks Brian!
QUESTION
I am trying to build and deploy microservices images to a single-node Kubernetes cluster running on my development machine using minikube. I am using the cloud-native microservices demo application Online Boutique by Google to understand the use of technologies like Kubernetes, Istio etc.
Link to github repo: microservices-demo
While following the installation process, on running the command skaffold run to build and deploy my application, I get some errors:
Step 10/11 : RUN apt-get -qq update && apt-get install -y --no-install-recommends curl
---> Running in 43d61232617c
W: GPG error: http://deb.debian.org/debian buster InRelease: At least one invalid signature was encountered.
E: The repository 'http://deb.debian.org/debian buster InRelease' is not signed.
W: GPG error: http://deb.debian.org/debian buster-updates InRelease: At least one invalid signature was encountered.
E: The repository 'http://deb.debian.org/debian buster-updates InRelease' is not signed.
W: GPG error: http://security.debian.org/debian-security buster/updates InRelease: At least one invalid signature was encountered.
E: The repository 'http://security.debian.org/debian-security buster/updates InRelease' is not signed.
failed to build: couldn't build "loadgenerator": unable to stream build output: The command '/bin/sh -c apt-get -qq update && apt-get install -y --no-install-recommends curl' returned a non-zero code: 100
I receive these errors when trying to build loadgenerator. How can I resolve this issue?
ANSWER
Answered 2020-Jun-22 at 09:07
There are a few reasons why you might encounter these errors:
- There might be an issue with the existing cache and/or disk space. To fix it, clear the APT cache by executing sudo apt-get clean and sudo apt-get update.
- The same goes for existing Docker images. Execute docker image prune -f and docker container prune -f to remove unused data and free disk space.
- If you don't care about the security risks, you can try to run the apt-get command with the --allow-unauthenticated or --allow-insecure-repositories flag. According to the docs:
Ignore if packages can't be authenticated and don't prompt about it. This can be useful while working with local repositories, but is a huge security risk if data authenticity isn't ensured in another way by the user itself.
Please let me know if that helped.
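Applied to the failing Dockerfile step from the question, the last option would look roughly like this (a sketch; it skips authenticity checks, so use it only for local debugging):

RUN apt-get -qq update --allow-insecure-repositories && \
    apt-get install -y --no-install-recommends --allow-unauthenticated curl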
QUESTION
I have been working on extending the GCP Online Boutique microservices example, and I would like to add Istio AuthorizationPolicy resources to the system.
Concretely, I want an AuthorizationPolicy to block all non-whitelisted traffic to cartservice, and I want to whitelist traffic from frontend to cartservice.
Currently, I am able to block traffic with an AuthorizationPolicy, but I cannot whitelist traffic by principal or namespace.
For context, here is my system setup. (Anything not explicitly stated here is the default from the demo linked above)
Istio Version:
$ istioctl version
client version: 1.4.6
control plane version: 1.4.6-gke.0
data plane version: 1.4.6-gke.0 (16 proxies)
Command I Ran to Enforce Strict mTLS:
gcloud beta container clusters update --update-addons=Istio=ENABLED \
    --istio-config=auth=MTLS_STRICT --zone=us-central1-a
I added this ServiceAccount using kubectl apply -f:
apiVersion: v1
kind: ServiceAccount
metadata:
name: frontend-serviceaccount
---
To make this work, I added one line to the spec for the frontend Deployment, which was:
serviceAccountName: frontend-serviceaccount
Lastly, this is the AuthorizationPolicy I am trying to use to only permit traffic from the frontend to the cartservice:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: allow-cart-and-frontend-comm
namespace: default
spec:
selector:
matchLabels:
app: cartservice
rules:
- from:
- source:
namespaces:
- "default"
# principals: ["cluster.local/ns/default/sa/frontend-serviceaccount", "frontend", "frontend-serviceaccount", "frontend-serviceaccount.default.sa.cluster.local", "/api/v1/namespaces/default/serviceaccounts/frontend-serviceaccount", "frontend.default.svc.cluster.local"]
The principals commented out above are all of the different ways I have tried to refer to the service account defined above, and neither they nor the namespace work properly: as soon as the policy is applied, the frontend cannot talk to the cartservice.
Results of System Debugging Calls: Note, these were made with the AuthPolicy applied for principals: ["cluster.local/ns/default/sa/frontend-serviceaccount"]
.
$ istioctl x authz check frontend-
Checked 21/40 listeners with node IP 10.4.4.14.
LISTENER[FilterChain] CERTIFICATE mTLS (MODE) JWT (ISSUERS) AuthZ (RULES)
0.0.0.0_80[0] none no (none) no (none) no (none)
0.0.0.0_80[1] none no (none) no (none) no (none)
0.0.0.0_443[0] none no (none) no (none) no (none)
0.0.0.0_443[1] none no (none) no (none) no (none)
0.0.0.0_443[2] none no (none) no (none) no (none)
0.0.0.0_443[3] none no (none) no (none) no (none)
0.0.0.0_3550[0] none no (none) no (none) no (none)
0.0.0.0_3550[1] none no (none) no (none) no (none)
0.0.0.0_5000[0] none no (none) no (none) no (none)
0.0.0.0_5000[1] none no (none) no (none) no (none)
0.0.0.0_5050[0] none no (none) no (none) no (none)
0.0.0.0_5050[1] none no (none) no (none) no (none)
0.0.0.0_7000[0] none no (none) no (none) no (none)
0.0.0.0_7000[1] none no (none) no (none) no (none)
0.0.0.0_7070[0] none no (none) no (none) no (none)
0.0.0.0_7070[1] none no (none) no (none) no (none)
0.0.0.0_8060[0] none no (none) no (none) no (none)
0.0.0.0_8060[1] none no (none) no (none) no (none)
0.0.0.0_8080[0] none no (none) no (none) no (none)
0.0.0.0_8080[1] none no (none) no (none) no (none)
0.0.0.0_9090[0] none no (none) no (none) no (none)
0.0.0.0_9090[1] none no (none) no (none) no (none)
0.0.0.0_9091[0] none no (none) no (none) no (none)
0.0.0.0_9091[1] none no (none) no (none) no (none)
0.0.0.0_9555[0] none no (none) no (none) no (none)
0.0.0.0_9555[1] none no (none) no (none) no (none)
0.0.0.0_9901[0] none no (none) no (none) no (none)
0.0.0.0_9901[1] none no (none) no (none) no (none)
virtualOutbound[0] none no (none) no (none) no (none)
virtualOutbound[1] none no (none) no (none) no (none)
0.0.0.0_15004[0] none no (none) no (none) no (none)
0.0.0.0_15004[1] none no (none) no (none) no (none)
virtualInbound[0] none no (none) no (none) no (none)
virtualInbound[1] none no (none) no (none) no (none)
virtualInbound[2] /etc/certs/cert-chain.pem yes (PERMISSIVE) no (none) no (none)
virtualInbound[3] none no (PERMISSIVE) no (none) no (none)
0.0.0.0_15010[0] none no (none) no (none) no (none)
0.0.0.0_15010[1] none no (none) no (none) no (none)
0.0.0.0_15014[0] none no (none) no (none) no (none)
0.0.0.0_15014[1] none no (none) no (none) no (none)
0.0.0.0_50051[0] none no (none) no (none) no (none)
0.0.0.0_50051[1] none no (none) no (none) no (none)
10.4.4.14_8080[0] /etc/certs/cert-chain.pem yes (PERMISSIVE) no (none) no (none)
10.4.4.14_8080[1] none no (PERMISSIVE) no (none) no (none)
10.4.4.14_15020 none no (none) no (none) no (none)
$ istioctl x authz check cartservice-69955dd686-wf5bt
Checked 21/40 listeners with node IP 10.4.5.6.
LISTENER[FilterChain] CERTIFICATE mTLS (MODE) JWT (ISSUERS) AuthZ (RULES)
0.0.0.0_80[0] none no (none) no (none) no (none)
0.0.0.0_80[1] none no (none) no (none) no (none)
0.0.0.0_443[0] none no (none) no (none) no (none)
0.0.0.0_443[1] none no (none) no (none) no (none)
0.0.0.0_443[2] none no (none) no (none) no (none)
0.0.0.0_443[3] none no (none) no (none) no (none)
0.0.0.0_3550[0] none no (none) no (none) no (none)
0.0.0.0_3550[1] none no (none) no (none) no (none)
0.0.0.0_5000[0] none no (none) no (none) no (none)
0.0.0.0_5000[1] none no (none) no (none) no (none)
0.0.0.0_5050[0] none no (none) no (none) no (none)
0.0.0.0_5050[1] none no (none) no (none) no (none)
0.0.0.0_7000[0] none no (none) no (none) no (none)
0.0.0.0_7000[1] none no (none) no (none) no (none)
0.0.0.0_7070[0] none no (none) no (none) no (none)
0.0.0.0_7070[1] none no (none) no (none) no (none)
0.0.0.0_8060[0] none no (none) no (none) no (none)
0.0.0.0_8060[1] none no (none) no (none) no (none)
0.0.0.0_8080[0] none no (none) no (none) no (none)
0.0.0.0_8080[1] none no (none) no (none) no (none)
0.0.0.0_9090[0] none no (none) no (none) no (none)
0.0.0.0_9090[1] none no (none) no (none) no (none)
0.0.0.0_9091[0] none no (none) no (none) no (none)
0.0.0.0_9091[1] none no (none) no (none) no (none)
0.0.0.0_9555[0] none no (none) no (none) no (none)
0.0.0.0_9555[1] none no (none) no (none) no (none)
0.0.0.0_9901[0] none no (none) no (none) no (none)
0.0.0.0_9901[1] none no (none) no (none) no (none)
virtualOutbound[0] none no (none) no (none) no (none)
virtualOutbound[1] none no (none) no (none) no (none)
0.0.0.0_15004[0] none no (none) no (none) no (none)
0.0.0.0_15004[1] none no (none) no (none) no (none)
virtualInbound[0] none no (none) no (none) yes (1: ns[default]-policy[allow-cart-and-frontend-comm]-rule[0])
virtualInbound[1] none no (none) no (none) no (none)
virtualInbound[2] /etc/certs/cert-chain.pem yes (PERMISSIVE) no (none) yes (1: ns[default]-policy[allow-cart-and-frontend-comm]-rule[0])
virtualInbound[3] none no (PERMISSIVE) no (none) yes (1: ns[default]-policy[allow-cart-and-frontend-comm]-rule[0])
0.0.0.0_15010[0] none no (none) no (none) no (none)
0.0.0.0_15010[1] none no (none) no (none) no (none)
0.0.0.0_15014[0] none no (none) no (none) no (none)
0.0.0.0_15014[1] none no (none) no (none) no (none)
0.0.0.0_50051[0] none no (none) no (none) no (none)
0.0.0.0_50051[1] none no (none) no (none) no (none)
10.4.5.6_7070[0] /etc/certs/cert-chain.pem yes (PERMISSIVE) no (none) yes (1: ns[default]-policy[allow-cart-and-frontend-comm]-rule[0])
10.4.5.6_7070[1] none no (PERMISSIVE) no (none) yes (1: ns[default]-policy[allow-cart-and-frontend-comm]-rule[0])
10.4.5.6_15020 none no (none) no (none) no (none)
ANSWER
Answered 2020-Aug-01 at 15:14
For reference, after debugging in person with the OP, we discovered that the cluster was underspecified in terms of CPU. On resizing the cluster to have additional CPU (1 vCPU -> 4 vCPUs), we were able to get the authz policies working and respected.
Our hypothesis is that istiod was failing to respond to requests because of this issue. We do not know why.
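One common way to add CPU headroom on GKE is to create a node pool with a larger machine type and move the workloads onto it; a sketch, where the pool name larger-pool, the zone, and the e2-standard-4 machine type are illustrative:

gcloud container node-pools create larger-pool \
    --cluster=<CLUSTER_NAME> \
    --zone=us-central1-a \
    --machine-type=e2-standard-4 \
    --num-nodes=1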
QUESTION
I am trying to build and deploy microservices images to a single-node Kubernetes cluster running on my development machine using minikube. I am using the cloud-native microservices demo application Online Boutique by Google to understand the use of technologies like Kubernetes, Istio etc.
Link to github repo: microservices-demo
I have followed all the installation steps to locally build and deploy the microservices, and I am able to access the web frontend through my browser. However, when I click on any of the product images, say, I see this error page.
HTTP Status: 500 Internal Server Error
On doing a check using kubectl get pods, I realize that one of my pods (the recommendation service) has status CrashLoopBackOff. Running kubectl describe pods recommendationservice-55b4d6c477-kxv8r:
Namespace: default
Priority: 0
Node: minikube/192.168.99.116
Start Time: Thu, 23 Jul 2020 19:58:38 +0530
Labels: app=recommendationservice
app.kubernetes.io/managed-by=skaffold-v1.11.0
pod-template-hash=55b4d6c477
skaffold.dev/builder=local
skaffold.dev/cleanup=true
skaffold.dev/deployer=kubectl
skaffold.dev/docker-api-version=1.40
skaffold.dev/run-id=49913ced-e8df-40a7-9336-a227b56bcb5f
skaffold.dev/tag-policy=git-commit
Annotations:
Status: Running
IP: 172.17.0.14
IPs:
IP: 172.17.0.14
Controlled By: ReplicaSet/recommendationservice-55b4d6c477
Containers:
server:
Container ID: docker://2d92aa966a82fbe58c8f40f6ecf9d6d55c29f8081cb40e0423a2397e1419350f
Image: recommendationservice:2216d526d249cc8363129aed9a09d752f9ad8f458e61e50a2a99c59d000606cb
Image ID: docker://sha256:2216d526d249cc8363129aed9a09d752f9ad8f458e61e50a2a99c59d000606cb
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Thu, 23 Jul 2020 21:09:33 +0530
Finished: Thu, 23 Jul 2020 21:09:53 +0530
Ready: False
Restart Count: 29
Limits:
cpu: 200m
memory: 450Mi
Requests:
cpu: 100m
memory: 220Mi
Liveness: exec [/bin/grpc_health_probe -addr=:8080] delay=0s timeout=1s period=5s #success=1 #failure=3
Readiness: exec [/bin/grpc_health_probe -addr=:8080] delay=0s timeout=1s period=5s #success=1 #failure=3
Environment:
PORT: 8080
PRODUCT_CATALOG_SERVICE_ADDR: productcatalogservice:3550
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sbpcx (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-sbpcx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sbpcx
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 44m (x15 over 74m) kubelet, minikube Container image "recommendationservice:2216d526d249cc8363129aed9a09d752f9ad8f458e61e50a2a99c59d000606cb" already present on machine
Warning Unhealthy 9m33s (x99 over 74m) kubelet, minikube Readiness probe failed: timeout: failed to connect service ":8080" within 1s
Warning BackOff 4m25s (x294 over 72m) kubelet, minikube Back-off restarting failed container
In Events, I see Readiness probe failed: timeout: failed to connect service ":8080" within 1s. What is the reason, and how can I resolve this? Thanks for the help!
ANSWER
Answered 2020-Jul-25 at 20:26
The timeout of the readiness probe (1 second) was too short.
More info: the relevant readiness probe is defined such that /bin/grpc_health_probe -addr=:8080 is run inside the server container.
You would expect a 1-second timeout to be sufficient for such a probe, but this is running on Minikube, so that could be impacting the timeout of the probe.
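If the short timeout is indeed the culprit, one mitigation is to raise timeoutSeconds on the probes in the recommendationservice Deployment; a minimal sketch based on the probe fields shown in the describe output above (5 seconds is an illustrative value, not the project's setting):

readinessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:8080"]
  periodSeconds: 5
  timeoutSeconds: 5   # raised from the 1s shown in the describe output
livenessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:8080"]
  periodSeconds: 5
  timeoutSeconds: 5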
QUESTION
I have two applications, nginx and redis, where nginx uses redis to cache some data, so the redis address must be configured in nginx.
On the one hand, I could first apply the redis deployment, get its IP, and then apply the nginx deployment to set up the two applications in my minikube.
But on the other hand, to simplify installation in the Kubernetes Dashboard for QA, I want to create a single Kubernetes YAML file (like GoogleCloudPlatform/microservices-demo/kubernetes-manifests.yaml) to deploy these two applications in two separate Pods. However, if I do it by means of environment variables, I cannot get the redis address.
So how do I achieve it?
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-master
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
role: master
tier: backend
replicas: 2
template:
metadata:
labels:
app: redis
role: master
tier: backend
spec:
containers:
- name: master-c
image: docker.io/redis:alpine
ports:
- containerPort: 6379
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector: # Defines how the Deployment finds which Pods to manage.
matchLabels:
app: my-nginx
template:
metadata: # Defines what the newly created Pods are labeled.
labels:
app: my-nginx
tier: frontend
spec:
terminationGracePeriodSeconds: 5
containers:
- name: my-nginx # Defines container name
image: my-nginx:dev # docker image load -i my-nginx-docker_image.tar
imagePullPolicy: Never # Always, IfNotPresent (default), Never
ports:
env:
- name: NGINX_ERROR_LOG_SEVERITY_LEVEL
value: debug
- name: MY_APP_REDIS_HOST
# How to use the IP address of the POD with redis-master labeled that is created by the previous deployment?
value: 10.86.50.235
# https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
# valueFrom:
# fieldRef:
# fieldPath: status.podIP # this is the current POD IP
- name: MY_APP_CLIENT_ID
value: client_id
- name: MY_APP_CLIENT_SECRET
# https://kubernetes.io/docs/concepts/configuration/secret
value: client_secret
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
apiVersion: v1
kind: Service
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
# https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/
# metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
metadata:
name: my-nginx
spec:
type: NodePort
selector:
# Defines a proper selector for your pods with corresponding `.metadata.labels` field.
# Verify it using: kubectl get pods --selector app=my-nginx || kubectl get pod -l app=my-nginx
# Make sure the service points to correct pod by, for example, `kubectl describe pod -l app=my-nginx`
app: my-nginx
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- name: http
port: 6080
targetPort: 80
# By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
nodePort: 30080
- name: https
port: 6443
targetPort: 443
nodePort: 30443
I added some network output:
Microsoft Windows [Version 10.0.18362.900]
(c) 2019 Microsoft Corporation. All rights reserved.
PS C:\Users\ssfang> kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-pod 1/1 Running 9 5d14h
redis-master-7db899bccb-npl6s 1/1 Running 3 2d15h
redis-master-7db899bccb-rgx47 1/1 Running 3 2d15h
C:\Users\ssfang> kubectl exec redis-master-7db899bccb-npl6s -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
C:\Users\ssfang> kubectl exec my-nginx-pod -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
C:\Users\ssfang> kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller-admission ClusterIP 10.108.221.2 443/TCP 7d11h
kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 7d17h
C:\Users\ssfang> kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 172.17.0.2:53,172.17.0.5:53,172.17.0.2:9153 + 3 more... 7d17h
C:\Users\ssfang> kubectl get ep kube-dns --namespace=kube-system -o=yaml
apiVersion: v1
kind: Endpoints
metadata:
annotations:
endpoints.kubernetes.io/last-change-trigger-time: "2020-07-09T02:08:35Z"
creationTimestamp: "2020-07-01T09:34:44Z"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: KubeDNS
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:endpoints.kubernetes.io/last-change-trigger-time: {}
f:labels:
.: {}
f:k8s-app: {}
f:kubernetes.io/cluster-service: {}
f:kubernetes.io/name: {}
f:subsets: {}
manager: kube-controller-manager
operation: Update
time: "2020-07-09T02:08:35Z"
name: kube-dns
namespace: kube-system
resourceVersion: "523617"
selfLink: /api/v1/namespaces/kube-system/endpoints/kube-dns
subsets:
- addresses:
nodeName: minikube
targetRef:
kind: Pod
namespace: kube-system
resourceVersion: "523566"
uid: ed3a9f46-718a-477a-8804-e87511db16d1
- ip: 172.17.0.5
nodeName: minikube
targetRef:
kind: Pod
name: coredns-546565776c-hmm5s
namespace: kube-system
resourceVersion: "523616"
uid: ae21c65c-e937-4e3d-8a7a-636d4f780855
ports:
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
- name: dns
port: 53
protocol: UDP
C:\Users\ssfang> kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 443/TCP 7d20h
my-nginx-service NodePort 10.98.82.96 6080:30080/TCP,6443:30443/TCP 7d13h
PS C:\Users\ssfang> kubectl describe pod/my-nginx-pod | findstr IP
IP: 172.17.0.8
IPs:
IP: 172.17.0.8
PS C:\Users\ssfang> kubectl describe service/my-nginx-service | findstr IP
IP: 10.98.82.96
C:\Users\ssfang> kubectl describe pod/my-nginx-65ffdfb5b5-dzgjk | findstr IP
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Take two Pods with nginx for example to inspect network,
- C:\Users\ssfang> kubectl exec my-nginx-pod -it -- bash
# How to install nslookup, dig, host commands in Linux
apt-get install dnsutils -y # In ubuntu
yum install bind-utils -y # In RHEL/Centos
root@my-nginx-pod:/etc# apt update && apt-get install -y dnsutils iputils-ping
root@my-nginx-pod:/etc# nslookup my-nginx-service
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: my-nginx-service.default.svc.cluster.local
Address: 10.98.82.96
root@my-nginx-pod:/etc# nslookup my-nginx-pod
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find my-nginx-pod: SERVFAIL
root@my-nginx-pod:/etc# ping -c3 -W60 my-nginx-pod
PING my-nginx-pod (172.17.0.8) 56(84) bytes of data.
64 bytes from my-nginx-pod (172.17.0.8): icmp_seq=1 ttl=64 time=0.011 ms
64 bytes from my-nginx-pod (172.17.0.8): icmp_seq=2 ttl=64 time=0.021 ms
64 bytes from my-nginx-pod (172.17.0.8): icmp_seq=3 ttl=64 time=0.020 ms
--- my-nginx-pod ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2065ms
rtt min/avg/max/mdev = 0.011/0.017/0.021/0.005 ms
root@my-nginx-pod:/etc# ping -c3 -W20 my-nginx-service
PING my-nginx-service.default.svc.cluster.local (10.98.82.96) 56(84) bytes of data.
--- my-nginx-service.default.svc.cluster.local ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2060ms
root@my-nginx-pod:/etc# ping -c3 -W20 my-nginx-pod.default.svc.cluster.local
ping: my-nginx-pod.default.svc.cluster.local: Name or service not known
root@my-nginx-pod:/etc# ping -c3 -W20 my-nginx-service.default.svc.cluster.local
PING my-nginx-service.default.svc.cluster.local (10.98.82.96) 56(84) bytes of data.
--- my-nginx-service.default.svc.cluster.local ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2051ms
- C:\Users\ssfang> kubectl exec my-nginx-65ffdfb5b5-dzgjk -it -- bash
root@my-nginx-65ffdfb5b5-dzgjk:/etc# ping -c3 -W20 my-nginx-pod.default.svc.cluster.local
ping: my-nginx-pod.default.svc.cluster.local: Name or service not known
root@my-nginx-65ffdfb5b5-dzgjk:/etc# ping -c3 -W20 my-nginx-service.default.svc.cluster.local
ping: my-nginx-service.default.svc.cluster.local: Name or service not known
root@my-nginx-65ffdfb5b5-dzgjk:/etc# ping -c3 -W20 172.17.0.8
PING 172.17.0.8 (172.17.0.8) 56(84) bytes of data.
64 bytes from 172.17.0.8: icmp_seq=1 ttl=64 time=0.195 ms
64 bytes from 172.17.0.8: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 172.17.0.8: icmp_seq=3 ttl=64 time=0.039 ms
--- 172.17.0.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2055ms
rtt min/avg/max/mdev = 0.039/0.091/0.195/0.073 ms
- C:\Users\ssfang> ssh -o StrictHostKeyChecking=no -i C:\Users\ssfang\.minikube\machines\minikube\id_rsa docker@10.86.50.252 &:: minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ ping default.svc.cluster.local
ping: bad address 'default.svc.cluster.local'
$ ping my-nginx-pod.default.svc.cluster.local
ping: bad address 'my-nginx-pod.default.svc.cluster.local'
$ ping my-nginx-service.default.svc.cluster.local
ping: bad address 'my-nginx-service.default.svc.cluster.local'
$ nslookup whoami
Server: 10.86.50.1
Address: 10.86.50.1:53
** server can't find whoami: NXDOMAIN
** server can't find whoami: NXDOMAIN
$ ping -c3 -W20 172.17.0.8
PING 172.17.0.8 (172.17.0.8): 56 data bytes
64 bytes from 172.17.0.8: seq=0 ttl=64 time=0.053 ms
64 bytes from 172.17.0.8: seq=1 ttl=64 time=0.035 ms
64 bytes from 172.17.0.8: seq=2 ttl=64 time=0.040 ms
--- 172.17.0.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.035/0.042/0.053 ms
$ ping -c3 -W20 172.17.0.4
PING 172.17.0.4 (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.17.0.4: seq=1 ttl=64 time=0.039 ms
64 bytes from 172.17.0.4: seq=2 ttl=64 time=0.038 ms
--- 172.17.0.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.038/0.049/0.070 ms
ANSWER
Answered 2020-Jul-08 at 11:25
Hardcoding an IP address is not good practice. Instead, you can create a Service for redis as well and configure the Service's DNS name in your nginx deployment using Kubernetes DNS, which follows the form my-svc.my-namespace.svc.cluster-domain.example. Your nginx will then communicate with the redis container through this Service.
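A minimal sketch of that approach against the manifests above: add a Service selecting the redis-master Pods, then point MY_APP_REDIS_HOST at the Service's DNS name instead of a hard-coded IP (the Service name redis-master is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:
    app: redis
    role: master
    tier: backend
  ports:
  - port: 6379        # port the Service exposes
    targetPort: 6379  # containerPort of the redis Pods
---
# In the my-nginx Deployment, replace the hard-coded IP with the DNS name:
#   - name: MY_APP_REDIS_HOST
#     value: redis-master   # or redis-master.default.svc.cluster.local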
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install microservices-demo
Create a Google Cloud Platform project or use an existing project. Set the PROJECT_ID environment variable and ensure the Google Kubernetes Engine and Cloud Operations APIs are enabled. (A sketch of the corresponding commands follows this list.)
Clone this repository.
Create a GKE cluster.
GKE autopilot mode (see Autopilot overview to learn more):
GKE Standard mode:
Deploy the sample app to the cluster.
Wait for the Pods to be ready.
Access the web frontend in a browser using the frontend's EXTERNAL_IP.
[Optional] Clean up:
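A sketch of the commands these steps correspond to, following the upstream README's conventions (the cluster name online-boutique and the region us-central1 are assumptions; substitute your own values):

export PROJECT_ID=<PROJECT_ID>
gcloud services enable container.googleapis.com --project=${PROJECT_ID}
git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
cd microservices-demo
# GKE Autopilot mode:
gcloud container clusters create-auto online-boutique --project=${PROJECT_ID} --region=us-central1
# Deploy the sample app and wait for the Pods to be ready:
kubectl apply -f ./release/kubernetes-manifests.yaml
kubectl get pods
# Find the frontend's EXTERNAL_IP and open it in a browser:
kubectl get service frontend-external
# [Optional] Clean up:
gcloud container clusters delete online-boutique --project=${PROJECT_ID} --region=us-central1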