eShopOnContainers | Cross-platform .NET sample microservices and container based application that runs on Linux Windows | Continuous Deployment library
Trending Discussions on eShopOnContainers
QUESTION
I am trying to implement something similar to the EventBus implemented in eShopOnContainers.
Is it possible to define a method like this in Java & read the metadata about T and TH at runtime? There can be multiple classes extending IntegrationEvent (e.g. PriceChangedEvent) & we should be able to identify the exact class name at runtime.
public <T extends IntegrationEvent, TH extends IntegrationEventHandler<T>> void subscribe() {
}
ANSWER
Answered 2022-Mar-05 at 19:22
You can pass the type info into the method:
public <T extends IntegrationEvent, TH extends IntegrationEventHandler<T>> void subscribe(java.lang.Class<T> tClass, java.lang.Class<TH> thClass) {
}
Guava's EventBus will probably also be useful for you.
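The type-token approach in the answer can be sketched as a tiny self-contained example (all names here — EventBusSketch, IntegrationEventHandler, the subscriptions map — are hypothetical stand-ins, not the asker's actual code). Because the Class tokens are ordinary runtime objects, the concrete event class (e.g. PriceChangedEvent) is identifiable at runtime even though the generic parameters themselves are erased:

```java
import java.util.HashMap;
import java.util.Map;

public class EventBusSketch {

    static class IntegrationEvent { }
    static class PriceChangedEvent extends IntegrationEvent { }

    interface IntegrationEventHandler<T extends IntegrationEvent> {
        void handle(T event);
    }

    // Maps event class names to handler class names, recovered from Class tokens.
    private final Map<String, String> subscriptions = new HashMap<>();

    // The Class tokens carry the concrete types into the method at runtime.
    public <T extends IntegrationEvent, TH extends IntegrationEventHandler<T>>
            void subscribe(Class<T> eventClass, Class<TH> handlerClass) {
        subscriptions.put(eventClass.getSimpleName(), handlerClass.getSimpleName());
    }

    public String handlerFor(String eventName) {
        return subscriptions.get(eventName);
    }

    public static void main(String[] args) {
        class PriceChangedHandler implements IntegrationEventHandler<PriceChangedEvent> {
            public void handle(PriceChangedEvent event) { }
        }
        EventBusSketch bus = new EventBusSketch();
        bus.subscribe(PriceChangedEvent.class, PriceChangedHandler.class);
        System.out.println(bus.handlerFor("PriceChangedEvent"));
    }
}
```

Printing the handler's simple class name shows that the type information survived into the subscription table.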
QUESTION
I am building an Azure web service using ASP.NET Core (C#, .NET 6). I have an Azure storage account with two access keys. The idea is that if the first key expires or needs to be changed, the application can seamlessly switch to the second key while operators renew the first key.
My application has a health check, and I would like to include a health check of the storage account. I took inspiration from this article: https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/monitor-app-health#healthchecks-implementation-in-eshoponcontainers
I wrote code like this, using NuGet package AspNetCore.HealthChecks.AzureStorage:
services.AddHealthChecks()
.AddAzureBlobStorage(
$"DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={key1};EndpointSuffix=core.windows.net",
name: "catalog-storage-check",
tags: new string[] { "catalogstorage" });
The problem is that this health check will fail if key 1 is expired or revoked. That is not what I want. I want the health check to succeed as long as at least 1 of the 2 keys works, and fail if both keys fail.
Is there an easy way to do this, or will I have to code it on my own?
ANSWER
Answered 2022-Jan-28 at 08:23
I looked at the implementation of AspNetCore.HealthChecks.AzureStorage. It is actually very simple: it just checks whether a blob container exists.
So I did my own implementation using the same idea, but retrying with the second key if the first key does not work. (I already had a generic way to retry an operation if the first key fails.)
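The retry idea described above might look roughly like this (a minimal sketch with hypothetical names; a real check would probe Azure Blob Storage with a connection string built from each key, rather than this simulated probe). The check reports healthy as long as at least one of the keys works:

```java
import java.util.List;
import java.util.function.Function;

public class DualKeyHealthCheck {

    // Healthy if the probe succeeds with at least one of the keys.
    static boolean healthy(List<String> keys, Function<String, Boolean> probe) {
        for (String key : keys) {
            try {
                if (probe.apply(key)) {
                    return true;
                }
            } catch (RuntimeException ignored) {
                // This key is expired or revoked: fall through to the next one.
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulated probe: "key1" has been revoked, only "key2" still works.
        Function<String, Boolean> probe = key -> {
            if (!key.equals("key2")) {
                throw new RuntimeException("403: key revoked");
            }
            return true;
        };
        System.out.println(healthy(List.of("key1", "key2"), probe)); // healthy
        System.out.println(healthy(List.of("key1", "key1"), probe)); // unhealthy
    }
}
```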
QUESTION
I am currently following the tutorial for eShopOnContainers, and I decided to try out the gRPC functionality, similar to the project.
What I am trying to build is a gRPC client and a gRPC service, both hosted on Docker, which can talk to each other. I managed to make it work, and if you look at the Startup.cs
in the gRPC client, the Uri http://host.docker.internal:5104
manages to make the call and get the response.
However, the original eShopOnContainers project uses the http://basket-api:81
path, which is much nicer and in my opinion more maintainable. It also uses a few more components and some configuration:
- The GRPC Service uses the following in Startup.cs:
app.UsePathBase("/basket-api")
(Original project)
and some configuration in Program.cs to listen to ports:
BuildWebHost
...
.ConfigureKestrel(options =>
{
var ports = GetDefinedPorts(configuration);
options.Listen(IPAddress.Any, ports.httpPort, listenOptions =>
{
listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
});
options.Listen(IPAddress.Any, ports.grpcPort, listenOptions =>
{
listenOptions.Protocols = HttpProtocols.Http2;
});
})
...
(Original project.) The port for httpPort is 80 and for grpcPort is 81.
- The GRPC client uses the following Uri to make the call
http://basket-api:81
- In addition, there is an Envoy proxy also deployed, with rules as follows. I believe the most important parts are the rules b-short and b-long, and the cluster basket, which I believe results in the final URL being basket-api:80 (as per the cluster configuration).
I don't actually understand why it would need port 81 if it ends up calling the gRPC service, but it would be nice if someone with more knowledge could explain.
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 8001
static_resources:
listeners:
- address:
socket_address:
address: 0.0.0.0
port_value: 80
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
codec_type: auto
stat_prefix: ingress_http
route_config:
name: eshop_backend_route
virtual_hosts:
- name: eshop_backend
domains:
- "*"
routes:
- name: "c-short"
match:
prefix: "/c/"
route:
auto_host_rewrite: true
prefix_rewrite: "/catalog-api/"
cluster: catalog
- name: "c-long"
match:
prefix: "/catalog-api/"
route:
auto_host_rewrite: true
cluster: catalog
- name: "o-short"
match:
prefix: "/o/"
route:
auto_host_rewrite: true
prefix_rewrite: "/ordering-api/"
cluster: ordering
- name: "o-long"
match:
prefix: "/ordering-api/"
route:
auto_host_rewrite: true
cluster: ordering
- name: "h-long"
match:
prefix: "/hub/notificationhub"
route:
auto_host_rewrite: true
cluster: signalr-hub
timeout: 300s
upgrade_configs:
upgrade_type: "websocket"
enabled: true
- name: "b-short"
match:
prefix: "/b/"
route:
auto_host_rewrite: true
prefix_rewrite: "/basket-api/"
cluster: basket
- name: "b-long"
match:
prefix: "/basket-api/"
route:
auto_host_rewrite: true
cluster: basket
- name: "agg"
match:
prefix: "/"
route:
auto_host_rewrite: true
prefix_rewrite: "/"
cluster: shoppingagg
http_filters:
- name: envoy.router
access_log:
- name: envoy.file_access_log
filter:
not_health_check_filter: {}
config:
json_format:
time: "%START_TIME%"
protocol: "%PROTOCOL%"
duration: "%DURATION%"
request_method: "%REQ(:METHOD)%"
request_host: "%REQ(HOST)%"
path: "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%"
response_flags: "%RESPONSE_FLAGS%"
route_name: "%ROUTE_NAME%"
upstream_host: "%UPSTREAM_HOST%"
upstream_cluster: "%UPSTREAM_CLUSTER%"
upstream_local_address: "%UPSTREAM_LOCAL_ADDRESS%"
path: "/tmp/access.log"
clusters:
- name: shoppingagg
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
hosts:
- socket_address:
address: webshoppingagg
port_value: 80
- name: catalog
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
hosts:
- socket_address:
address: catalog-api
port_value: 80
- name: basket
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
hosts:
- socket_address:
address: basket-api
port_value: 80
- name: ordering
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
hosts:
- socket_address:
address: ordering-api
port_value: 80
- name: signalr-hub
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
hosts:
- socket_address:
address: ordering-signalrhub
port_value: 80
In my approach, I assumed that if I completely skip the Envoy proxy component and call the service using http://basket-api:80
, it would manage to find it, but unfortunately no luck. Now I am not sure whether my port is bad or my URI is bad, but I believe I am following a similar approach as in the original project, just skipping the proxy.
I might also be misinterpreting my Docker configuration, but I don't see any suspicious elements there.
Error stack:
RpcException: Status(StatusCode="Unavailable", Detail="Error starting gRPC call. HttpRequestException: Resource temporarily unavailable (basket-api:81) SocketException: Resource temporarily unavailable", DebugException="System.Net.Http.HttpRequestException: Resource temporarily unavailable (basket-api:81)
---> System.Net.Sockets.SocketException (11): Resource temporarily unavailable
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
at System.Net.Sockets.Socket.g__WaitForConnectWithCancellation|283_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.DefaultConnectAsync(SocketsHttpConnectionContext context, CancellationToken cancellationToken)
at System.Net.Http.ConnectHelper.ConnectAsync(Func`3 callback, DnsEndPoint endPoint, HttpRequestMessage requestMessage, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.ConnectHelper.ConnectAsync(Func`3 callback, DnsEndPoint endPoint, HttpRequestMessage requestMessage, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.GetHttp2ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at Grpc.Shared.TelemetryHeaderHandler.SendAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
at Microsoft.Extensions.Http.Logging.LoggingHttpMessageHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at Microsoft.Extensions.Http.Logging.LoggingScopeHttpMessageHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at Grpc.Net.Client.Internal.GrpcCall`2.RunCall(HttpRequestMessage request, Nullable`1 timeout)")
Index.cshtml.cs
public void OnGet()
{
var response = _greeterClient.SayHello(new HelloRequest
{
Name = "Bob"
});
Debug.WriteLine(response.Message);
}
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
services.AddRazorPages();
services.AddGrpcClient<Greeter.GreeterClient>((services, options) =>
{
// This one works
//options.Address = new Uri("http://host.docker.internal:5104");
// This one doesn't
options.Address = new Uri("http://basket-api:80");
});
}
Program.cs
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.ConfigureKestrel(options =>
{
options.Listen(IPAddress.Any, 80, listenOptions =>
{
listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
});
options.Listen(IPAddress.Any, 81, listenOptions =>
{
listenOptions.Protocols = HttpProtocols.Http2;
});
});
webBuilder.UseStartup<Startup>();
});
Startup.cs
public class Startup
{
// This method gets called by the runtime. Use this method to add services to the container.
// For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
public void ConfigureServices(IServiceCollection services)
{
services.AddGrpc();
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UsePathBase("/basket-api");
app.UseRouting();
app.UseEndpoints(endpoints =>
{
endpoints.MapGrpcService<GreeterService>();
endpoints.MapGet("/", async context =>
{
await context.Response.WriteAsync("Communication with gRPC endpoints must be made through a gRPC client. To learn how to create a client, visit: https://go.microsoft.com/fwlink/?linkid=2086909");
});
});
}
}
docker-compose.yml
version: '3.4'
services:
grpcserver:
image: ${DOCKER_REGISTRY-}grpcserver
build:
context: .
dockerfile: GrpcServer/Dockerfile
grpcclient:
image: ${DOCKER_REGISTRY-}grpcclient
build:
context: .
dockerfile: GrpcClient/Dockerfile
docker-compose.override.yml
version: '3.4'
services:
grpcserver:
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://0.0.0.0:80
ports:
- "5103:80"
- "5104:81"
volumes:
- ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
- ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
grpcclient:
environment:
- ASPNETCORE_ENVIRONMENT=Development
ports:
- "5121:80"
volumes:
- ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
- ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
ANSWER
Answered 2021-Aug-27 at 12:35
You should be able to use the Docker-generated DNS name from your docker-compose file. Your gRPC client should be able to reach the server on http://grpcserver:5103
With docker-compose you can talk between containers simply by using the name of the service and the port that you are exposing in the container.
[Edit] Removed the extension from the path, because UsePathBase()
adds a middleware that extracts the specified path base from the request path and postpends it to the request path base.
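As a rough illustration of the compose-network addressing the answer describes (service names and ports here are hypothetical), containers resolve each other by service name on the container-side port, while host-mapped ports only apply from outside the network:

```yaml
# Sketch only: service names and ports are illustrative.
services:
  grpcserver:
    ports:
      - "5104:81"          # host port 5104 maps to container port 81
  grpcclient:
    environment:
      # Inside the compose network: service name + container-side port.
      - GRPC_SERVER=http://grpcserver:81
      # From the host machine instead, the published mapping applies:
      # http://localhost:5104
```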
QUESTION
I am currently learning how microservices work for an application I am building for my portfolio and for a small community of people that want this specific application. I followed a tutorial online and successfully got IdentityServer4 to authenticate an MVC client; however, I am trying to get Swagger to work alongside the APIs, especially if they require authentication. Each time I try to authorize Swagger with IdentityServer4, I get an invalid_scope error. I have been debugging this problem for many hours and am unable to figure out the issue. I have also used the Microsoft eShopOnContainers as an example, but still no luck. Any help would be greatly appreciated. I'll try to keep the code examples short; please request any code not shown and I'll do my best to respond ASAP. Thank you.
Identity.API project Startup.cs:
public class Startup {
public void ConfigureServices(IServiceCollection services)
{
services.AddIdentityServer()
.AddInMemoryClients(Config.GetClients())
.AddInMemoryIdentityResources(Config.GetIdentityResources())
.AddInMemoryApiResources(Config.GetApiResources())
.AddInMemoryApiScopes(Config.GetApiScopes())
.AddTestUsers(Config.GetTestUsers())
.AddDeveloperSigningCredential(); // @note - demo purposes only. need X509Certificate2 for production
services.AddControllersWithViews();
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseRouting();
app.UseStaticFiles();
app.UseIdentityServer();
app.UseAuthorization();
app.UseEndpoints(endpoints => endpoints.MapDefaultControllerRoute());
}
}
Config.cs (I removed the test users to keep the code shorter, since they are not relevant). @note - I created a WeatherSwaggerUI client specifically for use with Swagger, since this was a part of the eShopOnContainers example project provided by Microsoft:
public static class Config
{
public static List<TestUser> GetTestUsers()
{
return new List<TestUser>(); // removed test users for this post
}
public static IEnumerable<Client> GetClients()
{
// @note - clients can be defined in appsettings.json
return new List<Client>
{
// m2m client credentials flow client
new Client
{
ClientId = "m2m.client",
ClientName = "Client Credentials Client",
AllowedGrantTypes = GrantTypes.ClientCredentials,
ClientSecrets = { new Secret("SuperSecretPassword".Sha256())},
AllowedScopes = { "weatherapi.read", "weatherapi.write" }
},
// interactive client
new Client
{
ClientId = "interactive",
ClientSecrets = {new Secret("SuperSecretPassword".Sha256())},
AllowedGrantTypes = GrantTypes.Code,
RedirectUris = {"https://localhost:5444/signin-oidc"},
FrontChannelLogoutUri = "https://localhost:5444/signout-oidc",
PostLogoutRedirectUris = {"https://localhost:5444/signout-callback-oidc"},
AllowOfflineAccess = true,
AllowedScopes = {"openid", "profile", "weatherapi.read"},
RequirePkce = true,
RequireConsent = false,
AllowPlainTextPkce = false
},
new Client
{
ClientId = "weatherswaggerui",
ClientName = "Weather Swagger UI",
AllowedGrantTypes = GrantTypes.Implicit,
AllowAccessTokensViaBrowser = true,
RedirectUris = {"https://localhost:5445/swagger/oauth2-redirect.html"},
PostLogoutRedirectUris = { "https://localhost:5445/swagger/" },
AllowedScopes = { "weatherswaggerui.read", "weatherswaggerui.write" },
}
};
}
public static IEnumerable<ApiResource> GetApiResources()
{
return new List<ApiResource>
{
new ApiResource("weatherapi", "Weather Service")
{
Scopes = new List<string> { "weatherapi.read", "weatherapi.write" },
ApiSecrets = new List<Secret> { new Secret("ScopeSecret".Sha256()) },
UserClaims = new List<string> { "role" }
},
new ApiResource("weatherswaggerui", "Weather Swagger UI")
{
Scopes = new List<string> { "weatherswaggerui.read", "weatherswaggerui.write" }
}
};
}
public static IEnumerable<IdentityResource> GetIdentityResources()
{
return new List<IdentityResource>
{
new IdentityResources.OpenId(),
new IdentityResources.Profile(),
new IdentityResource
{
Name = "role",
UserClaims = new List<string> { "role" }
},
};
}
public static IEnumerable<ApiScope> GetApiScopes()
{
return new List<ApiScope>
{
// weather API specific scopes
new ApiScope("weatherapi.read"),
new ApiScope("weatherapi.write"),
// SWAGGER TEST weather API specific scopes
new ApiScope("weatherswaggerui.read"),
new ApiScope("weatherswaggerui.write")
};
}
}
The next project is just the standard weather API generated when creating a web API project with VS 2019.
WeatherAPI project Startup.cs (note: I created extension methods as found in eShopOnContainers, as I liked that flow):
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
services.AddCustomAuthentication(Configuration)
.AddSwagger(Configuration);
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseSwagger()
.UseSwaggerUI(options =>
{
options.SwaggerEndpoint("/swagger/v1/swagger.json", "Weather.API V1");
options.OAuthClientId("weatherswaggerui");
options.OAuthAppName("Weather Swagger UI");
});
}
app.UseHttpsRedirection();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
}
public static class CustomExtensionMethods
{
public static IServiceCollection AddSwagger(this IServiceCollection services, IConfiguration configuration)
{
services.AddSwaggerGen(options =>
{
options.SwaggerDoc("v1", new OpenApiInfo
{
Title = "Find Scrims - Weather HTTP API Test",
Version = "v1",
Description = "Randomly generates weather data for API testing"
});
options.AddSecurityDefinition("oauth2", new OpenApiSecurityScheme
{
Type = SecuritySchemeType.OAuth2,
Flows = new OpenApiOAuthFlows()
{
Implicit = new OpenApiOAuthFlow()
{
AuthorizationUrl = new Uri($"{configuration.GetValue<string>("IdentityUrl")}/connect/authorize"),
TokenUrl = new Uri($"{configuration.GetValue<string>("IdentityUrl")}/connect/token"),
Scopes = new Dictionary<string, string>()
{
{ "weatherswaggerui", "Weather Swagger UI" }
},
}
}
});
options.OperationFilter<AuthorizeCheckOperationFilter>();
});
return services;
}
public static IServiceCollection AddCustomAuthentication(this IServiceCollection services, IConfiguration configuration)
{
JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Remove("sub");
var identityUrl = configuration.GetValue<string>("IdentityUrl");
services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(options =>
{
options.Authority = identityUrl;
options.RequireHttpsMetadata = false;
options.Audience = "weatherapi";
});
return services;
}
}
Lastly is the AuthorizeCheckOperationFilter.cs
public class AuthorizeCheckOperationFilter : IOperationFilter
{
public void Apply(OpenApiOperation operation, OperationFilterContext context)
{
var hasAuthorize = context.MethodInfo.DeclaringType.GetCustomAttributes(true).OfType<AuthorizeAttribute>().Any() ||
context.MethodInfo.GetCustomAttributes(true).OfType<AuthorizeAttribute>().Any();
if (!hasAuthorize) return;
operation.Responses.TryAdd("401", new OpenApiResponse { Description = "Unauthorized" });
operation.Responses.TryAdd("403", new OpenApiResponse { Description = "Forbidden" });
var oAuthScheme = new OpenApiSecurityScheme
{
Reference = new OpenApiReference {
Type = ReferenceType.SecurityScheme,
Id = "oauth2"
}
};
operation.Security = new List<OpenApiSecurityRequirement>
{
new OpenApiSecurityRequirement
{
[oAuthScheme ] = new [] { "weatherswaggerui" }
}
};
}
}
Again, any help or recommendations on a guide would be greatly appreciated, as Google has not provided me with any results for fixing this issue. I'm quite new to IdentityServer4 and assume it's a small issue with clients, ApiResources, and ApiScopes. Thank you.
ANSWER
Answered 2021-Mar-19 at 04:44
The Swagger client needs to access the API, and to do so it requires API scopes. What you have for the Swagger scopes does not do this. Change the scopes for the Swagger client 'weatherswaggerui' to include the API scopes, like this:
AllowedScopes = {"weatherapi.read"}
QUESTION
tl;dr
I cannot get my docker-compose Envoy API gateway to properly forward to the referenced services.
"Forwarded routes" always end up with the error:
upstream connect error or disconnect/reset before headers. reset reason: connection termination
I must have mucked up something simple, since this is so difficult to resolve. Any additional assistance would be appreciated.
Apologies for the length, the nitty-gritty!
Hello folks, this is the first time I've attempted to use Envoy as my API gateway, and I cannot seem to get past the above error. Basically I have two .NET 5 services, tags and docs, inside individual Docker containers with their ports exposed. I would like to use Envoy as a simple API gateway (no HTTPS yet) just to redirect traffic to the proper service based on a simple route match.
Now, I've been using a number of online examples as the basis for my experiment, so I must have gone wrong somewhere. Any direction pointing would be greatly appreciated. I have verified all of the following:
- I can access my web services directly from my desktop by using both "http://localhost:[portid]" and "http://host.docker.internal:[portid]"
- From the envoy docker cli instance, I can properly resolve the internal IP (docker-compose) addresses of the services.
- From the envoy docker CLI instance, I can curl http://localapp-tags/api/Cats successfully.
- Externally, I can navigate to "http://host.docker.internal:10000" and see the Envoy home page.
- NO MATTER WHAT I DO, I cannot redirect to my service externally using "http://localhost:10080/t/api/Cats" for example. (trace dump below)
Sites I've used as reference or ran locally to base my application's config file[s] on:
- Envoy Docs, Static Configuration w/ the Default demo file
- Envoy Docs, Front Proxy & Blog Post
- Reference Architecture used as a guide, .Net eShopOnContainers
- Hosting Reference
- Tutorial, Build an API Gateway with Envoy and use with .NET Core APIs (great resource)
After Googling for hours (too many sites/links to list), the most concrete suggestion was to increase the connect_timeout
from .25s to 30s, but this did not make any difference for me. So while I know there seem to be answers out there (and this may be a duplicate), in fact nothing else solved the problem :(
I've also tried to create a named network inside docker-compose (e.g. the Front-Proxy example), but this did not work either!
Again, any other suggestions to look into would be appreciated.
So as you can see, I went through a number of tutorials and baby-steps before I came up with my own docker-compose and envoy config files.
docker-compose.yaml (some parts left out for brevity; also disregard spelling errors, as I did a search-and-replace and may have missed something)
# Edge/Api Gateway
localapp-gw:
  image: ${REGISTRY:}/localapp.apigw:${PLATFORM:-linux}-${TAG:-latest}
  build:
    context: ./server-conf/envoy
  ports:
    - "9901:9901"
    - "10000:10000"
    - "10001:10001"
    - "10080:10080"
    - "10443:10443"
  volumes:
    - ./server-conf/envoy/envoy.yaml:/etc/envoy/envoy.yaml

# Backend APIS
localapp-tags:
  image: ${REGISTRY:}/localapp.tags:${PLATFORM:-linux}-${TAG:-latest}
  build:
    context: .
    dockerfile: code/Services/Tag/localapp.tags/Dockerfile
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://+:80;
    - GRPC_PORT=81
    - PORT=80
  ports:
    - "32080:80"
    - "32081:81"
    - "32443:443"
  volumes:
    - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
    - ${USERPROFILE}/.aspnet/https:/root/.aspnet/https:ro

localapp-docs:
  image: ${REGISTRY:}/localapp.docs:${PLATFORM:-linux}-${TAG:-latest}
  build:
    context: .
    dockerfile: code/Services/Document/localapp.docs/Dockerfile
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=http://+:80;
    - GRPC_PORT=81
    - PORT=80
  ports:
    - "36080:80"
    - "36081:81"
    - "36443:443"
  volumes:
    - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
    - ${USERPROFILE}/.aspnet/https:/root/.aspnet/https:ro
envoy.yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          access_log:
          - name: envoy.access_loggers.file
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              path: /tmp/stdout
          http_filters:
          - name: envoy.filters.http.router
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  host_rewrite_literal: www.envoyproxy.io
                  cluster: service_envoyproxy_io
  - name: listener_10080
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 10080
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: onecore_route
            virtual_hosts:
            - name: localapp_services
              domains: ["*"]
              routes:
              - match: { prefix: "/t/" }
                route:
                  cluster: tags_service
                  auto_host_rewrite: true
                  idle_timeout: 10s
              - match: { prefix: "/d/" }
                route:
                  cluster: docs_service
                  auto_host_rewrite: true
                  idle_timeout: 10s
          http_filters:
          - name: envoy.filters.http.router
            typed_config: {}
  clusters:
  - name: service_envoyproxy_io
    connect_timeout: 30s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    load_assignment:
      cluster_name: service_envoyproxy_io
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: www.envoyproxy.io
                port_value: 443
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        sni: www.envoyproxy.io
  - name: tags_service
    connect_timeout: 30s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: tags_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: localapp-tags
                port_value: 80
  - name: docs_service
    connect_timeout: 30s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: docs_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: localapp-docs
                port_value: 80
layered_runtime:
  layers:
  - name: static_layer_0
    static_layer:
      envoy:
        resource_limits:
          listener:
            example_listener_name:
              connection_limit: 10000
So yeah, I'm a bit stuck here trying to figure out what I missed/botched up (but I could just be code-blind after these two weeks). The envoy.yaml file was copied from reference material, so I'm just missing something obvious (and hopefully it is not just an ico issue!).
There are a few errors in the tracelog, including the main one above, but I am not yet able to figure out the issue (or to appreciate/understand the tracelog)! Excerpt:
[2021-02-28 10:51:10.705][17][trace][misc] [source/common/event/scaled_range_timer_manager_impl.cc:60] enableTimer called on 0x1e07bf442380 for 3600000ms, min is 3600000ms
[2021-02-28 10:51:10.705][17][debug][conn_handler] [source/server/connection_handler_impl.cc:501] [C2] new connection
[2021-02-28 10:51:10.705][17][trace][connection] [source/common/network/connection_impl.cc:548] [C2] socket event: 2
[2021-02-28 10:51:10.705][17][trace][connection] [source/common/network/connection_impl.cc:657] [C2] write ready
[2021-02-28 10:51:10.707][17][trace][connection] [source/common/network/connection_impl.cc:548] [C2] socket event: 3
[2021-02-28 10:51:10.707][17][trace][connection] [source/common/network/connection_impl.cc:657] [C2] write ready
[2021-02-28 10:51:10.707][17][trace][connection] [source/common/network/connection_impl.cc:586] [C2] read ready. dispatch_buffered_data=false
[2021-02-28 10:51:10.707][17][trace][connection] [source/common/network/raw_buffer_socket.cc:25] [C2] read returns: 357
[2021-02-28 10:51:10.707][17][trace][connection] [source/common/network/raw_buffer_socket.cc:39] [C2] read error: Resource temporarily unavailable
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:571] [C2] parsing 357 bytes
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:847] [C2] message begin
[2021-02-28 10:51:10.708][17][debug][http] [source/common/http/conn_manager_impl.cc:254] [C2] new stream
[2021-02-28 10:51:10.708][17][trace][misc] [source/common/event/scaled_range_timer_manager_impl.cc:60] enableTimer called on 0x1e07bf442400 for 300000ms, min is 300000ms
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:494] [C2] completed header: key=Host value=host.docker.internal:10080
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:494] [C2] completed header: key=User-Agent value=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:494] [C2] completed header: key=Accept value=text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:494] [C2] completed header: key=Accept-Language value=en-US,en;q=0.5
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:494] [C2] completed header: key=Accept-Encoding value=gzip, deflate
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:494] [C2] completed header: key=Connection value=keep-alive
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:699] [C2] onHeadersCompleteBase
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:494] [C2] completed header: key=Upgrade-Insecure-Requests value=1
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:952] [C2] Server: onHeadersComplete size=7
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:823] [C2] message complete
[2021-02-28 10:51:10.708][17][trace][connection] [source/common/network/connection_impl.cc:346] [C2] readDisable: disable=true disable_count=0 state=0 buffer_length=357
[2021-02-28 10:51:10.708][17][debug][http] [source/common/http/conn_manager_impl.cc:886] [C2][S9636672460028773107] request headers complete (end_stream=true): ':authority', 'host.docker.internal:10080' ':path', '/t/api/Tags' ':method', 'GET' 'user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0' 'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' 'accept-language', 'en-US,en;q=0.5' 'accept-encoding', 'gzip, deflate' 'connection', 'keep-alive' 'upgrade-insecure-requests', '1'
[2021-02-28 10:51:10.708][17][debug][http] [source/common/http/filter_manager.cc:755] [C2][S9636672460028773107] request end stream
[2021-02-28 10:51:10.708][17][debug][router] [source/common/router/router.cc:425] [C2][S9636672460028773107] cluster 'tags_service' match for URL '/t/api/Tags'
[2021-02-28 10:51:10.708][17][debug][router] [source/common/router/router.cc:582] [C2][S9636672460028773107] router decoding headers: ':authority', 'host.docker.internal:10080' ':path', '/t/api/Tags' ':method', 'GET' ':scheme', 'http' 'user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0' 'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' 'accept-language', 'en-US,en;q=0.5' 'accept-encoding', 'gzip, deflate' 'upgrade-insecure-requests', '1' 'x-forwarded-proto', 'http' 'x-request-id', 'd3ddb11a-85bb-479e-a29f-b850439d04aa' 'x-envoy-expected-rq-timeout-ms', '15000'
[2021-02-28 10:51:10.708][17][debug][pool] [source/common/http/conn_pool_base.cc:79] queueing stream due to no available connections
[2021-02-28 10:51:10.708][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:106] creating a new connection
[2021-02-28 10:51:10.708][17][debug][client] [source/common/http/codec_client.cc:41] [C3] connecting
[2021-02-28 10:51:10.708][17][debug][connection] [source/common/network/connection_impl.cc:860] [C3] connecting to 172.18.0.11:80
[2021-02-28 10:51:10.708][17][debug][connection] [source/common/network/connection_impl.cc:876] [C3] connection in progress
[2021-02-28 10:51:10.708][17][trace][http2] [source/common/http/http2/codec_impl.cc:1355] Codec does not have Metadata frame support.
[2021-02-28 10:51:10.708][17][debug][http2] [source/common/http/http2/codec_impl.cc:1184] [C3] updating connection-level initial window size to 268435456
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/filter_manager.cc:498] [C2][S9636672460028773107] decode headers called: filter=0x1e07bf442100 status=1
[2021-02-28 10:51:10.708][17][trace][misc] [source/common/event/scaled_range_timer_manager_impl.cc:60] enableTimer called on 0x1e07bf442400 for 10000ms, min is 10000ms
[2021-02-28 10:51:10.708][17][trace][http] [source/common/http/http1/codec_impl.cc:620] [C2] parsed 357 bytes
[2021-02-28 10:51:10.708][17][trace][connection] [source/common/network/connection_impl.cc:548] [C2] socket event: 2
[2021-02-28 10:51:10.708][17][trace][connection] [source/common/network/connection_impl.cc:657] [C2] write ready
[2021-02-28 10:51:10.708][17][trace][connection] [source/common/network/connection_impl.cc:548] [C3] socket event: 2
[2021-02-28 10:51:10.708][17][trace][connection] [source/common/network/connection_impl.cc:657] [C3] write ready
[2021-02-28 10:51:10.708][17][debug][connection] [source/common/network/connection_impl.cc:666] [C3] connected
[2021-02-28 10:51:10.708][17][trace][connection] [source/common/network/connection_impl.cc:407] [C3] raising connection event 2
[2021-02-28 10:51:10.708][17][debug][client] [source/common/http/codec_client.cc:80] [C3] connected
[2021-02-28 10:51:10.708][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:225] [C3] attaching to next stream
[2021-02-28 10:51:10.708][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:130] [C3] creating stream
[2021-02-28 10:51:10.708][17][debug][router] [source/common/router/upstream_request.cc:354] [C2][S9636672460028773107] pool ready
[2021-02-28 10:51:10.708][17][trace][http2] [source/common/http/http2/codec_impl.cc:946] [C3] send data: bytes=24
[2021-02-28 10:51:10.708][17][trace][connection] [source/common/network/connection_impl.cc:465] [C3] writing 24 bytes,
end_stream false [2021-02-28 10:51:10.709][17][trace][http2] [source/common/http/http2/codec_impl.cc:924] [C3] about to send frame type=4, flags=0 [2021-02-28 10:51:10.709][17][trace][http2] [source/common/http/http2/codec_impl.cc:946] [C3] send data: bytes=39 [2021-02-28 10:51:10.709][17][trace][connection] [source/common/network/connection_impl.cc:465] [C3] writing 39 bytes, end_stream false [2021-02-28 10:51:10.709][17][trace][http2] [source/common/http/http2/codec_impl.cc:853] [C3] sent frame type=4 [2021-02-28 10:51:10.709][17][trace][http2] [source/common/http/http2/codec_impl.cc:924] [C3] about to send frame type=8, flags=0 [2021-02-28 10:51:10.709][17][trace][http2] [source/common/http/http2/codec_impl.cc:946] [C3] send data: bytes=13 [2021-02-28 10:51:10.709][17][trace][connection] [source/common/network/connection_impl.cc:465] [C3] writing 13 bytes, end_stream false [2021-02-28 10:51:10.709][17][trace][http2] [source/common/http/http2/codec_impl.cc:853] [C3] sent frame type=8 [2021-02-28 10:51:10.709][17][trace][http2] [source/common/http/http2/codec_impl.cc:924] [C3] about to send frame type=1, flags=5 [2021-02-28 10:51:10.709][17][trace][http2] [source/common/http/http2/codec_impl.cc:946] [C3] send data: bytes=282 [2021-02-28 10:51:10.709][17][trace][connection] [source/common/network/connection_impl.cc:465] [C3] writing 282 bytes, end_stream false [2021-02-28 10:51:10.709][17][trace][http2] [source/common/http/http2/codec_impl.cc:853] [C3] sent frame type=1 [2021-02-28 10:51:10.709][17][trace][connection] [source/common/network/connection_impl.cc:657] [C3] write ready [2021-02-28 10:51:10.709][17][trace][connection] [source/common/network/raw_buffer_socket.cc:68] [C3] write returns: 358 [2021-02-28 10:51:10.709][17][trace][connection] [source/common/network/connection_impl.cc:548] [C3] socket event: 2 [2021-02-28 10:51:10.709][17][trace][connection] [source/common/network/connection_impl.cc:657] [C3] write ready [2021-02-28 
10:51:10.714][17][trace][connection] [source/common/network/connection_impl.cc:548] [C3] socket event: 3 [2021-02-28 10:51:10.714][17][trace][connection] [source/common/network/connection_impl.cc:657] [C3] write ready [2021-02-28 10:51:10.714][17][trace][connection] [source/common/network/connection_impl.cc:586] [C3] read ready. dispatch_buffered_data=false [2021-02-28 10:51:10.714][17][trace][connection] [source/common/network/raw_buffer_socket.cc:25] [C3] read returns: 135 [2021-02-28 10:51:10.714][17][trace][connection] [source/common/network/raw_buffer_socket.cc:39] [C3] read error: Resource temporarily unavailable [2021-02-28 10:51:10.714][17][trace][http2] [source/common/http/http2/codec_impl.cc:644] [C3] dispatching 135 bytes [2021-02-28 10:51:10.714][17][debug][http2] [source/common/http/http2/codec_impl.cc:890] [C3] invalid http2: Remote peer returned unexpected data while we expected SETTINGS frame. Perhaps, peer does not support HTTP/2 properly. [2021-02-28 10:51:10.714][17][trace][http2] [source/common/http/http2/codec_impl.cc:671] [C3] dispatched 135 bytes [2021-02-28 10:51:10.714][17][trace][http2] [source/common/http/http2/codec_impl.cc:924] [C3] about to send frame type=7, flags=0 [2021-02-28 10:51:10.714][17][trace][http2] [source/common/http/http2/codec_impl.cc:946] [C3] send data: bytes=34 [2021-02-28 10:51:10.714][17][trace][connection] [source/common/network/connection_impl.cc:465] [C3] writing 34 bytes, end_stream false [2021-02-28 10:51:10.715][17][trace][http2] [source/common/http/http2/codec_impl.cc:853] [C3] sent frame type=7 [2021-02-28 10:51:10.715][17][debug][http2] [source/common/http/http2/codec_impl.cc:856] [C3] sent goaway code=1 [2021-02-28 10:51:10.715][17][debug][client] [source/common/http/codec_client.cc:137] [C3] Error dispatching received data: The user callback function failed [2021-02-28 10:51:10.715][17][debug][connection] [source/common/network/connection_impl.cc:132] [C3] closing data_to_write=34 type=1 [2021-02-28 
10:51:10.715][17][trace][connection] [source/common/network/raw_buffer_socket.cc:68] [C3] write returns: 34 [2021-02-28 10:51:10.715][17][debug][connection] [source/common/network/connection_impl.cc:241] [C3] closing socket: 1 [2021-02-28 10:51:10.715][17][trace][connection] [source/common/network/connection_impl.cc:407] [C3] raising connection event 1 [2021-02-28 10:51:10.716][17][debug][client] [source/common/http/codec_client.cc:99] [C3] disconnect. resetting 1 pending requests [2021-02-28 10:51:10.716][17][debug][client] [source/common/http/codec_client.cc:125] [C3] request reset [2021-02-28 10:51:10.716][17][trace][main] [source/common/event/dispatcher_impl.cc:213] item added to deferred deletion list (size=1) [2021-02-28 10:51:10.716][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:159] [C3] destroying stream: 0 remaining [2021-02-28 10:51:10.716][17][debug][router] [source/common/router/router.cc:1026] [C2][S9636672460028773107] upstream reset: reset reason: connection termination, transport failure reason: [2021-02-28 10:51:10.716][17][debug][http] [source/common/http/filter_manager.cc:839] [C2][S9636672460028773107] Sending local reply with details upstream_reset_before_response_started{connection termination} [2021-02-28 10:51:10.716][17][trace][misc] [source/common/event/scaled_range_timer_manager_impl.cc:60] enableTimer called on 0x1e07bf442400 for 10000ms, min is 10000ms [2021-02-28 10:51:10.716][17][debug][http] [source/common/http/conn_manager_impl.cc:1484] [C2][S9636672460028773107] encoding headers via codec (end_stream=false): ':status', '503' 'content-length', '95' 'content-type', 'text/plain' 'date', 'Sun, 28 Feb 2021 10:51:10 GMT' 'server', 'envoy' [2021-02-28 10:51:10.716][17][trace][connection] [source/common/network/connection_impl.cc:465] [C2] writing 134 bytes, end_stream false [2021-02-28 10:51:10.716][17][trace][misc] [source/common/event/scaled_range_timer_manager_impl.cc:60] enableTimer called on 0x1e07bf442400 for 10000ms, 
min is 10000ms [2021-02-28 10:51:10.716][17][trace][http] [source/common/http/conn_manager_impl.cc:1493] [C2][S9636672460028773107] encoding data via codec (size=95 end_stream=true) [2021-02-28 10:51:10.716][17][trace][connection] [source/common/network/connection_impl.cc:465] [C2] writing 95 bytes, end_stream false [2021-02-28 10:51:10.716][17][trace][connection] [source/common/network/connection_impl.cc:346] [C2] readDisable: disable=false disable_count=1 state=0 buffer_length=0 [2021-02-28 10:51:10.716][17][trace][main] [source/common/event/dispatcher_impl.cc:213] item added to deferred deletion list (size=2) [2021-02-28 10:51:10.716][17][trace][misc] [source/common/event/scaled_range_timer_manager_impl.cc:60] enableTimer called on 0x1e07bf442380 for 3600000ms, min is 3600000ms [2021-02-28 10:51:10.716][17][debug][pool] [source/common/conn_pool/conn_pool_base.cc:343] [C3] client disconnected, failure reason: [2021-02-28 10:51:10.716][17][trace][main] [source/common/event/dispatcher_impl.cc:213] item added to deferred deletion list (size=3) [2021-02-28 10:51:10.716][17][trace][main] [source/common/event/dispatcher_impl.cc:89] clearing deferred deletion list (size=3) [2021-02-28 10:51:10.716][17][trace][connection] [source/common/network/connection_impl.cc:548] [C2] socket event: 2 [2021-02-28 10:51:10.716][17][trace][connection] [source/common/network/connection_impl.cc:657] [C2] write ready [2021-02-28 10:51:10.716][17][trace][connection] [source/common/network/raw_buffer_socket.cc:68] [C2] write returns: 229 [2021-02-28 10:51:10.747][17][trace][connection] [source/common/network/connection_impl.cc:548] [C2] socket event: 3 [2021-02-28 10:51:10.747][17][trace][connection] [source/common/network/connection_impl.cc:657] [C2] write ready [2021-02-28 10:51:10.747][17][trace][connection] [source/common/network/connection_impl.cc:586] [C2] read ready. 
dispatch_buffered_data=false [2021-02-28 10:51:10.747][17][trace][connection] [source/common/network/raw_buffer_socket.cc:25] [C2] read returns: 323 [2021-02-28 10:51:10.747][17][trace][connection] [source/common/network/raw_buffer_socket.cc:39] [C2] read error: Resource temporarily unavailable
Full trace log from the above session ... here.
Apologies and thanks in advance for any pointers or suggestions. I can provide any missing reference docs as needed.
Thanks again!
ANSWER
Answered 2021-Feb-28 at 14:32
I'm going to give it a few more days, but I think I discovered the issues, which appear to be fat-fingered copy-and-paste mistakes.
- I removed the offending empty http2 options as mentioned in the above comment. This resulted in the cluster definitions being a bit cleaner:
- name: tags_service
  connect_timeout: 30s
  type: strict_dns
  lb_policy: round_robin
  # http2_protocol_options: {}  # !remove these lines
  load_assignment:
    cluster_name: tags_service
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: localapp-tags
              port_value: 80
- Once the "error" disappeared and I was left with a plain blank page and internal 404 errors (seen with curl) ... I realized that the route directives needed to be modified:
route:
  cluster: tags_service
  auto_host_rewrite: true
  prefix_rewrite: "/"  # add this line so that we remove the /{route}/ from the gw
  idle_timeout: 10s
Now I will go back and re-enable some of the settings I had previously and maybe attempt the Edge Proxy Settings. At least it is running properly now and I learned a lot.
The best part is the role that SO plays in the process. (This is not sarcasm!) After jumping around for days with different working examples, I of course copied something into my configs that was not correct (guess I need to RTFM more). Posting an SO question makes one re-evaluate all the steps and processes taken, and lights a bit of a fire.
Thanks SO!
QUESTION
I am trying to work with .NET 5 to update my knowledge, so I started to review the eShopOnContainers-ServicesAndWebApps project here.
I have read its older versions (2.2 and 3.1), which are rich samples with a lot of fantastic points. But here in .NET 5, at first glance at Program.cs, I see a lot of methods and properties without any class, which I can't understand and which confuses me. How can we have methods without a class in a .cs file?
ANSWER
Answered 2021-Feb-25 at 12:08
It's a feature of C# 9: top-level statements.
Top-level statements remove unnecessary ceremony from many applications.
Only one file in your application may use top-level statements. If the compiler finds top-level statements in multiple source files, it’s an error. It’s also an error if you combine top-level statements with a declared program entry point method, typically a Main method. In a sense, you can think that one file contains the statements that would normally be in the Main method of a Program class.
So you can now write a program that contains only this line of code:
System.Console.WriteLine("Hello World!");
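A slightly fuller sketch (not the actual eShopOnContainers Program.cs) shows that `args` is implicitly in scope and that local functions are allowed, all without declaring a class or a Main method:

```csharp
// Program.cs — this is the ENTIRE file. The C# 9 compiler synthesizes
// the entry point; `args` is available implicitly in the top-level scope.
using System;

// Local functions may be declared alongside top-level statements.
string Greet(string name) =>
    string.IsNullOrEmpty(name) ? "Hello World!" : $"Hello, {name}!";

Console.WriteLine(Greet(args.Length > 0 ? args[0] : ""));
```

In eShopOnContainers' .NET 5 services, the host-building code that used to live in `Program.Main` sits directly at the top of the file in this way.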
QUESTION
First of all as for "simplified DDD/CQS pattern" I am referencing https://github.com/dotnet-architecture/eShopOnContainers example dotnet application (which is not "strict" DDD/CQRS). However I am not dotnet developer and my question is related to general design patterns / code organization.
My own case is as follows: I have a "conference" domain with a "conference" root aggregate and a "conference_edition" aggregate. In RDBMS language, a conference has many editions. In this domain, all "communication" with the outside world goes through the root aggregate "conference". So there are use cases like Conference.create(), Conference.addDraftEdition(), Conference.publishEdition(), etc.
I have also "booking" domain which manages bookings for specific conference editions (people book tickets for a conference edition).
My question is related to the read model / aggregate of "conference" service (domain?). CMS of this application needs following read model for a conference (example in json for simplicity):
{
"name": "My conference name",
"editions": [
{
// name, startDate, endDate can be directly read from "conference" domain aggregate
"name": "My conference 2021",
"startDate": "...",
"endDate": "...",
// following data come from "booking" domain
"completedBookingsCount": 20,
"pendingBookingsCount": 10,
"canceledBookingsCount": 2
},
...
]
}
My question is how to "make" this read model (as in the simplified DDD example eShopOnContainers, I assume that queries are allowed to directly query domain aggregates):
- I could make an additional (additional to the domain aggregates) read aggregate and update it by handling BookingCreated, BookingCompleted and BookingCanceled integration events from the "booking" service, to keep statistics per conference edition in the "conference" service read model, and then just read it along with the "conference" and "edition" aggregates. In that case, how should I organize my code regarding the additional read aggregate? I mean, it's still an aggregate, so should it "have" its own domain with commands like "increaseCompletedEventsForConferenceEdition" etc.? Or should I make it as simple as possible, without all the domain constraints, like for example:
class BookingCanceledIntegrationEventHandler {
    handle(BookingCanceledIntegrationEvent event) {
        editionStatisticsRepository.increaseCanceledBookingsCount(event.editionId);
        if (event.prevBookingStatus == 'pending') {
            editionStatisticsRepository.decreasePendingBookingsCount(event.editionId);
        } else if (event.prevBookingStatus == 'completed') {
            editionStatisticsRepository.decreaseCompletedBookingsCount(event.editionId);
        }
    }
}
- Or maybe I should handle this read-model "shaping" on the API gateway side? I mean, query the two services "conference" and "booking" and join the data on the API gateway side to compose the required read model?
- Or maybe I should add edition statistics as a kind of ValueObject to the "edition" aggregate? However, AFAIK ValueObjects must be immutable, and moreover it feels wrong to put this data (edition statistics) into the "conference" domain...
How should/could this be organised according to the DDD/CQS pattern?
ANSWER
Answered 2021-Feb-14 at 19:43
Either of the first or second is reasonable. The choice between them will be heavily influenced by how much you want to spread business logic into infrastructure.
Since the conference and booking domains appear to be different bounded contexts, the third option is really only justifiable if there's some change to a conference edition that's only allowable if bookings are in a certain state.
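The first option (a plain denormalizer handling booking integration events, with no aggregate ceremony) can be sketched in C# as below. The types `EditionStatistics` and `EditionStatisticsProjection` are hypothetical, and an in-memory dictionary stands in for whatever repository the read model actually uses:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical read-model row: one set of counters per conference edition.
public class EditionStatistics
{
    public Guid EditionId { get; set; }
    public int Completed { get; set; }
    public int Pending { get; set; }
    public int Canceled { get; set; }
}

// A denormalizer, not an aggregate: it enforces no invariants, it just
// projects booking integration events into counters the CMS can read.
public class EditionStatisticsProjection
{
    private readonly Dictionary<Guid, EditionStatistics> _store = new Dictionary<Guid, EditionStatistics>();

    public EditionStatistics Get(Guid editionId)
    {
        if (!_store.TryGetValue(editionId, out var stats))
        {
            stats = new EditionStatistics { EditionId = editionId };
            _store[editionId] = stats;
        }
        return stats;
    }

    public void OnBookingCanceled(Guid editionId, string prevBookingStatus)
    {
        var stats = Get(editionId);
        stats.Canceled++;
        if (prevBookingStatus == "pending") stats.Pending--;
        else if (prevBookingStatus == "completed") stats.Completed--;
    }
}
```

Keeping it this simple is consistent with the answer above: the projection lives on the infrastructure side, so domain-style commands like "increaseCompletedEventsForConferenceEdition" are unnecessary ceremony.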
QUESTION
It's a little bit difficult to explain, I apologize for that. But I need help. I'm working on a generic approach for events. All my code is based on the eShopOnContainers example, but my handlers should return a value.
In eShopOnContainers the return type is just Task, so they can easily do
var eventType = _subsManager.GetEventTypeByName(eventName);
var integrationEvent = JsonConvert.DeserializeObject(message, eventType);
var concreteType = typeof(IIntegrationEventHandler<>).MakeGenericType(eventType);
await (Task)concreteType.GetMethod("Handle").Invoke(handler, new object[] { integrationEvent });
Let's say I have
public interface IRequestMessageHandler<TRequest, TReply>
{
    Task<TReply> Handle(TRequest request);
}
In my case, I need to cast to Task<TReply>, but the type for TReply is only stored in a variable:
Type concreteHandlerType = typeof(IRequestMessageHandler<,>).MakeGenericType(subscription.RequestType, subscription.ReplyType);
Type concreteReplyType = typeof(Task<>).MakeGenericType(subscription.ReplyType);
Then I need to cast the result to concreteReplyType:
var reply = await (concreteReplyType)concreteType.GetMethod("Handle")?.Invoke(handler, new[] { integrationEvent });
Please help me, because I don't understand how this is possible. Thank you in advance. Please let me know what information I should add to help you understand better.
Fiddle with code to reproduce https://dotnetfiddle.net/X3m4A1
ANSWER
Answered 2021-Jan-24 at 18:46
To solve your problem, a simple solution is to use dynamic (see here).
var method = concreteType.GetMethod("Handle");
var task = (Task)method.Invoke(handler, new object[] { integrationEvent });
await task;
object result = ((dynamic)task).Result;
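If you would rather avoid `dynamic`, the same value can be read via reflection on the closed `Task<T>` type. This is a sketch, assuming `task` is the non-generic `Task` reference returned by `Invoke`; the helper name is made up:

```csharp
using System;
using System.Reflection;
using System.Threading.Tasks;

public static class TaskResultReader
{
    // Reads Task<T>.Result from a Task whose closed generic type is only
    // known at runtime. Returns null for a plain (non-generic) Task.
    public static object GetResult(Task task)
    {
        task.GetAwaiter().GetResult();                       // wait and propagate any exception
        PropertyInfo result = task.GetType().GetProperty("Result");
        return result == null ? null : result.GetValue(task);
    }
}
```

Used in place of the `dynamic` cast above: `object reply = TaskResultReader.GetResult(task);`. The `dynamic` version is shorter; the reflection version makes the runtime lookup explicit.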
QUESTION
I can see this comment in each Dockerfile:
Keep the project list and the dotnet restore command identical in all Dockerfiles to maximize image cache utilization
But I am confused about two things:
- It would build fast (due to caching), but does it not take extra space in the container FS?
- And if I add a new project to the solution in the future, should I make that change in every Dockerfile?
Here https://github.com/dotnet-architecture/eShopOnContainers/blob/dev/src/Services/Basket/Basket.API/Dockerfile the basket-api Dockerfile has COPY commands for the projects.
ANSWER
Answered 2021-Jan-05 at 18:27
The reason for doing this is to take advantage of Docker's layer caching. Yes, there is maintenance overhead to ensure that the list here reflects the actual set of project files that are defined in the repo.
The caching optimization comes into play for iterative development scenarios (inner loop). In those scenarios, you're typically making changes to source code files, not the project files. If you haven't changed any project files since you last built the Dockerfile, you get to take advantage of Docker's layer caching and skip the restoration of NuGet packages for those projects, which can be quite time consuming.
Yes, there is a small amount of extra space being included in the container image because the project files end up getting copied twice which results in duplicate data amongst the two layers. But that's fairly small because project files aren't that big.
There are always tradeoffs. In this case, the desire is to have a fast inner loop build time at the expense of maintenance and a small amount of extra space in the container image.
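For reference, the pattern under discussion has roughly this shape (a sketch with hypothetical solution and project names, not the actual eShopOnContainers Dockerfile):

```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src

# 1. Copy ONLY the solution and project files first. This layer is
#    invalidated only when a .csproj/.sln changes, so everyday source
#    edits reuse the cached restore layer below.
COPY "MySolution.sln" "MySolution.sln"
COPY "Services/Basket/Basket.API/Basket.API.csproj" "Services/Basket/Basket.API/"
COPY "Services/Catalog/Catalog.API/Catalog.API.csproj" "Services/Catalog/Catalog.API/"

RUN dotnet restore "MySolution.sln"

# 2. Now copy the rest of the source; changes here do NOT re-run restore.
COPY . .
RUN dotnet publish "Services/Basket/Basket.API/Basket.API.csproj" -c Release -o /app
```

The duplicated project files mentioned above are the ones copied in step 1 and then again by `COPY . .` in step 2.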
QUESTION
After a lot of reading and wading through finicky solutions to this problem, it's bizarre that projects are structured in this way and yet there doesn't seem to be a straightforward way of creating that structure.
A brilliant example is Microsoft's eShopOnContainers project - https://github.com/dotnet-architecture/eShopOnContainers
The repository has a physical src folder, when you open the eShopOnContainers-ServicesAndWebApps.sln - the solution explorer displays the src as if it were a logical folder.
I find that the .sln also lives in a peculiar place - directly within the src folder, it's almost like this .sln has been created manually and projects added to it as necessary.
The same is the case with the Services folder and its contents; each project lives as Services/Identity/Identity.API. This full structure displays in both the file explorer and the solution explorer, but when I attempt to recreate it I get 1 of 2 scenarios:
- I create the folder structure as physical folders and they do not display in the solution explorer.
- I create the folder structure as logical/solution folders and they do not display in the file explorer.
This doesn't seem possible via the project creation GUI. I imagine this has been done by creating the projects and structure via the dotnet CLI, but I can't seem to figure it out. How is this done?
ANSWER
Answered 2020-Nov-18 at 20:11
The solution to this via the Visual Studio GUI:
Create your logical folder structure as you see fit within Visual Studio. Then, to have your physical folders match that structure (or even differ from it), specify the desired path when creating each project; the file system will create the physical folders from the specified path and you will have achieved the desired effect.
Example:
Create C:\myproject\src\
Visual Studio -> File -> New -> Project -> Blank Solution (name: myprojectsolution)
Right click myprojectsolution -> Add -> New Solution Folder (name: src)
Move the myprojectsolution.sln into C:\myproject\src\, then delete the myprojectsolution folder which was created with the .sln file.
Logical and Physical folders are now matching, the .sln lives directly in src, and src will display in the solution explorer.
At this point, let's say you want to create an Identity.IDP project at Microservices\Identity\Identity.IDP, with a matching logical and physical structure.
Visual Studio -> right click src -> Add -> New Solution Folder (name: Microservices)
Visual Studio -> right click Microservices -> Add -> New Solution Folder (name: Identity)
Visual Studio -> right click Identity -> Add -> New Project (name: Identity.IDP, location: C:\myproject\src\Microservices\Identity)
The above will result in a matching folder structure existing in both the solution explorer and the file explorer.
In summary - create your solution structure via the Visual Studio GUI, create your physical structure via specifying the path when creating the project.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported