portal | An API-driven, in-kernel layer 2/3 load balancer | Load Balancing library
kandi X-RAY | portal Summary
Portal is an api-driven, in-kernel layer 2/3 load balancer.
Top functions reviewed by kandi - BETA
- startPortal starts the main process.
- putServices replaces the stored set of services.
- LoadConfigFile loads the configuration file.
- parseReqService parses a service from a request.
- routes returns the router's routes.
- Sync updates the ipvsadm rules.
- putService stores a service.
- rest performs an HTTP request.
- postServer adds a server.
- putServers adds servers to a cluster.
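The reviewed functions suggest a small REST surface for managing services and their backend servers. As a purely hypothetical sketch, driving such an API could look like the following; the paths, port, and JSON fields are guesses inferred from the function names (putService, postServer), not taken from portal's documentation:

```sh
# Hypothetical sketch only: endpoint paths, port, and payload fields are
# inferred from the reviewed function names, not from the project's docs.
curl -k -X PUT https://127.0.0.1:8443/services/my-service \
  -d '{"host":"192.168.0.15","port":80,"type":"tcp","scheduler":"rr"}'

# Add a backend server to that service (path assumed).
curl -k -X POST https://127.0.0.1:8443/services/my-service/servers \
  -d '{"host":"192.168.0.16","port":8080}'
```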
Community Discussions
Trending Discussions on portal
QUESTION
I'm new to Azure and trying to set up my nextjs client app and my ASP.NET Core backend app. Everything seems to play well now, except for file uploads. It's working on localhost, but in production the backend returns a 404 web page (attached image) before the request reaches the actual API endpoint. I've also successfully made a multipart/form-data POST request from Postman on my computer.
The way I implemented this is that I'm proxying the upload from the browser through an API route (on the client's server side) to the backend. I have to go via the client's server side to append a Bearer token from an httpOnly cookie.
I've enabled CORS in Startup.cs:
...ANSWER
Answered 2022-Mar-10 at 06:35
Cross-Origin Resource Sharing (CORS) allows JavaScript code running in a browser on an external host to interact with your backend.
To allow all, use "*" and remove all other origins from the list.
"I could only allow origins, not headers and methods?"
Add the below configuration in your web.config file to allow headers and methods.
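A sketch of what that web.config addition typically looks like; the header values here are examples to adjust to your own headers and methods:

```xml
<!-- Sketch: makes IIS emit CORS headers for this app's responses. -->
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Access-Control-Allow-Headers" value="Content-Type, Authorization" />
        <add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```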
QUESTION
Thanks for taking the time to read. My current setup is as follows:
I have an Azure Functions service up and running, an Azure Functions project in Visual Studio (which I have tested and which runs without issue), and a build pipeline in Azure DevOps that deploys a Docker image with my function project to an Azure Container Registry.
My problem:
When I try to set up my function service for CI/CD from my DevOps pipeline, I get the following error on the "Functions" tab of my app: "Azure Functions runtime is unreachable". None of the functions from my code are listed either. In the Deployment Center, however, I get the message "Deployed successfully to production", and it shows my built Docker container image name.
Troubleshooting:
In the Deployment Center of my function app (in the Azure portal), I set my app to read directly from the Azure Container Registry (using the exact same Docker image that my pipeline built earlier), and that worked perfectly: the deployment succeeded and I could see my individual function names. When I switched back to CI/CD deployment, however, I got the same problem as before.
Trying to see if anyone has had the same problem or could suggest a path forward for getting CI/CD integration working.
I pasted my yaml file below with some names changed for privacy.
...ANSWER
Answered 2022-Feb-28 at 17:26
It was a configuration/typo issue in the yml file: the repository name in the build step was hard-coded (I didn't use a variable like what was posted above) and did not match the repo name in the deploy step because of a spelling error.
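A minimal sketch of how a single pipeline variable keeps the repository name consistent between the build and deploy steps (all names below are placeholders):

```yaml
# Sketch: one variable for the repository name, referenced by every step.
variables:
  imageRepository: 'my-function-app'

steps:
  - task: Docker@2
    displayName: Build and push image
    inputs:
      command: buildAndPush
      repository: $(imageRepository)   # the deploy step must reference the same value
      containerRegistry: 'my-acr-service-connection'
      tags: |
        $(Build.BuildId)
```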
QUESTION
I've been trying to get over this but I'm out of ideas for now hence I'm posting the question here.
I'm experimenting with the Oracle Cloud Infrastructure (OCI) and I wanted to create a Kubernetes cluster which exposes some service.
The goal is:
- A running managed Kubernetes cluster (OKE)
- 2 nodes at least
- 1 service that's accessible for external parties
The infra looks the following:
- A VCN for the whole thing
- A private subnet on 10.0.1.0/24
- A public subnet on 10.0.0.0/24
- NAT gateway for the private subnet
- Internet gateway for the public subnet
- Service gateway
- The corresponding security lists for both subnets which I won't share right now unless somebody asks for it
- A containerengine K8S (OKE) cluster in the VCN with public Kubernetes API enabled
- A node pool for the K8S cluster with 2 availability domains and with 2 instances right now. The instances are ARM machines with 1 OCPU and 6GB RAM running Oracle-Linux-7.9-aarch64-2021.12.08-0 images.
- A namespace in the K8S cluster (call it staging for now)
- A deployment which refers to a custom NextJS application serving traffic on port 3000
And now it's the point where I want to expose the service running on port 3000.
I have 2 obvious choices:
- Create a LoadBalancer service in K8S, which will spawn a classic Load Balancer in OCI, set up its listener, and set up the backend set referring to the 2 nodes in the cluster; it also adjusts the subnet security lists to make sure traffic can flow
- Create a Network Load Balancer in OCI and create a NodePort on K8S and manually configure the NLB to the ~same settings as the classic Load Balancer
The first one works perfectly fine but I want to use this cluster with minimal costs so I decided to experiment with option 2, the NLB since it's way cheaper (zero cost).
Long story short, everything works and I can access the NextJS app on the IP of the NLB most of the time, but sometimes I couldn't. I decided to look into what was going on, and it turned out the NodePort that I exposed in the cluster isn't working how I'd imagined.
The service behind the NodePort is only accessible on the Node that's running the pod in K8S. Assume NodeA is running the service and NodeB is just there chilling. If I try to hit the service on NodeA, everything is fine. But when I try to do the same on NodeB, I don't get a response at all.
That's my problem and I couldn't figure out what could be the issue.
What I've tried so far:
- Switching from ARM machines to AMD ones - no change
- Created a bastion host in the public subnet to test which nodes respond to requests. It turned out that only the node running the pod responds.
- Created a regular LoadBalancer in K8S with the same config as the NodePort (in this case OCI will create a classic Load Balancer), that works perfectly
- Tried upgrading to Oracle 8.4 images for the K8S nodes, didn't fix it
- Ran the Node Doctor on the nodes, everything is fine
- Checked the logs of kube-proxy, kube-flannel, core-dns, no error
- Since the cluster consists of 2 nodes, I gave it a try and added one more node and the service was not accessible on the new node either
- Recreated the cluster from scratch
Edit: Some update. As a temporary workaround, I tried using a DaemonSet instead of a regular Deployment for the pod, to ensure that all nodes run at least one instance of it, and surprise: the node that was previously not responding to requests on that specific port still does not respond, even though a pod is running on it.
Edit2: Originally I was running the latest K8S version for the cluster (v1.21.5) and I tried downgrading to v1.20.11 and unfortunately the issue is still present.
Edit3: Checked if the NodePort is open on the node that's not responding and it is, at least kube-proxy is listening on it.
...ANSWER
Answered 2022-Jan-31 at 12:06
Might not be the ideal fix, but can you try changing the externalTrafficPolicy to Local. With Local, the health check fails on nodes that don't run the application, so traffic is only forwarded to the nodes where the application is running. Setting externalTrafficPolicy to Local is also a requirement for preserving the source IP of the connection. Also, can you share the health check config for both the NLB and the LB that you are using? When you change the externalTrafficPolicy, note that the health check for the LB changes, and the same needs to be applied to the NLB.
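A minimal sketch of the suggested Service change (names assumed from the question's NextJS app on port 3000):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nextjs
  namespace: staging
spec:
  type: NodePort                 # the same field applies to type: LoadBalancer
  externalTrafficPolicy: Local   # only nodes running a pod answer; preserves source IP
  selector:
    app: nextjs
  ports:
    - port: 80
      targetPort: 3000
```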
Edit: Also note that you need a security list / network security group added to your node subnet/node pool which allows traffic on all protocols from the worker node subnet.
QUESTION
I have an Azure Function (tried both Windows and Linux Consumption) using Azure App Service Authentication (Easy Auth) with a custom OpenID Connect provider to authenticate my Azure Function with an HTTP trigger.
I configured a client in my Identity Provider (based on Duende Identity Server), acquired a token and then sent a request to the Azure Function (contains just the code that is initially created by Visual Studio when creating a Function App project).
This is the configuration I made in the Azure Portal:
When I now send the request to the Azure function endpoint I always get the following error:
...ANSWER
Answered 2021-Dec-19 at 18:10
You should try using the Client ID as the scope while generating the token. In some cases appending /.default to the scope helps. Example: eda25bbe-a724-43ba-8fa3-8977aba6fb36/.default.
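For instance, with a client-credentials flow against an IdentityServer-style token endpoint, the request would look roughly like this (the URL, client ID, and secret are placeholders; the GUID scope is the answer's example):

```sh
# Sketch: placeholder endpoint and credentials; scope set to <client-id>/.default.
curl -X POST https://identity.example.com/connect/token \
  -d grant_type=client_credentials \
  -d client_id=my-client \
  -d client_secret=my-secret \
  -d scope=eda25bbe-a724-43ba-8fa3-8977aba6fb36/.default
```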
QUESTION
I created a Diagnostic Setting for a Key Vault resource in the Azure portal. Its properties are Metrics = AllMetrics, and the destination is a predefined Log Analytics Workspace. When I do an export (Automation → Export Template) from the portal, nothing from the diagnostic setting is included in the generated ARM JSON. I've noticed the same behavior when the resource is an App Service.
Is this by design? A bug? Is there any other way to get the ARM JSON for the diagnostic setting I've defined?
...ANSWER
Answered 2021-Dec-17 at 08:39
I tried the same in my environment, and it seems we cannot export the diagnostic settings for any service (Key Vault, App Service, Storage Account, etc.) when we export the template for automation. But there are sample diagnostic-settings templates for a few resources in the Microsoft documentation.
So, per your settings, it will be something like the below, which I have tested by deploying:
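Following the documented samples, such a template resource looks roughly like this (parameter names are placeholders):

```json
{
  "type": "Microsoft.Insights/diagnosticSettings",
  "apiVersion": "2021-05-01-preview",
  "scope": "[format('Microsoft.KeyVault/vaults/{0}', parameters('vaultName'))]",
  "name": "[parameters('settingName')]",
  "properties": {
    "workspaceId": "[parameters('workspaceId')]",
    "metrics": [
      {
        "category": "AllMetrics",
        "enabled": true
      }
    ]
  }
}
```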
QUESTION
I'm trying to deploy my Vue portal and its .NET backend API on the same IIS site. I was only able to host each on its own IIS site, but I want to host them both on the same site. Can someone please advise how?
...ANSWER
Answered 2021-Nov-25 at 11:57
- Download & install:
  - .NET Core Hosting Bundle
  - Microsoft URL Rewrite
- Restart the IIS service.
- Set up a new IIS website:
  - Open IIS on the server.
  - Right-click the Sites folder and select "Add Website".
  - Enter a site name.
  - Set the physical path to the Vue.js app directory.
  - Leave the host name empty → click "OK".
- Right-click the new site and select "Add Application":
  - Enter the alias "api" (without quotes).
  - Set the physical path to the ASP.NET Core API directory.
  - Click "OK".
- In Application Pools:
  - Right-click the app pool for the new site → "Basic Settings" → change the .NET CLR version to "No Managed Code".
  - Right-click the app pool for the new site → "Advanced Settings" → set "Identity" to LocalSystem.
- Make sure the web.config inside the Vue.js front-end folder is set up as given in the linked "Vue.js + ASP.NET" article; a sketch of it follows.
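A sketch of that front-end web.config, assuming URL Rewrite is installed: every request that is not an existing file or directory and is not under /api falls back to index.html so Vue's router can handle it:

```xml
<!-- Sketch: SPA fallback rewrite for the Vue.js site root (assumes URL Rewrite). -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="SPA fallback" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_URI}" pattern="^/api" negate="true" />
          </conditions>
          <action type="Rewrite" url="/index.html" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```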
QUESTION
I have a Terraform script that creates an Azure Key Vault, imports my SSL certificate (a 3DES .pfx file with a password), and creates an Application Gateway with an HTTP listener. I'm trying to change this to an HTTPS listener that uses my SSL certificate from Key Vault.
I've stepped through this process manually in the Azure portal, and I have it working with PowerShell. Unfortunately, I don't find Terraform's documentation clear on how this is supposed to be achieved.
Here are relevant snippets of my Application Gateway and certificate resources:
...ANSWER
Answered 2021-Sep-16 at 10:10
The issue is that there isn't any access policy defined for the app gateway in the Key Vault, which is why it is not able to get the certificate.
So, in order to resolve this, you have to add an access policy for the managed identity that is being used by the application gateway. After creating the managed identity and before using it in the application gateway, you have to use something like below:
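A sketch of that shape in Terraform (resource names and references are placeholders):

```hcl
# Sketch: a user-assigned identity plus a Key Vault access policy for it.
resource "azurerm_user_assigned_identity" "appgw" {
  name                = "appgw-identity"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_key_vault_access_policy" "appgw" {
  key_vault_id = azurerm_key_vault.example.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = azurerm_user_assigned_identity.appgw.principal_id

  secret_permissions      = ["Get"]
  certificate_permissions = ["Get"]
}

# The application gateway then references the identity, e.g.:
#   identity {
#     type         = "UserAssigned"
#     identity_ids = [azurerm_user_assigned_identity.appgw.id]
#   }
#   ssl_certificate {
#     name                = "my-cert"
#     key_vault_secret_id = azurerm_key_vault_certificate.example.secret_id
#   }
```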
QUESTION
I'm working on a web application for a disaster-management lab assignment that uses the Google Places and Maps JavaScript APIs. The goal is to have markers on the map attached to an event listener which is supposed to show an information window with the data about a disaster report. However, the window is not showing up when I click on the marker. The pointer-finger icon shows when I hover over a point, yet no information window appears when I click on the marker. There are zero errors in the dev console when I run it through IntelliJ and Tomcat, and I tried changing addListener to addEventListener but it still doesn't work. I will post my code below, but let me know if you need anything else to help. For security reasons, I have replaced my API key with MY_API_KEY, so I guess you will have to have access to the Google APIs yourself in order to help, so I apologize for that. Thanks!
P.S.
When I tried creating the snippet, it came up with the following error, and I'm unsure where it is coming from because there is no line 302 in the JS code:
{ "message": "Uncaught SyntaxError: Unexpected end of input", "filename": "https://stacksnippets.net/js", "lineno": 302, "colno": 5 }
Here's what the information window is supposed to look like:
...ANSWER
Answered 2021-Nov-12 at 20:12
Thank you Randy for the solution! I had to modify the example from the Google Maps documentation to match what the lab wanted, but I figured it out. I kept the infowindow.setContent(marker['customInfo']); from my original code and changed the rest to match the syntax from the documentation.
Here's the working code for the Click Listener:
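A minimal sketch of that documented pattern, assuming the marker carries its report details in a customInfo property as in the question:

```javascript
// Sketch following the Maps JS documentation pattern; `customInfo` is the
// question's own marker property holding the report details.
const infowindow = new google.maps.InfoWindow();

marker.addListener("click", () => {
  infowindow.setContent(marker["customInfo"]); // kept from the original code
  infowindow.open({ anchor: marker, map });
});
```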
QUESTION
I am looking to build a bot that typically requires two numbers with different meanings (roles) in the same utterance. Let's take the example of a stock-market order assistant (fictional, as an example).
Some example utterances:
Buy 100 MSFT stock at limit of 340
Get me 200 Apple at maximum 239.4
Buy 40 AMZN at market price
In the LUIS portal, I have defined two entities:
- StockSymbol: a List entity (for all stocks, linking their symbols and names as synonyms).
- number: the prebuilt entity, with two roles: Amount and Limit.
When specifying the example utterances shown above, the entities get recognized. But I cannot find a way to specify the roles for the different number entities in my sample utterances. (In the examples, the first instance of number is the Amount, and if a second one is there, that is typically the Limit role.)
Does anyone have an idea on how to define and set this up?
Best regards
...ANSWER
Answered 2021-Nov-10 at 12:54
There are 2 different ways to do this. The first is to use roles for a prebuilt entity: go into the number prebuilt entity, click on Roles, and add 2 different roles, one for Amount and another for Limit. Then you have to go into the utterances and label them for the roles: go to the utterance, click on the @ symbol on the right, select the number prebuilt entity, select the role, then highlight the numbers with that role.
The second approach is to use ML entities: create 2 ML entities, one for Amount and one for Limit. Add number as a feature and make it a required feature, and then go ahead and label the amounts with the Amount entity and the limits with the Limit entity directly.
QUESTION
We tried to install our hapi (Node.js version 14) web service on our customer's server. It ran under HTTP for months, but when we enabled HTTPS with the appropriate paths to the cert and key, it fails at service startup with:
error:06065064:digital envelope routines:EVP_Decryptfinal_ex:bad decrypt
Their certificate and key were generated using the Venafi online portal, which gave them a .crt and a key. The .crt uses signature algorithm sha256RSA, signature hash algorithm sha256, and thumbprint algorithm sha1.
Also, the private key is an RSA PRIVATE KEY with Proc-Type: 4,ENCRYPTED and DEK-Info: DES-EDE3-CBC.
I am not sure what is going on, because HTTPS works fine on our development servers.
- Is the problem in HapiJS?
- Is the problem with the certificate or key themselves?
- Is there an Node option I need to be passing in when creating the service?
Please help.
...ANSWER
Answered 2021-Aug-21 at 05:00
The specified error 06065064:digital envelope routines:EVP_Decryptfinal_ex:bad decrypt occurs in an SSL/TLS connection using OpenSSL (which is what Node.js modules like tls and https actually use) when the private key is encrypted (with a passphrase) and the correct passphrase is not provided to decrypt it. The described file format, beginning with a line -----BEGIN RSA PRIVATE KEY----- followed by lines Proc-Type: and DEK-Info:, is indeed one of the encrypted formats used by OpenSSL. Specifically this is the encrypted 'traditional' or 'legacy' format; the PKCS8 format, added about 2000 but still considered new(!), uses -----BEGIN ENCRYPTED PRIVATE KEY----- and no 822-style headers, only base64 (of the encrypted structure defined by PKCS8); see the ursinely-verbose https://security.stackexchange.com/questions/39279/stronger-encryption-for-ssh-keys/#52564 about OpenSSH's use of OpenSSL, which is basically the same as Node.js's use.
The tls module and others that build on it, including https, ultimately read the key(s) and cert(s) using tls.createSecureContext, which accepts in options a member passphrase; or, if you need to use multiple keys (and certs), you can provide a passphrase for each key as described in the linked doc.
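A minimal sketch of passing the passphrase through the https options (the file paths and environment variable are placeholders):

```javascript
// Sketch: supply the encrypted key's passphrase alongside key and cert.
const https = require("https");
const fs = require("fs");

const server = https.createServer(
  {
    key: fs.readFileSync("/etc/ssl/service.key"), // encrypted traditional-format key
    cert: fs.readFileSync("/etc/ssl/service.crt"),
    passphrase: process.env.KEY_PASSPHRASE,       // decrypts the key at startup
  },
  (req, res) => res.end("ok")
);

server.listen(443);
```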
Alternatively you can avoid the need for a passphrase by converting the key to an unencrypted file, if acceptable under applicable security policies and regulations. (Good policies may prohibit this, but they usually also prohibit getting the private key from, or providing it to, any other system, especially one 'online' somewhere, and your customer is doing the latter.) To retain the traditional format, do:
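The usual OpenSSL invocation for that (it prompts for the existing passphrase and writes an unencrypted traditional-format key; file names are placeholders):

```sh
openssl rsa -in service.key -out service-plain.key
```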
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported