apollo | reliable configuration management system | Microservice library

 by apolloconfig · Java · Version: 2.0.1 · License: Apache-2.0

kandi X-RAY | apollo Summary

apollo is a Java library typically used in Architecture, Microservice, Spring Boot, Spring, and Docker applications. apollo has no bugs, it has a build file available, it has a Permissive License, and it has medium support. However, apollo has 1 vulnerability. You can download it from GitHub or Maven.

Apollo is a reliable configuration management system. It can centrally manage the configurations of different applications and different clusters, and it is well suited to microservice configuration management scenarios. The server side is developed on Spring Boot and Spring Cloud, and can run standalone without additional application containers such as Tomcat. The Java SDK does not rely on any framework and can run in all Java runtime environments, with good support for Spring/Spring Boot environments. The .Net SDK likewise does not rely on any framework and can run in all .Net runtime environments. For more details of the product, please refer to Introduction to Apollo Configuration Center. For a local demo, please refer to Quick Start.
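The Java SDK described above exposes configuration through a small API. As a minimal sketch (assuming the `apollo-client` dependency is on the classpath and a config service is reachable; the key name `request.timeout` is a hypothetical example):

```java
import com.ctrip.framework.apollo.Config;
import com.ctrip.framework.apollo.ConfigService;

// Fetch the application's default namespace and read a key with a fallback.
Config config = ConfigService.getAppConfig();
String timeout = config.getProperty("request.timeout", "100");

// React to hot releases: the listener fires when a new version is published.
config.addChangeListener(event ->
    System.out.println("changed keys: " + event.changedKeys()));
```

In Spring environments, the same values can also be injected with Spring placeholders or bound via Spring Boot ConfigurationProperties, as noted in the feature list below.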

            kandi-support Support

              apollo has a moderately active ecosystem.
              It has 28093 star(s) with 10135 fork(s). There are 1271 watchers for this library.
              It had no major release in the last 12 months.
              There are 144 open issues and 2888 have been closed. On average issues are closed in 62 days. There are 6 open pull requests and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of apollo is 2.0.1.

            kandi-Quality Quality

              apollo has no bugs reported.

            kandi-Security Security

              apollo has 1 vulnerability issues reported (0 critical, 1 high, 0 medium, 0 low).

            kandi-License License

              apollo is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              apollo releases are available to install and integrate.
              Deployable package is available in Maven.
              Build file is available. You can build the component from source.

            Top functions reviewed by kandi - BETA

            kandi has reviewed apollo and discovered the functions below to be its top functions. This is intended to give you an instant insight into the functionality apollo implements, and to help you decide if it suits your requirements.
            • Attempts to load the apollo config.
            • Does an HTTP GET with the request.
            • Waits for long polling.
            • Performs the import.
            • Retrieves the configuration for the given application.
            • Audits an instance.
            • Extracts all placeholder keys from a property string.
            • Polls a notification.
            • Calculates the config changes.
            • Transforms a namespace into a NamespaceBO.
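As an illustration of one of the functions above, placeholder extraction can be sketched in a few lines of plain Java (a simplified stand-in for Apollo's internal implementation, not its actual code; nested placeholders are not handled):

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderExtractor {
    // Matches simple ${key} placeholders such as ${db.host}
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^${}]+)\\}");

    public static Set<String> extract(String value) {
        Set<String> keys = new LinkedHashSet<>();
        Matcher m = PLACEHOLDER.matcher(value);
        while (m.find()) {
            keys.add(m.group(1));
        }
        return keys;
    }

    public static void main(String[] args) {
        System.out.println(extract("jdbc:mysql://${db.host}:${db.port}/orders"));
        // prints [db.host, db.port]
    }
}
```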

            apollo Key Features

            Unified management of the configurations of different environments and different clusters: Apollo provides a unified interface to centrally manage the configurations of different environments, different clusters, and different namespaces. The same codebase can have different configurations when deployed in different clusters. With the namespace concept, it is easy for multiple applications to share the same configurations while also allowing them to customize their own. Multiple languages are provided in the user interface (currently Chinese and English).
            Configuration changes take effect in real time (hot release): After the user modifies a configuration and releases it in Apollo, the SDK receives the latest configuration in real time (about 1 second) and notifies the application.
            Release version management: Every configuration release is versioned, which makes configuration rollback easy to support.
            Grayscale release: Supports grayscale configuration releases; for example, after clicking release, the change takes effect only for some application instances. After a period of observation, the configuration can be pushed to all application instances if there is no problem.
            Authorization management, release approval and operation audit: A solid authorization mechanism is designed for application and configuration management, and configuration management is divided into two operations, editing and publishing, greatly reducing human error. All operations have audit logs for easy tracking of problems.
            Client-side configuration information monitoring: It is very easy to see which instances are using the configurations and what versions they are using.
            Rich SDKs available: Provides native SDKs for Java and .Net to facilitate application integration. Supports Spring Placeholder, Annotation, and Spring Boot ConfigurationProperties for easy application use (requires Spring 3.1.1+). HTTP APIs are provided so that non-Java and non-.Net applications can integrate conveniently. Rich third-party SDKs are also available, e.g. Golang, Python, Node.js, PHP, C, etc.
            Open platform API: Apollo itself provides a unified configuration management interface, supporting features such as multi-environment and multi-data-center configuration management, permissions, and process governance. However, for the sake of versatility, Apollo does not put many restrictions on configuration modifications; as long as a configuration conforms to the basic format, it can be saved. In our research, we found that some users' configurations have more complicated formats, such as XML and JSON, and the format needs to be verified. There are also users, such as DAL, whose configurations not only have a specific format but also need the entered values verified before saving, such as checking whether the database, username, and password match. For this type of application, Apollo allows the application to modify and release configurations through open APIs, which have a solid authorization and permission control mechanism built in.
            Simple deployment: As an infrastructure service, the configuration center has very high availability requirements, which forces Apollo to rely on as few external dependencies as possible. Currently the only external dependency is MySQL, so deployment is very simple: Apollo can run as long as Java and MySQL are installed. Apollo also provides a packaging script that can generate all required installation packages in one click, and it supports customization of runtime parameters.
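The hot-release flow above can be illustrated with a small, self-contained Java sketch (an in-memory simplification, not Apollo's actual implementation: a repository diffs each released version against the current one and notifies listeners with the set of changed keys):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.function.Consumer;

public class HotReloadDemo {
    public static class ConfigRepository {
        private final Map<String, String> current = new HashMap<>();
        private final List<Consumer<Set<String>>> listeners = new ArrayList<>();

        public void addChangeListener(Consumer<Set<String>> listener) {
            listeners.add(listener);
        }

        // "Release" a new configuration version: diff, swap, then notify.
        public void release(Map<String, String> next) {
            Set<String> changed = new HashSet<>();
            for (Map.Entry<String, String> e : next.entrySet()) {
                if (!Objects.equals(current.get(e.getKey()), e.getValue())) {
                    changed.add(e.getKey());
                }
            }
            for (String key : current.keySet()) {
                if (!next.containsKey(key)) changed.add(key); // deleted keys
            }
            current.clear();
            current.putAll(next);
            if (!changed.isEmpty()) listeners.forEach(l -> l.accept(changed));
        }

        public String get(String key, String fallback) {
            return current.getOrDefault(key, fallback);
        }
    }

    public static void main(String[] args) {
        ConfigRepository repo = new ConfigRepository();
        repo.addChangeListener(changed -> System.out.println("changed keys: " + changed));
        repo.release(Map.of("request.timeout", "100"));
        repo.release(Map.of("request.timeout", "200"));
        System.out.println(repo.get("request.timeout", "100")); // prints 200
    }
}
```

In the real system the diff happens server-side and the SDK learns about it via long polling, but the listener contract seen by application code is the same shape.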

            apollo Examples and Code Snippets

            Error: You must `await server.start()` before calling `server.applyMiddleware()`
            Lines of Code: 26 · License: Strong Copyleft (CC BY-SA 4.0)

            import { ApolloServer } from 'apollo-server-express';
            import express from 'express';

            async function startApolloServer(typeDefs, resolvers) {
              // Same ApolloServer initialization as before
              const server = new ApolloServer({ typeDefs, resolvers });
              // Apollo Server 3 requires start() before applyMiddleware()
              await server.start();
              const app = express();
              server.applyMiddleware({ app });
            }
            Error: You must `await server.start()` before calling `server.applyMiddleware()`
            Lines of Code: 19 · License: Strong Copyleft (CC BY-SA 4.0)

            const apolloServer = new ApolloServer({
              ... // other config
            });

            // without this, apollo will throw an error.
            await apolloServer.start();

            const app = express();
            ... // other config
            How to download a GraphQL schema?
            Lines of Code: 2 · License: Strong Copyleft (CC BY-SA 4.0)
            apollo schema:download --header="X-Hasura-Admin-Secret: " --endpoint https://sample-backend-for-hasura-tutorial.hasura.app/v1/graphql schema.json
            Source GraphQL API: HTTP error 400 Bad Request
            Lines of Code: 7 · License: Strong Copyleft (CC BY-SA 4.0)
            {"extensions":{"code":"GRAPHQL_VALIDATION_FAILED"},"level":"warn","locations":[{"column":3,"line":2}],"message":"GraphQL introspection is not allowed by Apollo Server, but the query contained __schema or __type. To enable introspection, pa
            Graphql query is firing every time the page is loading. How to save in cache?
            Lines of Code: 20 · License: Strong Copyleft (CC BY-SA 4.0)

            export async function getStaticProps() {
              const apollo = require("../../../apollo/apolloClient"); // import client
              const GET_PROJECTS = require("../../../apollo/graphql").GET_PROJECTS; // import query
              const client = apollo.initializeApollo();
              const { data } = await client.query({ query: GET_PROJECTS });
              return { props: { data } };
            }
            How to get all cache data using reactjs @apollo/client v3
            Lines of Code: 9 · License: Strong Copyleft (CC BY-SA 4.0)

            const cache = new InMemoryCache({ /* ...your options */ })
            console.log(cache.data) // <- your cached data

            // access through the apollo client cache
            const client = useApolloClient();
            console.log(client.cache.extract());
            Angular 8 graphql.module localstorage.getItem not available
            Lines of Code: 95 · License: Strong Copyleft (CC BY-SA 4.0)

            import { Component, OnInit } from '@angular/core';
            import { Apollo } from 'apollo-angular';
            import gql from 'graphql-tag';

            @Component({
              selector: 'app-user-profile',
              templateUrl: './user-profile.component.html',
              styleUrls: ['./user-profile.component.css']
            })
            Is there a way to use existing cassandra(Along with data) with Janusgraph
            Lines of Code: 31 · License: Strong Copyleft (CC BY-SA 4.0)
            aploetz@cqlsh:stackoverflow> SELECT name, alma_mater, missions FROM astronauts WHERE name IN ('James A. Lovell Jr. ','Fred W. Haise Jr. ','John L. Swigert Jr. ');
             name                 | alma_mater             | missions
            How to add headers in login.vue?
            Lines of Code: 31 · License: Strong Copyleft (CC BY-SA 4.0)

            import { setContext } from 'apollo-link-context'

            const authLink = setContext((_, { headers }) => {
                // get the authentication token from ApplicationSettings if it exists
                const token = ApplicationSettings.getString("token");
                // return the headers so the http link can read them
                return { headers: { ...headers, authorization: token ? `Bearer ${token}` : "" } };
            });
            how can I create a managed Apollo federation gateway?
            Lines of Code: 10 · License: Strong Copyleft (CC BY-SA 4.0)

            (async () => {
              const server = new ApolloServer({
                // Apollo Graph Manager (previously known as Apollo Engine)
                // When enabled and an `ENGINE_API_KEY` is set in the environment,
                // provides metrics, schema management, etc.
                gateway,
                subscriptions: false,
              });
              await server.listen();
            })();

            Community Discussions


            Exclude Logs from Datadog Ingestion
            Asked 2022-Mar-19 at 22:38

            I have a kubernetes cluster that's running datadog and some microservices. Each microservice makes healthchecks every 5 seconds to make sure the service is up and running. I want to exclude these healthcheck logs from being ingested into Datadog.

            I think I need to use log_processing_rules and I've tried that but the healthcheck logs are still making it into the logs section of Datadog. My current Deployment looks like this:



            Answered 2022-Jan-12 at 20:28

            I think the problem is that you're defining multiple patterns; the docs state: "If you want to match one or more patterns you must define them in a single expression."

            Try something like this and see what happens:
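A single-expression exclusion rule could look like the following pod annotation (a sketch: the container name, source, service, and pattern are placeholders to adapt to the actual healthcheck log lines):

```yaml
# Annotation on the Deployment's pod template (hypothetical names)
ad.datadoghq.com/my-service.logs: >-
  [{
    "source": "java",
    "service": "my-service",
    "log_processing_rules": [{
      "type": "exclude_at_match",
      "name": "exclude_healthchecks",
      "pattern": "healthcheck|/health"
    }]
  }]
```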

            Source https://stackoverflow.com/questions/70687054


            Custom Serilog sink with injection?
            Asked 2022-Mar-08 at 10:41

            I have created a simple Serilog sink project that looks like this:



            Answered 2022-Feb-23 at 18:28

            If you refer to the Provided Sinks list and examine the source code for some of them, you'll notice that the pattern is usually:

            1. Construct the sink configuration (usually taking values from IConfiguration, inline or a combination of both)
            2. Pass the configuration to the sink registration.

            Then the sink implementation instantiates the required services to push logs to.

            An alternate approach I could suggest is registering Serilog without any arguments (UseSerilog()) and then configure the static Serilog.Log class using the built IServiceProvider:

            Source https://stackoverflow.com/questions/71145751


            How to manage Google Cloud credentials for local development
            Asked 2022-Feb-14 at 23:35

            I searched a lot for how to authenticate/authorize Google's client libraries, and it seems no one agrees on how to do it.

            Some people state that I should create a service account, create a key from it, and give that key to each developer who wants to act as this service account. I hate this solution because it leaks the identity of the service account to multiple people.

            Others mention that you can simply log in with the Cloud SDK and ADC (Application Default Credentials) by doing:



            Answered 2021-Oct-02 at 14:00

            You can use a new gcloud feature and impersonate your local credential like this:

            Source https://stackoverflow.com/questions/69412702


            using webclient to call the grapql mutation API in spring boot
            Asked 2022-Jan-24 at 12:18

            I am stuck while calling the GraphQL mutation API in Spring Boot. Let me explain my scenario: I have two microservices, one being the AuditConsumeService, which consumes messages from ActiveMQ, and the other the GraphQL layer, which simply takes the data from the consume service and puts it into the database. Everything works well when I try to push data using the GraphQL playground or Postman. How do I push data from the AuditConsumeService? In the AuditConsumeService I am trying to send the mutation API as a string; the method responsible for sending it to the GraphQL layer is



            Answered 2022-Jan-23 at 21:40

            You have to send the query and the variables in the body of the POST request, as shown here.

            Source https://stackoverflow.com/questions/70823774


            Jdeps Module java.annotation not found
            Asked 2022-Jan-20 at 22:48

            I'm trying to create a minimal JRE for Spring Boot microservices using jdeps and jlink, but I'm getting the following error when I get to the jdeps step:



            Answered 2021-Dec-28 at 14:39

            I have been struggling with a similar issue. In my Gradle Spring Boot project, I am using the output of the following for adding modules to jlink in my Dockerfile (openjdk:17-alpine):

            Source https://stackoverflow.com/questions/70105271


            How to make a Spring Boot application quit on tomcat failure
            Asked 2022-Jan-15 at 09:55

            We have a bunch of microservices based on Spring Boot 2.5.4, also including spring-kafka:2.7.6 and spring-boot-actuator:2.5.4. All the services use Tomcat as the servlet container and have graceful shutdown enabled. These microservices are containerized using Docker.
            Due to a misconfiguration, yesterday we faced a problem with one of these containers because it tried to take a port already bound by another one.
            The log states:



            Answered 2021-Dec-17 at 08:38

            Since you have everything containerized, it's way simpler.

            Just set up a small healthcheck endpoint with Spring Web that shows whether the server is still running, something like:

            Source https://stackoverflow.com/questions/70378200
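The idea can be sketched with the JDK's built-in HTTP server rather than Spring Web (a stand-in, not the answer's actual snippet; class, path, and response body are arbitrary):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthCheckDemo {
    // Starts a tiny HTTP server exposing GET /health; port 0 picks a free port.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "UP".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0);
        System.out.println("healthcheck listening on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

A container orchestrator's liveness probe can then poll /health and restart the container when the check fails.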


            Deadlock on insert/select
            Asked 2021-Dec-26 at 12:54

            Ok, I'm totally lost on deadlock issue. I just don't know how to solve this.

            I have these three tables (I have removed not important columns):



            Answered 2021-Dec-26 at 12:54

            You are better off avoiding serializable isolation level. The way the serializable guarantee is provided is often deadlock prone.

            If you can't alter your stored procs to use more targeted locking hints that guarantee the results you require at a lesser isolation level then you can prevent this particular deadlock scenario shown by ensuring that all locks are taken out on ServiceChange first before any are taken out on ServiceChangeParameter.

            One way of doing this would be to introduce a table variable in spGetManageServicesRequest and materialize the results of

            Source https://stackoverflow.com/questions/70377745


            Rewrite host and port for outgoing request of a pod in an Istio Mesh
            Asked 2021-Nov-17 at 09:30

            I have to get the existing microservices running. They are given as Docker images. They talk to each other via configured hostnames and ports. I started using Istio to view and configure the outgoing calls of each microservice. Now I am at the point where I need to rewrite/redirect the host and port of a request that goes out of one container. How can I do that with Istio?

            I will try to give a minimum example. There are two services, service-a and service-b.



            Answered 2021-Nov-16 at 10:56

            There are two solutions, which can be used depending on whether Istio features are necessary.

            If no Istio features are planned to be used, it can be solved using native Kubernetes. In turn, if some Istio features are intended to be used, it can be solved using an Istio virtual service. Below are the two options:

            1. Native kubernetes

            Service-x should point to the backend of the service-b deployment. Below is the selector which points to the deployment service-b:

            Source https://stackoverflow.com/questions/69901156


            Checking list of conditions on API data
            Asked 2021-Aug-31 at 00:23

            I am using an API which sends some data about products every second. On the other hand, I have a list of user-created conditions, and I want to check whether any incoming data matches any of the conditions; if so, I want to notify the user.

            For example, a user condition may look like this: price < 30000 and productName = 'chairNumber2'

            And the data would be something like this: {'data':[{'name':'chair1','price':'20000','color':'blue'},{'name':'chairNumber2','price':'45500','color':'green'},{'name':'chairNumber2','price':'27000','color':'blue'}]}

            I am using a microservice architecture, and when a condition is matched I send a message on RabbitMQ to my notification service.

            I have tried the naïve solution (every second, check every condition, and if any data meets a condition, pass the data on to my other service), but this takes a lot of RAM and time (the time complexity is n*m, n being the number of conditions and m the number of data items), so I am looking for a better scenario.



            Answered 2021-Aug-31 at 00:23

            It's an interesting problem. I have to confess I don't really know how I would do it - it depends a lot on exactly how fast the processing needs to occur, and a lot of other factors not mentioned - such as what constraints you have in terms of your technology stack, whether it is on-premise or in the cloud, and whether the solution must be coded by you/your team or you can buy some $$ tool. For future reference, for architecture questions especially, any context you can provide is really helpful - e.g. constraints.

            I did think of Pub-Sub, which may offer patterns you can use, but you really just need a simple implementation that will work within your code base, AND very importantly you only have one consuming client, the RabbitMQ queue - it's not like you have X number of random clients wanting the data. So an off-the-shelf Pub-Sub solution might not be a good fit.

            Assuming you want a "home-grown" solution, this is what has come to mind so far:

            ("flow" connectors show data flow, which could be interpreted as a 'push'; where as the other lines are UML "dependency" lines; e.g. the match engine depends on data held in the batch, but it's agnostic as to how that happens).

            • The external data source is where the data is coming from. I had not made any assumptions about how that works or what control you have over it.
            • Interface: all this does is take the raw data and put it into batches that can be processed later by the match engine. How the interface works depends on how you want to balance (a) the data coming in and (b) what you know the match engine expects.
            • Batches are thrown into a batch queue. Its job is to ensure that no data is lost before it's processed, and that processing can be managed (order of batch processing, resilience, etc.).
            • Match engine: works fast on the assumption that the size of each batch is a manageable number of records/changes. Its job is to take changes and ask who's interested in them, and return the results to RabbitMQ. So its inputs are just the batches plus the users and user matching rules (more on that later). How this actually works I'm not sure; worst case it iterates through each rule seeing who has a match - what you're doing now, but...

            Key point: the queue would also allow you to scale out the number of match engine instances - but I don't know what effect that has downstream on RabbitMQ and its downstream consumers (the order in which the updates would arrive, etc.).

            What's not shown: caching. The match engine needs to know what the matching rules are, and which users those rules relate to. The fastest way to do that look-up is probably in memory, not a database read (unless you can be smart about how that happens), which brings me to this addition:

            • Data Source is wherever the user data, and user matching rules, are kept. I have assumed they are external to "Your Solution" but it doesn't matter.
            • Cache is something that holds the user matches (rules) & user data. Its sole job is to hold these in a way that is optimized for the match engine to work fast. You could logically say it is part of the match engine, or separate. How you approach this might be determined by whether or not you intend to scale out the match engine.
            • Data Provider is simply the component whose job it is to fetch user & rule data and make it available for caching.

            So, the Rule engine, cache and data provider could all be separate components, or logically parts of the one component / microservice.
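The rule-plus-batch idea can be sketched in Java (a hypothetical illustration; the record names are made up, the condition from the question is used for concreteness, and a real system would push matches to RabbitMQ instead of returning them):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class MatchEngineDemo {
    public record Product(String name, int price, String color) {}

    // A user rule compiled once into a predicate (the "cache" of rules).
    public record Rule(String user, Predicate<Product> condition) {}

    // The match engine: check each product in a batch against every rule.
    public static List<String> match(List<Rule> rules, List<Product> batch) {
        List<String> notifications = new ArrayList<>();
        for (Product p : batch) {
            for (Rule r : rules) {
                if (r.condition().test(p)) {
                    notifications.add(r.user() + " matched " + p.name());
                }
            }
        }
        return notifications;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("alice", p -> p.price() < 30000 && p.name().equals("chairNumber2")));
        List<Product> batch = List.of(
            new Product("chair1", 20000, "blue"),
            new Product("chairNumber2", 45500, "green"),
            new Product("chairNumber2", 27000, "blue"));
        // Only the 27000 chairNumber2 satisfies the rule.
        System.out.println(match(rules, batch)); // prints [alice matched chairNumber2]
    }
}
```

Compiling the conditions into predicates up front is what the cache buys you; the scan itself stays O(n*m) per batch unless rules are further indexed (e.g. by product name).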

            Source https://stackoverflow.com/questions/68970178


            Traefik v2 reverse proxy without Docker
            Asked 2021-Jul-14 at 10:26

            I have a dead simple Golang microservice (no Docker, just simple binary file) which returns simple message on GET-request.



            Answered 2021-Jul-14 at 10:26

            I've managed to find the answer.

            1. It wasn't smart of me to decide that Traefik would take /proxy and simply redirect all requests to /api/*. The official docs (https://doc.traefik.io/traefik/routing/routers/) say (I'm quoting):

            Use Path if your service listens on the exact path only. For instance, Path: /products would match /products but not /products/shoes.

            Use a Prefix matcher if your service listens on a particular base path but also serves requests on sub-paths. For instance, PathPrefix: /products would match /products but also /products/shoes and /products/shirts. Since the path is forwarded as-is, your service is expected to listen on /products.

            2. I did not use any middleware for replacing a substring of the path.

            Now, the answer as an example.

            First of all: the code for the microservice in the main.go file

            Source https://stackoverflow.com/questions/68111670

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network



            Install apollo

            You can download it from GitHub or Maven.
            You can use apollo like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the apollo component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.
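With Maven, the client can be pulled in with a dependency such as the following (a sketch; check Maven Central for the exact artifact and the version you need):

```xml
<dependency>
    <groupId>com.ctrip.framework.apollo</groupId>
    <artifactId>apollo-client</artifactId>
    <version>2.0.1</version>
</dependency>
```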


            Find more information at: Apollo Team · Community Governance · Contributing Guide

          • CLI

            gh repo clone apolloconfig/apollo
