changelogs | changelog finder and parser for packages | DevOps library

 by pyupio · Python · Version: 0.15.0 · License: MIT

kandi X-RAY | changelogs Summary

changelogs is a Python library typically used in DevOps and npm-related applications. It has no known bugs or reported vulnerabilities, ships a build file, carries a permissive MIT license, and has low support. You can install it with 'pip install changelogs' or download it from GitHub or PyPI.

A changelog finder and parser for packages available on pypi, npm and rubygems.

            kandi-support Support

              changelogs has a low active ecosystem.
              It has 49 stars, 20 forks, and 6 watchers.
              It had no major release in the last 12 months.
              There are 11 open issues and 200 closed issues. On average, issues are closed in 355 days. There are 2 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of changelogs is 0.15.0.

            kandi-Quality Quality

              changelogs has 0 bugs and 0 code smells.

            kandi-Security Security

              changelogs has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              changelogs code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              changelogs is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              No GitHub releases are available, but a deployable package is available on PyPI, and a build file is provided so you can also build the component from source.
              kandi estimates that reusing changelogs saves you around 1620 person-hours of effort over developing the same functionality from scratch.
              It has 3599 lines of code, 509 functions and 68 files.
              It has low code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed changelogs and surfaced the functions below as its top functions. This is intended to give you an instant insight into the functionality changelogs implements, and to help you decide if it suits your requirements.
            • Returns a list of URLs for the given package
            • Bootstrap functions
            • Get a changelog
            • Load custom functions
            • Find the changelog on the given candidates
            • Validate a github repo URL
            • Returns True if the link contains a project name
            • Given a list of candidate URLs return a set of valid repo URLs
            • Find the URL to the given candidates
            • Get content from URLs
            • Get a git commit log
            • Parse git commit log
            • Parse the changelog
            • Get metadata for a given release

            changelogs Key Features

            No Key Features are available at this moment for changelogs.

            changelogs Examples and Code Snippets

            No Code Snippets are available at this moment for changelogs.

            Community Discussions

            QUESTION

            toChangelogStream prints different kinds of changes
            Asked 2021-Jun-09 at 16:28

            ANSWER

            Answered 2021-Jun-09 at 16:27

            The reason for the difference has two parts, both of them defined in GroupAggFunction, which is the process function used to process this query.

            The first is this part of the code:

            Source https://stackoverflow.com/questions/67896731

            QUESTION

            Internal networking breaks after update from Docker 19.03.15 to 20.10.5
            Asked 2021-May-11 at 14:45

            I'm using the following docker-compose.yml to run a dockerized GitLab instance with one runner. Both are in the same network. (The bridge name is set explicitly because the firewall rules depend on it.)

            ...

            ANSWER

            Answered 2021-May-11 at 14:45

            Figured it out:

            My setup causes the runner to fetch the sources from gitlab.example.com:443, which is the port served by the Nginx proxy on the host. This worked on Docker 19; however, as of Docker 20 the runner container can't connect via the host anymore. That is fine in itself; the solution is to tell the runner to contact the GitLab server directly when cloning the source. Each runner in config.toml has to contain:
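The exact value depends on your network setup; assuming the GitLab container is reachable under the hostname gitlab on the shared Docker network (the hostname is an assumption, not from the original answer), the relevant entry might look like this:

```toml
[[runners]]
  name = "docker-runner"
  url = "https://gitlab.example.com/"
  # clone_url makes the runner clone directly from the GitLab container,
  # bypassing the host's Nginx proxy on port 443
  clone_url = "http://gitlab"
```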

            Source https://stackoverflow.com/questions/67483235

            QUESTION

            Liquibase runs sqlFile change twice
            Asked 2021-May-04 at 16:42

            For some reason, Liquibase is running an SQL file twice, and I do not understand why:

            ...

            ANSWER

            Answered 2021-May-04 at 16:42

            Ok, the problem was that the folder sqlFiles was under "path": "classpath:/db/changesets", which caused the same SQL file to run twice: once from the changelog and once as an independent SQL file.
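A minimal sketch of the fix (paths and names are illustrative, not from the original answer): keep the raw SQL files outside the directory the changelog scanner picks up, and reference them only via sqlFile.

```yaml
# db/changelog/db.changelog-master.yaml
databaseChangeLog:
  - includeAll:
      path: db/changesets/        # only real changesets live here
      relativeToChangelogFile: true
  # raw SQL moved to db/sql/, outside the scanned path,
  # and referenced explicitly from a changeset:
  - changeSet:
      id: load-seed-data
      author: example
      changes:
        - sqlFile:
            path: db/sql/seed.sql
            relativeToChangelogFile: true
```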

            Source https://stackoverflow.com/questions/67388724

            QUESTION

            How to run liquibase changelogSyncSQL or changelogSync up to a tag and/or labels?
            Asked 2021-Apr-16 at 06:29

            I'm adding liquibase to an existing project where tables and data already exist. I would like to know how I might limit the scope of changelogSync[SQL] to a subset of available changes.

            Background

            I've run liquibase generateChangeLog to capture the current state and placed this into say src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__init01.yaml.

            I've also added another changeset to cover some new requirements in a new file. Let's call it src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__new-feature.yaml.

            I've added a main changelog file src/main/resources/db/changelog/db.changelog-master.yaml with the following contents:

            ...

            ANSWER

            Answered 2021-Apr-15 at 02:59

             As it stands, the only solution I've found is to generate the SQL and then manually edit its contents to filter out the change sets which don't correspond to your current schema.

             Then you can apply the SQL to the database.
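The workflow described above might look like this on the command line (file and database names are illustrative; in newer Liquibase versions the command is spelled changelog-sync-sql):

```
# generate the full sync SQL without touching the database
liquibase --changeLogFile=db.changelog-master.yaml changelogSyncSQL > sync.sql

# edit sync.sql to remove the change sets that don't match the current
# schema, then apply it with your database client, e.g. for PostgreSQL:
psql -d mydb -f sync.sql
```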

            Source https://stackoverflow.com/questions/67066665

            QUESTION

            How to ensure Master and Dev branches are kept in sync when deploying from CI/CD with Semantic Versioning or Lerna Publish
            Asked 2021-Apr-01 at 23:33
            Setup

            I have several gitlab repos where the general setup involves having a master branch, a stage (pre-release) branch and a dev branch.

            Push permissions are disabled for all 3 branches.

            The workflow is to fork from the dev branch for any hot-fixes, bug fixes and features. When you are satisfied with the release you would submit a merge-request to dev. Eventually, when a stable build is ready inside dev; a merge-request would be submitted for the stage branch. Lastly, when you are satisfied with the pre-release you would submit a merge-request for the master branch.

            I have CI/CD configured so that tests, builds and deployments are automatically executed from the master and stage branches with the automatic generation of CHANGELOG.md files. stage branch deploys to the UAT s3 Bucket and master deploys to the production s3 Bucket.

            Deployment is handled through Semantic Versioning 2.0.0 which is responsible for bumping versions, generating changelogs and deploying.

            I have a similar setup to the one just described above except it is a monorepo so I am using Lerna to handle the publishing (deploying) with {"conventionalCommits": true} to replicate Semantic Versioning 2.0.0's behaviour. I am using independent versioning inside the monorepo.

             Both the Semantic Versioning 2.0.0 and the Lerna setups force the master branch to always be behind or equal to the stage and dev branches, and the stage branch to always be behind or equal to the dev branch, in a kind of cascading effect.

            dev >= stage >= master

            The Problem

            Both Lerna Publish and Semantic Versioning make several changes to the files when publishing/deploying. Some of these changes include updating the CHANGELOG.md file and bumping the version inside of the package.json file.

            Both Lerna and Semantic Versioning eventually push these changes to the branch they are run from through the CI/CD.

             What this means is that if I merge from dev to stage, stage would then have the bumped version numbers and new changelogs pushed into it through either the Semantic Versioning or Lerna Publish executions. This would cause the stage branch to be ahead of the dev branch, and all future forks from the dev branch would detach from the stage branch. The next time I merge from dev to stage, it's not going to be the simple fast-forward merge it's meant to be, and the merge will most likely hit conflicts that prevent any future merges or fail the CI/CD.

            My Workaround

            For Semantic Versioning:

            • I have disabled the push feature so that the new, changed files are no longer committed and pushed (tags are still created and pushed)
            • I have created a script that converts the CHANGELOG.md file to a PDF and sends it to my teams email

             This works out well because Semantic Versioning uses tags to determine the changed files and decide how to bump the version. So, although the version inside the repo remains constant at 1.0.0, for example, Semantic Versioning is smart enough to increment the version from the latest tag, not from what's in package.json.

             This unfortunately doesn't hold true for Lerna, which still uses tags to determine changed files but then bumps from the version inside package.json. By not pushing the updated package.json with the new version, Lerna always bumps me from 1.0.0 to either 1.0.1, 1.1.0, or 2.0.0.

            So I am stumped with Lerna.

            Question

             How should I be setting up my CI/CD to avoid this problem? I know the structure of my repo is common, and I haven't found anyone addressing this issue despite the countless users of Lerna and Semantic Versioning, which tells me that I have obviously missed something, as it is not a widespread issue.

            Possible Solution

            As I was writing this question, it crossed my mind that maybe I should only bump versions in dev and then deploy from stage and master. This would prevent stage and master from ever being ahead of dev, would this be the right way to do it?

            ...

            ANSWER

            Answered 2021-Apr-01 at 23:33

             Maintaining package version information in the repo does not scale. Still, all the tools out there keep trying to make it work. I have nothing to offer in the way of an alternative (yet), other than to say that release data should be managed by other means than storing it in the source repo. What we're really talking about here are asynchronous processes: dev, build and release. The distributed nature of these systems implies that we cannot treat repos as file shares and expect them to scale well.

            See my other rant on this topic. I haven't had time to do a good write-up on this yet. I would add that Git tags are meant to be handy milestone markers for devs to find the right git hash to go back to, to create a fix branch from. Commit messages are meant for change-logs and just one of the inputs into deciding what version to release from which build.

            No dev, working in a multi-dev, multi-branch, distributed environment, can possibly predict the appropriate semantic version to apply at some random point in the future. At least not without having full dictatorial control of every dev, branch and build/release environment. Even then, they would be hard pressed to make it work. It's that control, that current tooling implies, but in practice, it never scales.

             Consider that your package feed service likely has all, or a sufficient portion, of your release history. As long as you have only one feed service, you can use it to determine the version floor for your next release. Process your semantic commits, look up the most recent version matching the tag your process is targeting (daily, beta, RC, none or whatever), calculate the next appropriate version, update the appropriate version fields in source code, then build and test. If the feed service doesn't let you include hidden or deleted versions in your query, you'll have to use your own database. Do not check in the modified files! Those fields should be zeroed in your repo, effectively marking local dev builds as 0.0.-dev or something along those lines.

            Automatic publishing of prerelease versions is okay, but there should be a human in the loop for release versions. Iff all the above steps succeed, apply a tag to the git hash that you just successfully built from.

             My dream CI/CD system runs every commit to my release branch(es) through a test build and unit-test runs, detects whether existing test cases were modified (auto-detect breaking changes!), analyzes the commit messages for indications of intentional breakage, and presents all of that information to the release build system on demand. My release build system produces a -alpha.build.### and runs all of the acceptance tests against it.

             If there is no known breakage and the intended target is a prerelease, it then updates the files containing version information and runs an incremental build with one final smoke test prior to automatic publication. Here's where there are some gray areas. Some prerelease targets should not include breakage without human intervention, and for others it's okay. So I would have a special set of prerelease targets that do not allow automated publication of breakage, such as certain levels of dog-fooders, or bits targeted at my internal long-running test infrastructure.

            If it's an untagged release target, then I would prefer it to build and package everything for consumption in my final stage of testing. This is where the automation verifies the package for conformance to policies, ensures that it can be unpacked correctly, and gathers sign-offs from area owners, department/division heads, etc, prior to publication. It might include some randomized test deployments, in cases where we're targeting a live system.

            And all of the above is just an over-simplified description really. It glosses over more than it clarifies, because there's just too much variation in the real-world needs of producers and consumers.

             Circling back to tools and control: there are many in the DevOps world who will tell you that the main point is to standardize around tooling, which reminds me of this xkcd comic.

            Source https://stackoverflow.com/questions/66907092

            QUESTION

            Will mongock work correctly with kubernetes replicas?
            Asked 2021-Apr-01 at 07:08

            Mongock looks very promising. We want to use it inside a kubernetes service that has multiple replicas that run in parallel.

            We are hoping that when our service is deployed, the first replica will acquire the mongockLock and all of its ChangeLogs/ChangeSets will be completed before the other replicas attempt to run them.

            We have a single instance of mongodb running in our kubernetes environment, and we want the mongock ChangeLogs/ChangeSets to execute only once.

            Will the mongockLock guarantee that only one replica will run the ChangeLogs/ChangeSets to completion?

            Or do I need to enable transactions (or some other configuration)?

            ...

            ANSWER

            Answered 2021-Feb-23 at 10:04

             I am going to provide the short answer first and then the long one. I suggest you read the long one too, in order to understand it properly.

            Short answer

             By default, Mongock guarantees that the changeLogs/changeSets will be run by only one pod at a time: the one owning the lock.

            Long answer

             What really happens behind the scenes (if it's not configured otherwise) is that when a pod takes the lock, the others will try to acquire it too, but they can't, so they are forced to wait for a while (configurable, but 4 minutes by default) as many times as the lock is configured (3 times by default). After this, if a pod is not able to acquire the lock and there are still pending changes to apply, Mongock will throw a MongockException, which should mean the JVM startup fails (which is what happens by default in Spring).

             This is fine in Kubernetes, because it ensures the pods will be restarted. So now, assuming the pods start again and the changeLogs/changeSets are already applied, the pods start successfully, because they don't even need to acquire the lock, as there are no pending changes to apply.

            Potential problem with MongoDB without transaction support and Frameworks like Spring

             Now, assuming the lock and the mutual exclusion are clear, I'd like to point out a potential issue that needs to be mitigated by the changeLog/changeSet design.

             This issue applies if you are in an environment such as Kubernetes, which has a pod initialisation time, your migration takes longer than that initialisation time, and the Mongock process is executed before the pod becomes ready/healthy (and is a condition for it). This last condition is highly desirable, as it ensures the application runs with the right version of the data.

             In this situation, imagine the pod starts the Mongock process. After the Kubernetes initialisation time, the process is still not finished, but Kubernetes stops the JVM abruptly. This means that some changeSets were successfully executed, some others were not even started (no problem, they will be processed in the next attempt), but one changeSet was partially executed and marked as not done. This is the potential issue: the next time Mongock runs, it will see that changeSet as pending and execute it from the beginning. If you haven't designed your changeLogs/changeSets accordingly, you may experience unexpected results, because some part of the data processing covered by that changeSet has already taken place and will happen again.

             This somehow needs to be mitigated, either with the help of mechanisms like transactions, with a changeLog/changeSet design that takes this into account, or both.

             Mongock currently provides transactions with “all or nothing”, but that doesn’t really help much, as it will retry every time from scratch and will probably end up in an infinite loop. The next version, 5, will provide transactions per changeLog and changeSet, which, together with good organisation, is the right solution for this.

             Meanwhile, this issue can be addressed by following these design suggestions.

            Source https://stackoverflow.com/questions/66324374

            QUESTION

            apt - dpkg python-rpi.gpio dependency problems
            Asked 2021-Mar-18 at 21:52

             Every time I try to install something, upgrade, or autoremove, this error happens!

             I've searched the web and tried some fixes, but they don't seem to work. I don't have Python 2.7 installed, only 3, and I can't install Python 2 or anything else because of this. I've tried to sudo rm python-rpi.gpio_0.7.0-0.1~bpo10+4_armhf.deb, but the file keeps coming back. Please, I really need some help, since I've got some work on my Pi and don't want to get a new image.

            ...

            ANSWER

            Answered 2021-Jan-25 at 13:18

            Use the following commands:

            Source https://stackoverflow.com/questions/65793271
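The answer's exact commands are not preserved in this excerpt. For dpkg dependency errors of this kind, a commonly suggested sequence (treat it as a hypothetical reconstruction, not the original answer, and use with care) is:

```
# finish configuring any half-installed packages
sudo dpkg --configure -a

# let apt repair broken dependencies
sudo apt --fix-broken install

# if the stray .deb keeps reappearing, remove the package that pulls it in
sudo apt purge python-rpi.gpio
```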

            QUESTION

            upgrading existing config.add_route, pyramid 1.4 to 1.10
            Asked 2021-Mar-06 at 04:58

            Code I'm working on:

            ...

            ANSWER

            Answered 2021-Mar-06 at 04:58

            From the change notes to 1.5a2 (2013-09-22), Backwards Incompatibilities:

            Removed the ability to pass the following arguments to pyramid.config.Configurator.add_route: view, view_context, view_for, view_permission, view_renderer, and view_attr. Using these arguments had been deprecated since Pyramid 1.1. Instead of passing view-related arguments to add_route, use a separate call to pyramid.config.Configurator.add_view to associate a view with a route using its route_name argument. Note that this impacts the pyramid.config.Configurator.add_static_view function too, because it delegates to add_route.

            For example:

            Source https://stackoverflow.com/questions/66489577

            QUESTION

            Why is the generation of my TYPO3 documentation failing without a proper error?
            Asked 2021-Feb-25 at 19:00

             I have a new laptop, and I am trying to render the changelogs of TYPO3 locally based on the steps on https://docs.typo3.org/m/typo3/docs-how-to-document/master/en-us/RenderingDocs/Quickstart.html#render-documenation-with-docker. It runs to the end but shows some non-zero exit codes at the end.

            ...

            ANSWER

            Answered 2021-Feb-18 at 16:06

             I found the issue. The Docker container did not have enough memory allocated. I changed the available memory from 2 GB to 4 GB in Docker Desktop, and that solved the issue.

            Source https://stackoverflow.com/questions/66256413

            QUESTION

            Nuxt / Vue JS - Writing a HTML template for a markdown file
            Asked 2021-Feb-18 at 13:57

            I have a Markdown file which contains release notes. For example:

            ...

            ANSWER

            Answered 2021-Feb-18 at 13:56

            The parse result of the content is stored in article.body.children[]. Each child contains the following node data:

            • tag - HTML tag (e.g., h2, h3)
            • type - Element type (e.g., element, text)
            • value - Element value (the text contents)
            • props - Extra props data, including id
            • children[] - Child nodes

            You could use that info to parse the nodes into a convenient data structure that stores release info, such as:

            • title from the text child of h2
            • id from props of h2
            • changes[] to hold the change lines, each containing:
              • id from props of h4
              • type from the text child of h3
              • text from the text child of h4

            Source https://stackoverflow.com/questions/66259349

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install changelogs

            You can install using 'pip install changelogs' or download it from GitHub, PyPI.
             You can use changelogs like any standard Python library. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

             For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

            Install
          • PyPI: pip install changelogs
          • Clone (HTTPS): https://github.com/pyupio/changelogs.git
          • Clone (GitHub CLI): gh repo clone pyupio/changelogs
          • Clone (SSH): git@github.com:pyupio/changelogs.git


            Consider Popular DevOps Libraries

          • ansible (by ansible)
          • devops-exercises (by bregman-arie)
          • core (by dotnet)
          • semantic-release (by semantic-release)
          • Carthage (by Carthage)

            Try Top Libraries by pyupio

          • safety (Python)
          • safety-db (Python)
          • pyup (Python)
          • pyup-django (Python)
          • dparse (Python)