changelogs | changelog finder and parser for packages | DevOps library
kandi X-RAY | changelogs Summary
A changelog finder and parser for packages available on PyPI, npm and RubyGems.
Top functions reviewed by kandi - BETA
- Returns a list of URLs for the given package
- Bootstrap functions
- Get a changelog
- Load custom functions
- Find the changelog among the given candidates
- Validate a github repo URL
- Returns True if the link contains a project name
- Given a list of candidate URLs, return a set of valid repo URLs
- Find the URL among the given candidates
- Get content from URLs
- Get a git commit log
- Parse git commit log
- Parse the changelog
- Get metadata for a given release
changelogs Key Features
changelogs Examples and Code Snippets
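The kandi extract above does not include any snippets, so here is a minimal usage sketch, assuming the changelogs.get() entry point described in the project's README; the package name is only an example:

    import changelogs

    # Fetch the changelog for a PyPI package. The result maps version
    # strings to the changelog text collected for that release.
    log = changelogs.get("gunicorn")

    for version, notes in log.items():
        print(version)
        print(notes)

    # The README also documents a vendor argument for non-PyPI ecosystems,
    # e.g. changelogs.get("yarn", vendor="npm") -- treat this as an
    # assumption and check the project docs for the exact signature.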
Community Discussions
Trending Discussions on changelogs
QUESTION
I am reading https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/data_stream_api/#examples-for-fromchangelogstream, Example 1:
...ANSWER
Answered 2021-Jun-09 at 16:27 The reason for the difference has two parts, both of them defined in GroupAggFunction, which is the process function used to process this query. The first is this part of the code:
QUESTION
I'm using the following docker-compose.yml to run a dockerized GitLab instance with one runner. Both are in the same network. (The bridge name is set explicitly because the firewall rules depend on it.)
...ANSWER
Answered 2021-May-11 at 14:45 Figured it out: my setup causes the runner to fetch the sources from gitlab.example.com:443, which is the port served by the Nginx proxy on the host. This worked on Docker 19; however, as of Docker 20 the runner container can't connect via the host anymore. This is totally okay; the solution is to tell the runner to contact the GitLab server in web directly when cloning the source. Each runner in config.toml has to contain:
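The actual config.toml contents are not included in this excerpt. A plausible sketch, assuming the GitLab container is reachable as web on the shared Docker network and using the runner's clone_url setting (names here are hypothetical):

    [[runners]]
      url = "https://gitlab.example.com"   # registration URL, unchanged
      clone_url = "http://web"             # clone sources directly from the GitLab container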
QUESTION
For some reason, Liquibase is running an SQL file twice, and I do not understand why:
...ANSWER
Answered 2021-May-04 at 16:42 OK, the problem was that the folder sqlFiles was under "path": "classpath:/db/changesets", and that caused it to run the same SQL file twice: once from the changelog and once as an independent SQL file.
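The changelog files themselves are not shown here; the hypothetical layout below illustrates how an includeAll over the changesets directory ends up executing an SQL file both through its changeset and as a directly included file:

    # db.changelog-master.yaml (hypothetical)
    databaseChangeLog:
      # includeAll picks up every changelog under the directory, including
      # the raw .sql files inside db/changesets/sqlFiles/ ...
      - includeAll:
          path: db/changesets/

    # db/changesets/users.yaml (hypothetical)
    databaseChangeLog:
      - changeSet:
          id: create-users
          author: dev
          changes:
            # ... which are also referenced explicitly here, so the same
            # SQL runs twice.
            - sqlFile:
                path: db/changesets/sqlFiles/create_users.sql

Keeping the sqlFiles folder outside the includeAll path (or limiting includeAll to changelog files) should avoid the duplicate execution.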
QUESTION
I'm adding liquibase to an existing project where tables and data already exist. I would like to know how I might limit the scope of changelogSync[SQL] to a subset of available changes.
I've run liquibase generateChangeLog to capture the current state and placed this into, say, src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__init01.yaml.
I've also added another changeset to cover some new requirements in a new file. Let's call it src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__new-feature.yaml.
I've added a main changelog file src/main/resources/db/changelog/db.changelog-master.yaml with the following contents:
ANSWER
Answered 2021-Apr-15 at 02:59 As it stands, the only solution I've found is to generate the SQL and then manually edit its contents to filter out the change sets which don't correspond to your current schema. Then you can apply the SQL to the db.
QUESTION
I have several gitlab repos where the general setup involves having a master branch, a stage (pre-release) branch and a dev branch. Push permissions are disabled for all 3 branches.
The workflow is to fork from the dev branch for any hot-fixes, bug fixes and features. When you are satisfied with the release you would submit a merge-request to dev. Eventually, when a stable build is ready inside dev, a merge-request would be submitted for the stage branch. Lastly, when you are satisfied with the pre-release you would submit a merge-request for the master branch.
I have CI/CD configured so that tests, builds and deployments are automatically executed from the master and stage branches, with the automatic generation of CHANGELOG.md files. The stage branch deploys to the UAT S3 bucket and master deploys to the production S3 bucket. Deployment is handled through Semantic Versioning 2.0.0, which is responsible for bumping versions, generating changelogs and deploying.
I have a similar setup to the one just described above, except it is a monorepo, so I am using Lerna to handle the publishing (deploying) with {"conventionalCommits": true} to replicate Semantic Versioning 2.0.0's behaviour. I am using independent versioning inside the monorepo.
Both the Semantic Versioning 2.0.0 and the Lerna setup force the master branch to always be either behind or equal with the stage and dev branches, and the stage branch to always be behind or equal to the dev branch, in kind of like a cascading effect:
dev >= stage >= master
Both Lerna Publish and Semantic Versioning make several changes to the files when publishing/deploying. Some of these changes include updating the CHANGELOG.md file and bumping the version inside of the package.json file.
Both Lerna and Semantic Versioning eventually push these changes to the branch they are run from through the CI/CD.
What this means is that if I merge from dev to stage, stage would then have the bumped version numbers and new changelogs pushed into it through either the Semantic Versioning or Lerna Publish executions. This would cause the stage branch to be ahead of the dev branch and would cause all the future forks from the dev branch to detach from the stage branch, meaning that the next time I merge from dev to stage it's not going to be a simple fast-forward merge like it's meant to be, and the merge will most likely encounter conflicts which would prevent any future merges or may fail the CI/CD.
For Semantic Versioning:
- I have disabled the push feature so that the new, changed files are no longer committed and pushed (tags are still created and pushed)
- I have created a script that converts the CHANGELOG.md file to a PDF and sends it to my team's email
This works out well because Semantic Versioning uses tags to determine the changed files and decide how to bump the version. So, although the version inside the repo remains constant at 1.0.0 for example, Semantic Versioning is smart enough to increment the version from the latest tag, not from what's in the package.json.
This unfortunately doesn't hold true for Lerna, which still uses tags to determine changed files but then bumps from the version inside package.json, which means that by not pushing the updated package.json with the new version, Lerna always bumps me from 1.0.0 to either 1.0.1, 1.1.0, or 2.0.0.
So I am stumped with Lerna.
Question: How should I be setting up my CI/CD to avoid the problem? I know the structure of my repo is common, and I haven't found anyone addressing this issue despite the countless users of Lerna and Semantic Versioning, which tells me that I have obviously missed something, as it is not a wide-spread issue.
Possible solution: As I was writing this question, it crossed my mind that maybe I should only bump versions in dev and then deploy from stage and master. This would prevent stage and master from ever being ahead of dev. Would this be the right way to do it?
ANSWER
Answered 2021-Apr-01 at 23:33 Maintaining package version information in the repo does not scale. Still, all the tools out there keep trying to make it work. I have nothing to offer in the way of an alternative (yet), other than to say that release data should be managed by other means than storing it in the source repo. What we're really talking about here are asynchronous processes: dev, build and release. The distributed nature of these systems implies that we cannot treat repos as file shares and expect them to scale well.
See my other rant on this topic. I haven't had time to do a good write-up on this yet. I would add that Git tags are meant to be handy milestone markers for devs to find the right git hash to go back to when creating a fix branch. Commit messages are meant for change-logs and are just one of the inputs into deciding what version to release from which build.
No dev, working in a multi-dev, multi-branch, distributed environment, can possibly predict the appropriate semantic version to apply at some random point in the future. At least not without having full dictatorial control of every dev, branch and build/release environment. Even then, they would be hard pressed to make it work. It's that control, that current tooling implies, but in practice, it never scales.
Consider that your package feed service likely has all, or a sufficient portion, of your release history. As long as you have only one feed service, you can use that to determine the version floor for your next release. Process your semantic commits, look up the most recent version matching the tag your process is targeting (daily, beta, RC, none or whatever), calculate the next appropriate version, update the appropriate version fields in source code, then build and test. If the feed service doesn't let you include hidden or deleted versions in your query, you'll have to use your own database. Do not check in the modified files! Those fields should be zeroed in your repo, effectively marking local dev builds as 0.0.-dev or something along those lines.
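As an illustration of that flow (not code from the answer), here is a rough Python sketch that takes the latest published version from a feed (the public npm registry in this example) and bumps it based on the conventional-commit messages since the matching tag; the package name and the commit parsing are simplified assumptions:

    import json
    import re
    import subprocess
    import urllib.request

    def latest_published_version(package: str) -> str:
        """Ask the feed service (here: the npm registry) for the newest release."""
        url = f"https://registry.npmjs.org/{package}/latest"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["version"]

    def bump(version: str, commits: list[str]) -> str:
        """Pick the next semver from conventional-commit message types.

        Assumes a plain MAJOR.MINOR.PATCH version string.
        """
        major, minor, patch = (int(part) for part in version.split("."))
        if any("BREAKING CHANGE" in c or re.match(r"^\w+(\(.+\))?!:", c) for c in commits):
            return f"{major + 1}.0.0"
        if any(c.startswith("feat") for c in commits):
            return f"{major}.{minor + 1}.0"
        return f"{major}.{minor}.{patch + 1}"

    if __name__ == "__main__":
        current = latest_published_version("my-package")  # hypothetical package name
        commits = subprocess.run(
            ["git", "log", f"v{current}..HEAD", "--pretty=%s"],  # messages since the last release tag
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        print("next version:", bump(current, commits))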
Automatic publishing of prerelease versions is okay, but there should be a human in the loop for release versions. Iff all the above steps succeed, apply a tag to the git hash that you just successfully built from.
My dream CI/CD system runs every commit to my release branch(es) through a test build and unit test runs, detects whether existing test cases were modified (auto-detect breaking changes!), analyzes the commit messages for indications of intentional breakage, and presents all of that information to the release build system on demand. My release build system produces a -alpha.build.### and runs all of the acceptance tests against it.
If there is no known breakage and the intended target is a prerelease, it then updates the files containing version information and runs an incremental build with one final smoke test prior to automatic publication. Here's where there's some gray areas. Some prerelease targets should not include breakage, without human intervention, and for others it's okay. So I would have a special set of prerelease targets that do not allow automated publication of breakage, such as certain levels of dog-fooders, or bits targeted at my internal long running test infrastructure.
If it's an untagged release target, then I would prefer it to build and package everything for consumption in my final stage of testing. This is where the automation verifies the package for conformance to policies, ensures that it can be unpacked correctly, and gathers sign-offs from area owners, department/division heads, etc, prior to publication. It might include some randomized test deployments, in cases where we're targeting a live system.
And all of the above is just an over-simplified description really. It glosses over more than it clarifies, because there's just too much variation in the real-world needs of producers and consumers.
Circling back to tools and control. There are many in the DevOps world who will tell you that the main point is to standardize around tooling. Which reminds me of this xkcd comic.
QUESTION
Mongock looks very promising. We want to use it inside a kubernetes service that has multiple replicas that run in parallel.
We are hoping that when our service is deployed, the first replica will acquire the mongockLock and all of its ChangeLogs/ChangeSets will be completed before the other replicas attempt to run them.
We have a single instance of mongodb running in our kubernetes environment, and we want the mongock ChangeLogs/ChangeSets to execute only once.
Will the mongockLock guarantee that only one replica will run the ChangeLogs/ChangeSets to completion?
Or do I need to enable transactions (or some other configuration)?
...ANSWER
Answered 2021-Feb-23 at 10:04 I am going to provide the short answer first and then the long one. I suggest you read the long one too in order to understand it properly.
Short answer: By default, Mongock guarantees that the ChangeLogs/changeSets will be run only by one pod at a time: the one owning the lock.
Long answer: What really happens behind the scenes (if it's not configured otherwise) is that when a pod takes the lock, the others will try to acquire it too, but they can't, so they are forced to wait for a while (configurable, but 4 minutes by default) as many times as the lock is configured (3 times by default). After this, if it's not able to acquire it and there are still pending changes to apply, Mongock will throw a MongockException, which should mean the JVM startup fails (what happens by default in Spring).
This is fine in Kubernetes, because it ensures it will restart the pods. So now, assuming the pods start again and changeLogs/changeSets are already applied, the pods start successfully because they don't even need to acquire the lock as there aren't pending changes to apply.
Potential problem with MongoDB without transaction support and frameworks like Spring: Now, assuming the lock and the mutual exclusion are clear, I'd like to point out a potential issue that needs to be mitigated by the changeLog/changeSet design.
This issue applies if you are in an environment such as Kubernetes, which has a pod initialisation time, your migration takes longer than that initialisation time, and the Mongock process is executed before the pod becomes ready/healthy (and it's a condition for it). This last condition is highly desired as it ensures the application runs with the right version of the data.
In this situation imagine the Pod starts the Mongock process. After the Kubernetes initialisation time, the process is still not finished, but Kubernetes stops the JVM abruptly. This means that some changeSets were successfully executed, some others not even started (no problem, they will be processed in the next attempt), but one changeSet was partially executed and marked as not done. This is the potential issue. The next time Mongock runs, it will see the changeSet as pending and it will execute it from the beginning. If you haven't designed your changeLogs/changeSets accordingly, you may experience some unexpected results because some part of the data process covered by that changeSet has already taken place and it will happen again.
This somehow needs to be mitigated, either with the help of mechanisms like transactions, with a changeLog/changeSet design that takes this into account, or both.
Mongock currently provides transactions with “all or nothing”, but it doesn't really help much, as it will retry every time from scratch and will probably end up in an infinite loop. The next version 5 will provide transactions per ChangeLogs and changeSets, which, together with good organisation, is the right solution for this.
Meanwhile this issue can be addressed by following these design suggestions.
QUESTION
Every time I try to install something, upgrade or autoremove, this error happens!
I've searched the web and tried some fixes, but they don't seem to work. I don't have python2.7 installed, only 3, and can't install python2 or anything because of this. I've tried to sudo rm python-rpi.gpio_0.7.0-0.1~bpo10+4_armhf.deb but the file keeps coming back. Please, I really need some help since I've got some work on my Pi and don't want to get a new image.
...ANSWER
Answered 2021-Jan-25 at 13:18 Use the following commands:
QUESTION
Code I'm working on:
...ANSWER
Answered 2021-Mar-06 at 04:58 From the change notes to 1.5a2 (2013-09-22), Backwards Incompatibilities:
Removed the ability to pass the following arguments to pyramid.config.Configurator.add_route: view, view_context, view_for, view_permission, view_renderer, and view_attr. Using these arguments had been deprecated since Pyramid 1.1. Instead of passing view-related arguments to add_route, use a separate call to pyramid.config.Configurator.add_view to associate a view with a route using its route_name argument. Note that this impacts the pyramid.config.Configurator.add_static_view function too, because it delegates to add_route.
For example:
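The original example is not reproduced in this excerpt; a minimal sketch of the migration described in the note, using a hypothetical route and view, could look like this:

    from pyramid.config import Configurator
    from pyramid.response import Response

    def hello_view(request):
        return Response("hello")

    config = Configurator()

    # Before Pyramid 1.5 the view could be passed straight to add_route.
    # Now the route and the view are registered separately and linked
    # through route_name.
    config.add_route("hello", "/hello")
    config.add_view(hello_view, route_name="hello")

    app = config.make_wsgi_app()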
QUESTION
I have a new laptop and I am trying to render the Changelogs of TYPO3 locally based on the steps at https://docs.typo3.org/m/typo3/docs-how-to-document/master/en-us/RenderingDocs/Quickstart.html#render-documenation-with-docker. It continues until the end but shows some non-zero exit codes at the end.
...ANSWER
Answered 2021-Feb-18 at 16:06 I found the issue with this. It seemed the Docker container did not have enough memory allocated. I changed the available memory from 2 GB to 4 GB in Docker Desktop, and that solved the issue.
QUESTION
I have a Markdown file which contains release notes. For example:
...ANSWER
Answered 2021-Feb-18 at 13:56 The parse result of the content is stored in article.body.children[]. Each child contains the following node data:
- tag - HTML tag (e.g., h2, h3)
- type - Element type (e.g., element, text)
- value - Element value (the text contents)
- props - Extra props data, including id
- children[] - Child nodes
You could use that info to parse the nodes into a convenient data structure that stores release info, such as:
- title - from the text child of h2
- id - from props of h2
- changes[] - to hold the change lines, each containing:
  - id - from props of h4
  - type - from the text child of h3
  - text - from the text child of h4
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install changelogs
You can use changelogs like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.