packfile | Dockerfile alternative, built on Cloud Native Buildpacks | Continuous Deployment library
kandi X-RAY | packfile Summary
Packfile is an abstraction for writing modular Cloud Native Buildpacks. It enables you to efficiently build OCI (Docker) images using declarative TOML, YAML, or Go.
Top functions reviewed by kandi - BETA
- Build builds a build from a packfile.
- writeBuildpack writes a buildpack file to dst.
- Detect runs the given Packfile.
- Run the main package.
- tryAfter runs the kernel after the given links.
- NewStreamer returns a new Streamer.
- mergePackfiles merges src into destination.
- mergeRequire merges metadata into md.
- execCmd returns an exec.Cmd for the given exec command.
- Run the packfile.
Community Discussions
Trending Discussions on packfile
QUESTION
If I run git fetch origin and then git checkout on a series of consecutive commits, I get a relatively small repo directory. But if I run git fetch origin and then git checkout FETCH_HEAD on the same series of commits, the directory is relatively bloated. Specifically, there seem to be a bunch of large packfiles.
The behavior appears the same whether the commits are all in place at the time of the first fetch, or are committed immediately before each fetch.
The following examples use a public repo, so you can reproduce the behavior.
Why is the directory size of example 2 so much larger?
Example 1 (small):
...ANSWER
Answered 2022-Mar-25 at 19:08
Because each fetch produces its own packfile, and one packfile is more efficient than multiple packfiles. A lot more efficient. How?
First, the checkouts are a red herring. They don't affect the size of the .git/ directory.
Second, in the first example only the first git fetch origin does anything. The rest fetch nothing (unless something changed on origin).
Compression works by finding long common sequences within the data and reducing them to very short ones. If "long block of legal mumbo jumbo" appears dozens of times, it can be replaced with a few bytes, but the original long string must still be stored. If there's a single packfile, it only has to be stored once; if there are multiple packfiles, it must be stored multiple times. You are, effectively, storing the whole history of changes up to that point in each packfile.
We can see in the example below that the first packfile is 113M, the second is 161M, the third is 177M, and the final one is 209M. The size of the final packfile is roughly equal to the size of the single garbage-collected packfile.
Why do multiple fetches result in multiple packfiles? git fetch is very efficient: it will only fetch objects you do not already have. Sending individual object files is inefficient, so a smart Git server will send them as a single packfile.
When you do a single git fetch on a fresh repository, Git asks the server for every object, and the remote sends it a packfile of every object. When you do git fetch ABC and then git fetch DEF, Git tells the server "I already have everything up to ABC, give me all the objects up to DEF", so the server makes a new packfile of everything from ABC to DEF and sends it.
Eventually your repository will do an automatic garbage collection and repack these into a single packfile.
We can reduce the examples. I'm going to use Rails to illustrate because it has clearly defined tags to fetch.
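A minimal sketch of that reduced experiment (tag choices are illustrative):

    git init rails-demo && cd rails-demo
    git remote add origin https://github.com/rails/rails.git
    git fetch origin tag v6.0.0      # first fetch: one packfile
    git fetch origin tag v6.1.0      # second fetch: a second, larger packfile
    du -sh .git/objects/pack         # total size across all packfiles
    git gc                           # repack everything into a single packfile
    du -sh .git/objects/pack         # noticeably smaller

Comparing the two du readings shows the overhead of storing shared history once per pack.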
QUESTION
I accidentally did some work on master, so I did some googling on how to commit those changes to a branch other than master. I found this:
...ANSWER
Answered 2021-Sep-01 at 16:03
Based on your git log output, the two commits that are on master but not on origin/master are a290975977b819d6719cd4e2e96f5d0ba9b3bc76 and 11d13f421f3f6c7f8ff8e18cf0d95a005c0465bb.
Commit a290975977b819d6719cd4e2e96f5d0ba9b3bc76 is the commit that you currently have as the tip of your master branch. The tip commit is the last commit on that branch at the moment; to understand this, see the notes below.
Commit a290975977b819d6719cd4e2e96f5d0ba9b3bc76 is a merge commit, with two parents. We see this here:
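As for the original task of moving work accidentally committed on master onto another branch, a common recipe is the following (a minimal sketch; the branch name is illustrative, and this assumes the commits have not been pushed):

    git branch my-feature            # keep the local commits on a new branch
    git reset --hard origin/master   # move master back to match the remote
    git checkout my-feature          # continue working on the new branch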
QUESTION
TL;DR: I'm looking for a way to get a TFS git repo to provide instructions for how to create a pull request after a git push origin. I think the best solution is a server-side hook if it's supported in TFS 2018.
I know that if I git push origin new_branch to a github repo, I can get a response that reads something like this:
...ANSWER
Answered 2021-Jun-30 at 08:49
TFS 2018 doesn't support Git server-side hooks, so unfortunately your (really nice) idea isn't possible.
I suggest opening a feature request for it, like the existing one for a pre-receive hook.
QUESTION
From what I can gather from the git-scm docs, packfile negotiation ends when the client thinks the server knows which commits to send, and then the packfile is sent immediately. But even if a commit is new, some of its content might already exist on the client, so why not negotiate down to which files need to be sent as well? Is it because negotiating at that level of detail is not worth the effort?
...ANSWER
Answered 2021-Apr-25 at 09:10
If the file is unchanged between the commit being sent and one which was discovered to be common, this can be handled without further negotiation: if the client has commit X, it must also have all the blobs in the tree of commit X.
If a blob with the same hash happens to exist in a commit which was not part of the negotiation, it would be possible to negotiate not to send it. In practice, though, this would probably be rare: you'd need a file on the server and a file on the client which were identical, but which had not appeared in the history that was already shared.
More common is that the file to be sent can be described by some delta from a previous version of the file. That's handled by the pack format - the server can send a delta with a base object it knows the client has.
If the server had a perfect view of the client's list of objects, it could optimise both the blobs it sent and how to deltify them, but the bandwidth to transmit that complete list would often be higher than the bandwidth saved.
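You can see this deltification by inspecting a pack directly (a sketch; the pack filename depends on your repository):

    git verify-pack -v .git/objects/pack/pack-*.idx | head
    # each line lists: object sha, type, size, size-in-pack, offset,
    # and, for deltified objects, the delta depth and the sha of its base object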
QUESTION
During debugging of a production issue, I am dealing with an application (Gerrit) that holds references to RandomAccessFiles in a cache structure.
These files reference a Git repository's packfiles.
During an out-of-band git gc (not within the application) on a repository with no changes, it appears that:
- the same packfile is rewritten (same uuid);
- the file descriptor is in the output list of lsof in the form (old-xxx.pack) but is instantly marked (deleted).
I have been searching numerous codebases for this rename, to no avail.
My question is: could this be a filesystem quirk, if git gc does a rename/overwrite on a file with an open file descriptor?
lsof entry:
...ANSWER
Answered 2021-Apr-15 at 01:13
If you do a standard git gc in a repository with no changes, this is expected. Git names its packfiles by a hash of their contents. Because Git doesn't recompute deltas for existing packs, when you git gc and there's only one pack and no loose objects, it's very likely that it will pack all the data into one pack that's the same as the old one.
When this happens, Git still has a file descriptor to the old pack open because it doesn't close packs immediately. This is because often it's necessary to access them again, so it will try to leave them open a little while. The old pack, which is still open, is renamed to the old name, and the new pack is renamed into place; the old pack is then deleted. On a Unix system, it's completely possible to delete a file for which you have the file descriptor open; when the last process closes its file descriptor, the storage is freed.
So this all seems completely normal for the scenario you're describing. Usually git gc is not a no-op, since additional objects are added to or removed from the pack, or multiple packs are combined into one. But if you do run a git gc immediately after running one, with no intermediate changes, this is expected.
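This is easy to confirm in a quiescent repository (a sketch; pack-<hash>.pack stands for whatever pack name your repository currently has):

    ls .git/objects/pack/    # note the pack-<hash>.pack name
    git gc                   # repack with no intermediate changes
    ls .git/objects/pack/    # the same name reappears, since it is
                             # derived from a hash of the pack's contents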
QUESTION
Is there a way to tell Git to create packfiles based on file types, and to specify the delta-ification and zlib compression for those classes / types?
I have a rather large repository, much of which is composed of image assets and translation files (.po), the latter actually being the largest fraction of the working copy and the repository data.
- For the image assets, neither delta nor zlib compression is useful: the images are already compressed (so they don't compress well under zlib), and delta compression does nothing useful when small changes tend to cascade through the compressed image (changes are rare anyway; usually once an asset is committed it's either left alone forever or replaced wholesale).
- For the PO files, while they're technically text files, I would expect them to delta-compress very badly in this specific repository: the historical generator / exporter would export the translations in essentially random order, so from one export to the next it's as if the entire file had been rewritten.
As a result, when the repository is repacked I'd like to try packing the images together with neither delta nor zlib compression, and the PO files together with zlib compression (at the maximum possible level) but no delta compression. That way Git ought not to waste cycles on useless compression work, and should avoid polluting the compression of more "normal" code files.
However, my previous experiments with packfiles did not go well. Is there built-in support for this sort of segregation and configuration that I missed, or would I need to build the packs by hand using low-level commands, or even libgit2 directly?
...ANSWER
Answered 2021-Jan-25 at 08:46
Alas, no: the only controls available are the following:
- core.bigFileThreshold: files that exceed this size are not delta-compressed, merely zlib-compressed.
- pack.island and several related settings: these set up so-called delta islands, as described in the git pack-objects documentation.
These do not come anywhere close to what you want. (Note: there is also core.compression and two related items, but these are strictly global, not per-object.)
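For reference, those knobs are set like this (a sketch; the values are illustrative, see the git-config and git-pack-objects documentation):

    git config core.bigFileThreshold 512k       # blobs above this size: zlib only, no delta search
    git config --add pack.island 'refs/heads/'  # delta islands: restrict delta bases by ref pattern
    git config core.compression 6               # zlib level, but strictly global, not per-type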
QUESTION
I'd like to remove objects (.git/objects) from my repository that are no longer referenced by any refs, but I don't want to pack them into pack files.
I tried git gc --no-prune, but it still removed all loose objects from my repo and left only packfiles (git count-objects reports "0 objects, 0 kilobytes").
ANSWER
Answered 2020-Oct-12 at 21:40
git gc is a wrapper for a whole series of smaller maintenance operations:
- git reflog expire
- git repack
- git prune-packed (actually done by git repack automatically when you use -d)
- git prune
- any others I may have forgotten
and git prune is the one that specifically removes unreferenced objects. Note that git gc supplies an expiry time so that git prune won't remove unreferenced objects that are still being constructed. If you have no active Git commands that might be constructing objects, and don't want the gc default grace period, you don't need to worry about this.
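Putting that together, to drop unreferenced objects without repacking anything, you can run the relevant pieces yourself (a sketch; --expire=now disables the grace period, so only use it when no other Git commands are running):

    git reflog expire --expire=now --all   # stop reflog entries from keeping objects alive
    git prune --expire=now                 # delete unreachable loose objects, no packing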
QUESTION
I am trying to create a simple WCX plugin for Total Commander.
I downloaded wcxhead.pas from here (it's in the help file):
http://ghisler.fileburst.com/plugins/wcx_ref2.21se_chm.zip
This is the official file linked here: https://www.ghisler.com/plugins.htm
And created my own DLL:
...ANSWER
Answered 2020-Jan-15 at 15:58
The header was clearly created in pre-Unicode Delphi times. It assumes char = AnsiChar, which has not been true since Delphi 2009.
IIRC from the forums, Total Commander 32-bit is still compiled with Delphi 2 or Delphi 3, with a lot of custom RTL, to keep it small and efficient.
If you are using a Unicode version of Delphi, try changing char into AnsiChar, and use the *W() exports with Unicode support.
Also, don't use global variables to maintain the state of the packer access: you should use the THandle value to open/close a given archive file, since two archives may be opened at the same time.
Edit: since you are using Delphi 7, this is not a Unicode problem. But I would try
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install packfile
testout/node-toml.tgz is a Node.js engine buildpack built from testdata/node-toml.
testout/node-yaml.tgz is a Node.js engine buildpack built from testdata/node-yaml.
testout/node-go.tgz is a Node.js engine buildpack built from testdata/node-go.
testout/npm-toml.tgz is an NPM buildpack built from testdata/npm-toml.
testout/npm-yaml.tgz is an NPM buildpack built from testdata/npm-yaml.
testout/npm-go.tgz is an NPM buildpack built from testdata/npm-go.
testout/ruby-yaml.tgz is a Ruby buildpack built from testdata/ruby-yaml.
testout/bundler-yaml.tgz is a Bundler buildpack built from testdata/bundler-yaml.
testout/ytt-yaml.tgz is a buildpack that builds YTT from testdata/ytt-yaml.
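As a sketch of how one of these built archives might be used with the pack CLI (the app name and path are illustrative, and this assumes pack is installed and the testout/ archives have been built):

    pack build my-node-app \
        --path ./my-node-app \
        --buildpack testout/node-toml.tgz \
        --buildpack testout/npm-toml.tgz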