hsm | Finite state machine library based on the boost hana meta programming library | SDK library
kandi X-RAY | hsm Summary
The hana state machine (hsm) is a finite state machine library based on the boost hana meta programming library. It follows the principles of the boost msm and boost sml libraries, but tries to reduce its own complex meta programming code to a minimum. The following table compares features among popular C++ state machine libraries. A click on a particular feature check mark will forward to the feature documentation.
hsm Key Features
hsm Examples and Code Snippets
Community Discussions
Trending Discussions on hsm
QUESTION
We are considering switching to an extended validation (EV) code signing certificate.
In order to fully automate the notarization with Apple, we had to switch our build machine to a Mac mini.
Reading up on the EV code signing process, two questions arose:
Can the password entry for the hardware token (HSM) be automated? The comment from Ingo Kegel on this SO question seems to indicate that you can pass the HSM password via the --win-keystore-password= command line option.
Is that correct?
Can a multi-platform build still happen on a single machine (the Mac mini)? The install4j help mentions 'different platforms':
On Windows, such a hardware token can be usually accessed through the Windows keystore. On a different platform, you have to choose the "Hardware security module PKCS #11 library" option and configure a native library that provides access to the keystore in the HSM through the PKCS #11 API.
Are there PKCS #11 libraries for macOS? The library selection dialog asks for a DLL...
...ANSWER
Answered 2022-Apr-14 at 08:44 The comment from Ingo Kegel on this SO question seems to indicate that you can pass the HSM password via the --win-keystore-password= command line option.
Yes, that is correct. This option is available on non-Windows platforms as well for code signing of Windows executables.
Can a multi-platform build still happen on a single machine (the Mac mini)?
Yes. A multi-platform build that involves notarization can in fact only be performed on macOS, because Apple does not allow notarization requests from other platforms.
Are there PKCS #11 libraries for macOS? The library selection dialog asks for a DLL...
You need a library for your HSM; this will be a .so file on Linux or a .dylib file on macOS. I have created an issue for the file chooser to show the correct file filter based on the current platform.
Whether such a library is available for macOS depends on the HSM. These libraries are loaded by the Java Cryptography Architecture (JCA) and install4j has no Windows-specific code in this respect.
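As an illustration of that last point, here is a minimal, hedged sketch of how a PKCS #11 native library is typically wired into the JCA via the SunPKCS11 provider; the library path, provider name and token PIN below are placeholders, not install4j specifics.

```java
import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;

public class Pkcs11LibraryDemo {
    public static void main(String[] args) throws Exception {
        // Inline SunPKCS11 config ("--" prefix): "library" points at the vendor's
        // PKCS #11 module - a .dll on Windows, a .so on Linux, a .dylib on macOS.
        String config = "--name = demoHsm\nlibrary = /usr/local/lib/libvendor-pkcs11.dylib\n";

        Provider p = Security.getProvider("SunPKCS11").configure(config); // Java 9+
        Security.addProvider(p);

        // The hardware token then appears as an ordinary PKCS11 keystore.
        KeyStore ks = KeyStore.getInstance("PKCS11", p);
        ks.load(null, "token-pin".toCharArray()); // placeholder PIN
        ks.aliases().asIterator().forEachRemaining(System.out::println);
    }
}
```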
QUESTION
2 different tenants (Subscription A in tenant A and Subscription B in tenant B)
We have one subscription in Azure cloud and we have setup Azure Keyvault. We can create keys there and use one of the key to encrypt disks of a virtual machine running in our subscription.
Our customer has their own Azure cloud subscription, and for security and compliance purposes their requirement is that they must hold control of the key being used to encrypt disks of virtual machines in our subscription. For this we both have Azure Key Vault with the Premium tier, and I was wondering if there is any guide which points out how to use the Azure Key Vault HSM from the customer's subscription to create keys in our subscription.
https://docs.microsoft.com/en-us/azure/key-vault/managed-hsm/hsm-protected-keys-byok
The above guide points out some of the vendors and how to use the BYOK tool to transfer keys from an HSM into Azure Key Vault.
We are looking for a way to use the Azure Key Vault HSM from the customer's subscription to create keys in our Azure Key Vault, which we can then use to encrypt disks in our subscription.
Many thanks,
...ANSWER
Answered 2022-Feb-21 at 10:55 If you have the permissions to access the two subscriptions, you can create an Azure Management Group to manage access to the subscriptions placed in this Management Group.
For more details, you can see the document about "Management Groups".
QUESTION
I am writing a package for folks who want to predict values based on AADTMAJ, L, and Base_Past. The function provides two options: 1) allow the user to enter their own regression coefficients, or 2) provide the user with predefined coefficients. However, I have not been able to use return() correctly.
input data
...ANSWER
Answered 2022-Mar-11 at 13:33 The function and your use of it have several problems. Notable on the list of problems since my first batch of comments:
- You call it within a rowwise pipe but then pass data=data, which means that it is ignoring the data coming in the pipe and instead looking at the whole thing. You might instead use data=cur_data() (since it is inside of a mutate, this works, as cur_data() is defined by dplyr for situations something like this).
- Your helper_function is ill-defined by assuming that custom.spf is defined and available. Having a function rely on the presence of external variables not explicitly passed to it makes it fragile and can be rather difficult to troubleshoot. If for instance custom.spf were not defined in the calling environment, then this function will fail with object 'custom.spf' not found. Instead, I think you could use:
QUESTION
I am proposing to use AWS KMS to encrypt my database. However, my boss challenged me: what if someone on Amazon's staff has access to steal my KMS key and decrypt my database? The information inside the database is very important and we cannot take any risk that other people could decrypt it.
Is there another solution to this issue, to make sure no one can steal the key?
Should we use an on-prem HSM to store the key instead?
...ANSWER
Answered 2022-Mar-10 at 03:16 As the FAQ points out, AWS KMS is designed such that
no one, including AWS employees, can retrieve your plaintext KMS keys from the service.
If you read further down, it also provides links to various articles detailing the specification and design of the KMS. And as you can see from the volumes of these articles, the full scope of design consideration and how it complies with FIPS certification is beyond the scope of this answer.
However, as an example, refer to the cryptographic details tech paper for some ideas of how it works. There are 2 areas mentioned where keys are present:
- In the KMS Keys Repository
- In the HSM modules
KMS Keys Repository
The repository serves as durable storage for the keys. Keys are, of course, stored encrypted. The article further explains that the key repository leverages IAM roles.
Only under AWS IAM roles and accounts administered by each customer can customer KMS keys be created, deleted, or used to encrypt, decrypt, sign, or verify data.
This is the same way authentication and authorization to any other AWS services are managed. Hence, this is one way to prevent AWS employees from gaining access to the keys. How IAM works and how it is secured is once again beyond the scope of this answer.
HSM Modules
Unlike the KMS keys repository, the HSM Modules will have access to the plain text keys. However, the plain text keys are only loaded in-memory for the duration that they are used. They are not durably stored in the HSM modules.
These keys are made available only on the HSMs and only in memory for the necessary time needed to process your cryptographic request.
Hence, employees with access to these modules could theoretically gain access to these keys. To mitigate this risk, the design goals section of the article further explains that the modules use quorum-based access controls.
Multiple Amazon employees with role-specific access to quorum-based access controls are required to perform administrative actions on the HSMs.
That is, no single employee will have administrative access to these modules. Multiple employees are always required. Once again, how AWS assigns which roles to which employees at which management level is beyond the scope of this answer.
As the question requested, these are just some of the considerations of how the service is secured against AWS employees. For an organization to decide whether to use AWS, the decision should usually be based on a comprehensive set of security policies and an audit of whether AWS complies with those requirements.
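To make that concrete, here is a minimal Java sketch (AWS SDK for Java v2; the key alias is a hypothetical placeholder) of envelope encryption with KMS: the application only ever receives a data key generated under the KMS key, and the KMS key itself is never returned by the service.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.DataKeySpec;
import software.amazon.awssdk.services.kms.model.DecryptRequest;
import software.amazon.awssdk.services.kms.model.GenerateDataKeyRequest;
import software.amazon.awssdk.services.kms.model.GenerateDataKeyResponse;

public class KmsEnvelopeSketch {
    public static void main(String[] args) {
        try (KmsClient kms = KmsClient.create()) {
            // Ask KMS to generate a data key under the KMS key (hypothetical alias).
            GenerateDataKeyResponse dataKey = kms.generateDataKey(
                    GenerateDataKeyRequest.builder()
                            .keyId("alias/database-master-key")   // placeholder
                            .keySpec(DataKeySpec.AES_256)
                            .build());

            // plaintext(): used locally to encrypt the data, then discarded.
            // ciphertextBlob(): stored alongside the data; useless without KMS.
            SdkBytes plaintextDataKey = dataKey.plaintext();
            SdkBytes wrappedDataKey = dataKey.ciphertextBlob();

            // Later, only KMS can unwrap the stored data key; the KMS key itself
            // is never handed to the caller.
            SdkBytes recovered = kms.decrypt(DecryptRequest.builder()
                            .ciphertextBlob(wrappedDataKey)
                            .build())
                    .plaintext();

            System.out.println(plaintextDataKey.equals(recovered)); // true
        }
    }
}
```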
EDIT
Since you mentioned also how to convince stakeholders, this is usually a business question rather than a technical one.
I would refer them to AWS compliance for evidence that AWS goes through rigorous third-party audits. I would then point out that the security of a system is only as strong as its weakest link. That is, using AWS does not mean we automatically have AWS-level security. We have to ensure our software, our people, and our processes are secure against exploits. So unless we are sure we have a better security profile than AWS (with all their compliance and audits), our focus and worry should be more on securing our own resources.
QUESTION
I have tried to sign a PDF using a smart card with Node JS Chilkat, but it fails. I get the following error when I insert the smart card, install its driver on my computer, and then execute my code. I cannot find the best solution.
...ANSWER
Answered 2022-Feb-16 at 17:27 I uploaded a new build here: https://www.npmjs.com/package/@chilkat/ck-node14-win64 It is version "9.50.89-hotfix1".
Please give it a try. Also, I see you are passing "TnTrust Token" to LoadFromSmartCard. If the problem remains, try passing an empty string. This should cause Chilkat to try using the default Microsoft CNG Storage Provider, which may have better success.
QUESTION
I'm using iText 7.1.15 and SignDeferred to apply signatures to PDF documents. SignDeferred is required since the signature is created by a PKCS11 hardware token (USB key).
When I sign a "regular" PDF, e.g. one created via Word, I can apply multiple signatures and all signatures are shown as valid in Adobe Acrobat Reader.
If the PDF was created by combining multiple PDF documents with Adobe DC, the first signature is valid but becomes invalid as soon as the second signature is applied.
Document in Adobe reader after the first signature is applied:
Document in Adobe reader after the second signature is applied:
The signatures of the same document are shown as valid in Foxit Reader.
I've found a similar issue on Stack Overflow (multiple signatures invalidate first signature in iTextSharp pdf signing), but it was using iText 5 and I'm not sure it is the same problem.
Question: What can I do in order to keep both signatures valid in Acrobat Reader?
Unsigned Pdf document on which the first signature becomes invalid: https://github.com/suntsu42/iTextDemoInvalidSecondSignature/blob/master/test.pdf
Twice signed document which is invalid: https://github.com/suntsu42/iTextDemoInvalidSecondSignature/blob/master/InvalidDocumentSignedTwice.pdf
Code used for signing
...ANSWER
Answered 2022-Jan-28 at 16:35 As already mentioned in a comment, the example document "InvalidDocumentSignedTwice.pdf" has the signature not applied in an incremental update, so here it is obvious that former signatures will break. But this is not the issue of the OP's example project. Thus, the issue is processed with an eye on the actual outputs of the example project.
Analyzing the Issue
When validating signed PDFs, Adobe Acrobat executes two types of checks:
- It checks the signature itself and whether the revision of the PDF it covers is untouched.
- (If there are additions to the PDF after the revision covered by the signature:) It checks whether changes applied in incremental updates only consist of allowed changes.
The former check is pretty stable and standard but the second one is very whimsical and prone to incorrect negative validation results. Like in your case...
In case of your example document one can simply determine that the first check must positively validate the first signature: The file with only one (valid!) signature constitutes a byte-wise starting piece of the file with two signatures, so nothing can have been broken here.
Thus, the second type of check, the fickle type, must go wrong in the case at hand.
To find out which change triggers this, one has to analyze the changes done during signing. A helpful fact is that doing the same using iText 5 does not produce the issue; thus, the change that triggered the check must be in what iText 7 does differently than iText 5 here. And the main difference in this context is that iText 7 has more thorough tagging support than iText 5 and, therefore, also adds a reference to the new signature field to the document structure tree.
This by itself does not yet trigger the whimsical check, though, it only does so here because one outline element refers to the parent structure tree element of the change as its structure element (SE). Apparently Adobe Acrobat considers the change in the associated structure element as a change of the outline link and, therefore, as a (disallowed) change of the behavior of the document revision signed by the first signature.
So is this an iText error (adding entries to the structure tree) or an Adobe Acrobat error (complaining about the additions)? Well, in a tagged PDF (and your PDF has the corresponding Marked entry set to true) the content including annotations and form fields is expected to be tagged. Thus, addition of structure tree entries for the newly added signature field and its appearance not only should be allowed but actually recommended or even required! So this appears to be an error of Adobe Acrobat.
A Work-Around
Knowing that this appears to be an Adobe Acrobat bug is all well and good, but at the end of the day one might need a way now to sign such documents multiple times without current Adobe Acrobat calling that invalid.
It is possible to make iText believe there is no structure tree and no need to update a structure tree. This can be done by making the initialization of the document tag structure fail. For this we override the PdfDocument method TryInitTagStructure. As the iText PdfSigner creates its document object internally, we do this in an override of the PdfSigner method InitDocument. I.e. instead of PdfSigner we use the class MySigner defined like this:
QUESTION
I have created and activated a Managed HSM using the following Terraform script:
main.tf
...ANSWER
Answered 2022-Jan-06 at 07:23 As mentioned in the comments, you cannot find the HSM Key Vault in the Portal, so you will have to use the Azure Key Vault PowerShell module or the Azure Key Vault CLI module.
As a solution, you can add the below to your Terraform script to create a Disk Encryption Set with the Managed HSM:
QUESTION
I'm working on creating an Azure Key Vault Managed HSM using Terraform. For that I have followed this documentation.
The above documentation contains the code for creating the HSM but not for activating the managed HSM.
I want to provision and activate a managed HSM using Terraform. Is this possible through Terraform or not?
After activating the managed HSM, I want to configure encryption with customer-managed keys stored in the Azure Key Vault Managed HSM. For that I have followed this documentation, but it contains Azure CLI code.
...ANSWER
Answered 2021-Dec-28 at 16:08 Unfortunately, it is not directly possible to activate the Managed HSM from Terraform. Currently, you can only provision it from Terraform or an ARM template; activation can only be done from PowerShell or the Azure CLI. The same applies to updating the storage account with a customer-managed key and assigning a key vault role assignment.
If you use azurerm_storage_account_customer_managed_key, then you will get the below error:
Overall, all HSM Key Vault operations need to be performed via the CLI or PowerShell.
So, as a workaround, you can use local-exec in Terraform to run it directly without performing separate operations.
Code:
QUESTION
My problem concerns the usage of an HSM with Java (OpenJDK 11.0.12). The HSM should be used for signature purposes, with the SHA512 RSA algorithm. I could be wrong in a lot of the following sentences; I'm a total newbie with HSMs & co, so I apologize in advance.
From what I've understood, there are three kinds of approaches:
1- Using the SunPKCS11 provider
2- Using the vendor lib (the HSM is shipped with a couple of jars; in my case nCipher is shipped with nCipherKM.jar, which should be the vendor provider)
3- OpenSSL (we have some software in C already doing this; I prefer to avoid it)
The usage of the vendor lib is really easy, at least until the Get info call, which sends an unknown parameter to the HardServer, causing an unmarshalable exception. This is difficult to debug; the communication protocol isn't documented. Right now I've put this solution aside.
In any case I prefer the SunPKCS11 solution; it doesn't work out of the box for me, but it was simple to debug and analyze, and it should be a standard.
In this case I'm using the European DSS library to interface with the PKCS11 provider, making things a little simpler for me to configure and implement.
The problem occurs during SunPKCS11 (vanilla) initialization.
At some point it calls a method "P11Keystore.mapLabels()" that matches, according to the code and Oracle documentation, all private key handles (CKO_PRIVATE_KEY) coming from that slot with certificate handles (CKO_CERTIFICATE), looking for matching CKA_ID values, in order to build an in-memory software keystore whose alias map contains the CKA_LABEL attributes. (The private key is unextractable, so access is read-only: https://docs.oracle.com/javase/8/docs/technotes/guides/security/p11guide.html#KeyStoreRestrictions)
During signature initialization this private key entry is used to fetch the private key handle from the HSM (by some key attributes that I don't have).
The problem is that my nCipher HSM doesn't expose any CKO_CERTIFICATE object, so the match returns zero results and my software keystore is empty.
When I try to extract the private key handle from the keystore I obtain nothing and I cannot initialize the Signature object.
My predecessor manually wrapped the private key attributes inside a local JKS, and wrote a new provider in order to load the certificate from a file and not from the HSM/PKCS11.
I dislike this solution; I don't want my application to have configuration depending on the HSM certificate. It's the HSM's job to handle those keys, not mine.
Instead, I wrote another provider to fetch and directly use the private key handle (CKO_PRIVATE_KEY) via a preconfigured CKA_LABEL, bypassing the certificate match. And it works.
However, I dislike this solution too; it means more maintenance cost for a standard protocol, and the jar must be signed each time, which for me is a nuisance.
I have the feeling that I am approaching the problem from the wrong side, maybe because I'm a noob in the matter.
The explanation is over, so here are my questions:
1- Am I wrong to claim that CKO_CERTIFICATE is a prerequisite for SunPKCS11?
2- Could/should the HSM expose a CKO_CERTIFICATE object without harmful side effects?
3- Is this missing object a limitation of the nCipher HSM or, more probably, a configuration missed during installation? (It works even without it, so it's a Java prerequisite more than something the HSM is missing.)
4- If the CKO_CERTIFICATE cannot be installed and exposed: is it OK to implement our own provider as a workaround, or could there be a better way to get it working?
Sorry for my English; I'm not a native speaker. Thanks to those who have read this far and who will answer.
...ANSWER
Answered 2021-Dec-01 at 21:50 ...almost a month later...
I've finished my application, and now I know a lot more about the subject. It works with the following modes:
- Standard SunPKCS11 against a Docker SoftHSM2 image. The HSM contains CKO_CERTIFICATE, PUBLIC_KEY and PRIVATE_KEY objects, on the same slot, with the same CKA_ID. Everything works fine and flawlessly.
- Custom PKCS11 extension: I had to copy/paste almost every class from the Java security package (because it is Java 11~17 with sun.* packages), just to alter a couple of lines in the certificate-retrieving logic, dropping the CKO_CERTIFICATE request and loading the certificate from a file (crt/p12).
- A P12, containing all the information, used as a mocked version for local use only.
I've tried to extend the Bouncy Castle FIPS provider, instead of SunPKCS11, without any luck.
In the end I think it is not possible to accomplish what I need; the problem is in the server configuration, which is not solvable from client software. Anyway, I'll fix the server configuration, adopting the first working case and dropping the custom PKCS11 solution, keeping it just for academic purposes.
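For reference, a minimal sketch of the standard flow from the first working mode above (config path, PIN and alias are placeholders): the alias only appears in the PKCS11 KeyStore when the token exposes a CKO_CERTIFICATE object whose CKA_ID matches the private key, which is exactly what the nCipher setup was missing.

```java
import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.Provider;
import java.security.Security;
import java.security.Signature;
import java.security.cert.X509Certificate;

public class HsmSignSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder config file naming the PKCS#11 module of the HSM.
        Provider p = Security.getProvider("SunPKCS11").configure("/etc/hsm/pkcs11.cfg");
        Security.addProvider(p);

        KeyStore ks = KeyStore.getInstance("PKCS11", p);
        ks.load(null, "token-pin".toCharArray());   // placeholder token PIN

        // The alias is the key's CKA_LABEL; it is only present if SunPKCS11
        // could pair the private key object with a certificate via CKA_ID.
        String alias = "my-signing-key";            // placeholder alias
        PrivateKey key = (PrivateKey) ks.getKey(alias, null);
        X509Certificate cert = (X509Certificate) ks.getCertificate(alias);

        Signature sig = Signature.getInstance("SHA512withRSA", p);
        sig.initSign(key);                          // the key never leaves the HSM
        sig.update("data to sign".getBytes());
        byte[] signature = sig.sign();
        System.out.println("Signed " + signature.length + " bytes as "
                + cert.getSubjectX500Principal());
    }
}
```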
QUESTION
We are currently getting our heads around GCP Cloud KMS and how to cater for disaster recovery. This is our current test setup:
Java using Spring Boot + Google Tink using KmsEnvelopeAead + AesGcmJce (i.e. a DEK generated by Tink that is encrypted via KMS (the KEK) and stored alongside the ciphertext), symmetric
project "A" (the initial project before disaster recovery)
-> KMS -> keyring "keyringABC" -> key "keyABC" -> imported custom key via import job. I can successfully encrypt/decrypt some text - all fine, all good
...ANSWER
Answered 2021-Nov-29 at 10:39 Yes, it has to be the exact same key with the exact same resource ID, including the project ID. The ciphertext for decryption should be exactly as returned from the encrypt call. So, you need to make sure it matches the project in which you created the KMS key. When you try to decrypt data that was encrypted in project-A with the newly created key from project-B, it fails.
In your use case the ciphertext you're trying to decrypt was encrypted using a different key. You should use the same key for both encryption and decryption; otherwise KMS tells you that it could not find the key, even though the (new) key does exist.
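As an illustration of why the key's resource URI matters, here is a minimal Java sketch using Google Tink's KmsEnvelopeAead with the GCP KMS integration; the key URI is a hypothetical placeholder following the naming above. The DEK wrapped into the ciphertext can only be unwrapped by the exact KEK resource it was created under, so pointing the URI at a re-created key in project-B fails.

```java
import java.nio.charset.StandardCharsets;

import com.google.crypto.tink.Aead;
import com.google.crypto.tink.KmsClients;
import com.google.crypto.tink.aead.AeadConfig;
import com.google.crypto.tink.aead.AeadKeyTemplates;
import com.google.crypto.tink.aead.KmsEnvelopeAead;
import com.google.crypto.tink.integration.gcpkms.GcpKmsClient;

public class EnvelopeRoundTrip {
    public static void main(String[] args) throws Exception {
        AeadConfig.register();

        // Hypothetical KEK resource URI; project, location, keyring and key name
        // must match the key that originally wrapped the DEK.
        String kekUri = "gcp-kms://projects/project-A/locations/europe-west3/"
                + "keyRings/keyringABC/cryptoKeys/keyABC";

        KmsClients.add(new GcpKmsClient().withDefaultCredentials());
        Aead kekAead = KmsClients.get(kekUri).getAead(kekUri);

        // Envelope AEAD: a fresh AES-GCM DEK per message, wrapped by the KEK in KMS
        // and stored inside the ciphertext.
        Aead envelope = new KmsEnvelopeAead(AeadKeyTemplates.AES256_GCM, kekAead);

        byte[] ciphertext = envelope.encrypt(
                "some text".getBytes(StandardCharsets.UTF_8), new byte[0]);

        // Decryption asks KMS (via the same kekUri) to unwrap the embedded DEK;
        // a same-named key in another project cannot unwrap it.
        byte[] plaintext = envelope.decrypt(ciphertext, new byte[0]);
        System.out.println(new String(plaintext, StandardCharsets.UTF_8));
    }
}
```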
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install hsm
Support