KFP | KungFury inspired game coded in React Native | Frontend Framework library
kandi X-RAY | KFP Summary
Punch of KungFury by Peter Machowski. Hi! Please, I need your help to win the React Conf contest organised by ExponentJS. It would be great if you could take literally 59 seconds and vote for my game, KungFuryPunch.
Top functions reviewed by kandi - BETA
- Remove any empty run.
KFP Key Features
KFP Examples and Code Snippets
Community Discussions
Trending Discussions on KFP
QUESTION
I have been following this video: https://www.youtube.com/watch?v=1ykDWsnL2LE&t=310s
Code located at: https://codelabs.developers.google.com/vertex-pipelines-intro#5 (I have done the last two steps as per the video, which isn't an issue for google_cloud_pipeline_components version 0.1.1.)
I have created a pipeline in Vertex AI which ran, and I used the following code to create the pipeline (from the video, not the code extract in the link above):
...ANSWER
Answered 2022-Mar-04 at 09:45
As @scottlucas confirmed, this question was solved by upgrading to the latest version of google-cloud-aiplatform, which can be done with pip install --upgrade google-cloud-aiplatform.
Upgrading to the latest library ensures that the official documentation used as a reference is aligned with the actual product.
Posting the answer as community wiki for the benefit of the community that might encounter this use case in the future.
Feel free to edit this answer for additional information.
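As a quick sanity check after upgrading, you can print the installed version from Python; a minimal sketch (the upgrade itself is just the pip command above):

```python
# Verify which google-cloud-aiplatform version is installed after upgrading.
import google.cloud.aiplatform as aiplatform

print(aiplatform.__version__)
```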
QUESTION
I am using GCP Vertex AI Pipelines (KFP) with google-cloud-aiplatform==1.10.0, kfp==1.8.11, and google-cloud-pipeline-components==0.2.6.
In a component I am getting a gcp_resources output (documentation):
ANSWER
Answered 2022-Feb-14 at 21:24
In this case that is the best way to extract the information. However, I recommend using the yarl library to parse complex URIs.
You can see this example:
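Since the original snippet isn't reproduced here, the following is a minimal sketch of the idea; the resource URI is a made-up illustration, not output from an actual component:

```python
# Parse a gcp_resources-style resource URI into its components with yarl.
from yarl import URL

uri = URL(
    "https://us-central1-aiplatform.googleapis.com/v1/"
    "projects/my-project/locations/us-central1/batchPredictionJobs/1234567890"
)

print(uri.host)       # us-central1-aiplatform.googleapis.com
print(uri.path)       # /v1/projects/my-project/locations/.../1234567890
print(uri.parts[-1])  # last path segment, here the job id: 1234567890
```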
QUESTION
I am currently trying to deploy a Vertex pipeline to achieve the following:
Train a custom model (from a custom training Python package) and dump the model artifacts (the trained model and the data preprocessor that will be used at prediction time). This step is working fine, as I can see new resources being created in the storage bucket.
Create a model resource via ModelUploadOp. This step fails for some reason when specifying serving_container_environment_variables and serving_container_ports, with the error message in the errors section below. This is somewhat surprising, as they are both needed by the prediction container, and the environment variables are passed as a dict as specified in the documentation.
This step works just fine using gcloud commands:
ANSWER
Answered 2022-Feb-04 at 09:10
After some time researching the problem I stumbled upon this GitHub issue. The problem originated from a mismatch between the google_cloud_pipeline_components and kubernetes_api docs. In this case, serving_container_environment_variables is typed as Optional[dict[str, str]] whereas it should have been typed as Optional[list[dict[str, str]]]. A similar mismatch can be found for the serving_container_ports argument as well. Passing the arguments following the Kubernetes documentation did the trick:
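For reference, a minimal sketch of the corrected call (the project, names, and URIs are placeholders of my own; the call belongs inside a @dsl.pipeline function):

```python
# Pass both arguments as lists of dicts, per the Kubernetes API docs.
from google_cloud_pipeline_components.aiplatform import ModelUploadOp

model_upload_op = ModelUploadOp(
    project="my-project",
    display_name="my-model",
    serving_container_image_uri="gcr.io/my-project/serving:latest",
    artifact_uri="gs://my-bucket/model-artifacts",
    # list[dict[str, str]], not a flat dict:
    serving_container_environment_variables=[
        {"name": "PREPROCESSOR_PATH", "value": "gs://my-bucket/preprocessor.pkl"},
    ],
    serving_container_ports=[
        {"containerPort": 8080},
    ],
)
```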
QUESTION
I am wondering how I could create a simple static HTML visualization for Kubeflow Pipelines using inline storage.
My use case is I'd like to pass a string with raw html containing a simple iframe.
The sample from the doc does not work for me (kfp sdk v1).
Here is the doc I followed: https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/#web-app
Thanks
...ANSWER
Answered 2022-Feb-04 at 00:04
UPDATE:
I tested the Output[HTML] from kfp SDK v2 and it works, but I came across other issues.
First off, the Kubeflow HTML viewer creates an iframe with a blank src and srcdoc="your static html". This makes it impossible to use an iframe in your HTML, as you would end up with nested iframes (the parent from the HTML viewer and the nested one from your actual HTML).
Solution:
I found a solution that works on KFP SDK v1 and v2 for all use cases: I used the markdown visualization instead of the HTML visualization. Since markdown supports inline HTML, I was able to paste my HTML directly into the markdown output. Unlike the HTML visualization, this approach supports iframes.
Here is some code to illustrate the solution:
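Below is a minimal sketch of the pattern (KFP SDK v1 lightweight-component style, following the output-viewer doc linked above; the iframe URL is illustrative):

```python
# Emit an inline markdown visualization whose source is raw HTML; since
# markdown supports inline HTML, the viewer renders the iframe as well.
from typing import NamedTuple
from kfp.components import create_component_from_func

def render_html_via_markdown() -> NamedTuple(
    "Outputs", [("mlpipeline_ui_metadata", "UI_metadata")]
):
    import json
    from collections import namedtuple

    html = '<iframe src="https://example.com" width="600" height="400"></iframe>'
    metadata = {
        "outputs": [
            {"type": "markdown", "storage": "inline", "source": html}
        ]
    }
    outputs = namedtuple("Outputs", ["mlpipeline_ui_metadata"])
    return outputs(json.dumps(metadata))

render_op = create_component_from_func(render_html_via_markdown)
```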
QUESTION
I'm struggling to correctly set up a Vertex AI pipeline which does the following:
- read data from an API, store it to GCS, and use it as input for batch prediction
- get an existing model (video classification on Vertex AI)
- create a batch prediction job with the input from point 1
As will be seen, I don't have much experience with Vertex Pipelines/Kubeflow, so I'm asking for help/advice; I hope it's just some beginner mistake. This is the gist of the code I'm using as the pipeline:
ANSWER
Answered 2021-Dec-21 at 14:35
I'm glad you solved most of your main issues and found a workaround for the model declaration.
As for your input.output observation on gcs_source_uris, the reason behind it is the way the function/class returns the value. If you dig inside the classes/methods of google_cloud_pipeline_components you will find that it implements a structure that allows you to use .outputs on the value returned by the called function.
If you go to the implementation of one of the components of the pipeline, you will find that it returns an output array from the convert_method_to_component function. So, in order to have that implemented in your custom class/function, your function should return a value that can be accessed as an attribute. Below is a basic implementation of it.
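A minimal sketch of that idea (KFP v2-style components; the names are illustrative, not the asker's code):

```python
# Calling a component inside a pipeline returns a task object; its produced
# values are exposed as attributes (.output / .outputs) rather than returned
# directly, which is why input.output appears in pipeline code.
from kfp.v2.dsl import component, pipeline

@component
def make_uri() -> str:
    # pretend this component produced a GCS URI
    return "gs://my-bucket/data.jsonl"

@component
def consume_uri(uri: str):
    print(uri)

@pipeline(name="outputs-demo")
def demo_pipeline():
    make_task = make_uri()             # a task object, not a plain string
    consume_uri(uri=make_task.output)  # single output as an attribute
    # equivalently: make_task.outputs["Output"]
```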
QUESTION
I am using Kubeflow Pipelines (KFP) with GCP Vertex AI Pipelines, with kfp==1.8.5 (kfp SDK) and google-cloud-pipeline-components==0.1.7. I am not sure how to find which version of Kubeflow is used on GCP.
I am building a component (YAML) using Python, inspired by this GitHub issue. I am defining an output like:
...ANSWER
Answered 2021-Nov-18 at 19:26
I didn't realize at first that ConcatPlaceholder accepts both artifacts and strings. This is exactly what I wanted to achieve:
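For illustration, here is a minimal sketch of the YAML-level concat placeholder that ConcatPlaceholder corresponds to, loaded through the v1 SDK (the component contents are my own example, not the asker's spec):

```python
# A component spec whose command concatenates an input value with a literal
# string into a single argument via the concat placeholder.
from kfp import components

component_text = """
name: Concat demo
inputs:
- {name: bucket, type: String}
implementation:
  container:
    image: alpine
    command:
    - echo
    - concat: [{inputValue: bucket}, '/model']
"""

concat_demo_op = components.load_component_from_text(component_text)
```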
QUESTION
I have an existing TFX pipeline here that I want to rewrite using the KubeFlow Pipelines SDK.
The existing pipeline is using many TFX Standard Components, such as ExampleValidator. When checking the KubeFlow SDK, I see a kfp.components package but no prebuilt components like those TFX provides.
Does the KubeFlow SDK have an equivalent to the TFX Standard Components?
...ANSWER
Answered 2021-Nov-09 at 06:22
You don't have to rewrite the components; there is no mapping of TFX components in KFP, as they are not competing tools.
With TFX you create the components, and then you use an orchestrator to run them. Kubeflow Pipelines is one of those orchestrators.
The tfx.orchestration.pipeline module will wrap your TFX components and create your pipeline.
There are two schedulers behind Kubeflow Pipelines: Argo (used by GCP) and Tekton (used by OpenShift). There are examples of TFX with Kubeflow Pipelines using Tekton, and of TFX with Kubeflow Pipelines using Argo, in the respective repositories.
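To make the relationship concrete, here is a minimal sketch (component wiring and paths are illustrative) of TFX components, including the ExampleValidator from the question, wrapped in a TFX pipeline and handed to the Kubeflow Pipelines orchestrator:

```python
# Build a TFX pipeline from standard components and compile it for Kubeflow
# Pipelines; KFP only orchestrates, the components stay TFX.
from tfx import v1 as tfx

def create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str):
    example_gen = tfx.components.CsvExampleGen(input_base=data_root)
    statistics_gen = tfx.components.StatisticsGen(
        examples=example_gen.outputs["examples"])
    schema_gen = tfx.components.SchemaGen(
        statistics=statistics_gen.outputs["statistics"])
    example_validator = tfx.components.ExampleValidator(
        statistics=statistics_gen.outputs["statistics"],
        schema=schema_gen.outputs["schema"])
    return tfx.dsl.Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        components=[example_gen, statistics_gen, schema_gen, example_validator],
    )

# KubeflowDagRunner compiles the TFX pipeline into a KFP (Argo) package.
runner = tfx.orchestration.experimental.KubeflowDagRunner()
runner.run(create_pipeline("demo", "gs://my-bucket/root", "gs://my-bucket/data"))
```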
QUESTION
I'm here because I'm facing a problem with scheduled jobs in Google Cloud. In Vertex AI Workbench, I created a Python 3 notebook that creates a pipeline that trains AutoML with data from the public credit card dataset. If I run the job right after creating it, everything works. However, if I schedule the job run with Cloud Scheduler as described here, the pipeline is enabled but the run fails.
Here is the code that I have:
...ANSWER
Answered 2021-Nov-09 at 07:41
From the error you shared, it appears the Cloud Function failed to create the job.
QUESTION
I'm using the following lines of code to specify the desired machine type and accelerator/GPU on a Kubeflow Pipeline (KFP) that I will be running in a serverless manner through Vertex AI Pipelines.
...ANSWER
Answered 2021-Sep-20 at 02:13
Currently, GCP doesn't support the A2 machine type for normal KFP components. A potential workaround right now is to use the GCP custom job component, with which you can explicitly specify the machine type.
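A minimal sketch of that workaround (assuming the experimental custom-job utilities shipped in later google-cloud-pipeline-components releases; names, machine specs, and project are illustrative):

```python
# Wrap a component in a Vertex AI custom job so the A2 machine type and its
# A100 GPU can be requested explicitly.
from kfp.v2 import dsl
from google_cloud_pipeline_components.experimental.custom_job import utils

@dsl.component
def train():
    print("training step that needs an A2 machine")

custom_train_op = utils.create_custom_training_job_op_from_component(
    train,
    machine_type="a2-highgpu-1g",
    accelerator_type="NVIDIA_TESLA_A100",
    accelerator_count=1,
)

@dsl.pipeline(name="custom-job-demo")
def pipeline():
    # the wrapped op gains project/location parameters for the custom job
    custom_train_op(project="my-project", location="us-central1")
```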
QUESTION
I am trying to run a custom package training pipeline using Kubeflow pipelines on Vertex AI. I have the training code packaged in Google Cloud Storage and my pipeline is:
...ANSWER
Answered 2021-Jun-28 at 14:17
My original CustomPythonPackageTrainingJobRunOp wasn't defining worker_pool_spec, which was the reason for the error. After I specified replica_count and machine_type, the error was resolved. The final training op is:
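For reference, a minimal sketch of the corrected op (paths, image, and names are placeholders of my own; the call sits inside the @dsl.pipeline function):

```python
# Setting replica_count and machine_type gives the job a worker_pool_spec.
from google_cloud_pipeline_components.aiplatform import (
    CustomPythonPackageTrainingJobRunOp,
)

training_op = CustomPythonPackageTrainingJobRunOp(
    project="my-project",
    location="us-central1",
    display_name="custom-package-training",
    python_package_gcs_uri="gs://my-bucket/trainer-0.1.tar.gz",
    python_module_name="trainer.task",
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
    replica_count=1,
    machine_type="n1-standard-4",
)
```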
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install KFP