machinelearning | open source and cross-platform machine learning framework | Machine Learning library

by dotnet | C# | Version: v2.0.1-Preview | License: MIT

kandi X-RAY | machinelearning Summary

machinelearning is a C# library typically used in Artificial Intelligence, Machine Learning, Deep Learning, and PyTorch applications. machinelearning has no bugs, it has no vulnerabilities, it has a Permissive License, and it has medium support. You can download it from GitHub.
ML.NET is an open source and cross-platform machine learning framework for .NET.

Support

machinelearning has a moderately active ecosystem.
It has 8,304 stars, 1,802 forks, and 594 watchers.
There was 1 major release in the last 6 months.
There are 740 open issues and 2,745 closed issues. On average, issues are closed in 130 days. There are 15 open pull requests and 0 closed requests.
It has a neutral sentiment in the developer community.
The latest version of machinelearning is v2.0.1-Preview.

Quality

machinelearning has 0 bugs and 0 code smells.

Security

machinelearning has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
machinelearning code analysis shows 0 unresolved vulnerabilities.
There are 0 security hotspots that need review.

License

machinelearning is licensed under the MIT License. This license is permissive.
Permissive licenses have the fewest restrictions, and you can use them in most projects.

Reuse

machinelearning releases are available to install and integrate.
Installation instructions, examples, and code snippets are available.
machinelearning saves you 361 person-hours of effort in developing the same functionality from scratch.
It has 862 lines of code, 2 functions, and 1,546 files.
It has low code complexity. Code complexity directly impacts the maintainability of the code.

                                                                                  machinelearning Key Features

                                                                                  ML.NET is an open source and cross-platform machine learning framework for .NET.

                                                                                  machinelearning Examples and Code Snippets

                                                                                  No Code Snippets are available at this moment for machinelearning.
                                                                                  Community Discussions

                                                                                  Trending Discussions on machinelearning

ML.NET KMeans clustering - What is the Davies Bouldin Index?
Unpickle instance from Jupyter Notebook in Flask App
venv - pip not found
Why is the embarkation_point_2 field added when one_hot_encoder is applied to training data?
How to seldon-core quick-start on kind with port-forward?
Accord.NET throws "Index was outside the bounds of the array"
Seldon Core Loading sklearn/irir failed
"bash: export: ... not a valid identifier" in CLI of git-bash shell "MINGW64" on Windows 10
How to import Kaggle datasets to PyCharm IDE
Possible to select the DNN AI core for model evaluation on HoloLens 2?

                                                                                  QUESTION

ML.NET KMeans clustering - What is the Davies Bouldin Index?
                                                                                  Asked 2022-Mar-15 at 22:34

I am using the KMeans clustering algorithm from ML.NET, and when evaluating the model I see the Davies Bouldin Index in the model metrics.

What is the range of this index? What does a value of zero mean?

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-15 at 22:34

According to the documentation, the Davies Bouldin Index is:

"The average ratio of within-cluster distances to between-cluster distances. The tighter the cluster, and the further apart the clusters are, the lower this value is." Also:

"Values closer to 0 are better. Clusters that are farther apart and less dispersed will result in a better score." You can find more information on the Davies Bouldin Index in the ML.NET documentation.

                                                                                  Source https://stackoverflow.com/questions/71424036
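For intuition about the range: the index is bounded below by 0, has no fixed upper bound, and lower is better. The snippet below is a small illustration, not ML.NET code; it uses scikit-learn's equivalent davies_bouldin_score as a stand-in to show that compact, well-separated clusters score close to 0 while overlapping clusters score higher.

# Illustration with scikit-learn's equivalent metric (davies_bouldin_score),
# not the ML.NET ClusteringMetrics API from the question.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

# Compact, well-separated clusters -> index close to 0 (good)
X_tight, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.3, random_state=0)
labels_tight = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_tight)

# Heavily overlapping clusters -> noticeably larger index (worse)
X_loose, _ = make_blobs(n_samples=300, centers=3, cluster_std=5.0, random_state=0)
labels_loose = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_loose)

print(davies_bouldin_score(X_tight, labels_tight))   # small value, close to 0
print(davies_bouldin_score(X_loose, labels_loose))   # noticeably larger value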

                                                                                  QUESTION

                                                                                  Unpickle instance from Jupyter Notebook in Flask App
                                                                                  Asked 2022-Feb-28 at 18:03

I have created a class for word2vec vectorisation, which works fine. But when I create a model pickle file and use that pickle file in a Flask app, I get an error like:

                                                                                  AttributeError: module '__main__' has no attribute 'GensimWord2VecVectorizer'

                                                                                  I am creating the model on Google Colab.

                                                                                  Code in Jupyter Notebook:

                                                                                  # Word2Vec Model
                                                                                  import numpy as np
                                                                                  from sklearn.base import BaseEstimator, TransformerMixin
                                                                                  from gensim.models import Word2Vec
                                                                                  
                                                                                  class GensimWord2VecVectorizer(BaseEstimator, TransformerMixin):
                                                                                  
                                                                                      def __init__(self, size=100, alpha=0.025, window=5, min_count=5, max_vocab_size=None,
                                                                                                   sample=0.001, seed=1, workers=3, min_alpha=0.0001, sg=0, hs=0, negative=5,
                                                                                                   ns_exponent=0.75, cbow_mean=1, hashfxn=hash, iter=5, null_word=0,
                                                                                                   trim_rule=None, sorted_vocab=1, batch_words=10000, compute_loss=False,
                                                                                                   callbacks=(), max_final_vocab=None):
                                                                                          self.size = size
                                                                                          self.alpha = alpha
                                                                                          self.window = window
                                                                                          self.min_count = min_count
                                                                                          self.max_vocab_size = max_vocab_size
                                                                                          self.sample = sample
                                                                                          self.seed = seed
                                                                                          self.workers = workers
                                                                                          self.min_alpha = min_alpha
                                                                                          self.sg = sg
                                                                                          self.hs = hs
                                                                                          self.negative = negative
                                                                                          self.ns_exponent = ns_exponent
                                                                                          self.cbow_mean = cbow_mean
                                                                                          self.hashfxn = hashfxn
                                                                                          self.iter = iter
                                                                                          self.null_word = null_word
                                                                                          self.trim_rule = trim_rule
                                                                                          self.sorted_vocab = sorted_vocab
                                                                                          self.batch_words = batch_words
                                                                                          self.compute_loss = compute_loss
                                                                                          self.callbacks = callbacks
                                                                                          self.max_final_vocab = max_final_vocab
                                                                                  
                                                                                      def fit(self, X, y=None):
                                                                                          self.model_ = Word2Vec(
                                                                                              sentences=X, corpus_file=None,
                                                                                              size=self.size, alpha=self.alpha, window=self.window, min_count=self.min_count,
                                                                                              max_vocab_size=self.max_vocab_size, sample=self.sample, seed=self.seed,
                                                                                              workers=self.workers, min_alpha=self.min_alpha, sg=self.sg, hs=self.hs,
                                                                                              negative=self.negative, ns_exponent=self.ns_exponent, cbow_mean=self.cbow_mean,
                                                                                              hashfxn=self.hashfxn, iter=self.iter, null_word=self.null_word,
                                                                                              trim_rule=self.trim_rule, sorted_vocab=self.sorted_vocab, batch_words=self.batch_words,
                                                                                              compute_loss=self.compute_loss, callbacks=self.callbacks,
                                                                                              max_final_vocab=self.max_final_vocab)
                                                                                          return self
                                                                                  
                                                                                      def transform(self, X):
                                                                                          X_embeddings = np.array([self._get_embedding(words) for words in X])
                                                                                          return X_embeddings
                                                                                  
                                                                                      def _get_embedding(self, words):
                                                                                          valid_words = [word for word in words if word in self.model_.wv.vocab]
                                                                                          if valid_words:
                                                                                              embedding = np.zeros((len(valid_words), self.size), dtype=np.float32)
                                                                                              for idx, word in enumerate(valid_words):
                                                                                                  embedding[idx] = self.model_.wv[word]
                                                                                  
                                                                                              return np.mean(embedding, axis=0)
                                                                                          else:
                                                                                              return np.zeros(self.size)
                                                                                  
                                                                                  # column transformer
                                                                                  from sklearn.compose import ColumnTransformer
                                                                                  
                                                                                  ct = ColumnTransformer([
                                                                                      ('step1', GensimWord2VecVectorizer(), 'STATUS')
                                                                                  ], remainder='drop')
                                                                                  
                                                                                  # Create Model
                                                                                  from sklearn.svm import SVC
                                                                                  from sklearn.pipeline import Pipeline
                                                                                  from sklearn.model_selection import GridSearchCV
                                                                                  import pickle
                                                                                  import numpy as np
                                                                                  import dill
                                                                                  import torch
                                                                                  # ##########
                                                                                  # SVC - support vector classifier
                                                                                  # ##########
                                                                                  # defining parameter range
                                                                                  hyperparameters = {'C': [0.1, 1],
                                                                                                     'gamma': [1, 0.1],
                                                                                                     'kernel': ['rbf'],
                                                                                                     'probability': [True]}
                                                                                  model_sv = Pipeline([
                                                                                      ('column_transformers', ct),
                                                                                      ('model', GridSearchCV(SVC(), hyperparameters,
                                                                                                             refit=True, verbose=3)),
                                                                                  ])
                                                                                  model_sv_cEXT = model_sv.fit(X_train, y_train['cEXT'])
                                                                                  # Save the trained cEXT - SVM Model.
                                                                                  import joblib
                                                                                  joblib.dump(model_sv_cEXT, 'model_Word2Vec_sv_cEXT.pkl')
                                                                                  

                                                                                  Code in Flask App:

                                                                                  # Word2Vec
                                                                                  model_EXT_WV_SV = joblib.load('utility/model/MachineLearning/SVM/model_Word2Vec_sv_cEXT.pkl')
                                                                                  

I tried to copy the same class into my Flask file, but that did not work either.

# (the same imports and GensimWord2VecVectorizer class definition as above, copied verbatim)

                                                                                  # Word2Vec
                                                                                  model_EXT_WV_SV = joblib.load('utility/model/MachineLearning/SVM/model_Word2Vec_sv_cEXT.pkl')
                                                                                  

                                                                                  ANSWER

                                                                                  Answered 2022-Feb-24 at 11:48

Import GensimWord2VecVectorizer in your Flask web app's Python file. Pickle stores only a reference to the class (its module and name), so that exact name must resolve to the class definition at the moment the file is unpickled.

                                                                                  Source https://stackoverflow.com/questions/71231611
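A minimal sketch of what the answer means, under the assumption that the class is moved into its own importable module (the module name vectorizers.py is hypothetical, not from the question): pickle records the class by module path, so both the training notebook and the Flask app must be able to import the class from that same path. The model also needs to be re-trained and re-saved after the move, because the existing pickle still refers to __main__.

# app.py (Flask side) -- hypothetical sketch; 'vectorizers' is an assumed module
# containing the GensimWord2VecVectorizer class from the question. The notebook
# should import the class from that same module before training and pickling,
# so the pickle records vectorizers.GensimWord2VecVectorizer.
import joblib
from flask import Flask
from vectorizers import GensimWord2VecVectorizer  # noqa: F401 -- lets unpickling resolve the class

app = Flask(__name__)

# Word2Vec
model_EXT_WV_SV = joblib.load('utility/model/MachineLearning/SVM/model_Word2Vec_sv_cEXT.pkl')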

                                                                                  QUESTION

                                                                                  venv - pip not found
                                                                                  Asked 2021-Dec-14 at 15:04
                                                                                  • I am using Windows 10 & Python 3.9

• I am new to Python. After creating the venv, I am facing the following errors:

• My venv can find the pip path when I use the command where pip; it is on the F drive:

                                                                                    (venv) F:\Documents\venv\MachineLearning\venv> where pip
                                                                                  
                                                                                       F:\Documents\venv\MachineLearning\venv\Scripts\pip.exe
                                                                                  
                                                                                       C:\Users\MALARMANNAN R\AppData\Local\Programs\Python\Python39\Scripts\pip.exe
                                                                                       F:\Additional softwares\python\Python\Scripts\pip.exe
                                                                                  
• But when I use pip list or pip install, I get the error ModuleNotFoundError: No module named 'pip':

                                                                                •   (venv) F:\Documents\venv\MachineLearning\venv>pip list
                                                                                  
                                                                                    Traceback (most recent call last):
                                                                                        File "C:\Users\MALARMANNAN R\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main 
                                                                                             return _run_code(code, main_globals, None,
                                                                                        File "C:\Users\MALARMANNAN R\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code 
                                                                                             exec(code, run_globals)
                                                                                        File "F:\Documents\venv\MachineLearning\venv\Scripts\pip.exe\__main__.py", line 4, in 
                                                                                  
                                                                                   ModuleNotFoundError: No module named 'pip'        
                                                                                  

                                                                                  ANSWER

                                                                                  Answered 2021-Dec-14 at 15:04
                                                                                  • My problem is resolved.

• Instead of using pip install, use python -m pip install within the venv to avoid the following error:

    ModuleNotFoundError: No module named 'pip'
                                                                                  

                                                                                  Source https://stackoverflow.com/questions/70321475

                                                                                  QUESTION

Why is the embarkation_point_2 field added when one_hot_encoder is applied to training data?
                                                                                  Asked 2021-Dec-03 at 14:44

Following the Vertica example at https://www.vertica.com/docs/11.0.x/HTML/Content/Authoring/AnalyzingData/MachineLearning/DataPreparation/EncodingCategoricalColumns.htm?tocpath=Analyzing%20Data%7CMachine%20Learning%20for%20Predictive%20Analytics%7CData%20Preparation%7C_____3, which uses the Titanic data from Kaggle:

The ONE_HOT_ENCODER_FIT function converts categorical data and creates a model that stores the new representation of the categorical columns:

                                                                                  SELECT one_hot_encoder_fit('public.titanic_encoder','titanic_training','sex, embarkation_point'  USING PARAMETERS exclude_columns='', output_view='', extra_levels='{}');
                                                                                  
                                                                                  ==================
                                                                                  varchar_categories
                                                                                  ==================
                                                                                    category_name  |category_level|category_level_index
                                                                                  -----------------+--------------+--------------------
                                                                                  embarkation_point|      C       |         0
                                                                                  embarkation_point|      Q       |         1
                                                                                  embarkation_point|      S       |         2 <- note S is 2
                                                                                  embarkation_point|              |         3
                                                                                         sex       |    female    |         0
                                                                                         sex       |     male     |         1 <-- note male is 1
                                                                                  

Then, when applying the model titanic_encoder to the titanic_training data as below, why does embarkation_point_2 get added? Shouldn't the output contain only the categorical value (say S) and its encoded value? Why do I see the values 0 and 1 and not 2 (which is the encoded value for S), similar to sex male and sex_1 = 1?

                                                                                  dbadmin@2e4e746b3e6c(*)=> select * from titanic_training limit 1;
                                                                                   passenger_id | survived | pclass |          name           | sex  | age | sibling_and_spouse_count | parent_and_child_count |  ticket   | fare | cabin | embarkation_point
                                                                                  --------------+----------+--------+-------------------------+------+-----+--------------------------+------------------------+-----------+------+-------+-------------------
                                                                                              1 |        0 |      3 | Braund, Mr. Owen Harris | male |  22 |                        1 |                      0 | A/5 21171 | 7.25 |       | S <-- note S
                                                                                  (1 row)
                                                                                  
                                                                                  
                                                                                  
                                                                                  dbadmin@2e4e746b3e6c(*)=> SELECT APPLY_ONE_HOT_ENCODER(* USING PARAMETERS model_name='titanic_encoder') from titanic_training limit 1;
(returned row, shown vertically for readability)
 passenger_id:             1
 survived:                 0
 pclass:                   3
 name:                     Braund, Mr. Owen Harris
 sex:                      male            <-- note male
 sex_1:                    1               <-- note encoded value of male
 age:                      22
 sibling_and_spouse_count: 1
 parent_and_child_count:   0
 ticket:                   A/5 21171
 fare:                     7.25
 cabin:
 embarkation_point:        S               <-- note S
 embarkation_point_1:      0               <-- why is this here?
 embarkation_point_2:      1               <-- why is this here? Where is 2?
(1 row)
                                                                                  

Why is there no embarkation_point_3?

                                                                                  ANSWER

                                                                                  Answered 2021-Dec-03 at 14:44

There are several reasons for your output. First, read the documentation for APPLY_ONE_HOT_ENCODER: https://www.vertica.com/docs/11.0.x/HTML/Content/Authoring/SQLReferenceManual/Functions/MachineLearning/APPLY_ONE_HOT_ENCODER.htm?tocpath=SQL%20Reference%20Manual%7CSQL%20Functions%7CMachine%20Learning%20Functions%7CTransformation%20Functions%7C_____5

                                                                                  Two parameters allow you to achieve your goals:

• drop_first: set it to false to get all the columns. By default one level per category is dropped, because keeping every level produces correlated (redundant) columns. You can read this article on the pros and cons: https://inmachineswetrust.com/posts/drop-first-columns/
• column_naming: set it to 'values', but be careful: if your categories contain special characters, you might run into difficulties.

                                                                                  Badr

                                                                                  Source https://stackoverflow.com/questions/70206239
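For intuition outside Vertica, here is a small sketch using scikit-learn's OneHotEncoder as a stand-in (an assumption for illustration, not the Vertica API): with drop='first', a category with k levels becomes k-1 indicator columns, the dropped first level is encoded as all zeros, and each remaining level sets exactly one indicator to 1 rather than appearing as its integer index, which is why S shows up as the pair of values 0 and 1 instead of a single value 2.

# Illustration with scikit-learn's OneHotEncoder; drop='first' behaves
# analogously to Vertica's drop_first: k levels -> k-1 indicator columns.
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "sex": ["male", "female", "male"],
    "embarkation_point": ["S", "C", "Q"],
})

enc = OneHotEncoder(drop="first")
encoded = enc.fit_transform(df).toarray()

print(enc.get_feature_names_out())
# ['sex_male' 'embarkation_point_Q' 'embarkation_point_S']
print(encoded)
# male,   S -> [1. 0. 1.]  (S sets the second remaining indicator to 1)
# female, C -> [0. 0. 0.]  (the dropped first levels are all zeros)
# male,   Q -> [1. 1. 0.]

Setting drop_first=false in Vertica (or drop=None in scikit-learn) keeps one indicator column per level, which is the behaviour the answer's first bullet refers to.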

                                                                                  QUESTION

                                                                                  How to seldon-core quick-start on kind with port-forward?
                                                                                  Asked 2021-Oct-13 at 10:50

Following the documentation, I am trying to set up the Seldon Core quick-start: https://docs.seldon.io/projects/seldon-core/en/v1.11.1/workflow/github-readme.html

I don't have a LoadBalancer, so I would like to use port-forwarding to access the service.

I run the following script to set up the system:

                                                                                  #!/bin/bash -ev
                                                                                  kind create cluster --name seldon
                                                                                  
                                                                                  kubectl cluster-info --context kind-seldon
                                                                                  sleep 10
                                                                                  kubectl get pods -A
                                                                                  
                                                                                  istioctl install -y
                                                                                  sleep 10
                                                                                  kubectl get pods -A
                                                                                  
                                                                                  kubectl create namespace seldon-system
                                                                                  kubens seldon-system
                                                                                  helm install seldon-core seldon-core-operator \
                                                                                       --repo https://storage.googleapis.com/seldon-charts \
                                                                                       --set usageMetrics.enabled=true \
                                                                                       --namespace seldon-system \
                                                                                       --set istio.enabled=true
                                                                                  sleep 100
                                                                                  kubectl get validatingwebhookconfigurations
                                                                                  kubectl create namespace modelns
                                                                                  kubens modelns
                                                                                  kubectl apply -f - << END           
                                                                                  apiVersion: machinelearning.seldon.io/v1
                                                                                  kind: SeldonDeployment
                                                                                  metadata:
                                                                                    name: iris-model
                                                                                    namespace: modelns
                                                                                  spec:
                                                                                    name: iris
                                                                                    predictors:
                                                                                    - graph:
                                                                                        implementation: SKLEARN_SERVER
                                                                                        modelUri: gs://seldon-models/v1.12.0-dev/sklearn/iris
                                                                                        name: classifier
                                                                                      name: default
                                                                                      replicas: 1
                                                                                  END
                                                                                  sleep 100
                                                                                  kubectl get pods -A
                                                                                  kubectl get svc -A
                                                                                  INGRESS_GATEWAY_SERVICE=$(kubectl get svc --namespace istio-system --selector="app=istio-ingressgateway" --output jsonpath='{.items[0].metadata.name}')
                                                                                  kubectl port-forward --namespace istio-system svc/${INGRESS_GATEWAY_SERVICE} 8080:80 &
                                                                                  

I guess the port-forwarding argument 8080:80 is probably wrong.

                                                                                  I'm using the following script for testing:

                                                                                  #!/bin/bash -ev
                                                                                  
                                                                                  export INGRESS_HOST=localhost
                                                                                  export INGRESS_PORT=8080
                                                                                  SERVICE_HOSTNAME=$(kubectl get inferenceservice sklearn-iris -n kserve-test -o jsonpath='{.status.url}' | cut -d "/" -f 3)
                                                                                  
                                                                                  curl -X POST http://$INGRESS_HOST:$INGRESS_PORT/seldon/modelns/iris-model/api/v1.0/predictions \
                                                                                      -H 'Content-Type: application/json' \
                                                                                      -d '{ "data": { "ndarray": [1,2,3,4] } }'
                                                                                  

                                                                                  But I got the following error:

                                                                                  Handling connection for 8080
                                                                                  E1012 10:52:32.074812   
                                                                                  52896 portforward.go:400] an error occurred forwarding 8080 -> 8080: 
                                                                                  error forwarding port 8080 to pod b9bd4ff03c6334f4af632044fe54e1c2531e95976a5fe074e30b4258d145508a, 
                                                                                  uid : failed to execute portforward in network namespace "/var/run/netns/cni-2b4d8573-3cfe-c70e-1c36-e0dc53cbd936": failed to connect to localhost:8080 inside namespace "b9bd4ff03c6334f4af632044fe54e1c2531e95976a5fe074e30b4258d145508a",
                                                                                   IPv4: dial tcp4 127.0.0.1:8080: connect: connection refused IPv6 dial tcp6 [::1]:8080: connect: connection refused
                                                                                  

Can somebody please explain how to fix this? What is the right port-forwarding argument?

                                                                                  ANSWER

                                                                                  Answered 2021-Oct-13 at 10:50

If you install with Istio enabled, you also need to install an Istio gateway.

I've tested your flow: it didn't work at first, and then worked after installing the following Istio gateway.

                                                                                  apiVersion: networking.istio.io/v1alpha3
                                                                                  kind: Gateway
                                                                                  metadata:
                                                                                    name: seldon-gateway
                                                                                    namespace: istio-system
                                                                                  spec:
                                                                                    selector:
                                                                                      istio: ingressgateway # use istio default controller
                                                                                    servers:
                                                                                    - port:
                                                                                        number: 80
                                                                                        name: http
                                                                                        protocol: HTTP
                                                                                      hosts:
                                                                                      - "*"
                                                                                  

                                                                                  You can read more about istio configuration on Seldon Core here: https://docs.seldon.io/projects/seldon-core/en/latest/ingress/istio.html

                                                                                  Source https://stackoverflow.com/questions/69538672

                                                                                  QUESTION

Accord.NET throws "Index was outside the bounds of the array"
                                                                                  Asked 2021-Sep-27 at 18:03

I was working with KNN from Accord.NET and hit this error when testing the model.

The error message ("Index was outside the bounds of the array") didn't help at all, because the exception is thrown inside the library itself.

A simple repro with made-up data:

                                                                                          using Accord.MachineLearning;
                                                                                  
                                                                                          double[][] inputs =
                                                                                          {
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 5 ,1},
                                                                                              new double[] { 16, 2 ,0}, new double[] { 4, 15 ,1},
                                                                                          };
                                                                                  
                                                                                          int[] outputs =
                                                                                          {
                                                                                              0, 1, 0, 1, 0, 1,
                                                                                              0, 1, 0, 1, 0, 1,
                                                                                              0, 1, 0, 1, 0, 1,
                                                                                              0, 1, 0, 1, 0, 9
                                                                                          };
                                                                                  
                                                                                          var knn = new KNearestNeighbors(k: 15);
                                                                                          knn.Learn(inputs, outputs);
                                                                                  
                                                                                          //test
                                                                                          var t = new double[] { 16, 2, 0 };
                                                                                          int answer = knn.Decide(t);
                                                                                  

The exception is just the generic "Index was outside the bounds of the array", thrown from inside the library.

I found a way around it and share the solution below:

                                                                                  ANSWER

                                                                                  Answered 2021-Sep-27 at 18:03

After many days, and after reducing it to this simple sample, I found that the output array must contain a contiguous range of class labels (encoded as 0, 1, 2, 3, ...). The stray 9 in the outputs is what triggers the bug here 🙂

                                                                                      int[] outputs =
                                                                                      {
                                                                                          0, 1, 0, 1, 0, 1,
                                                                                          0, 1, 0, 1, 0, 1,
                                                                                          0, 1, 0, 1, 0, 1,
                                                                                          0, 1, 0, 1, 0, 2
                                                                                      };
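If your real labels are not already contiguous, one option is to remap them onto 0..k-1 before training and translate predictions back afterwards. A minimal sketch of that idea using plain LINQ (the variable names and the remapping step are illustrative, not part of Accord.NET):

using System.Linq;

// Map arbitrary class labels (e.g. 0, 1, 9) onto a contiguous range 0..k-1
// so the classifier never indexes past its internal per-class arrays.
int[] labels = { 0, 1, 0, 1, 0, 9 };

int[] classes = labels.Distinct().OrderBy(x => x).ToArray();           // [0, 1, 9]
var toIndex = classes.Select((value, index) => (value, index))
                     .ToDictionary(p => p.value, p => p.index);        // {0:0, 1:1, 9:2}

int[] encoded = labels.Select(l => toIndex[l]).ToArray();              // 0, 1, 0, 1, 0, 2

// After knn.Decide(...) returns an index, classes[index] recovers the original label.

This keeps the original label values available for reporting while satisfying the library's expectation of zero-based, gap-free class indices.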
                                                                                  

                                                                                  Source https://stackoverflow.com/questions/69339578

                                                                                  QUESTION

Seldon Core loading sklearn/iris failed
                                                                                  Asked 2021-Sep-06 at 12:46

I tried to load the iris model using Seldon Core, and unfortunately the following error occurred. The SKLEARN_SERVER fails to load Seldon's sklearn/iris model with this error:

                                                                                  starting microservice
                                                                                  2021-09-02 02:43:19,363 - seldon_core.microservice:main:206 - INFO:  Starting microservice.py:main
                                                                                  2021-09-02 02:43:19,363 - seldon_core.microservice:main:207 - INFO:  Seldon Core version: 1.10.0
                                                                                  2021-09-02 02:43:19,463 - seldon_core.microservice:main:362 - INFO:  Parse JAEGER_EXTRA_TAGS []
                                                                                  2021-09-02 02:43:19,463 - seldon_core.microservice:load_annotations:158 - INFO:  Found annotation kubernetes.io/config.seen:2021-09-02T02:41:35.820784600Z
                                                                                  2021-09-02 02:43:19,463 - seldon_core.microservice:load_annotations:158 - INFO:  Found annotation kubernetes.io/config.source:api
                                                                                  2021-09-02 02:43:19,463 - seldon_core.microservice:load_annotations:158 - INFO:  Found annotation prometheus.io/path:/stats/prometheus
                                                                                  2021-09-02 02:43:19,463 - seldon_core.microservice:load_annotations:158 - INFO:  Found annotation prometheus.io/port:15020
                                                                                  2021-09-02 02:43:19,463 - seldon_core.microservice:load_annotations:158 - INFO:  Found annotation prometheus.io/scrape:true
                                                                                  2021-09-02 02:43:19,464 - seldon_core.microservice:load_annotations:158 - INFO:  Found annotation sidecar.istio.io/status:{\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-data\",\"istio-podinfo\",\"istio-token\",\"istiod-ca-cert\"],\"imagePullSecrets\":null}
                                                                                  2021-09-02 02:43:19,559 - seldon_core.microservice:main:365 - INFO:  Annotations: {'kubernetes.io/config.seen': '2021-09-02T02:41:35.820784600Z', 'kubernetes.io/config.source': 'api', 'prometheus.io/path': '/stats/prometheus', 'prometheus.io/port': '15020', 'prometheus.io/scrape': 'true', 'sidecar.istio.io/status': '{\\"initContainers\\":[\\"istio-init\\"],\\"containers\\":[\\"istio-proxy\\"],\\"volumes\\":[\\"istio-envoy\\",\\"istio-data\\",\\"istio-podinfo\\",\\"istio-token\\",\\"istiod-ca-cert\\"],\\"imagePullSecrets\\":null}'}
                                                                                  2021-09-02 02:43:19,559 - seldon_core.microservice:main:369 - INFO:  Importing SKLearnServer
                                                                                  2021-09-02 02:43:20,562 - SKLearnServer:__init__:21 - INFO:  Model uri: /mnt/models
                                                                                  2021-09-02 02:43:20,563 - SKLearnServer:__init__:22 - INFO:  method: predict_proba
                                                                                  2021-09-02 02:43:20,564 - SKLearnServer:load:26 - INFO:  load
                                                                                  2021-09-02 02:43:20,565 - root:download:31 - INFO:  Copying contents of /mnt/models to local
                                                                                  2021-09-02 02:43:20,659 - SKLearnServer:load:30 - INFO:  model file: /mnt/models/model.joblib
                                                                                  Traceback (most recent call last):
                                                                                    File "/opt/conda/bin/seldon-core-microservice", line 8, in 
                                                                                      sys.exit(main())
                                                                                    File "/opt/conda/lib/python3.7/site-packages/seldon_core/microservice.py", line 379, in main
                                                                                      user_object = user_class(**parameters)
                                                                                    File "/microservice/SKLearnServer.py", line 23, in __init__
                                                                                      self.load()
                                                                                    File "/microservice/SKLearnServer.py", line 31, in load
                                                                                      self._joblib = joblib.load(model_file)
                                                                                    File "/opt/conda/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load
                                                                                      obj = _unpickle(fobj, filename, mmap_mode)
                                                                                    File "/opt/conda/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle
                                                                                      obj = unpickler.load()
                                                                                    File "/opt/conda/lib/python3.7/pickle.py", line 1088, in load
                                                                                      dispatch[key[0]](self)
                                                                                    File "/opt/conda/lib/python3.7/pickle.py", line 1376, in load_global
                                                                                      klass = self.find_class(module, name)
                                                                                    File "/opt/conda/lib/python3.7/pickle.py", line 1426, in find_class
                                                                                      __import__(module, level=0)
                                                                                  ModuleNotFoundError: No module named 'sklearn.linear_model.logistic'
                                                                                  

It looks like a version mismatch with the sklearn package in Seldon's sklearn inference server. This is my SeldonDeployment file:

                                                                                  apiVersion: machinelearning.seldon.io/v1
                                                                                  kind: SeldonDeployment
                                                                                  metadata:
                                                                                    name: "sklearn"
                                                                                  spec:
                                                                                    name: "sklearn"
                                                                                    predictors:
                                                                                      - componentSpecs:
                                                                                        - spec:
                                                                                            containers:
                                                                                            - name: classifier
                                                                                              env:
                                                                                              - name: GUNICORN_THREADS
                                                                                                value: "10"
                                                                                              - name: GUNICORN_WORKERS
                                                                                                value: "1"
                                                                                              resources:
                                                                                                requests:
                                                                                                  cpu: 5m
                                                                                                  memory: 10Mi
                                                                                                limits:
                                                                                                  cpu: 50m
                                                                                                  memory: 100Mi
                                                                                        graph:
                                                                                          children: []
                                                                                          implementation: SKLEARN_SERVER
                                                                                          modelUri: gs://seldon-models/sklearn/iris
                                                                                          name: classifier
                                                                                        name: default
                                                                                        replicas: 2
                                                                                  

                                                                                  This is my sklearn Inference server configuration:

                                                                                      "SKLEARN_SERVER":{
                                                                                          "protocols":{
                                                                                              "kfserving":{
                                                                                                  "defaultImageVersion":"0.3.2",
                                                                                                  "image":"seldonio/mlserver"
                                                                                              },
                                                                                              "seldon":{
                                                                                                  "defaultImageVersion":"1.10.0",
                                                                                                  "image":"seldonio/sklearnserver"
                                                                                              }
                                                                                          }
                                                                                      }
                                                                                  

Is there something wrong with my configuration?

                                                                                  ANSWER

                                                                                  Answered 2021-Sep-06 at 12:46

This is because the version of Seldon Core does not match the version of the model artifact. Note that the example models for seldon-core version 1.10.0 are under gs://seldon-models/v1.11.0-dev, so the modelUri in the SeldonDeployment needs to point to the matching path.

                                                                                  Source https://stackoverflow.com/questions/69023624

                                                                                  QUESTION

                                                                                  "bash: export: ... not a valid identifier" in CLI of git-bash shell "MINGW64" on Windows 10
                                                                                  Asked 2021-Jul-12 at 08:58

                                                                                  Using git for Windows, I always get the following output when opening a new git-bash terminal:

                                                                                  bash: export: `C:\Users\username\Projects\proj1\src\packages\;C:\Users\username\Projects\proj2\MachineLearning\;C:\Users\username\Projects\proj2\MachineLearning\azure_components\': not a valid identifier
                                                                                  

                                                                                  Next, I examined the PATH variable via echo $PATH:

                                                                                  C:\Users\username\Projects\proj1-venv/Scripts:/c/Users/username/bin:/mingw64/bin:/usr/local/bin:/usr/bin:/bin:/mingw64/bin:/usr/bin:/c/Users/username/bin:/c/WINDOWS/system32:/c/WINDOWS:/c/WINDOWS/System32/Wbem:/c/WINDOWS/System32/WindowsPowerShell/v1.0:/c/WINDOWS/System32/OpenSSH:/c/Program Files/dotnet:/c/ProgramData/chocolatey/bin:/cmd:/c/users/username/.pyenv/pyenv-win/versions/3.9.0a4/Scripts:/c/Program Files/PuTTY:/c/Program Files/Amazon/AWSCLIV2:/c/ProgramData/chocolatey/lib/gsudo/bin:/c/Program Files/Amazon/AWSSAMCLI/bin:/c/Users/username/AppData/Local/Terraform:/c/Users/username/AppData/Local/iPython:/c/Program Files/Docker/Docker/resources/bin:/c/ProgramData/DockerDesktop/version-bin:/c/Program Files/Microsoft SQL Server/130/Tools/Binn:/c/Program Files/Microsoft SQL Server/Client SDK/ODBC/170/Tools/Binn:/c/Program Files (x86)/Microsoft SQL Server/150/DTS/Binn:/c/Program Files/Azure Data Studio/bin:/c/Users/username/.pyenv/pyenv-win/bin:/c/Users/username/.pyenv/pyenv-win/shims:/c/Users/username/.pyenv/pyenv-win/bin:/c/Users/username/.pyenv/pyenv-win/shims:/c/Users/username/AppData/Local/Microsoft/WindowsApps:/c/Users/username/AppData/Local/Programs/Microsoft_VS_Code/bin:/c/Users/username/.pyenv/pyenv-win/bin:/c/Users/username/.pyenv/pyenv-win/shims:/c/Users/username/AppData/Local/Pandoc:/c/texlive/2021/bin/win32:/c/Users/username/AppData/Local/Terraform:/c/Users/username/AppData/Local/iPython:/c/Users/username/.dotnet/tools:/c/Program Files/Azure Data Studio/bin:/usr/bin/vendor_perl:/usr/bin/core_perl
                                                                                  

I tried to export the content of $PATH explicitly using export $PATH, which throws the same error bash: export: ... not a valid identifier many times over:

                                                                                  bash: export: `C:\Users\username\Projects\proj1-venv/Scripts:/c/Users/username/bin:/mingw64/bin:/usr/local/bin:/usr/bin:/bin:/mingw64/bin:/usr/bin:/c/Users/username/bin:/c/WINDOWS/system32:/c/WINDOWS:/c/WINDOWS/System32/Wbem:/c/WINDOWS/System32/WindowsPowerShell/v1.0:/c/WINDOWS/System32/OpenSSH:/c/Program': not a valid identifier
                                                                                  bash: export: `Files/dotnet:/c/ProgramData/chocolatey/bin:/cmd:/c/users/username/.pyenv/pyenv-win/versions/3.9.0a4/Scripts:/c/Program': not a valid identifier
                                                                                  bash: export: `Files/PuTTY:/c/Program': not a valid identifier
                                                                                  bash: export: `Files/Amazon/AWSCLIV2:/c/ProgramData/chocolatey/lib/gsudo/bin:/c/Program': not a valid identifier
                                                                                  bash: export: `Files/Amazon/AWSSAMCLI/bin:/c/Users/username/AppData/Local/Terraform:/c/Users/username/AppData/Local/iPython:/c/Program': not a valid identifier
                                                                                  bash: export: `Files/Docker/Docker/resources/bin:/c/ProgramData/DockerDesktop/version-bin:/c/Program': not a valid identifier
                                                                                  bash: export: `Files/Microsoft': not a valid identifier
                                                                                  bash: export: `Server/130/Tools/Binn:/c/Program': not a valid identifier
                                                                                  bash: export: `Files/Microsoft': not a valid identifier
                                                                                  bash: export: `Server/Client': not a valid identifier
                                                                                  bash: export: `SDK/ODBC/170/Tools/Binn:/c/Program': not a valid identifier
                                                                                  bash: export: `(x86)/Microsoft': not a valid identifier
                                                                                  bash: export: `Server/150/DTS/Binn:/c/Program': not a valid identifier
                                                                                  bash: export: `Files/Azure': not a valid identifier
                                                                                  bash: export: `Studio/bin:/c/Users/username/.pyenv/pyenv-win/bin:/c/Users/username/.pyenv/pyenv-win/shims:/c/Users/username/.pyenv/pyenv-win/bin:/c/Users/username/.pyenv/pyenv-win/shims:/c/Users/username/AppData/Local/Microsoft/WindowsApps:/c/Users/username/AppData/Local/Programs/Microsoft_VS_Code/bin:/c/Users/username/.pyenv/pyenv-win/bin:/c/Users/username/.pyenv/pyenv-win/shims:/c/Users/username/AppData/Local/Pandoc:/c/texlive/2021/bin/win32:/c/Users/username/AppData/Local/Terraform:/c/Users/username/AppData/Local/iPython:/c/Users/username/.dotnet/tools:/c/Program': not a valid identifier
                                                                                  bash: export: `Files/Azure': not a valid identifier
                                                                                  bash: export: `Studio/bin:/usr/bin/vendor_perl:/usr/bin/core_perl': not a valid identifier
                                                                                  

I suspect that it is related to the space characters in many of the paths, which are quite common on Windows but not handled well by UNIX-style shells.

Assuming this is the culprit, how would I best work around the issue? If something else is responsible for this undesired behavior, I would also like to understand that.

                                                                                  ANSWER

                                                                                  Answered 2021-Jul-12 at 08:58

As correctly pointed out in the first comment under my OP by @Zilog80, I had to check and remove every $ that followed the export command in the following bash start-up scripts:

                                                                                  1. .bashrc
                                                                                  2. .bash_profile
                                                                                  3. .profile

In my case, all the fuss boiled down to the following line in my ~/.bashrc script:

                                                                                  export $PYTHONPATH
                                                                                  

                                                                                  This had to be replaced with

                                                                                  export PYTHONPATH
                                                                                  

Now, opening a new terminal session no longer throws these "bash: export: … not a valid identifier" errors.

                                                                                  Source https://stackoverflow.com/questions/68344089

                                                                                  QUESTION

                                                                                  How to import kaggle datasets to PyCharm IDE
                                                                                  Asked 2021-Jul-05 at 20:18

I'm able to install the kaggle package using pip, place the kaggle.json file in the appropriate folder, and see the competitions listed. But when I try to download the data files, an error is displayed.

                                                                                  CODE:

                                                                                  import kaggle
                                                                                  from kaggle.api.kaggle_api_extended import KaggleApi
                                                                                  api = KaggleApi()
                                                                                  api.authenticate()
                                                                                  lis1 = api.competitions_list(search='LANL-Earthquake-Prediction')
                                                                                  api.competition_download_files('LANL-Earthquake-Prediction')
                                                                                  

Error message: I get the following traceback.

                                                                                  Traceback (most recent call last):
                                                                                    File "C:\Users\rpremala003\PycharmProjects\MachineLearning\Importing datasets\Importing Kaggle through API.py", line 11, in 
                                                                                      api.competition_download_files('LANL-Earthquake-Prediction')
                                                                                    File "C:\Users\rpremala003\PycharmProjects\MachineLearning\venv\lib\site-packages\kaggle\api\kaggle_api_extended.py", line 718, in competition_download_files
                                                                                      self.competitions_data_download_files_with_http_info(
                                                                                    File "C:\Users\rpremala003\PycharmProjects\MachineLearning\venv\lib\site-packages\kaggle\api\kaggle_api.py", line 400, in competitions_data_download_files_with_http_info
                                                                                      return self.api_client.call_api(
                                                                                    File "C:\Users\rpremala003\PycharmProjects\MachineLearning\venv\lib\site-packages\kaggle\api_client.py", line 329, in call_api
                                                                                      return self.__call_api(resource_path, method,
                                                                                    File "C:\Users\rpremala003\PycharmProjects\MachineLearning\venv\lib\site-packages\kaggle\api_client.py", line 161, in __call_api
                                                                                      response_data = self.request(
                                                                                    File "C:\Users\rpremala003\PycharmProjects\MachineLearning\venv\lib\site-packages\kaggle\api_client.py", line 351, in request
                                                                                      return self.rest_client.GET(url,
                                                                                    File "C:\Users\rpremala003\PycharmProjects\MachineLearning\venv\lib\site-packages\kaggle\rest.py", line 247, in GET
                                                                                      return self.request("GET", url,
                                                                                    File "C:\Users\rpremala003\PycharmProjects\MachineLearning\venv\lib\site-packages\kaggle\rest.py", line 241, in request
                                                                                      raise ApiException(http_resp=r)
                                                                                  kaggle.rest.ApiException: (403)
                                                                                  Reason: Forbidden
                                                                                  HTTP response headers: HTTPHeaderDict({'Date': 'Sun, 04 Jul 2021 19:34:22 GMT', 'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Set-Cookie': 'ka_sessionid=232614eb67b588938ac922774220f567; max-age=2626560; path=/, GCLB=CPDc0e-esJK-ZQ; path=/; HttpOnly', 'Vary': 'Accept-Encoding', 'Access-Control-Allow-Credentials': 'true', 'Turbolinks-Location': 'https://www.kaggle.com/api/v1/competitions/data/download-all/LANL-Earthquake-Prediction', 'X-Kaggle-MillisecondsElapsed': '91', 'X-Kaggle-RequestId': 'd485fdb0e4674834b5c1b444fd9885de', 'X-Kaggle-ApiVersion': '1.5.12', 'X-Frame-Options': 'SAMEORIGIN', 'Strict-Transport-Security': 'max-age=63072000; includeSubDomains; preload', 'Content-Security-Policy': "object-src 'none'; script-src 'nonce-q8ly/A24jHgL5gZmidMI6A==' 'report-sample' 'unsafe-inline' 'unsafe-eval' 'strict-dynamic' https: http:; frame-src 'self' https://www.kaggleusercontent.com https://www.youtube.com/embed/ https://polygraph-cool.github.io https://www.google.com/recaptcha/ https://form.jotform.com https://submit.jotform.us https://submit.jotformpro.com https://submit.jotform.com https://www.docdroid.com https://www.docdroid.net https://kaggle-static.storage.googleapis.com https://kaggle-static-staging.storage.googleapis.com https://kkb-dev.jupyter-proxy.kaggle.net https://kkb-staging.jupyter-proxy.kaggle.net https://kkb-production.jupyter-proxy.kaggle.net https://kkb-production.firebaseapp.com https://apis.google.com https://content-sheets.googleapis.com/ https://accounts.google.com/ https://storage.googleapis.com https://docs.google.com; base-uri 'none'; report-uri https://csp.withgoogle.com/csp/kaggle/20201130;", 'X-Content-Type-Options': 'nosniff', 'Referrer-Policy': 'strict-origin-when-cross-origin', 'Via': '1.1 google', 'Alt-Svc': 'clear'})
                                                                                  HTTP response body: b'{"code":403,"message":"You must accept this competition\\u0027s rules before you\\u0027ll be able to download files."}'
                                                                                  

                                                                                  ANSWER

                                                                                  Answered 2021-Jul-05 at 20:18

                                                                                  The error returned describes the root of the issue:

                                                                                  HTTP response body: b'{"code":403,"message":"You must accept this competition\\u0027s rules before you\\u0027ll be able to download files."}'
                                                                                  

                                                                                  From the documentation of the Kaggle API:

                                                                                  Just like participating in a Competition normally through the user interface, you must read and accept the rules in order to download data or make submissions. You cannot accept Competition rules via the API. You must do this by visiting the Kaggle website and accepting the rules there.

If you have already accepted the competition rules, perhaps you are not authenticating with a token that corresponds to that user.

                                                                                  Source https://stackoverflow.com/questions/68248286

                                                                                  QUESTION

                                                                                  Possible to select the DNN AI core for model evaluation on HoloLens 2?
                                                                                  Asked 2021-Jun-07 at 17:27

Can anyone tell me whether one can directly select the DNN AI core for neural network evaluation on HoloLens 2?

I have read about the HPU, which includes a DNN AI core, in the GitHub repo here. But in the docs, only CPU and GPU are listed as devices that can be used.

                                                                                  ANSWER

                                                                                  Answered 2021-Jun-07 at 17:27

                                                                                  Currently Windows AI only supports inference on CPUs or GPUs.

Unfortunately, there isn't a way to perform inference on the HoloLens 2 HPU DNN AI Core at the moment.
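For context, this is roughly how the evaluation device is chosen with the Windows ML (Windows.AI.MachineLearning) API today; a sketch assuming a UWP/C# app with an ONNX model bundled as an asset (the asset path and method name are illustrative):

using System;
using System.Threading.Tasks;
using Windows.AI.MachineLearning;
using Windows.Storage;

// Load an ONNX model and explicitly pick the device used for evaluation.
// Only CPU and GPU (DirectX) kinds are exposed; there is no HPU / DNN-core option.
async Task<LearningModelSession> CreateSessionAsync()
{
    StorageFile modelFile = await StorageFile.GetFileFromApplicationUriAsync(
        new Uri("ms-appx:///Assets/model.onnx"));   // hypothetical asset path
    LearningModel model = await LearningModel.LoadFromStorageFileAsync(modelFile);

    // LearningModelDeviceKind: Default, Cpu, DirectX, DirectXHighPerformance, DirectXMinPower
    var device = new LearningModelDevice(LearningModelDeviceKind.DirectX);
    return new LearningModelSession(model, device);
}

Passing LearningModelDeviceKind.Cpu instead forces CPU evaluation; none of the enum values map to the HPU.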

                                                                                  Source https://stackoverflow.com/questions/66347068

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

                                                                                  Vulnerabilities

                                                                                  No vulnerabilities reported

                                                                                  Install machinelearning

                                                                                  Learn more about the basics of ML.NET.
Build your first ML.NET model by following our ML.NET Getting Started tutorial (see the minimal sketch after this list).
                                                                                  Check out our documentation and tutorials.
                                                                                  See the API Reference documentation.
                                                                                  Clone our ML.NET Samples GitHub repo and run some sample apps.
                                                                                  Take a look at some ML.NET Community Samples.
                                                                                  Watch some videos on the ML.NET videos YouTube playlist.
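As a quick orientation before diving into those tutorials, here is a minimal end-to-end sketch of an ML.NET pipeline, loosely following the Getting Started tutorial; the HouseData/Prediction classes and the sample values are illustrative:

using System;
using Microsoft.ML;
using Microsoft.ML.Data;

public class HouseData { public float Size { get; set; } public float Price { get; set; } }
public class Prediction { [ColumnName("Score")] public float Price { get; set; } }

public static class QuickStart
{
    public static void Main()
    {
        var mlContext = new MLContext();

        // 1. Load training data (sizes in 1000 sq ft, prices in $100k).
        HouseData[] houseData =
        {
            new HouseData { Size = 1.1F, Price = 1.2F },
            new HouseData { Size = 1.9F, Price = 2.3F },
            new HouseData { Size = 2.8F, Price = 3.0F },
            new HouseData { Size = 3.4F, Price = 3.7F },
        };
        IDataView trainingData = mlContext.Data.LoadFromEnumerable(houseData);

        // 2. Define the pipeline: one feature column plus an SDCA regression trainer.
        var pipeline = mlContext.Transforms.Concatenate("Features", new[] { "Size" })
            .Append(mlContext.Regression.Trainers.Sdca(
                labelColumnName: "Price", maximumNumberOfIterations: 100));

        // 3. Train the model, then 4. predict on a new example.
        var model = pipeline.Fit(trainingData);
        var prediction = mlContext.Model.CreatePredictionEngine<HouseData, Prediction>(model)
            .Predict(new HouseData { Size = 2.5F });

        Console.WriteLine($"Predicted price for 2500 sq ft: {prediction.Price * 100:F0}k");
    }
}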

                                                                                  Support

                                                                                  ML.NET runs on Windows, Linux, and macOS using .NET Core, or Windows using .NET Framework. ML.NET also runs on ARM64, Apple M1, and Blazor Web Assembly. However, there are some limitations. 64-bit is supported on all platforms. 32-bit is supported on Windows, except for TensorFlow and LightGBM related functionality.



                                                                                  Consider Popular Machine Learning Libraries

tensorflow by tensorflow
youtube-dl by ytdl-org
models by tensorflow
pytorch by pytorch
keras by keras-team

                                                                                  Try Top Libraries by dotnet

aspnetcore by dotnet (C#)
core by dotnet (PowerShell)
maui by dotnet (C#)
roslyn by dotnet (C#)
efcore by dotnet (C#)

                                                                                  Compare Machine Learning Libraries with Highest Support

youtube-dl by ytdl-org
scikit-learn by scikit-learn
models by tensorflow
tensorflow by tensorflow
keras by keras-team
