machinelearning-samples | open source and cross-platform machine learning framework | Machine Learning library

by dotnet | PowerShell | Version: 186179 | License: MIT

kandi X-RAY | machinelearning-samples Summary

machinelearning-samples is a PowerShell library typically used in Artificial Intelligence, Machine Learning, and Deep Learning applications. machinelearning-samples has no bugs, no vulnerabilities, a Permissive License, and medium support. You can download it from GitHub.


            Support

              machinelearning-samples has a medium active ecosystem.
              It has 4055 stars, 2591 forks, and 298 watchers.
              It had no major release in the last 12 months.
              There are 135 open issues and 321 closed issues; on average, issues are closed in 49 days. There are 50 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of machinelearning-samples is 186179.

            Quality

              machinelearning-samples has no bugs reported.

            Security

              machinelearning-samples has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            License

              machinelearning-samples is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              machinelearning-samples releases are available to install and integrate.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionalities of the libraries and avoid rework.
            It currently covers the most popular Java, JavaScript, and Python libraries.

            machinelearning-samples Key Features

            No Key Features are available at this moment for machinelearning-samples.

            machinelearning-samples Examples and Code Snippets

            No Code Snippets are available at this moment for machinelearning-samples.

            Community Discussions

            QUESTION

            Correct pb file to move Tensorflow model into ML.NET
            Asked 2020-Nov-24 at 10:57

            I have a TensorFlow model that I built (a 1D CNN) that I would now like to implement into .NET.
            In order to do so I need to know the Input and Output nodes.
            When I upload the model to Netron I get a different graph depending on my save method, and the only one that looks correct comes from an h5 upload. Here is the model.summary():

            If I save the model as an h5 with model.save("Mn_pb_model.h5") and load that into Netron to graph it, everything looks correct:

            However, ML.NET will not accept h5 format so it needs to be saved as a pb.

            In looking through samples of adopting TensorFlow in ML.NET, this sample shows a TensorFlow model that is saved in a similar format to the SavedModel format - recommended by TensorFlow (and also recommended by ML.NET here "Download an unfrozen [SavedModel format] ..."). However when saving and loading the pb file into Netron I get this:

            And zoomed in a little further (on the far right side),

            As you can see, it looks nothing like it should.
            Additionally the input nodes and output nodes are not correct so it will not work for ML.NET (and I think something is wrong).
            I am using the recommended way from TensorFlow to determine the Input / Output nodes:

            When I try to obtain a frozen graph and load it into Netron, at first it looks correct, but I don't think that it is:

            There are four reasons I do not think this is correct.

            • it is very different from the graph when it was uploaded as an h5 (which looks correct to me).
            • as you can see from earlier, I am using 1D convolutions throughout and this is showing that it goes to 2D (and remains that way).
            • this file size is 128MB whereas the one in the TensorFlow to ML.NET example is only 252KB. Even the Inception model is only 56MB.
            • if I load the Inception model in TensorFlow and save it as an h5, it looks the same as from the ML.NET resource, yet when I save it as a frozen graph it looks different. If I take the same model and save it in the recommended SavedModel format, it shows up all messed up in Netron. Take any model you want and save it in the recommended SavedModel format and you will see for yourself (I've tried it on a lot of different models).

            Additionally, looking at the model.summary() of Inception alongside its graph, it is similar to its graph in the same way my model.summary() is to the h5 graph.

            It seems like there should be an easier way (and a correct way) to save a TensorFlow model so it can be used in ML.NET.

            Please show that your suggested solution works: In the answer that you provide, please check that it works (load the pb model [this should also have a Variables folder in order to work for ML.NET] into Netron and show that it is the same as the h5 model, e.g., screenshot it). So that we are all trying the same thing, here is a link to a MNIST ML crash course example. It takes less than 30s to run the program and produces a model called my_model. From here you can save it according to your method and upload it to see the graph on Netron. Here is the h5 model upload:

            ...

            ANSWER

            Answered 2020-Nov-24 at 10:57

            This answer is made of 3 parts:

            • going through other programs
            • NOT going through other programs
            • Difference between the op-level graph and the conceptual graph (and why Netron shows you different graphs)

            1. Going through other programs:

            ML.NET needs an ONNX model, not a pb file.

            There are several ways to convert your model from TensorFlow to an ONNX model that you can load in ML.NET:

            This SO post could help you too: Load model with ML.NET saved with keras

            And here you will find more information on the h5 and pb file formats, what they contain, etc.: https://www.tensorflow.org/guide/keras/save_and_serialize#weights_only_saving_in_savedmodel_format
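
            For the ML.NET side of that route, a rough C# sketch of consuming a converted ONNX model is shown below. This is only an outline: the file name, the tensor names ("input", "dense_1") and the vector size are assumptions that must be replaced with whatever your converter actually produced. It uses the ApplyOnnxModel transform from the Microsoft.ML.OnnxTransformer package.

            using Microsoft.ML;
            using Microsoft.ML.Data;

            // Hypothetical input schema; the column name must match the ONNX graph's input tensor.
            public class OnnxInput
            {
                [VectorType(28 * 28)]
                [ColumnName("input")]
                public float[] Pixels { get; set; }
            }

            public static class OnnxScoring
            {
                // Requires the Microsoft.ML.OnnxTransformer and Microsoft.ML.OnnxRuntime NuGet packages.
                public static ITransformer LoadOnnxModel(MLContext mlContext, string onnxPath)
                {
                    // "dense_1" is a placeholder for the real output tensor name (check it in Netron).
                    var pipeline = mlContext.Transforms.ApplyOnnxModel(
                        outputColumnNames: new[] { "dense_1" },
                        inputColumnNames: new[] { "input" },
                        modelFile: onnxPath);

                    // Fitting against an empty data view just materializes the transformer;
                    // predictions can then be read back from the "dense_1" column.
                    var emptyData = mlContext.Data.LoadFromEnumerable(new OnnxInput[0]);
                    return pipeline.Fit(emptyData);
                }
            }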

            2. But you are asking "TensorFlow -> ML.NET without going through other programs":

            2.A An overview of the problem:

            First, the pb file you made using the code you provided seems, from what you say, not to be the same as the one used in the example you mentioned in the comments (https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/text-classification-tf).

            Could you try to use the pb file that is generated via tf.saved_model.save? Does that work?

            A thought about this Microsoft blog post:

            From this page we can read:

            In ML.NET you can load a frozen TensorFlow model .pb file (also called “frozen graph def” which is essentially a serialized graph_def protocol buffer written to disk)

            and:

            That TensorFlow .pb model file that you see in the diagram (and the labels.txt codes/Ids) is what you create/train in Azure Cognitive Services Custom Vision and then export as a frozen TensorFlow model file to be used by ML.NET C# code.

            So, this pb file is a type of file generated from Azure Cognitive Services Custom Vision. Perhaps you could try this way too?
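
            For the ML.NET end of this route, a minimal C# sketch of wiring a frozen .pb graph into a pipeline is shown below, using LoadTensorFlowModel/ScoreTensorFlowModel from the Microsoft.ML.TensorFlow package. The tensor names ("input", "output") and the vector size are assumptions; replace them with the node names you identified in Netron.

            using Microsoft.ML;
            using Microsoft.ML.Data;

            // Hypothetical input schema; "input" must match the frozen graph's input tensor name.
            public class FrozenGraphInput
            {
                [VectorType(224 * 224 * 3)]
                [ColumnName("input")]
                public float[] Pixels { get; set; }
            }

            public static class FrozenPbScoring
            {
                // Requires the Microsoft.ML.TensorFlow and SciSharp.TensorFlow.Redist NuGet packages.
                public static IEstimator<ITransformer> Build(MLContext mlContext, string frozenPbPath)
                {
                    return mlContext.Model.LoadTensorFlowModel(frozenPbPath)
                        .ScoreTensorFlowModel(
                            outputColumnNames: new[] { "output" },
                            inputColumnNames: new[] { "input" },
                            addBatchDimensionInput: true);
                }
            }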

            2.B Now, we'll try to provide the solution:

            In fact, in TensorFlow 1.x you could save a frozen graph easily, using freeze_graph.

            But TensorFlow 2.x does not support freeze_graph and convert_variables_to_constants.

            You could read some useful information here too: Tensorflow 2.0 : frozen graph support

            Some users are wondering how to do this in TF 2.x: how to freeze graph in tensorflow 2.0 (https://github.com/tensorflow/tensorflow/issues/27614)

            There are, however, some solutions to create the pb file you can load in ML.NET as you want:

            https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/

            How to save Keras model as frozen graph? (already linked in your question though)

            3. Difference between the op-level graph and the conceptual graph (and why Netron shows you different graphs):

            As @mlneural03 said in a comment on your question, Netron shows a different graph depending on which file format you give it:

            • If you load an h5 file, Netron will display the conceptual graph
            • If you load a pb file, Netron will display the op-level graph

            What is the difference between an op-level graph and a conceptual graph?

            • In TensorFlow, the nodes of the op-level graph represent the operations ("ops"), like tf.add, tf.matmul, tf.linalg.inv, etc.
            • The conceptual graph shows you your model's structure.

            That's completely different things.

            "ops" is an abbreviation for "operations". Operations are nodes that perform the computations.

            So, that's why you get a very large graph with a lot of nodes when you load the pb file in Netron: you see all the computation nodes of the graph. But when you load the h5 file in Netron, you "just" see your model's structure, the design of your model.

            In TensorFlow, you can view your graph with TensorBoard:

            • By default, TensorBoard displays the op-level graph.
            • To view the conceptual graph, in TensorBoard, select the "keras" tag.

            There is a Jupyter Notebook that explains very clearly the difference between the op-level graph and the conceptual graph here: https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/graphs.ipynb

            You can also read this "issue" on the TensorFlow GitHub, related to your question: https://github.com/tensorflow/tensorflow/issues/39699

            In a nutshell:

            In fact there is no problem, just a little misunderstanding (and that's OK, we can't know everything).

            You would like to see the same graph when loading the h5 file and the pb file in Netron, but that cannot work, because the files do not contain the same graph. These graphs are two ways of displaying the same model.

            The pb file created with the method we described will be the correct pb file to load with ML.NET, as described in the Microsoft tutorial we talked about. So, if you load your correct pb file as described in these tutorials, you will load your real/true model.

            Source https://stackoverflow.com/questions/64794378

            QUESTION

            C# ML.Net Image classification: Does GPU acceleration help improve the performance of predictions and how can I tell if it is?
            Asked 2020-Aug-31 at 21:20

            I'm currently working on a desktop tool in .NET Framework 4.8 that takes in a list of images with potential cracks and uses a model trained with ML.Net (C#) to perform crack detection. Ideally, I'd like the prediction to take less than 100ms on 10 images (Note: a single image prediction takes between 36-41ms).

            At first, I tried performing multiple predictions in different threads using a list of PredictionEngines and a Parallel.For loop (using a list of threads since there is no PredictionEnginePool implementation for .NET Framework). I later learned that using an ITransformer to do predictions is a recommended, thread-safe approach for .NET Framework and moved to using that, but in both cases it did not give me the performance I was hoping for.
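
            For reference, batch scoring through an ITransformer typically looks something like the sketch below; the CrackImageInput and CrackPrediction classes and the column names are hypothetical stand-ins, not this question's actual code.

            using System.Collections.Generic;
            using System.Linq;
            using Microsoft.ML;

            // Hypothetical input/output rows for illustration only.
            public class CrackImageInput
            {
                public string ImagePath { get; set; }
            }

            public class CrackPrediction
            {
                public string PredictedLabel { get; set; }
            }

            public static class BatchScoring
            {
                // ITransformer.Transform is thread-safe and scores every row in one pass,
                // which is usually cheaper than calling Predict once per image.
                public static List<CrackPrediction> ScoreBatch(
                    MLContext mlContext, ITransformer model, IEnumerable<CrackImageInput> images)
                {
                    IDataView inputData = mlContext.Data.LoadFromEnumerable(images);
                    IDataView scored = model.Transform(inputData);
                    return mlContext.Data.CreateEnumerable<CrackPrediction>(scored, reuseRowObject: false).ToList();
                }
            }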

            It takes around 255-281ms (267.1ms on average) to execute the following code:

            ...

            ANSWER

            Answered 2020-Aug-31 at 19:31

            It's likely a version mismatch.

            TensorFlow supports CUDA® 10.1 (TensorFlow >= 2.1.0)

            https://www.tensorflow.org/install/gpu

            You can check your output window for reasons why it would not be connecting to your GPU.

            Source https://stackoverflow.com/questions/63641000

            QUESTION

            Machine Learning to recognize important words in a sentence
            Asked 2020-Mar-31 at 19:30

            I want to use machine learning to extract rock-climbing related names/locations from a sentence. I've already "classified" a bunch of data like this:

            ...

            ANSWER

            Answered 2020-Mar-31 at 19:30

            You can easily try some pre-trained NER models like Stanford's or spaCy's. They probably will not be sufficient for you, so at that point you will need to define your entity types and do some labeling to train your own NER model.

            You can start by checking out the Stanford NER and spaCy NER modules.

            Edit: You can change the classifier type to get different results.

            Example result on Stanford Online Demo Tool:

            Source https://stackoverflow.com/questions/60955945

            QUESTION

            Microsoft ML.Net SDCA Regression Trainer Can't Find Input Column Data
            Asked 2020-Jan-10 at 16:48

            I've decided to try and get to grips with Microsoft's new ML.Net library.

            I'm trying to do my own version of the taxi fare example with some demo data I have; however, it always throws an error saying it cannot find one of my columns.

            Here is my code.

            ...

            ANSWER

            Answered 2019-Dec-17 at 14:20

            Modify all your lines starting with pipeline.Append(...). The Append(...) method is not void; it returns a new IEstimator chain, so you must assign the return value back to your pipeline. Change all the Append calls accordingly, as sketched below.
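
            A small sketch of that pattern is shown here; the column and trainer names are illustrative (loosely following the taxi fare tutorial), not the asker's exact code.

            using Microsoft.ML;

            public static class PipelineFix
            {
                public static IEstimator<ITransformer> Build(MLContext mlContext)
                {
                    // Wrong: Append does not mutate the chain, so the result is silently discarded.
                    // pipeline.Append(mlContext.Transforms.Concatenate("Features", "TripDistance", "PassengerCount"));

                    // Right: assign the returned estimator chain back, or simply keep chaining the calls.
                    var pipeline = mlContext.Transforms
                        .CopyColumns(outputColumnName: "Label", inputColumnName: "FareAmount")
                        .Append(mlContext.Transforms.Concatenate("Features", "TripDistance", "PassengerCount"))
                        .Append(mlContext.Regression.Trainers.Sdca(labelColumnName: "Label", featureColumnName: "Features"));

                    return pipeline;
                }
            }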

            Source https://stackoverflow.com/questions/59372243

            QUESTION

            Exception in PredictionEngineBase when using Time Series model with PredictionEnginePool (ML.NET)
            Asked 2019-Nov-11 at 04:18

            I've created a Time Series model using the method described here resulting in this code:

            ...

            ANSWER

            Answered 2019-Nov-11 at 04:18

            This code has been added after my time, so I don't have firsthand knowledge.

            However, as far as I know ML.NET, the answer is yes: most likely, PredictionEnginePool does not support time series prediction.

            The reason is, the time-series prediction engine is actually a 'state machine'. You need to feed all the data, in the correct sequence, to one prediction engine, so that it correctly reacts to this 'time series'.

            The prediction engine pool solves a completely different scenario: if you have truly stateless models, you can instantiate a handful of interchangeable instances (a pool) of prediction engines, and the predictions will be handled by whichever engine is currently free.

            These 'stateless' models are represented by a 'row-to-row mapper' concept in the codebase. Basically, the prediction of such a model is determined exclusively and solely by one row of data.
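
            To make the contrast concrete, here is a rough sketch of keeping a single, stateful time-series engine alive instead of using a pool. The input/output types are hypothetical and the extension methods come from the Microsoft.ML.TimeSeries package, so treat this as an outline rather than drop-in code.

            using System.Collections.Generic;
            using Microsoft.ML;
            using Microsoft.ML.Transforms.TimeSeries;

            // Hypothetical input/output types for illustration only.
            public class SalesObservation
            {
                public float Sales { get; set; }
            }

            public class SalesForecast
            {
                public float[] ForecastedSales { get; set; }
            }

            public static class TimeSeriesScoring
            {
                public static void Run(MLContext mlContext, ITransformer model, IEnumerable<SalesObservation> observations)
                {
                    // One engine, reused for the whole sequence: each Predict call updates its internal state.
                    var engine = model.CreateTimeSeriesEngine<SalesObservation, SalesForecast>(mlContext);

                    foreach (var observation in observations)
                    {
                        SalesForecast forecast = engine.Predict(observation);
                    }

                    // Persist the updated state so the series can be continued later.
                    engine.CheckPoint(mlContext, "model_with_state.zip");
                }
            }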

            Source https://stackoverflow.com/questions/58722150

            QUESTION

            After upgrading to ml.net v0.10, Fit() is not working
            Asked 2019-Feb-08 at 07:53

            I'm using .NET-Framework 4.6.1

            After upgrading ML.NET to v0.10 I cannot run my code. I build my pipeline and then get an error when executing the Fit() method.

            Message = "Method not found: \"System.Collections.Generic.IEnumerable1 System.Linq.Enumerable.Append(System.Collections.Generic.IEnumerable1, !!0)\"."

            using System.Collections.Generic; is in my directives.

            Am I missing something or should I stick with v0.9 for now?

            Thanks

            ...

            ANSWER

            Answered 2019-Feb-06 at 21:49
            • The API in ML.NET v0.10, and moving to v0.11, is being changed so that it is consistent across many different classes in the API.

            The issue you are facing is most likely caused by the fact that in many API methods we changed the order of the parameters. Hence, if those parameters are of the same type, the code will still compile but it won't work properly.

            Check all the parameters you pass to the ML.NET API to make sure they are right. Something that might help is to provide the names of the parameters, like:
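
            A brief illustration of the idea, written against the later, stabilized 1.x API names rather than v0.10 exactly; the column names are made up. Spelling out the parameter names protects you when same-typed parameters change order between releases.

            using Microsoft.ML;

            public static class NamedParameters
            {
                public static IEstimator<ITransformer> Build(MLContext mlContext)
                {
                    // Explicit parameter names keep the intent unambiguous even if an update
                    // swaps the positional order of same-typed string parameters.
                    return mlContext.Transforms
                        .CopyColumns(outputColumnName: "Label", inputColumnName: "Target")
                        .Append(mlContext.Transforms.NormalizeMinMax(outputColumnName: "FeaturesNorm", inputColumnName: "Features"))
                        .Append(mlContext.Regression.Trainers.Sdca(labelColumnName: "Label", featureColumnName: "FeaturesNorm"));
                }
            }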

            Source https://stackoverflow.com/questions/54551450

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install machinelearning-samples

            You can download it from GitHub.

            Support

            In addition to the ML.NET samples provided by Microsoft, we're also highlighting samples created by the community, showcased on this separate page: ML.NET Community Samples. Those community samples are not maintained by Microsoft but by their owners. If you have created any cool ML.NET sample, please add its info to this REQUEST issue and we'll eventually publish its information on the mentioned page.
            Find more information at:
