TensorFlowExperiments | Code for some of my TensorFlow experiments | Machine Learning library

 by GjjvdBurg | Python | Version: Current | License: GPL-3.0

kandi X-RAY | TensorFlowExperiments Summary

TensorFlowExperiments is a Python library typically used in Artificial Intelligence, Machine Learning, Deep Learning, and TensorFlow applications. TensorFlowExperiments has no bugs, no reported vulnerabilities, a Strong Copyleft license, and low support. However, a build file is not available. You can download it from GitHub.

Code for some of my TensorFlow experiments

            Support

              TensorFlowExperiments has a low-activity ecosystem.
              It has 21 stars and 13 forks. There are 5 watchers for this library.
              It had no major release in the last 6 months.
              TensorFlowExperiments has no reported issues and no open pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of TensorFlowExperiments is current.

            Quality

              TensorFlowExperiments has 0 bugs and 1 code smell.

            Security

              Neither TensorFlowExperiments nor its dependent libraries have any reported vulnerabilities.
              TensorFlowExperiments code analysis shows 0 unresolved vulnerabilities.
              There is 1 security hotspot that needs review.

            License

              TensorFlowExperiments is licensed under the GPL-3.0 License. This license is Strong Copyleft.
              Strong Copyleft licenses enforce sharing, and you can use them when creating open source projects.

            Reuse

              TensorFlowExperiments releases are not available. You will need to build from source code and install.
              TensorFlowExperiments has no build file. You will need to create the build yourself to build the component from source.
              TensorFlowExperiments saves you 41 person hours of effort in developing the same functionality from scratch.
              It has 109 lines of code, 8 functions and 1 file.
              It has low code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed TensorFlowExperiments and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality TensorFlowExperiments implements, and to help you decide if it suits your requirements.
            • Autoencoder
            • Create a fully-connected (fc) layer
            • Creates a weight variable
            • Create a bias variable
            • Create summaries for loss
            • Create an image summary layer
            • Create an image
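            The repository itself documents no snippets, but helpers with names like these are common in TensorFlow 1.x-era code. Below is a minimal sketch of what such weight, bias, and fully-connected-layer helpers typically look like; the names, shapes, and initializers are illustrative assumptions, not taken from the repository.

            import tensorflow.compat.v1 as tf
            tf.disable_v2_behavior()

            def weight_variable(shape):
                # Small random initial values break symmetry between units.
                # (Illustrative initializer; the repo may use a different one.)
                return tf.Variable(tf.truncated_normal(shape, stddev=0.1), name="weights")

            def bias_variable(shape):
                # A small positive bias helps avoid dead ReLU units early in training.
                return tf.Variable(tf.constant(0.1, shape=shape), name="biases")

            def fc_layer(inputs, in_dim, out_dim, activation=tf.nn.relu, name="fc"):
                # Fully-connected layer with a histogram summary, matching the
                # kind of functions listed above.
                with tf.name_scope(name):
                    W = weight_variable([in_dim, out_dim])
                    b = bias_variable([out_dim])
                    pre_activation = tf.matmul(inputs, W) + b
                    tf.summary.histogram("pre_activation", pre_activation)
                    return activation(pre_activation)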

            TensorFlowExperiments Key Features

            No Key Features are available at this moment for TensorFlowExperiments.

            TensorFlowExperiments Examples and Code Snippets

            No Code Snippets are available at this moment for TensorFlowExperiments.

            Community Discussions

            QUESTION

            Why does keras model predict slower after compile?
            Asked 2020-Jan-15 at 06:13

            In theory, prediction speed should be constant, since the weights have a fixed size. How do I get my speed back after compile (without needing to remove the optimizer)?

            See associated experiment: https://nbviewer.jupyter.org/github/off99555/TensorFlowExperiments/blob/master/test-prediction-speed-after-compile.ipynb?flush_cache=true

            ...

            ANSWER

            Answered 2019-Oct-16 at 12:51

            UPDATE: see actual answer posted as a separate answer; this post contains supplemental info

            .compile() sets up the majority of the TF/Keras graph, including losses, metrics, gradients, and partly the optimizer and its weights - which guarantees a notable slowdown.

            What is unexpected is the extent of slowdown - 10-fold on my own experiment, and for predict(), which doesn't update any weights. Looking into TF2's source code, graph elements appear tightly intertwined, with resources not necessarily being allocated "fairly".

            The developers may have overlooked predict's performance for an uncompiled model, as models are typically used compiled - but in practice, the difference is unacceptable. It's also possible it's a "necessary evil", as there is a simple workaround (see below).

            This isn't a complete answer, and I hope someone can provide it here - if not, I'd suggest opening a GitHub issue on TensorFlow (the OP already has).

            Workaround: train a model, save its weights, re-build the model without compiling, load the weights. Do not save the entire model (e.g. model.save()), as it'll load compiled - instead use model.save_weights() and model.load_weights().

            Workaround 2: as above, but use load_model(path, compile=False); suggestion credit: D. Möller
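
            A minimal sketch of both workarounds, assuming a simple Sequential model; the architecture, file names, and the build_model helper below are illustrative, not taken from the linked experiment:

            import tensorflow as tf
            from tensorflow import keras

            def build_model():
                # Illustrative architecture; the notebook linked above uses its own model.
                return keras.Sequential([
                    keras.layers.Dense(64, activation="relu", input_shape=(10,)),
                    keras.layers.Dense(1),
                ])

            # Train a compiled model as usual.
            model = build_model()
            model.compile(optimizer="adam", loss="mse")
            # model.fit(x_train, y_train, epochs=5)

            # Workaround 1: save only the weights, then rebuild the model WITHOUT compiling.
            model.save_weights("model.weights.h5")
            fast_model = build_model()                  # no compile() here
            fast_model.load_weights("model.weights.h5")
            # fast_model.predict(x_test)                # avoids the post-compile slowdown

            # Workaround 2: save the full model, but reload it uncompiled.
            model.save("model.h5")
            fast_model2 = keras.models.load_model("model.h5", compile=False)

            Either way, predict() then runs on an uncompiled model, which is the state the answer recommends for inference-only use.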

            UPDATE: to clarify, the optimizer is not fully instantiated by compile; its weights and update tensors are created only on the first call to a fitting function (fit, train_on_batch, etc.), via model._make_train_function().

            The observed behavior is thus even more strange. Worse yet, building the optimizer does not elicit any further slowdowns (see below) - suggesting "graph size" is not the main explanation here.

            EDIT: on some models, a 30x slowdown. TensorFlow, what have you done. The full example is in the linked source below.

            Source https://stackoverflow.com/questions/58378374

            QUESTION

            Why does keras model get bigger after training?
            Asked 2019-Jul-16 at 13:59

            I noticed that when I create a model using tensorflow.keras.Sequential() and save it, the file size is around 5 MiB, but after I call model.fit(..), the saved file grows to 17 MiB. I copied the model to reduce the file size and verified that the accuracy is the same.

            My question is: what exactly is in the extra 12 MiB that fit() produces? How can I access that content? And if I remove those extra 12 MiB, could it affect prediction accuracy or cause any weird side effects?

            See my experiment code here: https://nbviewer.jupyter.org/github/off99555/TensorFlowExperiments/blob/master/test-save-keras-model.ipynb

            ...

            ANSWER

            Answered 2019-Jul-16 at 13:59

            The answer is that it's the size of the Adam optimizer state. When I change the optimizer to SGD (the vanilla optimizer), the file is no longer that big. As far as I know, the Adam optimizer maintains gradient information from previous training iterations, and this state can be as large as the model's weights. That's why it makes the saved file so much bigger.

            With this in mind, when you save your model, make sure to set include_optimizer=False if you use an optimizer that maintains large state, as Adam does.

            Beware, though: a model saved this way cannot be loaded to continue training; it should only be used for inference.
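
            A minimal sketch of the comparison, assuming a small toy model, random data, and the legacy HDF5 format; the model, data, and file names below are illustrative, not from the linked notebook:

            import os
            import numpy as np
            from tensorflow import keras

            # Illustrative toy model; the notebook linked above uses its own setup.
            model = keras.Sequential([
                keras.layers.Dense(256, activation="relu", input_shape=(100,)),
                keras.layers.Dense(10),
            ])
            model.compile(optimizer="adam", loss="mse")

            x = np.random.rand(64, 100).astype("float32")
            y = np.random.rand(64, 10).astype("float32")
            model.fit(x, y, epochs=1, verbose=0)  # fit() creates the Adam slot variables

            # Adam keeps two extra tensors (m and v) per weight, so saving the
            # optimizer state roughly triples the weight payload in the file.
            model.save("with_optimizer.h5")
            model.save("inference_only.h5", include_optimizer=False)
            print(os.path.getsize("with_optimizer.h5"), os.path.getsize("inference_only.h5"))

            The second file should come out at roughly a third of the first, consistent with the 5 MiB vs 17 MiB observation in the question.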

            Source https://stackoverflow.com/questions/57058178

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install TensorFlowExperiments

            You can download it from GitHub.
            You can use TensorFlowExperiments like any standard Python library. You will need a development environment consisting of a Python distribution (including header files), a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid making changes to the system Python.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/GjjvdBurg/TensorFlowExperiments.git

          • CLI

            gh repo clone GjjvdBurg/TensorFlowExperiments

          • sshUrl

            git@github.com:GjjvdBurg/TensorFlowExperiments.git
