transfer_learning | Transfer Learning , Multi-Modal Learning | Machine Learning library

by hycis | Python | Version: Current | License: No License

kandi X-RAY | transfer_learning Summary


transfer_learning is a Python library typically used in Artificial Intelligence, Machine Learning, Deep Learning, PyTorch, and Generative Adversarial Network applications. transfer_learning has no bugs, no vulnerabilities, and low support. However, a build file for transfer_learning is not available. You can download it from GitHub.

Mozi, Transfer Learning, Multi-Modal Learning, Theano

            kandi-support Support

              transfer_learning has a low active ecosystem.
              It has 28 star(s) with 12 fork(s). There are 5 watchers for this library.
              It had no major release in the last 6 months.
There is 1 open issue and 0 closed issues. On average, issues are closed in 1676 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of transfer_learning is current.

            kandi-Quality Quality

              transfer_learning has 0 bugs and 0 code smells.

            kandi-Security Security

              transfer_learning has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              transfer_learning code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              transfer_learning does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              transfer_learning releases are not available. You will need to build from source code and install.
transfer_learning has no build file. You will need to create the build yourself to build the component from source.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionalities of the libraries and avoid rework.
Currently covering the most popular Java, JavaScript and Python libraries.

            transfer_learning Key Features

            No Key Features are available at this moment for transfer_learning.

            transfer_learning Examples and Code Snippets

            No Code Snippets are available at this moment for transfer_learning.

            Community Discussions

            QUESTION

            Transfer Learning/Fine Tuning - how to keep BatchNormalization in inference mode?
            Asked 2022-Apr-02 at 15:34

In the following tutorial, Transfer learning and fine-tuning by TensorFlow, it is explained that when unfreezing a model that contains BatchNormalization (BN) layers, these should be kept in inference mode by passing training=False when calling the base model.

            […]

            Important notes about BatchNormalization layer

            Many image models contain BatchNormalization layers. That layer is a special case on every imaginable count. Here are a few things to keep in mind.

            • BatchNormalization contains 2 non-trainable weights that get updated during training. These are the variables tracking the mean and variance of the inputs.
            • When you set bn_layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean & variance statistics. This is not the case for other layers in general, as weight trainability & inference/training modes are two orthogonal concepts. But the two are tied in the case of the BatchNormalization layer.
            • When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training=False when calling the base model. Otherwise the updates applied to the non-trainable weights will suddenly destroy what the model has learned.

            […]

In the examples they pass training=False when calling the base model, but later they set base_model.trainable=True, which to my understanding is the opposite of inference mode, because the BN layers will be set to trainable as well.

To my understanding there would have to be 0 trainable_weights and 4 non_trainable_weights for inference mode, which would be identical to setting bn_layer.trainable=False, which they stated would be the case for running the bn_layer in inference mode.

I checked the number of trainable_weights and the number of non_trainable_weights, and they are both 2.

I am confused by the tutorial: how can I really be sure the BN layers are in inference mode when doing fine-tuning on a model?

Does setting training=False on the model override the behavior of bn_layer.trainable=True, so that even if 2 trainable_weights are listed, these would not get updated during training (fine-tuning)?

            Update:

            Here I found some further information: BatchNormalization layer - on keras.io.

            [...]

            About setting layer.trainable = False on a BatchNormalization layer:

            The meaning of setting layer.trainable = False is to freeze the layer, i.e. its internal state will not change during training: its trainable weights will not be updated during fit() or train_on_batch(), and its state updates will not be run.

            Usually, this does not necessarily mean that the layer is run in inference mode (which is normally controlled by the training argument that can be passed when calling a layer). "Frozen state" and "inference mode" are two separate concepts.

            However, in the case of the BatchNormalization layer, setting trainable = False on the layer means that the layer will be subsequently run in inference mode (meaning that it will use the moving mean and the moving variance to normalize the current batch, rather than using the mean and variance of the current batch).

            This behavior has been introduced in TensorFlow 2.0, in order to enable layer.trainable = False to produce the most commonly expected behavior in the convnet fine-tuning use case.

Note that:

• Setting trainable on a model containing other layers will recursively set the trainable value of all inner layers.
• If the value of the trainable attribute is changed after calling compile() on a model, the new value doesn't take effect for this model until compile() is called again.

            Question:

            1. In case I want to fine tune the whole model, so I am going to unfreeze the base_model.trainable = True, would I have to manually set the BN layers to bn_layer.trainable = False in order to keep them in inference mode?
            2. What does happen when with the call of the base_model passing training=False and additionally setting base_model.trainable=True? Do layers like BatchNormalization and Dropout stay in inference mode?
            ...

            ANSWER

            Answered 2022-Feb-06 at 08:59

After reading the documentation and having a look at the source code of TensorFlow's implementations of tf.keras.layers.Layer, tf.keras.layers.Dense, and tf.keras.layers.BatchNormalization, I got the following understanding.

If training = False is passed when calling the layer or the model/base model, it will run in inference mode. This has nothing to do with the attribute trainable, which means something different. It would probably lead to less misunderstanding if the parameter had been called training_mode instead of training. I would have preferred defining it the other way around and calling it inference_mode.

When doing Transfer Learning or Fine-Tuning, training = False should be passed when calling the base model itself. As far as I have seen, this only affects layers like tf.keras.layers.Dropout and tf.keras.layers.BatchNormalization and has no effect on the other layers. Running in inference mode via training = False will result in:

• tf.keras.layers.Dropout not applying dropout at all. As tf.keras.layers.Dropout has no trainable weights, setting the attribute trainable = False has no effect on this layer.
• tf.keras.layers.BatchNormalization normalizing its inputs using the mean and variance of its moving statistics learned during training.

            The attribute trainable will only activate or deactivate updating the trainable weights of a layer.
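To make this concrete, here is a minimal sketch of the fine-tuning setup the answer describes, assuming a MobileNetV2 base and a binary head (the shapes and learning rate are illustrative assumptions following the TensorFlow tutorial's pattern, not code from the answer itself):

import tensorflow as tf

# Assumed example base model; any pre-trained convnet works the same way.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base_model.trainable = True  # fine-tuning: weights become trainable

inputs = tf.keras.Input(shape=(160, 160, 3))
# training=False keeps BatchNormalization (and Dropout) in inference mode,
# so the BN moving mean/variance are not updated even though the BN
# gamma/beta weights remain trainable.
x = base_model(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])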

            Source https://stackoverflow.com/questions/70998847

            QUESTION

            importing model from TF model Zoo and training
            Asked 2021-Aug-17 at 08:02

Referring to this tutorial https://www.tensorflow.org/tutorials/images/transfer_learning, I created and trained a ResNet model

            ...

            ANSWER

            Answered 2021-Aug-17 at 08:02

Use model.predict(img) instead of model(img) when running inference on a single image.
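As a hedged illustration of the difference (the ResNet50 stand-in and the random image are placeholders, not the asker's model or data):

import numpy as np
import tensorflow as tf

# Placeholder stand-in for the trained ResNet; any Keras model works the same way.
model = tf.keras.applications.ResNet50(weights=None, input_shape=(224, 224, 3), classes=10)

img = np.random.rand(224, 224, 3).astype("float32")  # a single image
batch = np.expand_dims(img, axis=0)                   # Keras expects a batch dim: (1, 224, 224, 3)

preds = model.predict(batch)                 # returns a numpy array, runs in inference mode
preds_tensor = model(batch, training=False)  # returns a tf.Tensor with the same predictions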

            Source https://stackoverflow.com/questions/68740677

            QUESTION

            Transfer learning with Keras. What should the final layer activation be?
            Asked 2021-Jun-18 at 08:56

            I refer to Keras documentation here: https://keras.io/guides/transfer_learning/ which demonstrated a typical transfer learning workflow

            First, instantiate a base model with pre-trained weights.

            ...

            ANSWER

            Answered 2021-Jun-18 at 08:56

The model will output a numerical value which is then compared with the binary label, either 1 or 0. The difference creates a loss which is then reduced via backpropagation, driving the model output towards the desired value. I think, however, it is better practice to use a sigmoid activation for the final layer.

On another note, the guides always warn you to initially freeze the weights of the base model for the initial training because of exploding gradients. I have run at least 500 models using transfer learning while leaving the base model trainable. As a matter of fact, in general I get better results training the entire model for N epochs than I do if I first train the model with the base model non-trainable for M epochs and then fine-tune the model for K epochs, where K + M = N. In other words, for the same number of epochs I get better results training the entire model versus the two-step process. The difference, though, is in general fairly small in terms of validation and test accuracy. Further, they say that if you change the base model from not trainable to trainable you have to recompile the model. I do not think that is true, based on the code below.
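The answerer's own code is not reproduced above; as an illustrative sketch of the sigmoid-head setup described (the base model choice, shapes, and optimizer are assumptions):

import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")

inputs = tf.keras.Input(shape=(160, 160, 3))
x = base_model(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

# Single sigmoid unit paired with binary cross-entropy for a two-class problem.
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])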

            Source https://stackoverflow.com/questions/68030740

            QUESTION

            Better understanding of training parameter for Keras-Model call method needed
            Asked 2021-Mar-28 at 00:57

            I'd like to get a better understanding of the parameter training, when calling a Keras model.

In all tutorials (like here) it is explained that when you are doing a custom train step, you should call the model like this (because some layers may behave differently depending on whether you want to do training or inference):

            ...

            ANSWER

            Answered 2021-Mar-28 at 00:57

training is a boolean argument that determines whether the call runs in training mode or inference mode. For example, the Dropout layer is primarily used as a regularizer during model training, randomly dropping units, but at inference or prediction time we don't want that to happen.
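A small sketch (a toy model, not from the original answer) showing how the training flag changes the behavior of a Dropout layer:

import tensorflow as tf

# Toy model with a Dropout layer so the effect of the training flag is visible.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])

x = tf.random.normal((4, 10))

y_train = model(x, training=True)   # training mode: dropout randomly zeroes units
y_infer = model(x, training=False)  # inference mode: dropout is a no-op, output is deterministic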

            Source https://stackoverflow.com/questions/66830302

            QUESTION

            Graph disconnected: cannot obtain value for tensor KerasTensor() Transfer learning
            Asked 2021-Mar-24 at 15:04

            I'm trying to implement transfer learning on my own model but failing. My implementation follows the guides here

            https://keras.io/guides/transfer_learning/

            How to do transfer-learning on our own models?

            https://github.com/anujshah1003/Transfer-Learning-in-keras---custom-data/blob/master/transfer_learning_resnet50_custom_data.py

tensorflow 2.4.1
            Keras 2.4.3

            Old Model (Works really well):

            ...

            ANSWER

            Answered 2021-Mar-21 at 08:57

Here is a simple way to do transfer learning with your model.
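The answer's code is not reproduced here; as a hedged sketch of one common way to avoid the "Graph disconnected" error, reuse the old model's own input and an intermediate layer's output instead of creating a fresh Input tensor (the layer names and shapes below are illustrative assumptions):

import tensorflow as tf

# Hypothetical stand-in for the already-trained "old model".
old_model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu", name="conv1"),
    tf.keras.layers.GlobalAveragePooling2D(name="gap"),
    tf.keras.layers.Dense(10, activation="softmax", name="old_head"),
])

# Build the new model from the existing graph: old_model.input plus an
# intermediate layer's output. Creating a brand-new tf.keras.Input here is
# what typically causes "Graph disconnected".
features = old_model.get_layer("gap").output
new_head = tf.keras.layers.Dense(5, activation="softmax", name="new_head")(features)
new_model = tf.keras.Model(inputs=old_model.input, outputs=new_head)

old_model.trainable = False  # optionally freeze the reused layers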

            Source https://stackoverflow.com/questions/66728679

            QUESTION

            How can MobileNetV2 have the same number of parameters for different custom input shapes?
            Asked 2020-Dec-23 at 00:12

I'm following the tensorflow2 tutorial on fine-tuning and transfer learning using a MobileNetV2 as base architecture.

            The first thing I noticed is that the biggest input shape available for pre-trained 'imagenet' weights is (224, 224, 3). I tried to use a custom shape (640, 640, 3) and as per the documentation, it gives a warning saying that the weights for the (224, 224, 3) shape were loaded.

            So if I load a network like this:

            ...

            ANSWER

            Answered 2020-Dec-10 at 05:30

After checking in more detail, it seems that the number of parameters depends on the kernel sizes and the number of filters of each convolutional layer, as well as the number of neurons in the final fully connected layer, plus some parameters from the Batch Normalization layers in between.

None of these aspects depends on the size of the input images: the spatial resolution of each convolutional layer's output may change, but the size of the convolutional kernels stays the same (e.g. 3x3x3); consequently, the number of parameters is also fixed.

The number of parameters of this kind of network (i.e. Convolutional Neural Networks) is independent of the spatial size of the input. Nevertheless, the number of channels (e.g. 3 for an RGB image) must still be exactly 3.
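A quick sketch illustrating this (using MobileNetV2 with randomly initialized weights so that arbitrary input sizes load without warnings):

import tensorflow as tf

# Same architecture, two different spatial input sizes.
m_small = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                            include_top=False, weights=None)
m_large = tf.keras.applications.MobileNetV2(input_shape=(640, 640, 3),
                                            include_top=False, weights=None)

# Identical parameter counts: kernel sizes, filter counts and BN parameters
# do not depend on the spatial resolution of the input.
print(m_small.count_params(), m_large.count_params())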

            Source https://stackoverflow.com/questions/65153063

            QUESTION

Passing `training=true` when doing tensorflow training
            Asked 2020-Dec-13 at 13:16

TensorFlow's official tutorial says that we should pass base_model(training=False) during training in order for the BN layers not to update their mean and variance. My question is: why? Why don't we need to update the mean and variance? I mean, BN has the ImageNet mean and variance, so why is it useful to keep ImageNet's mean and variance and not update them on the new data? Even during fine-tuning, in this case the whole model updates its weights, but the BN layers will still have the ImageNet mean and variance. Edit: I am using this tutorial: https://www.tensorflow.org/tutorials/images/transfer_learning

            ...

            ANSWER

            Answered 2020-Dec-13 at 13:16

When a model is trained from initialization, batchnorm should be enabled so it can tune its mean and variance, as you mentioned. Fine-tuning or transfer learning is a bit of a different thing: you already have a model that can do more than you need, and you want to perform a particular specialization of the pre-trained model to do your task on your data set. In this case part of the weights are frozen and only some layers closest to the output are changed. Since BN layers are used all around the model, you should freeze them as well. Check this explanation again:

Important note about BatchNormalization layers: Many models contain tf.keras.layers.BatchNormalization layers. This layer is a special case and precautions should be taken in the context of fine-tuning, as shown later in this tutorial.

            When you set layer.trainable = False, the BatchNormalization layer will run in inference mode, and will not update its mean and variance statistics.

            When you unfreeze a model that contains BatchNormalization layers in order to do fine-tuning, you should keep the BatchNormalization layers in inference mode by passing training = False when calling the base model. Otherwise, the updates applied to the non-trainable weights will destroy what the model has learned.
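One way to convince yourself of this behavior (a hedged sketch, not part of the original answer) is to check that the moving statistics of a BatchNormalization layer do not change when the base model is called with training=False, even while base_model.trainable is True:

import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base_model.trainable = True  # unfrozen, as during fine-tuning

# Snapshot the moving mean of the first BatchNormalization layer.
bn = next(l for l in base_model.layers
          if isinstance(l, tf.keras.layers.BatchNormalization))
before = bn.moving_mean.numpy().copy()

x = tf.random.uniform((2, 160, 160, 3))
_ = base_model(x, training=False)  # inference mode: moving statistics are not updated
after = bn.moving_mean.numpy()

print((before == after).all())  # True: the moving mean is unchanged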

            Source: transfer learning, details regarding freeze

            Source https://stackoverflow.com/questions/65274684

            QUESTION

            Tensorflow: Classifying images in batches
            Asked 2020-Dec-09 at 18:01

            I have followed this TensorFlow tutorial to classify images using transfer learning approach. Using almost 16,000 manually classified images (with about 40/60 split of 1/0) added on top of the pre-trained MobileNet V2 model, my model achieved 96% accuracy on the hold out test set. I then saved the resulting model.

Next, I would like to use this trained model to classify new images. To do so, I have adapted one of the portions of the tutorial's code (at the end, where it says #Retrieve a batch of images from the test set) in the way described below. The code works; however, it only processes one batch of 32 images and that's it (there are hundreds of images in the source folder). What am I missing here? Please advise.

            ...

            ANSWER

            Answered 2020-Dec-09 at 18:01
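The accepted answer's text is not preserved in this snapshot. As a hedged sketch of one common fix for the one-batch behavior, iterate over (or call predict on) the whole dataset; the paths, image size, and the assumption that the model outputs logits follow the tutorial and are illustrative only:

import numpy as np
import tensorflow as tf

# Hypothetical paths: the saved model from the tutorial and a folder of new images.
model = tf.keras.models.load_model("saved_model_dir")
new_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "path/to/new_images", labels=None, image_size=(160, 160),
    batch_size=32, shuffle=False)

# Iterating over the dataset covers every batch, not just the first 32 images.
all_scores = []
for image_batch in new_ds:
    logits = model.predict_on_batch(image_batch)
    all_scores.append(tf.nn.sigmoid(logits).numpy())  # tutorial's model outputs logits

scores = np.concatenate(all_scores).ravel()
predictions = (scores > 0.5).astype(int)
print(len(predictions))  # one prediction per image in the folder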

            QUESTION

            Understanding the input_shape parameter of hub.KerasLayer
            Asked 2020-Jul-16 at 13:08

When doing transfer learning, one can use a model from TF Hub, like MobileNetV2 or Inception. These models expect their inputs, the images, in a certain size, so one has to resize the images to this size before applying the models. In this tutorial the following is used:

            ...

            ANSWER

            Answered 2020-Jul-16 at 13:08

            This is a good observation.

TL;DR: different input shapes can be passed to models from tf.keras.applications with the argument include_top = False, but that is not possible when we use tf.keras.applications with include_top = True, or when we use models from TensorFlow Hub.
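A short sketch of the TL;DR for the tf.keras.applications case (the model choice is an example, not taken from the original answer):

import tensorflow as tf

# include_top=False: the convolutional base accepts a custom input shape.
flexible = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")

# include_top=True: the pre-trained classification head fixes the input to the
# shape the network was trained with (224x224x3 for MobileNetV2 on ImageNet).
fixed = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=True, weights="imagenet")

print(flexible.input_shape, fixed.input_shape)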

            Detailed Explanation:

            This Tensorflow Hub Documentation states

            Source https://stackoverflow.com/questions/62850250

            QUESTION

            Number of units in the last dense layer in case of binary classification
            Asked 2020-Jul-11 at 14:10

            My question is related to this one here. I am using the cats and dogs dataset. So there are only these two outcomes. I found two implementations. First one uses:

            ...

            ANSWER

            Answered 2020-Jul-11 at 14:10

Both are correct. One is using binary classification and the other is using categorical classification. Let's look at the differences.

Binary Classification: In this case, the output layer has only one neuron. From this single neuron's output, you have to decide whether it's a cat or a dog. You can set any threshold level to classify the output. Let's say cats are labeled as 0 and dogs are labeled as 1, and your threshold value is 0.5. So, if the output is greater than 0.5, then it's a dog because it's closer to 1; otherwise it's a cat. In this case, binary_crossentropy is used most of the time.

Categorical Classification: The number of output neurons is exactly the same as the number of classes. This time you're not allowed to label your data as 0 or 1. The label shape should be the same as the output layer's. In your case, your output layer has two neurons (one per class). You will have to label your data in the same way. To achieve this, you will have to encode your label data. This is called one-hot encoding. The cats will be encoded as (1,0) and the dogs will be encoded as (0,1), for example. Now your prediction will consist of two floating-point numbers. If the first number is greater than the second, it's a cat; otherwise it's a dog. We call these numbers confidence scores. Let's say, for a test image, your model predicted (0.70, 0.30). This means your model is 70% confident that it's a cat and 30% confident that it's a dog. Please note that the values of the output layer completely depend on the activation of your layer. To learn more, please read about activation functions.
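To illustrate the two setups side by side (a minimal sketch with an assumed toy backbone, not the asker's actual models):

import tensorflow as tf

def make_model(units, activation):
    # Tiny assumed backbone just to show the two output heads.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(160, 160, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(units, activation=activation),
    ])

# Binary head: one sigmoid neuron, integer labels 0/1, binary_crossentropy.
binary_model = make_model(1, "sigmoid")
binary_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Categorical head: one neuron per class, one-hot labels (e.g. cat=(1, 0), dog=(0, 1)),
# categorical_crossentropy.
categorical_model = make_model(2, "softmax")
categorical_model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])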

            Source https://stackoverflow.com/questions/62850038

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install transfer_learning

            You can download it from GitHub.
            You can use transfer_learning like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check for and ask questions on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/hycis/transfer_learning.git

          • CLI

            gh repo clone hycis/transfer_learning

          • sshUrl

            git@github.com:hycis/transfer_learning.git
