
tensorflow | An Open Source Machine Learning Framework for Everyone | Machine Learning library

by tensorflow | C++ | Version: v2.9.0-rc1 | License: Apache-2.0

kandi X-RAY | tensorflow Summary

tensorflow is a C++ library typically used in Artificial Intelligence, Machine Learning, Deep Learning, and Tensorflow applications. tensorflow has a permissive license and medium support; however, it has 210 bugs and 5 vulnerabilities. You can download it from GitHub.
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications. TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization to conduct machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well. TensorFlow provides stable Python and C++ APIs, as well as non-guaranteed backward compatible API for other languages. Keep up-to-date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.

Support

  • tensorflow has a moderately active ecosystem.
  • It has 164,372 stars, 86,673 forks, and 7,863 watchers.
  • There were 2 major releases in the last 6 months.
  • There are 2,240 open issues and 32,273 closed issues; on average, issues are closed in 213 days. There are 191 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of tensorflow is v2.9.0-rc1.

Quality

  • tensorflow has 210 bugs (14 blocker, 3 critical, 121 major, 72 minor) and 7277 code smells.

Security

  • Neither tensorflow nor its dependent libraries have publicly reported vulnerabilities.
  • However, kandi's code analysis flags 5 unresolved vulnerabilities (0 blocker, 3 critical, 2 major, 0 minor).
  • There are 300 security hotspots that need review.

License

  • tensorflow is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • tensorflow releases are available to install and integrate.
  • Installation instructions, examples and code snippets are available.
Top functions reviewed by kandi - BETA

kandi has reviewed tensorflow and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality tensorflow implements, and to help you decide whether it suits your requirements.

  • Run a single model iteration.
  • Split the given computation into multiple tensors.
  • Decorate a function.
  • Compute an RNN layer.
  • Compute the eigenvalues of a Hermitian matrix.
  • Decorator for functions.
  • Produce an RNN layer.
  • Return the gradient of an einsum operator.
  • Create a CSV dataset.
  • Extract inputs and attrs.

tensorflow Key Features

An Open Source Machine Learning Framework for Everyone

Install

$ pip install tensorflow
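
To confirm the installation worked, a quick sanity check (adapted from the TensorFlow install docs) is to print the version and run a small op:

import tensorflow as tf

# Print the installed version and execute one tensor op eagerly.
print(tf.__version__)
print(tf.reduce_sum(tf.random.normal([1000, 1000])).numpy())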

WebSocket not working when trying to send generated answer by keras

while True:
    question = input("")
    ints = predict(question)            # predict() and response() are the asker's
    answer = response(ints, json_data)  # chatbot helpers (defined elsewhere)
    print(answer)
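
To actually send the generated answer over a WebSocket rather than printing it, here is a minimal sketch using the third-party websockets package (a recent version with single-argument handlers is assumed; this is not part of the original snippet, and predict(), response(), and json_data are assumed to exist as above):

import asyncio
import websockets

async def handler(websocket):
    # Each incoming message is treated as a question; the generated
    # answer is sent back on the same connection.
    async for question in websocket:
        ints = predict(question)            # assumed chatbot helper
        answer = response(ints, json_data)  # assumed chatbot helper
        await websocket.send(answer)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())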

Could not resolve com.google.guava:guava:30.1-jre - Gradle project sync failed. Basic functionality will not work properly - in kotlin project

    repositories {
        mavenCentral()
        google()
    }

-----------------------
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    repositories {
        mavenCentral()
        google()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:7.1.1'

        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        mavenCentral()
        google()
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}

Tensorflow setup on RStudio/ R | CentOS

sudo yum install epel-release
sudo yum install R
sudo yum install libxml2-devel
sudo yum install openssl-devel
sudo yum install libcurl-devel
sudo yum install libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
conda init
conda create --name tf
conda activate tf
conda install -c conda-forge tensorflow
sudo yum install centos-release-scl
sudo yum install devtoolset-7-gcc*
scl enable devtoolset-7 R
install.packages("remotes")
remotes::install_github('rstudio/reticulate')
reticulate::use_condaenv("tf", conda = "~/anaconda3/bin/conda")
reticulate::repl_python()
# This works as expected but the command "import tensorflow" crashes R
# Error: *** caught segfault *** address 0xf8, cause 'memory not mapped'

# Also tried:
install.packages("devtools")
devtools::install_github('rstudio/tensorflow')
devtools::install_github('rstudio/keras')
library(tensorflow)
install_tensorflow() # "successful"
tensorflow::tf_config()
# Error: *** caught segfault *** address 0xf8, cause 'memory not mapped'
devtools::install_github('rstudio/tensorflow@v2.4.0')
devtools::install_github('rstudio/keras@v2.4.0')
library(tensorflow)
tf_config()
# Error: *** caught segfault *** address 0xf8, cause 'memory not mapped'
# deactivate conda
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm 
export R_VERSION=4.0.0
curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-${R_VERSION}-1-1.x86_64.rpm
sudo yum install R-${R_VERSION}-1-1.x86_64.rpm

scl enable devtoolset-7 /opt/R/4.0.0/bin/R
install.packages("devtools")
devtools::install_github('rstudio/reticulate')
reticulate::use_condaenv("tf", conda = "~/anaconda3/bin/conda")
reticulate::repl_python()
# 'import tensorflow' resulted in "core dumped"
-----------------------
yum install epel-release
yum install R
yum install libxml2-devel
yum install openssl-devel
yum install libcurl-devel
yum install libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
yum install conda
conda clean -a     # Clean cache and remove old packages, if you already have conda installed
# Install all the packages together and let conda handle versioning. It is important to give a Python version while setting up the environment. Since Tensorflow supports python 3.9.0, I have used this version 
conda create -y -n "tf" python=3.9.0 ipython tensorflow keras r-essentials r-reticulate r-tensorflow
conda activate tf
iptables -A INPUT -p tcp --dport 7878 -j ACCEPT
/sbin/service iptables save
/usr/lib/rstudio-server/bin/rserver \
   --server-daemonize=0 \
   --www-port 7878 \
   --rsession-which-r=$(which R) \
   --rsession-ld-library-path=$CONDA_PREFIX/lib
install.packages("reticulate")
install.packages("tensorflow")
library(reticulate)
library(tensorflow)
ts <- reticulate::import("tensorflow")

Saving model on Tensorflow 2.7.0 with data augmentation layer

import tensorflow as tf
import numpy as np

class RandomColorDistortion(tf.keras.layers.Layer):
    def __init__(self, contrast_range=[0.5, 1.5], 
                 brightness_delta=[-0.2, 0.2], **kwargs):
        super(RandomColorDistortion, self).__init__(**kwargs)
        self.contrast_range = contrast_range
        self.brightness_delta = brightness_delta
    
    def call(self, images, training=None):
        if not training:
            return images
        contrast = np.random.uniform(
            self.contrast_range[0], self.contrast_range[1])
        brightness = np.random.uniform(
            self.brightness_delta[0], self.brightness_delta[1])
        
        images = tf.image.adjust_contrast(images, contrast)
        images = tf.image.adjust_brightness(images, brightness)
        images = tf.clip_by_value(images, 0, 1)
        return images
    
    def get_config(self):
        config = super(RandomColorDistortion, self).get_config()
        config.update({"contrast_range": self.contrast_range, "brightness_delta": self.brightness_delta})
        return config
        
input_shape_rgb = (256, 256, 3)
data_augmentation_rgb = tf.keras.Sequential(
  [ 
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomFlip("vertical"),
    tf.keras.layers.RandomRotation(0.5),
    tf.keras.layers.RandomZoom(0.5),
    tf.keras.layers.RandomContrast(0.5),
    RandomColorDistortion(name='random_contrast_brightness/none'),
  ]
)
input_shape = (256, 256, 3)
padding = 'same'
kernel_size = 3
model = tf.keras.Sequential([
  tf.keras.layers.Input(input_shape),
  data_augmentation_rgb,
  tf.keras.layers.Rescaling((1./255)),
  tf.keras.layers.Conv2D(16, kernel_size, padding=padding, activation='relu', strides=1, 
     data_format='channels_last'),
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.BatchNormalization(),

  tf.keras.layers.Conv2D(32, kernel_size, padding=padding, activation='relu'), # best 4
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.BatchNormalization(),

  tf.keras.layers.Conv2D(64, kernel_size, padding=padding, activation='relu'), # best 3
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.BatchNormalization(),

  tf.keras.layers.Conv2D(128, kernel_size, padding=padding, activation='relu'), # best 3
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.BatchNormalization(),

  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'), # best 1
  tf.keras.layers.Dropout(0.1),
  tf.keras.layers.Dense(128, activation='relu'), # best 1
  tf.keras.layers.Dropout(0.1),
  tf.keras.layers.Dense(64, activation='relu'), # best 1
  tf.keras.layers.Dropout(0.1),
  tf.keras.layers.Dense(5, activation = 'softmax')
 ])

model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
model.save("test", save_format='h5')
model = tf.keras.models.load_model('test.h5', custom_objects={'RandomColorDistortion': RandomColorDistortion})
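
As an alternative to passing custom_objects explicitly, the custom layer can be registered with tf.keras.utils.custom_object_scope; a short sketch, assuming the RandomColorDistortion class above is in scope:

import tensorflow as tf

# Register the custom layer for the duration of the load.
with tf.keras.utils.custom_object_scope({'RandomColorDistortion': RandomColorDistortion}):
    model = tf.keras.models.load_model('test.h5')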

ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
    BatchNormalization, SeparableConv2D, MaxPooling2D, Activation, Flatten, Dropout, Dense
)
from tensorflow.keras import backend as K


class CancerNet:
    @staticmethod
    def build(width, height, depth, classes):
        model = Sequential()
        shape = (height, width, depth)
        channelDim = -1

        if K.image_data_format() == "channels_first":
            shape = (depth, height, width)
            channelDim = 1

        model.add(SeparableConv2D(32, (3, 3), padding="same", input_shape=shape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        model.add(SeparableConv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(SeparableConv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        model.add(Flatten())
        model.add(Dense(256))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))

        model.add(Dense(classes))
        model.add(Activation("softmax"))

        return model

# Build the network via the static factory method; the dimensions and class
# count below are illustrative placeholders.
model = CancerNet.build(width=48, height=48, depth=3, classes=2)
-----------------------
from tensorflow.keras.layers import BatchNormalization
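
A quick way to verify the corrected import path, as a minimal sketch:

import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization

# The layer imports and runs without touching keras.layers.normalization.
bn = BatchNormalization(axis=-1)
print(bn(tf.zeros((2, 4)), training=True).shape)  # (2, 4)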

Accuracy in Calculating Fourth Derivative using Finite Differences in Tensorflow

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

start = tf.constant(0.0, dtype = tf.float64)
end = tf.constant(30.0, dtype = tf.float64)
x = tf.linspace(start, end, 1000)
with tf.compat.v1.Session() as sess2:
  x = tf.Variable(tf.linspace(0, 30, 1000))
  sess2.run(tf.compat.v1.initialize_all_variables())
  with tf.GradientTape() as t4:
    with tf.GradientTape() as t3:
      with tf.GradientTape() as t2:
        with tf.GradientTape() as t1:
          y = tf.tanh(x)

        der1 = t1.gradient(y, x)
      der2 = t2.gradient(der1, x)
    der3 = t3.gradient(der2, x)
  der4 = t4.gradient(der3, x)
  print(der4)

  plt.plot(sess2.run(der4))
x = np.linspace(0.0, 30, 1000)
sech = 1/np.cosh(x)
theoretical = 16*np.tanh(x) * np.power(sech, 4) - 8*np.power(np.tanh(x), 3)*np.power(sech,2)

# from_finite_diff and from_autodiff are the asker's precomputed fourth-
# derivative arrays (not shown in this snippet)
finite_diff_err = theoretical[2:-2] - from_finite_diff
autodiff_err = theoretical[2:-2] - from_autodiff[2:-2]

print('Max err with autodiff: %s' % np.max(np.abs(autodiff_err)))
print('Max err with finite difference: %s' % np.max(np.abs(finite_diff_err)))

line, = plt.plot(np.log10(np.abs(autodiff_err)))
line.set_label('Autodiff log error')
line2, = plt.plot(np.log10(np.abs(finite_diff_err)))
line2.set_label('Finite difference log error')
plt.legend()
Max err with autodiff: 3.1086244689504383e-15
Max err with finite difference: 0.007830900165363808
-----------------------
x = tf.linspace(tf.constant(0.0, dtype='float64'), 
                tf.constant(30, dtype='float64'), 1000)
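
For reference, a purely eager TF2 sketch of the same computation (no tf.compat.v1 session), keeping everything in float64:

import tensorflow as tf

x = tf.Variable(tf.linspace(tf.constant(0.0, dtype=tf.float64),
                            tf.constant(30.0, dtype=tf.float64), 1000))
with tf.GradientTape() as t4:
    with tf.GradientTape() as t3:
        with tf.GradientTape() as t2:
            with tf.GradientTape() as t1:
                y = tf.tanh(x)
            der1 = t1.gradient(y, x)
        der2 = t2.gradient(der1, x)
    der3 = t3.gradient(der2, x)
der4 = t4.gradient(der3, x)
print(der4.numpy())  # fourth derivative of tanh, evaluated at 1000 points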

AssertionError: Tried to export a function which references untracked resource

AssertionError: Tried to export a function which references untracked resource Tensor("77040:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.
-----------------------
class Model(keras.Model):
    inputs_embedding: InputsEmbedding = None  # <-- This caused the problem

    def __init__(self, config, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if config.embeddings is not None:
            self.inputs_embedding = InputsEmbedding(config.embeddings)
        # ...
import tempfile

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers


class ModelA(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model_layer = layers.LayerNormalization()

    def call(self, inputs, training=None, mask=None):
        return self.model_layer(inputs)

    def get_config(self):
        return dict()


class ModelB(tf.keras.Model):
    model_layer: layers.Layer = None

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # This is probably working because layers.Lambda has no trainable variables
        self.model_layer = layers.Lambda(lambda x: x)

    def call(self, inputs, training=None, mask=None):
        return self.model_layer(inputs)

    def get_config(self):
        return dict()


class ModelC(tf.keras.Model):
    model_layer: layers.Layer = None

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model_layer = layers.LayerNormalization()

    def call(self, inputs, training=None, mask=None):
        return self.model_layer(inputs)

    def get_config(self):
        return dict()


class ModelD(tf.keras.Model):
    model_layer: layers.Layer

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model_layer = layers.LayerNormalization()

    def call(self, inputs, training=None, mask=None):
        return self.model_layer(inputs)

    def get_config(self):
        return dict()


def save_tmp_model(model: tf.keras.Model):
    name = model.__class__.__name__
    print(f'Saving model {name}')
    try:
        model.save(tempfile.mkdtemp(prefix=f"{name}_"))
    except Exception as e:
        print(f"Unable to save model: {name}")
        print('Error message:')
        print(str(e))
        return

    print(f".. success!")


def main():
    inputs = np.random.rand(1, 50, 16)

    model_a = ModelA()
    model_b = ModelB()
    model_c = ModelC()
    model_d = ModelD()

    # Build models
    model_a(inputs)
    model_b(inputs)
    model_c(inputs)
    model_d(inputs)

    # Save models
    save_tmp_model(model_a)
    save_tmp_model(model_b)
    save_tmp_model(model_c)
    save_tmp_model(model_d)


if __name__ == '__main__':
    main()
Saving model ModelA
.. success!
Saving model ModelB
.. success!
Saving model ModelC
Unable to save model: ModelC
Error message:
Tried to export a function which references untracked resource Tensor("1198:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.

Trackable Python objects referring to this tensor (from gc.get_referrers, limited to two hops):
<tf.Variable 'model_c/layer_normalization_1/gamma:0' shape=(16,) dtype=float32>
Saving model ModelD
.. success!

Stopping and starting a deep learning google cloud VM instance causes tensorflow to stop recognizing GPU

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
-----------------------
gsutil cp gs://dl-platform-public-nvidia/b191551132/restart_patch.sh /tmp/restart_patch.sh

chmod +x /tmp/restart_patch.sh

sudo /tmp/restart_patch.sh

sudo service jupyter restart
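
After either workaround, a one-liner sketch to confirm TensorFlow can see the GPU again:

import tensorflow as tf

# An empty list here means TensorFlow still cannot see the GPU.
print(tf.config.list_physical_devices('GPU'))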

Tensorflow - Multi-GPU doesn’t work for model(inputs) nor when computing the gradients

import numpy as np
import tensorflow as tf

mirrored_strategy = tf.distribute.MirroredStrategy()
print(f'using distribution strategy\nnumber of gpus:{mirrored_strategy.num_replicas_in_sync}')

dataset=tf.data.Dataset.from_tensor_slices(np.random.rand(64,224,224,3)).batch(8)

#create distributed dataset
ds = mirrored_strategy.experimental_distribute_dataset(dataset)

#make variables mirrored
with mirrored_strategy.scope():
  resnet50=tf.keras.applications.resnet50.ResNet50()

def step_fn(pre_images):
  with tf.GradientTape(watch_accessed_variables=False) as tape:
       tape.watch(pre_images)
       outputs = resnet50(pre_images)[:,0:1]
  return tf.squeeze(tape.batch_jacobian(outputs, pre_images))

#define distributed step function using strategy.run and strategy.gather
@tf.function
def distributed_step_fn(pre_images):
  per_replica_grads = mirrored_strategy.run(step_fn, args=(pre_images,))
  return mirrored_strategy.gather(per_replica_grads,0)

#loop over distributed dataset with distributed_step_fn
for result in map(distributed_step_fn,ds):
  print(result.numpy().shape)

Tensorflow GPU Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found

Move to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
Rename the file cusolver64_11.dll to cusolver64_10.dll
-----------------------
CUDA Toolkit 11.0 (May 2020)
cuDNN v8.0.4 (September 28th, 2020), for CUDA 11.0 
-----------------------
c> python
Python 3.9.4 | packaged by conda-forge | (default, May 10 2021, 22:10:34) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2021-05-28 08:11:24.517894: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
>>> print(tf.version.VERSION)
2.5.0
>>> print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')),
...       '\nDevice: ', tf.config.list_physical_devices('GPU'))
2021-05-28 08:12:19.501812: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library nvcuda.dll
2021-05-28 08:12:19.530869: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1080 with Max-Q Design computeCapability: 6.1
coreClock: 1.468GHz coreCount: 20 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 298.32GiB/s
2021-05-28 08:12:19.531377: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2021-05-28 08:12:19.597785: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2021-05-28 08:12:19.597992: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
2021-05-28 08:12:19.618849: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cufft64_10.dll
2021-05-28 08:12:19.634321: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library curand64_10.dll
2021-05-28 08:12:19.677539: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusolver64_11.dll
2021-05-28 08:12:19.731541: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusparse64_11.dll
2021-05-28 08:12:19.746271: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2021-05-28 08:12:19.746674: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
Num GPUs Available:  1
Device:  [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
>>>
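
To exercise the cuSOLVER library specifically (rather than just device discovery), here is a small sketch that runs a Cholesky factorization on the GPU; if the renamed DLL loads correctly, it completes without the "Could not load dynamic library" error:

import tensorflow as tf

with tf.device('/GPU:0'):
    a = tf.random.normal([64, 64])
    # Build a symmetric positive-definite matrix so the factorization is valid.
    spd = tf.matmul(a, a, transpose_b=True) + 64.0 * tf.eye(64)
    chol = tf.linalg.cholesky(spd)
print(chol.device)  # should report a GPU device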

Community Discussions

Trending Discussions on tensorflow
  • What is XlaBuilder for?
  • WebSocket not working when trying to send generated answer by keras
  • Could not resolve com.google.guava:guava:30.1-jre - Gradle project sync failed. Basic functionality will not work properly - in kotlin project
  • Tensorflow setup on RStudio/ R | CentOS
  • Saving model on Tensorflow 2.7.0 with data augmentation layer
  • Is it possible to use a collection of hyperspectral 1x1 pixels in a CNN model purposed for more conventional datasets (CIFAR-10/MNIST)?
  • ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'
  • Accuracy in Calculating Fourth Derivative using Finite Differences in Tensorflow
  • AssertionError: Tried to export a function which references untracked resource
  • Stopping and starting a deep learning google cloud VM instance causes tensorflow to stop recognizing GPU

QUESTION

What is XlaBuilder for?

Asked 2022-Mar-20 at 18:41

What's the XLA class XlaBuilder for? The docs describe its interface but don't provide a motivation.

The presentation in the docs, and indeed the comment above XlaBuilder in the source code

// A convenient interface for building up computations.

suggests it's no more than a utility. However, this doesn't appear to explain its behaviour in other places. For example, we can construct an XlaOp with an XlaBuilder via e.g.

XlaOp ConstantLiteral(XlaBuilder* builder, const LiteralSlice& literal);

Here, it's not clear to me what role builder plays (note that the functions for constructing XlaOps aren't documented in the published docs). Further, when I add two XlaOps (with + or Add), it appears the ops must be constructed with the same builder, or else I see

F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error Invalid argument: No XlaOp with handle -1

Indeed, XlaOp retains a handle for an XlaBuilder. This suggests to me that the XlaBuilder has a more fundamental significance.

Beyond the title question, is there a use case for using multiple XlaBuilders, or would you typically use one global instance for everything?

ANSWER

Answered 2021-Dec-15 at 01:32

XlaBuilder is the C++ API for building up XLA computations -- conceptually this is like building up a function, full of various operations, that you could execute over and over again on different input data.

Some background, XLA serves as an abstraction layer for creating executable blobs that run on various target accelerators (CPU, GPU, TPU, IPU, ...), conceptually kind of an "accelerator virtual machine" with conceptual similarities to earlier systems like PeakStream or the line of work that led to ArBB.

The XlaBuilder is a way to enqueue operations into a "computation" (similar to a function) that you want to run against the various accelerators that XLA can target. The operations at this level are often referred to as "High Level Operations" (HLOs).

The returned XlaOp represents the result of the operation you've just enqueued. (Aside/nerdery: this is a classic technique used in "builder" APIs that represent the program in "Static Single Assignment" form under the hood, the operation itself and the result of the operation can be unified as one concept!)

XLA computations are very similar to functions, so you can think of what you're doing with an XlaBuilder like building up a function. (Aside: they're called "computations" because they do a little bit more than a straightforward function -- conceptually they are coroutines that can talk to an external "host" world and also talk to each other via networking facilities.)

So the fact XlaOps can't be used across XlaBuilders may make more sense with that context -- in the same way that when building up a function you can't grab intermediate results in the internals of other functions, you have to compose them with function calls / parameters. In XlaBuilder you can Call another built computation, which is a reason you might use multiple builders.

As you note, you can choose to inline everything into one "mega builder", but often programs are structured as functions that get composed together and ultimately called from a few different "entry points". XLA currently specializes aggressively for the entry points it sees API users using, but this is a design artifact similar to inlining decisions; XLA could conceptually reuse computations built up / invoked from multiple callers if it decided that was the right thing to do. Usually it's most natural to enqueue things into XLA in whatever way is convenient for your description from the "outside world", and to let XLA inline and aggressively specialize the "entry point" computations you've built up as you execute them, in just-in-time compilation fashion.

Source https://stackoverflow.com/questions/70339753

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install tensorflow

See the TensorFlow install guide for the pip package, for enabling GPU support, for using a Docker container, and for building from source.
You can find more community-supported platforms and configurations in the TensorFlow SIG Build community builds table.

Support

The TensorFlow project strives to abide by generally accepted best practices in open-source software development.

  • © 2022 Open Weaver Inc.