Explore all TensorFlow open source software, libraries, packages, source code, cloud functions, and APIs.

Popular New Releases in TensorFlow

tensorflow

TensorFlow 2.9.0-rc1

models

TensorFlow Official Models 2.7.1

transformers

v4.18.0: Checkpoint sharding, vision models

keras

Keras Release 2.9.0 RC2

faceswap

Faceswap Windows and Linux Installers v2.0.0

Popular Libraries in TensorFlow

tensorflow

by tensorflow · C++ · 164372 stars · Apache-2.0

An Open Source Machine Learning Framework for Everyone

models

by tensorflow · Python · 73392 stars · NOASSERTION

Models and examples built with TensorFlow

transformers

by huggingface · Python · 61400 stars · Apache-2.0

šŸ¤— Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

keras

by keras-team · Python · 55007 stars · Apache-2.0

Deep Learning for humans

TensorFlow-Examples

by aymericdamien · Jupyter Notebook · 41052 stars · NOASSERTION

TensorFlow Tutorial and Examples for Beginners (support TF v1 & v2)

faceswap

by deepfakes · Python · 38275 stars · GPL-3.0

Deepfakes Software For All

caffe

by BVLC · C++ · 31723 stars · NOASSERTION

Caffe: a fast open framework for deep learning.

Deep-Learning-Papers-Reading-Roadmap

by floodsung · Python · 30347 stars

Deep Learning papers reading roadmap for anyone who is eager to learn this amazing tech!

bert

by google-research · Python · 28940 stars · Apache-2.0

TensorFlow code and pre-trained models for BERT

Trending New Libraries in TensorFlow

yolov5

by ultralytics · Python · 25236 stars · GPL-3.0

YOLOv5 šŸš€ in PyTorch > ONNX > CoreML > TFLite

eat_tensorflow2_in_30_days

by lyhue1991 · Python · 8872 stars · Apache-2.0

Tensorflow2.0 šŸŽšŸŠ is delicious, just eat it! šŸ˜‹šŸ˜‹

detr

by facebookresearch · Python · 7464 stars · Apache-2.0

End-to-End Object Detection with Transformers

CLIP

by openai · Jupyter Notebook · 7185 stars · MIT

Contrastive Language-Image Pretraining

DeepSpeed

by microsoft · Python · 6633 stars · MIT

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

gpt-neo

by EleutherAI · Python · 6100 stars · MIT

An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.

PaddleGAN

by PaddlePaddle · Python · 5212 stars · Apache-2.0

PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer, Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and so on.

Real-Time-Person-Removal

by jasonmayes · JavaScript · 4798 stars · Apache-2.0

Removing people from complex backgrounds in real time using TensorFlow.js in the web browser

MegEngine

by MegEngine · C++ · 4177 stars · NOASSERTION

MegEngine is a fast, scalable, easy-to-use deep learning framework with automatic differentiation.

Top Authors in TensorFlow

1. PacktPublishing · 204 Libraries · 13498 stars
2. microsoft · 95 Libraries · 86990 stars
3. IBM · 87 Libraries · 3964 stars
4. aws-samples · 75 Libraries · 2325 stars
5. google · 70 Libraries · 70986 stars
6. tensorflow · 69 Libraries · 384312 stars
7. facebookresearch · 69 Libraries · 42215 stars
8. llSourcell · 55 Libraries · 10064 stars
9. taki0112 · 50 Libraries · 14754 stars
10. titu1994 · 46 Libraries · 8104 stars


Trending Kits in TensorFlow

No Trending Kits are available at this moment for TensorFlow.

Trending Discussions on TensorFlow

What is XlaBuilder for?

WebSocket not working when trying to send generated answer by keras

Could not resolve com.google.guava:guava:30.1-jre - Gradle project sync failed. Basic functionality will not work properly - in kotlin project

Tensorflow setup on RStudio/ R | CentOS

Saving model on Tensorflow 2.7.0 with data augmentation layer

Is it possible to use a collection of hyperspectral 1x1 pixels in a CNN model purposed for more conventional datasets (CIFAR-10/MNIST)?

ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'

Accuracy in Calculating Fourth Derivative using Finite Differences in Tensorflow

AssertionError: Tried to export a function which references untracked resource

Stopping and starting a deep learning google cloud VM instance causes tensorflow to stop recognizing GPU

QUESTION

What is XlaBuilder for?

Asked 2022-Mar-20 at 18:41

What's the XLA class XlaBuilder for? The docs describe its interface but don't provide a motivation.

The presentation in the docs, and indeed the comment above XlaBuilder in the source code

// A convenient interface for building up computations.

suggests it's no more than a utility. However, this doesn't appear to explain its behaviour in other places. For example, we can construct an XlaOp with an XlaBuilder via e.g.

XlaOp ConstantLiteral(XlaBuilder* builder, const LiteralSlice& literal);

Here, it's not clear to me what role builder plays (note that functions for constructing XlaOps aren't documented in the published docs). Further, when I add two XlaOps (with + or Add), it appears the ops must be constructed with the same builder, or else I see

F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error Invalid argument: No XlaOp with handle -1

Indeed, XlaOp retains a handle to an XlaBuilder. This suggests to me that the XlaBuilder has a more fundamental significance.

Beyond the title question, is there a use case for using multiple XlaBuilders, or would you typically use one global instance for everything?

ANSWER

Answered 2021-Dec-15 at 01:32

XlaBuilder is the C++ API for building up XLA computations -- conceptually this is like building up a function, full of various operations, that you could execute over and over again on different input data.

Some background: XLA serves as an abstraction layer for creating executable blobs that run on various target accelerators (CPU, GPU, TPU, IPU, ...), conceptually a kind of "accelerator virtual machine" with similarities to earlier systems like PeakStream or the line of work that led to ArBB.

The XlaBuilder is a way to enqueue operations into a "computation" (similar to a function) that you want to run against the various accelerators that XLA can target. The operations at this level are often referred to as "High Level Operations" (HLOs).

The returned XlaOp represents the result of the operation you've just enqueued. (Aside/nerdery: this is a classic technique used in "builder" APIs that represent the program in "Static Single Assignment" form under the hood; the operation itself and the result of the operation can be unified as one concept!)

XLA computations are very similar to functions, so you can think of what you're doing with an XlaBuilder like building up a function. (Aside: they're called "computations" because they do a little bit more than a straightforward function -- conceptually they are coroutines that can talk to an external "host" world and also talk to each other via networking facilities.)

So the fact that XlaOps can't be used across XlaBuilders may make more sense with that context: in the same way that, when building up a function, you can't grab intermediate results from the internals of other functions, you have to compose them with function calls / parameters. In XlaBuilder you can Call another built computation, which is one reason you might use multiple builders.

As you note, you can choose to inline everything into one "mega builder", but often programs are structured as functions that get composed together and ultimately get called from a few different "entry points". XLA currently specializes aggressively for the entry points it sees API users using, but this is a design artifact similar to inlining decisions; XLA could conceptually reuse computations built up / invoked from multiple callers if it thought that was the right thing to do. Usually it's most natural to enqueue things into XLA in whatever way is convenient for your description from the "outside world", and allow XLA to inline and aggressively specialize the "entry point" computations you've built up as you execute them, in just-in-time compilation fashion.
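
For intuition from the Python side, jax.jit does this builder-style staging for you: it traces a Python function once into an XLA computation, which is then compiled and executed repeatedly on new inputs. A minimal sketch, assuming JAX is installed (an illustration of the concept, not the C++ XlaBuilder API itself):

# jax.jit traces `f` into a single XLA computation (a graph of HLOs),
# compiles it once, and reuses the executable on later calls: the
# "build a function, run it many times" model described above.
import jax
import jax.numpy as jnp

@jax.jit
def f(x, y):
    return (x + y) * 2.0  # staged as HLO ops inside one computation

print(f(jnp.ones(3), jnp.arange(3.0)))  # first call compiles; later calls reuse it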

Source https://stackoverflow.com/questions/70339753

QUESTION

WebSocket not working when trying to send generated answer by keras

Asked 2022-Feb-17 at 12:52

I am implementing a simple chatbot using Keras and WebSockets. I now have a model that can make a prediction about the user input and send the corresponding answer.

When I do it through the command line it works fine; however, when I try to send the answer through my WebSocket, the WebSocket doesn't even start anymore.

Here is my working WebSocket code:

@sock.route('/api')
def echo(sock):
    while True:
        # get user input from browser
        user_input = sock.receive()
        # print user input on console
        print(user_input)
        # read answer from console
        response = input()
        # send response to browser
        sock.send(response)

Here is my code to communicate with the Keras model on the command line:

while True:
    question = input("")
    ints = predict(question)
    answer = response(ints, json_data)
    print(answer)

The methods used are these:

def predict(sentence):
    bag_of_words = convert_sentence_in_bag_of_words(sentence)
    # pass bag as list and get index 0
    prediction = model.predict(np.array([bag_of_words]))[0]
    ERROR_THRESHOLD = 0.25
    accepted_results = [[tag, probability] for tag, probability in enumerate(prediction) if probability > ERROR_THRESHOLD]

    accepted_results.sort(key=lambda x: x[1], reverse=True)

    output = []
    for accepted_result in accepted_results:
        output.append({'intent': classes[accepted_result[0]], 'probability': str(accepted_result[1])})
        print(output)
    return output


def response(intents, json):
    tag = intents[0]['intent']
    intents_as_list = json['intents']
    for i in intents_as_list:
        if i['tag'] == tag:
            res = random.choice(i['responses'])
            break
    return res

So when I start the WebSocket with the working code, I get this output:

 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Serving Flask app 'server' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on

But as soon as I have anything from my model in server.py, I get this output:

 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Serving Flask app 'server' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
2022-02-13 11:31:38.887640: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-02-13 11:31:38.887734: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Metal device set to: Apple M1

systemMemory: 16.00 GB
maxCacheSize: 5.33 GB

It is enough to just have an import at the top like this: from chatty import response, predict, even though they are unused.

ANSWER

Answered 2022-Feb-16 at 19:53

There is no problem with your WebSocket route. Could you please share how you are triggering this route? WebSocket is a different protocol, and I suspect that you are using an HTTP client to test the WebSocket. For example, in Postman:

[Image: Postman's "New" request screen]

HTTP requests are different from WebSocket requests, so you should use an appropriate client to test a WebSocket.
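
For example, a quick smoke test from Python with the third-party websocket-client package (pip install websocket-client); a minimal sketch, assuming the Flask app from the question is running locally and serving the /api route:

# Connect to the WebSocket route and exchange one message.
from websocket import create_connection  # from the websocket-client package

ws = create_connection("ws://127.0.0.1:5000/api")
ws.send("Hello bot")  # arrives at sock.receive() in the route
print(ws.recv())      # prints whatever the route passed to sock.send()
ws.close()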

Source https://stackoverflow.com/questions/71099818

QUESTION

Could not resolve com.google.guava:guava:30.1-jre - Gradle project sync failed. Basic functionality will not work properly - in kotlin project

Asked 2022-Feb-14 at 19:47

This project used to build fine in the past, but after updating, the following errors appear.

plugins {
    id 'com.android.application'
    id 'kotlin-android'
}

android {
    compileSdkVersion 30
    buildToolsVersion "30.0.3"

    defaultConfig {
        applicationId "com.example.retrofit_test"
        minSdkVersion 21
        targetSdkVersion 30
        versionCode 1
        versionName "1.0"

        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = '1.8'
    }
}

dependencies {

//    implementation 'com.google.guava:guava:30.1.1-jre'
    implementation 'com.google.guava:guava:30.1-jre'

//    implementation 'org.jetbrains.kotlin:kotlin-stdlib:1.5.30-M1'

    implementation 'androidx.core:core-ktx:1.6.0'
    implementation 'androidx.appcompat:appcompat:1.3.1'
    implementation 'com.google.android.material:material:1.4.0'
    implementation 'androidx.constraintlayout:constraintlayout:2.1.0'
    implementation 'androidx.navigation:navigation-fragment-ktx:2.3.5'
    implementation 'androidx.navigation:navigation-ui-ktx:2.3.5'
    implementation 'androidx.lifecycle:lifecycle-livedata-ktx:2.3.1'
    implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.3.1'
    implementation 'androidx.navigation:navigation-fragment-ktx:2.3.5'
    implementation 'androidx.navigation:navigation-ui-ktx:2.3.5'

    implementation 'com.squareup.retrofit2:retrofit:2.9.0'
    implementation 'com.google.code.gson:gson:2.8.7'
    implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
    implementation 'com.squareup.okhttp3:logging-interceptor:4.9.1'

    implementation 'com.github.bumptech.glide:glide:4.12.0'
    implementation 'android.arch.persistence.room:guava:1.1.1'
    annotationProcessor 'com.github.bumptech.glide:compiler:4.11.0'

    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.3'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.4.0'
}

If more source code is needed to check, I will update the question.

The error contents are as follows.

A problem occurred configuring root project 'Retrofit_Test'.

Could not resolve all artifacts for configuration ':classpath'.
Could not find org.jetbrains.kotlin:kotlin-gradle-plugin:1.5.30.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/jetbrains/kotlin/kotlin-gradle-plugin/1.5.30/kotlin-gradle-plugin-1.5.30.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project :
Could not find org.jetbrains.kotlin:kotlin-stdlib-jdk8:1.4.32.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/jetbrains/kotlin/kotlin-stdlib-jdk8/1.4.32/kotlin-stdlib-jdk8-1.4.32.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:repository:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:aaptcompiler:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.analytics-library:shared:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.lint:lint-model:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > androidx.databinding:databinding-compiler-common:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-host-retention-proto:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder-model:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:gradle-api:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2 > com.android.tools:common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.analytics-library:tracker:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.build:manifest-merger:30.0.2
Could not find org.apache.httpcomponents:httpmime:4.5.6.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/apache/httpcomponents/httpmime/4.5.6/httpmime-4.5.6.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdklib:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.analytics-library:crash:30.0.2
Could not find commons-io:commons-io:2.4.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/commons-io/commons-io/2.4/commons-io-2.4.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > androidx.databinding:databinding-compiler-common:7.0.2
Could not find org.ow2.asm:asm:7.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/ow2/asm/asm/7.0/asm-7.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:gradle-api:7.0.2
Could not find org.ow2.asm:asm-analysis:7.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/ow2/asm/asm-analysis/7.0/asm-analysis-7.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
Could not find org.ow2.asm:asm-commons:7.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/ow2/asm/asm-commons/7.0/asm-commons-7.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
Could not find org.ow2.asm:asm-util:7.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/ow2/asm/asm-util/7.0/asm-util-7.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
Could not find org.bouncycastle:bcpkix-jdk15on:1.56.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/bouncycastle/bcpkix-jdk15on/1.56/bcpkix-jdk15on-1.56.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.build:apkzlib:7.0.2
Could not find org.glassfish.jaxb:jaxb-runtime:2.3.2.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/glassfish/jaxb/jaxb-runtime/2.3.2/jaxb-runtime-2.3.2.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdklib:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:repository:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > androidx.databinding:databinding-compiler-common:7.0.2
Could not find net.sf.jopt-simple:jopt-simple:4.9.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/net/sf/jopt-simple/jopt-simple/4.9/jopt-simple-4.9.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
Could not find com.squareup:javapoet:1.10.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/com/squareup/javapoet/1.10.0/javapoet-1.10.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > androidx.databinding:databinding-compiler-common:7.0.2
Could not find com.google.protobuf:protobuf-java:3.10.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/com/google/protobuf/protobuf-java/3.10.0/protobuf-java-3.10.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.ddms:ddmlib:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:aapt2-proto:7.0.2-7396180
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:aaptcompiler:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-device-provider-gradle-proto:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-host-retention-proto:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:bundletool:1.6.0
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.analytics-library:shared:30.0.2 > com.android.tools.analytics-library:protos:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.analytics-library:tracker:30.0.2
Could not find com.google.protobuf:protobuf-java-util:3.10.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/com/google/protobuf/protobuf-java-util/3.10.0/protobuf-java-util-3.10.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:bundletool:1.6.0
Could not find com.google.code.gson:gson:2.8.6.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/com/google/code/gson/gson/2.8.6/gson-2.8.6.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdklib:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.analytics-library:shared:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > androidx.databinding:databinding-compiler-common:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.build:manifest-merger:30.0.2
Could not find io.grpc:grpc-core:1.21.1.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/io/grpc/grpc-core/1.21.1/grpc-core-1.21.1.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
Could not find io.grpc:grpc-netty:1.21.1.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/io/grpc/grpc-netty/1.21.1/grpc-netty-1.21.1.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
Could not find io.grpc:grpc-protobuf:1.21.1.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/io/grpc/grpc-protobuf/1.21.1/grpc-protobuf-1.21.1.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
Could not find io.grpc:grpc-stub:1.21.1.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/io/grpc/grpc-stub/1.21.1/grpc-stub-1.21.1.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
Could not find com.google.crypto.tink:tink:1.3.0-rc2.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/com/google/crypto/tink/tink/1.3.0-rc2/tink-1.3.0-rc2.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
Could not find com.google.flatbuffers:flatbuffers-java:1.12.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/com/google/flatbuffers/flatbuffers-java/1.12.0/flatbuffers-java-1.12.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
Could not find org.tensorflow:tensorflow-lite-metadata:0.1.0-rc2.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/tensorflow/tensorflow-lite-metadata/0.1.0-rc2/tensorflow-lite-metadata-0.1.0-rc2.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2
Could not find org.bouncycastle:bcprov-jdk15on:1.56.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/bouncycastle/bcprov-jdk15on/1.56/bcprov-jdk15on-1.56.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.build:apkzlib:7.0.2
Could not find com.google.guava:guava:30.1-jre.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/com/google/guava/guava/30.1-jre/guava-30.1-jre.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:aaptcompiler:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.analytics-library:crash:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.analytics-library:shared:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > androidx.databinding:databinding-compiler-common:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder-test-api:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:bundletool:1.6.0
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:gradle-api:7.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2 > com.android.tools:common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.analytics-library:tracker:30.0.2
Could not find org.jetbrains.kotlin:kotlin-reflect:1.4.32.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/jetbrains/kotlin/kotlin-reflect/1.4.32/kotlin-reflect-1.4.32.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
Could not find javax.inject:javax.inject:1.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/javax/inject/javax.inject/1/javax.inject-1.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:bundletool:1.6.0
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
Could not find net.sf.kxml:kxml2:2.3.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/net/sf/kxml/kxml2/2.3.0/kxml2-2.3.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.ddms:ddmlib:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.lint:lint-model:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.layoutlib:layoutlib-api:30.0.2
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.build:manifest-merger:30.0.2
Could not find org.jetbrains.intellij.deps:trove4j:1.0.20181211.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/jetbrains/intellij/deps/trove4j/1.0.20181211/trove4j-1.0.20181211.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
Could not find xerces:xercesImpl:2.12.0.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/xerces/xercesImpl/2.12.0/xercesImpl-2.12.0.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
  Required by:
      project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
Could not find org.apache.commons:commons-compress:1.20.
  Searched in the following locations:
    - https://dl.google.com/dl/android/maven2/org/apache/commons/commons-compress/1.20/commons-compress-1.20.pom
  If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.

ANSWER

Answered 2021-Sep-17 at 11:03

Add mavenCentral() in Build Script

The error log shows Gradle searching only https://dl.google.com/dl/android/maven2, so any artifact hosted on Maven Central cannot be resolved. A minimal sketch of the fix, assuming the repositories are declared in the top-level build.gradle (the exact location depends on your project layout):

buildscript {
    repositories {
        google()
        mavenCentral()  // lets Gradle resolve artifacts that are not on Google's Maven repository
    }
}

allprojects {
    repositories {
        google()
        mavenCentral()
    }
}
207     Required by:
208         project : > com.android.tools.build:gradle:7.0.2
209         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
210   Could not find io.grpc:grpc-netty:1.21.1.
211     Searched in the following locations:
212       - https://dl.google.com/dl/android/maven2/io/grpc/grpc-netty/1.21.1/grpc-netty-1.21.1.pom
213     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
214     Required by:
215         project : > com.android.tools.build:gradle:7.0.2
216         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
217   Could not find io.grpc:grpc-protobuf:1.21.1.
218     Searched in the following locations:
219       - https://dl.google.com/dl/android/maven2/io/grpc/grpc-protobuf/1.21.1/grpc-protobuf-1.21.1.pom
220     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
221     Required by:
222         project : > com.android.tools.build:gradle:7.0.2
223         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
224   Could not find io.grpc:grpc-stub:1.21.1.
225     Searched in the following locations:
226       - https://dl.google.com/dl/android/maven2/io/grpc/grpc-stub/1.21.1/grpc-stub-1.21.1.pom
227     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
228     Required by:
229         project : > com.android.tools.build:gradle:7.0.2
230         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
231   Could not find com.google.crypto.tink:tink:1.3.0-rc2.
232     Searched in the following locations:
233       - https://dl.google.com/dl/android/maven2/com/google/crypto/tink/tink/1.3.0-rc2/tink-1.3.0-rc2.pom
234     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
235     Required by:
236         project : > com.android.tools.build:gradle:7.0.2
237   Could not find com.google.flatbuffers:flatbuffers-java:1.12.0.
238     Searched in the following locations:
239       - https://dl.google.com/dl/android/maven2/com/google/flatbuffers/flatbuffers-java/1.12.0/flatbuffers-java-1.12.0.pom
240     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
241     Required by:
242         project : > com.android.tools.build:gradle:7.0.2
243   Could not find org.tensorflow:tensorflow-lite-metadata:0.1.0-rc2.
244     Searched in the following locations:
245       - https://dl.google.com/dl/android/maven2/org/tensorflow/tensorflow-lite-metadata/0.1.0-rc2/tensorflow-lite-metadata-0.1.0-rc2.pom
246     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
247     Required by:
248         project : > com.android.tools.build:gradle:7.0.2
249   Could not find org.bouncycastle:bcprov-jdk15on:1.56.
250     Searched in the following locations:
251       - https://dl.google.com/dl/android/maven2/org/bouncycastle/bcprov-jdk15on/1.56/bcprov-jdk15on-1.56.pom
252     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
253     Required by:
254         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
255         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
256         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.build:apkzlib:7.0.2
257   Could not find com.google.guava:guava:30.1-jre.
258     Searched in the following locations:
259       - https://dl.google.com/dl/android/maven2/com/google/guava/guava/30.1-jre/guava-30.1-jre.pom
260     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
261     Required by:
262         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
263         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:aaptcompiler:7.0.2
264         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.analytics-library:crash:30.0.2
265         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.analytics-library:shared:30.0.2
266         project : > com.android.tools.build:gradle:7.0.2 > androidx.databinding:databinding-compiler-common:7.0.2
267         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder-test-api:7.0.2
268         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.utp:android-test-plugin-result-listener-gradle-proto:30.0.2
269         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:bundletool:1.6.0
270         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:gradle-api:7.0.2
271         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2 > com.android.tools:common:30.0.2
272         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.analytics-library:tracker:30.0.2
273   Could not find org.jetbrains.kotlin:kotlin-reflect:1.4.32.
274     Searched in the following locations:
275       - https://dl.google.com/dl/android/maven2/org/jetbrains/kotlin/kotlin-reflect/1.4.32/kotlin-reflect-1.4.32.pom
276     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
277     Required by:
278         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
279   Could not find javax.inject:javax.inject:1.
280     Searched in the following locations:
281       - https://dl.google.com/dl/android/maven2/javax/inject/javax.inject/1/javax.inject-1.pom
282     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
283     Required by:
284         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
285         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:bundletool:1.6.0
286         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2
287   Could not find net.sf.kxml:kxml2:2.3.0.
288     Searched in the following locations:
289       - https://dl.google.com/dl/android/maven2/net/sf/kxml/kxml2/2.3.0/kxml2-2.3.0.pom
290     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
291     Required by:
292         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
293         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.ddms:ddmlib:30.0.2
294         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.lint:lint-model:30.0.2
295         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.layoutlib:layoutlib-api:30.0.2
296         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools.build:builder:7.0.2 > com.android.tools.build:manifest-merger:30.0.2
297   Could not find org.jetbrains.intellij.deps:trove4j:1.0.20181211.
298     Searched in the following locations:
299       - https://dl.google.com/dl/android/maven2/org/jetbrains/intellij/deps/trove4j/1.0.20181211/trove4j-1.0.20181211.pom
300     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
301     Required by:
302         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
303   Could not find xerces:xercesImpl:2.12.0.
304     Searched in the following locations:
305       - https://dl.google.com/dl/android/maven2/xerces/xercesImpl/2.12.0/xercesImpl-2.12.0.pom
306     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
307     Required by:
308         project : > com.android.tools.build:gradle:7.0.2 > com.android.tools:sdk-common:30.0.2
309   Could not find org.apache.commons:commons-compress:1.20.
310     Searched in the following locations:
311       - https://dl.google.com/dl/android/maven2/org/apache/commons/commons-compress/1.20/commons-compress-1.20.pom
312     If the artifact you are trying to retrieve can be found in the repository but without metadata in 'Maven POM' format, you need to adjust the 'metadataSources { ... }' of the repository declaration.
313    repositories {
314        mavenCentral()
315        google()
316    }
317
318

Source https://stackoverflow.com/questions/69205327

QUESTION

Tensorflow setup on RStudio/ R | CentOS

Asked 2022-Feb-11 at 09:36

For the last 5 days, I have been trying to make the Keras/TensorFlow packages work in R. I am using RStudio for the installation and have tried conda, miniconda, and virtualenv, but it crashes each time in the end. Installing a library should not be a nightmare, especially when we are talking about R (one of the best statistical languages) and TensorFlow (one of the best deep learning libraries). Can someone share a reliable way to install Keras/TensorFlow on CentOS 7?

Following are the steps I am using to install tensorflow in RStudio.

Since RStudio simply crashes each time I run tensorflow::tf_config(), I have no way to check what is going wrong.

devtools::install_github("rstudio/reticulate")
devtools::install_github("rstudio/keras") # This package also installs tensorflow
library(reticulate)
reticulate::install_miniconda()
reticulate::use_miniconda("r-reticulate")
library(tensorflow)
tensorflow::tf_config() # Crashes at this point

sessionInfo()

R version 3.6.0 (2019-04-26)
Platform: x86_64-redhat-linux-gnu (64-bit)
Running under: CentOS Linux 7 (Core)

Matrix products: default
BLAS/LAPACK: /usr/lib64/R/lib/libRblas.so

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] tensorflow_2.7.0.9000 keras_2.7.0.9000      reticulate_1.22-9000 

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.7      lattice_0.20-45 png_0.1-7       zeallot_0.1.0  
 [5] rappdirs_0.3.3  grid_3.6.0      R6_2.5.1        jsonlite_1.7.2 
 [9] magrittr_2.0.1  tfruns_1.5.0    rlang_0.4.12    whisker_0.4    
[13] Matrix_1.3-4    generics_0.1.1  tools_3.6.0     compiler_3.6.0 
[17] base64enc_0.1-3
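
Before anything crashes, it can help to check which Python reticulate is about to bind. A small diagnostic sketch (an editorial addition, not from the original post; py_discover_config() only inspects the configuration and does not initialize Python, so it should not trigger the crash):

library(reticulate)
# Show the Python installation reticulate would bind to, without
# initializing it; a mismatch between this and the miniconda env
# created above is a common cause of startup crashes.
print(py_discover_config())
# List the conda environments reticulate can see (assumes miniconda
# was installed via reticulate::install_miniconda() as above).
print(conda_list())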

Update 1: The only way RStudio does not crash while installing tensorflow is by executing the following steps.

First, I created a new virtual environment using conda

conda create --name py38 python=3.8.0
conda activate py38
conda install tensorflow=2.4

Then, from within RStudio, I installed reticulate and activated the virtual environment that I had created earlier using conda:

devtools::install_github("rstudio/reticulate")
library(reticulate)
reticulate::use_condaenv("/root/.conda/envs/py38", required = TRUE)
reticulate::use_python("/root/.conda/envs/py38/bin/python3.8", required = TRUE)
reticulate::py_available(initialize = TRUE)
ts <- reticulate::import("tensorflow")

As soon as I try to import tensorflow in RStudio, it loads the library /lib64/libstdc++.so.6 instead of /root/.conda/envs/py38/lib/libstdc++.so.6, and I get the following error:

Error in py_module_import(module, convert = convert) : 
  ImportError: Traceback (most recent call last):
  File "/root/.conda/envs/py38/lib/python3.8/site-packages/tensorflow/python/pywrap_tensorflow.py", line 64, in <module>
    from tensorflow.python._pywrap_tensorflow_internal import *
  File "/home/R/x86_64-redhat-linux-gnu-library/3.6/reticulate/python/rpytools/loader.py", line 39, in _import_hook
    module = _import(
ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /root/.conda/envs/py38/lib/python3.8/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions.  Include the entire stack trace
above this error message when asking for help.
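
One way to confirm which libstdc++ the running R process actually mapped (a diagnostic sketch, assuming a Linux /proc filesystem; not part of the original post):

# List the shared objects the current R process has loaded; if only
# /lib64/libstdc++.so.6 shows up, the system copy won over the conda
# one, and the GLIBCXX_3.4.20 lookup above is bound to fail.
maps <- readLines("/proc/self/maps")
print(unique(grep("libstdc\\+\\+", maps, value = TRUE)))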

Here is what is inside /lib64/libstdc++.so.6:

> strings /lib64/libstdc++.so.6 | grep GLIBC

GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_3.4.14
GLIBCXX_3.4.15
GLIBCXX_3.4.16
GLIBCXX_3.4.17
GLIBCXX_3.4.18
GLIBCXX_3.4.19
GLIBC_2.3
GLIBC_2.2.5
GLIBC_2.14
GLIBC_2.4
GLIBC_2.3.2
GLIBCXX_DEBUG_MESSAGE_LENGTH

To resolve the library issue, I added, in RStudio, the path of the correct libstdc++.so.6 library, the one that has GLIBCXX_3.4.20:

system('export LD_LIBRARY_PATH=/root/.conda/envs/py38/lib/:$LD_LIBRARY_PATH')

and also:

Sys.setenv("LD_LIBRARY_PATH" = "/root/.conda/envs/py38/lib")

But I still get the same error, ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20'. Somehow RStudio still loads /lib64/libstdc++.so.6 first instead of /root/.conda/envs/py38/lib/libstdc++.so.6.

If I execute the above steps in the R console instead of RStudio, I get the exact same error.
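
A plausible explanation (an editorial note, not from the original post): the dynamic linker reads LD_LIBRARY_PATH only once, when a process starts, so neither system() (which exports the variable in a throwaway child shell) nor Sys.setenv() (which changes it after R has already launched) can affect which libstdc++ gets mapped. A minimal sketch of a workaround under that assumption is to start a fresh R process with the variable already set:

# Launch a child R session whose environment already contains the conda
# lib dir; the new process's linker can then resolve the newer libstdc++
# before tensorflow's native module is loaded.
system2(
  "R",
  args = c("--vanilla", "-e",
           shQuote('reticulate::use_condaenv("/root/.conda/envs/py38", required = TRUE); reticulate::import("tensorflow")')),
  env  = "LD_LIBRARY_PATH=/root/.conda/envs/py38/lib"
)
# Equivalent from a terminal:
#   export LD_LIBRARY_PATH=/root/.conda/envs/py38/lib:$LD_LIBRARY_PATH
#   R   # then library(tensorflow); tensorflow::tf_config()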

Update 2: A solution is posted here

ANSWER

Answered 2022-Jan-16 at 00:08

Perhaps my failed attempts will help someone else solve this problem; my approach:

  • boot up a clean CentOS 7 vm
  • install R and some dependencies
sudo yum install epel-release
sudo yum install R
sudo yum install libxml2-devel
sudo yum install openssl-devel
sudo yum install libcurl-devel
sudo yum install libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
  • Download and install Anaconda via linux installer script
  • Create a new conda env
conda init
conda create --name tf
conda activate tf
conda install -c conda-forge tensorflow

From within this conda env you can import tensorflow in Python without error; now, to access tf via R:

  • install an updated gcc via devtoolset
sudo yum install centos-release-scl
sudo yum install devtoolset-7-gcc*
  • attempt to use tensorflow in R via the reticulate package
scl enable devtoolset-7 R
install.packages("remotes")
remotes::install_github('rstudio/reticulate')
reticulate::use_condaenv("tf", conda = "~/anaconda3/bin/conda")
reticulate::repl_python()
# This works as expected but the command "import tensorflow" crashes R
# Error: *** caught segfault *** address 0xf8, cause 'memory not mapped'

# Also tried:
install.packages("devtools")
devtools::install_github('rstudio/tensorflow')
devtools::install_github('rstudio/keras')
library(tensorflow)
install_tensorflow() # "successful"
tensorflow::tf_config()
# Error: *** caught segfault *** address 0xf8, cause 'memory not mapped'
  • try older versions of tensorflow/keras
devtools::install_github('rstudio/tensorflow@v2.4.0')
devtools::install_github('rstudio/keras@v2.4.0')
library(tensorflow)
tf_config()
# Error: *** caught segfault *** address 0xf8, cause 'memory not mapped'
  • Try an updated version of R (v4.0)
# deactivate conda
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
export R_VERSION=4.0.0
curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-${R_VERSION}-1-1.x86_64.rpm
sudo yum install R-${R_VERSION}-1-1.x86_64.rpm

scl enable devtoolset-7 /opt/R/4.0.0/bin/R
install.packages("devtools")
devtools::install_github('rstudio/reticulate')
reticulate::use_condaenv("tf", conda = "~/anaconda3/bin/conda")
reticulate::repl_python()
# 'import tensorflow' resulted in "core dumped"

I guess the issue is with R/CentOS, as you can import and use tensorflow via python normally, but I'm not sure what else to try.

I would also like to add that I had no issues on Ubuntu (which, along with macOS and Windows, is specifically supported by tensorflow), and I came across these docs that might be of some help: https://wiki.hpcc.msu.edu/display/ITH/Installing+TensorFlow+using+anaconda / https://wiki.hpcc.msu.edu/pages/viewpage.action?pageId=22709999

Source https://stackoverflow.com/questions/70645074

QUESTION

Saving model on Tensorflow 2.7.0 with data augmentation layer

Asked 2022-Feb-04 at 17:25

I am getting an error when trying to save a model with data augmentation layers with Tensorflow version 2.7.0.

Here is the data augmentation code:

input_shape_rgb = (img_height, img_width, 3)
data_augmentation_rgb = tf.keras.Sequential(
  [
    layers.RandomFlip("horizontal"),
    layers.RandomFlip("vertical"),
    layers.RandomRotation(0.5),
    layers.RandomZoom(0.5),
    layers.RandomContrast(0.5),
    RandomColorDistortion(name='random_contrast_brightness/none'),
  ]
)

Now I build my model like this:

# Build the model
input_shape = (img_height, img_width, 3)

model = Sequential([
  layers.Input(input_shape),
  data_augmentation_rgb,
  layers.Rescaling((1./255)),

  layers.Conv2D(16, kernel_size, padding=padding, activation='relu', strides=1,
     data_format='channels_last'),
  layers.MaxPooling2D(),
  layers.BatchNormalization(),

  layers.Conv2D(32, kernel_size, padding=padding, activation='relu'), # best 4
  layers.MaxPooling2D(),
  layers.BatchNormalization(),

  layers.Conv2D(64, kernel_size, padding=padding, activation='relu'), # best 3
  layers.MaxPooling2D(),
  layers.BatchNormalization(),

  layers.Conv2D(128, kernel_size, padding=padding, activation='relu'), # best 3
  layers.MaxPooling2D(),
  layers.BatchNormalization(),

  layers.Flatten(),
  layers.Dense(128, activation='relu'), # best 1
  layers.Dropout(0.1),
  layers.Dense(128, activation='relu'), # best 1
  layers.Dropout(0.1),
  layers.Dense(64, activation='relu'), # best 1
  layers.Dropout(0.1),
  layers.Dense(num_classes, activation = 'softmax')
])

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=metrics)
model.summary()

Then, after the training is done, I just call:

model.save("./")

And I'm getting this error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-84-87d3f09f8bee> in <module>()
----> 1 model.save("./")

/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/function_serialization.py in serialize_concrete_function(concrete_function, node_ids, coder)
     66   except KeyError:
     67     raise KeyError(
---> 68         f"Failed to add concrete function '{concrete_function.name}' to object-"
     69         f"based SavedModel as it captures tensor {capture!r} which is unsupported"
     70         " or not reachable from root. "

KeyError: "Failed to add concrete function 'b'__inference_sequential_46_layer_call_fn_662953'' to object-based SavedModel as it captures tensor <tf.Tensor: shape=(), dtype=resource, value=<Resource Tensor>> which is unsupported or not reachable from root. One reason could be that a stateful object or a variable that the function depends on is not assigned to an attribute of the serialized trackable object (see SaveTest.test_captures_unreachable_variable)."

I looked into the cause of this error by changing my model's architecture, and found that it comes from the data_augmentation layer. RandomFlip, RandomRotation, and the other augmentation layers have moved from layers.experimental.preprocessing.RandomFlip to layers.RandomFlip, but the error appears either way.
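
For reference, the rename mentioned above is just a namespace promotion; a minimal sketch of the two import paths (the exact version boundary, around Keras 2.6, is an assumption here):

# Older TF 2.x releases exposed the augmentation layers under the
# experimental namespace; newer releases (e.g. TF 2.7) expose them directly.
from tensorflow.keras.layers.experimental.preprocessing import RandomFlip  # older path
from tensorflow.keras.layers import RandomFlip                             # newer path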

ANSWER

Answered 2022-Feb-04 at 17:25

This seems to be a bug in Tensorflow 2.7 when using model.save combined with the parameter save_format="tf", which is the default. The layers RandomFlip, RandomRotation, RandomZoom, and RandomContrast are causing the problems, since they are not serializable. Interestingly, the Rescaling layer can be saved without any problems. A workaround is to simply save your model in the older Keras H5 format with model.save("test", save_format='h5'):

import tensorflow as tf
import numpy as np

class RandomColorDistortion(tf.keras.layers.Layer):
    def __init__(self, contrast_range=[0.5, 1.5],
                 brightness_delta=[-0.2, 0.2], **kwargs):
        super(RandomColorDistortion, self).__init__(**kwargs)
        self.contrast_range = contrast_range
        self.brightness_delta = brightness_delta

    def call(self, images, training=None):
        if not training:
            return images
        contrast = np.random.uniform(
            self.contrast_range[0], self.contrast_range[1])
        brightness = np.random.uniform(
            self.brightness_delta[0], self.brightness_delta[1])

        images = tf.image.adjust_contrast(images, contrast)
        images = tf.image.adjust_brightness(images, brightness)
        images = tf.clip_by_value(images, 0, 1)
        return images

    def get_config(self):
        config = super(RandomColorDistortion, self).get_config()
        config.update({"contrast_range": self.contrast_range, "brightness_delta": self.brightness_delta})
        return config

input_shape_rgb = (256, 256, 3)
data_augmentation_rgb = tf.keras.Sequential(
  [
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomFlip("vertical"),
    tf.keras.layers.RandomRotation(0.5),
    tf.keras.layers.RandomZoom(0.5),
    tf.keras.layers.RandomContrast(0.5),
    RandomColorDistortion(name='random_contrast_brightness/none'),
  ]
)
input_shape = (256, 256, 3)
padding = 'same'
kernel_size = 3
model = tf.keras.Sequential([
  tf.keras.layers.Input(input_shape),
  data_augmentation_rgb,
  tf.keras.layers.Rescaling((1./255)),
  tf.keras.layers.Conv2D(16, kernel_size, padding=padding, activation='relu', strides=1,
     data_format='channels_last'),
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.BatchNormalization(),

  tf.keras.layers.Conv2D(32, kernel_size, padding=padding, activation='relu'), # best 4
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.BatchNormalization(),

  tf.keras.layers.Conv2D(64, kernel_size, padding=padding, activation='relu'), # best 3
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.BatchNormalization(),

  tf.keras.layers.Conv2D(128, kernel_size, padding=padding, activation='relu'), # best 3
  tf.keras.layers.MaxPooling2D(),
  tf.keras.layers.BatchNormalization(),

  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'), # best 1
  tf.keras.layers.Dropout(0.1),
  tf.keras.layers.Dense(128, activation='relu'), # best 1
  tf.keras.layers.Dropout(0.1),
  tf.keras.layers.Dense(64, activation='relu'), # best 1
  tf.keras.layers.Dropout(0.1),
  tf.keras.layers.Dense(5, activation = 'softmax')
])

model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
model.save("test", save_format='h5')

Loading your model with your custom layer would then look like this:

model = tf.keras.models.load_model('test.h5', custom_objects={'RandomColorDistortion': RandomColorDistortion})

where RandomColorDistortion is the name of your custom layer.
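
As a side note (not part of the original answer), a custom layer can also be registered with Keras so that loading does not need custom_objects; a minimal sketch, assuming tf.keras.utils.register_keras_serializable is available in your TF version:

import tensorflow as tf

# Hypothetical sketch: decorating the class registers it with the Keras
# serialization registry, so load_model can resolve it as long as this
# module has been imported before loading.
@tf.keras.utils.register_keras_serializable(package="Custom")
class RandomColorDistortion(tf.keras.layers.Layer):
    ...  # same implementation (including get_config) as shown above

model = tf.keras.models.load_model('test.h5')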

Source https://stackoverflow.com/questions/69955838

QUESTION

Is it possible to use a collection of hyperspectral 1x1 pixels in a CNN model purposed for more conventional datasets (CIFAR-10/MNIST)?

Asked 2021-Dec-17 at 09:08

I have created a working CNN model in Keras/Tensorflow, and have successfully used the CIFAR-10 & MNIST datasets to test this model. The functioning code as seen below:

import keras
from keras.datasets import cifar10
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Conv2D, Flatten, MaxPooling2D
from keras.layers.normalization import BatchNormalization

(X_train, y_train), (X_test, y_test) = cifar10.load_data()

#reshape data to fit model
X_train = X_train.reshape(50000,32,32,3)
X_test = X_test.reshape(10000,32,32,3)

y_train = to_categorical(y_train)
y_test = to_categorical(y_test)


# Building the model
model = Sequential()  # instantiate the model before adding layers

#1st Convolutional Layer
model.add(Conv2D(filters=64, input_shape=(32,32,3), kernel_size=(11,11), strides=(4,4), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))

#2nd Convolutional Layer
model.add(Conv2D(filters=224, kernel_size=(5, 5), strides=(1,1), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))

#3rd Convolutional Layer
model.add(Conv2D(filters=288, kernel_size=(3,3), strides=(1,1), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))

#4th Convolutional Layer
model.add(Conv2D(filters=288, kernel_size=(3,3), strides=(1,1), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))

#5th Convolutional Layer
model.add(Conv2D(filters=160, kernel_size=(3,3), strides=(1,1), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))

model.add(Flatten())

# 1st Fully Connected Layer
model.add(Dense(4096, input_shape=(32,32,3,)))
model.add(BatchNormalization())
model.add(Activation('relu'))
# Add Dropout to prevent overfitting
model.add(Dropout(0.4))

#2nd Fully Connected Layer
model.add(Dense(4096))
model.add(BatchNormalization())
model.add(Activation('relu'))
#Add Dropout
model.add(Dropout(0.4))

#3rd Fully Connected Layer
model.add(Dense(1000))
model.add(BatchNormalization())
model.add(Activation('relu'))
#Add Dropout
model.add(Dropout(0.4))

#Output Layer
model.add(Dense(10))
model.add(BatchNormalization())
model.add(Activation('softmax'))


#compile model using accuracy to measure model performance
opt = keras.optimizers.Adam(learning_rate = 0.0001)
model.compile(optimizer=opt, loss='categorical_crossentropy',
              metrics=['accuracy'])


#train the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=30)

From this point, after utilising the aforementioned datasets, I wanted to go one step further and use a dataset with more channels than greyscale or RGB offer, hence the inclusion of a hyperspectral dataset. When looking for a hyperspectral dataset I came across this one.

The issue at this stage was realising that this hyperspectral dataset was a single image, with each value in the ground truth relating to one pixel. At this stage I reformatted the data into a collection of hyperspectral pixels.

Code reformatting the corrected dataset into x_train & x_test:

import keras
import scipy
import numpy as np
import matplotlib.pyplot as plt
from keras.utils import to_categorical
from scipy import io

mydict = scipy.io.loadmat('Indian_pines_corrected.mat')
dataset = np.array(mydict.get('indian_pines_corrected'))


#This is creating the split between x_train and x_test from the original dataset
# x_train after this code runs will have a shape of (121, 145, 200)
# x_test after this code runs will have a shape of (24, 145, 200)
x_train = np.zeros((121,145,200), dtype=np.int)
x_test = np.zeros((24,145,200), dtype=np.int)

xtemp = np.array_split(dataset, [121])
x_train = np.array(xtemp[0])
x_test = np.array(xtemp[1])

# x_train will have a shape of (17545, 200)
# x_test will have a shape of (3480, 200)
x_train = x_train.reshape(-1, x_train.shape[-1])
x_test = x_test.reshape(-1, x_test.shape[-1])

Code reformatting the ground truth dataset into Y_train & Y_test:

truthDataset = scipy.io.loadmat('Indian_pines_gt.mat')
gTruth = truthDataset.get('indian_pines_gt')

#This is creating the split between Y_train and Y_test from the original dataset
# Y_train after this code runs will have a shape of (121, 145)
# Y_test after this code runs will have a shape of (24, 145)

Y_train = np.zeros((121,145), dtype=np.int)
Y_test = np.zeros((24,145), dtype=np.int)

ytemp = np.array_split(gTruth, [121])
Y_train = np.array(ytemp[0])
Y_test = np.array(ytemp[1])

# Y_train will have a shape of (17545)
# Y_test will have a shape of (3480)
Y_train = Y_train.reshape(-1)
Y_test = Y_test.reshape(-1)


#17 binary categories ranging from 0-16

#Y_train one-hot encode target column
Y_train = to_categorical(Y_train)

#Y_test one-hot encode target column
Y_test = to_categorical(Y_test, num_classes = 17)

My thought process was that, despite the initial image being broken down into 1x1 patches, the large number of channels each patch possesses, with their respective values, would aid in categorisation of the dataset.

Essentially I'd want to input this reformatted data into my model (seen within the first code fragment in this post), however I'm uncertain whether I am taking the wrong approach to this due to my inexperience with this area of expertise. I was expecting to input a shape of (1,1,200), i.e. the shapes of x_train & x_test would be (17545,1,1,200) & (3480,1,1,200) respectively.
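
For concreteness, that reshape would look like this (a minimal sketch, assuming the flattened arrays from the snippets above):

# Turn each hyperspectral pixel into a 1x1 "image" with 200 channels.
x_train = x_train.reshape(-1, 1, 1, 200)   # (17545, 1, 1, 200)
x_test = x_test.reshape(-1, 1, 1, 200)     # (3480, 1, 1, 200)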

ANSWER

Answered 2021-Dec-16 at 10:18

If the hyperspectral dataset is given to you as a large image with many channels, I suppose that the classification of each pixel should depend on the pixels around it (otherwise I would not format the data as an image, i.e. without grid structure). Given this assumption, breaking up the input picture into 1x1 parts is not a good idea as you are losing the grid structure.

I further suppose that the order of the channels is arbitrary, which implies that convolution over the channels is probably not meaningful (which, however, you did not plan to do anyway).

Instead of reformatting the data the way you did, you may want to create a model that takes an image as input and also outputs an "image" containing the classifications for each pixel. I.e. if you have 10 classes and take a (145, 145, 200) image as input, your model would output a (145, 145, 10) image. In that architecture you would not have any fully-connected layers. Your output layer would also be a convolutional layer.
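
A minimal sketch of such a fully convolutional model, assuming 200 input channels and 10 classes (the layer widths here are illustrative assumptions, not recommendations):

import tensorflow as tf

inputs = tf.keras.Input(shape=(145, 145, 200))
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
# A 1x1 convolution maps every spatial position to per-class scores, so the
# output keeps the pixel grid: (145, 145, 10).
outputs = tf.keras.layers.Conv2D(10, 1, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.summary()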

That however means that you will not be able to keep your current architecture. That is because the tasks for MNIST/CIFAR10 and your hyperspectral dataset are not the same. For MNIST/CIFAR10 you want to classify an image in its entirety, while for the other dataset you want to assign a class to each pixel (while most likely also using the pixels around each pixel).


Some further ideas:

  • If you want to turn the pixel classification task on the hyperspectral dataset into a classification task for an entire image, maybe you can reformulate that task as "classifying a hyperspectral image as the class of its center (or top-left, or bottom-right, or (21st, 104th), or whatever) pixel". To obtain the data from your single hyperspectral image, for each pixel, I would shift the image such that the target pixel is at the desired location (e.g. the center); all pixels that "fall off" the border could be inserted at the other side of the image (see the sketch after this list).
  • If you want to stick with a pixel classification task but need more data, maybe split up the single hyperspectral image you have into many smaller images (e.g. 10x10x200). You may even want to use images of many different sizes. If your model only has convolution and pooling layers and you make sure to maintain the sizes of the image, that should work out.
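
A minimal sketch of the recentering idea from the first bullet, assuming a (145, 145, 200) cube; np.roll gives exactly the wrap-around behaviour described:

import numpy as np

image = np.random.rand(145, 145, 200)   # stand-in for the hyperspectral cube
target_row, target_col = 30, 70         # pixel whose class is to be predicted
center = 145 // 2
# Shift so the target pixel lands at the center; pixels that fall off one
# border re-enter on the opposite side.
shifted = np.roll(image, (center - target_row, center - target_col), axis=(0, 1))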

Source https://stackoverflow.com/questions/70226626

QUESTION

ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'

Asked 2021-Nov-13 at 07:14

I have an import problem when executing my code:

from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
This produces the following error:

2021-10-06 22:27:14.064885: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-10-06 22:27:14.064974: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "C:\Data\breast-cancer-classification\train_model.py", line 10, in <module>
    from cancernet.cancernet import CancerNet
  File "C:\Data\breast-cancer-classification\cancernet\cancernet.py", line 2, in <module>
    from keras.layers.normalization import BatchNormalization
ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization' (C:\Users\Catalin\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\layers\normalization\__init__.py)
  • Keras version: 2.6.0
  • Tensorflow: 2.6.0
  • Python version: 3.9.7

The library is also installed with:

pip install numpy opencv-python pillow tensorflow keras imutils scikit-learn matplotlib

Do you have any ideas?

[screenshot: library path]

ANSWER

Answered 2021-Oct-06 at 20:27

You're using outdated imports for tf.keras. Layers can now be imported directly from tensorflow.keras.layers:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
    BatchNormalization, SeparableConv2D, MaxPooling2D, Activation, Flatten, Dropout, Dense
)
from tensorflow.keras import backend as K


class CancerNet:
    @staticmethod
    def build(width, height, depth, classes):
        model = Sequential()
        shape = (height, width, depth)
        channelDim = -1

        if K.image_data_format() == "channels_first":
            shape = (depth, height, width)
            channelDim = 1

        model.add(SeparableConv2D(32, (3, 3), padding="same", input_shape=shape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        model.add(SeparableConv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(SeparableConv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(SeparableConv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=channelDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        model.add(Flatten())
        model.add(Dense(256))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))

        model.add(Dense(classes))
        model.add(Activation("softmax"))

        return model

model = CancerNet()

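Note that CancerNet only exposes a static build method, so a hypothetical call to actually construct the network (the sizes below are illustrative assumptions, not from the original answer) would look like:

model = CancerNet.build(width=48, height=48, depth=3, classes=2)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
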
Source https://stackoverflow.com/questions/69471749

QUESTION

Accuracy in Calculating Fourth Derivative using Finite Differences in Tensorflow

Asked 2021-Sep-16 at 13:01

I am writing a small code to calculate the fourth derivative using the method of finite differences in tensorflow. This is as follows:

import tensorflow as tf
import matplotlib.pyplot as plt

def action(y,x):
    #spacing between points.
    h = (x[-1] - x[0]) / (int(x.shape[0]) - 1)

    #fourth derivative
    dy4 = (y[4:] - 4*y[3:-1] + 6*y[2:-2] - 4*y[1:-3] + y[:-4])/(h*h*h*h)

    return dy4

x = tf.linspace(0.0, 30, 1000)
y = tf.tanh(x)
dy4 = action(y,x)

sess = tf.compat.v1.Session()
plt.plot(sess.run(dy4))

This results in the following graph:

[graph omitted]

However if I use essentially the same code but just using numpy, the results are much cleaner:

import numpy as np

def fourth_deriv(y, x):
    h = (x[-1] - x[0]) / (int(x.shape[0]) - 1)
    dy = (y[4:] - 4*y[3:-1] + 6*y[2:-2] - 4*y[1:-3] + y[:-4])/(h*h*h*h)
    return dy

x = np.linspace(0.0, 30, 1000)
test = fourth_deriv(np.tanh(x), x)
plt.plot(test)
24

Which gives:

[graph omitted]

What is the issue here? I was thinking at first that the separation between points could be too small to give an accurate computation, but clearly, that's not the case if numpy can handle it fine.

ANSWER

Answered 2021-Sep-16 at 13:01

The issue is related to the choice of floating-point types.

  • tf.linspace automatically selects tf.float32 as its type, while
  • np.linspace creates a float64 array, which has much more precision.
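
A quick check of the default dtypes illustrates the difference (a minimal sketch):

import numpy as np
import tensorflow as tf

print(tf.linspace(0.0, 30, 1000).dtype)  # float32 by default
print(np.linspace(0.0, 30, 1000).dtype)  # float64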

Making the following modification:

start = tf.constant(0.0, dtype = tf.float64)
end = tf.constant(30.0, dtype = tf.float64)
x = tf.linspace(start, end, 1000)
27

causes a smooth plot to appear:
[plot omitted]

It's worth noting further that TensorFlow includes automatic differentiation, which is crucial for machine-learning training and hence well-tested; you can use gradient tapes to access it and evaluate a fourth derivative without the imprecision of numeric differentiation using finite differences:

with tf.compat.v1.Session() as sess2:
  x = tf.Variable(tf.linspace(0, 30, 1000))
  sess2.run(tf.compat.v1.initialize_all_variables())
  with tf.GradientTape() as t4:
    with tf.GradientTape() as t3:
      with tf.GradientTape() as t2:
        with tf.GradientTape() as t1:
          y = tf.tanh(x)

        der1 = t1.gradient(y, x)
      der2 = t2.gradient(der1, x)
    der3 = t3.gradient(der2, x)
  der4 = t4.gradient(der3, x)
  print(der4)

  plt.plot(sess2.run(der4))
43

The accuracy of this method is far better than can be achieved using finite difference methods. The following code compares the accuracy of auto diff with the accuracy of the finite difference method:

x = np.linspace(0.0, 30, 1000)
sech = 1/np.cosh(x)
theoretical = 16*np.tanh(x) * np.power(sech, 4) - 8*np.power(np.tanh(x), 3)*np.power(sech,2)

# from_finite_diff and from_autodiff hold the results of the finite-difference
# and gradient-tape computations from the snippets above.
finite_diff_err = theoretical[2:-2] - from_finite_diff
autodiff_err = theoretical[2:-2] - from_autodiff[2:-2]

print('Max err with autodiff: %s' % np.max(np.abs(autodiff_err)))
print('Max err with finite difference: %s' % np.max(np.abs(finite_diff_err)))

line, = plt.plot(np.log10(np.abs(autodiff_err)))
line.set_label('Autodiff log error')
line2, = plt.plot(np.log10(np.abs(finite_diff_err)))
line2.set_label('Finite difference log error')
plt.legend()
58

and yields the following output:

Max err with autodiff: 3.1086244689504383e-15
Max err with a finite difference: 0.007830900165363808

and the following plot (the two lines overlap after around 600 on the X-axis): [plot omitted]

Source https://stackoverflow.com/questions/69125173

QUESTION

AssertionError: Tried to export a function which references untracked resource

Asked 2021-Sep-07 at 11:23

I wrote a unit test in order to save a model, after noticing that I am not able to do so (anymore) during training.

import shutil
import tempfile
from typing import Tuple

import numpy as np
import pytest

# TransducerBase and SpeechFeaturesConfig are project-specific types from the
# asker's code base.

@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_save_model(speech_model: Tuple[TransducerBase, SpeechFeaturesConfig]):
    model, speech_features_config = speech_model
    speech_features_config: SpeechFeaturesConfig
    channels = 3 if speech_features_config.add_delta_deltas else 1
    num_mel_bins = speech_features_config.num_mel_bins
    enc_inputs = np.random.rand(1, 50, num_mel_bins, channels)
    dec_inputs = np.expand_dims(np.random.randint(0, 25, size=10), axis=1)
    inputs = enc_inputs, dec_inputs
    model(inputs)

    # Throws KeyError:
    # graph = tf.compat.v1.get_default_graph()
    # tensor = graph.get_tensor_by_name("77040:0")

    directory = tempfile.mkdtemp(prefix=f"{model.__class__.__name__}_")
    try:
        model.save(directory)
    finally:
        shutil.rmtree(directory)

Trying to save the model will always throw the following error:

E         AssertionError: Tried to export a function which references untracked resource Tensor("77040:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.
E         
E         Trackable Python objects referring to this tensor (from gc.get_referrers, limited to two hops):
E         <tf.Variable 'transformer_transducer/transducer_encoder/inputs_embedding/convolution_stack/conv2d/kernel:0' shape=(3, 3, 3, 32) dtype=float32>

Note: As you can see in the code above, I am not able to retrieve this tensor with tf.compat.v1.get_default_graph().get_tensor_by_name("77040:0").

I tried the following too, but the result is always empty:

model(batch)  # Build the model

tensor_name = "77040"

var_names = [var.name for var in model.trainable_weights]
weights = list(filter(lambda var: tensor_name in var, var_names))

var_names = [var.name for var in model.trainable_variables]
variables = list(filter(lambda var: tensor_name in var, var_names))

print(weights)
print(variables)

The problem is that I do not understand why I am getting this, because the affected layer is tracked by Keras, as you can see in the screenshot below. I took it during a debug session in the call() function.

[screenshot omitted]

I have no explanation for this, and I am running out of ideas as to what the issue might be.

The transformations list in the screenshot is a property of the InputsEmbedding layer and gets constructed by it like so:

class InputsEmbedding(layers.Layer, TimeReduction):
    def __init__(self, config: InputsEmbeddingConfig, **kwargs):
        super().__init__(**kwargs)

        if config.transformations is None or not len(config.transformations):
            raise RuntimeError("No transformations provided.")

        self.config = config

        self.transformations = list()
        for transformation in self.config.transformations:
            layer_name, layer_params = list(transformation.items())[0]
            layer = _get_layer(layer_name, layer_params)
            self.transformations.append(layer)

        self.init_time_reduction_layer()

    def get_config(self):
        return self.config.dict()


def _get_layer(name: str, params: dict) -> layers.Layer:
    if name == "conv2d_stack":
        return ConvolutionStack(**params)
    elif name == "stack_frames":
        return StackFrames(**params)
    else:
        raise RuntimeError(f"Unsupported or unknown time-reduction layer {name}")
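As a side note, whether Keras actually tracks the appended layers can be checked directly; a minimal sketch, assuming an InputsEmbeddingConfig instance named config like the one used in the test further below:

emb = InputsEmbedding(config)
print(type(emb.transformations))  # Keras wraps lists assigned to attributes, so appends are tracked
print(list(emb.submodules))       # should list the ConvolutionStack / StackFrames layers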

In order to verify that the problem is not the InputsEmbedding itself, I created a unit test that saves a model which uses just this particular layer.

@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_inputs_embedding_save_model():
    convolutions = [
        "filters=2, kernel_size=(3, 3), strides=(2, 1)",
        "filters=4, kernel_size=(3, 3), strides=(2, 1)",
        "filters=8, kernel_size=(3, 4), strides=(1, 1)",
    ]

    config = InputsEmbeddingConfig()
    config.transformations = [dict(conv2d_stack=dict(convolutions=convolutions)), dict(stack_frames=dict(n=2))]

    num_features = 8
    num_channels = 3

    inputs = layers.Input(shape=(None, num_features, num_channels))
    x = inputs
    x, _ = InputsEmbedding(config)(x)
    model = keras.Model(inputs=inputs, outputs=x)
    model.build(input_shape=(1, 20, num_features, num_channels))

    directory = tempfile.mkdtemp(prefix=f"{model.__class__.__name__}_")
    try:
        model.save(directory)
    finally:
        shutil.rmtree(directory)

Here I am able to save this layer without any issues.


ConvolutionStack

As it seems to be relevant, here is the (rather ugly) implementation of ConvolutionStack:

from typing import List

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.python.keras.layers import convolutional

from speech.lab.layers import InputsRequirements
from speech.lab.models import conv_util, models_util


class ConvolutionStack(layers.Layer):
    def __init__(
        self,
        convolutions: List[str],
        kernel_regularizer: dict = None,
        bias_regularizer: dict = None,
        **kwargs
    ):
        super().__init__(**kwargs)
        self.config = dict(
            convolutions=convolutions,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer
        )
        self.conv_stack_config = [eval(f"dict({convolution})") for convolution in convolutions]
        self.conv_blocks = list()

        if kernel_regularizer is not None:
            kernel_regularizer = models_util.maybe_to_regularizer(kernel_regularizer)
        if bias_regularizer is not None:
            bias_regularizer = models_util.maybe_to_regularizer(bias_regularizer)

        for block_config in self.conv_stack_config:
            block = _new_convolution_block(
                **block_config,
                kernel_regularizer=kernel_regularizer,
                bias_regularizer=bias_regularizer,
            )
            self.conv_blocks.append(block)

        self.drop_dim2 = layers.Lambda(tf.squeeze, arguments=dict(axis=-2))
        self.expand_last = layers.Lambda(tf.expand_dims, arguments=dict(axis=-1))

    @property
    def inputs_requirements(self) -> InputsRequirements:
        requirements, frame_look_back = conv_util.get_conv2d_stack_requirements(self.conv_stack_config)
        first = requirements[0]
        t_min, f_size = first["min_size"]
        t_grow, f_grow = first["grow_size"]
        return InputsRequirements(
            frame_look_back=frame_look_back,
            t_min=t_min,
            t_grow=t_grow,
            f_min=f_size,
            f_grow=f_grow,
        )

    def call(self, inputs, training=None, mask=None, **kwargs):
        """
        :param inputs:
            Tensor taking the form [batch, time, freq, channel]
        :param training:
        :param mask:
        :param kwargs:
        :return:
            Tensor taking the form [batch, time, freq, 1]
        """

        if training:
            t_min = self.inputs_requirements.t_min
            t_grow = self.inputs_requirements.t_grow
            pad = conv_util.get_padding_for_loss(tf.shape(inputs)[1], t_min=t_min, t_grow=t_grow)
            inputs = tf.pad(inputs, ((0, 0), (0, pad), (0, 0), (0, 0)))

            if mask is not None:
                mask = tf.pad(mask, ((0, 0), (0, pad)))

        f_min = self.inputs_requirements.f_min
        f_grow = self.inputs_requirements.f_grow
        assert (inputs.shape[2] - f_min) % f_grow == 0, (
            f'Inputs dimension "freq" expected to be {f_min} + n * {f_grow} but got {inputs.shape[2]} instead.'
        )

        x = inputs
        for block in self.conv_blocks:
            for layer in block:
                if mask is not None and isinstance(layer, convolutional.Conv):
                    st, _ = layer.strides
                    kt = tf.maximum(layer.kernel_size[0] - 1, 1)
                    mask = mask[:, :-kt][:, ::st]
                    mask = tf.pad(mask, ((0, 0), (0, tf.maximum(2 - layer.kernel_size[0], 0))))

                x = layer(x, training=training)

        return self.expand_last(self.drop_dim2(x)), mask

    def get_config(self):
        return self.config


def _new_convolution_block(
    filters: int,
    kernel_size: tuple,
    strides: tuple,
    use_bias: bool = False,
    use_norm: bool = True,
    kernel_regularizer=None,
    bias_regularizer=None,
    activation=None,
):
    assert strides[0] % 2 == 0 or strides[0] == 1, "Strides on the time axis must be divisible by 2 or be exactly 1."

    if activation is not None:
        activation_layer = layers.Activation(activation)
    else:
        activation_layer = layers.Lambda(lambda x: x)

    if use_norm:
        norm_layer = layers.LayerNormalization()
    else:
        norm_layer = layers.Lambda(lambda x: x)

    return (
        layers.Conv2D(
            filters=filters,
            kernel_size=kernel_size,
            strides=strides,
            use_bias=use_bias,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
        ),
        norm_layer,
        activation_layer,
    )
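As an aside, the eval call used above to parse the convolution spec strings could be replaced with a safer parser; a minimal sketch, not part of the original code:

import ast

def parse_conv_spec(spec: str) -> dict:
    """Parse e.g. "filters=2, kernel_size=(3, 3), strides=(2, 1)" without eval()."""
    call = ast.parse(f"f({spec})", mode="eval").body  # an ast.Call node
    return {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}

# parse_conv_spec("filters=2, kernel_size=(3, 3), strides=(2, 1)")
# -> {'filters': 2, 'kernel_size': (3, 3), 'strides': (2, 1)}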

ANSWER

Answered 2021-Sep-06 at 13:25

Your issue is not related to 'transformer_transducer/transducer_encoder/inputs_embedding/convolution_stack/conv2d/kernel:0'.
The error message tells you that this variable merely refers to the non-trackable element; the non-trackable object itself does not appear to be assigned directly to an attribute of a tracked object.

To solve your issue, we need to localize Tensor("77040:0", shape=(), dtype=resource) from this error message:

AssertionError: Tried to export a function which references untracked resource
Tensor("77040:0", shape=(), dtype=resource).
TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.
Edit:

Thanks to your comments, we found that "ConvolutionStack" seems to reproduce the error.

The problem only occurs if I use the ConvolutionStack layer inside InputsEmbedding, but I can save each of them successfully in a standalone model.

I understand you cannot share the config of this layer, which is why I suggest you try to localize this Tensor("77040:0") from within the ConvolutionStack.

This untracked tensor must be an artifact or a temporary tensor created by some function of ConvolutionStack.

Try to find a tensor that is passed from one function to another instead of being assigned to an attribute of a layer class, as in the sketch below.
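To illustrate the tracking rule the assertion refers to, here is a minimal sketch (my own example, not the poster's code): a variable captured only by a closure is untracked and makes model.save fail, while the same variable assigned to an attribute is tracked and saves fine.

import tensorflow as tf

class BadLayer(tf.keras.layers.Layer):
    """Captures a variable in a closure without tracking it: saving fails."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        scale = tf.Variable(2.0)         # local only, never assigned to self
        self._fn = lambda x: x * scale   # the closure captures an untracked resource

    def call(self, inputs):
        return self._fn(inputs)

class GoodLayer(tf.keras.layers.Layer):
    """Assigns the variable to an attribute: Keras tracks it and saving works."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.scale = tf.Variable(2.0)    # tracked, because it is an attribute

    def call(self, inputs):
        return inputs * self.scale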

Source https://stackoverflow.com/questions/69040420

QUESTION

Stopping and starting a deep learning google cloud VM instance causes tensorflow to stop recognizing GPU

Asked 2021-Jul-18 at 15:05

I am using the pre-built deep learning VM instances offered by Google Cloud, with an Nvidia Tesla K80 GPU attached. I chose to have TensorFlow 2.5 and CUDA 11.0 automatically installed. When I start the instance, everything works great; I can run:

import tensorflow as tf
tf.config.list_physical_devices()

And the call returns the CPU, an accelerated CPU device, and the GPU. Similarly, if I run tf.test.is_gpu_available(), the function returns True.

However, if I log out, stop the instance, and then restart it, running the exact same code only sees the CPU, and tf.test.is_gpu_available() returns False. I get an error that suggests the driver initialization is failing:

import tensorflow as tf
tf.config.list_physical_devices()
E tensorflow/stream_executor/cuda/cuda_driver.cc:355] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error

Running nvidia-smi shows that the machine still sees the GPU, but TensorFlow can't see it.

Does anyone know what could be causing this? I don't want to have to reinstall everything every time I restart the instance.

ANSWER

Answered 2021-Jun-25 at 09:11

Some people (sadly not me) are able to resolve this by setting the following at the beginning of their script/main:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

I had to reinstall the CUDA drivers, and from then on it worked even after restarting the instance. You can select your system configuration on NVIDIA's website, and it will provide the commands you need to install CUDA. It also asks whether you want to uninstall the previous CUDA version (yes!). Luckily, this is also very fast.
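As a quick sanity check after the reinstall (a minimal sketch, assuming TensorFlow 2.x):

import tensorflow as tf

# Expect one PhysicalDevice entry for the GPU once the driver is healthy again.
print(tf.config.list_physical_devices("GPU"))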

Source https://stackoverflow.com/questions/68119561

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Tensorflow

Tutorials and Learning Resources are not available at this moment for Tensorflow
