spektral | Graph Neural Networks with Keras and Tensorflow | Machine Learning library
kandi X-RAY | spektral Summary
Spektral is a Python library for graph deep learning, based on the Keras API and TensorFlow 2. The main goal of this project is to provide a simple but flexible framework for creating graph neural networks (GNNs). You can use Spektral for classifying the users of a social network, predicting molecular properties, generating new graphs with GANs, clustering nodes, predicting links, and any other task where data is described by graphs.
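As a quick illustration of the Keras-style API, here is a minimal sketch of a node-classification model built from Spektral's GCNConv layer. The sizes N, F and n_out are hypothetical placeholders (not taken from this page), and the adjacency input is assumed to be already normalized.

import tensorflow as tf
from tensorflow.keras.layers import Input, Dropout
from spektral.layers import GCNConv

N, F, n_out = 1000, 32, 7                  # hypothetical sizes: nodes, node features, classes
x_in = Input(shape=(F,))                   # node feature matrix
a_in = Input(shape=(N,), sparse=True)      # normalized adjacency matrix (sparse)
x = GCNConv(16, activation="relu")([x_in, a_in])
x = Dropout(0.5)(x)
out = GCNConv(n_out, activation="softmax")([x, a_in])
model = tf.keras.Model(inputs=[x_in, a_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy")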
Top functions reviewed by kandi - BETA
- Reads the file
- Preprocess features
- Convert an index into a boolean mask (see the sketch after this list)
- Compute the dot product of two arrays
- Process code blocks
- Processes a list block
- Count the number of leading spaces in a string
- Reads the graph
- Normalize data
- Read the DBLP dataset
- Evaluate the network
- Builds the architecture
- Read page data
- Reads all of the files from the data directory
- Collate a batch
- Gather a sparse matrix
- Reads the model
- Return the signature of the graph
- Plot a subgraph
- Select the shortest path
- Explains a single node
- Read QM9 dataset
- Calculate the edges of x
- Call the layer
- Download the dataset
- Compute Chebyshev polynomials
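To make the mask utility in the list above concrete, here is a hedged sketch of what an index-to-boolean-mask conversion typically looks like; the helper name idx_to_mask is illustrative and not necessarily Spektral's exact implementation.

import numpy as np

def idx_to_mask(idx, n):
    # Boolean vector of length n that is True exactly at the given indices
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    return mask

train_mask = idx_to_mask([0, 1, 4], n=6)  # [ True  True False False  True False]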
spektral Key Features
spektral Examples and Code Snippets
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from grnf.tf import GraphRandomNeuralFeatures
N, F = 18, 7                                   # placeholder sizes (assumed): N nodes, F features per node
X_in = Input(shape=(N, F))                     # node feature matrix
A_in = Input(shape=(N, N))                     # adjacency matrix
psi = GraphRandomNeuralFeatures(64, activation="relu")([X_in, A_in])
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Model
from spektral.layers import GlobalSumPool

class FCN0(Model):
    def __init__(self, channels, outputs):
        super().__init__()
        self.dense1 = Dense(channels, activation="relu")
        self.dropout = Dropout(0.5)
        self.pool = GlobalSumPool()   # graph-level readout (see the GIN question in the discussions below)
        self.dense2 = Dense(outputs)  # assumed completion; the original snippet was truncated here
intermediate_output = intermediate_layer_model.predict(model_input, batch_size=N)
intermediate_output = intermediate_layer_model.predict_on_batch(model_input)
Community Discussions
Trending Discussions on spektral
QUESTION
I am trying to implement a GCN layer using TensorFlow, but it is not learning. Can someone check what the potential issue could be?
I have tried normalizing the adjacency matrix and even replaced it with the identity so that the GCN layer becomes a simple MLP. But there is no change. I think I have made some fundamental/silly mistake in my implementation which I am not able to find. Can someone let me know what the issue could be?
...ANSWER
Answered 2021-Nov-26 at 03:48
Your model is learning but doesn't converge. Consider checking/adding data, using a simpler model, or tuning the training parameters (e.g. learning rate, batch size).
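For example, one common first step is simply to lower the learning rate when compiling. In this sketch, model, inputs and labels are assumed to come from the asker's code, and the concrete values are only illustrative.

import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # try 1e-3, 1e-4, ... if the loss plateaus
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(inputs, labels, batch_size=32, epochs=200)  # vary batch_size as well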
QUESTION
I want to compare the performance on a classification problem using GIN vs. a Fully Connected Network. I started with the example from the spektral library, TUDataset classification with GIN. I created a custom dataset for my problem, and it is loaded using DisjointLoader from spektral.data.
My supervised learning shows good results on this data with the GIN network. However, to compare these results with a Fully Connected network, I am having trouble feeding inputs from the dataset into the FC network. The dataset is stored in graph format with a node attribute matrix and an adjacency matrix. There are 18 nodes in the graph, and each node has 7 attributes in the attribute matrix.
I have tried feeding the FC network with just the node attribute matrix, but I get a shape-mismatch error.
Here is the FC network that I have defined instead of the GIN0 network from the example shared above:
...ANSWER
Answered 2021-Oct-07 at 07:53The problem is that your FC network does not have a global pooling layer (also sometimes called "readout"), and so the output of the network will have shape (batch_size * 18, 1) instead of (batch_size, 1) which is the shape of the target.
Essentially, your FC network is suitable for node-level prediction, but not graph-level prediction. To fix this, you can introduce a global pooling layer as follows:
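A minimal sketch of the fixed model, assuming the disjoint data mode used by DisjointLoader; the layer names and sizes mirror the FCN0 snippet above and are otherwise placeholders.

from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Model
from spektral.layers import GlobalSumPool

class FCN0(Model):
    def __init__(self, channels, outputs):
        super().__init__()
        self.dense1 = Dense(channels, activation="relu")
        self.dropout = Dropout(0.5)
        self.pool = GlobalSumPool()   # readout: one vector per graph
        self.dense2 = Dense(outputs)  # graph-level prediction head

    def call(self, inputs):
        x, a, i = inputs              # DisjointLoader yields node features, adjacency, graph indices
        x = self.dense1(x)            # per-node transform, shape (batch_size * 18, channels)
        x = self.dropout(x)
        x = self.pool([x, i])         # aggregate nodes per graph -> (batch_size, channels)
        return self.dense2(x)         # -> (batch_size, outputs), matching the targets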
QUESTION
So... I have checked a few posts on this issue (there are probably many more that I haven't checked, but I think it's reasonable to ask for help now), and I haven't found any solution that suits my situation.
This OOM error message always appears (without a single exception) in the second round of a whatever-fold training loop, and when re-running the training code after a first run. So this might be related to this post: a previous Stack Overflow question about an OOM linked with tf.nn.embedding_lookup(), but I am not sure which function my issue lies in.
My NN is a GCN with two graph convolutional layers, and I am running the code on a server with several 10 GB Nvidia P102-100 GPUs. I have set batch_size to 1, but nothing changed. I am also using Jupyter Notebook rather than running Python scripts from the command line, because on the command line I cannot even finish one round... By the way, does anyone know why some code can run without problems in Jupyter while hitting an OOM on the command line? It seems a bit strange to me.
UPDATE: After replacing Flatten() with GlobalMaxPool(), the error disappeared and I can run the code smoothly. However, if I add one more GC layer, the error comes back in the first round. So I guess the core issue is still there...
UPDATE 2: I tried replacing tf.Tensor with tf.SparseTensor. It worked, but was of no use. I also tried setting up the mirrored strategy as mentioned in ML_Engine's answer, but it looks like one of the GPUs is occupied far more heavily than the others and the OOM still appears. Perhaps it is a kind of "data parallelism" that cannot solve my problem, since I have set batch_size to 1?
Code (adapted from GCNG):
...ANSWER
Answered 2021-Apr-20 at 13:42You can make use of distributed strategies in tensorflow to make sure that your multi-GPU set up is being used appropriately:
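A minimal sketch of the multi-GPU set-up with tf.distribute.MirroredStrategy; build_gcn_model and train_dataset are placeholders standing in for the asker's model-building code and data pipeline.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()        # uses all visible GPUs by default
print("Number of devices:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Build and compile the model inside the scope so its variables are mirrored across GPUs.
    model = build_gcn_model()                      # placeholder for the asker's GCN
    model.compile(optimizer="adam", loss="categorical_crossentropy")

model.fit(train_dataset, epochs=10)                # train_dataset assumed from the asker's pipeline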
QUESTION
I'm trying to access the output of intermediate layers of a Graph Convolutional Network (GCN), and model.predict is throwing an InvalidArgumentError for the input value, whereas model.fit works fine with the same input.
Here is my code; it uses the 'CORA' citation dataset from OGB, provided by the spektral library, which offers algorithms and examples for Graph Convolutional Networks. My code is based on one of the examples from the same library, here
...ANSWER
Answered 2020-Sep-04 at 09:25
The predict function of a Keras Model has a default argument of batch_size=32.
You can solve it in two ways.
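Both options correspond to the intermediate_layer_model snippets listed earlier on this page; N is the number of nodes in the graph and model_input comes from the asker's code.

# Option 1: pass an explicit batch_size so the whole graph is predicted in a single batch
intermediate_output = intermediate_layer_model.predict(model_input, batch_size=N)

# Option 2: bypass Keras' batching logic entirely
intermediate_output = intermediate_layer_model.predict_on_batch(model_input)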
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install spektral
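Spektral is distributed on PyPI and can be installed with pip install spektral (TensorFlow 2 must be available in the environment).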