spektral | Graph Neural Networks with Keras and TensorFlow | Machine Learning library

by danielegrattarola | Python | Version: 1.3.1 | License: MIT

kandi X-RAY | spektral Summary

spektral is a Python library typically used in Artificial Intelligence, Machine Learning, and Deep Learning applications, alongside frameworks such as PyTorch and TensorFlow. spektral has no reported bugs or vulnerabilities, has a build file available, carries a permissive license, and has medium support. You can install it with 'pip install spektral' or download it from GitHub or PyPI.

Spektral is a Python library for graph deep learning, based on the Keras API and TensorFlow 2. The main goal of this project is to provide a simple but flexible framework for creating graph neural networks (GNNs). You can use Spektral for classifying the users of a social network, predicting molecular properties, generating new graphs with GANs, clustering nodes, predicting links, and any other task where data is described by graphs.
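For example, a node-classification GNN can be assembled from standard Keras layers plus Spektral's GCNConv layer. The following is a minimal sketch based on Spektral's documented API; N, F, and n_classes stand for the number of nodes, node features, and output classes, and are assumed to be defined elsewhere:

from tensorflow.keras.layers import Input, Dropout
from tensorflow.keras.models import Model
from spektral.layers import GCNConv

# Node features and a sparse adjacency matrix for a single graph.
x_in = Input(shape=(F,))
a_in = Input(shape=(N,), sparse=True)

x = GCNConv(16, activation="relu")([x_in, a_in])
x = Dropout(0.5)(x)
out = GCNConv(n_classes, activation="softmax")([x, a_in])

model = Model(inputs=[x_in, a_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy")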

            kandi-support Support

spektral has a moderately active ecosystem.
It has 2258 stars and 332 forks. There are 43 watchers for this library.
There was 1 major release in the last 12 months.
There are 59 open issues and 199 closed issues. On average, issues are closed in 79 days. There is 1 open pull request and 0 closed pull requests.
It has a neutral sentiment in the developer community.
The latest version of spektral is 1.3.1.

            kandi-Quality Quality

              spektral has 0 bugs and 0 code smells.

            kandi-Security Security

spektral has no reported vulnerabilities, and neither do its dependent libraries.
              spektral code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              spektral is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

spektral releases are available to install and integrate.
A deployable package is available on PyPI.
A build file is available, so you can build the component from source.
Installation instructions, examples and code snippets are available.
spektral saves you 3107 person hours of effort in developing the same functionality from scratch.
It has 6690 lines of code, 479 functions and 106 files.
It has medium code complexity, which directly impacts the maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed spektral and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality spektral implements, and to help you decide whether it suits your requirements.
            • Reads the file
            • Preprocess features
            • Convert an index into a boolean mask
            • Compute the dot product of two arrays
            • Process code blocks
            • Processes a list block
            • Count the number of leading spaces in a string
            • Reads the graph
            • Normalize data
            • Read the DBLP
            • Evaluate the network
            • Builds the architecture
            • Read page data
            • Reads all of the files from the data directory
            • Collate a batch
            • Gather a sparse matrix
            • Reads the model
            • Return the signature of the graph
            • Plot a subgraph
            • Select the shortest path
            • Explains a single node
            • Read QM9 dataset
            • Calculate the edges of x
            • Call the layer
            • Download the dataset
• Compute Chebyshev polynomials up to a given order

            spektral Key Features

            No Key Features are available at this moment for spektral.

            spektral Examples and Code Snippets

Graph Random Neural Features: Implementation
Python | Lines of Code: 11 | License: Permissive (MIT)

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from grnf.tf import GraphRandomNeuralFeatures

# N and F are the number of nodes and node features, respectively (defined elsewhere).
X_in = Input(shape=(N, F))
A_in = Input(shape=(N, N))
psi = GraphRandomNeuralFeatures(64, activation="relu")([X_in, A_in])
# The snippet is truncated in the source; a typical completion builds a model on top.
phi = Dense(1, activation="sigmoid")(psi)
model = Model(inputs=[X_in, A_in], outputs=phi)
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Model
from spektral.layers import GlobalSumPool

class FCN0(Model):
    def __init__(self, channels, outputs):
        super().__init__()
        self.dense1 = Dense(channels, activation="relu")
        self.dropout = Dropout(0.5)
        self.dense2 = Dense(outputs)
        self.pool = GlobalSumPool()  # truncated in the source; a readout layer is a natural completion

    def call(self, inputs):
        x, a, i = inputs  # node features, adjacency, batch index (disjoint mode)
        x = self.dropout(self.dense1(x))
        return self.dense2(self.pool([x, i]))
# Option 1: set batch_size to the number of nodes so predict() does not split the input.
intermediate_output = intermediate_layer_model.predict(model_input, batch_size=N)

# Option 2: run prediction on the whole input as a single batch.
intermediate_output = intermediate_layer_model.predict_on_batch(model_input)

            Community Discussions

            QUESTION

            GCN model is not learning
            Asked 2021-Nov-28 at 03:45

I am trying to implement a GCN layer using TensorFlow, but it is not learning. Can someone check what the potential issue could be?

I have tried normalizing the adjacency matrix and even replaced it with the identity so that the GCN layer becomes a simple MLP. But there is no change. I think I have made some fundamental/silly mistake in my implementation which I am not able to find. Can someone let me know what the issue could be?

            ...

            ANSWER

            Answered 2021-Nov-26 at 03:48

Your model is learning but does not converge. Consider checking/adding data, using a simpler model, or tuning parameters during training (e.g. the learning rate or batch size).

            Source https://stackoverflow.com/questions/70119237
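As an aside, the symmetric normalization of the adjacency matrix mentioned in the question can be written in a few lines of NumPy. This is a minimal sketch, not code from the original thread; A is assumed to be a dense square adjacency matrix:

import numpy as np

def normalize_adjacency(A):
    # GCN-style renormalization: D^(-1/2) (A + I) D^(-1/2)
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt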

            QUESTION

            How to use python spektral library DisjointLoader to feed FC network instead of Graph Isomorphism Network?
            Asked 2021-Oct-08 at 06:35

I want to compare the performance on a classification problem using GIN vs. a fully connected network. I started with the example from the spektral library, TUDataset classification with GIN. I have created a custom dataset for my problem, and it is loaded using DisjointLoader from spektral.data.

My supervised learning shows good results on this data using the GIN network. However, to compare these results with a fully connected network, I am having trouble feeding inputs from the dataset into the FC network. The dataset is stored in graph format with a node attribute matrix and an adjacency matrix. There are 18 nodes in the graph, and each node has 7 attributes in the attribute matrix.

I have tried feeding the FC network with just the node attribute matrix, but I get a shape mismatch error.

Here is the FC network that I have defined in place of the GIN0 network from the example shared above:

            ...

            ANSWER

            Answered 2021-Oct-07 at 07:53

The problem is that your FC network does not have a global pooling layer (also sometimes called "readout"), so the output of the network has shape (batch_size * 18, 1) instead of (batch_size, 1), which is the shape of the target.

            Essentially, your FC network is suitable for node-level prediction, but not graph-level prediction. To fix this, you can introduce a global pooling layer as follows:

            Source https://stackoverflow.com/questions/69476773
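The code from the original answer was not captured above. Below is a minimal sketch of an FC model with a readout layer, assuming spektral's GlobalSumPool and a DisjointLoader batch signature of (x, a, i); the class and layer names are illustrative, not from the answer:

from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
from spektral.layers import GlobalSumPool

class FCWithReadout(Model):
    """FC layers on node features, followed by a per-graph readout."""
    def __init__(self, channels=32):
        super().__init__()
        self.dense1 = Dense(channels, activation="relu")
        self.pool = GlobalSumPool()   # aggregates node features per graph
        self.out = Dense(1, activation="sigmoid")

    def call(self, inputs):
        x, a, i = inputs              # node features, adjacency, batch index
        x = self.dense1(x)            # node-level features, shape (n_nodes_in_batch, channels)
        x = self.pool([x, i])         # graph-level features, shape (batch_size, channels)
        return self.out(x)            # shape (batch_size, 1), matching the target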

            QUESTION

            How to solve "OOM when allocating tensor with shape[XXX]" in tensorflow (when training a GCN)
            Asked 2021-Apr-23 at 07:58

So... I have checked a few posts on this issue (there are surely many more that I haven't checked, but I think it's reasonable to ask for help now), and I haven't found any solution that suits my situation.

This OOM error message always emerges (without a single exception) in the second round of a k-fold training loop, and when re-running the training code after a first run. So this might be related to this post: a previous Stack Overflow question about OOM linked to tf.nn.embedding_lookup(), but I am not sure which function my issue lies in.

My NN is a GCN with two graph convolutional layers, and I am running the code on a server with several 10 GB Nvidia P102-100 GPUs. I have set batch_size to 1 but nothing has changed. I am also using a Jupyter Notebook rather than running Python scripts from the command line, because on the command line I cannot even finish one round... By the way, does anyone know why some code can run without problems in Jupyter while hitting OOM on the command line? It seems a bit strange to me.

UPDATE: After replacing Flatten() with GlobalMaxPool(), the error disappeared and I can run the code smoothly. However, if I add one more GC layer, the error appears in the first round, so I guess the core issue is still there...

UPDATE2: I tried replacing tf.Tensor with tf.SparseTensor. This was successful but of no use. I also tried to set up the mirrored strategy mentioned in ML_Engine's answer, but it looks like one of the GPUs carries most of the load and OOM still occurs. Perhaps it is a form of data parallelism that cannot solve my problem, since I have set batch_size to 1?

            Code (adapted from GCNG):

            ...

            ANSWER

            Answered 2021-Apr-20 at 13:42

You can make use of distributed strategies in TensorFlow to make sure that your multi-GPU setup is being used appropriately:

            Source https://stackoverflow.com/questions/67178061
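The answer's code was not captured above. Below is a minimal sketch of a MirroredStrategy setup; build_model and loader are placeholders for the user's own GCN and data loader, not names from the original answer:

import tensorflow as tf

# Mirror the model's variables across all visible GPUs.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Build and compile the model inside the strategy scope so that its
    # weights are replicated on every device.
    model = build_model()  # placeholder for the GCN definition
    model.compile(optimizer="adam", loss="binary_crossentropy")

# Training proceeds as usual; each batch is split across the replicas.
model.fit(loader.load(), steps_per_epoch=loader.steps_per_epoch, epochs=10)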

            QUESTION

            Keras, Inconsistent behavior when using model.predict for accessing intermediate layers output with spektral GCN
            Asked 2020-Sep-04 at 09:25

I'm trying to access the output of intermediate layers of a graph convolutional network (GCN), and model.predict throws an InvalidArgumentError for the input value, whereas model.fit works fine with the same input.

Here is my code; it uses the 'CORA' citation dataset from OGB provided by the spektral library, which provides algorithms and examples for graph convolutional networks. My code is based on one of the examples from the same library, here.

            ...

            ANSWER

            Answered 2020-Sep-04 at 09:25
            Solution

            The predict function of a Keras Model has a default argument of batch_size=32. You can solve it in two ways.

            Source https://stackoverflow.com/questions/63731349
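The answer's code was not captured above. Below is a minimal sketch of the two options; the layer name "gcn_1", the inputs, and N are placeholders for the user's own names, not values from the original answer:

from tensorflow.keras.models import Model

# Expose an intermediate layer's output as a separate model.
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer("gcn_1").output)

model_input = [features, adjacency]  # placeholder single-graph input

# Option 1: make batch_size cover all N nodes so predict() does not split the graph.
intermediate_output = intermediate_layer_model.predict(model_input, batch_size=N)

# Option 2: run prediction on the whole input as one batch.
intermediate_output = intermediate_layer_model.predict_on_batch(model_input)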

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install spektral

Spektral is compatible with Python 3.5+ and is tested on Ubuntu 16.04+ and macOS. Other Linux distros should work as well, but Windows is not supported for now.

            Support

Spektral is an open-source project available on GitHub, and contributions of all types are welcome. Feel free to open a pull request if you have something interesting that you want to add to the framework.
            Find more information at:

            Install
• PyPI: pip install spektral
• Clone (HTTPS): https://github.com/danielegrattarola/spektral.git
• GitHub CLI: gh repo clone danielegrattarola/spektral
• Clone (SSH): git@github.com:danielegrattarola/spektral.git


Consider Popular Machine Learning Libraries

• tensorflow by tensorflow
• youtube-dl by ytdl-org
• models by tensorflow
• pytorch by pytorch
• keras by keras-team

Try Top Libraries by danielegrattarola

• keras-gat (Python)
• twitter-sentiment-cnn (Python)
• GINR (HTML)
• deep-q-atari (Python)
• deep-q-snake (Python)