pytorch-lightning | Lightweight PyTorch wrapper for high-performance AI research | Machine Learning library

 by PyTorchLightning | Python | Version: 1.6.1 | License: Apache-2.0

kandi X-RAY | pytorch-lightning Summary

pytorch-lightning is a Python library typically used in Artificial Intelligence, Machine Learning, Deep Learning, and PyTorch applications, including institutional and educational settings. pytorch-lightning has no reported bugs or vulnerabilities, has a build file available, carries a permissive license, and has high support. You can install it using 'pip install pytorch-lightning' or download it from GitHub or PyPI.

Lightning disentangles PyTorch code to decouple the science from the engineering.

            kandi-Support Support

              pytorch-lightning has a highly active ecosystem.
              It has 18,124 stars and 2,310 forks. There are 219 watchers for this library.
              It had no major release in the last 12 months.
              There are 437 open issues and 4,413 have been closed. On average, issues are closed in 104 days. There are 119 open pull requests and 0 closed pull requests.
              It has a positive sentiment in the developer community.
              The latest version of pytorch-lightning is 1.6.1.

            kandi-Quality Quality

              pytorch-lightning has 0 bugs and 0 code smells.

            kandi-Security Security

              pytorch-lightning has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              pytorch-lightning code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              pytorch-lightning is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              pytorch-lightning releases are available to install and integrate.
              A deployable package is available on PyPI.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.
              It has 58,844 lines of code, 6,409 functions, and 455 files.
              It has medium code complexity. Code complexity directly impacts the maintainability of the code.
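
              A minimal sketch of building the component from source, as noted above (the repository URL is assumed from the project's GitHub organization shown on this page):

              git clone https://github.com/PyTorchLightning/pytorch-lightning.git
              cd pytorch-lightning
              pip install -e .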

            Top functions reviewed by kandi - BETA

            kandi has reviewed pytorch-lightning and discovered the below as its top functions. This is intended to give you an instant insight into pytorch-lightning implemented functionality, and help decide if they suit your requirements.
            • Configure optimizers.
            • Check and set final flags.
            • Adds argparse arguments to cls.
            • Reset the train dataloader.
            • Apply a function to two collections.
            • Register DDP comm wrapper.
            • Broadcast a list of objects into a tensor.
            • Dump a checkpoint.
            • Apply a function to a collection.
            • Initialize the meta device.

            pytorch-lightning Key Features

            No Key Features are available at this moment for pytorch-lightning.

            pytorch-lightning Examples and Code Snippets

            import torch
            import torch.nn as nn
            
            class SimpsonsNet(nn.Module):
                def __init__(self):
                    super(SimpsonsNet, self).__init__()
                    self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
                    self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)  # completed from the truncated snippet; padding assumed to match conv1
            Distributed PyTorch Lightning Training on Ray, Hyperparameter Tuning with Ray Tune
            Python | Lines of Code: 36 | License: Permissive (Apache-2.0)
            def train_mnist(config):
                
                # Create your PTL model.
                model = MNISTClassifier(config)
            
                # Create the Tune Reporting Callback
                metrics = {"loss": "ptl/val_loss", "acc": "ptl/val_accuracy"}
                callbacks = [TuneReportCallback(metrics, on="validation_end")]  # completed from the truncated snippet; "validation_end" is the hook Ray's docs use
            PyTorch-Lightning Docs
            Python | Lines of Code: 23 | License: No License
            from typing import Optional
            
            
            def my_func(param_a: int, param_b: Optional[float] = None) -> str:
                """Sample function.
            
                Args:
                    param_a: first parameter
                    param_b: second parameter
            
                Return:
                    sum of both numbers
            
                 Example::

                     >>> my_func(1, 2)
                     '3'
                 """
                 # body completed from the truncated snippet
                 return str(param_a + (param_b or 0))
            horovod - pytorch lightning spark mnist
            Python | Lines of Code: 176 | License: Non-SPDX
            import argparse
            import os
            import subprocess
            import sys
            from packaging import version
            
            import numpy as np
            
            import pyspark
            import pyspark.sql.types as T
            from pyspark import SparkConf
            from pyspark.ml.evaluation import MulticlassClassificationEvaluator
            # ... (snippet truncated)
            horovod - pytorch lightning mnist
            Python | Lines of Code: 159 | License: Non-SPDX
            import argparse
            import os
            from filelock import FileLock
            import tempfile
            
            import torch
            import torch.multiprocessing as mp
            import torch.nn as nn
            import torch.nn.functional as F
            import torch.optim as optim
            from torchvision import datasets, transforms
            # ... (snippet truncated)
            horovod - pytorch lightning spark mnist legacy
            Python | Lines of Code: 107 | License: Non-SPDX
            import argparse
            import os
            import subprocess
            import sys
            from packaging import version
            
            import numpy as np
            
            import pyspark
            import pyspark.sql.types as T
            from pyspark import SparkConf
            from pyspark.ml.evaluation import MulticlassClassificationEvaluator
            # ... (snippet truncated)
            How to know the trained model is correct?
            Python | Lines of Code: 4 | License: Strong Copyleft (CC BY-SA 4.0)
            # wrong: calling load_from_checkpoint on an instance hides which model was loaded
            model_test = model_test.load_from_checkpoint(path)

            # right: call the classmethod on the LightningModule class and bind the returned model
            model_test = BYOL.load_from_checkpoint(path)
            
            How to separate code into train, val and test functions for a PyTorch CNN?
            Python | Lines of Code: 4 | License: Strong Copyleft (CC BY-SA 4.0)
            # criterion is passed if you want to register the validation loss too
            def validate_model(model, eval_loader, criterion):
               ...
            
            EarlyStopping callback in PyTorch Lightning problem
            Python | Lines of Code: 18 | License: Strong Copyleft (CC BY-SA 4.0)
            def validation_step(self, batch, batch_idx):
                input_ids, attention_mask, targets = batch['input_ids'], batch['attention_mask'], batch['label'].squeeze()
                logits = self(batch)
                loss = F.cross_entropy(logits, targets)
                acc = accuracy(logits, targets)  # torchmetrics accuracy; completed from the truncated snippet
                self.log("val_loss", loss, prog_bar=True)  # EarlyStopping must monitor a metric logged like this
            AttributeError: 'list' object has no attribute 'view' while training network
            Python | Lines of Code: 6 | License: Strong Copyleft (CC BY-SA 4.0)
             def validation_step(self, val_batch, batch_idx):
                 x = val_batch
                 x = x.view(x.size(0), -1)        # here is your problem: val_batch is a list, not a tensor

             x = torch.tensor(val_batch)          # fix: convert the batch to a tensor before calling .view()
            

            Community Discussions

            QUESTION

            How to know the trained model is correct?
            Asked 2022-Mar-21 at 16:05

            I use PyTorch Lightning for model training, during which I use ModelCheckpoint to save checkpoints. Finally, I would like to know whether the model is loaded correctly. Let me know if you require further information.

            ...

            ANSWER

            Answered 2022-Mar-21 at 16:05

            load_from_checkpoint() will return a model with trained weights, so you need to assign it to a new variable.
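
            A minimal sketch of the correct pattern, assuming the BYOL module from lightning-bolts as in the question (the checkpoint path is a placeholder):

            from pl_bolts.models.self_supervised import BYOL

            # load_from_checkpoint is a classmethod that returns a new model instance,
            # so bind its return value rather than expecting an in-place update
            model_test = BYOL.load_from_checkpoint("path/to/checkpoint.ckpt")
            model_test.eval()  # switch to eval mode before checking outputs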

            Source https://stackoverflow.com/questions/71558917

            QUESTION

            Pretrained lightning-bolts VAE not doing proper inference on training dataset
            Asked 2022-Feb-01 at 20:11

            I'm using the CIFAR-10 pre-trained VAE from lightning-bolts. It should be able to regenerate images with the quality shown in the picture from the docs (LHS are the real images, RHS the generated ones).

            However, when I write a simple script that loads the model and the weights and tests it over the training set, I get a much worse reconstruction (top row are real images, bottom row the generated ones).

            Here is a link to a self-contained colab notebook that reproduces the steps I've followed to produce the pictures.

            Am I doing something wrong on my inference process? Could it be that the weights are not as "good" as the docs claim?

            Thanks!

            ...

            ANSWER

            Answered 2022-Feb-01 at 20:11

            First, the image from the docs you show is for the AE, not the VAE. The results for the VAE look much worse:

            https://pl-bolts-weights.s3.us-east-2.amazonaws.com/vae/vae-cifar10/vae_output.png

            Second, the docs state "Both input and generated images are normalized versions as the training was done with such images." So when you load the data you should specify normalize=True. When you plot your data, you will need to 'unnormalize' the data as well:
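
            A minimal sketch of the unnormalization step, assuming the standard CIFAR-10 mean/std (the exact values lightning-bolts trained with may differ):

            import torch

            # per-channel statistics, shaped for broadcasting over (B, 3, H, W) batches
            cifar_mean = torch.tensor([0.4914, 0.4822, 0.4465]).view(1, 3, 1, 1)
            cifar_std = torch.tensor([0.2470, 0.2435, 0.2616]).view(1, 3, 1, 1)

            def unnormalize(img: torch.Tensor) -> torch.Tensor:
                # map a normalized batch back to [0, 1] for plotting
                return (img * cifar_std + cifar_mean).clamp(0, 1)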

            Source https://stackoverflow.com/questions/70197274

            QUESTION

            How to disable automatic checkpoint loading
            Asked 2021-Dec-09 at 14:53

            I'm trying to run a loop over a set of parameters, and I want to create a new network for each parameter and let it train for a few epochs.

            Currently my code looks like this:

            ...

            ANSWER

            Answered 2021-Dec-09 at 14:53

            I think, in your settings, you want to disable automatic checkpointing:
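
            A minimal sketch, assuming PyTorch Lightning >= 1.5 where the flag is enable_checkpointing (older releases used checkpoint_callback=False):

            from pytorch_lightning import Trainer

            # no ModelCheckpoint callback is created, so nothing is saved or auto-reloaded
            trainer = Trainer(max_epochs=3, enable_checkpointing=False)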

            Source https://stackoverflow.com/questions/70291523

            QUESTION

            Colab PyTorch | ImportError: /usr/local/lib/python3.7/dist-packages/_XLAC.cpython-37m-x86_64-linux-gnu.so
            Asked 2021-Dec-03 at 09:29

            On Google Colaboratory, I have tried all 3 runtimes: CPU, GPU, TPU. All give the same error.

            Cells:

            ...

            ANSWER

            Answered 2021-Aug-19 at 14:08

            Searching online, there seem to be many causes for this same problem.

            In my case, setting Accelerator to None in Google Colaboratory solved this.

            Source https://stackoverflow.com/questions/68846290

            QUESTION

            ImportError after installing torchtext 0.11.0 with conda
            Asked 2021-Nov-24 at 22:00

            I have installed pytorch version 1.10.0 alongside torchtext, torchvision and torchaudio using conda. My PyTorch is cpu-only, and I have experimented with both conda install pytorch-mutex -c pytorch and conda install pytorch cpuonly -c pytorch to install the cpuonly version, both yielding the same error that I will describe below.

            I have also installed pytorch-lightning in conda, alongside jsonargparse[signatures] via pip in the environment.

            I have written this code to see whether LightningCLI works or not.

            ...

            ANSWER

            Answered 2021-Nov-24 at 22:00

            To fix the problem, I had to change my environment.yaml to force pytorch to install from the pytorch channel.

            So this is my environment.yaml now:
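
            A hedged sketch of such an environment.yaml (the channel::package syntax pins a package to a channel; the package list here is illustrative, not the asker's exact file):

            name: pl-env
            channels:
              - pytorch
              - conda-forge
            dependencies:
              - pytorch::pytorch=1.10.0
              - pytorch::torchtext=0.11.0
              - pytorch::cpuonly
              - pip
              - pip:
                  - pytorch-lightning
                  - jsonargparse[signatures]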

            Source https://stackoverflow.com/questions/70098916

            QUESTION

            Save a model weights when a program receives TIME LIMIT while learning on a SLURM cluster
            Asked 2021-Oct-05 at 11:43

            I use deep learning models written in pytorch_lightning (PyTorch) and train them on SLURM clusters. I submit jobs like this:

            ...

            ANSWER

            Answered 2021-Oct-05 at 11:43

            You can use Slurm's signalling mechanism to pass a signal to your application when it is within a certain number of seconds of the time limit (see man sbatch). In your submission script, use --signal=USR1@30 to send USR1 30 seconds before the time limit is reached. Your submit script would contain these lines:
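
            A minimal sketch of such a submit script (the training command and time limit are placeholders):

            #!/bin/bash
            #SBATCH --time=01:00:00
            #SBATCH --signal=USR1@30   # send SIGUSR1 30 seconds before the time limit

            # PyTorch Lightning's SLURM integration can catch SIGUSR1,
            # save a checkpoint, and requeue the job
            srun python train.py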

            Source https://stackoverflow.com/questions/69441573

            QUESTION

            How to test a model before fine-tuning in Pytorch Lightning?
            Asked 2021-Sep-20 at 13:25

            Doing things on Google Colab.

            • transformers: 4.10.2
            • pytorch-lightning: 1.2.7
            ...

            ANSWER

            Answered 2021-Sep-20 at 13:25

            The Trainer needs to run .fit() first in order to set everything up; only then can you call .test() or other methods.

            You are right about putting a .fit() just before .test(), but the fit call needs to be a valid one: you have to feed it a dataloader/datamodule. Since you don't want any actual training or validation in this fit call, just pass limit_train_batches=0 and limit_val_batches=0 when constructing the Trainer.
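
            A minimal sketch of that setup; model and dm are placeholders for your LightningModule and datamodule:

            from pytorch_lightning import Trainer

            trainer = Trainer(limit_train_batches=0, limit_val_batches=0, max_epochs=1)
            trainer.fit(model, datamodule=dm)   # sets everything up without touching the weights
            trainer.test(model, datamodule=dm)  # .test() now runs on the untouched weights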

            Source https://stackoverflow.com/questions/69249187

            QUESTION

            Define following multiplication of two tensors in pytorch lightning
            Asked 2021-Sep-11 at 12:25

            I would like to multiply the following two tensors, x (of shape (BS, N, C)) and y (of shape (BS, 1, C)), in the following way:

            ...

            ANSWER

            Answered 2021-Sep-10 at 14:08

            Is there anything wrong with x*y? As you can see in the code below, it yields exactly the same output as your function:
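
            A short sketch of the broadcasting at work: y's singleton dimension 1 is stretched to N, so x * y multiplies each of the N rows of x by the same length-C vector from y.

            import torch

            BS, N, C = 2, 4, 3
            x = torch.randn(BS, N, C)
            y = torch.randn(BS, 1, C)

            out = x * y                      # broadcast multiply, shape (BS, N, C)
            assert out.shape == (BS, N, C)
            assert torch.allclose(out[0, 1], x[0, 1] * y[0, 0])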

            Source https://stackoverflow.com/questions/69132547

            QUESTION

            Pytorch Lightning Automatic Logging - AttributeError: 'NoneType' object has no attribute '_results'
            Asked 2021-Aug-25 at 17:45

            I am unable to use automatic logging (self.log) when calling training_step() manually in PyTorch Lightning. What am I missing? Here is a minimal example:

            ...

            ANSWER

            Answered 2021-Aug-25 at 17:45

            This is NOT the correct usage of the LightningModule class. You can't call a hook (namely .training_step()) manually and expect everything to work fine.

            You need to set up a Trainer as suggested by PyTorch Lightning at the very start of its tutorial; it is a requirement. The functions (or hooks) that you define in a LightningModule merely tell Lightning "what to do" in a specific situation (in this case, at each training step). It is the Trainer that actually "orchestrates" the training by instantiating the necessary environment (including the logging functionality) and feeding it into the LightningModule whenever needed.

            So, do it the way Lightning suggests and it will work.
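
            A minimal sketch of that flow; MyModel and train_loader are placeholders for the asker's module and dataloader:

            from pytorch_lightning import Trainer

            model = MyModel()
            trainer = Trainer(max_epochs=1)
            # the Trainer wires up logging before invoking training_step for you
            trainer.fit(model, train_loader)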

            Source https://stackoverflow.com/questions/68890203

            QUESTION

            How to disable logging from PyTorch-Lightning logger?
            Asked 2021-Aug-18 at 14:21

            The logger in PyTorch-Lightning prints information about the model to be trained (or evaluated) and the progress during training.

            However, in my case I would like to hide all messages from the logger in order not to flood the output in Jupyter Notebook.

            I've looked into the API of the Trainer class on the official docs page https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags and it seems like there is no option to turn off the messages from the logger.

            There is a parameter log_every_n_steps which can be set to a large value, but the logging output after each epoch is still displayed.

            How can one disable the logging?

            ...

            ANSWER

            Answered 2021-Aug-16 at 19:23

            Maybe try it like this:

            logging.getLogger("package").propagate = False

            Source https://stackoverflow.com/questions/68807896

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install pytorch-lightning

            Simple installation from PyPI.
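
            pip install pytorch-lightning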

            Support

            The Lightning community is maintained by its core contributors and the wider open-source community.
            Find more information in the project's GitHub repository and documentation.
