learn2learn | A PyTorch Library for Meta-learning Research | Machine Learning library
kandi X-RAY | learn2learn Summary
learn2learn is a software library for meta-learning research.
Top functions reviewed by kandi - BETA
- Start pretraining
- Train the model
- Adapt a batch of training data
- Clone the MAML
- Resets gradients
- Runs l2l vision
- Fast adaptation function
- Compute the MAML policy loss
- Compute the generalized advantage
- Loop forever
- Compute the classifier
- Create a Google spreadsheet service
- Forward computation
- Compute a random block of data
- Compute prototypes
- Main loop
- Download data files
- Forward a single action
- Perform an action
- Compute generalized advantage
- Run SVM logits
- Return a pretrained model
- Adapt the loss function
- Adapts the loss function
- Adapts a trained model
- Clone the MAML object
- Performs the training
- Learn the model
- Compute the Kronecker product
learn2learn Key Features
learn2learn Examples and Code Snippets
Community Discussions
Trending Discussions on learn2learn
QUESTION
I was going through the omniglot MAML example and saw that they have net.train() at the top of their testing code. This seems like a mistake, since it means the batch statistics from each task at meta-testing are shared:
ANSWER
Answered 2021-Nov-25 at 19:54

TLDR: Use mdl.train(), since that uses batch statistics (but inference will no longer be deterministic). You probably won't want to use mdl.eval() in meta-learning.

BN's intended behaviour:
- Importantly, during inference (eval/testing) the running_mean and running_std calculated during training are used, because a deterministic output based on estimates of the population statistics is wanted.
- During training, batch statistics are used, but population statistics are estimated with running averages. I assume the reason batch statistics are used during training is to introduce noise that regularizes training (noise robustness).
- In meta-learning, I think using batch statistics is best during testing (without updating the running means), since we are supposed to be seeing new task distributions anyway. The price we pay is a loss of determinism. Just out of curiosity, it could be interesting to see what the accuracy is using population statistics estimated from meta-training.

This is likely why I don't see divergence in my testing with mdl.train().

So just make sure you use mdl.train() (since that uses batch statistics: https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d), but also that the new running stats, which cheat, are neither saved nor used later.
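As a minimal sketch of this recommendation (the task format, loss function, and number of adaptation steps below are illustrative assumptions, not part of the original example):

import torch
import learn2learn as l2l

def meta_test_task(maml, task, loss_fn, adapt_steps=5):
    # Clone the meta-learned weights so adaptation does not modify them.
    learner = maml.clone()
    # Stay in train mode so BatchNorm uses per-task batch statistics
    # rather than the running estimates accumulated at meta-training.
    learner.train()
    support_x, support_y, query_x, query_y = task

    # Inner-loop adaptation on the support set.
    for _ in range(adapt_steps):
        learner.adapt(loss_fn(learner(support_x), support_y))

    # Evaluate on the query set, still in train mode; the result is no
    # longer deterministic because BN now depends on the query batch.
    with torch.no_grad():
        return loss_fn(learner(query_x), query_y)

# Usage (assuming `net` is the meta-trained network):
# maml = l2l.algorithms.MAML(net, lr=0.1)
# query_loss = meta_test_task(maml, task, torch.nn.functional.cross_entropy)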
QUESTION
I've been trying to install PyTorch 1.9 with CUDA (ideally 11) on my HPC cluster, but I cannot.
The cluster says:
...
ANSWER
Answered 2021-Sep-23 at 06:45

First of all, as @Francois suggested, try to uninstall the CPU-only version of PyTorch. Also, in your installation script you should use either conda or pip3.

Then you may want to try the following attempts:
- Using conda: add the conda-forge channel to your command (conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia -c conda-forge), and make sure conda is updated.
- Using pip: insert --no-cache-dir into your command (pip3 --no-cache-dir install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html) to avoid the MemoryError.
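Once one of the above succeeds, a quick sanity check (my addition, not from the original answer) confirms that the installed wheel actually has CUDA support:

import torch

print(torch.__version__)          # expect something like '1.9.0+cu111'
print(torch.version.cuda)         # expect '11.1'
print(torch.cuda.is_available())  # True once a GPU node is allocated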
QUESTION
I am doing meta-learning research and am using the MAML optimization provided by learn2learn. However, as one of the baselines, I would like to test a non-meta-learning approach, i.e. traditional training + testing.
Due to Lightning's internal usage of the optimizer, it seems difficult to make MAML from learn2learn work in Lightning, so I couldn't use Lightning in my meta-learning setup. However, for my baseline I would really like to use Lightning, since it provides many handy functionalities like DeepSpeed or DDP out of the box.
Here is my question: other than setting up two separate folders/repos, how could I mix vanilla PyTorch (learn2learn) with PyTorch Lightning (baseline)? What is the best practice?
Thanks!
...
ANSWER
Answered 2021-Jul-14 at 18:27

Decided to answer my own question. I ended up using PyTorch Lightning's manual optimization so that I can customize the optimization step. This makes both approaches use the same framework, which I think is better than maintaining two separate repos.
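A minimal sketch of that approach, assuming a classification task format (support/query split) and cross-entropy loss; the class name and hyperparameters are illustrative, not a fixed recipe:

import torch
import torch.nn.functional as F
import pytorch_lightning as pl
import learn2learn as l2l

class MAMLModule(pl.LightningModule):
    def __init__(self, model, adapt_lr=0.5, meta_lr=1e-3, adapt_steps=1):
        super().__init__()
        self.automatic_optimization = False  # take over the optimizer step
        self.maml = l2l.algorithms.MAML(model, lr=adapt_lr)
        self.meta_lr = meta_lr
        self.adapt_steps = adapt_steps

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        support_x, support_y, query_x, query_y = batch  # assumed task format

        # Inner loop: adapt a clone of the meta-parameters per task.
        learner = self.maml.clone()
        for _ in range(self.adapt_steps):
            learner.adapt(F.cross_entropy(learner(support_x), support_y))

        # Outer loop: backprop the query loss through the adaptation.
        meta_loss = F.cross_entropy(learner(query_x), query_y)
        self.manual_backward(meta_loss)
        opt.step()
        self.log("meta_loss", meta_loss)

    def configure_optimizers(self):
        return torch.optim.Adam(self.maml.parameters(), lr=self.meta_lr)

With manual optimization, Lightning still provides the trainer conveniences (DDP, logging, checkpointing), while the inner/outer MAML steps stay under your control.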
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install learn2learn
You can use learn2learn like any standard Python library. You will need a development environment with a Python distribution (including header files), a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.
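For example, after pip install learn2learn in a fresh virtual environment, a quick import check (assuming the package exposes a version attribute) confirms the install:

import learn2learn as l2l

print(l2l.__version__)  # confirms the package imports and resolves correctly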