pytorch-transformer | Yet another pytorch implementation | Machine Learning library
kandi X-RAY | pytorch-transformer Summary
Yet another pytorch implementation of the Transformer model
Top functions reviewed by kandi - BETA
- Compute dot product attention
- Calculate dot product attention
- Forward pass function
- GELU activation
pytorch-transformer Key Features
pytorch-transformer Examples and Code Snippets
Community Discussions
Trending Discussions on pytorch-transformer
QUESTION
When fine-tuning the sloBERTa Transformer model, based on CamemBERT, for a multiclass classification task with SimpleTransformers, I want to use the model argument "max_seq_length": 512, as previous work states that it gives better results than 128, but the inclusion of this argument triggers the error below. The error is the same in the Kaggle and Google Colab environments, and terminating the execution and rerunning it does not help. The error is triggered no matter how small the number of training epochs is, and the dataset contains only 600 instances (with text as strings, and labels as integers). I've tried lowering the max_seq_length to 509, 500 and 128, but the error persists.
The setup without this argument works normally and allows training with 90 epochs, so I otherwise have enough memory.
...ANSWER
Answered 2022-Jan-02 at 13:52
This happens because max_seq_length sets the length of the input sequences the model has to process, which increases the amount of memory the model needs to allocate and can exceed the memory limits on those platforms.
Most of the time, the right max_seq_length depends on the dataset, and setting it higher than necessary is wasteful in terms of training time and model size.
What you can do is find the maximum number of tokens per sample in your training dataset and use that as your max_seq_length.
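That last step can be sketched in plain Python; note that whitespace splitting is only a stand-in here, since the real count should come from the model's own subword tokenizer, which usually yields somewhat more tokens per sample:

```python
# Rough sketch: estimate max_seq_length from the longest training sample.
# NOTE: whitespace splitting is a proxy; the real count should use the
# model's own tokenizer (e.g. the length of tokenizer(text)["input_ids"]).
def longest_sample(texts):
    return max(len(text.split()) for text in texts)

train_texts = [
    "I like sitting in my new chair and thinking about life",
    "a much shorter sample",
]
print(longest_sample(train_texts))  # longest sample here has 11 whitespace tokens
```

A small safety margin on top of this number is cheap insurance if the tokenizer splits words into several subword pieces.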
QUESTION
I need to work with the pretrained BERT model 'dbmdz/bert-base-italian-xxl-cased' from Hugging Face in TensorFlow.
After reading this on the website,
Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!
I raised the issue and was promptly given a download link to an archive containing the following files:
...ANSWER
Answered 2021-Aug-25 at 17:16
You can try the following snippet to load dbmdz/bert-base-italian-xxl-cased in TensorFlow.
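The snippet itself was not preserved in this excerpt; a minimal version, assuming transformers and TensorFlow are installed, uses the from_pt flag to convert the PyTorch checkpoint on the fly:

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
# from_pt=True converts the PyTorch weights to TensorFlow at load time,
# so no native TF checkpoint is required.
model = TFAutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased", from_pt=True)

# Optionally save native TF weights so the conversion is not repeated.
model.save_pretrained("./bert-italian-tf")
```

This is the general-purpose route whenever a model on the Hub only ships PyTorch weights.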
QUESTION
I am building a Docker container based on python:3.7-slim-stretch (the same problem also happens on python:3.7-slim-stretch), and it is getting Killed on
ANSWER
Answered 2021-Feb-22 at 06:09
I experience something similar on Windows when my Docker containers run out of memory in WSL. I think the settings are different for Mac, but it looks like there is info here on setting the VM RAM/disk size/swap file settings for Docker Desktop on Mac:
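For the Windows/WSL case mentioned above, the VM's memory cap lives in %UserProfile%\.wslconfig; a sketch, with values that depend on the machine:

```ini
; %UserProfile%\.wslconfig -- limits for the WSL 2 VM that runs Docker Desktop
[wsl2]
memory=8GB   ; raise this if containers are being OOM-killed
swap=2GB
```

WSL has to be shut down (wsl --shutdown) for the new limits to take effect.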
QUESTION
I have a sentence like "I like sitting in my new chair and _____ about life", and I have a SPECIFIC set of tokens like ["watch", "run", "think", "apple", "light"].
I would like to calculate the probability of each of those tokens appearing as the next word in that incomplete sentence. Hopefully I should get that the probability of "think" is higher than the probability of "apple", for instance.
I am working with pytorch-transformers (GPT2LMHeadModel specifically), and a possible solution is to evaluate the score of the full sentence with each of the tokens, but when the number of tokens to evaluate is on the order of 100 or 1000, the computation time starts to be too long.
It must be possible to process the sentence only once and somehow use the hidden states to calculate the probabilities of the set of tokens, but I don't know how to do it.
Any ideas? Thanks in advance
EDIT: The actual code looks like the one below (estimating the probability for the full sentence every time). For every sentence it takes about 0.1 seconds to run the score() method, which turns into hours if I want to evaluate some thousands of words.
ANSWER
Answered 2020-Aug-03 at 14:50
Your example produced the following output and took around 48.5 seconds with 282 candidates to finish in my environment (I only conducted 3 runs):
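The single-forward-pass idea from the question can be sketched in plain Python: run the model once, softmax the logits at the last position, and read off the entries for the candidate token ids. Here a hand-made logits list and toy vocabulary stand in for the model output; with pytorch-transformers you would take the last-position logits from the model's output tensor instead:

```python
import math

def softmax(logits):
    m = max(logits)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and fake last-position logits standing in for the model output.
vocab = {"watch": 0, "run": 1, "think": 2, "apple": 3, "light": 4}
last_logits = [1.0, 0.5, 3.0, -1.0, 0.2]

probs = softmax(last_logits)
for word in ["think", "apple"]:
    print(word, probs[vocab[word]])
```

The key point is that all candidates are scored from one forward pass, instead of re-running score() once per candidate.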
QUESTION
I started working on this about two months ago on Google Colab for a midterm project and everything worked perfectly. Now I am modifying it for a final project and keep getting the error 'RuntimeError: Trying to create tensor with negative dimension -1: [-1, 768]'. It looks like pytorch recently pushed a new version 1.5, so I downgraded to version 1.4 and still got the same error. Same with 1.3, and I know I wasn't using anything lower since that came out last year. I checked it with my midterm code and still got the same error, so I don't know what's going on. Here is the chunk of code related to downloading and using the model.
...ANSWER
Answered 2020-Apr-29 at 03:54
You can try transformers instead of pytorch_transformers.
In Google Colab:
! pip install transformers
In a terminal:
pip install transformers
QUESTION
I wanted to test text generation with CTRL using PyTorch-Transformers, before using it for fine-tuning. But it doesn't produce any output like it does with GPT-2 and other similar language generation models. I'm very new to this, and I'm stuck and can't figure out what's going on.
This is the procedure I followed in my Colab notebook,
...ANSWER
Answered 2020-Mar-02 at 00:18
The solution was to increase the RAM. Since I was using Google Colab's free GPU, I ran into this GitHub issue and found the solution suggested there useful.
The following piece of code will crash the session in Colab; then select 'Get more RAM', which will increase the RAM up to 25.51 GB.
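The crashing snippet itself is not included in this excerpt; the version commonly shared for this trick simply allocates memory until the kernel dies. A sketch, to be run only in a Colab session you are willing to restart:

```python
# Deliberately exhausts RAM so that Colab crashes the session and offers
# the 'Get more RAM' high-memory runtime. Do NOT run this anywhere else.
a = []
while True:
    a.append(" " * 10**9)  # keep appending ~1 GB strings until the kernel dies
```

After the crash, Colab shows a dialog offering the high-RAM runtime.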
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install pytorch-transformer
You can use pytorch-transformer like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
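The environment setup described above can be sketched as follows; installing the library itself would then follow, for example from a clone of the repository:

```shell
# Create an isolated environment so the install does not touch system packages.
python3 -m venv .venv
. .venv/bin/activate

# Bring the packaging toolchain up to date, as recommended above.
pip install --upgrade pip setuptools wheel
```

On Windows the activation step is .venv\Scripts\activate instead.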