simpletransformers | Language Modelling, Language Generation | Natural Language Processing library

 by ThilinaRajapakse | Python | Version: 0.70.1 | License: Apache-2.0

kandi X-RAY | simpletransformers Summary

simpletransformers is a Python library typically used in Artificial Intelligence, Natural Language Processing, Deep Learning, Pytorch, Bert, Neural Network, Transformer applications. simpletransformers has no bugs and no reported vulnerabilities, a build file is available, it has a Permissive License, and it has high support. You can install it with 'pip install simpletransformers' or download it from GitHub or PyPI.

Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI

            Support

              simpletransformers has a highly active ecosystem.
              It has 3698 star(s) with 708 fork(s). There are 60 watchers for this library.
              There was 1 major release in the last 6 months.
              There are 45 open issues and 1040 have been closed. On average, issues are closed in 65 days. There are 11 open pull requests and 0 closed pull requests.
              It has a positive sentiment in the developer community.
              The latest version of simpletransformers is 0.70.1.

            Quality

              simpletransformers has 0 bugs and 0 code smells.

            Security

              simpletransformers has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              simpletransformers code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              simpletransformers is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              simpletransformers releases are available to install and integrate.
              A deployable package is available on PyPI.
              A build file is available, so you can build the component from source.
              Installation instructions, examples, and code snippets are available.
              simpletransformers saves you 9633 person hours of effort in developing the same functionality from scratch.
              It has 20507 lines of code, 434 functions and 157 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed simpletransformers and discovered the below as its top functions. This is intended to give you an instant insight into the functionality simpletransformers implements, and to help you decide whether it suits your requirements.
            • Perform prediction on the model
            • Return the threshold if x is greater than the given threshold
            • Calculate loss function
            • Generate a dictionary of inputs
            • Predict from to_predict
            • Calculate the loss
            • Convert tokens to word logits
            • Get the current session object
            • Predict outputs
            • Generate the model
            • Interact with the model (conversational AI)
            • Load a hdf5 dataset
            • Convert model to onnx
            • Interact with the model on a single message
            • Train the wandb run
            • Encodes sentences
            • Convert examples into Feature
            • Convert the model to onnx
            • Build a pandas DataFrame containing the hard coded values of the prediction
            • Splits a document into titles and texts
            • Train a new tokenizer
            • Convert examples to features
            • Train a model
            • Runs prediction on the given dataset
            • Basic transformer view
            • Generates tokens from the model
            • Builds a classification dataset

            simpletransformers Key Features

            No Key Features are available at this moment for simpletransformers.

            simpletransformers Examples and Code Snippets

            How To Train-From-Scratch
             Python · 3 lines of code · No License
            python train_tokenizer.py
            
             model = BartModel(pretrained_model=None, args=model_args, model_config='config.json', vocab_file="./tokenize")
            
            python train.py
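
             For a fuller picture, simpletransformers' own LanguageModelingModel also supports training a model (and its tokenizer) from scratch. The sketch below is illustrative rather than taken from the snippet above; the model type, file paths, and hyperparameters are assumptions.

             from simpletransformers.language_modeling import LanguageModelingModel

             model_args = {
                 "vocab_size": 52000,          # size of the tokenizer vocabulary trained from the corpus
                 "num_train_epochs": 1,
                 "overwrite_output_dir": True,
             }

             # Passing None as model_name means training from scratch;
             # train_files is used to train the new tokenizer.
             model = LanguageModelingModel("electra", None, args=model_args, train_files="train.txt")
             model.train_model("train.txt", eval_file="eval.txt")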
              
            sentence-transformers - train sts qqp crossdomain
             Python · 119 lines of code · License: Apache License 2.0 (Non-SPDX)
            """
            The script shows how to train Augmented SBERT (Domain-Transfer/Cross-Domain) strategy for STSb-QQP dataset.
            For our example below we consider STSb (source) and QQP (target) datasets respectively.
            
            Methodology:
            Three steps are followed for AugSBER  
            sentence-transformers - train sts indomain bm25
             Python · 117 lines of code · License: Apache License 2.0 (Non-SPDX)
            """
            The script shows how to train Augmented SBERT (In-Domain) strategy for STSb dataset with BM25 sampling.
             We utilise easy and practical elasticsearch (https://www.elastic.co/) for BM25 sampling.
            
            Installations:
            For this example, elasticsearch to be   
            sentence-transformers - train sts indomain semantic
             Python · 116 lines of code · License: Apache License 2.0 (Non-SPDX)
            """
            The script shows how to train Augmented SBERT (In-Domain) strategy for STSb dataset with Semantic Search Sampling.
            
            
            Methodology:
            Three steps are followed for AugSBERT data-augmentation strategy with Semantic Search - 
                1. Fine-tune cross-enco  
            Simple Transformers producing nothing?
             Python · 15 lines of code · License: CC BY-SA 4.0 (Strong Copyleft)
            model = Seq2SeqModel(
                encoder_decoder_type="marian",
                encoder_decoder_name="Helsinki-NLP/opus-mt-en-mul",
                args=args,
                use_cuda=True,
            )
            
            # Input
            to_predict = ["They went to the public swimming pool.", "
            unable to mmap 1024 bytes - Cannot allocate memory - even though there is more than enough ram
             Python · 2 lines of code · License: CC BY-SA 4.0 (Strong Copyleft)
            model_args.use_multiprocessing = False
            
            why is my fastapi or uvicorn getting shutdown?
             Python · 23 lines of code · License: CC BY-SA 4.0 (Strong Copyleft)
            from multiprocessing import set_start_method
            from multiprocessing import Process, Manager
            try:
                set_start_method('spawn')
            except RuntimeError:
                pass
            @app.get("/article_classify")
            def classification(text:str):
                """function to class
            How can I specify a training set and test set from separate dataframes?
             Python · 9 lines of code · License: CC BY-SA 4.0 (Strong Copyleft)
             from sklearn.model_selection import train_test_split

             # create X (placeholder column names; replace with your feature columns)
             X = df[["feature_1", "feature_2"]]
             # create y (placeholder column name; replace with your target column)
             y = df["label"]
             # split into train and test sets
             X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123, stratify=y)

             # alternatively, combine the two separate dataframes first
             df = df1.append(df2)
            

            Community Discussions

            QUESTION

            Simple Transformers producing nothing?
            Asked 2022-Feb-22 at 11:54

            I have a simple transformers script looking like this.

            ...

            ANSWER

            Answered 2022-Feb-22 at 11:54

            Use this model instead.

            Source https://stackoverflow.com/questions/71200243

            QUESTION

            SimpleTransformers "max_seq_length" argument results in CUDA out of memory error in Kaggle and Google Colab
            Asked 2022-Jan-02 at 14:09

             When fine-tuning the sloBERTa Transformer model, based on CamemBERT, for a multiclass classification task with SimpleTransformers, I want to use the model argument "max_seq_length": 512, as previous work states that it gives better results than 128, but including this argument triggers the error below. The error is the same in the Kaggle and Google Colab environments, and terminating the execution and rerunning it does not help. The error is triggered no matter how small the number of training epochs is, and the dataset contains only 600 instances (with text as strings, and labels as integers). I've tried lowering max_seq_length to 509, 500 and 128, but the error persists.

            The setup without this argument works normally and allows training with 90 epochs, so I otherwise have enough memory.

            ...

            ANSWER

            Answered 2022-Jan-02 at 13:52

             This happened because max_seq_length defines the number of input neurons for the model, thus increasing the number of trainable parameters, which requires more memory to be allocated and may exceed the memory limits on those platforms.

             Most of the time, max_seq_length is up to the dataset, and setting it too high can be wasteful in terms of training time and model size.

             What you can do is find the maximum number of words per sample in your training dataset and use that as your max_seq_length.
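
             A minimal sketch of that suggestion, assuming a pandas DataFrame train_df with the text in a "text" column; the model name (EMBEDDIA/sloberta), the column name, and the label count are assumptions based on the question.

             from transformers import AutoTokenizer
             from simpletransformers.classification import ClassificationModel, ClassificationArgs

             tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/sloberta")

             # Longest training sample measured in tokens rather than raw words
             longest = max(len(tokenizer.encode(text)) for text in train_df["text"])

             model_args = ClassificationArgs()
             model_args.max_seq_length = min(longest, 512)  # never exceed the model's positional limit

             model = ClassificationModel("camembert", "EMBEDDIA/sloberta", num_labels=3, args=model_args)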

            Source https://stackoverflow.com/questions/70556326

            QUESTION

             Using the GPU with Simple Transformers MT5 training
            Asked 2021-Nov-13 at 13:10

             MT5 fine-tuning does not use the GPU (volatile GPU util 0%).

             Hi, I'm trying to fine-tune the mt5-base model for ko-en translation. I think the CUDA setup was done correctly (cuda available is True), but during training the GPU is not used except when the dataset is first loaded (a very short time).

             I want to use the GPU resources efficiently and would like advice about fine-tuning the translation model. Here are my code and training environment.

            ...

            ANSWER

            Answered 2021-Nov-11 at 09:26

             It was just an out-of-memory case. The parameters and dataset weren't loaded into my GPU memory, so I changed the model from mt5-base to mt5-small, deleted the save point, and reduced the dataset.
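
             A minimal sketch of that fix using simpletransformers' T5Model; the batch size and sequence length are illustrative values, not taken from the original answer.

             from simpletransformers.t5 import T5Model, T5Args

             model_args = T5Args()
             model_args.train_batch_size = 4   # reduce further if GPU memory is still exhausted
             model_args.max_seq_length = 128   # shorter sequences also lower memory use

             # mt5-small instead of mt5-base so the parameters fit in GPU memory
             model = T5Model("mt5", "google/mt5-small", args=model_args, use_cuda=True)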

            Source https://stackoverflow.com/questions/69923334

            QUESTION

             How to log artifacts in wandb while using simpletransformers?
            Asked 2021-Oct-20 at 11:26

            I am creating a Question Answering model using simpletransformers. I would also like to use wandb to track model artifacts. As I understand from wandb docs, there is an integration touchpoint for simpletransformers but there is no mention of logging artifacts.

             I would like to log artifacts generated in the train, validation, and test phases, such as train.json, eval.json, test.json, output/nbest_predictions_test.json, and the best performing model.

            ...

            ANSWER

            Answered 2021-Oct-20 at 11:26

            Currently simpleTransformers doesn't support logging artifacts within the training/testing scripts. But you can do it manually:
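
             A minimal sketch of the manual approach using the wandb API; the project and artifact names are illustrative, while the file paths follow the question above.

             import wandb

             run = wandb.init(project="simpletransformers-qa")

             artifact = wandb.Artifact("qa-run-files", type="dataset")
             artifact.add_file("train.json")
             artifact.add_file("eval.json")
             artifact.add_file("test.json")
             artifact.add_file("output/nbest_predictions_test.json")
             run.log_artifact(artifact)

             run.finish()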

            Source https://stackoverflow.com/questions/69640534

            QUESTION

            How to get probability of an answer using BERT model and is there a way to ask multiple questions for a context
            Asked 2021-Aug-28 at 13:27

             I am new to AI models and currently experimenting with the Q&A models. In particular, I am interested in the following 2 models:
             1. from transformers import BertForQuestionAnswering
             2. from simpletransformers.question_answering import QuestionAnsweringModel

             Using option 1, BertForQuestionAnswering, I am getting the desired results. However, I can ask only one question at a time, and I am not getting the probability of the answer.

             Below is the code for BertForQuestionAnswering from transformers.

            ...

            ANSWER

            Answered 2021-Aug-28 at 13:27

            You can use the huggingface question answering pipeline to achieve that:
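
             A minimal sketch of that approach; the model name, context, and questions are illustrative. The pipeline accepts a list of questions for the same context, and each result carries a score field with the answer probability.

             from transformers import pipeline

             qa = pipeline(
                 "question-answering",
                 model="bert-large-uncased-whole-word-masking-finetuned-squad",
             )

             context = "Simple Transformers is a library built on top of Hugging Face Transformers."
             questions = [
                 "What is Simple Transformers built on?",
                 "What kind of library is Simple Transformers?",
             ]

             # Passing lists runs every question against the context in one call
             results = qa(question=questions, context=[context] * len(questions))
             for question, result in zip(questions, results):
                 print(question, "->", result["answer"], f"(score={result['score']:.3f})")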

            Source https://stackoverflow.com/questions/68747152

            QUESTION

            unable to mmap 1024 bytes - Cannot allocate memory - even though there is more than enough ram
            Asked 2021-Jun-14 at 11:16

             I'm currently working on a seminar paper on NLP, summarization of source-code function documentation. I've therefore created my own dataset with ca. 64000 samples (37453 is the size of the training dataset) and I want to fine-tune the BART model. For this I use the package simpletransformers, which is based on the huggingface package. My dataset is a pandas dataframe. An example of my dataset:

            My code:

            ...

            ANSWER

            Answered 2021-Jun-08 at 08:27

             While I do not know how to deal with this problem directly, I had a somewhat similar issue (and solved it). The differences are:

             • I use fairseq
             • I can run my code on Google Colab with 1 GPU
             • I got RuntimeError: unable to mmap 280 bytes from file : Cannot allocate memory (12) immediately when I tried to run it on multiple GPUs.

             From other people's code, I found that they use python -m torch.distributed.launch -- ... to run fairseq-train, and after I added it to my bash script the RuntimeError was gone and training proceeded.

             So I guess if you can run with 21000 samples, you may be able to use torch.distributed to split the whole dataset into small batches and distribute them to several workers.

            Source https://stackoverflow.com/questions/67876741

            QUESTION

            Default SimpleTransformers setup fails with ValueError str
            Asked 2021-May-31 at 21:12

            I'm trying to use SimpleTransformers default setup to do multi-task learning.

            I am using the example from their website here

            The code looks like below:

            ...

            ANSWER

            Answered 2021-May-30 at 17:54

            In the example code if you change

            Source https://stackoverflow.com/questions/67755780

            QUESTION

            why is my fastapi or uvicorn getting shutdown?
            Asked 2021-Feb-08 at 20:15

             I am trying to run a service that uses a Simple Transformers Roberta model to do classification. The inferencing script/function itself works as expected when tested, but when I include it with FastAPI it shuts down the server.

            ...

            ANSWER

            Answered 2021-Jan-10 at 11:49

             Put the entire function under a try-except block and show the output so we can investigate the real issue.
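
             A minimal sketch of that suggestion; the endpoint path follows the question, while classify() stands in for the existing inference function and is purely illustrative.

             import traceback
             from fastapi import FastAPI, HTTPException

             app = FastAPI()

             @app.get("/article_classify")
             def classification(text: str):
                 try:
                     return {"label": classify(text)}  # classify() is the existing inference function
                 except Exception:
                     traceback.print_exc()             # surfaces the real error in the server logs
                     raise HTTPException(status_code=500, detail="classification failed")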

            Source https://stackoverflow.com/questions/65505710

            QUESTION

            SimpleTransformers Error: VersionConflict: tokenizers==0.9.4? How do I fix this?
            Asked 2021-Jan-29 at 14:27

             I'm trying to execute the simpletransformers example from their site on Google Colab.

            Example:

            ...

            ANSWER

            Answered 2021-Jan-29 at 14:27

             I am putting this here in case someone faces the same problem. I was helped by the creator himself.
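
             The exact commands from that exchange are not preserved here; a common Colab workaround for this kind of VersionConflict is to reinstall the library and pin the dependency named in the error, then restart the runtime. Shown only as an illustrative sketch:

             # Run in a Colab cell, then restart the runtime so the pinned version is picked up
             !pip install --upgrade simpletransformers
             !pip install tokenizers==0.9.4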

            Source https://stackoverflow.com/questions/65924090

            QUESTION

            BERT always predicts same class (Fine-Tuning)
            Asked 2020-Nov-07 at 13:37

            I am fine-tuning BERT on a financial news dataset. Unfortunately BERT seems to be trapped in a local minimum. It is content with learning to always predict the same class.

             • balancing the dataset didn't work
             • tuning parameters didn't work either

             I am honestly not sure what is causing this problem. With the simpletransformers library I am getting very good results. I would really appreciate it if somebody could help me. Thanks a lot!

            Full code on github: https://github.com/Bene939/BERT_News_Sentiment_Classifier

            Code:

            ...

            ANSWER

            Answered 2020-Nov-07 at 13:37

             For multi-class classification/sentiment analysis using BERT, the 'neutral' class has to be label 2; it cannot be label 1, sitting between 'negative' = 0 and 'positive'.
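
             A minimal sketch of that label mapping, assuming a pandas DataFrame df with a string "sentiment" column (the column names are illustrative):

             label_map = {"negative": 0, "positive": 1, "neutral": 2}
             df["labels"] = df["sentiment"].map(label_map)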

            Source https://stackoverflow.com/questions/64675655

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install simpletransformers

            You can install using 'pip install simpletransformers' or download it from GitHub, PyPI.
            You can use simpletransformers like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
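
             As a quick sanity check after installation, a minimal classification example can be run; the model choice and toy data below are illustrative rather than the project's canonical example.

             import pandas as pd
             from simpletransformers.classification import ClassificationModel, ClassificationArgs

             # simpletransformers expects "text" and "labels" columns
             train_df = pd.DataFrame(
                 [["great movie", 1], ["terrible plot", 0]], columns=["text", "labels"]
             )

             model_args = ClassificationArgs(num_train_epochs=1, overwrite_output_dir=True)
             model = ClassificationModel("roberta", "roberta-base", args=model_args, use_cuda=False)

             model.train_model(train_df)
             predictions, raw_outputs = model.predict(["a wonderful film"])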

            Support

             For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Install
          • PyPI

            pip install simpletransformers

           • Clone (HTTPS)

            https://github.com/ThilinaRajapakse/simpletransformers.git

          • CLI

            gh repo clone ThilinaRajapakse/simpletransformers

           • Clone (SSH)

            git@github.com:ThilinaRajapakse/simpletransformers.git


            Consider Popular Natural Language Processing Libraries

             • transformers by huggingface
             • funNLP by fighting41love
             • bert by google-research
             • jieba by fxsjy
             • Python by geekcomputers

            Try Top Libraries by ThilinaRajapakse

             • pytorch-transformers-classification by ThilinaRajapakse (Jupyter Notebook)
             • low-resource-language-models by ThilinaRajapakse (Python)
             • dense-retrieval by ThilinaRajapakse (Jupyter Notebook)
             • CS105_Solutions by ThilinaRajapakse (Java)
             • Gosto by ThilinaRajapakse (JavaScript)