sentence-transformers | Multilingual Sentence & Image Embeddings with BERT | Natural Language Processing library

 by UKPLab · Python · Version: 2.7.0 · License: Apache-2.0

kandi X-RAY | sentence-transformers Summary

sentence-transformers is a Python library typically used in Artificial Intelligence, Natural Language Processing, Deep Learning, PyTorch, TensorFlow, BERT and Transformer applications. sentence-transformers has no reported bugs or vulnerabilities, a build file is available, it has a permissive license, and it has medium support. You can install it using 'pip install sentence-transformers' or download it from GitHub or PyPI.

This framework provides an easy method to compute dense vector representations for sentences, paragraphs, and images. The models are based on transformer networks like BERT / RoBERTa / XLM-RoBERTa and achieve state-of-the-art performance on various tasks. Text is embedded in a vector space such that similar texts are close together and can be found efficiently using cosine similarity. We provide a growing number of state-of-the-art pretrained models for more than 100 languages, fine-tuned for various use cases. Further, the framework allows easy fine-tuning of custom embedding models, to achieve maximal performance on your specific task. For the full documentation, see www.SBERT.net.

            Support

              sentence-transformers has a medium active ecosystem.
              It has 10938 star(s) with 2030 fork(s). There are 120 watchers for this library.
              There were 8 major release(s) in the last 6 months.
              There are 928 open issues and 782 have been closed. On average, issues are closed in 57 days. There are 59 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of sentence-transformers is 2.7.0

            Quality

              sentence-transformers has 0 bugs and 0 code smells.

            Security

              sentence-transformers has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              sentence-transformers code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              sentence-transformers is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              sentence-transformers releases are available to install and integrate.
              Deployable package is available in PyPI.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.
              sentence-transformers saves you 3939 person hours of effort in developing the same functionality from scratch.
              It has 12029 lines of code, 425 functions and 182 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed sentence-transformers and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality sentence-transformers implements, and to help you decide if it suits your requirements.
            • Fit a learning objective function
            • Create model card
            • Evaluate with the given evaluator
            • Save the SentenceModel to a file
            • Fit the optimizer
            • Save the pretrained model
            • Save model to file
            • Evaluate the given evaluator
            • Loads data from a file
            • Start a multi-process pool
            • Collate a batch of texts
            • Save a pretrained model
            • Get input examples from a file
            • Get input examples from a CSV file
            • Reads an evaluation dataset
            • Perform kNN search
            • Save the trained model
            • Forward the encoder
            • Compute the degree centrality score
            • Forward mini batch prediction
            • Encodes the model
            • Save the model to the hub
            • Performs community detection on embedding
            • Predict given sentences
            • Encode sentences using multiple processes
            • Load embeddings from a text file
            • Loads the Sbert model
            • Load TREC dataset

            sentence-transformers Key Features

            No Key Features are available at this moment for sentence-transformers.

            sentence-transformers Examples and Code Snippets

            Publications
             Python · Lines of Code: 67 · License: Permissive (Apache-2.0)
             @inproceedings{reimers-2019-sentence-bert,
                 title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
                 author = "Reimers, Nils and Gurevych, Iryna",
                 booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
                 year = "2019",
                 url = "https://arxiv.org/abs/1908.10084",
             }
            Semantic Textual Similarity
             Python · Lines of Code: 54 · License: Permissive (Apache-2.0)
             from sentence_transformers import SentenceTransformer, util
             model = SentenceTransformer('all-MiniLM-L6-v2')
             
             # Two lists of sentences
             sentences1 = ['The cat sits outside',
                           'A man is playing guitar',
                           'The new movie is awesome']
             sentences2 = ['The dog plays in the garden',
                           'A woman watches TV',
                           'The new movie is so great']
             # Compute embeddings and the pairwise cosine similarities
             embeddings1 = model.encode(sentences1, convert_to_tensor=True)
             embeddings2 = model.encode(sentences2, convert_to_tensor=True)
             cosine_scores = util.cos_sim(embeddings1, embeddings2)
            Quickstart-Comparing Sentence Similarities
             Python · Lines of Code: 41 · License: Permissive (Apache-2.0)
             from sentence_transformers import SentenceTransformer, util
             model = SentenceTransformer('all-MiniLM-L6-v2')
             
             #Sentences are encoded by calling model.encode()
             emb1 = model.encode("This is a red cat with a hat.")
             emb2 = model.encode("Have you seen my red cat?")
             
             cos_sim = util.cos_sim(emb1, emb2)
             print("Cosine-Similarity:", cos_sim)
            sentence-transformers - create splits
             Python · Lines of Code: 234 · License: Non-SPDX (Apache License 2.0)
            """
            The Quora Duplicate Questions dataset contains questions pairs from Quora (www.quora.com)
             along with a label indicating whether the two questions are duplicates, i.e., have an identical intention.
            
            Example of a duplicate pair:
            How do I enhance my English?    
            sentence-transformers - train bi encoder margin mse
             Python · Lines of Code: 168 · License: Non-SPDX (Apache License 2.0)
            import sys
            import json
            from torch.utils.data import DataLoader
            from sentence_transformers import SentenceTransformer, LoggingHandler, util, models, evaluation, losses, InputExample
            import logging
            from datetime import datetime
            import gzip
            import os
            im  
            sentence-transformers - train bi encoder mnrl
             Python · Lines of Code: 162 · License: Non-SPDX (Apache License 2.0)
            """
             This example shows how to train a Bi-Encoder for the MS MARCO dataset (https://github.com/microsoft/MSMARCO-Passage-Ranking).
            
             The queries and passages are passed independently to the transformer network to produce fixed-size embeddings.
            These e  
            How to use metadata for document retrieval using Sentence Transformers?
             Python · Lines of Code: 48 · License: Strong Copyleft (CC BY-SA 4.0)
            pip install -U farm-haystack pinecone-client
            
            document_store = PineconeDocumentStore(
                api_key="", # from https://app.pinecone.io
                environment="us-west1-gcp"
            )
            retriever = EmbeddingRetriever(
                document_stor
            Find near duplicate and faked images
             Python · Lines of Code: 101 · License: Strong Copyleft (CC BY-SA 4.0)
            Score: 100.000%
            .\cat1 copy.jpg
            .\cat1.jpg
            
            Score: 91.116%
            .\cat1 copy.jpg
            .\cat2.jpg
            
            Score: 91.116%
            .\cat1.jpg
            .\cat2.jpg
            
            Score: 91.097%
            .\bear1.jpg
            .\bear2.jpg
            
            Score: 59.086%
            .\bear2.jpg
            .\cat2.jpg
            
            Score: 56.0
            Using sentence transformers with limited access to internet
             Python · Lines of Code: 2 · License: Strong Copyleft (CC BY-SA 4.0)
            ['1_Pooling', 'config_sentence_transformers.json', 'tokenizer.json', 'tokenizer_config.json', 'modules.json', 'sentence_bert_config.json', 'pytorch_model.bin', 'special_tokens_map.json', 'config.json', 'train_script.py', 'data_config.json'
            Using Sentence-Bert with other features in scikit-learn
             Python · Lines of Code: 19 · License: Strong Copyleft (CC BY-SA 4.0)
            X_train = pd.DataFrame({
                'tweet':['foo', 'foo', 'bar'],
                'feature1':[1, 1, 0],
                'feature2':[1, 0, 1],
            })
            y_train = [1, 1, 0]
            
            model = SentenceTransformer('mrm8488/bert-tiny-finetuned-squadv2') # model name

            Community Discussions

            QUESTION

            The last dimension of the inputs to a Dense layer should be defined. Found None. Full input shape received:
            Asked 2022-Mar-10 at 08:57

            I am having trouble when switching a model from some local dummy data to using a TF dataset.

            Sorry for the long model code, I have tried to shorten it as much as possible.

            The following works fine:

            ...

            ANSWER

            Answered 2022-Mar-10 at 08:57

             You will have to explicitly set the shapes of the tensors coming out of tf.py_function. Using None allows variable input lengths, but the BERT output dimension (384,) must be defined:
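
             A minimal sketch of that fix, assuming the dataset maps a tf.py_function that yields a (text, embedding) pair; the names here are illustrative, not from the question:
             
             import tensorflow as tf
             
             def restore_shapes(text, embedding):
                 # tf.py_function erases static shape information; set it back explicitly
                 text.set_shape([])            # scalar string input
                 embedding.set_shape([384])    # the BERT output dimension must be defined
                 return text, embedding
             
             # ds = ds.map(encode_with_py_function).map(restore_shapes)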

            Source https://stackoverflow.com/questions/71414627

            QUESTION

            Use `sentence-transformers` inside of a keras model
            Asked 2022-Mar-09 at 17:10

            I would like to use a model from sentence-transformers inside of a larger Keras model.

            Here is the full example:

            ...

            ANSWER

            Answered 2022-Mar-09 at 17:10

             tf.py_function does not seem to work with a dict output, which is why you can try returning three separate tensors instead. Also, I am decoding the inputs to remove the b prefix in front of each string:
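
             A hedged sketch of that pattern (the model name and output shape are assumptions, since the original code is truncated above):
             
             import tensorflow as tf
             from sentence_transformers import SentenceTransformer
             
             st_model = SentenceTransformer('all-MiniLM-L6-v2')
             
             def encode(texts):
                 # tf.py_function hands over byte strings; decode them to drop the b'' prefix
                 decoded = [t.decode('utf-8') for t in texts.numpy()]
                 return st_model.encode(decoded)
             
             def tf_encode(texts):
                 # return a plain float tensor rather than a dict
                 embeddings = tf.py_function(encode, [texts], tf.float32)
                 embeddings.set_shape([None, 384])
                 return embeddings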

            Source https://stackoverflow.com/questions/71411065

            QUESTION

            Using sentence transformers with limited access to internet
            Asked 2022-Jan-19 at 13:27

             I have access to the latest packages but I cannot access the internet from my Python environment.

            Package versions that I have are as below

            ...

            ANSWER

            Answered 2022-Jan-19 at 13:27

             Based on the things you mentioned, I checked the source code of sentence-transformers on Google Colab. After running the model and getting the files, I checked the directory and saw pytorch_model.bin there.
             
             And according to the sentence-transformers code: Link
             
             the flax_model.msgpack, rust_model.ot and tf_model.h5 files are ignored when it tries to download a model,
             
             and these are the files that it downloads:
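
             In practice, you can download the model once on a machine with internet access, copy the cached model folder to the offline machine, and load it by local path. A minimal sketch (the path is a placeholder):
             
             from sentence_transformers import SentenceTransformer
             
             # load from a local directory instead of downloading by model name
             model = SentenceTransformer('/path/to/local/all-MiniLM-L6-v2')
             embeddings = model.encode(["offline encoding works from a local folder"])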

            Source https://stackoverflow.com/questions/70716702

            QUESTION

            Using Sentence-Bert with other features in scikit-learn
            Asked 2021-Oct-16 at 12:47

             I have a dataset with one text feature and 4 additional features. A Sentence-BERT vectorizer transforms the text data into dense vectors. Can I use these vectors directly with a machine learning classifier, i.e., replace the text column with them? And how can I train the model? The code below shows how I transform the text into vectors.

            ...

            ANSWER

            Answered 2021-Oct-16 at 12:47

            Let's assume this is your data
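
             A sketch of the approach, reusing the data from the snippet above; the classifier choice (LogisticRegression) is an assumption for illustration:
             
             import numpy as np
             import pandas as pd
             from sentence_transformers import SentenceTransformer
             from sklearn.linear_model import LogisticRegression
             
             X_train = pd.DataFrame({
                 'tweet': ['foo', 'foo', 'bar'],
                 'feature1': [1, 1, 0],
                 'feature2': [1, 0, 1],
             })
             y_train = [1, 1, 0]
             
             model = SentenceTransformer('mrm8488/bert-tiny-finetuned-squadv2')
             embeddings = model.encode(X_train['tweet'].tolist())  # dense numpy array
             
             # concatenate the text embeddings with the remaining numeric features
             X = np.hstack([embeddings, X_train[['feature1', 'feature2']].values])
             clf = LogisticRegression().fit(X, y_train)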

            Source https://stackoverflow.com/questions/69585176

            QUESTION

            String comparison with BERT seems to ignore "not" in sentence
            Asked 2021-Sep-21 at 08:10

            I implemented a string comparison method using SentenceTransformers and BERT like following

            ...

            ANSWER

            Answered 2021-Sep-08 at 22:07

             The results are not surprising. You have passed two sentences which are very similar but have opposite meanings. The sentence embeddings come from a model trained on generic corpora, so the embeddings it produces are generally expected to be close to each other if the sentences are similar. And that is what is happening: the cosine similarity shows that the embeddings are close to each other because the sentences are similar. The sentences in the example may have opposite meanings, but they are still similar to each other.

             If you expect two similar sentences with opposite meanings to be far away from each other, then you have to further fine-tune the model on a classification task that separates them (such as sentiment analysis, if your examples are based on positive and negative sentiments), or on some other relevant task.
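
             A quick way to see this effect (the model name and sentences are illustrative):
             
             from sentence_transformers import SentenceTransformer, util
             
             model = SentenceTransformer('all-MiniLM-L6-v2')
             emb = model.encode(['I like this movie', 'I do not like this movie'])
             print(util.cos_sim(emb[0], emb[1]))  # typically high, despite the negation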

            Source https://stackoverflow.com/questions/69091576

            QUESTION

            Token indices sequence length Issue
            Asked 2021-Aug-19 at 20:18

            I am running a sentence transformer model and trying to truncate my tokens, but it doesn't appear to be working. My code is

            ...

            ANSWER

            Answered 2021-Aug-19 at 20:18

             You need to add the max_length parameter when calling the tokenizer, like below:

            text_tokens = tokenizer(text, padding=True, max_length=512, truncation=True, return_tensors="pt")

            Reason:

             truncation=True without the max_length parameter uses a sequence length equal to the maximum input length the model accepts.

            It is 1e30 or 1000000000000000019884624838656 for this model. You can check by printing out tokenizer.model_max_length.

            According to the Huggingface documentation about truncation,

            True or 'only_first' truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None).
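
             A self-contained version of the fix; the model name here is assumed for illustration:
             
             from transformers import AutoTokenizer
             
             tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
             text = ["a very long document " * 1000]
             text_tokens = tokenizer(text, padding=True, max_length=512,
                                     truncation=True, return_tensors="pt")
             print(text_tokens['input_ids'].shape)  # sequence dimension capped at 512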

            Source https://stackoverflow.com/questions/68850172

            QUESTION

            python packages not being installed on the virtual environment using ubuntu
            Asked 2021-Aug-18 at 18:11

            I have a requirements.txt file which holds all information of my python packages I need for my Flask application. Here is what I did:

            1. python3 -m venv venv
            2. source venv/bin/activate
            3. sudo pip install -r requirements.txt

             When I tried to check whether the packages were installed in the virtual environment using pip list, I did not see them. Can someone tell me what went wrong?

            ...

            ANSWER

            Answered 2021-Aug-18 at 18:05

             If you want to use Python 3 to install the packages, try pip3 install package_name.
             
             And to solve the errno 13 permission error, try adding --user at the end.
             
             Note also that running pip with sudo executes it as root, outside the activated virtual environment, which is why the packages did not show up in pip list inside the venv.

            Source https://stackoverflow.com/questions/68837021

            QUESTION

            AttributeError: Caught AttributeError in DataLoader worker process 0. - fine tuning pre-trained transformer model
            Asked 2021-Aug-17 at 13:41

            can anyone help me to resolve this error?

            ...

            ANSWER

            Answered 2021-Aug-17 at 13:41

             [Updated] I skimmed the documentation here on how to use the fit() method and realized there is a simpler way to do what you want. The only changes you need are to define proper InputExample objects for constructing a DataLoader, and to create a loss:
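
             A minimal sketch of that pattern (the model name and training pairs are made up for illustration):
             
             from torch.utils.data import DataLoader
             from sentence_transformers import SentenceTransformer, InputExample, losses
             
             model = SentenceTransformer('all-MiniLM-L6-v2')
             
             train_examples = [
                 InputExample(texts=['First sentence', 'Its paraphrase'], label=1.0),
                 InputExample(texts=['Another sentence', 'Unrelated text'], label=0.0),
             ]
             train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
             train_loss = losses.CosineSimilarityLoss(model)
             
             model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)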

            Source https://stackoverflow.com/questions/68760136

            QUESTION

            How to increase dimension-vector size of BERT sentence-transformers embedding
            Asked 2021-Aug-15 at 13:35

             I am using sentence-transformers for semantic search, but sometimes it does not understand the contextual meaning and returns wrong results, e.g. BERT problem with context/semantic search in italian language.
             
             By default the vector size of the sentence embedding is 78 columns, so how do I increase that dimension so that it can understand the contextual meaning in more depth?

            code:

            ...

            ANSWER

            Answered 2021-Aug-10 at 07:39

             Increasing the dimension of a trained model is not possible (without many difficulties and re-training the model). The model you are using was pre-trained with dimension 768, i.e., all weight matrices of the model have a corresponding number of trained parameters. Increasing the dimensionality would mean adding parameters, which would then need to be learned.

            Also, the dimension of the model does not reflect the amount of semantic or context information in the sentence representation. The choice of the model dimension reflects more a trade-off between model capacity, the amount of training data, and reasonable inference speed.

             If the model that you are using does not provide representations that are semantically rich enough, you might want to search for better models, such as RoBERTa or T5.
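
             You can check the embedding dimension of any model directly; a short sketch (model name assumed):
             
             from sentence_transformers import SentenceTransformer
             
             model = SentenceTransformer('all-MiniLM-L6-v2')
             print(model.get_sentence_embedding_dimension())  # e.g. 384 for this model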

            Source https://stackoverflow.com/questions/68686272

            QUESTION

            BERT problem with context/semantic search in italian language
            Asked 2021-Aug-09 at 19:33

             I am using a BERT model for context search in the Italian language, but it does not understand the contextual meaning of the sentence and returns wrong results.
             
             In the example code below, when I compare "milk with chocolate flavour" with two other types of milk and one chocolate, it returns a high similarity with the chocolate. It should return a high similarity with the other milks.
             
             Can anyone suggest any improvement to the code below so that it returns semantic results?

            Code :

            ...

            ANSWER

            Answered 2021-Aug-09 at 19:33

             The problem is not with your code; it is just insufficient model performance.
             
             There are a few things you can do. First, you can try the Universal Sentence Encoder (USE). From my experience its embeddings are a little bit better, at least in English.
             
             Second, you can try a different model, for example sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1. It is based on RoBERTa and might give better performance.
             
             You can also combine embeddings from several models (just by concatenating the representations). In some cases it helps, at the expense of much heavier compute.
             
             And finally you can create your own model. It is well known that single-language models perform significantly better than multilingual ones. You can follow the guide and train your own Italian model.
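
             A short sketch of swapping in the model suggested above (the Italian sentences are illustrative):
             
             from sentence_transformers import SentenceTransformer, util
             
             model = SentenceTransformer('sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1')
             
             sentences = ['latte al cioccolato', 'latte intero', 'cioccolato fondente']
             emb = model.encode(sentences, convert_to_tensor=True)
             print(util.cos_sim(emb[0:1], emb[1:]))  # similarity of the first sentence to the rest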

            Source https://stackoverflow.com/questions/68627093

             Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install sentence-transformers

             We recommend Python 3.6 or higher, PyTorch 1.6.0 or higher and transformers v4.6.0 or higher. The code does not work with Python 2.7.
             See the Quickstart in our documentation. This example shows you how to use an already trained Sentence Transformer model to embed sentences for another task: first download a pretrained model, then provide some sentences to the model, and that's it. You now have a list of numpy arrays with the embeddings.
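
             A minimal sketch of those steps (the model name is one of the pretrained models from the documentation):
             
             from sentence_transformers import SentenceTransformer
             
             model = SentenceTransformer('all-MiniLM-L6-v2')  # downloads a pretrained model
             sentences = ['This framework generates embeddings for each input sentence']
             embeddings = model.encode(sentences)  # one numpy vector per sentence
             print(embeddings.shape)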

            Support

             For any new features, suggestions and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Install
          • PyPI

            pip install sentence-transformers

          • Clone (HTTPS)

            https://github.com/UKPLab/sentence-transformers.git

          • CLI

            gh repo clone UKPLab/sentence-transformers

          • SSH

            git@github.com:UKPLab/sentence-transformers.git
