keras-bert | can load official pre-trained models | Natural Language Processing library

by CyberZHG | Python | Version: 0.89.0 | License: MIT

kandi X-RAY | keras-bert Summary

keras-bert is a Python library typically used in Artificial Intelligence, Natural Language Processing, Deep Learning, PyTorch, BERT, Neural Network, and Transformer applications. keras-bert has no bugs or reported vulnerabilities, a build file is available, it has a permissive license, and it has medium support. You can install it with 'pip install keras-bert' or download it from GitHub or PyPI.

An implementation of BERT. Official pre-trained models can be loaded for feature extraction and prediction.
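For example, feature extraction from an official checkpoint can be done with the library's extract_embeddings helper. A minimal sketch; the checkpoint directory path below is a placeholder for a downloaded and uncompressed Google BERT release:

# Feature extraction with keras-bert (sketch; model_path is a placeholder).
from keras_bert import extract_embeddings

model_path = 'uncased_L-12_H-768_A-12'
texts = ['all work and no play', 'makes jack a dull boy']
embeddings = extract_embeddings(model_path, texts)  # one array of token vectors per text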

Support

keras-bert has a moderately active ecosystem.
It has 2411 stars, 514 forks, and 61 watchers.
It has had no major release in the last 12 months.
There are 0 open issues and 201 closed issues; on average, issues are closed in 27 days. There are no open pull requests.
It has a neutral sentiment in the developer community.
The latest version of keras-bert is 0.89.0.

Quality

              keras-bert has 0 bugs and 0 code smells.

Security

              keras-bert has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              keras-bert code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              keras-bert is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

keras-bert has no tagged releases on GitHub, so you can build from source, but a deployable package is also available on PyPI.
A build file is available, so you can build the component from source.
Installation instructions, examples, and code snippets are available.
keras-bert saves you 904 person-hours of effort over developing the same functionality from scratch.
It has 2030 lines of code, 113 functions, and 44 files.
It has medium code complexity; code complexity directly impacts maintainability.

            Top functions reviewed by kandi - BETA

kandi has reviewed keras-bert and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality keras-bert implements and to help you decide whether it suits your requirements.
            • Loads a trained model from a checkpoint file
            • Get a trained model
            • Loads weights from a checkpoint file
            • Build a trained model from a config file
            • Encodes two tokens
            • Tokenize text
            • Pack two tokens
            • Checks if the character is a CJK character
• Get a pre-trained model
            • Get embedding layer
            • Get input tensors
            • Evaluate the function
            • Compute the masked mask
            • Match text
            • Tokenize two tokens
            • Load a vocabulary
            • Return checkpoint paths
            • Find the version string
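Many of these correspond to the library's public loading and tokenization API. A minimal sketch of how they fit together, assuming a downloaded Google BERT checkpoint (all paths below are placeholders):

# Sketch: load an official checkpoint and tokenize a sentence.
from keras_bert import load_trained_model_from_checkpoint, load_vocabulary, Tokenizer

config_path = 'uncased_L-12_H-768_A-12/bert_config.json'     # placeholder path
checkpoint_path = 'uncased_L-12_H-768_A-12/bert_model.ckpt'  # placeholder path
vocab_path = 'uncased_L-12_H-768_A-12/vocab.txt'             # placeholder path

model = load_trained_model_from_checkpoint(config_path, checkpoint_path)
token_dict = load_vocabulary(vocab_path)                     # token -> id map
tokenizer = Tokenizer(token_dict)
indices, segments = tokenizer.encode('all work and no play', max_len=512)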

            keras-bert Key Features

            No Key Features are available at this moment for keras-bert.

            keras-bert Examples and Code Snippets

Keras-Bert-Ner, Train Phase
Python | Lines of Code: 87 | License: Permissive (MIT)
[
    [
        "Exposing the Qubu (趣步) scam: what is Qubu, and how does Qubu make money? Is the Qubu company reliable? Is Qubu legal? These are surely the questions that most concern everyone. Today we pull back Qubu's 'ugly' yet mysterious veil so everyone can see the truth of the matter. Next, in plain language, I will break down in detail the logic behind the Qubu company and the Qubu app. In 3 minutes... full text: Exposing the Qubu scam: what is Qubu, and how does Qubu make money? Is the Qubu company reliable? Is Qubu legal? These are surely the questions that most concern everyone. Today we pull back Qubu's 'ugly' yet mysterious veil so everyone can see
Keras-Bert-Ner, Training, Parameters
Python | Lines of Code: 79 | License: Permissive (MIT)
            (nlp) liushaoweihua@ai-server-6:~/jupyterlab/Keras-Bert-Ner$ python keras_bert_ner/train/help.py --help
            usage: help.py [-h] -train_data TRAIN_DATA [-dev_data DEV_DATA]
                           [-save_path SAVE_PATH] [-albert] -bert_config BERT_CONFIG
                       
Keras-Bert-Ner, Training, Example
Python | Lines of Code: 55 | License: Permissive (MIT)
            PRETRAINED_LM_DIR="/home1/liushaoweihua/pretrained_lm/albert_tiny_250k" # your pretrained language model path
            DATA_DIR="../data" # your train/dev data path
            OUTPUT_DIR="../models" # where to store the NER model
            
            python run_train.py \
                -train_data=$  
How to install keras-bert? (PackagesNotFoundError: The following packages are not available from current channels)
            name: bert_env
            channels:
              - defaults
            dependencies:
              - numpy
              - keras
              - pip
              - pip:
                - keras-bert
            
            conda env create -f bert_env.yaml
            
            > reticulate::use_condaenv("bert_env", require
TypeError: 'Tensor' object is not callable | Keras-Bert
Python | Lines of Code: 22 | License: Strong Copyleft (CC BY-SA 4.0)
# Wrong: keras.layers.Input() returns a tensor, and a tensor is not callable.
input_layer = keras.layers.Input(shape=(SEQ_LEN, 768))(layer_output)

# Layers are callable on tensors; tensors themselves are not:
tensor_instance = Layer(...)(tensor_instance)

# So apply the Conv1D layer directly to the existing output tensor:
conv_layer_output_tensor = Conv1D(...)(layer_output)
            
How to load and predict with a tensorflow model saved from save_weights?
Python | Lines of Code: 2 | License: Strong Copyleft (CC BY-SA 4.0)
# Pass the custom layer class so the loader can reconstruct it:
load_model(..., custom_objects={'BertLayer': BertLayer})
            
Manipulating tensorflow code to add different layers
Python | Lines of Code: 13 | License: Strong Copyleft (CC BY-SA 4.0)
            embedding_size = 768
            in_id = Input(shape=(max_seq_length,), name="input_ids") 
            in_mask = Input(shape=(max_seq_length,), name="input_masks")
            in_segment = Input(shape=(max_seq_length,), name="segment_ids")
            
            bert_inputs = [in_id, in_mask, in_
Training a BERT-based model causes an OutOfMemory error. How do I fix this?
Python | Lines of Code: 40 | License: Strong Copyleft (CC BY-SA 4.0)
            question_indices_layer = Input(shape=(256,), dtype='float16')
            question_segments_layer = Input(shape=(256,), dtype='float16')
            context_indices_layer = Input(shape=(256,), dtype='float16')
            context_segments_layer = Input(shape=(256,), dtype='f
Training a BERT-based model causes an OutOfMemory error. How do I fix this?
Python | Lines of Code: 15 | License: Strong Copyleft (CC BY-SA 4.0)
            System       | Seq Length | Max Batch Size
            ------------ | ---------- | --------------
            `BERT-Base`  | 64         | 64
            ...          | 128        | 32
            ...          | 256        | 16
            ...          | 320        | 14
            ...          | 384        | 1
# 'mean' pooling branch: keep the full sequence output for later averaging.
elif self.pooling == "mean":
    result = self.bert(inputs=bert_inputs, signature="tokens", as_dict=True)["sequence_output"]
    pooled = result
            
            embedding_size = 768
            in_id = Input(shape=(max_seq_length,), name="

            Community Discussions

            QUESTION

            loss is NaN when using keras bert for classification
            Asked 2021-May-12 at 02:14

            I'm using keras-bert for classification. On some datasets, it runs well and calculates the loss, while on others the loss is NaN.

The different datasets are similar in that they are augmented versions of the original one. With keras-bert, the original data and some of the augmented versions run fine, while other augmented versions don't.

When I use a regular one-layer BiLSTM on the augmented versions that don't run well with keras-bert, it works out fine, which means I can rule out the possibility of the data being faulty or containing spurious values that affect how the loss is calculated. The data I'm working with has three classes.

I'm using BERT base uncased.

            ...

            ANSWER

            Answered 2021-May-08 at 16:05

I noticed one issue in your code, but I'm not sure it's the main cause; it would be better if you could provide some reproducible code.

In your code snippet, you use a sigmoid activation in the last layer with more than one unit, which suggests the problem is probably multi-label; in that case the loss function should be binary_crossentropy, but you set sparse_categorical_crossentropy, which is typically used for multi-class problems with integer labels.
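For concreteness, a minimal sketch of the two consistent pairings; the 768-dimensional input below is a stand-in for the BERT features, not the asker's actual code:

# Two consistent activation/loss pairings for a 3-class problem.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense

features = Input(shape=(768,))  # stand-in for the BERT features

# Multi-class: one label per sample, integer-encoded targets.
multiclass = Model(features, Dense(3, activation='softmax')(features))
multiclass.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Multi-label: each class independently on/off, 0/1 vector targets.
multilabel = Model(features, Dense(3, activation='sigmoid')(features))
multilabel.compile(optimizer='adam', loss='binary_crossentropy')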

            Source https://stackoverflow.com/questions/67378194

            QUESTION

            How to install keras-bert? (PackagesNotFoundError: The following packages are not available from current channels)
            Asked 2020-Jun-24 at 06:40

            I'm trying to install keras-bert as explained here: BERT from R. This tutorial shows how to load and train the BERT model from R, using Keras.

            But when, in Anaconda prompt (Windows), I run:

            ...

            ANSWER

            Answered 2020-Jun-24 at 06:40
            Install from YAML in new env

Since this requires mixing PyPI packages with Conda, the best practice is to create a dedicated environment from a YAML file. You may need additional version constraints to reproduce the tutorial's setup, but this YAML was sufficient for me to complete the first steps:

            bert_env.yaml
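After creating the environment with the conda env create command shown in the snippets above, a quick sanity check from Python inside the new environment. A minimal sketch; it assumes the package exposes __version__ (pip show keras-bert works regardless):

# Run inside bert_env to confirm the install.
import keras_bert
print(keras_bert.__version__)  # e.g. 0.89.0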

            Source https://stackoverflow.com/questions/62532838

            QUESTION

            How to load and predict with a tensorflow model saved from save_weights?
            Asked 2020-Feb-05 at 20:30

            I am running a fairly customized tensorflow model from the following repo:

            https://github.com/strongio/keras-bert/blob/master/keras-bert.py

            ...

            ANSWER

            Answered 2020-Feb-05 at 19:02

            Well, you literally reconstruct the entire model, exactly the same way you constructed it for the first time. It seems build_model contains it entirely.

            Then you do model.load_weights(path).

            Your approach will not save the optimizer, though. If you want to "continue" training a loaded model, you'd better have the optimizer saved.

To use model.save, you just need to write a get_config method for the BertLayer. You can find plenty of examples of how to write this method by looking at how Keras implements it in its own layers.

Remember that the model loader doesn't know your layer; you have to inform it, as in the sketch below.
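A minimal sketch of both pieces, assuming a custom layer named BertLayer with an n_fine_tune_layers constructor argument (the name used in the strongio script the question links to); the saved-model path is a placeholder:

import tensorflow as tf

class BertLayer(tf.keras.layers.Layer):
    def __init__(self, n_fine_tune_layers=10, **kwargs):
        self.n_fine_tune_layers = n_fine_tune_layers
        super().__init__(**kwargs)

    def get_config(self):
        # Serialize constructor arguments so load_model can rebuild the layer.
        config = super().get_config()
        config['n_fine_tune_layers'] = self.n_fine_tune_layers
        return config

# Inform the loader about the custom class ('model.h5' is a placeholder):
model = tf.keras.models.load_model('model.h5',
                                   custom_objects={'BertLayer': BertLayer})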

            Source https://stackoverflow.com/questions/60066950

            QUESTION

            Manipulating tensorflow code to add different layers
            Asked 2020-Jan-16 at 23:09

            I am experimenting with BERT embeddings for text classification. I am using this code that creates a BERT embedding layer and a dense layer for binary classification.

            ...

            ANSWER

            Answered 2020-Jan-16 at 23:09

First, make the batch size smaller.

Then change to the following, which adds a GlobalMaxPooling1D layer to flatten the sequence output:
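A minimal sketch of that change, with an Input standing in for the BERT sequence output (the question's real tensors aren't reproduced here):

# Stand-in for the (batch, seq_len, 768) BERT sequence output.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import GlobalMaxPooling1D, Dense

max_seq_length, embedding_size = 256, 768
bert_output = Input(shape=(max_seq_length, embedding_size))
pooled = GlobalMaxPooling1D()(bert_output)       # flattens to (batch, 768)
pred = Dense(1, activation='sigmoid')(pooled)    # binary classification head
model = Model(bert_output, pred)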

            Source https://stackoverflow.com/questions/59757979

            QUESTION

            Training a BERT-based model causes an OutOfMemory error. How do I fix this?
            Asked 2020-Jan-10 at 07:58

            My setup has an NVIDIA P100 GPU. I am working on a Google BERT model to answer questions. I am using the SQuAD question-answering dataset, which gives me questions, and paragraphs from which the answers should be drawn, and my research indicates this architecture should be OK, but I keep getting OutOfMemory errors during training:

            ResourceExhaustedError: OOM when allocating tensor with shape[786432,1604] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
            [[{{node dense_3/kernel/Initializer/random_uniform/RandomUniform}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

            Below, please find a full program that uses someone else's implementation of Google's BERT algorithm inside my own model. Please let me know what I can do to fix my error. Thank you!

            ...

            ANSWER

            Answered 2020-Jan-10 at 07:58

Check out the out-of-memory issues section on their GitHub page.

Often it's because the batch size or sequence length is too large to fit in GPU memory. The table in the snippets above lists the maximum batch configurations for a 12 GB GPU, as given at that link.
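A toy sketch of the usual mitigation: shrink the sequence length and batch size until the model fits. The model below is a small stand-in, not BERT; the values follow the 12 GB table in the snippets above:

import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

SEQ_LEN = 128      # down from 256 in the question
BATCH_SIZE = 16    # BERT-Base fits ~32 at 128 tokens on 12 GB; stay below

inp = Input(shape=(SEQ_LEN,), dtype='int32')
x = Embedding(input_dim=30522, output_dim=64)(inp)   # stand-in for BERT
x = GlobalAveragePooling1D()(x)
out = Dense(3, activation='softmax')(x)
model = Model(inp, out)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

x_dummy = np.random.randint(0, 30522, size=(64, SEQ_LEN))
y_dummy = np.random.randint(0, 3, size=(64,))
model.fit(x_dummy, y_dummy, batch_size=BATCH_SIZE, epochs=1)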

            Source https://stackoverflow.com/questions/59617755

Community Discussions and Code Snippets include sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install keras-bert

Several download URLs have been added. You can get the downloaded and uncompressed path of a checkpoint as shown in the sketch below.
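A sketch of those helpers, with names taken from the keras-bert README (get_pretrained, PretrainedList, get_checkpoint_paths):

from keras_bert import get_pretrained, PretrainedList, get_checkpoint_paths

model_path = get_pretrained(PretrainedList.multi_cased_base)  # downloads if needed
paths = get_checkpoint_paths(model_path)
print(paths.config, paths.checkpoint, paths.vocab)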

            Support

Kashgari is a production-ready NLP transfer-learning framework for text labeling and text classification.
Keras ALBERT

            Install
          • PyPI

            pip install keras-bert

          • CLONE
          • HTTPS

            https://github.com/CyberZHG/keras-bert.git

          • CLI

            gh repo clone CyberZHG/keras-bert

          • sshUrl

            git@github.com:CyberZHG/keras-bert.git



Consider Popular Natural Language Processing Libraries

• transformers by huggingface
• funNLP by fighting41love
• bert by google-research
• jieba by fxsjy
• Python by geekcomputers

Try Top Libraries by CyberZHG

• toolbox by CyberZHG (JavaScript)
• keras-self-attention by CyberZHG (Python)
• CLRS by CyberZHG (Jupyter Notebook)
• keras-transformer by CyberZHG (Python)
• keras-radam by CyberZHG (Python)