
transformers | Transformers: State-of-the-art Machine Learning | Natural Language Processing library

by huggingface | Python | Version: v4.18.0 | License: Apache-2.0


kandi X-RAY | transformers Summary

transformers is a Python library typically used in Artificial Intelligence, Natural Language Processing, Deep Learning, PyTorch, TensorFlow, BERT, and Transformer applications. transformers has no reported bugs or vulnerabilities, has a build file available, has a permissive license, and has medium support. You can install it with 'pip install transformers' or download it from GitHub or PyPI.
🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.

Support

  • transformers has a medium active ecosystem.
  • It has 61,400 stars, 14,587 forks, and 800 watchers.
  • There was 1 major release in the last 6 months.
  • There are 380 open issues and 8,748 closed issues. On average, issues are closed in 27 days. There are 142 open pull requests and 0 closed requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of transformers is v4.18.0.

Quality

  • transformers has 0 bugs and 0 code smells.

Security

  • transformers has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • transformers code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • transformers is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • transformers releases are available to install and integrate.
  • Deployable package is available in PyPI.
  • Build file is available. You can build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
  • It has 429963 lines of code, 21977 functions and 1495 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed transformers and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality transformers implements and help you decide whether it suits your requirements. A hedged beam-search sketch follows the list.

  • Generates a beam search output.
  • Perform beam search.
  • Performs BigBird block-sparse attention.
  • Instantiate a pipeline.
  • Fetches the given model.
  • Train a discriminator.
  • Perform beam search.
  • Convert a BORT checkpoint to PyTorch.
  • Convert a SegFormer checkpoint.
  • Wrapper for selftrain.
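Several of these top functions relate to text generation with beam search. As a hedged illustration (not taken from the library's own documentation), beam search is exposed through model.generate via the num_beams argument; the sketch below assumes PyTorch is installed and the public "t5-small" checkpoint can be downloaded.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load a small seq2seq checkpoint (assumption: "t5-small" is reachable).
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")

# num_beams > 1 switches generate() from greedy decoding to beam search.
outputs = model.generate(**inputs, num_beams=4, max_length=40, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))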

transformers Key Features

📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, text generation, in over 100 languages.

🖼️ Images, for tasks like image classification, object detection, and segmentation.

🗣️ Audio, for tasks like speech recognition and audio classification.
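The same pipeline API covers these non-text modalities as well. A minimal, hedged sketch (assuming PyTorch plus the optional vision/audio extras are installed and the default checkpoints can be downloaded; the file names are placeholders):

from transformers import pipeline

# Image classification on a local file or URL ("cat.jpg" is a placeholder path).
image_classifier = pipeline("image-classification")
print(image_classifier("cat.jpg"))

# Speech recognition on a local audio file ("speech.wav" is a placeholder path).
speech_recognizer = pipeline("automatic-speech-recognition")
print(speech_recognizer("speech.wav"))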

Quick tour

>>> from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
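The quick tour relies on the default checkpoint for the task. As a hedged variation, you can pin the model explicitly and pass a batch of sentences; the model name below is believed to be the usual sentiment-analysis default, but treat it as an assumption.

from transformers import pipeline

# Pin the checkpoint instead of relying on the task default (assumed model id).
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier([
    "We are very happy to introduce pipeline to the transformers repository.",
    "We hope you don't hate it.",
]))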

With pip

pip install transformers

With conda

conda install -c huggingface transformers

Citation

@inproceedings{wolf-etal-2020-transformers,
    title = "Transformers: State-of-the-Art Natural Language Processing",
    author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
    pages = "38--45"
}

Unpickle instance from Jupyter Notebook in Flask App

├── WebApp/
│  └── app.py
└── Untitled.ipynb

# Option 1: import the class from its real module and relabel it before pickling,
# so the pickle records 'app' as the defining module.
from WebApp.app import GensimWord2VecVectorizer
GensimWord2VecVectorizer.__module__ = 'app'

# Option 2: additionally register 'app' as an alias of the real module,
# so unpickling can resolve the recorded module name.
import sys
sys.modules['app'] = sys.modules['WebApp.app']
GensimWord2VecVectorizer.__module__ = 'app'

# Option 3: create a synthetic 'app' module and attach the class to it.
import sys
app = sys.modules['app'] = type(sys)('app')
app.GensimWord2VecVectorizer = GensimWord2VecVectorizer

What is this GHC feature called? `forall` in type definitions

type EITHER :: forall (a :: Type) (b :: Type). Type
data EITHER where
 LEFT  :: a -> EITHER @a @b
 RIGHT :: b -> EITHER @a @b

eITHER :: (a -> res) -> (b -> res) -> (EITHER @a @b -> res)
eITHER left right = \case
 LEFT  a -> left  a
 RIGHT b -> right b
type EITHER :: forall (a :: Type) -> forall (b :: Type) -> Type
data EITHER a b where
 LEFT  :: a -> EITHER a b
 RIGHT :: b -> EITHER a b

eITHER :: (a -> res) -> (b -> res) -> (EITHER a b -> res)
eITHER left right = \case
 LEFT  a -> left  a
 RIGHT b -> right b

Relation between Arrow suspend functions and monad comprehension

suspend fun <R> R.doSomething(i: Int): Either<Error, String> = TODO()
-----------------------
fun mkMessage(msgType: String, appRef: String, pId: String): Message? = nullable.eager {
    val type = MessageType.mkMessageType(msgType).bind()
    val ref = ApplRefe.mkAppRef((appRef)).bind()
    val id = Id.mkId(pId).bind()
    Message(type, ref, id)
}

Jest encountered an unexpected token - SyntaxError: Unexpected token 'export'

  transform: {
    '^.+\\.ts?$': 'ts-jest',
    "^.+\\.(js|jsx)$": "babel-jest"
  },

Why is Reader implemented based on ReaderT?

newtype Reader r a = Reader (r -> a)
type ReaderT r m a = Reader r (m a)
instance Functor (Reader r) where
    fmap f (Reader g) = Reader (f . g)

attributeerror: 'dataframe' object has no attribute 'data_type'

from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(df.index.values, 
                                                  df.label.values, 
                                                  test_size=0.15, 
                                                  random_state=42, 
                                                  stratify=df.label.values)

df['data_type'] = ['not_set']*df.shape[0]  # <- HERE

df.loc[X_train, 'data_type'] = 'train'  # <- HERE
df.loc[X_val, 'data_type'] = 'val'  # <- HERE

df.groupby(['Conference', 'label', 'data_type']).count()
import pandas as pd
from sklearn.model_selection import train_test_split

# The Data
df = pd.read_csv('data/title_conference.csv')
df['label'] = pd.factorize(df['Conference'])[0]

# Train and Validation Split
X_train, X_val, y_train, y_val = train_test_split(df.index.values, 
                                                  df.label.values, 
                                                  test_size=0.15, 
                                                  random_state=42, 
                                                  stratify=df.label.values)

df['data_type'] = ['not_set']*df.shape[0]

df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', 
                                          do_lower_case=True)

encoded_data_train = tokenizer.batch_encode_plus(
    df[df.data_type=='train'].Title.values, 
    add_special_tokens=True, 
    return_attention_mask=True, 
    pad_to_max_length=True, 
    max_length=256, 
    return_tensors='pt'
)
>>> encoded_data_train
{'input_ids': tensor([[  101,  8144,  1999,  ...,     0,     0,     0],
        [  101,  2152,  2836,  ...,     0,     0,     0],
        [  101, 22454, 25806,  ...,     0,     0,     0],
        ...,
        [  101,  1037,  2047,  ...,     0,     0,     0],
        [  101, 13229,  7375,  ...,     0,     0,     0],
        [  101,  2006,  1996,  ...,     0,     0,     0]]), 'token_type_ids': tensor([[0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        ...,
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        ...,
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0]])}
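As a hedged follow-up (not part of the original answer), the encoded tensors can be wrapped together with the integer labels produced by pd.factorize so they are ready for a PyTorch DataLoader:

import torch
from torch.utils.data import TensorDataset, DataLoader

# Labels for the training split, aligned with the rows that were encoded above.
labels_train = torch.tensor(df[df.data_type == 'train'].label.values)

dataset_train = TensorDataset(encoded_data_train['input_ids'],
                              encoded_data_train['attention_mask'],
                              labels_train)
loader_train = DataLoader(dataset_train, batch_size=32, shuffle=True)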

How to calculate perplexity of a sentence using huggingface masked language models?

from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch
import numpy as np

model_name = 'cointegrated/rubert-tiny'
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def score(model, tokenizer, sentence):
    # Encode once, then build one copy of the sentence per non-special token.
    tensor_input = tokenizer.encode(sentence, return_tensors='pt')
    repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1)
    # Diagonal mask: row i masks exactly the i-th real token (skipping [CLS]/[SEP]).
    mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
    masked_input = repeat_input.masked_fill(mask == 1, tokenizer.mask_token_id)
    # Compute the MLM loss only at the masked positions (label -100 is ignored).
    labels = repeat_input.masked_fill(masked_input != tokenizer.mask_token_id, -100)
    with torch.inference_mode():
        loss = model(masked_input, labels=labels).loss
    # Exponentiated mean loss = pseudo-perplexity of the sentence.
    return np.exp(loss.item())

print(score(sentence='London is the capital of Great Britain.', model=model, tokenizer=tokenizer))
# 4.541251105675365
print(score(sentence='London is the capital of South America.', model=model, tokenizer=tokenizer))
# 6.162017238332462

Determine whether the Columns of a Dataset are invariant under any given Scikit-Learn Transformer

from numpy.random import RandomState
import numpy as np
import pandas as pd

from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest

from sklearn.linear_model import LassoCV


rng = RandomState()

# Make some data
slopes = np.array([-1., 1., .1])
X = pd.DataFrame(
    data = np.linspace(-1,1,500)[:, np.newaxis] + rng.random((500, 3)), 
    columns=["foo", "bar", "baz"]
)
y = pd.Series(data=np.linspace(-1,1, 500) + rng.rand((500)))

# Test Transformers
scaler = StandardScaler().fit(X)
selector = SelectKBest(k=2).fit(X, y)

print(scaler.get_feature_names_out())
print(selector.get_feature_names_out())
-----------------------
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
from sklearn.base import _OneToOneFeatureMixin

tf = {'pca':PCA(),'standardscaler':StandardScaler(),'kbest':SelectKBest()}

[i+":"+str(issubclass(type(tf[i]),_OneToOneFeatureMixin)) for i in tf.keys()]

['pca:False', 'standardscaler:True', 'kbest:False']
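A hedged alternative check (assuming scikit-learn 1.0+ and reusing the fitted scaler and selector from the first snippet): compare the transformer's input feature names with its output feature names directly.

import numpy as np

def preserves_columns(fitted_transformer):
    # Returns True when the output feature names equal the input feature names.
    try:
        names_in = fitted_transformer.feature_names_in_
        names_out = fitted_transformer.get_feature_names_out()
    except AttributeError:
        return False  # the transformer does not expose feature names
    return len(names_in) == len(names_out) and np.array_equal(names_in, names_out)

print(preserves_columns(scaler))    # StandardScaler keeps columns -> True
print(preserves_columns(selector))  # SelectKBest(k=2) drops columns -> False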

MUI5 not working with jest - SyntaxError: Cannot use import statement outside a module

const Select: React.FC<ISelect> = ({ label, id, children, options, ...props }) => {
import Select from './Select'
import { styled } from '@mui/material'

How can I check a confusion_matrix after fine-tuning with custom datasets?

import torch
import torch.nn.functional as F
from sklearn import metrics

y_preds = []
y_trues = []
for index, val_text in enumerate(val_texts):
    # Tokenize one validation example (the argument is return_tensors, plural).
    tokenized_val_text = tokenizer([val_text],
                                   truncation=True,
                                   padding=True,
                                   return_tensors='pt')
    # The model returns an output object; the class scores live in .logits.
    logits = model(**tokenized_val_text).logits
    prediction = F.softmax(logits, dim=1)
    y_pred = torch.argmax(prediction).numpy()
    y_true = val_labels[index]
    y_preds.append(y_pred)
    y_trues.append(y_true)
# labels= should match how y_trues and y_preds are actually encoded.
confusion_matrix = metrics.confusion_matrix(y_trues, y_preds, labels=["neg", "pos"])
print(confusion_matrix)

Community Discussions

Trending Discussions on transformers
  • Unpickle instance from Jupyter Notebook in Flask App
  • ModuleNotFoundError: No module named 'milvus'
  • Which model/technique to use for specific sentence extraction?
  • What is this GHC feature called? `forall` in type definitions
  • Relation between Arrow suspend functions and monad comprehension
  • Jest encountered an unexpected token - SyntaxError: Unexpected token 'export'
  • Why is Reader implemented based on ReaderT?
  • attributeerror: 'dataframe' object has no attribute 'data_type'
  • How to calculate perplexity of a sentence using huggingface masked language models?
  • Determine whether the Columns of a Dataset are invariant under any given Scikit-Learn Transformer

QUESTION

Unpickle instance from Jupyter Notebook in Flask App

Asked 2022-Feb-28 at 18:03

I have created a class for word2vec vectorisation which is working fine. But when I create a model pickle file and use that pickle file in a Flask App, I am getting an error like:

AttributeError: module '__main__' has no attribute 'GensimWord2VecVectorizer'

I am creating the model on Google Colab.

Code in Jupyter Notebook:

# Word2Vec Model
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from gensim.models import Word2Vec

class GensimWord2VecVectorizer(BaseEstimator, TransformerMixin):

    def __init__(self, size=100, alpha=0.025, window=5, min_count=5, max_vocab_size=None,
                 sample=0.001, seed=1, workers=3, min_alpha=0.0001, sg=0, hs=0, negative=5,
                 ns_exponent=0.75, cbow_mean=1, hashfxn=hash, iter=5, null_word=0,
                 trim_rule=None, sorted_vocab=1, batch_words=10000, compute_loss=False,
                 callbacks=(), max_final_vocab=None):
        self.size = size
        self.alpha = alpha
        self.window = window
        self.min_count = min_count
        self.max_vocab_size = max_vocab_size
        self.sample = sample
        self.seed = seed
        self.workers = workers
        self.min_alpha = min_alpha
        self.sg = sg
        self.hs = hs
        self.negative = negative
        self.ns_exponent = ns_exponent
        self.cbow_mean = cbow_mean
        self.hashfxn = hashfxn
        self.iter = iter
        self.null_word = null_word
        self.trim_rule = trim_rule
        self.sorted_vocab = sorted_vocab
        self.batch_words = batch_words
        self.compute_loss = compute_loss
        self.callbacks = callbacks
        self.max_final_vocab = max_final_vocab

    def fit(self, X, y=None):
        self.model_ = Word2Vec(
            sentences=X, corpus_file=None,
            size=self.size, alpha=self.alpha, window=self.window, min_count=self.min_count,
            max_vocab_size=self.max_vocab_size, sample=self.sample, seed=self.seed,
            workers=self.workers, min_alpha=self.min_alpha, sg=self.sg, hs=self.hs,
            negative=self.negative, ns_exponent=self.ns_exponent, cbow_mean=self.cbow_mean,
            hashfxn=self.hashfxn, iter=self.iter, null_word=self.null_word,
            trim_rule=self.trim_rule, sorted_vocab=self.sorted_vocab, batch_words=self.batch_words,
            compute_loss=self.compute_loss, callbacks=self.callbacks,
            max_final_vocab=self.max_final_vocab)
        return self

    def transform(self, X):
        X_embeddings = np.array([self._get_embedding(words) for words in X])
        return X_embeddings

    def _get_embedding(self, words):
        valid_words = [word for word in words if word in self.model_.wv.vocab]
        if valid_words:
            embedding = np.zeros((len(valid_words), self.size), dtype=np.float32)
            for idx, word in enumerate(valid_words):
                embedding[idx] = self.model_.wv[word]

            return np.mean(embedding, axis=0)
        else:
            return np.zeros(self.size)

# column transformer
from sklearn.compose import ColumnTransformer

ct = ColumnTransformer([
    ('step1', GensimWord2VecVectorizer(), 'STATUS')
], remainder='drop')

# Create Model
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
import pickle
import numpy as np
import dill
import torch
# ##########
# SVC - support vector classifier
# ##########
# defining parameter range
hyperparameters = {'C': [0.1, 1],
                   'gamma': [1, 0.1],
                   'kernel': ['rbf'],
                   'probability': [True]}
model_sv = Pipeline([
    ('column_transformers', ct),
    ('model', GridSearchCV(SVC(), hyperparameters,
                           refit=True, verbose=3)),
])
model_sv_cEXT = model_sv.fit(X_train, y_train['cEXT'])
# Save the trained cEXT - SVM Model.
import joblib
joblib.dump(model_sv_cEXT, 'model_Word2Vec_sv_cEXT.pkl')

Code in Flask App:

# Word2Vec
model_EXT_WV_SV = joblib.load('utility/model/MachineLearning/SVM/model_Word2Vec_sv_cEXT.pkl')

I tried to copy the same class into my Flask file, but it is also not working.

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from gensim.models import Word2Vec

# ... the same GensimWord2VecVectorizer class definition as in the notebook above ...

# Word2Vec
model_EXT_WV_SV = joblib.load('utility/model/MachineLearning/SVM/model_Word2Vec_sv_cEXT.pkl')

ANSWER

Answered 2022-Feb-24 at 11:48

Import GensimWord2VecVectorizer in your Flask web app's Python file, so the class is importable under the module name recorded in the pickle before the model is loaded; a hedged sketch follows.

Source https://stackoverflow.com/questions/71231611
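A minimal hedged sketch of that fix, assuming the class now lives in WebApp/app.py and the pickle was created in a notebook where the class was defined in the __main__ module:

import joblib
from WebApp.app import GensimWord2VecVectorizer

# If the pickle recorded the class under '__main__' (it was defined in a notebook
# cell), expose the class there as well so unpickling can resolve it.
import __main__
__main__.GensimWord2VecVectorizer = GensimWord2VecVectorizer

model_EXT_WV_SV = joblib.load(
    'utility/model/MachineLearning/SVM/model_Word2Vec_sv_cEXT.pkl')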

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install transformers

You can install it with 'pip install transformers' or download it from GitHub or PyPI.
You can use transformers like any standard Python library. You will need a development environment that includes a Python distribution with header files, a compiler, pip, and git. Make sure that pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changing the system Python.
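As a quick, hedged sanity check after installation (a backend such as PyTorch or TensorFlow is additionally needed to actually run models), confirm that the package imports and report its version:

# Minimal post-install check: import the package and print its version.
import transformers
print(transformers.__version__)   # e.g. 4.18.0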

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check existing answers or ask on Stack Overflow.
