Explore all Bert open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Bert

transformers

v4.18.0: Checkpoint sharding, vision models

HanLP

v1.8.2: Routine maintenance and accuracy improvements

spaCy

v3.1.6: Workaround for Click/Typer issues

faiss

Faiss 1.7.1

flair

Release 0.11

Popular Libraries in Bert

transformers

by huggingface · Python

61400 stars · Apache-2.0

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

funNLP

by fighting41love · Python

33333 stars

A mega-collection of Chinese NLP resources: sensitive-word lists and language detection; phone-number region/carrier lookup; gender inference from names; extraction of phone numbers, ID-card numbers and email addresses; name dictionaries (Chinese and Japanese), abbreviation and character-decomposition dictionaries, word sentiment scores, stop words, and domain vocabularies (cars, companies, place names, historical figures, poetry, IT, finance, idioms, medicine, food, law, animals and more); simplified/traditional conversion; Chinese word vectors; chat, rumor and Baidu Q&A corpora; sentence-similarity algorithm collections; BERT resources; text generation and summarization tools; the cocoNLP information-extraction toolkit and the pke keyphrase extractor; knowledge graphs (XLORE, medical, legal, military, financial, securities, Baidu Baike triples) and KG-based question answering; dependency parsing and event-triple extraction; OCR and handwritten-character recognition; ASR corpora and Chinese speech-recognition systems; dialogue systems and chatbots (rasa-based, ConvLab, GPT2-chitchat, seqGAN); named entity recognition (BERT/ALBERT-based NER, CLUENER, resume NER); text classification tools (NeuralNLP-NeuralClassifier, TextCluster); pre-trained language models (whole-word-masking Chinese BERT, OpenCLaP, UER, XLM, Chinese GPT-2, ELECTRA, French RoBERTa); annotation tools (brat, doccano, Poplar, LIDA); Chinese language-understanding and text-generation benchmarks with datasets, baseline models, corpora and leaderboards; NLP data augmentation (EDA) tools for Chinese and English; competition write-up collections; courses, surveys and reading lists; plus many other datasets, lexicons and utilities for Chinese and multilingual NLP.

bert

by google-research · Python

28940 stars · Apache-2.0

TensorFlow code and pre-trained models for BERT

HanLP

by hankcs · Python

23581 stars · Apache-2.0

Natural language processing for Chinese: word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, semantic dependency parsing, new-word discovery, keyphrase extraction, automatic summarization, text classification and clustering, pinyin conversion, and simplified/traditional conversion.

spaCy

by explosion · Python

23063 stars · MIT

💫 Industrial-strength Natural Language Processing (NLP) in Python

fastText

by facebookresearch · HTML

22903 stars · MIT

Library for fast text representation and classification.

NLP-progress

by sebastianruder · Python

18988 stars · MIT

Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.

faiss

by facebookresearch · C++

14276 stars · MIT

A library for efficient similarity search and clustering of dense vectors.

flair

by flairNLP · Python

11467 stars · NOASSERTION

A very simple framework for state-of-the-art Natural Language Processing (NLP)

Trending New libraries in Bert

PaddleNLP

by PaddlePaddle · Python

3119 stars · Apache-2.0

Easy-to-use and fast NLP library with an awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications.

BERTopic

by MaartenGr · Python

2187 stars · MIT

Leveraging BERT and c-TF-IDF to create easily interpretable topics.

gpt-neox

by EleutherAI · Python

2012 stars · Apache-2.0

An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.

electra

by google-research · Python

1796 stars · Apache-2.0

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

CLUEDatasetSearch

by CLUEbenchmark · Python

1760 stars

Search all Chinese NLP datasets, with commonly used English NLP datasets included.

Top2Vec

by ddangelov · Python

1605 stars · BSD-3-Clause

Top2Vec learns jointly embedded topic, document and word vectors.

longformer

by allenai · Python

1207 stars · Apache-2.0

Longformer: The Long-Document Transformer

spago

by nlpodyssey · Go

1116 stars · BSD-2-Clause

Self-contained Machine Learning and Natural Language Processing library in Go

detext

by linkedin · Python

1103 stars · BSD-2-Clause

DeText: A Deep Neural Text Understanding Framework for Ranking and Classification Tasks

Top Authors in Bert

1. facebookresearch · 23 Libraries · 43973 stars
2. microsoft · 21 Libraries · 9942 stars
3. monologg · 17 Libraries · 1386 stars
4. lonePatient · 16 Libraries · 2729 stars
5. allenai · 14 Libraries · 3492 stars
6. IBM · 12 Libraries · 289 stars
7. aws-samples · 12 Libraries · 285 stars
8. brightmart · 10 Libraries · 7562 stars
9. bhattbhavesh91 · 10 Libraries · 48 stars
10. thunlp · 10 Libraries · 2129 stars



Trending Discussions on Bert

Convert pandas dataframe to datasetDict

What is the loss function used in Trainer from the Transformers library of Hugging Face?

how to save and load custom siamese bert model

How to change AllenNLP BERT based Semantic Role Labeling to RoBERTa in AllenNLP

Simple Transformers producing nothing?

Organize data for transformer fine-tuning

attributeerror: 'dataframe' object has no attribute 'data_type'

InternalError when using TPU for training Keras model

How to calculate perplexity of a sentence using huggingface masked language models?

XPath 1.0, 1st node in subtree

QUESTION

Convert pandas dataframe to datasetDict

Asked 2022-Mar-25 at 15:47

I cannot find anywhere how to convert a pandas dataframe to type datasets.dataset_dict.DatasetDict, for optimal use in a BERT workflow with a huggingface model. Take these simple dataframes, for example.

train_df = pd.DataFrame({
     "label" : [1, 2, 3],
     "text" : ["apple", "pear", "strawberry"]
})

test_df = pd.DataFrame({
     "label" : [2, 2, 1],
     "text" : ["banana", "pear", "apple"]
})

What is the most efficient way to convert these to the type above?

ANSWER

Answered 2022-Mar-25 at 15:47

One possibility is to first create two Datasets and then join them:

import datasets
import pandas as pd


train_df = pd.DataFrame({
     "label" : [1, 2, 3],
     "text" : ["apple", "pear", "strawberry"]
})

test_df = pd.DataFrame({
     "label" : [2, 2, 1],
     "text" : ["banana", "pear", "apple"]
})

train_dataset = datasets.Dataset.from_dict(train_df)
test_dataset = datasets.Dataset.from_dict(test_df)
my_dataset_dict = datasets.DatasetDict({"train": train_dataset, "test": test_dataset})

The result is:

DatasetDict({
    train: Dataset({
        features: ['label', 'text'],
        num_rows: 3
    })
    test: Dataset({
        features: ['label', 'text'],
        num_rows: 3
    })
})
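For reference, a minimal sketch of the same conversion using Dataset.from_pandas, an alternative constructor in the datasets library (the dataframes are the ones from the question):

import datasets
import pandas as pd

train_df = pd.DataFrame({"label": [1, 2, 3], "text": ["apple", "pear", "strawberry"]})
test_df = pd.DataFrame({"label": [2, 2, 1], "text": ["banana", "pear", "apple"]})

# Dataset.from_pandas infers the schema from the dataframe dtypes and yields the
# same features (['label', 'text']) and row counts as shown above.
my_dataset_dict = datasets.DatasetDict({
    "train": datasets.Dataset.from_pandas(train_df),
    "test": datasets.Dataset.from_pandas(test_df),
})
print(my_dataset_dict)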

Source https://stackoverflow.com/questions/71618974

QUESTION

What is the loss function used in Trainer from the Transformers library of Hugging Face?

Asked 2022-Mar-23 at 10:12

What is the loss function used in Trainer from the Transformers library of Hugging Face?

I am trying to fine-tune a BERT model using the Trainer class from the Transformers library of Hugging Face.

In their documentation, they mention that one can specify a customized loss function by overriding the compute_loss method in the class. However, if I do not override the method and use the Trainer to fine-tune a BERT model directly for sentiment classification, what is the default loss function being used? Is it the categorical cross-entropy? Thanks!

ANSWER

Answered 2022-Mar-23 at 10:12

It depends! Especially given your relatively vague setup description, it is not clear what loss will be used. But to start from the beginning, let's first check what the default compute_loss() function in the Trainer class looks like.

You can find the corresponding function here, if you want to have a look for yourself (current version at time of writing is 4.17). The actual loss that will be returned with default parameters is taken from the model's output values:

loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]

which means that the model itself is (by default) responsible for computing some sort of loss and returning it in outputs.

Following this, we can then look into the actual model definitions for BERT (source: here), and in particular check out the model that will be used in your sentiment analysis task (I assume a BertForSequenceClassification model).

The code relevant for defining a loss function looks like this:

if labels is not None:
    if self.config.problem_type is None:
        if self.num_labels == 1:
            self.config.problem_type = "regression"
        elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
            self.config.problem_type = "single_label_classification"
        else:
            self.config.problem_type = "multi_label_classification"

    if self.config.problem_type == "regression":
        loss_fct = MSELoss()
        if self.num_labels == 1:
            loss = loss_fct(logits.squeeze(), labels.squeeze())
        else:
            loss = loss_fct(logits, labels)
    elif self.config.problem_type == "single_label_classification":
        loss_fct = CrossEntropyLoss()
        loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
    elif self.config.problem_type == "multi_label_classification":
        loss_fct = BCEWithLogitsLoss()
        loss = loss_fct(logits, labels)

Based on this information, you should be able to either set the correct loss function yourself (by changing model.config.problem_type accordingly), or otherwise at least be able to determine which loss will be chosen, based on the hyperparameters of your task (number of labels, label dtypes, etc.).
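To make the two options concrete, here is a minimal sketch; the model name, number of labels, and class weights are illustrative placeholders, not taken from the question:

import torch
from transformers import AutoModelForSequenceClassification, Trainer

# Option 1: steer the loss computed inside the model's forward() via problem_type.
# Integer labels with num_labels > 1 and this problem_type -> CrossEntropyLoss.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
model.config.problem_type = "single_label_classification"

# Option 2: bypass the model's built-in loss by overriding Trainer.compute_loss.
class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss_fct = torch.nn.CrossEntropyLoss(
            weight=torch.tensor([1.0, 1.0, 2.0], device=logits.device)
        )
        loss = loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss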

Source https://stackoverflow.com/questions/71581197

QUESTION

how to save and load custom siamese bert model

Asked 2022-Mar-09 at 10:34

I am following this tutorial on how to train a siamese bert network:

https://keras.io/examples/nlp/semantic_similarity_with_bert/

All good, but I am not sure of the best way to save the model after training it. Any suggestions?

I was trying with

model.save('models/bert_siamese_v1')

which creates a folder with saved_model.pb, keras_metadata.pb, and two subfolders (variables and assets).

then I try to load it with:

model.load_weights('models/bert_siamese_v1/')

and it gives me this error:

2022-03-08 14:11:52.567762: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open models/bert_siamese_v1/: Failed precondition: models/bert_siamese_v1; Is a directory: perhaps your file is in a different file format and you need to use a different restore operator?

what is the best way to proceed?

ANSWER

Answered 2022-Mar-08 at 16:13

Try using tf.saved_model.save to save your model:

tf.saved_model.save(model, 'models/bert_siamese_v1')
model = tf.saved_model.load('models/bert_siamese_v1')

The warning you get during saving can apparently be ignored. After loading your model, you can use it for inference, e.g. f(test_data):

f = model.signatures["serving_default"]
x1 = tf.random.uniform((1, 128), maxval=100, dtype=tf.int32)
x2 = tf.random.uniform((1, 128), maxval=100, dtype=tf.int32)
x3 = tf.random.uniform((1, 128), maxval=100, dtype=tf.int32)
print(f)
print(f(attention_masks = x1, input_ids = x2, token_type_ids = x3))

ConcreteFunction signature_wrapper(*, token_type_ids, attention_masks, input_ids)
  Args:
    attention_masks: int32 Tensor, shape=(None, 128)
    input_ids: int32 Tensor, shape=(None, 128)
    token_type_ids: int32 Tensor, shape=(None, 128)
  Returns:
    {'dense': <1>}
      <1>: float32 Tensor, shape=(None, 3)
{'dense': <tf.Tensor: shape=(1, 3), dtype=float32, numpy=array([[0.40711606, 0.13456087, 0.45832306]], dtype=float32)>}
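Alternatively, since the tutorial builds a Keras model, the directory written by model.save('models/bert_siamese_v1') can usually be reloaded through the Keras API as well; a minimal sketch (compile=False is used here on the assumption that the original optimizer and loss objects are not needed at load time):

import tensorflow as tf

# Reload the SavedModel directory created by model.save(); compile=False means
# Keras will not try to restore the original optimizer/loss configuration.
reloaded = tf.keras.models.load_model('models/bert_siamese_v1', compile=False)
# 'reloaded' then behaves like the original Keras model (predict(), summary(), etc.).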

Source https://stackoverflow.com/questions/71396540

QUESTION

How to change AllenNLP BERT based Semantic Role Labeling to RoBERTa in AllenNLP

Asked 2022-Feb-24 at 12:34

Currently I'm able to train a Semantic Role Labeling model using the config file below. This config file is based on the one provided by AllenNLP and works for the default bert-base-uncased model and also GroNLP/bert-base-dutch-cased.

{
  "dataset_reader": {
    "type": "srl_custom",
    "bert_model_name": "GroNLP/bert-base-dutch-cased"
  },
  "data_loader": {
    "batch_sampler": {
      "type": "bucket",
      "batch_size": 32
    }
  },
  "train_data_path": "./data/SRL/SONAR_1_SRL/MANUAL500/",
  "validation_data_path": "./data/SRL/SONAR_1_SRL/MANUAL500/",
  "model": {
    "type": "srl_bert",
    "embedding_dropout": 0.1,
    "bert_model": "GroNLP/bert-base-dutch-cased"
  },
  "trainer": {
    "optimizer": {
      "type": "huggingface_adamw",
      "lr": 5e-5,
      "correct_bias": false,
      "weight_decay": 0.01,
      "parameter_groups": [
        [
          [
            "bias",
            "LayerNorm.bias",
            "LayerNorm.weight",
            "layer_norm.weight"
          ],
          {
            "weight_decay": 0.0
          }
        ]
      ]
    },
    "learning_rate_scheduler": {
      "type": "slanted_triangular"
    },
    "checkpointer": {
      "keep_most_recent_by_count": 2
    },
    "grad_norm": 1.0,
    "num_epochs": 3,
    "validation_metric": "+f1-measure-overall"
  }
}

Swapping the values of the bert_model_name and bert_model parameters from GroNLP/bert-base-dutch-cased to roberta-base won't work out of the box, since the SRL dataset reader only supports the BertTokenizer and not the RobertaTokenizer. So I changed the config file to the following:

{
  "dataset_reader": {
    "type": "srl_custom",
    "token_indexers": {
      "tokens": {
        "type": "pretrained_transformer",
        "model_name": "roberta-base"
      }
    }
  },
  "data_loader": {
    "batch_sampler": {
      "type": "bucket",
      "batch_size": 32
    }
  },
  "train_data_path": "./data/SRL/SONAR_1_SRL/MANUAL500/",
  "validation_data_path": "./data/SRL/SONAR_1_SRL/MANUAL500/",
  "model": {
    "type": "srl_bert",
    "embedding_dropout": 0.1,
    "bert_model": "roberta-base"
  },
  "trainer": {
    "optimizer": {
      "type": "huggingface_adamw",
      "lr": 5e-5,
      "correct_bias": false,
      "weight_decay": 0.01,
      "parameter_groups": [
        [
          [
            "bias",
            "LayerNorm.bias",
            "LayerNorm.weight",
            "layer_norm.weight"
          ],
          {
            "weight_decay": 0.0
          }
        ]
      ]
    },
    "learning_rate_scheduler": {
      "type": "slanted_triangular"
    },
    "checkpointer": {
      "keep_most_recent_by_count": 2
    },
    "grad_norm": 1.0,
    "num_epochs": 15,
    "validation_metric": "+f1-measure-overall"
  }
}

However, this is still not working. I'm receiving the following error:

2022-02-22 16:19:34,122 - INFO - allennlp.training.gradient_descent_trainer - Training
  0%|          | 0/1546 [00:00<?, ?it/s]2022-02-22 16:19:34,142 - INFO - allennlp.data.samplers.bucket_batch_sampler - No sorting keys given; trying to guess a good one
2022-02-22 16:19:34,142 - INFO - allennlp.data.samplers.bucket_batch_sampler - Using ['tokens'] as the sorting keys
  0%|          | 0/1546 [00:00<?, ?it/s]
2022-02-22 16:19:34,526 - CRITICAL - root - Uncaught exception
Traceback (most recent call last):
  File "C:\Program Files\Python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Program Files\Python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\Scripts\allennlp.exe\__main__.py", line 7, in <module>
    sys.exit(run())
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\__main__.py", line 39, in run
    main(prog="allennlp")
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\commands\__init__.py", line 119, in main
    args.func(args)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\commands\train.py", line 111, in train_model_from_args
    train_model_from_file(
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\commands\train.py", line 177, in train_model_from_file
    return train_model(
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\commands\train.py", line 258, in train_model
    model = _train_worker(
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\commands\train.py", line 508, in _train_worker
    metrics = train_loop.run()
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\commands\train.py", line 581, in run
    return self.trainer.train()
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\training\gradient_descent_trainer.py", line 771, in train
    metrics, epoch = self._try_train()
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\training\gradient_descent_trainer.py", line 793, in _try_train
    train_metrics = self._train_epoch(epoch)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\training\gradient_descent_trainer.py", line 510, in _train_epoch
    batch_outputs = self.batch_outputs(batch, for_training=True)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp\training\gradient_descent_trainer.py", line 403, in batch_outputs
    output_dict = self._pytorch_model(**batch)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp_models\structured_prediction\models\srl_bert.py", line 141, in forward
    bert_embeddings, _ = self.bert_model(
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\transformers\models\bert\modeling_bert.py", line 989, in forward
    embedding_output = self.embeddings(
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\transformers\models\bert\modeling_bert.py", line 215, in forward
    token_type_embeddings = self.token_type_embeddings(token_type_ids)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\torch\nn\modules\sparse.py", line 156, in forward
    return F.embedding(
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\torch\nn\functional.py", line 1916, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self

I don't fully understand what's going wrong and couldn't find any documentation on how to change the config file to load in a 'custom' BERT/RoBERTa model (one that's not mentioned here). I'm running the default allennlp train config.jsonnet command to start training. allennlp train config.jsonnet --dry-run produces no errors, however.

Thanks in advance! Thijs

EDIT: I've now swapped out "srl_bert" for a custom "srl_roberta" class (inheriting from it) to make use of the RobertaModel. This, however, still produces the same error.

EDIT2: I'm now using the AutoTokenizer, as suggested by Dirk Groeneveld. It looks like changing the SrlReader class to support RoBERTa-based models involves many more changes, such as swapping BERT's WordPiece tokenizer for RoBERTa's BPE tokenizer. Is there an easy way to adapt the SrlReader class, or is it better to write a new RobertaSrlReader from scratch?

I've inherited the SrlReader class and changed this line to the following:

self.bert_tokenizer = AutoTokenizer.from_pretrained(bert_model_name)

It produces the following error since RoBERTa tokenization differs from BERT:

  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp_models\structured_prediction\dataset_readers\srl.py", line 255, in text_to_instance
    wordpieces, offsets, start_offsets = self._wordpiece_tokenize_input(
  File "C:\Users\denbe\AppData\Roaming\Python\Python39\site-packages\allennlp_models\structured_prediction\dataset_readers\srl.py", line 196, in _wordpiece_tokenize_input
    word_pieces = self.bert_tokenizer.wordpiece_tokenizer.tokenize(token)
AttributeError: 'RobertaTokenizerFast' object has no attribute 'wordpiece_tokenizer'

ANSWER

Answered 2022-Feb-24 at 02:14

The easiest way to resolve this is to patch SrlReader so that it uses PretrainedTransformerTokenizer (from AllenNLP) or AutoTokenizer (from Huggingface) instead of BertTokenizer. SrlReader is an old class, and was written against an old version of the Huggingface tokenizer API, so it's not so easy to upgrade.
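For illustration, a rough sketch (not the actual SrlReader patch) of how a Huggingface AutoTokenizer can produce wordpieces plus word-level alignment for pre-split words, which is the functionality SrlReader currently gets from the BERT-only wordpiece_tokenizer attribute; add_prefix_space=True is required when feeding pre-split words to a RoBERTa tokenizer:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)

words = ["The", "cat", "sat", "on", "the", "mat", "."]
encoding = tokenizer(words, is_split_into_words=True)

print(encoding.tokens())    # subword pieces, including the <s> and </s> specials
print(encoding.word_ids())  # maps each piece back to its source word (None for specials)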

If you want to submit a pull request in the AllenNLP project, I'd be happy to help you get it merged into AllenNLP!

Source https://stackoverflow.com/questions/71223907

QUESTION

Simple Transformers producing nothing?

Asked 2022-Feb-22 at 11:54

I have a simple transformers script looking like this.

from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs
args = Seq2SeqArgs()
args.num_train_epoch=5
model = Seq2SeqModel(
    "roberta",
    "roberta-base",
    "bert-base-cased",
)
import pandas as pd
df = pd.read_csv('english-french.csv')
df['input_text'] = df['english'].values
df['target_text'] =df['french'].values
model.train_model(df.head(1000))
print(model.eval_model(df.tail(10)))

The eval_loss is {'eval_loss': 0.0001931049264385365}

However when I run my prediction script

to_predict = ["They went to the public swimming pool."]
predictions=model.predict(to_predict)

I get this

['']

The dataset I used is here

I'm very confused by the output. Any help or explanation of why it returns nothing would be much appreciated.

ANSWER

Answered 2022-Feb-22 at 11:54

Use this model instead.

model = Seq2SeqModel(
    encoder_decoder_type="marian",
    encoder_decoder_name="Helsinki-NLP/opus-mt-en-mul",
    args=args,
    use_cuda=True,
)

RoBERTa is not a good option for your task.

I have rewritten your code in this Colab notebook.

Results

# Input
to_predict = ["They went to the public swimming pool.", "she was driving the shiny black car."]
predictions = model.predict(to_predict)
print(predictions)

# Output
['Ils aient cher à la piscine publice.', 'elle conduit la véricine noir glancer.']

Source https://stackoverflow.com/questions/71200243

QUESTION

Organize data for transformer fine-tuning

Asked 2022-Feb-02 at 14:58

I have a corpus of synonyms and non-synonyms. These are stored in a list of python dictionaries like {"sentence1": <string>, "sentence2": <string>, "label": <1.0 or 0.0>}. Note that these words (or sentences) do not have to be a single token in the tokenizer.

I want to fine-tune a BERT-based model to take both sentences, like [[CLS], <sentence1_token1>, ..., <sentence1_tokenN>, [SEP], <sentence2_token1>, ..., <sentence2_tokenM>, [SEP]], and predict the "label" (a measurement between 0.0 and 1.0).

What is the best approach to organize this data to facilitate the fine-tuning of the huggingface transformer?

ANSWER

Answered 2022-Feb-02 at 14:58

You can use the Tokenizer __call__ method to join both sentences when encoding them.

In case you're using the PyTorch implementation, here is an example:

import torch
from transformers import AutoTokenizer

sentences1 = ... # List containing all sentences 1
sentences2 = ... # List containing all sentences 2
labels = ... # List containing all labels (0 or 1)

TOKENIZER_NAME = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_NAME)

encodings = tokenizer(
    sentences1,
    sentences2,
    return_tensors="pt"
)

labels = torch.tensor(labels)

Then you can create your custom Dataset to use it during training:

class CustomRealDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: value[idx] for key, value in self.encodings.items()}
        item["labels"] = self.labels[idx]
        return item

    def __len__(self):
        return len(self.labels)
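To round this out, a minimal sketch of feeding the dataset to the Trainer; the output directory and epoch count are placeholders, and it assumes the encodings above were created with padding enabled (e.g. padding=True) so examples can be batched. For a score between 0.0 and 1.0, num_labels=1 with float labels makes the model fall back to a regression (MSE) loss:

import torch
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(TOKENIZER_NAME, num_labels=1)

# Float labels + num_labels=1 -> problem_type "regression" (MSELoss) inside the model.
train_dataset = CustomRealDataset(encodings, labels.float())

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="similarity-model", num_train_epochs=3),
    train_dataset=train_dataset,
)
trainer.train()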

Source https://stackoverflow.com/questions/70957390

QUESTION

attributeerror: 'dataframe' object has no attribute 'data_type'

Asked 2022-Jan-10 at 08:41

I am getting the following error: AttributeError: 'DataFrame' object has no attribute 'data_type'. I am trying to recreate the code from this link (which is based on this article) with my own dataset, which is similar to the one in the article.

from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(df.index.values, 
                                                  df.label.values, 
                                                  test_size=0.15, 
                                                  random_state=42, 
                                                  stratify=df.label.values)

df['data_type'] = ['not_set']*df.shape[0]

df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'

df.groupby(['Conference', 'label', 'data_type']).count()

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',
                                          do_lower_case=True)

encoded_data_train = tokenizer.batch_encode_plus(
    df[df.data_type=='train'].example.values,
    add_special_tokens=True,
    return_attention_mask=True,
    pad_to_max_length=True,
    max_length=256,
    return_tensors='pt'
)

and this is the error I get:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_24180/2662883887.py in <module>
      3 
      4 encoded_data_train = tokenizer.batch_encode_plus(
----> 5     df[df.data_type=='train'].example.values,
      6     add_special_tokens=True,
      7     return_attention_mask=True,

C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
   5485         ):
   5486             return self[name]
-> 5487         return object.__getattribute__(self, name)
   5488 
   5489     def __setattr__(self, name: str, value) -> None:

AttributeError: 'DataFrame' object has no attribute 'data_type'

I am using Python 3.9, PyTorch 1.10.1, pandas 1.3.5, and transformers 4.15.0.

ANSWER

Answered 2022-Jan-10 at 08:41

The error means you have no data_type column in your dataframe, because you missed this step:

from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(df.index.values,
                                                  df.label.values,
                                                  test_size=0.15,
                                                  random_state=42,
                                                  stratify=df.label.values)

df['data_type'] = ['not_set']*df.shape[0]  # <- HERE

df.loc[X_train, 'data_type'] = 'train'  # <- HERE
df.loc[X_val, 'data_type'] = 'val'  # <- HERE

df.groupby(['Conference', 'label', 'data_type']).count()
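
With those assignments in place, df[df.data_type=='train'] selects the training rows, so the batch_encode_plus call from the question runs without the AttributeError.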

Demo

  1. Setup
import pandas as pd
from sklearn.model_selection import train_test_split

# The Data
df = pd.read_csv('data/title_conference.csv')
df['label'] = pd.factorize(df['Conference'])[0]

# Train and Validation Split
X_train, X_val, y_train, y_val = train_test_split(df.index.values,
                                                  df.label.values,
                                                  test_size=0.15,
                                                  random_state=42,
                                                  stratify=df.label.values)

df['data_type'] = ['not_set']*df.shape[0]

df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'
  2. Code
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',
                                          do_lower_case=True)

encoded_data_train = tokenizer.batch_encode_plus(
    df[df.data_type=='train'].Title.values,
    add_special_tokens=True,
    return_attention_mask=True,
    pad_to_max_length=True,
    max_length=256,
    return_tensors='pt'
)

Output:

>>> encoded_data_train
{'input_ids': tensor([[  101,  8144,  1999,  ...,     0,     0,     0],
        [  101,  2152,  2836,  ...,     0,     0,     0],
        [  101, 22454, 25806,  ...,     0,     0,     0],
        ...,
        [  101,  1037,  2047,  ...,     0,     0,     0],
        [  101, 13229,  7375,  ...,     0,     0,     0],
        [  101,  2006,  1996,  ...,     0,     0,     0]]), 'token_type_ids': tensor([[0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        ...,
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        ...,
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0]])}
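
One side note that is not part of the original answer: pad_to_max_length is deprecated in recent transformers releases in favor of the padding and truncation arguments, so on newer versions the same call might be written roughly as the sketch below (same intent, only the argument names updated):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

# Sketch only: padding='max_length' + truncation=True replace the deprecated pad_to_max_length.
encoded_data_train = tokenizer.batch_encode_plus(
    df[df.data_type == 'train'].Title.values.tolist(),  # plain list of title strings
    add_special_tokens=True,
    return_attention_mask=True,
    padding='max_length',
    truncation=True,
    max_length=256,
    return_tensors='pt'
)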

Source https://stackoverflow.com/questions/70649379

QUESTION

InternalError when using TPU for training Keras model

Asked 2021-Dec-31 at 08:18

I am attempting to fine-tune a BERT model on Google Colab from TensorFlow Hub, following this link.

However, I run into the following error:

InternalError: RET_CHECK failure (third_party/tensorflow/core/tpu/graph_rewrite/distributed_tpu_rewrite_pass.cc:2047) arg_shape.handle_type != DT_INVALID  input edge: [id=2693 model_preprocessing_67660:0 -> cluster_train_function:628]

When I run my model.fit(...) function.

This error only occurs when I try to use TPU (runs fine on CPU, but has a very long training time).

Here is my code for setting up the TPU and model:

TPU Setup:

import os
os.environ["TFHUB_MODEL_LOAD_FORMAT"]="UNCOMPRESSED"

cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)

Model Setup:

def build_classifier_model():
  text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
  preprocessing_layer = hub.KerasLayer('https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', name='preprocessing')
  encoder_inputs = preprocessing_layer(text_input)
  encoder = hub.KerasLayer('https://tfhub.dev/google/experts/bert/wiki_books/sst2/2', trainable=True, name='BERT_encoder')
  outputs = encoder(encoder_inputs)
  net = outputs['pooled_output']
  net = tf.keras.layers.Dropout(0.1)(net)
  net = tf.keras.layers.Dense(1, activation=None, name='classifier')(net)
  return tf.keras.Model(text_input, net)

Model Training:

with strategy.scope():

  bert_model = build_classifier_model()
  loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
  metrics = tf.metrics.BinaryAccuracy()
  epochs = 1
  steps_per_epoch = 1280000
  num_train_steps = steps_per_epoch * epochs
  num_warmup_steps = int(0.1*num_train_steps)

  init_lr = 3e-5
  optimizer = optimization.create_optimizer(init_lr=init_lr,
                                          num_train_steps=num_train_steps,
                                          num_warmup_steps=num_warmup_steps,
                                          optimizer_type='adamw')
  bert_model.compile(optimizer=optimizer,
                         loss=loss,
                         metrics=metrics)
  print(f'Training model')
  history = bert_model.fit(x=X_train, y=y_train,
                               validation_data=(X_val, y_val),
                               epochs=epochs)

Note that X_train is a NumPy array of type str with shape (1280000,) and y_train is a NumPy array of shape (1280000, 1).

ANSWER

Answered 2021-Dec-31 at 08:18

I don't know exactly what changes you have made to the code, and I don't have details about your dataset. But I can see that you are trying to train on the whole dataset in one epoch and are passing steps_per_epoch directly. I would recommend writing it as follows.

Set batch_size to some power of two (for example 16 or 32); if you don't want to batch the dataset, just set batch_size to 1:

batch_size = 16
steps_per_epoch = training_data_size // batch_size

The problem with the code is most probably the training dataset size: the mistake is passing the size of the training dataset manually instead of deriving it from the data.

If you're loading the dataset from tfds, use (as shown in the link):

train_dataset, train_data_size = load_dataset_from_tfds(
  in_memory_ds, tfds_info, train_split, batch_size, bert_preprocess_model)

If you're using a custom dataset, store the size of the cleaned dataset in a variable and use that variable wherever the training data size is needed. Try to avoid hard-coding values as far as possible.
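
For illustration only (this sketch is mine, not part of the original answer), the NumPy arrays from the question could be batched with tf.data and the derived sizes passed through instead of hard-coded values; the names X_train, y_train, X_val, y_val, bert_model and epochs mirror the question, and batch_size = 16 is just an example:

import tensorflow as tf

batch_size = 16
training_data_size = len(X_train)                    # taken from the data, not hard-coded
steps_per_epoch = training_data_size // batch_size
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1 * num_train_steps)

# TPUs need static shapes, so drop the last incomplete batch.
train_ds = (tf.data.Dataset.from_tensor_slices((X_train, y_train))
            .shuffle(10_000)
            .batch(batch_size, drop_remainder=True)
            .prefetch(tf.data.AUTOTUNE))
val_ds = (tf.data.Dataset.from_tensor_slices((X_val, y_val))
          .batch(batch_size, drop_remainder=True)
          .prefetch(tf.data.AUTOTUNE))

history = bert_model.fit(train_ds,
                         validation_data=val_ds,
                         epochs=epochs,
                         steps_per_epoch=steps_per_epoch)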

Source https://stackoverflow.com/questions/70479279

QUESTION

How to calculate perplexity of a sentence using huggingface masked language models?

Asked 2021-Dec-25 at 21:51

I have several masked language models (mainly BERT, RoBERTa, ALBERT, ELECTRA). I also have a dataset of sentences. How can I get the perplexity of each sentence?

From the huggingface documentation here they mentioned that perplexity "is not well defined for masked language models like BERT", though I still see people somehow calculate it.

For example in this SO question they calculated it using the function

def score(model, tokenizer, sentence,  mask_token_id=103):
  tensor_input = tokenizer.encode(sentence, return_tensors='pt')
  repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1)
  mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
  masked_input = repeat_input.masked_fill(mask == 1, 103)
  labels = repeat_input.masked_fill( masked_input != 103, -100)
  loss,_ = model(masked_input, masked_lm_labels=labels)
  result = np.exp(loss.item())
  return result

score(model, tokenizer, '我爱你') # returns 45.63794545581973

However, when I try to use the code I get TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'.

I tried it with a couple of my models:

from transformers import pipeline, BertForMaskedLM, AutoTokenizer, RobertaForMaskedLM, AlbertForMaskedLM, ElectraForMaskedLM
import torch

# 1)
tokenizer = AutoTokenizer.from_pretrained("bioformers/bioformer-cased-v1.0")
model = BertForMaskedLM.from_pretrained("bioformers/bioformer-cased-v1.0")

# 2)
tokenizer = AutoTokenizer.from_pretrained("sultan/BioM-ELECTRA-Large-Generator")
model = ElectraForMaskedLM.from_pretrained("sultan/BioM-ELECTRA-Large-Generator")

This SO question also used the masked_lm_labels as an input and it seemed to work somehow.

ANSWER

Answered 2021-Dec-25 at 21:51

There is a paper Masked Language Model Scoring that explores pseudo-perplexity from masked language models and shows that pseudo-perplexity, while not being theoretically well justified, still performs well for comparing "naturalness" of texts.
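
To make the quantity concrete (this paraphrase is mine, not part of the original answer): for a sentence of N tokens t_1 ... t_N, each token is masked in turn and scored from the rest of the sentence, and the pseudo-perplexity is exp(-(1/N) * sum_i log P(t_i | sentence with t_i masked)), i.e. the exponential of the average masked-token loss, which is what the snippets here compute.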

As for the code, your snippet is perfectly correct but for one detail: in recent implementations of Huggingface BERT, masked_lm_labels are renamed to simply labels, to make interfaces of various models more compatible. I have also replaced the hard-coded 103 with the generic tokenizer.mask_token_id. So the snippet below should work:

from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch
import numpy as np

model_name = 'cointegrated/rubert-tiny'
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def score(model, tokenizer, sentence):
    tensor_input = tokenizer.encode(sentence, return_tensors='pt')
    repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1)
    mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2]
    masked_input = repeat_input.masked_fill(mask == 1, tokenizer.mask_token_id)
    labels = repeat_input.masked_fill( masked_input != tokenizer.mask_token_id, -100)
    with torch.inference_mode():
        loss = model(masked_input, labels=labels).loss
    return np.exp(loss.item())

print(score(sentence='London is the capital of Great Britain.', model=model, tokenizer=tokenizer))
# 4.541251105675365
print(score(sentence='London is the capital of South America.', model=model, tokenizer=tokenizer))
# 6.162017238332462
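
Two remarks on the snippet (mine, not from the original answer): -100 is the standard ignore index for the masked-LM loss in transformers, so only the single masked position in each row contributes to the loss, and torch.inference_mode() simply disables gradient tracking while scoring.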

You can try this code in Google Colab by running this gist.

Source https://stackoverflow.com/questions/70464428

QUESTION

XPath 1.0, 1st node in subtree

Asked 2021-Dec-23 at 19:40

So what I want to do is identify the 1st node in some subtree of an XML tree.

Here's an example:

<?xml version="1.0" encoding="utf-8"?>
<root>
  <road>
    <households>
      <household>
        <occupants>
          <person name="jim"/>
          <person name="jon"/>
          <person name="julie"/>
          <person name="janet"/>
        </occupants>
      </household>
      <household>
        <occupants>
          <person name="brenda"/>
          <person name="bert"/>
          <person name="billy"/>
        </occupants>
      </household>
    </households>
  </road>
  <road>
    <households>
      <household>
        <occupants>
        </occupants>
      </household>
      <household>
        <occupants>
          <person name="arthur"/>
          <person name="aimy"/>
        </occupants>
      </household>
      <household>
        <occupants>
          <person name="harry"/>
          <person name="henry"/>
        </occupants>
      </household>
    </households>
  </road>
</root>

Now I want the 1st person mentioned per road.

So let's have a go...

/root/road/households/household/occupants/person[1]/@name

That returns the 1st person per occupants node.

Let's try:

(/root/road/households/household/occupants/person)[1]/@name

That returns the 1st person in the whole tree.

What I sort of want to do is:

/root/road/(households/household/occupants/person)[1]/@name

i.e. take the 1st person in the set of people in a road,

but that's not valid XPath 1.0.

ANSWER

Answered 2021-Dec-23 at 19:40

This seems to be what you’re after, using the descendant axis:

/root/road/descendant::person[1]/@name
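
As a quick sanity check (my own illustration, not part of the original answer), the expression can be evaluated with Python's lxml against the sample document; it should return one name per road, 'jim' and 'arthur':

from lxml import etree  # assumes lxml is installed

tree = etree.parse('roads.xml')  # the sample document above, saved as roads.xml
names = tree.xpath('/root/road/descendant::person[1]/@name')
print(names)  # expected: ['jim', 'arthur']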

Source https://stackoverflow.com/questions/70466321

Community Discussions contain sources that include Stack Exchange Network

Tutorials and Learning Resources in Bert

Tutorials and Learning Resources are not available at this moment for Bert
