Feednite | Fortnite news RSS Feed generator | Artificial Intelligence library
Trending Discussions on Artificial Intelligence
QUESTION
According to Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig (4th edition), the space complexity of BFS is O(b^d), where 'b' is the branching factor and 'd' is the depth.
This complexity is obtained from the assumption that we store all nodes until we arrive at the target node, in other words: 1 + b + b^2 + b^3 + ... + b^d => O(b^d)
But why should we store all nodes? Don't we use a queue for the implementation?
If we use a queue, we don't need to store all nodes, because we enqueue and dequeue nodes step by step; when we find the target node(s), only some of the nodes are still in the queue (not all of them).
Is my understanding wrong?
ANSWER
Answered 2022-Apr-10 at 06:16
At any moment while we apply BFS, the queue holds at most two adjacent levels of nodes. For example, if we have just started expanding depth d, the queue contains all nodes at depth d; as we proceed, it drains the nodes at depth d and fills up with the nodes at depth d+1. So at any moment the queue alone takes O(b^d) space.
Also, 1 + b + b^2 + ... + b^d = (b^(d+1) - 1)/(b - 1) = O(b^d), so even storing every node generated so far does not change the asymptotic bound.
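To make the queue argument concrete, here is a small sketch (the complete b-ary tree and the particular values of b and d are illustrative choices, not from the original question) that runs BFS and records the largest number of nodes ever held in the queue; the peak is exactly the size of the deepest frontier, b^d:

from collections import deque

def peak_queue_size(b, d):
    """BFS over a complete b-ary tree of depth d; nodes are identified by
    their path from the root. Returns the largest queue size observed."""
    queue = deque([()])  # the root is the empty path
    peak = 1
    while queue:
        node = queue.popleft()
        if len(node) < d:              # expand nodes above depth d
            for child in range(b):
                queue.append(node + (child,))
        peak = max(peak, len(queue))
    return peak

print(peak_queue_size(3, 4))  # 81 == 3**4, i.e. the whole deepest level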
QUESTION
I am trying to upload an image dataset to Hub (dataset format with an API for creating, storing, & collaborating on AI datasets). I only uploaded part of the dataset, but upon inspecting the uploaded data I noticed that there was an additional None dimension in the tensor shape. Can someone explain why this occurred?
I am using the following tensor relationship:
ds
-> images (htype = image)
ANSWER
Answered 2022-Mar-24 at 23:15
The None dimension is present because some of the images might have three channels while others have four, so dynamic dimensions are reported as None.
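As a minimal sketch of how this arises (assuming the Hub v2 API of hub.empty / create_tensor / append; the mem:// path and the array shapes are illustrative), appending samples with different channel counts makes the channel dimension dynamic:

import numpy as np
import hub

ds = hub.empty("mem://demo")  # throwaway in-memory dataset
ds.create_tensor("images", htype="image", sample_compression="png")
ds.images.append(np.zeros((10, 10, 3), dtype=np.uint8))  # RGB sample
ds.images.append(np.zeros((10, 10, 4), dtype=np.uint8))  # RGBA sample
print(ds.images.shape)  # (2, 10, 10, None): the channel dim varies per sample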
QUESTION
I am getting the following error while trying to upload a dataset to Hub (dataset format for AI): S3SetError: Connection was closed before we received a valid response from endpoint URL: "<...>".
So, I tried to delete the dataset and it is throwing the error below.
CorruptedMetaError: 'boxes/tensor_meta.json' and 'boxes/chunks_index/unsharded' have a record of different numbers of samples. Got 0 and 6103 respectively.
Using Hub version: v2.3.1
ANSWER
Answered 2022-Mar-24 at 01:06
It seems that the runtime got interrupted while you were uploading the dataset, which corrupted the data you were trying to upload. Passing force=True while deleting should allow you to delete it.
For more information, feel free to check out the Hub API basics docs for details on how to delete datasets in Hub.
If you stop uploading a Hub dataset midway, the dataset will be only partially uploaded to Hub, so you will need to restart the upload. If you would like to re-create the dataset, you can use the overwrite = True flag in hub.empty(overwrite = True). If you are making updates to an existing dataset, you should use version control to checkpoint the states that are in good shape.
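A short sketch of those recovery steps (the dataset path is hypothetical, and hub.delete(..., force=True), hub.empty(..., overwrite=True), and ds.commit(...) are assumed from the Hub v2 API the answer describes):

import hub

hub.delete("hub://user/my_dataset", force=True)          # force past the corrupted metadata
ds = hub.empty("hub://user/my_dataset", overwrite=True)  # re-create it from scratch
# ... re-run the upload, then checkpoint a known-good state:
ds.commit("initial clean upload")                        # version-control checkpoint (assumed API)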
QUESTION
What is the loss function used in Trainer from the Transformers library of Hugging Face?
I am trying to fine-tune a BERT model using the Trainer class from the Transformers library of Hugging Face.
In their documentation, they mention that one can specify a customized loss function by overriding the compute_loss method in the class. However, if I do not override the method and use the Trainer to fine-tune a BERT model directly for sentiment classification, what is the default loss function being used? Is it categorical cross-entropy? Thanks!
ANSWER
Answered 2022-Mar-23 at 10:12
It depends! Especially given your relatively vague setup description, it is not clear what loss will be used. But to start from the beginning, let's first check what the default compute_loss() function in the Trainer class looks like.
You can find the corresponding function here, if you want to have a look for yourself (the current version at the time of writing is 4.17). The actual loss that will be returned with default parameters is taken from the model's output values:
loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
which means that the model itself is (by default) responsible for computing some sort of loss and returning it in outputs.
Following this, we can then look into the actual model definitions for BERT (source: here), and in particular check out the model that will be used in your sentiment analysis task (I assume a BertForSequenceClassification model).
The code relevant for defining a loss function looks like this:
if labels is not None:
    if self.config.problem_type is None:
        if self.num_labels == 1:
            self.config.problem_type = "regression"
        elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
            self.config.problem_type = "single_label_classification"
        else:
            self.config.problem_type = "multi_label_classification"

    if self.config.problem_type == "regression":
        loss_fct = MSELoss()
        if self.num_labels == 1:
            loss = loss_fct(logits.squeeze(), labels.squeeze())
        else:
            loss = loss_fct(logits, labels)
    elif self.config.problem_type == "single_label_classification":
        loss_fct = CrossEntropyLoss()
        loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
    elif self.config.problem_type == "multi_label_classification":
        loss_fct = BCEWithLogitsLoss()
        loss = loss_fct(logits, labels)
Based on this information, you should be able to either set the correct loss function yourself (by changing model.config.problem_type accordingly), or at least determine which loss will be chosen, based on the hyperparameters of your task (number of labels, label scores, etc.).
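For a typical two-class sentiment task (num_labels=2, integer labels), the inference above lands on CrossEntropyLoss, i.e. standard categorical cross-entropy over the logits. A minimal sketch of pinning that down explicitly rather than relying on the dtype-based inference (the checkpoint name is a placeholder; from_pretrained and config.problem_type are standard Transformers APIs):

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
# Set the problem type explicitly so the loss choice is not inferred at runtime:
model.config.problem_type = "single_label_classification"  # -> CrossEntropyLoss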
QUESTION
I would like to do a tensor split in PyTorch, but I get an error message because I can't get the splitting to work.
The behavior I want is to feed the input data into two fully connected branches and then combine the two branches into one model. I believe the error is due to wrong code in x1, x2 = torch.tensor_split(x,2).
import torch
from torch import nn, optim
import numpy as np
from matplotlib import pyplot as plt

class Regression(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(1, 32)
        self.linear2 = nn.Linear(32, 16)
        self.linear3 = nn.Linear(16*2, 1)

    def forward(self, x):
        x1, x2 = torch.tensor_split(x, 2)
        x1 = nn.functional.relu(self.linear1(x1))
        x1 = nn.functional.relu(self.linear2(x1))
        x2 = nn.functional.relu(self.linear1(x2))
        x2 = nn.functional.relu(self.linear2(x2))
        cat_x = torch.cat([x1, x2], dim=1)
        cat_x = self.linear3(cat_x)
        return cat_x

def train(model, optimizer, E, iteration, x, y):
    losses = []
    for i in range(iteration):
        optimizer.zero_grad()                 # reset gradients to zero
        y_pred = model(x)                     # prediction
        loss = E(y_pred.reshape(y.shape), y)  # compute the loss (align shapes)
        loss.backward()                       # compute gradients
        optimizer.step()                      # update parameters
        losses.append(loss.item())            # accumulate loss values
        print('epoch=', i+1, 'loss=', loss)
    return model, losses

x = np.random.uniform(0, 10, 100)                                   # random x values
y = np.random.uniform(0.9, 1.1, 100) * np.sin(2 * np.pi * 0.1 * x)  # noisy sine wave
x = torch.from_numpy(x.astype(np.float32)).float()                  # convert x to a tensor
y = torch.from_numpy(y.astype(np.float32)).float()                  # convert y to a tensor
X = torch.stack([torch.ones(100), x], 1)

net = Regression()
optimizer = optim.RMSprop(net.parameters(), lr=0.01)  # RMSprop as the optimizer
E = nn.MSELoss()                                      # MSE as the loss function
net, losses = train(model=net, optimizer=optimizer, E=E, iteration=5000, x=X, y=y)
Error message:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1846 if has_torch_function_variadic(input, weight, bias):
1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias)
-> 1848 return torch._C._nn.linear(input, weight, bias)
1849
1850
RuntimeError: mat1 and mat2 shapes cannot be multiplied (50x2 and 1x32)
ANSWER
Answered 2022-Mar-21 at 09:57
Specify dim=1 in torch.tensor_split(x,2).
The x comes from two tensors with the shape [100,1] stacked at dim 1, so its shape is [100, 2]. After applying tensor_split, you get two tensors, both with shape [50, 2].
print(x.shape) # torch.Size([100, 2])
print(torch.tensor_split(X,2)[0].shape) # torch.Size([50, 2])
The error occurred because linear1 only accepts tensors with shape [BATCH_SIZE,1] as input, but a tensor with shape [50, 2] was passed in.
If your intention was to split the column of random numbers from the column of all ones, change torch.tensor_split(x,2) to torch.tensor_split(x,2,dim=1), which produces two tensors with the shape [100,1].
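A sketch of the corrected forward pass (same model as in the question, only the split changed); splitting along dim=1 yields two [100, 1] tensors, which matches nn.Linear(1, 32):

def forward(self, x):
    x1, x2 = torch.tensor_split(x, 2, dim=1)  # two [100, 1] tensors
    x1 = nn.functional.relu(self.linear1(x1))
    x1 = nn.functional.relu(self.linear2(x1))
    x2 = nn.functional.relu(self.linear1(x2))
    x2 = nn.functional.relu(self.linear2(x2))
    return self.linear3(torch.cat([x1, x2], dim=1))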
QUESTION
I am developing an e-commerce website with AI-powered voice commands using Alan AI. But whenever I come back from another route, a blank page appears and this error message shows in the console: "Uncaught Error: The Alan Button instance has already been created. There cannot be two Alan Button instances created at the same time". What can I do? My code is given below:
import { useEffect } from 'react';
import alanBtn from '@alan-ai/alan-sdk-web';

const Alan = () => {
  useEffect(() => {
    alanBtn({
      key: alanKey,  // alanKey is defined elsewhere in the original post
      onCommand: ({ command }) => {
        if (command === 'testCommand') {
          alert('This code was executed');
        }
      }
    });
  }, []);

  return null;  // the returned JSX was elided in the original post
};
ANSWER
Answered 2022-Mar-21 at 09:48
It's critical but easy...!
Use requestAnimationFrame for your webpage's visual changes. If the code is run as a requestAnimationFrame callback, it will run at the start of the frame.
const Alan = () => {
  useLayoutEffect(() => {
    function updateScreen(time) {
      // Make visual updates here.
      alanBtn({
        key: alanKey,
        onCommand: ({ command }) => {
          if (command === 'testCommand') {
            alert('This code was executed');
          }
        }
      });
    }
    requestAnimationFrame(updateScreen);
  }, []);

  return null;
};
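A complementary sketch (not from the original answer; the module-level guard flag is a hypothetical addition) that prevents a second alanBtn instantiation when the component re-mounts after a route change:

import { useEffect } from 'react';
import alanBtn from '@alan-ai/alan-sdk-web';

let alanBtnCreated = false;  // module-level flag survives route changes

const Alan = () => {
  useEffect(() => {
    if (alanBtnCreated) return;  // skip re-creation on re-mount
    alanBtnCreated = true;
    alanBtn({
      key: alanKey,  // alanKey defined elsewhere
      onCommand: ({ command }) => {
        if (command === 'testCommand') {
          alert('This code was executed');
        }
      }
    });
  }, []);
  return null;
};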
QUESTION
I have migrated from gensim 3.8.3 to 4.1.2 and I am using this:
claim = [token for token in claim_text if token in w2v_model.wv.vocab]
reference = [token for token in ref_text if token in w2v_model.wv.vocab]
I am not sure how to replace w2v_model.wv.vocab with the newer attribute, and I am getting this error:
'KeyedVectors' object has no attribute 'wv'
Can anyone please help?
ANSWER
Answered 2022-Mar-20 at 19:43
You only use the .wv property to fetch the KeyedVectors object from another, more complete algorithmic model, like a full Word2Vec model (which contains a KeyedVectors in its .wv attribute).
If you're already working with just the vectors, there's no need to request the word-vectors subcomponent. Whatever you were going to do, you just do it on the KeyedVectors directly.
However, you're also using the .vocab attribute, which has been replaced. See the migration FAQ for more details.
(Mainly: instead of doing an in w2v_model.wv.vocab, you may only need to do in kv_model or in kv_model.key_to_index.)
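Applied to the snippet from the question, a sketch of the gensim-4.x version (assuming w2v_model is the KeyedVectors instance itself, as the error message indicates):

# Membership checks go straight against the KeyedVectors object:
claim = [token for token in claim_text if token in w2v_model.key_to_index]
reference = [token for token in ref_text if token in w2v_model.key_to_index]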
QUESTION
I am trying to detect FEX from videos according to these instructions: https://py-feat.org/content/detector.html#detecting-fex-from-videos
But I can't initialize an object of the Detector class. The code that I use:
from feat import Detector

face_model = "retinaface"
landmark_model = "mobilenet"
au_model = "rf"
emotion_model = "resmasknet"
detector = Detector(face_model=face_model, landmark_model=landmark_model, au_model=au_model,
                    emotion_model=emotion_model)

if __name__ == '__main__':
    pass
And I get the following errors:
C:\Users\User\AppData\Roaming\Python\Python39\site-packages\nilearn\input_data\__init__.py:27: FutureWarning: The import path 'nilearn.input_data' is deprecated in version 0.9. Importing from 'nilearn.input_data' will be possible at least until release 0.13.0. Please import from 'nilearn.maskers' instead.
warnings.warn(message, FutureWarning)
Loading Face Detection model: retinaface
Using downloaded and verified file: C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\mobilenet0.25_Final.pth
Using downloaded and verified file: C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\mobilenet_224_model_best_gdconv_external.pth.tar
Using downloaded and verified file: C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\hog_pca_all_emotio.joblib
Using downloaded and verified file: C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\hog_pca_all_emotio.joblib
Using downloaded and verified file: C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\hog_scalar_aus.joblib
Using downloaded and verified file: C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\RF_568.joblib
Using downloaded and verified file: C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\hog_pca_all_emotio.joblib
Using downloaded and verified file: C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\hog_scalar_aus.joblib
Using downloaded and verified file: C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\ResMaskNet_Z_resmasking_dropout1_rot30.pth
Loading Face Landmark model: mobilenet
Loading au model: rf
Loading emotion model: resmasknet
Traceback (most recent call last):
File "C:\Users\User\Desktop\DetectFEXFromVideos\main.py", line 7, in
detector = Detector(face_model=face_model, landmark_model=landmark_model, au_model=au_model,
File "C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\detector.py", line 227, in __init__
self.emotion_model = ResMaskNet()
File "C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\emo_detectors\ResMaskNet\resmasknet_test.py", line 748, in __init__
torch.load(
File "C:\Users\User\AppData\Roaming\Python\Python39\site-packages\torch\serialization.py", line 713, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "C:\Users\User\AppData\Roaming\Python\Python39\site-packages\torch\serialization.py", line 938, in _legacy_load
typed_storage._storage._set_from_file(
RuntimeError: unexpected EOF, expected 32606425 more bytes. The file might be corrupted.
Process finished with exit code 1
I'm new to Python, which is why I didn't change any of the arguments in the object initialization; I don't understand what each one means.
P.S. Does anyone know how to fix the problem in the first two lines of the output?
ANSWER
Answered 2022-Mar-19 at 20:41
It looks like one of your files was corrupted.
You can try to solve the problem by opening the directory C:\Users\User\AppData\Roaming\Python\Python39\site-packages\feat\resources\ and deleting the file ResMaskNet_Z_resmasking_dropout1_rot30.pth.
Then run the code again and it should redownload the deleted file.
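A one-off sketch of that deletion in Python (the path is taken verbatim from the traceback above; deleting the file by hand in Explorer works just as well):

import os

path = (r"C:\Users\User\AppData\Roaming\Python\Python39\site-packages"
        r"\feat\resources\ResMaskNet_Z_resmasking_dropout1_rot30.pth")
os.remove(path)  # py-feat should re-download the file on the next Detector(...) call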
The warning in the first two lines is just that: a warning. It says that some of the code in the nilearn library is deprecated. Most of the time you can simply ignore it; it will probably be fixed by the nilearn developers in a future patch.
QUESTION
I am trying to import an ONNX model using onnxjs, but I get the error below:
Uncaught (in promise) TypeError: cannot resolve operator 'Cast' with opsets: ai.onnx v11
Below is a code snippet from my HTML file.
How do I solve this?
ANSWER
Answered 2022-Mar-01 at 20:37
This will load the ResNet-50 model:

const sess = new onnx.InferenceSession();

async function test() {
  console.time("loading model");
  await sess.loadModel('https://microsoft.github.io/onnxjs-demo/resnet50v2.onnx');
  console.timeEnd("loading model");
  console.log("model loaded");
}

document.querySelector('#load').addEventListener('click', test);

<button id="load">load model</button>
The message suggests it has something to do with the Cast operator not being supported by opset 11; maybe you want to use Cast-9, and maybe you have to generate a new model.
Edit: Your model loads using onnxruntime in Python:
import onnxruntime

sess = onnxruntime.InferenceSession('../../Downloads/onnx_model.onnx')
{i.name: i.shape for i in sess.get_inputs()}
{o.name: o.shape for o in sess.get_outputs()}

which reports:

{'input_ids': ['batch', 'sequence'],
 'attention_mask': ['batch', 'sequence'],
 'token_type_ids': ['batch', 'sequence']}
{'output_0': ['batch', 2]}
You will probably have to debug it yourself; hopefully the only problem is the Cast operator.
Start by looking at the operator support table for onnxjs, and rewrite the parts of the model where the operator appears.
For instance, the Cast operator appears only once; you can locate it as follows:
import onnx

model = onnx.load('../../Downloads/onnx_model.onnx')
for node in model.graph.node:
    if 'cast' in node.op_type.lower():
        print(node.name, node.op_type)
That will print:
Cast_2 Cast
Using https://netron.app/ (or the desktop version) you can see where that node sits in the graph (the original answer included a screenshot here).
So you should simply rewrite how your attention mask is processed in the model; a possible solution would be to move the unsqueeze and cast operations outside the model.
QUESTION
Can anyone please help me understand why spaCy NER refuses to recognize the last NAME 'Hagrid' in this sentence, no matter which model is used (sm, md, lg)?
"Hermione bought a car, then both Hermione and Hagrid raced it on the track. Tom Brady was very happy with Hagrid this year."
import spacy

nlp = spacy.load('en_core_web_md')
test_data = "Hermione bought a car, then both Hermione and Hagrid raced it on the track. Tom Brady was very happy with Hagrid this year."
doc = nlp(test_data)
for ent in doc.ents:
    print(ent.text, ent.start_char, ent.end_char, ent.label_)
ANSWER
Answered 2022-Mar-03 at 21:37
Well, neural network models are basically a black box, so there is no way to know this for sure.
I could imagine that the grammar in the last sentence is a bit too "fancy"/literature-like if the model was trained on news or web data, which might be throwing the model off. The difficulty of seeing the sentence context as something that would be followed by a name, together with the fact that "Hagrid" is a rather unusual name, could be the reason.
You can try other models, such as the NER tagger integrated in Flair, or a fine-tuned BERT model (links were given in the original answer).
They are more powerful and get it right. From my experience, spaCy is a nice tool and quite fast, but not the most precise for NER.
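As a minimal sketch of trying the same sentence with Flair (assuming Flair's standard pretrained English tagger, loaded by the name "ner"):

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("ner")  # Flair's pretrained 4-class English NER model
sentence = Sentence("Hermione bought a car, then both Hermione and Hagrid raced it "
                    "on the track. Tom Brady was very happy with Hagrid this year.")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span.text, span.tag)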
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.