Popular New Releases in Modeling
OctoPrint: 1.8.0rc5 (release candidate)
openscad: OpenSCAD 2021.01
PrusaSlicer: PrusaSlicer 2.4.2-rc2
openMVG: v2.0 Rainbow Trout
colmap: 3.7
Popular Libraries in Modeling
by OctoPrint (Python, 6542, AGPL-3.0): OctoPrint is the snappy web interface for your 3D printer!
by openscad (C++, 4741, NOASSERTION): OpenSCAD - The Programmers Solid 3D CAD Modeller
by YadiraF (Python, 4333, MIT): Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network (ECCV 2018)
by prusa3d (C++, 4288, AGPL-3.0): G-code generator for 3D printers (RepRap, Makerbot, Ultimaker etc.)
by openMVG (C++, 4142, NOASSERTION): open Multiple View Geometry library. Basis for 3D computer vision and Structure from Motion.
by colmap (C, 3790, NOASSERTION): COLMAP - Structure-from-Motion and Multi-View Stereo
by Ultimaker (Python, 3633, LGPL-3.0): 3D printer / slicing GUI built on top of the Uranium framework
by Lyken17 (Python, 2848, MIT): Count the MACs / FLOPs of your PyTorch model.
by slic3r (C++, 2720, AGPL-3.0): Open Source toolpath generator for 3D printers
Trending New libraries in Modeling
by facebookresearch (Python, 915, NOASSERTION): code for Mesh R-CNN, ICCV 2019
by fogleman (Python, 740, MIT): Simple SDF mesh generation in Python
by dbanay (C, 615, MIT): By the Bluebook implementation of Smalltalk-80
by YadiraF (Python, 601, NOASSERTION): DECA: Detailed Expression Capture and Animation
by DantSu (Java, 466, MIT): Useful library to help Android developers to print with (Bluetooth, TCP, USB) ESC/POS thermal printers.
by dekuNukem (C++, 440, MIT): Installing a RGB mechanical keypad on my microwave.
by OpenPrinting (C, 437, Apache-2.0): OpenPrinting CUPS Sources
by jordanruthe (Python, 347, AGPL-3.0): GUI for Klipper
by MirageC79 (HTML, 341, GPL-3.0): Advanced DIY 3D Printer
Top Authors in Modeling
1: 24 Libraries, 504
2: 20 Libraries, 7145
3: 9 Libraries, 236
4: 7 Libraries, 60
5: 7 Libraries, 383
6: 6 Libraries, 507
7: 6 Libraries, 92
8: 6 Libraries, 1332
9: 6 Libraries, 116
10: 5 Libraries, 82
Trending Kits in Modeling
Build customised software and applications for your mini 3D printer using these open-source libraries.
A 3D printer is a computer-aided machine that creates a three-dimensional physical object from scratch, typically by laying down many thin layers of material in succession. Whether your goal is to turn 3D models into physical objects, to start a rapid prototyping process, or to use 3D printing in any other way, you will need the best 3D printing software and programs.
Different kinds of software or programs are required for different processes in 3D printing. These generally include slicers, 3D printer control software, STL edit and repair programs, 3D modelling software, and more. Creating your own programs with the help of open-source libraries can be one of the best ways to get started with 3D printing.
Explore the list of open-source libraries and components to build custom software and applications for your next mini 3D printer project:
A Virginia family recently got the keys to Habitat for Humanity's first 3D-printed home. The 1,200-square-foot home has three bedrooms and two full baths, and the technology allowed the house to be built in just 12 hours. The home also includes a 3D printer that will enable the owner to reprint maintenance items like electrical outlets or cabinet knobs. 3D printing has matured over the years and is widely used across use cases ranging from DIY projects to affordable homes and 3D-printed terracotta tiles designed to help corals grow and restore ocean life. The printing materials have become versatile too, expanding from proprietary filaments and plastics to terracotta and concrete.

The kandi kit on 3D Printing Solutions covers 3D printing libraries across slicers, printer control, STL editing, and 3D modeling. Slicers break down 3D models into the layer-by-layer toolpaths a 3D printer follows, and can slice models for different printer types. Model manipulation tools are used to prepare items for 3D printing, laser engraving, or CNC routing. Host and control software helps manage 3D printers, control your machine remotely via web interfaces, and keep track of print jobs in progress. STL file utilities help view, edit, design, and repair STL files, and file viewers let you preview G-code files before printing.
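Host and control software such as OctoPrint also exposes a REST API, so a custom application can monitor or drive a printer programmatically. As a small illustration (this example is mine, not taken from the kit), the sketch below polls an OctoPrint instance for the current job status; the host name and API key are placeholders you would replace with your own:

import requests

# Placeholders: adjust to your printer's OctoPrint address and API key
OCTOPRINT_URL = "http://octopi.local"
API_KEY = "YOUR_API_KEY"

headers = {"X-Api-Key": API_KEY}

# Ask OctoPrint for the current print job status over its REST API
resp = requests.get(f"{OCTOPRINT_URL}/api/job", headers=headers, timeout=10)
resp.raise_for_status()
job = resp.json()

print("State:", job.get("state"))
print("File :", job.get("job", {}).get("file", {}).get("name"))
print("Done :", job.get("progress", {}).get("completion"), "%")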
Trending Discussions on Modeling
Folding indexed functors with access to contextual data
Need to fix Stan code for the generalized pareto distribution to take in real arguments rather than vectors
ModuleNotFoundError: No module named 'milvus'
VS2022 Extensibility: How to resolve "The type 'XXX' exists in both 'Assembly 1' and 'Assembly2' in T4 template execution
How can you multiply all the values within a 2D df with all the values within a 1D df separately?
Use ModelingToolkit.jl to eliminate a conserved quantity
RDFS vs SKOS, when to use what?
No module named 'nltk.lm' in Google colaboratory
Are codatatypes really terminal algebras?
EF Core owned entity shadow PK causes null constraint violation with SQLite
QUESTION
Folding indexed functors with access to contextual data
Asked 2022-Apr-01 at 22:53
This is a question about traversing mutually recursive data types. I am modeling ASTs for a bunch of mutually recursive datatypes using Indexed Functor as described in this gist here. This works well enough for my intended purposes.
Now I need to transform my data structure with data flowing top-down. Here is an SO question asked in the context of Functor where it's shown that the carrier of the algebra can be a function that allows one to push data down during traversal. However, I am struggling to use this technique with Indexed Functor. I think my data type needs to be altered but I am not sure how.
Here is some code that illustrates my problem. Please note, that I am not including mutually recursive types or multiple indexes as I don't need them to illustrate the issue.
setDepth should change every (IntF n) to (IntF depth). The function as written won't type check because kind ‘AstIdx -> *’ doesn't match ‘Int -> Expr ix’. Maybe I am missing something but I don't see a way to get around this without relaxing the kind of f to be less restrictive in IxFunctor but that seems wrong.
Any thoughts, suggestions or pointers welcome!
{-# LANGUAGE PolyKinds #-}

infixr 5 ~>

type f ~> g = forall i. f i -> g i

class IxFunctor (f :: (k -> *) -> k -> *) where
  imap :: (a ~> b) -> (f a ~> f b)

-- Indexed Fix
newtype IxFix f ix = IxIn {ixout :: f (IxFix f) ix}

-- Fold
icata :: IxFunctor f => (f a ~> a) -> (IxFix f ~> a)
icata phi = phi . imap (icata phi) . ixout

-- Kinds of Ast
data AstIdx = ExprAst | TypeAst

-- AST
data ExprF (f :: AstIdx -> *) (ix :: AstIdx) where
  IntF :: Int -> ExprF f ExprAst
  AddF :: f ExprAst -> f ExprAst -> ExprF f ExprAst

type Expr = IxFix ExprF

instance IxFunctor ExprF where
  imap f (IntF n) = IntF n
  imap f (AddF a b) = AddF (f a) (f b)

-- Change (IntF n) to (IntF (n + 1)).
add1 :: Expr ix -> Expr ix
add1 e = icata go e
  where
    go :: ExprF Expr ix -> Expr ix
    go (IntF n) = IxIn (IntF (n + 1))
    go (AddF a b) = IxIn (AddF a b)

{-
-- Change (IntF n) to (IntF depth)
-- Doesn't type check
setDepth :: Expr ix -> Expr ix
setDepth e = icata ((flip go) 0) e
  where
    -- byDepthF :: TreeF a (Integer -> Tree Integer) -> Integer -> Tree Integer
    -- byDepthF :: TreeF a (Integer -> Tree Integer) -> Integer -> Tree Integer ix
    go :: ExprF (Int -> Expr ix) ix -> Int -> Expr ix
    go (IntF n) d = IxIn (IntF d)
    go (AddF a b) d = IxIn (AddF (a d) (b d))
-}
ANSWER
Answered 2022-Apr-01 at 22:53
I'm assuming here that you're trying to set each IntF node to its depth within the tree (like the byDepthF function from the linked question) rather than to some fixed integer argument named depth.
If so, I think you're probably looking for something like the following:
newtype IntExpr ix = IntExpr { runIntExpr :: Int -> Expr ix }

setDepth :: Expr ix -> Expr ix
setDepth e = runIntExpr (icata go e) 0
  where
    go :: ExprF IntExpr ix -> IntExpr ix
    go (IntF n) = IntExpr (\d -> IxIn (IntF d))
    go (AddF a b) = IntExpr (\d -> IxIn (AddF (runIntExpr a (d+1)) (runIntExpr b (d+1))))
That is, you need to define a newtype that serves as the indexed first type parameter to ExprF, passing the index through the Int -> reader. The rest is just wrapping and unwrapping.
QUESTION
Need to fix Stan code for the generalized pareto distribution to take in real arguments rather than vectors
Asked 2022-Feb-22 at 22:25
I am using the functions defined here: Extreme value analysis and user defined probability functions in Stan for modeling the data with a generalized Pareto distribution, but my problem is that my model is in a for-loop and expects three real-valued arguments, whereas the gpd functions assume a (vector, real, real) argument signature.
I’m not so sure that my model chunk is so amenable to being vectorized, and so I was thinking I would need to have the gpd functions take in real valued arguments (but maybe I’m wrong).
I’d appreciate any help with switching the code around to achieve this. Here is my Stan code:
functions {
  real gpareto_lpdf(vector y, real k, real sigma) {
    // generalised Pareto log pdf
    int N = rows(y);
    real inv_k = inv(k);
    if (k<0 && max(y)/sigma > -inv_k)
      reject("k<0 and max(y)/sigma > -1/k; found k, sigma =", k, sigma)
    if (sigma<=0)
      reject("sigma<=0; found sigma =", sigma)
    if (fabs(k) > 1e-15)
      return -(1+inv_k)*sum(log1p((y) * (k/sigma))) -N*log(sigma);
    else
      return -sum(y)/sigma -N*log(sigma); // limit k->0
  }

  real gpareto_lcdf(vector y, real k, real sigma) {
    // generalised Pareto log cdf
    real inv_k = inv(k);
    if (k<0 && max(y)/sigma > -inv_k)
      reject("k<0 and max(y)/sigma > -1/k; found k, sigma =", k, sigma)
    if (sigma<=0)
      reject("sigma<=0; found sigma =", sigma)
    if (fabs(k) > 1e-15)
      return sum(log1m_exp((-inv_k)*(log1p((y) * (k/sigma)))));
    else
      return sum(log1m_exp(-(y)/sigma)); // limit k->0
  }
}

data {
  // the input data
  int<lower = 1> n;                      // number of observations
  real<lower = 0> value[n];              // value measurements
  int<lower = 0, upper = 1> censored[n]; // vector of 0s and 1s

  // parameters for the prior
  real<lower = 0> a;
  real<lower = 0> b;
}

parameters {
  real k;
  real sigma;
}

model {
  // prior
  k ~ gamma(a, b);
  sigma ~ gamma(a,b);

  // likelihood
  for (i in 1:n) {
    if (censored[i]) {
      target += gpareto_lcdf(value[i] | k, sigma);
    } else {
      target += gpareto_lpdf(value[i] | k, sigma);
    }
  }
}
ANSWER
Answered 2022-Feb-22 at 22:25
Here is how the log PDF could be adapted. This way, index arrays for subsetting y into censored and non-censored observations can be passed.
real cens_gpareto_lpdf(vector y, int[] cens, int[] no_cens, real k, real sigma) {
  // generalised Pareto log pdf
  int N = size(cens);
  real inv_k = inv(k);
  if (k<0 && max(y)/sigma > -inv_k)
    reject("k<0 and max(y)/sigma > -1/k; found k, sigma =", k, sigma)

  if (fabs(k) > 1e-15)
    return -(1+inv_k)*sum(log1p((y[no_cens]) * (k/sigma))) -N*log(sigma) +
           sum(log1m_exp((-inv_k)*(log1p((y[cens]) * (k/sigma)))));
  else
    return -sum(y[no_cens])/sigma -N*log(sigma) +
           sum(log1m_exp(-(y[cens])/sigma));
}
Extend the data block: n_cens, n_not_cens, cens, and no_cens are values that need to be supplied.
  int<lower = 1> n;           // total number of obs
  int<lower = 1> n_cens;      // number of censored obs
  int<lower = 1> n_not_cens;  // number of regular obs

  int cens[n_cens];           // index set censored
  int no_cens[n_not_cens];    // index set regular

  vector<lower = 0>[n] value; // value measurements
Nonzero Parameters as suggested by gfgm:
parameters {
  real<lower=0> k;
  real<lower=0> sigma;
}
Rewrite the model block:
model {
  // prior
  k ~ gamma(a, b);
  sigma ~ gamma(a,b);
  // likelihood
  value ~ cens_gpareto(cens, no_cens, k, sigma);
}
Disclaimer: I neither checked the formulas for sanity nor ran the model using test data. I just compiled via rstan::stan_model(), which worked fine. gfgm's suggestion may be more convenient for post-processing / computing stuff in generated quantities etc. I'm not a Stan expert :-).
Edit: Fixed divergence issue found by gfgm through simulation. The likelihood was ill-defined (N = rows(y) instead of N = size(cens)). Runs fine now with gfgm's data (using set.seed(123) and rstan):
          mean se_mean   sd    2.5%     25%     50%     75%   97.5% n_eff Rhat
k         0.16    0.00 0.10    0.02    0.08    0.14    0.21    0.42  1687    1
sigma     0.90    0.00 0.12    0.67    0.82    0.90    0.99    1.16  1638    1
lp__   -106.15    0.03 1.08 -109.09 -106.56 -105.83 -105.38 -105.09  1343    1
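As a side note (not part of the original answer), the censored and uncensored index sets that the extended data block expects can be built from the 0/1 censoring indicator before sampling. A minimal sketch in Python with NumPy, using made-up example values; note that Stan indices are 1-based, and the prior hyperparameters a and b here are placeholders:

import numpy as np

# Hypothetical example data: 0 = observed, 1 = censored
censored = np.array([0, 1, 0, 0, 1, 0])
value = np.array([0.8, 1.2, 0.5, 2.1, 1.7, 0.9])

# Stan arrays are 1-based, so shift the NumPy indices by one
cens = np.flatnonzero(censored == 1) + 1
no_cens = np.flatnonzero(censored == 0) + 1

stan_data = {
    "n": len(value),
    "n_cens": len(cens),
    "n_not_cens": len(no_cens),
    "cens": cens.tolist(),
    "no_cens": no_cens.tolist(),
    "value": value.tolist(),
    "a": 1.0,  # placeholder prior hyperparameter
    "b": 1.0,  # placeholder prior hyperparameter
}
print(stan_data)

This dictionary can then be passed as the data argument of whichever Stan interface you use (rstan, cmdstanpy, etc.).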
QUESTION
ModuleNotFoundError: No module named 'milvus'
Asked 2022-Feb-15 at 19:23
Goal: to run this Auto Labelling Notebook on AWS SageMaker Jupyter Labs.
Kernels tried: conda_pytorch_p36, conda_python3, conda_amazonei_mxnet_p27.
! pip install farm-haystack -q
# Install the latest master of Haystack
!pip install grpcio-tools==1.34.1 -q
!pip install git+https://github.com/deepset-ai/haystack.git -q
!wget --no-check-certificate https://dl.xpdfreader.com/xpdf-tools-linux-4.03.tar.gz
!tar -xvf xpdf-tools-linux-4.03.tar.gz && sudo cp xpdf-tools-linux-4.03/bin64/pdftotext /usr/local/bin
!pip install git+https://github.com/deepset-ai/haystack.git -q
# Here are the imports we need
from haystack.document_stores.elasticsearch import ElasticsearchDocumentStore
from haystack.nodes import PreProcessor, TransformersDocumentClassifier, FARMReader, ElasticsearchRetriever
from haystack.schema import Document
from haystack.utils import convert_files_to_dicts, fetch_archive_from_http, print_answers
Traceback:
02/02/2022 10:36:29 - INFO - faiss.loader - Loading faiss with AVX2 support.
02/02/2022 10:36:29 - INFO - faiss.loader - Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'",)
02/02/2022 10:36:29 - INFO - faiss.loader - Loading faiss.
02/02/2022 10:36:29 - INFO - faiss.loader - Successfully loaded faiss.
02/02/2022 10:36:33 - INFO - farm.modeling.prediction_head - Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex .
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-4-6ff421127e9c> in <module>
      1 # Here are the imports we need
----> 2 from haystack.document_stores.elasticsearch import ElasticsearchDocumentStore
      3 from haystack.nodes import PreProcessor, TransformersDocumentClassifier, FARMReader, ElasticsearchRetriever
      4 from haystack.schema import Document
      5 from haystack.utils import convert_files_to_dicts, fetch_archive_from_http, print_answers

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/__init__.py in <module>
      3 import pandas as pd
      4 from haystack.schema import Document, Label, MultiLabel, BaseComponent
----> 5 from haystack.finder import Finder
      6 from haystack.pipeline import Pipeline
      7

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/finder.py in <module>
      6 from collections import defaultdict
      7
----> 8 from haystack.reader.base import BaseReader
      9 from haystack.retriever.base import BaseRetriever
     10 from haystack import MultiLabel

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/reader/__init__.py in <module>
----> 1 from haystack.reader.farm import FARMReader
      2 from haystack.reader.transformers import TransformersReader

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/reader/farm.py in <module>
     22
     23 from haystack import Document
---> 24 from haystack.document_store.base import BaseDocumentStore
     25 from haystack.reader.base import BaseReader
     26

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/document_store/__init__.py in <module>
      2 from haystack.document_store.faiss import FAISSDocumentStore
      3 from haystack.document_store.memory import InMemoryDocumentStore
----> 4 from haystack.document_store.milvus import MilvusDocumentStore
      5 from haystack.document_store.sql import SQLDocumentStore

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/haystack/document_store/milvus.py in <module>
      5 import numpy as np
      6
----> 7 from milvus import IndexType, MetricType, Milvus, Status
      8 from scipy.special import expit
      9 from tqdm import tqdm

ModuleNotFoundError: No module named 'milvus'
pip install milvus
import milvus
Traceback:
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-3-91c33e248077> in <module>
----> 1 import milvus

ModuleNotFoundError: No module named 'milvus'
ANSWER
Answered 2022-Feb-03 at 09:29
I would recommend downgrading your milvus version to a version before the 2.0 release from just a week ago. Here is a discussion on that topic: https://github.com/deepset-ai/haystack/issues/2081
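For context (this example is mine, not part of the original answer): the pre-2.0 Python client is published on PyPI as pymilvus and exposes the milvus module that haystack imports here, so pinning it below 2.0 in the notebook is one way to try the suggested downgrade. The exact version to pin is an assumption; check the linked haystack issue for the version that matches your haystack release.

# Assumed workaround: pin the Milvus Python client to a pre-2.0 release,
# then restart the kernel so the new version is picked up.
!pip install "pymilvus<2.0.0" -q

# After restarting the kernel, this import should resolve:
from milvus import IndexType, MetricType, Milvus, Status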
QUESTION
VS2022 Extensibility: How to resolve "The type 'XXX' exists in both 'Assembly 1' and 'Assembly2' in T4 template execution
Asked 2022-Jan-29 at 07:48
I have an extension I'm updating from VS2019 to VS2022. It's a DSL using the Modeling SDK and has code generation via T4 templates.
I have it pretty much converted but, when running the T4s, I get
Compiling transformation: The type 'SourceControl' exists in both 'EnvDTE, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' and 'Microsoft.VisualStudio.Interop, Version=17.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
Compiling transformation: The type 'Project' exists in both 'EnvDTE, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' and 'Microsoft.VisualStudio.Interop, Version=17.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
Compiling transformation: The type 'Constants' exists in both 'EnvDTE, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' and 'Microsoft.VisualStudio.Interop, Version=17.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
Compiling transformation: The type 'ProjectItem' exists in both 'EnvDTE, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' and 'Microsoft.VisualStudio.Interop, Version=17.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
Compiling transformation: The type 'ProjectItems' exists in both 'EnvDTE, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' and 'Microsoft.VisualStudio.Interop, Version=17.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
Compiling transformation: The type 'DTE' exists in both 'EnvDTE, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' and 'Microsoft.VisualStudio.Interop, Version=17.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
Compiling transformation: The type 'Solution' exists in both 'EnvDTE, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' and 'Microsoft.VisualStudio.Interop, Version=17.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
and can't figure out how to resolve this.
I'm not including those assemblies in my main .tt file, nor are they referenced in my Dsl or DslPackage projects, but I understand from some other errors I had to resolve that EnvDTE 8.0 and Microsoft.VisualStudio.Interop are implicitly available as part of the ambient VS2022 environment. Since they're not part of my projects, I can't use the Visual Studio Alias mechanism to disambiguate.
I've read the other questions on SO regarding similar issues, but none of them are this problem and their solutions really don't apply.
Thanks for any help or direction anyone can give.
ANSWER
Answered 2021-Nov-15 at 16:02
I ran into a similar issue today with my T4 templates. After looking at your post and noodling on it for a couple of minutes, I tried the following.
I found the file that was adding the EnvDTE assembly:
<#@ assembly name="EnvDTE"#>
and changed it to
<#@ assembly name="EnvDTE"#>
<#@ assembly name="Microsoft.VisualStudio.Interop"#>
and that resolved my issue.
QUESTION
How can you multiply all the values within a 2D df with all the values within a 1D df separately?
Asked 2021-Dec-26 at 23:08
I'm new to numpy and I'm currently working on a modeling project for which I have to perform some calculations based on two different data sources. However, until now I haven't managed to figure out how I could multiply all the individual values with each other:
I have two data frames
One 2D-dataframe:
df1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
One 1D-dataframe:
df2 = np.array([1, 2, 3, 4, 5])
I would like to multiply all the individual values within the first dataframe (df1) separately with all the values that are stored within the second dataframe in order to create a data cube / new 3D-dataframe that has the shape 5x3x3:
df3 = np.array([[[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[2, 4, 6], [8, 10, 12], [14, 16, 18]], ..... ])
I tried different methods but every time I failed to obtain something that looks like df3.
x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
y = np.array([1, 2, 3, 4, 5])

z = y

for i in range(len(z)):
    z.iloc[i] = x

for i in range(0, 5):
    for j in range(0, 3):
        for k in range(0, 3):
            z.iloc[i, j, k] = y.iloc[i] * x.iloc[j, k]

print(z)
Could someone help me out with some example code? Thank you!
ANSWER
Answered 2021-Dec-26 at 22:59
Try this:
df1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
df2 = np.array([1, 2, 3, 4, 5])

df3 = df1 * df2[:, None, None]
Output:
>>> df3
array([[[ 1,  2,  3],
        [ 4,  5,  6],
        [ 7,  8,  9]],

       [[ 2,  4,  6],
        [ 8, 10, 12],
        [14, 16, 18]],

       [[ 3,  6,  9],
        [12, 15, 18],
        [21, 24, 27]],

       [[ 4,  8, 12],
        [16, 20, 24],
        [28, 32, 36]],

       [[ 5, 10, 15],
        [20, 25, 30],
        [35, 40, 45]]])
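For completeness (this addendum is not part of the original answer), the same (5, 3, 3) cube can be built with a few other common NumPy idioms; a small sketch:

import numpy as np

df1 = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
df2 = np.array([1, 2, 3, 4, 5])

# Equivalent ways to build the (5, 3, 3) cube:
a = df1 * df2[:, None, None]          # explicit broadcasting, as in the answer
b = df2.reshape(-1, 1, 1) * df1       # reshape instead of None-indexing
c = np.einsum('i,jk->ijk', df2, df1)  # einsum spelling of the outer product
d = np.multiply.outer(df2, df1)       # outer product via a ufunc

assert a.shape == (5, 3, 3)
assert (a == b).all() and (a == c).all() and (a == d).all()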
QUESTION
Use ModelingToolkit.jl to eliminate a conserved quantity
Asked 2021-Dec-25 at 10:11
ModelingToolkit.jl is such a great package that I frequently expect too much of it. For example, I often find myself with a model which boils down to the following:
@variables t x(t) y(t)
@parameters a b C
d = Differential(t)
eqs = [
    d(x) ~ a * y - b * x,
    d(y) ~ b * x - a * y,
    0 ~ x + y - C
]

@named sys = ODESystem(eqs)
Now, I know that I could get this down to one equation by substitution of the 0 ~ x + y - C. But in reality my systems are much larger, less trivial and programmatically generated, so I would like ModelingToolkit.jl to do it for me.
I have tried using structural_simplify, but the extra equation gets in the way:
julia> structural_simplify(sys)
ERROR: ExtraEquationsSystemException: The system is unbalanced. There are 2 highest order derivative variables and 3 equations.
More equations than variables, here are the potential extra equation(s):
Then I found the tutorial on DAE index reduction, and thought that dae_index_lowering might work for me:
julia> dae_index_lowering(sys)
ERROR: maxiters=8000 reached! File a bug report if your system has a reasonable index (<100), and you are using the default `maxiters`. Try to increase the maxiters by `pantelides(sys::ODESystem; maxiters=1_000_000)` if your system has an incredibly high index and it is truly extremely large.
So the question is whether ModelingToolkit.jl currently has a feature which will do the transformation, or if a different approach is necessary?
ANSWER
Answered 2021-Dec-25 at 10:11
The problem is that the system is unbalanced, i.e. there are more equations than there are states. In general it is impossible to prove that an overdetermined system of this sort is well-defined. Thus to solve it, you have to delete one of the equations. If you know the conservation law must hold true, then you can delete the second differential equation:
using ModelingToolkit
@variables t x(t) y(t)
@parameters a b C
d = Differential(t)
eqs = [
    d(x) ~ a * y - b * x,
    0 ~ x + y - C
]

@named sys = ODESystem(eqs)
simpsys = structural_simplify(sys)
And that will simplify down to a single equation. The problem is that in general it cannot prove that if it does delete that differential equation, y(t) is still going to be the same. In this specific case, maybe it could one day prove that the conservation law must occur given the differential equation system. But even if it could, then the format would be for you to only give the differential equation and then let it remove equations by substituting proved conservation laws: so you would still only give two equations for the two-state system.
QUESTION
RDFS vs SKOS, when to use what?
Asked 2021-Dec-06 at 18:03
As I'm learning semantic web & SPARQL, I sense that RDFS & SKOS seem to offer very similar semantic relations modeling capabilities. For example,
- RDFS - rdfs:subClassOf, rdfs:superClassOf can be used to model the hierarchy
- SKOS - skos:narrower, skos:broader can be used to model the hierarchy
Both offer 2-way transitivity.
Though
- SKOS offers more explicit properties to model transitivity, related relationships, and matching through skos:narrowerTransitive, skos:broaderTransitive, skos:related, skos:closeMatch, etc.
- Is this a correct understanding?
- Is there any guidance to pick the right pattern while modeling?
- If SKOS semantics offer the above advantages, why does DBpedia use a lot of RDFS vs SKOS?
Thanks!
ANSWER
Answered 2021-Nov-28 at 21:36
The main difference between RDFS and SKOS is outlined in the SKOS specs:
https://www.w3.org/TR/skos-reference/#L1045
The elements of the SKOS data model are classes and properties, and the structure and integrity of the data model is defined by the logical characteristics of, and interdependencies between, those classes and properties. This is perhaps one of the most powerful and yet potentially confusing aspects of SKOS, because SKOS can, in more advanced applications, also be used side-by-side with OWL to express and exchange knowledge about a domain. However, SKOS is not a formal knowledge representation language.
Since SKOS is not a formal knowledge representation language, its inferences are not standardised and there might be less interoperability with other knowledge bases.
I can't speak for dbpedia as to the reasons for the choice, but this seems a good enough reason to me, so I wouldn't be surprised if this was part of them.
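To make the contrast concrete (this example is mine, not part of the original answer), here is a small rdflib sketch that models the same two-level hierarchy once with rdfs:subClassOf and once with skos:broader, then walks the SKOS hierarchy with a transitive SPARQL property path; the toy vocabulary names are made up:

from rdflib import Graph

data = """
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# RDFS view: a class hierarchy
ex:Poodle rdfs:subClassOf ex:Dog .
ex:Dog    rdfs:subClassOf ex:Animal .

# SKOS view: concepts linked with broader
ex:poodle skos:broader ex:dog .
ex:dog    skos:broader ex:animal .
"""

g = Graph()
g.parse(data=data, format="turtle")

# The '+' property path gives the transitive closure of skos:broader
q = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?ancestor WHERE { <http://example.org/poodle> skos:broader+ ?ancestor }
"""
for row in g.query(q):
    print(row.ancestor)
# Expected: http://example.org/dog and http://example.org/animal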
QUESTION
No module named 'nltk.lm' in Google colaboratory
Asked 2021-Dec-04 at 23:32
I'm trying to import the NLTK language modeling module (nltk.lm) in a Google Colaboratory notebook, without success. I've tried installing everything from nltk, still without success.
What mistake or omission could I be making?
Thanks in advance.
ANSWER
Answered 2021-Dec-04 at 23:32
Google Colab has nltk v3.2.5 installed, but nltk.lm (the Language Modeling package) was added in v3.4.
In your Google Colab, run:
!pip install -U nltk
In the output you will see it downloads a new version, and uninstalls the old one:
!pip install -U nltk
...
Downloading nltk-3.6.5-py3-none-any.whl (1.5 MB)
...
Successfully uninstalled nltk-3.2.5
...
You must restart the runtime in order to use newly installed versions.
Click the Restart runtime button shown at the end of the output.
Now it should work!
You can double-check the nltk version using this code:
import nltk
print('The nltk version is {}.'.format(nltk.__version__))
You need v3.4 or later to use nltk.lm.
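Once the upgrade has taken effect, a minimal nltk.lm usage looks like this; the two-sentence corpus is a toy example chosen only to show the API:

# Minimal sketch (nltk >= 3.4): train a bigram MLE language model on a toy corpus.
from nltk.lm import MLE
from nltk.lm.preprocessing import padded_everygram_pipeline

corpus = [["a", "b", "c"], ["a", "c", "b"]]            # toy tokenized sentences
train_data, vocab = padded_everygram_pipeline(2, corpus)

lm = MLE(2)                  # bigram maximum-likelihood estimator
lm.fit(train_data, vocab)

print(lm.counts["a"])        # unigram count of "a"
print(lm.score("b", ["a"]))  # P("b" | "a")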
QUESTION
Are codatatypes really terminal algebras?
Asked 2021-Nov-27 at 00:22(Disclaimer: I'm not 100% sure how codatatype works, especially when not referring to terminal algebras).
Consider the "category of types", something like Hask but with whatever adjustment that fits the discussion. Within such a category, it is said that (1) the initial algebras define datatypes, and (2) terminal algebras define codatatypes.
I'm struggling to convince myself of (2).
Consider the functor T(t) = 1 + a * t. I agree that the initial T-algebra is well-defined and indeed defines [a], the list of a. By definition, the initial T-algebra is a type X together with a function f :: 1+a*X -> X, such that for any other type Y and function g :: 1+a*Y -> Y, there is exactly one function m :: X -> Y such that m . f = g . T(m) (where . denotes the function composition operator as in Haskell). With f interpreted as the list constructor(s), g the initial value and the step function, and T(m) the recursion operation, the equation essentially asserts the unique existence of the function m given any initial value and any step function defined in g, which necessitates an underlying well-behaved fold together with the underlying type, the list of a.
For example, g :: Unit + (a, Nat) -> Nat could be () -> 0 | (_,n) -> n+1, in which case m defines the length function, or g could be () -> 0 | (_,n) -> 0, and then m defines a constant zero function. An important fact here is that, for whatever g, m can always be uniquely defined, just as fold does not impose any constraint on its arguments and always produces a unique, well-defined result.
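In Haskell terms, with 1 + a*y rendered as Maybe (a, y), the fold described above can be sketched like this (the names are illustrative, not standard library functions):

-- Sketch: the initial-algebra fold for lists, with 1 + a*y rendered as Maybe (a, y).
foldList :: (Maybe (a, y) -> y) -> [a] -> y
foldList g []       = g Nothing
foldList g (a : as) = g (Just (a, foldList g as))

-- "Initial value plus step" algebra: foldList lengthAlg computes the length.
lengthAlg :: Maybe (a, Int) -> Int
lengthAlg Nothing       = 0
lengthAlg (Just (_, n)) = n + 1

-- Algebra that ignores its input: foldList zeroAlg is the constant zero function.
zeroAlg :: Maybe (a, Int) -> Int
zeroAlg _ = 0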
This does not seem to hold for terminal algebras.
Consider the same functor T defined above. The definition of the terminal T-algebra is the same as the initial one, except that m is now of type X -> Y and the equation now becomes m . g = f . T(m). It is said that this should define a potentially infinite list.
I agree that this is sometimes true. For example, when g :: Unit + (Unit, Int) -> Int is defined as () -> 0 | (_,n) -> n+1 like before, m then behaves such that m(0) = () and m(n+1) = Cons () m(n). For non-negative n, m(n) should be a finite list of units. For any negative n, m(n) should be of infinite length. It can be verified that the equation above holds for such g and m.
With either of the two following modified definitions of g, however, I don't see any well-defined m anymore.
First, when g is again () -> 0 | (_,n) -> n+1 but is of type g :: Unit + (Bool, Int) -> Int, m must satisfy m(g((b,i))) = Cons b m(g(i)), which means that the result depends on b. But this is impossible, because m(g((b,i))) is really just m(i+1), which has no mention of b whatsoever, so the equation is not well-defined.
Second, when g is again of type g :: Unit + (Unit, Int) -> Int but is defined as the constant zero function g _ = 0, m must satisfy m(g(())) = Nil and m(g(((),i))) = Cons () m(g(i)), which are contradictory because their left-hand sides are the same, both being m(0), while the right-hand sides are never the same.
In summary, there are T-algebras that have no morphism into the supposed terminal T-algebra, which implies that the terminal T-algebra does not exist. The theoretical modeling of the codatatype Stream (or infinite list), if any, cannot be based on the nonexistent terminal algebra of the functor T(t) = 1 + a * t.
Many thanks for any hint at a flaw in the story above.
ANSWER
Answered 2021-Nov-26 at 19:57
(2) terminal algebras define codatatypes.
This is not right: codatatypes are terminal coalgebras. For your T functor, a coalgebra is a type x together with f :: x -> T x. A T-coalgebra morphism between (x1, f1) and (x2, f2) is a g :: x1 -> x2 such that fmap g . f1 = f2 . g. Using this definition, the terminal T-coalgebra defines the possibly infinite lists (so-called "colists"), and the terminality is witnessed by the unfold function:
unfold :: (x -> Unit + (a, x)) -> x -> Colist a
Note though that a terminal T-algebra does exist: it is simply the Unit type together with the constant function T Unit -> Unit (and this works as a terminal algebra for any T). But this is not very interesting for writing programs.
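A minimal Haskell rendering of the colist type and the unfold witness, with Unit + (a, x) written as Maybe (a, x); the countdown coalgebra is an illustrative example:

-- Sketch: colists and unfold, with Unit + (a, x) rendered as Maybe (a, x).
data Colist a = Nil | Cons a (Colist a)

unfold :: (x -> Maybe (a, x)) -> x -> Colist a
unfold f x = case f x of
  Nothing      -> Nil
  Just (a, x') -> Cons a (unfold f x')

-- Countdown coalgebra: non-negative seeds yield finite colists of units,
-- negative seeds never reach Nil, giving an infinite colist.
countdown :: Int -> Maybe ((), Int)
countdown 0 = Nothing
countdown n = Just ((), n - 1)

units :: Int -> Colist ()
units = unfold countdown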
QUESTION
EF Core owned entity shadow PK causes null constraint violation with SQLite
Asked 2021-Nov-03 at 13:56I have a Comment
owned entity type:
public class Comment { // owned entity type
    public Comment(string text) { Text = text; }
    public string Text { get; private set; }
}

public class Post {
    public Post(string content) { Content = content; }
    public long Id { get; private set; }
    public string Content { get; private set; }
    public ICollection<Comment> Comments { get; private set; } = new HashSet<Comment>();
}
And Post's configuration includes:
builder.OwnsMany(x => x.Comments, x => {
    x.Property(y => y.Text).IsRequired();
});
The seeding code includes this:
var post = new Post("content");
post.Comments.Add(new Comment("comment1"));
post.Comments.Add(new Comment("comment2"));
await _context.AddAsync(post);
await _context.SaveChangesAsync();
When I use the Postgres provider, I can successfully create, seed, and edit the database.
When I use the SQLite provider, I can successfully create the database, but when I try to seed it I get this error:
Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while updating the entries. See the inner exception for details.
---> Microsoft.Data.Sqlite.SqliteException (0x80004005): SQLite Error 19: 'NOT NULL constraint failed: Comment.Id'.
The docs say that the owned table has an implicit key, which explains the complaint about Comment.Id. But why does this happen only for SQLite, and how do I fix it?
ANSWER
Answered 2021-Nov-03 at 13:56
It's caused by a combination of (1) an improper (IMHO) EF Core default and (2) an unsupported SQLite feature.
- As explained in the Collections of owned types EF Core documentation:
Owned types need a primary key. If there are no good candidates properties on the .NET type, EF Core can try to create one. However, when owned types are defined through a collection, it isn't enough to just create a shadow property to act as both the foreign key into the owner and the primary key of the owned instance, as we do for OwnsOne: there can be multiple owned type instances for each owner, and hence the key of the owner isn't enough to provide a unique identity for each owned instance.
The problem is that if you don't define an explicit PK, EF Core generates a shadow property (column) called Id, of type int, autoincrement (or so it thinks; however, see (2)), and defines a composite PK on (OwnerId, Id).
- However, SQLite supports an autoincrement column only if it is the single PK column. Thus it generates a regular INT column Id, which then requires an explicit value on INSERT, but EF Core does not send one since it still thinks the property is auto-generated on the server.
With that being said, you'd better always define the PK of an owned collection entity. Since the autoincrement value is unique by itself, the absolute minimum is to just mark the auto-generated shadow Id property as the PK, e.g.
builder.OwnsMany(e => e.Comments, cb => {
    cb.HasKey("Id"); // <-- add this
    // The rest...
    cb.Property(e => e.Text).IsRequired();
});
The generated migration should have the "Sqlite:Autoincrement" annotation for the Id column:
Id = table.Column<long>(type: "INTEGER", nullable: false)
    .Annotation("Sqlite:Autoincrement", true),
which was missing and causing the problem in the OP's design.
I would personally prefer that EF Core throw the regular "no key defined" error instead of defining a PK construct that is not supported by all databases, and that the SQLite provider throw an exception instead of silently ignoring the auto-increment model request, which introduces a mismatch between the model metadata (used by the EF Core infrastructure to control all runtime behaviors) and the actual database. So both could technically be considered bugs, but they are what they are. Prefer convention over configuration in general, but be explicit about things with arbitrary defaults.
Community Discussions contain sources that include Stack Exchange Network
Tutorials and Learning Resources in Modeling
Tutorials and Learning Resources are not available at this moment for Modeling