penrose | A Python package for generating Penrose tilings
kandi X-RAY | penrose Summary
A Python package for generating Penrose tilings. The examples can be run directly, e.g. python example4.py, and each generates an SVG file that can be opened and viewed in a web browser.
Top functions reviewed by kandi - BETA
- Make the figure
- Remove duplicates from the ring
- Conjugate
- Center of the triangle
- Rotate the matrix
- Flip the matrix along the x-axis
- Flip the y-axis
- Allocate the elements of the molecule
- Add the complex elements
- Create SVG file
- Generate the SVG
- Return the arc length of a triangle
- Return the colour of the tile
- Return the path
- Calculate the two arcs
penrose Key Features
penrose Examples and Code Snippets
Community Discussions
Trending Discussions on penrose
QUESTION
When computing ordinary least squares regression using either sklearn.linear_model.LinearRegression or statsmodels.regression.linear_model.OLS, they don't seem to throw any errors when the covariance matrix is exactly singular. It looks like under the hood they use the Moore-Penrose pseudoinverse rather than the usual inverse, which would be impossible with a singular covariance matrix.
The question is then twofold:
What is the point of this design? Under what circumstances is it deemed useful to compute OLS regardless of whether the covariance matrix is singular?
What does it output as coefficients then? To my understanding, since the covariance matrix is singular, there would be an infinite (in the sense of a scaling constant) number of solutions via the pseudoinverse.
ANSWER
Answered 2021-Jan-22 at 21:04
That's indeed the case. As you can see here, sklearn.linear_model.LinearRegression is based on scipy.linalg.lstsq or scipy.optimize.nnls, which in turn compute the pseudoinverse of the feature matrix via SVD decomposition (they do not exploit the Normal Equation, for which you would have the mentioned issue, as it is less efficient). Moreover, observe that each sklearn.linear_model.LinearRegression instance returns the singular values of the feature matrix in the singular_ attribute and its rank in the rank_ attribute.
A similar argument applies to statsmodels.regression.linear_model.OLS, where the fit() method of class RegressionModel uses the following:
The fit method uses the pseudoinverse of the design/exogenous variables to solve the least squares minimization.
(see here for reference).
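A minimal sketch illustrating the point (the data is made up for illustration; the attributes are the ones named in the answer): with an exactly singular design matrix, LinearRegression still fits, and the pseudoinverse selects the minimum-norm solution among the infinitely many that exist.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: the second feature duplicates the first, so the
# covariance matrix X.T @ X is exactly singular.
X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])

model = LinearRegression(fit_intercept=False).fit(X, y)
print(model.coef_)      # [1. 1.]: the minimum-norm solution
print(model.rank_)      # 1, i.e. rank-deficient
print(model.singular_)  # one singular value is (numerically) zero

# Same answer via the pseudoinverse directly:
print(np.linalg.pinv(X) @ y)
```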
QUESTION
I would like to see continued fractions with integers displayed in that form with SymPy, but I cannot seem to make SymPy comply. I found this Stack Overflow question and answer very useful (see farther below), but cannot reach my target goal here:
This is the continued fraction expansion of $\frac{13}{5}$. A common notation for this expansion is to give only the boxed terms, as SymPy does below, i.e., $[2, 1, 1, 2]$ from the SymPy continued_fraction_iterator:
ANSWER
Answered 2020-Dec-21 at 16:23
You can construct the expression explicitly, passing evaluate=False to each part of the expression tree:
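The answer's original code was not captured here; a minimal sketch of the idea, building the nested fraction for $[2, 1, 1, 2]$ without letting SymPy collapse it to $\frac{13}{5}$:

```python
from sympy import Add, Integer, Pow, pprint

def cf_expr(terms):
    """Build an unevaluated continued fraction from a list of terms."""
    expr = Integer(terms[-1])
    for t in reversed(terms[:-1]):
        # evaluate=False keeps SymPy from simplifying at each step
        expr = Add(Integer(t), Pow(expr, -1, evaluate=False), evaluate=False)
    return expr

e = cf_expr([2, 1, 1, 2])
pprint(e)  # displays the nested fraction rather than 13/5
```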
QUESTION
I was going through the book Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow, and the author was explaining how the pseudo-inverse (Moore-Penrose inverse) of a matrix is calculated in the context of Linear Regression. I'm quoting verbatim here:
The pseudoinverse itself is computed using a standard matrix factorization technique called Singular Value Decomposition (SVD) that can decompose the training set matrix $X$ into the matrix multiplication of three matrices $U \Sigma V^T$ (see numpy.linalg.svd()). The pseudoinverse is calculated as $X^+ = V \Sigma^+ U^T$. To compute the matrix $\Sigma^+$, the algorithm takes $\Sigma$ and sets to zero all values smaller than a tiny threshold value, then it replaces all nonzero values with their inverse, and finally it transposes the resulting matrix. This approach is more efficient than computing the Normal Equation.
I've got an understanding of how the pseudo-inverse and SVD are related from this post. But I'm not able to grasp the rationale behind setting all values less than the threshold to zero. The inverse of a diagonal matrix is obtained by taking the reciprocals of the diagonal elements, so small values would be converted to large values in the inverse matrix, right? Then why are we removing the large values?
I went and looked into the numpy code, which looks as follows, just for reference:
ANSWER
Answered 2020-Feb-03 at 15:06
It's almost certainly an adjustment for numerical error. To see why this might be necessary, look at what happens when you take the svd of a rank-one 2x2 matrix. We can create a rank-one matrix by taking the outer product of a vector like so:
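The answer's original snippet was not captured; a minimal sketch of the same experiment:

```python
import numpy as np

# A rank-one 2x2 matrix from an outer product. Its second singular value
# is mathematically zero, but floating-point round-off can leave a tiny
# nonzero value in its place.
v = np.array([1.0, 2.0])
A = np.outer(v, v)

U, s, Vt = np.linalg.svd(A)
print(s)        # something like [5.0e+00, 1.2e-16]

# Naively inverting every singular value turns that round-off noise into
# an enormous, meaningless entry (possibly inf) in the "inverse":
print(1.0 / s)

# np.linalg.pinv instead zeroes out singular values below rcond, treating
# the matrix as the rank-one matrix it really is:
print(np.linalg.pinv(A))
```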
QUESTION
I am trying to implement an Eigen library pseudo-inverse function in a Matlab MEX-file. It compiles successfully but crashes when I run it.
I am trying to follow the FAQ on how to implement a pseudo-inverse function using the Eigen library. The FAQ suggests adding it as a method to the JacobiSVD class, but since you can't do that in C++, I'm adding it to a child class. It compiles successfully but then crashes without an error message. It successfully outputs "hi" without crashing if I comment out the line with the .pinv call, so that's where the problem is arising. To run, I just compile it (as test.cpp) and then type test at the command line. I am using Matlab R2019a under macOS 10.14.5 and Eigen 3.3.7. In my full code I also get lots of weird error messages regarding the pinv code, but before I can troubleshoot I need this simple test case to work. This is all at the far limits of my understanding of C++. Any help is appreciated.
ANSWER
Answered 2019-Jul-03 at 20:51
When constructing the Eigen::JacobiSVD object, you fail to request that the matrices U and V be computed. By default, these are not computed. Obviously, accessing these matrices when they have not been computed will cause a segmentation violation.
See the documentation for the constructor. A second input argument must specify either ComputeFullU | ComputeFullV or ComputeThinU | ComputeThinV. The thin ones are preferable when computing the pseudo-inverse, as the rest of the matrices are not needed.
I would not derive from the JacobiSVD class just to add a method. Instead, I would simply write a free function. This is both easier and allows you to use only the documented portions of the Eigen API.
I wrote the following MEX-file, which works as intended (using code I already had for this computation). It does the same, but in a slightly different way that avoids writing an explicit loop. Not sure this way of writing it is very clear, but it works.
QUESTION
I'm absolutely pulling my hair out here trying to port a matrix calculation from Octave to NumPy. This is specifically in regards to multivariate regression.
My arbitrary data is as follows, where the array x is my input value:
ANSWER
Answered 2019-Mar-04 at 19:00
Posting this as an answer so your question doesn't still show up as unanswered: use np.linalg.pinv (pseudo-inverse) where you would use pinv in Octave.
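For instance, the usual Octave normal-equation idiom translates one-for-one. A minimal sketch with hypothetical data, since the question's arrays weren't captured here:

```python
import numpy as np

# Octave:  theta = pinv(X' * X) * X' * y
# NumPy equivalent for the same least-squares fit:
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # hypothetical design
y = np.array([1.0, 2.0, 2.0])

theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)

# Equivalent and simpler: apply pinv to X directly.
print(np.linalg.pinv(X) @ y)
```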
QUESTION
I'm translating an algorithm from R to C. I need to obtain the pseudoinverse of a matrix, but the result I get in C has some differences from the one I get in R, and these differences change the behaviour of the algorithm.
The code I used to get the pseudoinverse in C is this.
I did some reading, and there are different ways to get the pseudoinverse; the method used in C is Moore-Penrose. The function used in R is from the corpcor library. Both use singular value decomposition.
This is the matrix from which I want to get the pseudoinverse:
ANSWER
Answered 2018-Jan-07 at 16:48
A generalized inverse $A^g$ of $A$ should fulfill the four Moore-Penrose conditions:
$A^g A A^g = A^g$
$A A^g A = A$
$(A A^g)^T = A A^g$
$(A^g A)^T = A^g A$
For the given matrix, the result of corpcor::pseudoinverse does not satisfy these properties, while the result of MASS::ginv does:
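These conditions are easy to check numerically; a small sketch with NumPy (the test matrix is made up, any matrix will do):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # deliberately rank-deficient
Ag = np.linalg.pinv(A)

# Verify the four Moore-Penrose conditions listed above.
checks = {
    "A Ag A = A":       np.allclose(A @ Ag @ A, A),
    "Ag A Ag = Ag":     np.allclose(Ag @ A @ Ag, Ag),
    "(A Ag)^T = A Ag":  np.allclose((A @ Ag).T, A @ Ag),
    "(Ag A)^T = Ag A":  np.allclose((Ag @ A).T, Ag @ A),
}
for name, ok in checks.items():
    print(name, ok)
```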
QUESTION
The below code (annotated inline with locations) gives a minimal example of the puzzling behavior I'm experiencing.
Essentially, why does (2) result in terrible space/time performance while (1) does not?
The below code is compiled and run as follows on ghc version 8.4.3:
ghc -prof -fprof-auto -rtsopts test.hs; ./test +RTS -p
ANSWER
Answered 2018-Aug-14 at 07:26
forall a . (Fractional a) => a is a function type. It has two arguments: a type (a :: *) and an instance with type Fractional a. Whenever you see =>, it's a function operationally; it compiles to a function in GHC's core representation and sometimes stays a function in machine code as well. The main difference between -> and => is that arguments for the latter cannot be explicitly given by programmers; they are always filled in implicitly by instance resolution.
Let's see the fast step first:
QUESTION
I'm working on a project involving solving large underdetermined systems of equations.
My current algorithm calculates the SVD (numpy.linalg.svd) of a matrix representing the given system, then uses its results to calculate the Moore-Penrose pseudoinverse and the right nullspace of the matrix. I use the nullspace to find all variables with unique solutions, and the pseudo-inverse to find out their values.
However, the MPP (Moore-Penrose pseudoinverse) is quite dense and is a bit too large for my server to handle.
Problem
I found the following paper which details a sparser pseudoinverse that maintains most of the essential properties of the MPP. This is obviously of much interest to me, but I simply don't have the math background to understand how he's calculating the pseudoinverse. Is it possible to calculate it with SVD? If not, what's the best way to go about it?
Details
These are the lines of the paper which I think are probably relevant, but I'm not acquainted enough with the notation to understand:
$\mathrm{spinv}(A) = \arg\min \|B\|$ subject to $BA = I_n$, where $\|B\|$ denotes the entrywise $\ell_1$ norm of $B$
This is in general a non-tractable problem, so we use the standard linear relaxation with the $\ell_1$ norm
$\mathrm{sspinv}(A) = \eta_\tau\{\mathrm{spinv}(A)\}$, with $\eta_\tau(u) = u \cdot 1_{|u| \ge \tau}$
Find my code and more details on the actual implementation here
ANSWER
Answered 2018-Jul-15 at 04:24
As I understand it, here's what the paper says about the sparse pseudoinverse.
It says
We aim at minimizing the number of non-zeros in spinv(A)
This means you should take the $\ell_0$ norm (see David Donoho's definition here: the number of non-zero entries), which makes the problem intractable.
$\mathrm{spinv}(A) = \arg\min \|B\|_0$ subject to $BA = I$
So they turn to a convex relaxation of this problem, so it can be solved by linear programming.
This is in general a non-tractable problem, so we use the standard linear relaxation with the $\ell_1$ norm.
The relaxed problem is then
$\mathrm{spinv}(A) = \arg\min \|B\|_1$ subject to $BA = I$ (6)
This is sometimes called basis pursuit and tends to produce sparse solutions (see Convex Optimization by Boyd and Vandenberghe, section 6.2, Least-norm problems).
So, solve this relaxed problem.
The linear program (6) is separable and can be solved by computing one row of B at a time
So, you can solve a series of problems of the form below to obtain the solution:
$\mathrm{spinv}(A)_i = \arg\min \|B_i\|_1$ subject to $B_i A = I_i$
where $B_i$ denotes the $i$-th row of the matrix.
See here for how to convert this absolute-value problem to a linear program.
In the code below, I slightly alter the problem to $\mathrm{spinv}(A)_i = \arg\min \|B_i\|_1$ subject to $A B_i = I_i$, where $B_i$ is the $i$-th column of the matrix, so the problem becomes $\mathrm{spinv}(A) = \arg\min \|B\|_1$ subject to $AB = I$. Honestly, I don't know if there's a difference between the two. Here I'm using scipy's linprog simplex method. I don't know the internals of simplex well enough to say whether it uses SVD.
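The answer's code itself was not captured here; a minimal sketch of the per-column linear program it describes, using the standard split $B_i = u - v$ with $u, v \ge 0$ so the $\ell_1$ objective becomes linear (scipy's default LP solver is used rather than explicitly selecting simplex):

```python
import numpy as np
from scipy.optimize import linprog

def sparse_pinv_column(A, i):
    """min ||b||_1 subject to A @ b = e_i, as a linear program."""
    m, n = A.shape
    c = np.ones(2 * n)            # sum(u) + sum(v); at the optimum this
                                  # equals ||u - v||_1
    A_eq = np.hstack([A, -A])     # A @ (u - v) = e_i
    b_eq = np.zeros(m)
    b_eq[i] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Underdetermined toy system: B has shape (3, 2) and satisfies A @ B = I.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
B = np.column_stack([sparse_pinv_column(A, i) for i in range(A.shape[0])])
print(B)
print(np.allclose(A @ B, np.eye(A.shape[0])))  # True
```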
QUESTION
I've been working with numpy matrices in an algorithm lately, and I've encountered a problem:
I use 3 matrices in total.
ANSWER
Answered 2018-May-27 at 08:01
Your troubles have got nothing to do with pinv being accurate or not.
As you note yourself, your matrices are massively rank deficient: m1 has rank 4 or less, m2 rank 3 or less. Hence your system m1@x = m3 is underdetermined in the extreme, and it is not possible to recover m2.
Even if we throw in all we know about the structure of m2, i.e. first two columns 3's and 2's, rest 500 counting upwards, there is a combinatorially large number of solutions.
The script below finds them all if allowed enough time. In practice, I didn't look beyond 32x32 matrices, which in the run shown below yielded 15093381006 different valid reconstructions m2' that satisfy m1@m2' = m3 and the structural constraints I just mentioned.
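The answer's script itself wasn't captured here; a minimal sketch of why recovery fails, with made-up shapes rather than the question's actual matrices: any solution can be shifted by the null space of m1 and still satisfy the system exactly.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# Made-up rank-deficient m1 (rank <= 2) and a "hidden" m2.
m1 = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 8))
m2 = rng.standard_normal((8, 5))
m3 = m1 @ m2

x = np.linalg.pinv(m1) @ m3        # the minimum-norm solution, not m2
N = null_space(m1)                 # basis for the null space of m1
shift = N @ rng.standard_normal((N.shape[1], 5))

print(np.allclose(m1 @ x, m3))            # True
print(np.allclose(m1 @ (x + shift), m3))  # True: a different valid solution
print(np.allclose(x, m2))                 # False: the original is lost
```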
QUESTION
So, the file has about 57,000 book titles, author names, and an ETEXT No. I am trying to parse the file to get only the ETEXT Nos.
The file is like this:
ANSWER
Answered 2018-Apr-27 at 10:41It could be that those extra lines that are not being filtered out start with whitespace other than a " " char, like a tab for example. As a minimal change that might work, try filtering out lines that start with any whitespace rather than specifically a space char?
To check for whitespace in general rather than a space char, you'll need to use regular expressions. Try if not re.match(r'^\s', line) and ...
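A minimal sketch of that suggestion; the filename and surrounding logic are assumptions, since the original script wasn't captured here:

```python
import re

# Keep only lines that do NOT begin with any whitespace character;
# indented continuation lines (wrapped titles, etc.) are skipped.
with open("GUTINDEX.ALL", encoding="utf-8") as f:  # hypothetical filename
    for line in f:
        if line.strip() and not re.match(r"^\s", line):
            print(line.rstrip())
```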
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install penrose
You can use penrose like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.