spherical | Wigner's 𝔇 matrices, 3-j symbols, and spherical harmonics
kandi X-RAY | spherical Summary
Python/numba package for evaluating and transforming Wigner's 𝔇 matrices, Wigner's 3-j symbols, and spin-weighted (and scalar) spherical harmonics. These functions are evaluated directly in terms of quaternions, as well as in the more standard forms of spherical coordinates and Euler angles. These quantities are computed using recursion relations, which makes it possible to compute to very high ℓ values. Unlike direct evaluation of individual elements, which will generally cause overflow or underflow beyond ℓ≈30, these recursion relations should be accurate for ℓ values beyond 1000. The conventions for this package are described in detail on this page.
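As a rough illustration of the call pattern described above, here is a minimal sketch. The names spherical.Wigner, wigner.D, wigner.sYlm, and the companion quaternionic package follow the project's README; treat them as assumptions if your installed version differs.

```python
import quaternionic
import spherical

# The recursion-based evaluation handles much larger ell_max as well.
ell_max = 8
wigner = spherical.Wigner(ell_max)

# A rotation expressed directly as a unit quaternion.
R = quaternionic.array([1.0, 2.0, 3.0, 4.0]).normalized

# Names below are taken from the package README and may differ between versions.
D = wigner.D(R)          # Wigner 𝔇 matrix elements for ell = 0..ell_max (flattened)
Y = wigner.sYlm(-2, R)   # spin-weighted spherical harmonics with spin weight s = -2

print(D.shape, Y.shape)
```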
Top functions reviewed by kandi - BETA
- Calculate Wigner D
- Single step of step 2
- Calculate complex powers of a complex number
- Compute the WignerH index
- Multiply ellipsoids
- Helper function for _multiplication_factor
- Normalizes the function f
- Subtracts the values from self
- Add two modes together
- Generate a range of Wigner h
- Calculate the Wignerh size
- Generate a range of ell_min and ell_max
- Calculate size of ellipse
- Generate a WignerD range
- Compute the WignerD size
- Calculates clebsch_gordan
- Compute the Wigner Jacobian
- Calculate Wigner coefficient
- Parse a message
Community Discussions
Trending Discussions on spherical
QUESTION
I have a RealityKit app that is doing some basic AR image tracking. It detects a rectangular-shaped image, and I am looking to place some spherical dots at each corner of the image. I know I can create the spheres themselves using a ModelEntity, but I haven't been able to figure out how to specify the position of these relative to the established ARImageAnchor from the reference image.
I think I just need a counterpart to SceneKit's addChildNode(SCNNode) function, which uses SCNVector3Make() to specify a position. I just haven't been able to find a way to establish a relative position and assign a child node to the ARImageAnchor outside of these SceneKit functions. Is there something built into RealityKit that would accomplish this, or is there a way to use SceneKit to place the corner dots while still using my current setup with RealityKit for the AR reference image tracking?
ANSWER
Answered 2022-Mar-26 at 12:12
Try the following approach:
QUESTION
I am playing with React Fiber and React Drei, but I do not want to use TypeScript like in the examples I have found in their Git repository.
I have converted the following example, Stars.tsx, with the typescriptlang tool.
This is the output:
...ANSWER
Answered 2022-Mar-06 at 18:30
Import the named Stars, like:
QUESTION
I have (a bunch of 3D) stacks of tomographic data, in which I have deduced a certain (3D) coordinate around which I need to cut out a spherical region.
My code produces the following image, which gives an overview of what I do. I calculate the orange and green points based on the dashed white and dashed green regions. Around the midpoint of these, I'd like to cut out a spherical region; its representation is marked with a circle in the image (also drawn by my code).
Constructing a sphere with skimage.morphology.ball and multiplying this with the original image is easy to do, but how can I set the center of the sphere at the desired 3D location in the image?
The ~30 3D stacks of images are all of different size with different regions, but I have all the necessary coordinates ready for further use.
ANSWER
Answered 2022-Jan-24 at 21:45
You have some radius r and an index (i, j, k) into the data. kernel = skimage.morphology.ball(r) returns a mask/kernel that is a = 2*r + 1 along each side; it's cube-shaped.
Take a cube-shaped slice, the size of your kernel, out of the tomograph. The starting indices depend on where you need the center to be and what radius the kernel has.
piece = data[i-r:i-r+a, j-r:j-r+a, k-r:k-r+a]
Apply the binary "ball" mask/kernel to the slice.
piece *= kernel
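Putting the pieces of that answer together, here is a minimal self-contained sketch. The volume shape, center index, and radius are invented for illustration; real values would come from the tomographic stacks and the deduced coordinates.

```python
import numpy as np
from skimage.morphology import ball

# Illustrative volume and cut-out parameters (not the asker's real data).
data = np.random.rand(128, 128, 128)   # one 3D tomographic stack
i, j, k = 60, 45, 70                    # desired sphere center (voxel index)
r = 10                                  # sphere radius in voxels

kernel = ball(r)                        # binary cube of side a = 2*r + 1
a = 2 * r + 1

# Cube-shaped slice centered on (i, j, k); assumes the sphere fits inside the volume,
# otherwise the slice bounds would need clamping or the volume padding.
piece = data[i - r:i - r + a, j - r:j - r + a, k - r:k - r + a].copy()

# Zero out everything outside the spherical region.
piece *= kernel

print(piece.shape)  # (21, 21, 21)
```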
QUESTION
Scenario
I'm using Unity C# to re-invent a Google Earth-like experience as a project. New tiles are asynchronously loaded in from the web while a user pans the camera around the globe. So far I'm able to load in all the TMS tiles based on their x & y coordinates and zoom level. Currently I'm using tile x,y to try and figure out where the tile should appear on my earth "sphere", and it's becoming quite tedious, I assume because of the differences between Euler angles and quaternions.
- I'm using the angle of Camera.main to figure out which tiles should be viewed at any moment (seems to be working fine)
- I have to load / unload tiles for memory management, as level 10 can receive over 1 million 512x512 tiles
- I'm trying to turn a downloaded tile's x,y coordinates (2D) into a 3D position & rotation
Question
Using just the TMS coordinates of my tile (0,0 - 63,63) how can I calculate the tile's xyz "earth" position as well as its xyz rotation?
Extra
- in the attached screenshot I'm at zoom level 4 (64 tiles)
- y axis 0 is the bottom of the globe, while y axis 15 is the top
- I'm mostly using Mathf.Sin and Mathf.Cos to figure out position & rotation so far
I've figured out how to get the tile position correct. Now I'm stuck on the correct rotation of the tiles.
The code that helped me the most was found with a question about generating a sphere in python.
I modified the code to look like so:
...ANSWER
Answered 2021-Dec-07 at 21:20
For the positioning and rotation of the planes, you can do that in C#:
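The answer's C# listing is not reproduced above. As a language-neutral illustration, here is a Python sketch of the underlying tile-to-sphere math: it assumes tiles map linearly to longitude and latitude (equirectangular layout, y = 0 at the bottom, as the question describes); Web-Mercator tiles would need a different latitude mapping, and the function name and grid sizes are invented for the example.

```python
import math

def tile_center_on_sphere(x, y, n_x, n_y, radius=1.0):
    """Map TMS-style tile indices to a position and outward normal on a sphere."""
    lon = (x + 0.5) / n_x * 2.0 * math.pi - math.pi          # -pi .. pi
    lat = (y + 0.5) / n_y * math.pi - math.pi / 2.0          # -pi/2 .. pi/2 (y=0 at south)

    # Position of the tile's center on the sphere.
    px = radius * math.cos(lat) * math.cos(lon)
    py = radius * math.sin(lat)
    pz = radius * math.cos(lat) * math.sin(lon)

    # Outward unit normal at that point; in Unity the tile would be rotated so its
    # local axis aligns with this direction (e.g. via a look-rotation toward it).
    normal = (px / radius, py / radius, pz / radius)
    return (px, py, pz), normal

position, normal = tile_center_on_sphere(x=3, y=12, n_x=16, n_y=16)
print(position, normal)
```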
QUESTION
Why won't a MeshPhongMaterial's envMap property work on polygonal faces when viewed through an orthographic camera?
It works on spheres but not on an IcosahedronGeometry, for example. If I set the detail parameter of the IcosahedronGeometry to 2+ (more faces), the envMap begins to show. But if I switch to the perspective cam, the envMap is fully visible even with detail of 0.
This is what it looks like with the perspective cam; note the cubemap reflection of some clouds:
This is what it looks like with the orthographic cam and detail of 0; note the lack of cubemap reflection (please ignore the warping of the image):
Orthographic cam, detail of 1; the cubemap reflection is back:
The only difference between these two versions of the script is the camera.
Here's the code I'm using to create this object:
...ANSWER
Answered 2022-Jan-05 at 01:54
This is the expected behavior.
- With perspective cameras, the reflective "rays" separate as they get further away from the camera, reflecting a wider angle of the envMap.
- With an ortho camera these reflective "rays" do not separate because they're parallel. So the reflection on a flat face is a very narrow angle of the envMap.
See this demo I quickly put together to demonstrate what you're seeing:
- It seems to work on spheres because when the parallel orthographic "rays" bounce off a rounded surface, these rays grow wider apart. They are no longer parallel (as is the case with a Perspective camera).
You can see the reflections still work on your demo because the faces alternate between light and dark as you rotate them. You're just looking at a much narrower segment of the envMap:
QUESTION
I have a set of data values for a scalar 3D function, arranged as inputs x, y, z in an array of shape (n, 3) and the function values f(x, y, z) in an array of shape (n,).
EDIT: For instance, consider the following simple function
...ANSWER
Answered 2021-Nov-16 at 17:10
All you need is to reshape F[:, 3] (only f(x, y, z)) into a grid. It's hard to be more precise without sample data:
If the data is not sorted, you need to sort it first:
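A small self-contained sketch of that reshape-plus-sort idea. The 4×4×4 grid and the [x, y, z, f] column layout of F are invented for illustration; the real F would be the asker's stacked samples.

```python
import numpy as np

# Illustrative data: a regular 4x4x4 grid with f = x + 10*y + 100*z,
# stacked as columns [x, y, z, f] and then shuffled to mimic unsorted input.
x, y, z = np.meshgrid(np.arange(4), np.arange(4), np.arange(4), indexing="ij")
f = x + 10 * y + 100 * z
F = np.column_stack([x.ravel(), y.ravel(), z.ravel(), f.ravel()]).astype(float)
rng = np.random.default_rng(0)
rng.shuffle(F, axis=0)

# Sort rows lexicographically by (x, y, z) so the values line up with a C-ordered grid.
order = np.lexsort((F[:, 2], F[:, 1], F[:, 0]))
F_sorted = F[order]

# Reshape the function-value column into an (nx, ny, nz) grid.
nx, ny, nz = 4, 4, 4
f_grid = F_sorted[:, 3].reshape(nx, ny, nz)

print(f_grid[1, 2, 3])  # 1 + 20 + 300 = 321.0
```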
QUESTION
I have got a mostly functioning self-made system for handling Google Maps in Vue 3. I haven't used libraries because none have quite the functionality that I want (eventually), and it seemed relatively straightforward to implement myself (which it has been, up until this issue).
The Map component is as follows:
...ANSWER
Answered 2021-Nov-17 at 04:57
Thanks to User28, I eventually found the solution:
On the marker component, marker (which is where the Google marker object is saved to) should not be a reactive property. Changing the data method to
QUESTION
I am trying to get transparency working within a wavy plane terrain
Here is my demo:
...ANSWER
Answered 2021-Oct-13 at 16:08
Thanks to the comments by TheJim01 above, TIL about the depthWrite option of Material:
https://threejs.org/docs/#api/en/materials/Material.depthWrite
Made a codepen here: https://codepen.io/cdeep/pen/rNzVvyR
QUESTION
I have a big spherical GameObject which moves forward in 3D with constant velocity. I have other spherical objects that the big object needs to attract to itself. I am using Newton's law of universal gravitation to attract the other objects, but as expected, they end up doing a slingshot movement, much like spacecraft do when they use other planets' orbits to accelerate.
I actually want a magnetic effect: without taking the masses into account, all other objects should be caught by the big object. How can I do that? Do I need a different formula? Or do I need to change the movement behavior of the objects altogether?
...ANSWER
Answered 2021-Oct-06 at 20:00
If I got it right you expect to have something like this: https://www.youtube.com/watch?v=33EpYi3uTnQ
- You can do a spherical raycast or have a sphere collider as a trigger to detect the objects that are inside of your magnetic field.
- Once you know those objects, you can calculate the distance from each of them to the magnetic ball.
- You can use an inverse interpolation to know how much strength/"magnetism" should act on each object.
- Then you can apply some force on the attracted object towards the magnetic ball's center.
Something like this algorithm:
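The answer's original Unity C# snippet is not reproduced above. The following Python sketch illustrates the same idea in a language-neutral way; the function name and the magnet_pos, max_radius, and max_force parameters are purely illustrative.

```python
import numpy as np

def magnetic_pull(obj_pos, magnet_pos, max_radius, max_force):
    """Force vector pulling an object toward the magnet's center.

    Strength falls off linearly (the "inverse interpolation") from max_force at
    the center to zero at max_radius, independent of the objects' masses.
    """
    obj_pos = np.asarray(obj_pos, dtype=float)
    magnet_pos = np.asarray(magnet_pos, dtype=float)
    to_magnet = magnet_pos - obj_pos
    distance = np.linalg.norm(to_magnet)
    if distance >= max_radius or distance == 0.0:
        return np.zeros(3)                      # outside the field, or already at the center
    strength = max_force * (1.0 - distance / max_radius)
    return to_magnet / distance * strength      # unit direction * magnitude

# Example: an object 3 units away from a magnet with a 10-unit field.
force = magnetic_pull([3.0, 0.0, 0.0], [0.0, 0.0, 0.0], max_radius=10.0, max_force=5.0)
print(force)  # ~[-3.5, 0., 0.]
```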
QUESTION
I'm trying to write two functions for converting Cartesian coordinates to spherical coordinates and vice versa. Here are the equations that I've used for the conversions (they can also be found on this Wikipedia page):
[Cartesian-to-spherical and spherical-to-Cartesian conversion equations]
Here is my spherical_to_cartesian function:
ANSWER
Answered 2021-Oct-06 at 09:28
You seem to be giving your angles in degrees, while all trigonometric functions expect radians. Multiply degrees by math.pi/180 to get radians, and multiply radians by 180/math.pi to get degrees.
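A runnable sketch of both conversions working in radians. It assumes the physics convention (theta as the polar angle from +z, phi as the azimuth), which is one of the conventions on the cited Wikipedia page; math.radians and math.degrees handle the unit conversion the answer describes.

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """theta = polar angle from +z, phi = azimuth from +x, both in radians."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    """Returns (r, theta, phi) in radians."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r != 0.0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

# If the inputs are in degrees, convert first: math.radians(deg) == deg * math.pi / 180.
x, y, z = spherical_to_cartesian(1.0, math.radians(90.0), math.radians(45.0))
print(cartesian_to_spherical(x, y, z))  # ~(1.0, 1.5708, 0.7854)
```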
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.