gimbal | Web Performance Auditing tooling | Performance Testing library
kandi X-RAY | gimbal Summary
Installation | Documentation | Contributing | Code of Conduct | Twitter. Gimbal works with CIs like Circle CI, Travis CI, Jenkins, and GitHub Actions.
Community Discussions
Trending Discussions on gimbal
QUESTION
I'm working with quaternions and an LSM6DSO32 sensor (gyro + accelerometer). I fuse the data coming from the sensor and end up with a quaternion; everything works well.
Now I'd like to detect whether my quaternion has rotated more than 90° from an initial quaternion. Here is what I do: q1 is my initial quaternion and q2 is the quaternion coming from my fused data. To detect whether q2 has rotated more than 90° from q1, I do:
ANSWER
Answered 2021-Aug-06 at 01:51
The "yaw" of a quaternion generally means q_yaw in a quaternion formed by q_roll * q_pitch * q_yaw. So that quaternion without its yaw would be q_roll * q_pitch. If you have the pitch and roll values at hand, the easiest thing to do is just to reconstruct the quaternion while ignoring q_yaw.
However, if we are really dealing with a completely arbitrary quaternion, we'll have to get from q_roll * q_pitch * q_yaw to q_roll * q_pitch.
We can do it by appending the opposite transformation at the end of the equation: q_roll * q_pitch * q_yaw * conj(q_yaw). q_yaw * conj(q_yaw) is guaranteed to be the identity quaternion as long as we are only dealing with normalized quaternions, and since we are dealing with rotations, that's a safe enough assumption.
In other words, removing the "Yaw" of a quaternion would involve:
- Find the yaw of the quaternion
- Multiply the quaternion by the conjugate of that.
So we need to find the yaw of the quaternion, which is how much the forward vector is rotated around the up axis by that quaternion.
The simplest way to do that is to just try it out, and measure the result:
- Transform a reference forward vector (on the ground plane) by the quaternion
- Take that and project it back on the ground plane.
- Get the angle between this projection and the reference vector.
- Form a "Yaw" quaternion with that angle around the Up axis.
Putting all this together, and assuming you are using a Y=up system of coordinates, it would look roughly like this:
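The original snippet isn't reproduced on this page. As a rough sketch of the listed steps, assuming Y is up, Z is the reference forward direction, and scipy's [x, y, z, w] quaternion order (the function name and tolerance are illustrative, not from the answer):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

UP = np.array([0.0, 1.0, 0.0])        # Y = up
FORWARD = np.array([0.0, 0.0, 1.0])   # reference forward on the ground plane

def remove_yaw(q_xyzw):
    """Strip the rotation about the up axis, following the steps above:
    transform the forward vector, project it onto the ground plane,
    measure the yaw angle, then append conj(q_yaw)."""
    rot = R.from_quat(q_xyzw)
    f = rot.apply(FORWARD)                    # transformed forward vector
    f_ground = f - np.dot(f, UP) * UP         # project back onto the ground plane
    norm = np.linalg.norm(f_ground)
    if norm < 1e-6:                           # looking straight up/down: yaw is undefined
        return q_xyzw
    f_ground /= norm
    # signed angle between the projection and the reference forward, around UP
    yaw = np.arctan2(np.dot(np.cross(FORWARD, f_ground), UP),
                     np.dot(FORWARD, f_ground))
    q_yaw = R.from_rotvec(yaw * UP)
    return (rot * q_yaw.inv()).as_quat()      # q_roll * q_pitch * q_yaw * conj(q_yaw)
```

Quaternion multiplication order and angle signs differ between libraries, so the conj(q_yaw) factor may need to sit on the other side in engines that use a different convention.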
QUESTION
I'm at a loss how to do this in react-three-fiber. I want to create a rig for a camera (which is straightforward in three.js). I put the camera inside a group. If I want to rotate the camera around the Y axis, I change the rotation of the parent group. If I want to rotate it around the X axis, I rotate the camera itself. This way, I don't run into gimbal issues (like the X rotation being inverted when the Y rotation is 180 degrees).
In three.js I would do:
ANSWER
Answered 2020-Sep-25 at 07:31
react-three-fiber brings its own camera; this one is a default and you won't be able to nest children in there. You can use your own cameras of course; the state model has a "setDefaultCamera" function for this, so that pointer events and everything else get to use your new camera. Cameras unfortunately aren't straightforward in three, so there's still some churn involved: you need to update it on resize, you need to calculate aspect, etc. All this has been abstracted here: https://github.com/pmndrs/drei#perspectivecamera-
QUESTION
This code is from How To Mechatronics (not mine: https://howtomechatronics.com/projects/diy-arduino-gimbal-self-stabilizing-platform/). I am working on an Arduino gimbal and am using this code. It brings up an error, which I will paste at the bottom. I searched this sort of error and it seems it occurs because an output is negative but not defined to come out as negative, or may be too large. I am not quite sure what to change, or how, in order to make it work. I also have a problem with the yaw motor, which I believe may be fried because my brother connected it to a 12 V battery when it is only supposed to take 5 V. I am sure I can disable the yaw (although I'm not sure this would solve the other issue), but I don't know which lines to comment out in order to do so.
ANSWER
Answered 2020-Dec-01 at 23:32
It's a warning for an int overflow in the MPU6050 library code, not in your code.
On GitHub, an issue was raised about this some time ago, and the fix is given in the same posting.
Another solution suggested in the comments there to get rid of this warning is to simply change the "16384" to "16384L" in the library code.
Note that i2cdevlib has 247 open issues; I don't think the owner will fix this particular problem any time soon.
QUESTION
I can't get my head around how to read simple status data, like the current gimbal pitch for example.
I have not found a solid connection between the DJI SDK and what actually works in Xcode. The SDK gives me hints, and together with Xcode autocompletion I move forward, slowly.
Class GimbalState has member getAttitudeInDegrees() with description: "The current gimbal attitude in degrees. Roll, pitch and yaw are 0 if the gimbal is level with the aircraft and points in the forward direction of North Pole." - Great!
However, it does not autocomplete in Xcode, nor does it compile.
Other approaches tested:
ANSWER
Answered 2020-Nov-06 at 01:37
You can access the current gimbal pitch through its didUpdate state delegate function.
QUESTION
I am trying to read the EXIF information of images taken by a UAV (more specifically, DJI Mavic 2).
I have successfully gotten some attributes ("GPS Altitude", "GPS Longitude", "GPS Latitude") by using ExifInterface.
However, I realize that ExifInterface is missing some attributes. For example, I can't get "Relative Altitude", "Gimbal Roll Degree", "Gimbal Yaw Degree", "Gimbal Pitch Degree", etc., although I can see these attributes on a PC using some other software (exiftool, for example).
How can I get this EXIF information?
ANSWER
Answered 2020-Oct-27 at 02:21
I finally gave up on ExifInterface and solved this by reading the raw bytes as hex and parsing the hex into EXIF information.
I also found a library here that should work, but I didn't try it as the issue was already solved.
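For context, the DJI-specific tags usually live in the XMP packet embedded in the JPEG rather than in the standard EXIF directories, which is why ExifInterface does not expose them. A minimal Python sketch along the same lines as the raw-bytes approach, assuming the drone writes them as drone-dji:* attributes (which is how exiftool reports them; verify against your own files):

```python
import re

def read_dji_xmp_tags(path):
    """Pull DJI gimbal/altitude tags out of the XMP packet in a JPEG.
    Assumes attributes of the form drone-dji:GimbalPitchDegree="-90.00"."""
    with open(path, "rb") as f:
        data = f.read()
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return {}                              # no XMP packet found
    xmp = data[start:end].decode("utf-8", errors="ignore")
    return dict(re.findall(r'drone-dji:(\w+)="([^"]*)"', xmp))

# e.g. read_dji_xmp_tags("DJI_0001.JPG").get("GimbalPitchDegree")
```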
QUESTION
I am trying to create a custom layer that calculates the forward kinematics for a robotic arm using DH parameters. In my code, I am using the 6 joint angles as the input of the custom layer (Kinematics_Physics) and I am using tensorflow.map_fn to iteratively calculate the forward kinematics of each set of angles in the input. My goal is to set the DH parameters as the trainable weights and train a model to optimize them. I am aware that this can be easily done using libraries like scipy.optimize or other optimization libraries, but I want to implement this in TensorFlow so that I can add dense layers to learn the inverse kinematics of the given custom layer. My code is as follows:
ANSWER
Answered 2020-Oct-26 at 11:54
Coming up with a solution was a bit of a journey.
The problem: map_fn expects a Tensor, not a symbolic representation of a Tensor.
The problem that caused your error is a bit tricky to explain, but basically: when you are creating your Keras model, the batch size is unknown. That is the standard behaviour, so you can feed different batch sizes to your model. However, the map_fn function expects to know the shape of the Tensor you are feeding it in order to create the execution graph, which is not possible given that the batch size of your Input layer is unknown. Using map_fn inside a keras.Layer seems difficult to do.
The good thing is that we can skip the batch dimension by just referring to the part that interests us: inp[..., 0] gives us the first value of the last dimension of the inp Tensor without having to index on the batch size.
The bad thing is that sometimes we do rely on the batch size, especially when we need a copy of the trainable weights for each input in the batch. We can tell TensorFlow to delay those calculations by using tf.shape.
Some general comments about your code and the changes I made:
- You used a lot of tf.Variable as class attributes. Unless you need to keep state between different batches, this is not needed. It makes the code harder to read because you need to use assign instead of =.
- You used the @tf.function decorator on each class method. You need it only if you use operations like for loops that can only be eagerly executed and need to be converted before being executed in the graph.
- You overrode __call__ instead of call. Don't do that! The inherited __call__ function of tf.keras.layers.Layer applies internal TensorFlow logic to provide certain guarantees. Redefine call instead.
- Don't use tf.Variable unless you need to track that variable. They are supposed to represent shared, persistent state your program manipulates. I suggest that you read the Variable guide.
I was a bit liberal with the use of [..., ] indexing.
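To illustrate those points, a minimal, hypothetical sketch of such a layer (the shapes, names, and placeholder math are illustrative, not the original code): trainable weights are created with add_weight instead of tf.Variable attributes, call is overridden rather than __call__, and broadcasting with [..., i] indexing replaces tf.map_fn so the unknown batch size never has to be indexed.

```python
import tensorflow as tf

class KinematicsPhysics(tf.keras.layers.Layer):
    """Sketch only: 6 joint angles in, a trainable (6, 4) DH parameter table,
    and the forward kinematics reduced to a placeholder computation."""

    def build(self, input_shape):
        # Trainable weights via add_weight, not tf.Variable class attributes.
        self.dh_params = self.add_weight(
            name="dh_params", shape=(6, 4),
            initializer="random_normal", trainable=True)

    def call(self, inputs):                 # override call, never __call__
        # inputs: (batch, 6). Broadcasting handles the batch dimension;
        # tf.shape(inputs)[0] is available where the dynamic batch size
        # is genuinely needed.
        theta = inputs + self.dh_params[..., 0]
        # Placeholder for the chain of 4x4 DH transforms, which would be
        # built with batched tf.stack / tf.matmul instead of tf.map_fn.
        return tf.concat([tf.sin(theta), tf.cos(theta)], axis=-1)

# model = tf.keras.Sequential([tf.keras.Input(shape=(6,)), KinematicsPhysics()])
```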
QUESTION
I made a camera that rotates to observe an object.
This is the rotation part, driven by the left mouse button:
ANSWER
Answered 2020-Aug-18 at 05:25
If you wanted to use quaternions to do the same thing as your transform.eulerAngles += new Vector3(-yRotation, xRotation, 0); method, you would have to apply the Y-axis rotation in global axes, and then the X-axis rotation in local axes:
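The Unity snippet isn't reproduced here, but the underlying quaternion rule can be sketched independently of the engine: a delta rotation about a world axis multiplies the current orientation on the world (left) side, while a delta about a local axis multiplies on the local (right) side. A small scipy illustration (the axis choices and angles are arbitrary examples):

```python
from scipy.spatial.transform import Rotation as R

current = R.from_euler("xyz", [30, 45, 0], degrees=True)  # some existing orientation

yaw_world = R.from_euler("y", -10, degrees=True)   # Y-axis rotation in global axes
pitch_local = R.from_euler("x", 5, degrees=True)   # X-axis rotation in local axes

# world-axis delta on the left, local-axis delta on the right
updated = yaw_world * current * pitch_local
print(updated.as_euler("xyz", degrees=True))
```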
QUESTION
I am using the SimpleBGC gimbal controller from Basecam Electronics. The controller has a serial API for communication which requires a CRC16 checksum to be calculated for the commands sent to the controller (https://www.basecamelectronics.com/file/SimpleBGC_2_6_Serial_Protocol_Specification.pdf, page 3).
I want to send the reset command to the controller which has the following format:
Header: {start char: '$', command id: '114', payload size: '3', header checksum : '117'}
Payload: {3,0,0} (3 bytes corresponding to reset options and time to reset)
crc16 checksum : ? (using polynomial 0x8005 calculated for all bytes except start char)
The hex representation of my command is 0x24720375030000, and I need to find the CRC16 checksum for 0x720375030000. I used different CRC calculators, but the controller is not responding to the command, so I assume the checksum is not correct. To find the correct CRC16 checksum I sent every possible combination and found that the controller responds when the checksum is '7b25', so the correct command in hex is "24 720375030000 7b25". But this checksum 7b25 does not correspond to the polynomial 0x8005. How can I find the correct polynomial or CRC16 calculation function?
ANSWER
Answered 2020-Aug-04 at 21:40
Did you try the code in the appendix of the document you linked? It works fine, and produces 0x257b for the CRC of your example data. That is then written in the stream in little-endian order, giving the 7b 25 you are expecting.
Here is a simpler and faster C implementation than what is in the appendix:
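That C code isn't included on this page. As a reference point, here is a Python transcription of the bit-serial algorithm the spec describes (data bits taken LSB-first, 16-bit register shifted left, polynomial 0x8005, zero initial value); this is a port, not the answer's C, but it reproduces 0x257b for the bytes 72 03 75 03 00 00:

```python
def crc16_simplebgc(data: bytes) -> int:
    """Bit-serial CRC-16, polynomial 0x8005, zero init, data bits LSB-first,
    register shifted left (the convention used by the SimpleBGC protocol)."""
    crc = 0
    for byte in data:
        for bit in range(8):
            data_bit = (byte >> bit) & 1      # LSB-first
            crc_bit = (crc >> 15) & 1
            crc = (crc << 1) & 0xFFFF
            if data_bit != crc_bit:
                crc ^= 0x8005
    return crc

msg = bytes.fromhex("720375030000")            # everything after the '$'
crc = crc16_simplebgc(msg)                     # 0x257b
frame = b"$" + msg + crc.to_bytes(2, "little") # -> 24 72 03 75 03 00 00 7b 25
```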
QUESTION
I cannot get my head around quaternions, and have tried many combinations of operators on this (after reading and viewing tutorials). No success; it just shows my lack of understanding. I even tried it with angles and atan2(), but all that really showed me was what gimbal lock was. The problem I am trying to solve is:
I have a component (say A, a cube) with some rotation in the world frame of reference (doesn't actually matter what that rotation is, could be orientated any direction).
Then I have another component (say B, another cube) elsewhere in the world space. Doesn't matter what that component's rotation is, either.
What I want to get to in the end are the (Euler) angles of B relative to A, BUT relative to A's frame of reference, not world space. The reason is I want to add forces to A based on B's direction from A, which I can then do with A.rigidbody.addRelativeForce(x, y, z).
Stuck.
ANSWER
Answered 2020-Jun-10 at 16:43
With thanks for the comments, I think I have it.
For reference: given a vector and a rotation both expressed in the same (world frame) coordinates, a new vector representing the original, but viewed from the local frame defined by that world rotation, can be calculated as:
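The snippet itself isn't shown here; as a sketch of the same calculation with scipy (names are illustrative), the world-space direction from A to B is rotated by the inverse of A's world rotation to express it in A's local frame:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def direction_in_local_frame(q_a_xyzw, pos_a, pos_b):
    """Direction from A to B expressed in A's local frame: apply the
    inverse of A's world rotation to the world-space difference vector."""
    world_dir = np.asarray(pos_b, dtype=float) - np.asarray(pos_a, dtype=float)
    return R.from_quat(q_a_xyzw).inv().apply(world_dir)

# The result can then feed something like addRelativeForce(x, y, z).
```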
QUESTION
I'm trying to reverse-engineer a BLE device (a gimbal). I'm already able to replicate exact commands after sniffing btsnoop_hci.log, and here are a few of them:
ANSWER
Answered 2020-Jun-09 at 22:05
You can XOR pairs of the samples to reduce the number of variables, since this eliminates any initial value or final XOR (as if both were 0), and a search only needs to look for the polynomial and whether it's non-reflected (left shifting) or reflected (right shifting). For CRC16, that's only 64K loops for left shift and 64K loops for right shift. It's possible to get multiple polynomials that appear to work, so more samples are needed to confirm.
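A small Python sketch of that search (the structure and names are illustrative): XOR two equal-length messages and their CRCs to cancel the init and final-XOR terms, then try all 65536 shift constants in both the left-shifting (non-reflected) and right-shifting (reflected) forms, and confirm the survivors against further samples.

```python
def crc16_raw(data: bytes, poly: int, reflected: bool) -> int:
    """CRC-16 with zero init and zero final XOR. For the reflected case,
    'poly' is the constant used directly in the right-shifting form."""
    crc = 0
    for byte in data:
        if reflected:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        else:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def candidate_polynomials(m1: bytes, c1: int, m2: bytes, c2: int):
    """m1 and m2 must be the same length; returns (poly, reflected) pairs
    consistent with the XORed pair of samples."""
    assert len(m1) == len(m2)
    diff = bytes(a ^ b for a, b in zip(m1, m2))
    target = c1 ^ c2
    return [(poly, refl)
            for refl in (False, True)
            for poly in range(0x10000)
            if crc16_raw(diff, poly, refl) == target]
```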
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported