ViewFinder | browser made into a web app
kandi X-RAY | ViewFinder Summary
This is a feature-complete, clientless remote browser isolation (RBI) product, including secure document viewing (CDR), built in HTML/JavaScript, that runs right in your browser. Integrated with a secure document viewer (available on request), it can provide safe remote browser isolation at deployments of any size. It also saves you bandwidth (on the last hop, anyway). With ViewFinder, the only thing sent to your device from the remote page is pixels, so no HTML, CSS, or JavaScript from your browsing is ever executed on your device. You can use this repo to play with a browser running remotely in the cloud, rather than on your own device. Useful for security and automation.
If you're a developer, you can include a "BrowserView" in any other web application (for non-commercial use only). If you'd like to deploy this in your org, or for a for-profit project, write me: cris@dosycorp.com. Or keep an eye out for the cloud service, coming soon.
Official government use is OK without purchase (likewise for university/public institution researchers, journalists, and not-for-profits), as long as deployment is done in-house (or by Dosyago Corporation, not by other contractors, nor as part of a paid deployment). If you're in government and you'd like to deploy this, contact me for help or to discuss a deployment contract.
Trending Discussions on ViewFinder
QUESTION
I need to take a picture, convert the file to an image to crop, and then convert the image back to a file to then run into a tflite model (currently just displaying an image on another screen).
As it stands, I am using a simple camera app (https://flutter.dev/docs/cookbook/plugins/picture-using-camera?source=post_page---------------------------) and stacking a container on the preview screen to use as a viewfinder. I use the rect_getter package to get the container coordinates for the copyCrop() function from the Image package.
Attempting to convert my file to an image (so the copyCrop() function can be run) and then back to a file (cropSaveFile.path) to later be used in a tflite model results in an error: The following FileSystemException was thrown resolving an image codec: [garbled binary output]
...ANSWER
Answered 2021-Sep-03 at 21:42
In the following code
QUESTION
I am creating an application which must implement its own camera. I use the CameraX library provided by Google.
I noticed that there is a difference between the quality of the image captured by my own application and the image captured by the camera application installed on my phone, although the two photos are captured under the same conditions (light, position...). Especially when I zoom in, the details of the image are blurrier in the photo captured by my application.
(In my case, my phone is a Google Pixel 5.)
Please see these 2 photos to see the difference
And this is my code
...ANSWER
Answered 2022-Mar-07 at 11:11
If you took the photo on a Pixel, you probably used the default camera app (GCam) - that app is packed with quality improvements backed by some AI. It is a tough task to compete with the biggest players on quality. Try taking a photo with a third-party app like OpenCamera and compare that picture with the one from your app.
QUESTION
I detected the ArUco marker and estimated the pose. See the image below. However, the Xt (X translation) I get is a positive value. According to the drawAxis function, the positive direction is away from the image center, so I thought it was supposed to be a negative value. Why am I getting a positive value instead?
My camera is about 120 mm away from the imaging surface, but I am getting Zt (Z translation) in the range of 650 mm. Is pose estimation giving the pose of the marker with respect to the physical camera or the image plane center? I don't understand why Zt is so high.
I kept measuring the pose while changing Z, and obtained roll, pitch, and yaw. I noticed that roll (rotation w.r.t. the camera X-axis) changes its sign back and forth (magnitude varying between 166-178), but the sign of Xt did not change with the sign change in roll. Any thoughts on why it behaves like that?
Any suggestions for getting more consistent data?
ANSWER
Answered 2022-Feb-02 at 14:17
Without checking all the code (it looks roughly okay), a few basics about OpenCV and ArUco:
Both use right-handed coordinate systems. Thumb X, index Y, middle Z.
OpenCV uses X right, Y down, Z far, for screen/camera frames. Origin for screens and pictures is the top left corner. For cameras, the origin is the center of the pinhole model, which would be the center of the aperture. I can't comment on lenses or lens systems. Assume the lens center is the origin. That's probably close enough.
Aruco uses X right, Y far, Z up, if the marker is lying flat on a table. Origin is in the center of the marker. The top left corner of the marker is considered the "first" corner.
The marker can be considered to have its own coordinate system/frame.
The pose given by rvec and tvec is the pose of the marker in the camera frame. That means np.linalg.norm(tvec) gives you the direct distance from the camera to the marker's center; tvec's Z is just the component parallel to the optical axis.
If the marker is in the right half of the picture ("half" defined by the camera matrix's cx, cy), you'd expect tvec's X to grow. Lower half, Y positive/growing.
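To make the distance relationship concrete, here is a small numeric sketch (the tvec values are made up for illustration): the Euclidean norm of tvec is the straight-line camera-to-marker distance, while its Z component alone only measures along the optical axis.

```python
import math

# Hypothetical pose translation, in millimetres (made-up values):
# marker 100 mm to the right of the optical axis, 600 mm out along it.
tvec = (100.0, 0.0, 600.0)

# Straight-line distance from the camera origin to the marker centre,
# i.e. what np.linalg.norm(tvec) would return.
distance = math.sqrt(sum(c * c for c in tvec))

# The Z component alone underestimates the true distance
# whenever the marker is off-axis.
print(round(distance, 1))  # 608.3, larger than tvec's Z of 600.0
```

So a Zt around 650 mm for an off-axis marker is not by itself inconsistent; the question is whether your marker side length and camera matrix are right.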
Conversely, that transformation transforms marker-local coordinates to camera-local. Try transforming some marker-local points, such as the origin or points on the axes. I believe that cv::transform can help with that. Using OpenCV's projectPoints to map 3D space points to 2D image points, you can then draw the marker's axes, or a cube on top of it, or anything you like.
Say the marker sits upright and faces the camera dead-on. When you consider the frame triads of the marker and the camera in space ("world" space), both would be X "right", but one's Y and Z are opposite the other's Y and Z, so you'd expect to see a rotation around the X axis by half a turn (rotating Z and Y).
You could imagine the transformation to happen like this:
- initially the camera looks through the marker, from the marker's back out into the world. The camera would be "upside down". The camera sees marker-space.
- the pose's rotation component rotates the whole marker-local world around the camera's origin. Seen from the world frame (point of reference), the camera rotates, into an attitude you'd find natural.
- the pose's translation moves the marker's world out in front of the camera (Z being positive), or equivalently, the camera backs away from the marker.
If you get implausible values, check aruco_marker_side_length and the camera matrix. f would be around 500-3000 for typical resolutions (VGA-4K) and fields of view (60-80 degrees).
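The face-on case described above (marker upright, facing the camera, differing from the camera frame by half a turn about X) can be checked numerically. This is a self-contained sketch with a hand-rolled rotation about X - no OpenCV required - showing that a half-turn maps the marker's Y and Z axes onto the camera's -Y and -Z:

```python
import math

def rotate_x(point, angle):
    """Rotate a 3D point about the X axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (x, y * c - z * s, y * s + z * c)

# Marker-frame axes (X right, Y far, Z up) as unit vectors.
y_axis = (0.0, 1.0, 0.0)
z_axis = (0.0, 0.0, 1.0)

# A half-turn (180 degrees) about X, as in the face-on marker case.
half_turn = math.pi
print([round(v, 6) + 0.0 for v in rotate_x(y_axis, half_turn)])  # [0.0, -1.0, 0.0]
print([round(v, 6) + 0.0 for v in rotate_x(z_axis, half_turn)])  # [0.0, 0.0, -1.0]
```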
QUESTION
I really need your help solving my screenshot problem. When I tap one of the cells in the ForEach loop, a ScreenShotButton() pops up, and when I tap the screenshot icon, I want to take a screenshot of that specific cell (not the entire screen, only that cell). The problem with my code is that the screenshot doesn't work. I've been testing several approaches and am out of options except posting my question here. Thanks in advance.
...ANSWER
Answered 2021-Nov-11 at 03:01
To make the screenshot of a cell "work", I had to replace EnvironmentObject with ObservedObject in PrintableCells. I also added a few bits of code to show it is working. Here is the code that works for me:
QUESTION
Trying to migrate a barcode scanner to Jetpack Compose and update the camera and ML Kit dependencies to the latest versions.
The current code shows the camera view correctly, but it is not scanning barcodes. The ImageAnalysis analyzer runs only once.
Code
...ANSWER
Answered 2021-Oct-18 at 15:34
Thanks to Adrian's comment, it worked after the following changes in BarcodeAnalyser:
- Removed imageProxy.close() from addOnSuccessListener and addOnFailureListener; added it to addOnCompleteListener.
- Added imageProxy.close() in the else condition as well.
QUESTION
I tried using schedule. It works fine, but the viewfinder of the webcam is stuck at its initial state, so it produces the same image multiple times.
Any help?
...ANSWER
Answered 2021-Aug-30 at 09:14
You are suffering from buffering. OpenCV's VideoCapture() reads a few frames into a buffer - I think it is 5, but I have not checked for a while and it may differ between platforms or versions.
There are a few possible work-arounds depending on your situation:
- call read() 4-5 times when you want a frame - it will only take a couple of hundred milliseconds
- call grab() either repeatedly in another thread, or just before you want a frame
- reduce the size of the buffer so it can only hold a single frame.
Sorry for the somewhat woolly answer, as I am not set up to test more at the moment.
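The first two work-arounds can be wrapped in a small helper. This is only a sketch against the cv2.VideoCapture interface (grab() advances to the next frame without decoding it; read() decodes and returns it); the flush count of 4 is an assumption you may need to tune:

```python
def read_fresh_frame(cap, flush=4):
    """Discard `flush` buffered frames, then return the next decoded frame.

    `cap` is anything with the cv2.VideoCapture interface:
    grab() advances past a frame cheaply, without decoding it;
    read() decodes and returns (ok, frame).
    """
    for _ in range(flush):
        cap.grab()      # skip a stale buffered frame
    return cap.read()   # decode the now-current frame


# Usage sketch (requires OpenCV):
# import cv2
# cap = cv2.VideoCapture(0)
# ok, frame = read_fresh_frame(cap)
```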
QUESTION
ANSWER
Answered 2021-Aug-19 at 14:17
The difficulty you are having is getting a good mapping from the image in the ImageProxy to what is displayed by the PreviewView. Although this sounds easy, I don't believe there is a straightforward way to do this mapping. See the answer to a similar question. I took a look at implementing each of the suggestions in that answer and, although they worked in some situations, they failed in others. Of course, I could have taken the wrong approach.
I have come to the conclusion that extracting and analyzing a bitmap extracted from the preview area and identifying those words that are completely enclosed by the red rectangle is the simplest. I circumscribe those words with their own red rectangle to show that they have been correctly identified.
The following is the reworked activity, a graphic overlay that produces the word boxes, and the XML for the display. Comments are in the code. Good luck!
TestPhotoscan.kt
QUESTION
Hi everyone, I'm doing some little exercises in canvas with sin and cos. I can now make a point rotate around another one, with the angle incrementation in the animate function - it works, as in the code below. My goal is to move the angle incrementation into the update function of the viewFinder class.
...ANSWER
Answered 2021-Aug-11 at 15:08
The issue is: in the first example you are updating the global variable angle, but in the second you are updating this.angle.
In the first case, you are recreating the viewFinder object each frame, and since the value of angle has changed, the values of both dx and dy are re-calculated for the new object.
However, in the second case you are also recreating the viewFinder object, which uses the global variable angle to set the new angle, dx, and dy - but you are no longer updating that angle, which makes the element look static.
To fix this:
- don't recreate the object each frame; create it once and update it every frame.
- update the values of dx and dy after updating the value of this.angle.
Your code will look something like:
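The original answer's JavaScript snippet was not captured on this page, but the fix is language-agnostic: one long-lived object whose update() advances the angle and then recomputes dx and dy. Here is the same pattern sketched in Python (class and parameter names are illustrative):

```python
import math

class ViewFinder:
    """A point orbiting a centre; mirrors the canvas example's structure."""

    def __init__(self, cx, cy, radius):
        self.cx, self.cy, self.radius = cx, cy, radius
        self.angle = 0.0
        self.dx = self.dy = 0.0

    def update(self, step=0.05):
        # Increment the angle first...
        self.angle += step
        # ...then recompute dx/dy from the *new* angle.
        self.dx = self.cx + self.radius * math.cos(self.angle)
        self.dy = self.cy + self.radius * math.sin(self.angle)

# Create the object once, outside the animation loop...
vf = ViewFinder(cx=100, cy=100, radius=50)

# ...and only call update() each frame (three simulated frames here).
for _ in range(3):
    vf.update()
```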
QUESTION
I have a QML-based app where I need to capture images from the camera in order to do QR-code recognition/decoding (using qzxing).
Following the example for the CameraCapture QML class, I can capture images; however, they will always be saved to a local file. I don't want to save to a file, since I don't want to stress the underlying storage (flash memory, SD card) just to do QR code recognition. Thus, I want to grab camera images in-memory. Googling suggests this is impossible with QML only.
So I was looking for a C++ based solution, however I cannot seem to come up with a working solution.
Currently I have code which tries to capture the image in C++ while providing a visible preview of the camera for the user in QML:
QML
...ANSWER
Answered 2021-Jun-28 at 07:57
You can access the QCamera using the mediaObject property of the Camera item, then use a QCameraImageCapture with the QCameraImageCapture::CaptureToBuffer mode set, and use the imageCaptured signal to get the QImage.
QUESTION
In CameraX analysis I set setTargetResolution(new Size(2560, 800)), but in the Analyzer imageProxy.getImage().getWidth() = 1280 and getHeight() = 400, and YUVToByte(imageProxy.getImage()).length() = 768000. With the old Camera API, parameters.setPreviewSize(2560, 800) gives a byte[].length in onPreviewFrame of 3072000 (which equals 768000 * (2560/1280) * (800/400)). How can I make the CameraX Analyzer's imageProxy.getImage() return width 2560 and height 800, with YUVToByte(imageProxy.getImage()).length() = 3072000? Also, with CameraX res is always null, while in Camera's onPreviewFrame() res gets the correct value. What is the difference between CameraX and Camera, and what should I do in CameraX?
CameraX:
...ANSWER
Answered 2021-Jun-13 at 01:15
With regards to the image analysis resolution, the documentation of ImageAnalysis.Builder.setTargetResolution() states that:
The maximum available resolution that could be selected for an ImageAnalysis is limited to be under 1080p.
So setting a size of 2560x800 won't work as you expect. Instead, CameraX seems to be selecting the maximum ImageAnalysis resolution that has the same aspect ratio you requested (2560/800 = 1280/400).
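The selection behaviour described above can be sketched as a toy helper. This is not CameraX code - just an illustration of capping a requested size under 1080p while preserving the requested aspect ratio; the real selection logic picks from the sizes the camera actually supports:

```python
def cap_analysis_size(width, height, max_height=1080):
    """Halve (width, height) until it fits under the ImageAnalysis limit
    (1080p, i.e. at most 1920x1080), preserving the aspect ratio.

    Halving mirrors the 2560x800 -> 1280x400 behaviour observed in the
    question; it is an illustrative assumption, not the documented algorithm.
    """
    max_width = max_height * 16 // 9  # 1920 for 1080p
    while height > max_height or width > max_width:
        width, height = width // 2, height // 2
    return width, height

print(cap_analysis_size(2560, 800))  # (1280, 400), as seen in the Analyzer
```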
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install ViewFinder
Get it on docker hub, and see instructions below.
First set up the machine with git and node (including nvm and npm) using the steps below. If you want to speed up the install and it hangs on "processing triggers for man-db", you can remove all your man pages (WARNING) with: sudo apt-get remove -y --purge man-db. Alternatively, somebody reported they had luck passing --force to the apt command that seems to hang.