Android-Camera | Android Camera, OpenGL, Graphics | Graphics library
kandi X-RAY | Android-Camera Summary
Android Camera, OpenGL, Graphics
Top functions reviewed by kandi - BETA
- Initializes the camera view
- Called when the surface is changed
- Initializes the surface buffer
- Initializes the surface texture
- Creates the camera view
- Initializes the camera
- Creates a program from the specified vertex and fragment shaders
- Compiles the given shader source
- Initializes the surface
- Called when the camera is changed
- Writes GL version information to the log
- Prepares the audio record
- Starts the encoder thread
- Handles a message
- Compiles and links a shader program
- Reads text
- Returns a suitable EGLConfig
- Writes the current EGL context to the log
- Creates the example dialog
- Saves the current frame to a file
- Determines the optimal camera size to fit the given view
- Prepares the encoder
- Gets the display orientation of the view
- Creates a texture from raw data
- Creates an OES texture
- Called when the surface is destroyed
Android-Camera Key Features
Android-Camera Examples and Code Snippets
Community Discussions
Trending Discussions on Android-Camera
QUESTION
I'm implementing biometric facial authentication in my Xamarin.Forms app (Android and iOS). To capture the user's face I am basing my work on this example, which implements a custom renderer (Android): https://github.com/UNIT-23/Xam-Android-Camera2-Sample. In that example the image must be touched to capture it; I would like to capture the image every 3 seconds without the user having to touch anything. As a second option I would like to capture the photo with a button. I don't really know how to implement either, so any help or example suggestions would be greatly appreciated.
Thanks in advance for your help and your time!
ANSWER
Answered 2022-Feb-19 at 17:14

Alternatively, for Xamarin.Forms, you have access to a cross-platform Timer via the Device class:
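(The answer's timer snippet was not captured above. As a rough sketch of the same every-3-seconds idea on the native Android side, in Kotlin to match the rest of this page: a Handler-based repeating task; capturePhoto() is a hypothetical stand-in for whatever capture call the custom renderer exposes.)

import android.os.Handler
import android.os.Looper

// Hypothetical sketch: trigger a capture every 3 seconds without user input.
// capturePhoto() is a placeholder for the renderer's real capture call.
private val handler = Handler(Looper.getMainLooper())
private val captureEvery3Seconds = object : Runnable {
    override fun run() {
        capturePhoto()
        handler.postDelayed(this, 3_000L)   // re-schedule in 3 seconds
    }
}

fun startAutoCapture() { handler.post(captureEvery3Seconds) }
fun stopAutoCapture() { handler.removeCallbacks(captureEvery3Seconds) }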
QUESTION
The deprecated createCaptureSession() method is used in old code in the following way:
ANSWER
Answered 2021-Apr-14 at 00:49

There's no reason you can't keep using the deprecated version of createCaptureSession - it works just as well as it did before. You'll just have to ignore the Android Studio deprecation warnings.
Basically, over time, we kept having to add more overloads of createCaptureSession with more and more variations of parameters and optional parameters, and it was getting excessive.
So we created SessionConfiguration as a configuration object that's more flexible over time (easier to add new parameters to it), set up one more createCaptureSession that accepts that, and deprecated all the prior versions to direct folks to the one we'd add today if we were designing the API again.
If you want to use the newest option, then you can take a look at OutputConfiguration (it just wraps the list of output Surfaces plus some other optional settings) - you can build one with just the Surface you put in the Arrays.asList() call in your sample code.
But you can just keep using what you have - we won't actually break the old methods.
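For reference, a minimal Kotlin sketch of the non-deprecated overload (API 28+); cameraDevice, previewSurface, and context are assumed to exist from your camera setup:

import android.content.Context
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.params.OutputConfiguration
import android.hardware.camera2.params.SessionConfiguration
import android.view.Surface
import androidx.core.content.ContextCompat

fun createSession(cameraDevice: CameraDevice, previewSurface: Surface, context: Context) {
    // Wrap each output Surface in an OutputConfiguration...
    val outputs = listOf(OutputConfiguration(previewSurface))
    // ...bundle them into a SessionConfiguration...
    val sessionConfig = SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        outputs,
        ContextCompat.getMainExecutor(context),   // executor for the callbacks
        object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) { /* start requests */ }
            override fun onConfigureFailed(session: CameraCaptureSession) { /* handle failure */ }
        }
    )
    // ...and pass it to the single non-deprecated createCaptureSession.
    cameraDevice.createCaptureSession(sessionConfig)
}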
QUESTION
I used this tutorial to learn and try to understand how to make a simple picture-taking Android app using the Camera2 API. I have added some snippets from the code to see if you all can help me understand some questions I have.
I am trying to find out how the image is saved. Is it RGB or BGR? Is it stored in the variable bytes?
ANSWER
Answered 2020-Dec-16 at 10:46

The image is received in JPEG format (as specified in the first line). Android uses the YUV (more exactly, YCbCr) color space for JPEG. JPEG size is variable; it is compressed with lossy compression, and you have very little control over the level of compression.
Normally, you receive a JPEG buffer in onImageAvailable() and decode this JPEG to receive a Bitmap. You can get pixels of this Bitmap as an int array of packed SRGB pixels. The format for this array will be ARGB_8888. You don't need JNI to convert it to BGR, see this answer.
You can access Bitmap objects from C++, see ndk/reference/group/bitmap. There you can find the pixel format of this bitmap. If it was decoded from JPEG, you should expect this to be ANDROID_BITMAP_FORMAT_RGBA_8888.
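A short Kotlin sketch of that flow, assuming reader is an ImageReader configured with ImageFormat.JPEG and backgroundHandler is your camera handler:

import android.graphics.BitmapFactory
import android.media.ImageReader
import android.os.Handler

fun attachJpegListener(reader: ImageReader, backgroundHandler: Handler) {
    reader.setOnImageAvailableListener({ r ->
        val image = r.acquireLatestImage() ?: return@setOnImageAvailableListener
        try {
            // The JPEG bytes arrive in a single plane.
            val buffer = image.planes[0].buffer
            val bytes = ByteArray(buffer.remaining()).also { buffer.get(it) }
            // Decoding yields an ARGB_8888 Bitmap: packed ints of 0xAARRGGBB.
            val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
            val pixels = IntArray(bitmap.width * bitmap.height)
            bitmap.getPixels(pixels, 0, bitmap.width, 0, 0, bitmap.width, bitmap.height)
        } finally {
            image.close()   // always release the Image back to the reader
        }
    }, backgroundHandler)
}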
QUESTION
If I use the camera2 API to capture an image, I get the "final" image after image processing: after noise reduction, color correction, some vendor algorithms, etc. I should also be able to get the raw camera image following this. The question is: can I get the intermediate stages of the image as well? For example, say the raw image is stage 0, noise reduction is stage 1, color correction is stage 2, and so on. I would like to get all of those stages and present them to the user in an app.
ANSWER
Answered 2020-Oct-19 at 19:29

In general, no. The actual hardware processing pipelines vary a great deal between different chip manufacturers and chip versions even from the same manufacturer. Plus each Android device maker then adds their own software on top of that.
And often, it's not possible to dump outputs from every step of the process, only some of them.
So making a consistent API for fetching this isn't very feasible, and the camera2 API doesn't have support for it.
You can somewhat simulate it by turning things like noise reduction entirely off (if supported by the device) and capturing multiple images, but that of course isn't as good as multiple versions of a single capture.
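A Kotlin sketch of that workaround: check what the device advertises, then disable noise reduction on the request builder (characteristics and builder are assumed to come from your existing camera2 setup):

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraMetadata
import android.hardware.camera2.CaptureRequest

fun disableNoiseReduction(
    characteristics: CameraCharacteristics,
    builder: CaptureRequest.Builder
) {
    // Only turn noise reduction off if the device says it supports OFF.
    val modes = characteristics.get(
        CameraCharacteristics.NOISE_REDUCTION_AVAILABLE_NOISE_REDUCTION_MODES)
    if (modes?.contains(CameraMetadata.NOISE_REDUCTION_MODE_OFF) == true) {
        builder.set(CaptureRequest.NOISE_REDUCTION_MODE,
            CameraMetadata.NOISE_REDUCTION_MODE_OFF)
    }
}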
QUESTION
The Camera2 API allows us to specify the thread (by passing a Handler instance) on which we receive the callbacks containing CameraDevice, CameraCaptureSession, CaptureResult, etc. We use the information from these callbacks to configure the capture session, create capture requests, and obtain capture results. However, when the user controls the camera configuration (e.g. focusing, metering) through the UI, they do so from the main thread. Here we, developers, have two options:
- Use the Camera2 API calls (e.g. CameraCaptureSession.capture) "directly" from any thread (including the main thread). Here we need to manage the session state and synchronize access to the Camera2 API.
- Move all Camera2 API calls to the "CameraThread". Send a message to the "CameraThread" using a Handler whenever we need access to the Camera2 API, so we actually use it only from a single thread ("CameraThread").
Please let me clarify what I mean. Suppose that we created a HandlerThread for Camera2 API callbacks.
ANSWER
Answered 2020-Sep-22 at 22:35

They're both viable. "Better" depends on a bunch of factors such as the size of the codebase, and how many different places in the code will be wanting to use the session and device.
There's some minor overhead in sending the callback to the camera handler thread, plus more boilerplate to write, so for smaller apps, just making calls from whatever thread you're in and synchronizing appropriately works fine.
However, as your app's complexity grows, it starts becoming attractive to keep all interaction with the camera API to a single thread; not just because you don't have to synchronize explicitly, but because it's easier to reason about ownership, the state of the system, and so on, if every interaction with the camera object happens on the same thread. Also, since some of the camera API methods can block for extended time periods, you really don't want to freeze your UI for that long anyway. So sending the calls to another thread is valuable.
So it's a tradeoff: some extra boilerplate and minor overhead versus the ability to centralize camera code in one place, for simplicity and smoothness.
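A minimal Kotlin sketch of the single-thread option; cameraManager, cameraId, deviceStateCallback, session, and focusRequest are assumed from an existing camera2 setup:

import android.os.Handler
import android.os.HandlerThread

// One dedicated thread owns all Camera2 interaction.
val cameraThread = HandlerThread("CameraThread").apply { start() }
val cameraHandler = Handler(cameraThread.looper)

// Route every Camera2 callback to it, e.g. when opening the device:
cameraManager.openCamera(cameraId, deviceStateCallback, cameraHandler)

// UI events post work to the camera thread instead of touching the API directly:
fun triggerFocus() {
    cameraHandler.post {
        session.capture(focusRequest, null, cameraHandler)
    }
}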
QUESTION
Is it possible to create a custom renderer in Uno? I need to create a native Android view and "embed" it in a page or UserControl.
I didn't find any documentation about it.
We need to do the same as in this example: https://github.com/UNIT-23/Xam-Android-Camera2-Sample
Thank you!
ANSWER
Answered 2020-Aug-11 at 13:00

There are no renderers in Uno: simply add your native control in the XAML and it should work unchanged.
Example
There's a NativeView test in Uno you can check here, which is used here in XAML.
You can even set native properties directly in XAML, but bindings won't work on those.
If you need bindings, you can create a DependencyProperty and use the callback to set the native value.
QUESTION
I'm trying to follow code samples I've found on the web (Gabriel Tanner, Ray Wenderlich, Official Introduction), but I usually get stymied on the very first line:
ANSWER
Answered 2020-Aug-04 at 15:27

CameraX has gone through some changes since it was first introduced last year. This is normal since it was still in alpha; the API surface was changing a bit throughout the alpha versions, but since it went into beta, its public API has become more stable.
Going back to your question: binding and unbinding use cases in CameraX is no longer done through the CameraX class. Instead, it is now done using ProcessCameraProvider.bindToLifecycle() to bind one or more use cases to a lifecycle, ProcessCameraProvider.unbind() to unbind one or more use cases, and ProcessCameraProvider.unbindAll() to unbind all bound use cases.
The tutorials you're using as reference are outdated; even the video from last year's Google I/O is outdated, since the code snippets in it reference CameraX's first alpha version. However, the codelab you mentioned is almost up to date; it's the official CameraX codelab maintained by Google. You can also take a look at the official documentation of CameraX; it's more up to date than the tutorials you referenced.
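A Kotlin sketch of the current binding flow, assuming a PreviewView named previewView and a LifecycleOwner such as the hosting activity (note that PreviewView.surfaceProvider only appeared in later betas):

import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner

fun bindPreview(context: Context, lifecycleOwner: LifecycleOwner, previewView: PreviewView) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }
        cameraProvider.unbindAll()   // drop any previous bindings
        cameraProvider.bindToLifecycle(
            lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview)
    }, ContextCompat.getMainExecutor(context))
}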
QUESTION
There are a couple of samples for getting the relative rotation from the camera sensor to the current device orientation, e.g. for correcting the camera preview or for MediaRecorder.setOrientationHint(int).
But the newest method from the latest official GitHub camera samples works differently than the older methods (the one for the deprecated Camera API and the one from the archived camera2 sample).
Here's my sample code that uses all the methods and logs all results; we can see that the function computeRelativeRotationCamera2New returns a different result for the Surface.ROTATION_90 and Surface.ROTATION_270 display rotations.
So what is the correct method to do it?
ANSWER
Answered 2020-Jul-31 at 07:19

I tested recorded videos with the different methods and found that the newest method from the latest sample is incorrect for landscape video recording: it plays the videos upside down.
I decided to update the old method that uses the deprecated Camera API to use the camera2 API (works on API 21+) and use it in my project.
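For reference, a Kotlin sketch of the classic formula such methods are based on (sensor orientation combined with display rotation, mirrored for the front camera):

import android.hardware.camera2.CameraCharacteristics
import android.view.Surface

fun computeRelativeRotation(
    characteristics: CameraCharacteristics,
    displayRotation: Int          // one of Surface.ROTATION_0/90/180/270
): Int {
    val sensorOrientation =
        characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION) ?: 0
    val degrees = when (displayRotation) {
        Surface.ROTATION_90 -> 90
        Surface.ROTATION_180 -> 180
        Surface.ROTATION_270 -> 270
        else -> 0
    }
    val frontFacing = characteristics.get(CameraCharacteristics.LENS_FACING) ==
        CameraCharacteristics.LENS_FACING_FRONT
    return if (frontFacing) {
        (sensorOrientation + degrees) % 360            // mirror-compensated
    } else {
        (sensorOrientation - degrees + 360) % 360
    }
}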
QUESTION
I am using the Camera2 API to record videos, using this project as a reference. I managed to change my TextureView to full screen, but the video I am saving is still not full screen. How can I make the saved video full screen as well?
You can see that when I play the video, its dimensions are the same as my preview's. Please help: how can I save the video with the same perspective as my preview?
ANSWER
Answered 2020-Jun-22 at 13:59

Try with this method:
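(The body of that method was not captured here. One common approach, not necessarily the answer's exact method, is to record at the supported size whose aspect ratio is closest to the full-screen view; a Kotlin sketch:)

import android.util.Size
import kotlin.math.abs

// Pick the supported recorder size whose aspect ratio best matches the view.
// sizes would typically come from
// StreamConfigurationMap.getOutputSizes(MediaRecorder::class.java).
fun chooseVideoSize(sizes: Array<Size>, viewWidth: Int, viewHeight: Int): Size {
    val target = viewWidth.toFloat() / viewHeight
    return sizes.minByOrNull { abs(it.width.toFloat() / it.height - target) }
        ?: sizes.first()
}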
QUESTION
As I understand from many implementations such as:
https://github.com/android/camera-samples/tree/master/CameraXBasic
https://proandroiddev.com/android-camerax-preview-analyze-capture-1b3f403a9395
After every use-case change in a CameraX implementation, the cameraProvider.bindToLifecycle() method needs to be called.
For example, if I need to switch the camera's FLASH_MODE on from the default OFF mode, then bindToLifecycle() needs to be called again.
The disadvantage of this approach is that for a second or two the preview is removed and re-attached, which doesn't feel like a smooth transition in an app.
Is there a better practice available, or is this a limitation?
I Have attached a sample code below:
ANSWER
Answered 2020-Jun-05 at 17:59

To enable or disable the flash during an image capture after you've created an ImageCapture instance and bound it to a lifecycle, you can use ImageCapture.setFlashMode(int).
Regarding your question about the difference between setting the flash mode before vs. after binding the ImageCapture use case, AFAIK there isn't much of a difference. When you take a picture by calling ImageCapture.takePicture(), a capture request is built using different configuration parameters, one of which is the flash mode. So as long as the flash mode is set before the ImageCapture.takePicture() call, the output of the capture request should be the same.
CameraX currently uses Camera2 under the hood, to better understand how the flash mode is set when taking a picture, you can take a look at CaptureRequest.FLASH_MODE.
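A Kotlin sketch of that, assuming imageCapture is the already-bound use case and outputOptions is an ImageCapture.OutputFileOptions you've built:

import android.content.Context
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.core.content.ContextCompat

fun takePictureWithFlash(
    imageCapture: ImageCapture,
    outputOptions: ImageCapture.OutputFileOptions,
    context: Context
) {
    // Toggle the flash on the already-bound use case; no rebinding needed.
    imageCapture.flashMode = ImageCapture.FLASH_MODE_ON   // or FLASH_MODE_OFF / FLASH_MODE_AUTO

    // The flash mode in effect at takePicture() time is what ends up in the request.
    imageCapture.takePicture(
        outputOptions,
        ContextCompat.getMainExecutor(context),
        object : ImageCapture.OnImageSavedCallback {
            override fun onImageSaved(results: ImageCapture.OutputFileResults) { /* saved */ }
            override fun onError(exception: ImageCaptureException) { /* handle error */ }
        }
    )
}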
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install Android-Camera
You can use Android-Camera like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the Android-Camera component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, refer to maven.apache.org; for Gradle installation, refer to gradle.org.
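For example, with the Gradle Kotlin DSL (the coordinates below are hypothetical; substitute the project's real group, artifact, and version):

// build.gradle.kts -- hypothetical coordinates, shown only to illustrate the shape.
dependencies {
    implementation("com.example:android-camera:1.0.0")
}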