kandi X-RAY | Camera2 Summary
The Google Camera2 source code, migrated from the Android Open Source Project.
Top functions reviewed by kandi - BETA
- Initializes the camera activity
- Resumes the camera activity
- Sets defaults
- Prepares the shared views for the module
- Main entry point
- Decompresses the specified JPEG data
- Gets the minimal tag values for an Exif object
- Creates an Exif method
- Implements the callback
- Opens the camera
- Applies a blur to the image
- Starts the peek animation
- Resumes the capture module
- Starts the capture animation
- Pauses the camera
- Initializes the camera controller
- Calculates the child
- Pauses the camera activity
- Initializes this camera
- Registers the event handlers
- ImageCapture implementation
- Creates one camera
- Captures a picture
- Creates a single camera
- Called when captureCompleted is invoked
- Registers event handlers
Camera2 Key Features
Camera2 Examples and Code Snippets
Trending Discussions on Camera2
After updating from androidx.camera:camera-view:1.1.0-beta01, I receive the following error when using
Answered 2022-Feb-17 at 12:57
It fails because you have different versions for the various androidx.camera.* libraries.
If you check the CameraX release notes, they have the following description:
From 1.1.0-beta01, all CameraX libraries will align on the same version number. This will help developers track versions much more easily and reduce the complexity of the large version-compatibility matrix.
So you need to use the same version for ALL camerax libraries.
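The alignment described above can be enforced by declaring one shared version variable; a minimal `build.gradle.kts` sketch (the artifact list and version are illustrative — use whichever CameraX modules your app actually depends on):

```kotlin
// build.gradle.kts — keep every androidx.camera.* artifact on the same version.
// From 1.1.0-beta01 onward, all CameraX libraries share one version number.
val cameraxVersion = "1.1.0-beta01"

dependencies {
    implementation("androidx.camera:camera-core:$cameraxVersion")
    implementation("androidx.camera:camera-camera2:$cameraxVersion")
    implementation("androidx.camera:camera-lifecycle:$cameraxVersion")
    implementation("androidx.camera:camera-view:$cameraxVersion")
}
```

With a single variable, a version bump touches one line and the libraries can never drift apart.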
I am writing an app using the Camera2 API, which should show a preview from the camera and take a picture. Currently my code works as follows:
- When the Camera Fragment is instantiated, wait for
TextureView.SurfaceTextureListener.onSurfaceTextureAvailable to be called
- In the ViewModel, get the available and suitable picture and preview sizes from CameraCharacteristics, and emit the found preview size to the Fragment with LiveData
- The Fragment observes the preview size LiveData and calls
setDefaultBufferSize with the new size for its SurfaceTexture
- When the new size is set, the capture session is created and a repeating preview request is set, so the
TextureView starts to show the image from the camera
- To avoid disrupting other camera apps' work, all camera-related things are cleared after the Fragment's
onPause, and steps 1-4 are followed again after onResume
The Surface instance is shared between the Fragment and the camera logic classes: the shared variable is initialized with it in
TextureView.SurfaceTextureListener.onSurfaceTextureAvailable and is set to null when the view is destroyed.
This works fine on some devices of popular brands with modern Android versions, but the app should also work on a particular generic Chinese tablet with Android 6 ("CameraManager: Using legacy camera HAL"), and there I face a problem.
- When the camera is instantiated and the preview is started, I see that the preview size is 640x480 (so the image is stretched), even though the size passed to setDefaultBufferSize was 1280x720
- Logcat is also full of continuous errors
- I've found on SO that on some Samsung devices with Android 5 some resolutions may not really be available for Camera2, but here, when I close the app and open it again, the preview resolution is 1280x720 as needed
- So my guess is that I may call
setDefaultBufferSize too early on the first Camera Fragment setup, and only when the view is recreated after the app was minimized is the needed resolution "picked up"
- I also tried to call
setDefaultBufferSize in a lambda passed to
TextureView.post, and it solved the problem except for the case when I have to ask for the user's permissions on the Camera Fragment (i.e. when the user opens the camera for the first time), so the Fragment is paused a few times to show permission pop-ups. However, without TextureView.post,
setDefaultBufferSize is also called on the main thread, so I guess that the delay caused by
TextureView.post was the game changer here
- Also, in the
setDefaultBufferSize docs I see: "The new default buffer size will take effect the next time the image producer requests a buffer to fill. For Canvas this will be the next time Surface.lockCanvas is called. For OpenGL ES, the EGLSurface should be destroyed (via eglDestroySurface), made not-current (via eglMakeCurrent), and then recreated (via eglCreateWindowSurface) to ensure that the new default size has taken effect." It seems to me that this may be related to my case
Answered 2022-Feb-01 at 10:24
Solved this by overriding
SurfaceTextureListener and calling
surfaceTexture.setDefaultBufferSize there with the desired preview size. When the default buffer size is overridden with an incorrect size (during initialization), that listener method is called, and I override the size again.
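One way that approach might look in code — a minimal sketch, assuming `PREVIEW_WIDTH`/`PREVIEW_HEIGHT` hold the chosen preview size (1280x720 in the question) and that the camera is opened from `onSurfaceTextureAvailable`:

```kotlin
// Sketch: re-apply the desired default buffer size whenever the
// SurfaceTexture reports a size change, so a wrong initial size
// (e.g. 640x480 forced by the legacy HAL) gets corrected.
textureView.surfaceTextureListener = object : TextureView.SurfaceTextureListener {

    override fun onSurfaceTextureAvailable(texture: SurfaceTexture, width: Int, height: Int) {
        texture.setDefaultBufferSize(PREVIEW_WIDTH, PREVIEW_HEIGHT)
        // ...open the camera and create the capture session here...
    }

    override fun onSurfaceTextureSizeChanged(texture: SurfaceTexture, width: Int, height: Int) {
        // Called when the buffer size was overridden with an unwanted size:
        // force the desired size again.
        texture.setDefaultBufferSize(PREVIEW_WIDTH, PREVIEW_HEIGHT)
    }

    override fun onSurfaceTextureDestroyed(texture: SurfaceTexture): Boolean = true

    override fun onSurfaceTextureUpdated(texture: SurfaceTexture) = Unit
}
```

This relies on the framework invoking `onSurfaceTextureSizeChanged` after the producer picks a different size, which matches the behavior the answer describes.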
I'm receiving the below error on API 31 devices with the Firebase Auth UI library (only the phone-number credential),...
Answered 2022-Jan-20 at 05:58
In my case, Firebase UI (com.firebaseui:firebase-ui-auth:8.0.0) was using com.google.android.gms:play-services-auth:19.0.0, which I found with the command './gradlew -q app:dependencyInsight --dependency play-services-auth --configuration debugCompileClasspath'
This version of play-services-auth was causing the issue for me.
I added a separate play-services-auth dependency
to my Gradle file and this issue disappeared.
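The fix sketched above amounts to declaring a newer play-services-auth next to Firebase UI so Gradle's conflict resolution picks it over the transitive 19.0.0; a `build.gradle.kts` sketch (the 20.0.1 version below is illustrative — check the current stable release):

```kotlin
// build.gradle.kts — override the transitive play-services-auth that
// firebase-ui-auth 8.0.0 pulls in (19.0.0) with a newer release.
dependencies {
    implementation("com.firebaseui:firebase-ui-auth:8.0.0")
    implementation("com.google.android.gms:play-services-auth:20.0.1")
}
```

Re-running the `dependencyInsight` command afterwards should confirm which version ends up on the classpath.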
My app process starts a foreground service when it receives
That service is for video recording (dashboard camera app) which uses camera and microphone.
To start a camera from a foreground service on Android 11+, one of a set of conditions must be met (see "Exemptions from the restrictions on starting a camera/microphone from a foreground service"): https://developer.android.com/guide/components/foreground-services#bg-access-restriction-exemptions
The only exception I can use is:
The service is started by an app that has the START_ACTIVITIES_FROM_BACKGROUND privileged permission.
This one doesn't require any interaction from the user. The user just wants video recording to start automatically when he starts driving his car; he doesn't want to manually make the app visible or start recording from a notification, and he doesn't care about any of this, which is OK. It's 2021 and such things can be done automatically. Thanks, Google, for such restrictions in 2021; yes, we should care about the safety of a user's private personal data, but they could just have added a
BACKGROUND_RECORD_AUDIO permission with manual review on the Google Play Console, like they did with
If I understood correctly, to get the
START_ACTIVITIES_FROM_BACKGROUND permission I must have one of these conditions met (see "Exemptions from the restrictions on starting activities from the background"): https://developer.android.com/guide/components/activities/background-starts#exceptions
So I chose the
SYSTEM_ALERT_WINDOW permission, because if it was granted by the user before, then no interaction from the user is required and the app can start an activity from the background. This one can be granted with
So do I have the
START_ACTIVITIES_FROM_BACKGROUND permission granted now or not? I don't understand...
If yes, then I can use the camera in a foreground service which was started from some background action.
Answered 2021-Dec-19 at 15:17
So do I have the START_ACTIVITIES_FROM_BACKGROUND permission granted now or not?
No. That permission is defined in the framework as:
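As an aside to the SYSTEM_ALERT_WINDOW route mentioned in the question: whether that overlay permission itself is granted can be checked at runtime, and the user can be sent to the system screen that grants it. A minimal sketch:

```kotlin
// Check whether SYSTEM_ALERT_WINDOW ("display over other apps") is
// granted, and route the user to the system settings screen if not.
fun ensureOverlayPermission(activity: Activity) {
    if (!Settings.canDrawOverlays(activity)) {
        val intent = Intent(
            Settings.ACTION_MANAGE_OVERLAY_PERMISSION,
            Uri.parse("package:${activity.packageName}")
        )
        activity.startActivity(intent)
    }
}
```

Note this only tells you about the overlay permission; as the answer says, it does not mean the app holds the privileged START_ACTIVITIES_FROM_BACKGROUND permission.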
I've been testing a migration to Android's new CameraX library and checking the EXIF data with exiftool.
Some of the attributes are displayed, but the number of values seems truncated, and it displays an error:
Answered 2021-Oct-07 at 14:48
The error message is correct: tag
0x9000 "ExifVersion" may only use the type
UNDEFINED, as per the EXIF standard.
But any consumer can still support other datatypes, such as
ASCII. If exiftool does not report an error, then it is most likely for your convenience, while at the same time you're unaware that a violation of the standard has been encountered.
The writer produces this bug: if you follow the standard,
UNDEFINED must be used as the datatype, nothing else. One key difference between the two datatypes is that one comes with a terminating byte and the other doesn't. Likewise, using
ASCII without a terminating byte is a bug, too. And for the field "ExifVersion" it's impossible to write the literal
0220 with the needed terminating byte when the field length is defined as exactly 4 bytes already.
Effectively the difference hardly matters:
- either I interpret the binary
0x30 0x32 0x32 0x30 as 4 bytes (as per datatype UNDEFINED),
- or as the literal
0220 (with or without a terminating
0x00, as per datatype ASCII).
Even if I, as a consumer, am still able to read it, a deviation from the standard should not pass unnoticed. It's somewhat like crossing a street while the traffic light is red for you: it may work under certain conditions, but that doesn't make it okay.
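The byte-level difference described above can be made concrete. A small sketch showing the "0220" value as the 4 raw bytes the standard prescribes, versus the 5 bytes a terminated ASCII string would need:

```kotlin
// ExifVersion "0220" as datatype UNDEFINED: exactly the 4 raw bytes,
// no terminator — this is what the EXIF standard requires.
val undefinedBytes = byteArrayOf(0x30, 0x32, 0x32, 0x30)

// The same value as a standard-conforming ASCII field would need a
// terminating 0x00, i.e. 5 bytes — one more than the 4-byte field allows.
val asciiBytes = byteArrayOf(0x30, 0x32, 0x32, 0x30, 0x00)

fun main() {
    println(undefinedBytes.size)                        // 4: fits the field
    println(asciiBytes.size)                            // 5: one byte too many
    println(String(undefinedBytes, Charsets.US_ASCII))  // the literal "0220"
}
```

This is why a writer that declares the field as ASCII is forced to drop the terminator, which is itself a violation.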
I can't figure out why, after adding a second object, the first object stops animating and doesn't run anymore. How can I figure out why my objects are canceling each other out, and how can I create two or more three.js items that animate at the same time? I would like to run more than one three.js script in one file. Can someone help me? Thank you...
Answered 2021-Sep-28 at 13:17
You have two functions named
animate (even if you've diligently renamed the scene to
scene2, the camera to camera2, and so on). Since both render loops call
requestAnimationFrame(animate), the other (overwritten) animate function will no longer be called.
The easiest (though by no means pretty) fix is to similarly rename the other animation function to animate2.
The better fix would be to refactor this code into functions that have their own local namespaces, so mix-ups like these are harder. (When you do that, you can also refactor the function creating the geometry into a function of its own, etc.)
I added the code from
. The preview, permissions, and everything else are working just fine; however, I am not able to capture a photo from the button. I declared the button to have a listener and connected the function
takePhoto(), but it doesn't seem to work. It worked fine in the past, and I don't know if it's because of the new version of CameraX, but it doesn't seem to respond correctly. I have Logcat open, and when I press the button it shows the message that it was pressed:
Answered 2021-Sep-14 at 13:47
Your code lacks binding ImageCapture.Builder() to the camera provider, which is the prominent reason why it is not capturing images. You need to bind the imageCapture use case; in your startCamera() function, it can be done in the following way:
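A sketch of what that binding might look like, following the pattern from the CameraX getting-started codelab (the `viewFinder` and `imageCapture` property names are assumptions about the questioner's code):

```kotlin
// Sketch: startCamera() with ImageCapture bound to the lifecycle.
// Without passing imageCapture to bindToLifecycle, takePicture() has no
// active use case to act on, so the capture button appears to do nothing.
private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()

        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(viewFinder.surfaceProvider)
        }
        imageCapture = ImageCapture.Builder().build()

        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
        cameraProvider.unbindAll()
        // The crucial part: bind BOTH preview and imageCapture.
        cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageCapture)
    }, ContextCompat.getMainExecutor(this))
}
```

With `imageCapture` bound, the `takePhoto()` click handler can call `imageCapture.takePicture(...)` and actually receive a result.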
I am trying to use ARCore to return a depth image and also use CameraX to return an RGB image. I can do both individually, but when I combine them, CameraX doesn't work.
I see that I must allow the shared camera, but as far as I've searched, that is only possible using the Camera2 API.
Does anyone know any way of using CameraX instead?...
Answered 2021-Aug-06 at 20:52
Unfortunately, only one client can use a camera at a time, and both ARCore and CameraX assume they're the only user.
This would require explicitly sharing a camera instance between the two, and while I believe ARCore has some provisions for this, I don't believe CameraX is able to use ARCore's interfaces.
So if you need the RGB image, you probably need to figure out how to ask ARCore for it, and not use CameraX at all.
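Asking ARCore for the RGB image directly might look like the following sketch, using ARCore's CPU image access (the frame comes back as a YUV_420_888 `android.media.Image` that must be closed after use):

```kotlin
// Sketch: obtaining the camera image from ARCore itself instead of CameraX.
// Called once per rendered frame while the ARCore Session is running.
fun processFrame(session: Session) {
    val frame = session.update()
    try {
        val image = frame.acquireCameraImage()  // YUV_420_888 CPU image
        try {
            // ...convert YUV to RGB / run your own processing here...
        } finally {
            image.close()  // must be closed, or ARCore runs out of buffers
        }
    } catch (e: NotYetAvailableException) {
        // The camera image isn't ready this frame; try again on the next one.
    }
}
```

The YUV-to-RGB conversion is left out here; it can be done with a shader, RenderScript/vendor equivalents, or a small CPU routine depending on performance needs.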
How can I achieve a portrait mode (blurring the background behind people) in video, programmatically? This feature now exists natively in iOS since the last update. I think we can achieve this using something like an OpenGL shader or some setting in the Camera2 API, but it does not look straightforward at all. Any help would be appreciated. This link has an image as an example of what I am trying to achieve: https://www.xda-developers.com/how-to-use-portrait-mode/ (the article is not technical and does not mention anything related to programming)....
Answered 2021-Aug-02 at 00:27
Building a good-quality portrait mode implementation by yourself is possible, of course, but you need strong expertise in computer vision and probably machine learning as well. The hard part is separating the foreground objects (or people) from the background, and for that a simple OpenGL shader won't really cut it.
Once you have the matte (which describes whether a given pixel is foreground or background or a mixture of the two), a blur shader is relatively straightforward, but that's the easy part.
So the simplest path is to just use CameraX and its extensions feature; eventually Android 12 will be available on a large number of devices as well, allowing for the lower-level camera2 path.
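The CameraX extensions route can be sketched as follows, assuming the `camera-extensions` artifact (1.1.0 or newer) and a device whose vendor ships a bokeh extension; `cameraProvider`, `preview`, and `lifecycleOwner` are assumed to come from the usual CameraX setup:

```kotlin
// Sketch: enable the vendor "bokeh" (portrait) extension where supported.
val extensionsManagerFuture =
    ExtensionsManager.getInstanceAsync(context, cameraProvider)

extensionsManagerFuture.addListener({
    val extensionsManager = extensionsManagerFuture.get()
    val baseSelector = CameraSelector.DEFAULT_BACK_CAMERA

    if (extensionsManager.isExtensionAvailable(baseSelector, ExtensionMode.BOKEH)) {
        // A selector that routes capture through the vendor's bokeh pipeline.
        val bokehSelector = extensionsManager
            .getExtensionEnabledCameraSelector(baseSelector, ExtensionMode.BOKEH)
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(lifecycleOwner, bokehSelector, preview)
    }
    // else: no vendor bokeh on this device; fall back to a plain preview.
}, ContextCompat.getMainExecutor(context))
```

Availability is entirely up to the device vendor, which is why the answer stresses checking `isExtensionAvailable` rather than assuming portrait mode exists.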
Is there a way to record a video with the Android Camera2 API or AndroidX Camera and obtain the file size while taking the video?
I want something like this: the user records a video and, while recording, sees the file size of the current capture.
Is there any callback that gives the file size of the temporary video file? Or is there a chance to pass some kind of "TempFileObserver"? Or an existing library?
Kind regards for reading and sharing your experience. PS: I don't need a complete implementation; a reference is enough...
Answered 2021-Jul-08 at 09:51
You can achieve this with MediaCodec: on every frame, drain the encoder with dequeueOutputBuffer, fetch the encoded data with getOutputBuffer, and accumulate BufferInfo.size to track the bytes produced so far.
Alternatively, you can use ImageReader.OnImageAvailableListener; the ImageReader returns the buffer of the latest captured image.
Please refer to the link below for details.
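The MediaCodec approach described above can be sketched as a running byte counter accumulated in the encoder's drain loop (a simplified sketch: format changes and end-of-stream handling are collapsed into the `index < 0` branch, and muxer writing is elided):

```kotlin
// Sketch: track "current recording size" while encoding with MediaCodec.
// Each drained output buffer's BufferInfo.size is added to a running total
// that the UI can display as the approximate file size so far.
var totalBytes = 0L
val bufferInfo = MediaCodec.BufferInfo()

fun drainEncoder(codec: MediaCodec) {
    while (true) {
        val index = codec.dequeueOutputBuffer(bufferInfo, 0)
        if (index < 0) break  // no output ready yet (or format change / EOS)
        val encoded = codec.getOutputBuffer(index) ?: break
        totalBytes += bufferInfo.size
        // ...write `encoded` to the MediaMuxer / output file here...
        codec.releaseOutputBuffer(index, false)
    }
}
```

Note `totalBytes` counts the encoded elementary stream; the container (MP4) adds some overhead, so the final file will be slightly larger than the counter.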
No vulnerabilities reported
Gradle Version: 4.6
Compile Sdk Version: 28
Build Tools Version: 28.0.3
CMake: install from the SDK Manager
Ignore all *.mk files in this repository; those are used for the ndk-bundle only, but they are useful for studying!