Camera2 | Android L Camera2 Demo | Camera library
kandi X-RAY | Camera2 Summary
Android L Camera2 Demo.
Top functions reviewed by kandi - BETA
- Initializes the view
- Initialize capture builder
- Takes a photo
- Update the preview of the camera
- Main execution method
- Creates a jpeg file
- Save bytes to file
- Resume camera
- Sets up the required output sizes
- Opens the camera
- Create camera preview session
- Initializes the surface
- Main method
- Creates the path to the test directory
- Create the switch
- Get the camera id for the device
- Write the camera ID to the preferences
- Closes the camera device
- Performs a still capture
- Write the format to the device
- Initializes the activity view
- Measures the view
- Called when buffer is available
- Creates the allocation
- Configures the transform for the view
- Create a capture session for the given camera device
Camera2 Key Features
Camera2 Examples and Code Snippets
Community Discussions
Trending Discussions on Camera2
QUESTION
After updating from androidx.camera:camera-view:1.0.0-alpha32 to androidx.camera:camera-view:1.1.0-beta01, I receive the following error when using CameraX:
ANSWER
Answered 2022-Feb-17 at 12:57
It fails because you have different versions of the various androidx.camera.* libraries.
If you check this:
https://developer.android.com/jetpack/androidx/releases/camera
It has the following description:
From 1.1.0-beta01, all CameraX libraries will align the same version number. This will help developers track versions much easier and reduce the complexity of large version compatibility matrix.
So you need to use the same version for ALL camerax libraries.
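For example, an aligned dependency block in the module's build.gradle might look like this (the version shown is illustrative; use the latest aligned release):

```groovy
dependencies {
    // One shared version variable keeps all CameraX artifacts aligned.
    def camerax_version = "1.1.0-beta01"
    implementation "androidx.camera:camera-core:$camerax_version"
    implementation "androidx.camera:camera-camera2:$camerax_version"
    implementation "androidx.camera:camera-lifecycle:$camerax_version"
    implementation "androidx.camera:camera-view:$camerax_version"
}
```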
QUESTION
I am writing an app using the Camera2 API, which should show a preview from the camera and take a picture. Currently my code works as follows:
- When the Camera Fragment is instantiated, wait for TextureView.SurfaceTextureListener.onSurfaceTextureAvailable to be called
- In the ViewModel, get available and suitable picture and preview sizes from CameraCharacteristics, and emit the found preview size to the Fragment with LiveData
- The Fragment observes the preview size LiveData and calls setDefaultBufferSize with the new size for its TextureView's SurfaceTexture
- When the new size is set, a capture session is created and a repeating preview request is set, so the TextureView starts to show the image from the camera
- To avoid disrupting other camera apps' work, all camera-related things are cleared after the Fragment's onPause, and steps 1-4 are followed again after onResume

The Surface instance is shared between the Fragment and the camera logic classes: the shared variable is initialized in TextureView.SurfaceTextureListener.onSurfaceTextureAvailable and set to null when TextureView.SurfaceTextureListener.onSurfaceTextureDestroyed is called.
This works fine on some devices of popular brands with modern Android versions, but the app should also work on a particular generic Chinese tablet with Android 6 ("CameraManager: Using legacy camera HAL"), and there I face a problem.
- When the camera is instantiated and the preview is started, I see that the preview size is 640x480 (so the image is stretched); however, the size passed to setDefaultBufferSize is 1280x720
- Logcat is also full of continuous Surface::setBuffersUserDimensions(this=0x7f55fb5200,w=640,h=480) messages
- I've found on SO that on some Samsung devices with Android 5 some resolutions may not really be available for Camera2, but here, when I close the app and open it again, the preview resolution is 1280x720 as needed
- So my guess is that I may call setDefaultBufferSize too early on the first Camera Fragment setup, and only when the view is recreated after the app was minimized is the needed resolution "picked up"
- I also tried to call setDefaultBufferSize in a lambda passed to TextureView.post, and it solved the problem except for the case when I have to ask for the user's permissions on the Camera Fragment (i.e. when the user opens the camera for the first time), so the Fragment is paused a few times to show permission pop-ups. However, without TextureView.post, setDefaultBufferSize is also called on the main thread, so I guess the delay caused by TextureView.post was the game changer here
- Also, in the setDefaultBufferSize docs I see: "The new default buffer size will take effect the next time the image producer requests a buffer to fill. For Canvas this will be the next time Surface.lockCanvas is called. For OpenGL ES, the EGLSurface should be destroyed (via eglDestroySurface), made not-current (via eglMakeCurrent), and then recreated (via eglCreateWindowSurface) to ensure that the new default size has taken effect." It seems to me that this may be relevant to my case
ANSWER
Answered 2022-Feb-01 at 10:24
Solved this by overriding onSurfaceTextureSizeChanged of SurfaceTextureListener and calling surfaceTexture.setDefaultBufferSize there with the desired preview size. When the default buffer size is overridden with an incorrect size (during initialization), this method is called, and I override it again.
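A minimal Java sketch of that listener (desiredWidth, desiredHeight, and openCamera() are hypothetical placeholders for the app's own preview size and setup logic):

```java
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        surface.setDefaultBufferSize(desiredWidth, desiredHeight);
        openCamera(); // app-specific camera setup
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
        // Re-apply the desired size if the platform reset the buffer size
        // (e.g. back to 640x480 on a legacy HAL device).
        surface.setDefaultBufferSize(desiredWidth, desiredHeight);
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {}
});
```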
QUESTION
I'm receiving the below error on API 31 devices when using the Firebase Auth UI library (phone number credential only):
...
ANSWER
Answered 2022-Jan-20 at 05:58
In my case, Firebase UI (com.firebaseui:firebase-ui-auth:8.0.0) was using com.google.android.gms:play-services-auth:19.0.0, which I found with the command './gradlew -q app:dependencyInsight --dependency play-services-auth --configuration debugCompileClasspath'.
This version of the play services auth was causing the issue for me.
I added a separate implementation 'com.google.android.gms:play-services-auth:20.0.1' to my Gradle file, and this issue disappeared.
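Put together, the relevant Gradle block might look like this (versions taken from the answer above):

```groovy
dependencies {
    implementation 'com.firebaseui:firebase-ui-auth:8.0.0'
    // Explicitly pin a newer play-services-auth so it overrides the
    // 19.0.0 version pulled in transitively by firebase-ui-auth.
    implementation 'com.google.android.gms:play-services-auth:20.0.1'
}
```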
QUESTION
My app process starts a foreground service when it receives the BOOT_COMPLETED action.
That service is for video recording (dashboard camera app) which uses camera and microphone.
To start a camera from a foreground service on Android 11+, one of the exemption conditions must be met ("Exemptions from the restrictions to starting a camera/microphone from a foreground service"): https://developer.android.com/guide/components/foreground-services#bg-access-restriction-exemptions
The only exception I can use is:
The service is started by an app that has the START_ACTIVITIES_FROM_BACKGROUND privileged permission.
This one doesn't require any interaction from the user: the user just wants video recording to start automatically when he starts driving his car. He doesn't want to manually bring the app to the foreground or start recording from a notification; he doesn't care about any of this, which is fine - it's 2021 and such things can be done automatically. But thanks, Google, for such restrictions in 2021. Yes, we should care about the safety of users' private personal data, but they could simply have added BACKGROUND_CAMERA and BACKGROUND_RECORD_AUDIO permissions with manual review on the Google Play Console, like they did with ACCESS_BACKGROUND_LOCATION.
If I understood correctly, to get the START_ACTIVITIES_FROM_BACKGROUND permission I must meet one of the conditions ("Exemptions from the restrictions to starting activities from the background"): https://developer.android.com/guide/components/activities/background-starts#exceptions
So I chose the SYSTEM_ALERT_WINDOW permission, because if it was granted by the user before, then no interaction from the user is required and the app can start an activity from the background. It can be granted with Intent(Settings.ACTION_MANAGE_OVERLAY_PERMISSION).
So do I now have the START_ACTIVITIES_FROM_BACKGROUND permission granted or not? I don't understand...
If yes, then I can use a camera in a foreground service which was started from some background action.
But still ...
ANSWER
Answered 2021-Dec-19 at 15:17
So do I have now START_ACTIVITIES_FROM_BACKGROUND permission granted or not?
No. That permission is defined in the framework as a signature/privileged permission, so a regular third-party app cannot obtain it; holding SYSTEM_ALERT_WINDOW does not grant it.
QUESTION
I've been testing a migration to Android's new CameraX library and checking the EXIF data with exif $FILE.
Some of the attributes are displayed, but the number of values seems truncated, and it displays an error:
ANSWER
Answered 2021-Oct-07 at 14:48
The error message is correct: tag 0x9000 "ExifVersion" may only use the type UNDEFINED, as per
- the documentation from 2002, page 26, and trusted resources like
- Exiv2 and
- AWare Systems.
But any consumer can still support other datatypes, such as ASCII - if exiftool does not yield an error, then it is most likely for your convenience, while at the same time you're unaware that a violation of the standard has been encountered.
The writer produces this bug: if following the standard, UNDEFINED must be used as the datatype, nothing else. One key difference between the two datatypes is that one comes with a terminating byte and the other doesn't. Likewise, using ASCII without a terminating byte is a bug, too - and in the field "ExifVersion" it's impossible to write the literal 0220 with the needed terminating byte when the field length is already defined as exactly 4 bytes.
Effectively the difference hardly matters:
- either I interpret the binary 0x30 0x32 0x32 0x30 as 4 bytes (as per datatype UNDEFINED), or
- I read the literal 0220 (with or without a terminating 0x00, as per datatype ASCII).
Even if I as a consumer am still able to read the value despite the violation of the standard, it should not pass by unnoticed. It's somewhat like crossing a street while the traffic light is red for you: it may work under certain conditions, but that doesn't make it okay.
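The two readings of those four bytes can be illustrated in a few lines of plain Java (a standalone illustration, not code from any EXIF library):

```java
import java.nio.charset.StandardCharsets;

class ExifVersionBytes {
    public static void main(String[] args) {
        // The four raw bytes of an ExifVersion field, as stored on disk.
        byte[] raw = {0x30, 0x32, 0x32, 0x30};

        // Read as UNDEFINED: four opaque bytes, no terminator expected.
        // Read as ASCII: the same bytes decode to the literal "0220",
        // but there is no room left for a terminating 0x00 in a
        // field whose length is fixed at exactly 4 bytes.
        String asAscii = new String(raw, StandardCharsets.US_ASCII);
        System.out.println(asAscii); // prints 0220
    }
}
```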
QUESTION
I can't figure out why adding a second object stops the first object from animating. How can I figure out why my objects are canceling each other out, and how can I create two or more three.js items that animate at the same time? I would like to run more than one three.js script on a single page. Can someone help me? Thank you.
...
ANSWER
Answered 2021-Sep-28 at 13:17
You have two functions named animate (even though you've diligently renamed the scene to scene2, the camera to camera2, etc.). Since animate calls requestAnimationFrame(animate), the other (overwritten) animate function will no longer be called.
The easiest (though by no means pretty) fix is to similarly rename the other animation function to animate2.
The better fix would be to refactor this code into functions that have their own local namespaces, so mix-ups like these are harder. (When you do that, you can also refactor the function creating the geometry into a function of its own, etc.)
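A sketch of the scoped approach might look like this (assumes three.js is loaded on the page; createSpinner and the canvas ids are hypothetical):

```javascript
// Wrap each scene in its own function scope so nothing is overwritten.
function createSpinner(canvas) {
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(75, 1, 0.1, 1000);
  camera.position.z = 3;
  const renderer = new THREE.WebGLRenderer({ canvas });
  const cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshNormalMaterial()
  );
  scene.add(cube);

  // Each scope has its own animate; the two loops cannot clobber each other.
  function animate() {
    requestAnimationFrame(animate);
    cube.rotation.y += 0.01;
    renderer.render(scene, camera);
  }
  animate();
}

createSpinner(document.getElementById('canvas1'));
createSpinner(document.getElementById('canvas2'));
```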
QUESTION
I added the code from this link. The preview, permissions and everything else are working just fine; however, I am not able to capture a photo with the button. I declared the button to have a listener and connected the takePhoto() function, but it doesn't seem to work. It worked fine in the past, and I don't know if it's because of the new CameraX version, but it doesn't seem to respond correctly. I have Logcat open, and when I press the button it shows the message that it was pressed:
ANSWER
Answered 2021-Sep-14 at 13:47
Your code lacks binding of the ImageCapture.Builder() use case to the camera provider, which is the main reason it is not capturing images. You need to bind the imageCapture builder; in your startCamera() function, bind imageCapture in the following way:
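A minimal sketch of that binding (Java; preview, cameraProvider, and lifecycleOwner are assumed to come from the usual startCamera() setup):

```java
// Build the ImageCapture use case alongside the preview.
ImageCapture imageCapture = new ImageCapture.Builder().build();

// Rebind: a use case that is not passed to bindToLifecycle() is inactive,
// so calling takePicture() on it does nothing useful.
cameraProvider.unbindAll();
cameraProvider.bindToLifecycle(
        lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview, imageCapture);
```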
QUESTION
I am trying to use ARCore to return a depth image and CameraX to return an RGB image. I can do each individually, but when I combine the two, CameraX doesn't work.
I see that I must allow a shared camera, but as far as I have searched, this is only possible using the camera2 API.
Does anyone know of a way to use CameraX instead?
...
ANSWER
Answered 2021-Aug-06 at 20:52
Unfortunately, only one client can use a camera at a time, and both ARCore and CameraX assume they're the only user.
This would require explicitly sharing a camera instance between the two, and while I believe ARCore has some provisions for this, I don't believe CameraX is able to use ARCore's interfaces.
So if you need the RGB image, you probably need to figure out how to ask ARCore for it, and not use CameraX at all.
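If the RGB frames must come from ARCore, a minimal sketch of pulling them per frame might look like this (Java; session is the active ARCore Session, and the YUV-to-RGB conversion is left out):

```java
// Inside the per-frame update loop:
Frame frame = session.update();
try (Image image = frame.acquireCameraImage()) {
    // The image is YUV_420_888; convert its planes to RGB/Bitmap as needed.
    int width = image.getWidth();
    int height = image.getHeight();
    // ... process image.getPlanes() ...
} catch (NotYetAvailableException e) {
    // No camera image is ready for this frame yet; try again next frame.
}
```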
QUESTION
How can I achieve portrait mode (blurring the background behind people in an image or video) programmatically? This feature now exists natively in iOS since the last update. I think we can achieve this using something like an OpenGL shader or some setting in the Camera2 API, but it does not look straightforward at all. Any help would be appreciated. This link has an image as an example of what I am trying to achieve: https://www.xda-developers.com/how-to-use-portrait-mode/ (the article is not technical and does not mention anything related to programming).
...
ANSWER
Answered 2021-Aug-02 at 00:27
Building a good-quality portrait mode implementation by yourself is possible, of course, but you need strong expertise in computer vision and probably machine learning as well. The hard part is separating the foreground objects (or people) from the background, and for that a simple OpenGL shader won't really cut it.
Once you have the matte (which describes whether a given pixel is foreground or background or a mixture of the two), a blur shader is relatively straightforward, but that's the easy part.
That said, CameraX extensions allow devices to support Portrait mode for apps. And in Android 12 / API 31, camera2 also supports extensions.
So the simplest path is to just use CameraX and the extension feature there, and eventually Android 12 will be available on a large number of devices as well, allowing for the lower-level camera2 path.
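A sketch of the CameraX route (using the androidx.camera:camera-extensions artifact; identifiers follow the 1.1.0+ ExtensionsManager API, and preview, imageCapture, cameraProvider, and lifecycleOwner are assumed to be set up as usual):

```java
ListenableFuture<ExtensionsManager> extensionsFuture =
        ExtensionsManager.getInstanceAsync(context, cameraProvider);
extensionsFuture.addListener(() -> {
    try {
        ExtensionsManager extensionsManager = extensionsFuture.get();
        if (extensionsManager.isExtensionAvailable(
                CameraSelector.DEFAULT_BACK_CAMERA, ExtensionMode.BOKEH)) {
            // Swap in a selector that enables the vendor bokeh extension.
            CameraSelector bokehSelector =
                    extensionsManager.getExtensionEnabledCameraSelector(
                            CameraSelector.DEFAULT_BACK_CAMERA, ExtensionMode.BOKEH);
            cameraProvider.unbindAll();
            cameraProvider.bindToLifecycle(
                    lifecycleOwner, bokehSelector, preview, imageCapture);
        }
    } catch (Exception e) {
        // Extension lookup failed; fall back to the normal camera.
    }
}, ContextCompat.getMainExecutor(context));
```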
QUESTION
Is there a way to record a video with the Android Camera2 API or AndroidX Camera and obtain the file size while taking the video?
I want to have something like this: the user records a video and, while recording, sees the file size of the current capture.
Is there any callback that gives the file size of the temporary video file? Or is there a chance to pass some kind of "TempFileObserver"? Or an existing library?
Thanks for reading and sharing your experience. PS: I don't need a complete implementation; a reference is enough.
...
ANSWER
Answered 2021-Jul-08 at 09:51
You can achieve this with MediaCodec: on every frame you capture, drain the encoder with dequeueOutputBuffer, read the encoded data via MediaCodec.getOutputBuffer together with its BufferInfo, and add up the buffer sizes.
You can also try ImageReader.OnImageAvailableListener; the ImageReader returns the buffer for the latest captured image.
Please refer to the link below for details:
https://developer.android.com/reference/android/media/ImageReader.OnImageAvailableListener
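The byte-counting approach boils down to summing BufferInfo.size for every encoded buffer drained from the codec; the bookkeeping itself is plain Java and might look like this (RecordingSizeTracker is a hypothetical helper class, MediaCodec wiring omitted):

```java
// Tracks the running size of a recording as encoded buffers are drained.
class RecordingSizeTracker {
    private long totalBytes = 0;

    // Call this with BufferInfo.size each time dequeueOutputBuffer
    // returns a valid encoded buffer.
    public void onEncodedBuffer(int sizeBytes) {
        totalBytes += sizeBytes;
    }

    public long getTotalBytes() {
        return totalBytes;
    }

    // Human-readable size for the on-screen counter.
    public String formatted() {
        if (totalBytes < 1024) return totalBytes + " B";
        if (totalBytes < 1024 * 1024) return (totalBytes / 1024) + " KB";
        return (totalBytes / (1024 * 1024)) + " MB";
    }
}
```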
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install Camera2
You can use Camera2 like any standard Java library. Please include the jar files in your classpath. You can also use any IDE, and you can run and debug the Camera2 component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.