Android 7.0 Nougat (API level 24) introduced the native camera API, which finally allows fine-grained control of the camera directly from C++. The new API allows you to access image data directly in C, without the need to pass it from Java. Therefore, there might be some (small) performance gain; I did not do any performance comparison myself, so I don't know for sure. Using the new API gives you one additional benefit - you can reduce the JNI glue code. If your image processing is done mostly in C++, but you still have to jump back and forth between Java and C, you might be required to add a lot of JNI glue code for Java-to-C communication. Using the native camera API might help to reduce the unnecessary JNI parts.

In this post I would like to show how to use the native camera API to get image data for further processing (on CPU & GPU). You can also find the sample project on GitHub. I also wrote another post where I show how to do high-performance processing of images with C/C++ and RenderScript, and another blog post which shows how to generate a video file from captured images using the MediaCodec API.

If you want to see some GPU-accelerated effects that you can do with OpenGL ES2, you can also check out my Fake Snow Cam app. Even though in this app I'm using the Camera2 Java API (I also wanted to support Android 6), the particle snow effect that I apply is used from C++ code in the same way I show in this post. If you're an Android enthusiast who likes to learn more about Android internals, I highly recommend checking out my Bugjaeger app. It allows you to connect two Android devices through USB OTG and, directly from your Android phone/tablet, perform many of the tasks that are normally only accessible from a developer machine via ADB.

The Hardware Abstraction Layer (HAL) is the standard interface that Android requires hardware vendors to implement. There are two camera HALs supported simultaneously - HAL1 and HAL3 (HAL2 was just a temporary step between the two). HAL1 used operating modes to divide the functionality of the camera. According to the documentation, these operating modes overlapped, which made it hard to implement new features. HAL3 should overcome this disadvantage and give applications more power to control the camera.

The NDK's native camera is the equivalent of the camera2 interface. But in comparison to the camera2 API, the native camera doesn't support the HAL1 interface. This means that the native API won't list camera devices with the LEGACY hardware level. Later in this post I'll show how to query the camera metadata.

There is ACameraManager, which gives you a list of available camera devices and allows you to query device features. The device features and settings that you can query are wrapped into ACameraMetadata; a single entry can be read with, for example, `ACameraMetadata_getConstEntry(metadataObj, ACAMERA_SENSOR_INFO_EXPOSURE_TIME_RANGE, &entry)`. You can use ACameraManager to open an ACameraDevice, which you'll use to control the camera. Once the ACameraDevice is open, you should use ACameraCaptureSession to configure where (to which ANativeWindow) the camera can send the outputs. There are multiple options for where output images can be sent, e.g. SurfaceTexture, Allocation, or AImageReader. ACaptureRequest is then used to specify to which actual target the captured images should be sent. At the end, the camera will give you the image data and some additional metadata that describes the actual configuration used by the camera for capturing (this might differ from what you requested, if you specified an incompatible configuration).

The camera is supported in the NDK only since API level 24. You should make sure that you've set the proper platform level in the build.gradle of your module.
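The native camera workflow described above (list devices, query metadata, open a device, wire an output window into a capture session) can be sketched roughly as below. This is only a sketch, not a complete implementation: the function name `runCamera`, the 640x480 resolution, and the choice of the first camera ID are illustrative, the empty callback stubs do nothing, and error handling and cleanup are mostly omitted. It assumes minSdkVersion 24 and linking against `camera2ndk` and `mediandk`.

```cpp
#include <camera/NdkCameraManager.h>
#include <camera/NdkCameraMetadata.h>
#include <camera/NdkCameraDevice.h>
#include <camera/NdkCameraCaptureSession.h>
#include <media/NdkImageReader.h>

// Minimal callback stubs - a real app would handle these events
static void onDisconnected(void*, ACameraDevice*) {}
static void onError(void*, ACameraDevice*, int) {}
static void onSessionClosed(void*, ACameraCaptureSession*) {}
static void onSessionReady(void*, ACameraCaptureSession*) {}
static void onSessionActive(void*, ACameraCaptureSession*) {}

void runCamera()
{
    ACameraManager* mgr = ACameraManager_create();

    // List available camera devices (LEGACY-level devices won't show up here)
    ACameraIdList* ids = nullptr;
    ACameraManager_getCameraIdList(mgr, &ids);
    const char* camId = ids->cameraIds[0];

    // Query a metadata entry, e.g. the supported exposure time range
    ACameraMetadata* meta = nullptr;
    ACameraManager_getCameraCharacteristics(mgr, camId, &meta);
    ACameraMetadata_const_entry entry = {};
    if (ACameraMetadata_getConstEntry(meta,
            ACAMERA_SENSOR_INFO_EXPOSURE_TIME_RANGE, &entry) == ACAMERA_OK) {
        int64_t minExposureNs = entry.data.i64[0];
        int64_t maxExposureNs = entry.data.i64[1];
        (void)minExposureNs; (void)maxExposureNs;
    }

    // Open the device
    ACameraDevice_StateCallbacks deviceCbs = {nullptr, onDisconnected, onError};
    ACameraDevice* device = nullptr;
    ACameraManager_openCamera(mgr, camId, &deviceCbs, &device);

    // Use an AImageReader as the output; its ANativeWindow is what
    // the capture session delivers images to
    AImageReader* reader = nullptr;
    AImageReader_new(640, 480, AIMAGE_FORMAT_YUV_420_888, 4, &reader);
    ANativeWindow* window = nullptr;
    AImageReader_getWindow(reader, &window);

    // Configure the session outputs
    ACaptureSessionOutputContainer* outputs = nullptr;
    ACaptureSessionOutputContainer_create(&outputs);
    ACaptureSessionOutput* output = nullptr;
    ACaptureSessionOutput_create(window, &output);
    ACaptureSessionOutputContainer_add(outputs, output);

    ACameraCaptureSession_stateCallbacks sessionCbs =
        {nullptr, onSessionClosed, onSessionReady, onSessionActive};
    ACameraCaptureSession* session = nullptr;
    ACameraDevice_createCaptureSession(device, outputs, &sessionCbs, &session);

    // The ACaptureRequest specifies which targets receive the captured images
    ACaptureRequest* request = nullptr;
    ACameraDevice_createCaptureRequest(device, TEMPLATE_PREVIEW, &request);
    ACameraOutputTarget* target = nullptr;
    ACameraOutputTarget_create(window, &target);
    ACaptureRequest_addTarget(request, target);

    ACameraCaptureSession_setRepeatingRequest(session, nullptr, 1, &request, nullptr);

    // ... process images arriving at the AImageReader; when done, release
    // everything (ACaptureRequest_free, ACameraCaptureSession_close,
    // ACameraDevice_close, AImageReader_delete, ACameraManager_delete, ...)
    ACameraMetadata_free(meta);
    ACameraManager_deleteCameraIdList(ids);
}
```

Note that the capture flow is asynchronous: `ACameraDevice_createCaptureSession` and `setRepeatingRequest` return immediately, and the actual images show up later via the AImageReader's listener callback on a camera thread.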