2.2 Capture Still Pictures
In this lesson, you’ll learn how to capture high-resolution still pictures in the JPEG format and store them in the device’s
Environment.DIRECTORY_PICTURES folder. You’ll also learn how to control the state of the auto-focus and auto-exposure algorithms in order to make sure that the photos taken are not blurry or dark.
1. Introduction (2 lessons, 06:13)
2. Using the Camera2 API (2 lessons, 26:08)
3. Using Android GPUImage (2 lessons, 10:44)
4. Conclusion (1 lesson, 00:30)
Hello, and welcome back. In this lesson, we will be writing code to take photos and save them on external storage media, such as SD cards. The Camera2 API expects you to handle the auto-focus and auto-exposure mechanisms of the camera hardware yourself, so this lesson is again going to be a bit technical, but you're going to learn a lot about how Android cameras work.

The user is going to take photos by pressing this floating action button. So, inside the initializeUI method of the main activity, add a new OnClickListener to the fab object. Taking a photo involves generating a correctly timed sequence of several operations, so here, let's just call a method called startPhotoCaptureSequence and generate the method.

Before we go ahead and create the sequence, let us create an ImageReader object. The high-resolution photos we will be taking in this lesson are going to be written to the surface of this object. Inside the initializeUI method, add a call to a new method called initializeImageReader and generate the method. Inside this method, we will, of course, initialize the imageReader. To do so, use the ImageReader.newInstance method. This expects a width, a height, an image format, which is going to be ImageFormat.JPEG for now, and the maximum number of images this imageReader can store. Let's say 1 here.

The width and height together represent the resolution of the photos our app can take. We want this resolution to be as high as possible, so we must determine the maximum resolution the camera supports. We now need a StreamConfigurationMap object. To initialize it, first get the characteristics of the backCamera, and then get the value of CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP. Again, this must be inside a try-catch block. To get all the resolutions the camera supports in the form of an array, call the getOutputSizes method of the map and pass ImageFormat.JPEG as an argument to it.
Just for the sake of demonstration, let me now loop through this array and log the resolutions my camera supports. You can get the width of each Size object using the getWidth method, and the height using the getHeight method. Let's declare the width and height as local variables of this method. If you run the app now, you should be able to see all the resolutions your camera supports. As you can see, my camera supports six resolutions.

To get the maximum supported resolution, we must sort this array using the Arrays.sort method. The sort method expects a Comparator object specifying how the sizes must be sorted. The compare method here receives two parameters, size1 and size2. To sort the array in descending order, calculate the product of the width and height of size2, and from it, subtract the product of the width and height of size1. Okay, the array is now sorted, and at the first index, we will have the maximum resolution. So assign the width and height of sizes[0] to the width and height variables. Optionally, you can log the maximum resolution.

Let us now add an OnImageAvailableListener to the imageReader. This way, we can store the captured photo as soon as it is available to us. The second argument of the setOnImageAvailableListener method expects the handler. We'll leave the listener's method empty for now.

Go to the startPhotoCaptureSequence method now. The first step of the photo capture sequence is a lock-focus operation. In other words, we must make sure that the camera is not adjusting the position of its lens while the photo is being taken. If we skip this step, the photos captured can be very blurry. So, use the requestBuilder to set the value of CaptureRequest.CONTROL_AF_TRIGGER, which is short for control auto-focus trigger, to CaptureRequest.CONTROL_AF_TRIGGER_START. And then generate a CaptureRequest by calling its build method. Next, call the capture method of the cameraCaptureSession object and pass the lockFocusRequest as its first argument.
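The descending sort described earlier can be sketched in plain Java. Here, `Size` is a minimal stand-in for `android.util.Size`, defined only so the comparator logic can be shown and run on its own; the names are illustrative, not part of the Camera2 API.

```java
import java.util.Arrays;
import java.util.Comparator;

public class SizeSortDemo {

    // Minimal stand-in for android.util.Size, just for this sketch.
    static class Size {
        final int width, height;
        Size(int width, int height) { this.width = width; this.height = height; }
        int getWidth() { return width; }
        int getHeight() { return height; }
    }

    // Sorts the supported JPEG sizes so the largest resolution ends up at index 0.
    static Size[] sortDescendingByArea(Size[] sizes) {
        Arrays.sort(sizes, new Comparator<Size>() {
            @Override
            public int compare(Size size1, Size size2) {
                // size2's area first, so bigger areas sort before smaller ones
                return size2.getWidth() * size2.getHeight()
                        - size1.getWidth() * size1.getHeight();
            }
        });
        return sizes;
    }

    public static void main(String[] args) {
        Size[] sizes = {
            new Size(1920, 1080), new Size(4032, 3024), new Size(640, 480)
        };
        sortDescendingByArea(sizes);
        // The maximum supported resolution is now at index 0.
        System.out.println(sizes[0].getWidth() + "x" + sizes[0].getHeight());
    }
}
```

One caveat: the subtraction trick can overflow for extremely large areas; `Long.compare` on the two areas would be a safer general-purpose comparison, though real camera resolutions fit comfortably in an int.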
As the second argument, create a CaptureCallback object. Here, we are interested only in the onCaptureCompleted method, so select it and say OK. The third argument is the handler object, and all this code must be inside a try-catch block.

Now, inside the onCaptureCompleted method, we have access to a result object. We can use it to determine the current state of the auto-focus. So, say result.get(CaptureResult.CONTROL_AF_STATE) here. If the autoFocusState is equal to CameraMetadata.CONTROL_AF_STATE_FOCUSED_LOCKED, or it is equal to CameraMetadata.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED, it means that the auto-focus has been locked.

We must now determine the state of the auto-exposure. To do so, pass CaptureResult.CONTROL_AE_STATE to the result.get method. CONTROL_AE_STATE is, of course, short for control auto-exposure state. Some devices might not support this feature, so we must check if it is null. In that case, we can't adjust the auto-exposure, so we must simply go ahead and take the picture. I'm going to call a method called takePicture here. If this state is not null, we must check if it is equal to CameraMetadata.CONTROL_AE_STATE_CONVERGED. If it is, it means the exposure levels are good, and we can go ahead and take the picture. But if that is not the case, we must start a pre-capture sequence. Let's do that inside a method called startPrecaptureSequence. Generate the takePicture method now, and also generate the startPrecaptureSequence method.

The pre-capture sequence is necessary because it adjusts the auto-exposure levels. If we don't run it, our photos are likely to be very dark and unusable. Here, use the requestBuilder to set the value of CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER to CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_START. And then generate a new CaptureRequest using the build method of the requestBuilder. Next, call the capture method of the cameraCaptureSession, and pass the precaptureRequest object as its first argument.
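The branching inside onCaptureCompleted is easy to get wrong, so here is a framework-free sketch of just the decision logic. The `AfState` and `AeState` enums and the `Decision` type are hypothetical stand-ins for the `CameraMetadata` integer constants, introduced only so the logic can run outside Android; a null AE state models a device that does not report auto-exposure state.

```java
public class FocusExposureLogic {

    // Stand-ins for the CameraMetadata CONTROL_AF_STATE_* constants.
    enum AfState { INACTIVE, PASSIVE_SCAN, FOCUSED_LOCKED, NOT_FOCUSED_LOCKED }

    // Stand-ins for the CameraMetadata CONTROL_AE_STATE_* constants.
    enum AeState { SEARCHING, CONVERGED, PRECAPTURE }

    // Hypothetical outcome type for this sketch.
    enum Decision { TAKE_PICTURE, RUN_PRECAPTURE, WAIT }

    // Mirrors the decision made after the lock-focus request: once focus is
    // locked, take the picture if exposure has converged (or the AE state is
    // unavailable); otherwise run the precapture sequence first.
    static Decision afterLockFocus(AfState afState, AeState aeState) {
        if (afState == AfState.FOCUSED_LOCKED
                || afState == AfState.NOT_FOCUSED_LOCKED) {
            if (aeState == null || aeState == AeState.CONVERGED) {
                return Decision.TAKE_PICTURE;
            }
            return Decision.RUN_PRECAPTURE;
        }
        return Decision.WAIT; // focus not locked yet; wait for the next result
    }
}
```

In the real callback, the same three outcomes correspond to calling takePicture, calling startPrecaptureSequence, or simply doing nothing until onCaptureCompleted fires again.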
As the second argument, pass a new CaptureCallback object. Again, we are interested only in the onCaptureCompleted method here. As the last argument, pass the handler. Here, we must check if the value of CaptureResult.CONTROL_AE_STATE is equal to CameraMetadata.CONTROL_AE_STATE_PRECAPTURE. If this is true, it means the pre-capture sequence is still running, so we don't do anything and just return. Otherwise, we can call takePicture.

At this point, we are ready to take a picture. For this, we are going to need a new CaptureRequest.Builder object. Create it by calling the cameraDevice.createCaptureRequest method and passing CameraDevice.TEMPLATE_STILL_CAPTURE to it. Wrap it inside a try-catch block. Next, call the addTarget method, and pass the surface of the imageReader to it. For this call to work, we must mention the surface during the creation of the cameraCaptureSession object. So here, inside the Arrays.asList method, include imageReader.getSurface().

Now, we know that our camera sensor is at an angle of 90 degrees. Therefore, we must set the value of CaptureRequest.JPEG_ORIENTATION to 90. If we don't do this, our photo is going to be rotated by 90 degrees, and we don't want that. Next, we must stop the live preview, so call the stopRepeating method of the cameraCaptureSession object. Finally, call its capture method and pass photoRequestBuilder.build() to it. The second argument can be null, and the third argument must be the handler.

At this point, the photo will be taken, and it will be available to the imageReader object, so the onImageAvailable method will be called automatically now. To fetch the photo in the form of an Image object, call imageReader.acquireNextImage. To write this image to the external storage, we must convert it into bytes. To do so, first create a ByteBuffer object. Call the getPlanes method of the image, and then get the buffer at plane 0. And now we need to generate a filename for our photo.
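Draining the plane's ByteBuffer into a byte array works the same way in plain Java, so here is a small runnable sketch; the wrapped buffer below simply stands in for what image.getPlanes()[0].getBuffer() returns on a real device.

```java
import java.nio.ByteBuffer;

public class BufferCopyDemo {

    // Copies all remaining bytes out of a ByteBuffer, as done with the
    // JPEG plane buffer before writing the photo to disk.
    static byte[] toByteArray(ByteBuffer buffer) {
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes); // fills the array and advances the buffer position
        return bytes;
    }

    public static void main(String[] args) {
        // Stand-in for image.getPlanes()[0].getBuffer() in this sketch.
        ByteBuffer buffer = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
        byte[] data = toByteArray(buffer);
        System.out.println(data.length); // prints 4
    }
}
```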
I'm going to put that logic inside another class, so create a new Java class and call it Util. Here, create a new static method called getANewFilename. To keep things simple, let's generate the filename based on the date and time. So create a SimpleDateFormat object, and set its pattern to dd-MM-yyyy-HH-mm-ss. This uses the full date and time, including the seconds. Let us now return a filename that always has a photo_ prefix. Format the current date using the format method of the dateFormat object, and also concatenate it with the .jpg extension. So, that's our file name.

Get back to MainActivity. Here, create a fileName string and initialize it using Util.getANewFilename. We must now determine the path of the directory in which the photos must be stored. So call Environment.getExternalStoragePublicDirectory, and pass Environment.DIRECTORY_PICTURES to it. Based on the storage directory and the filename, we can now create a photoFile object.

Create a new byte array now, and let its size be equal to the number of bytes remaining in the byte buffer. And then pass the array to the get method of the byte buffer to load all the bytes. At this point, we can write the bytes to the external storage. File I/O operations must be performed inside a different thread. So, create a new Thread here, override its run method, and then call its start method immediately. Inside the run method, create a new FileOutputStream for the photoFile object, make photoFile final, and wrap the call in a try-catch block. Next, call the write method of the FileOutputStream and pass the data array to it, which must again be made final. Also, add a catch clause for IOException. Once the write operation is complete, you can close the fileOutputStream.

If you run the app now, you will be able to take photos and store them on your SD card. In this lesson, you learned how to take photos using the Camera2 API.
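The Util helper is plain Java, so it can be sketched in full. The exact separators in the pattern are an assumption (the lesson only spells out the fields); note that in SimpleDateFormat, MM is the month while mm is the minutes, and HH is the 24-hour clock, which is easy to mix up.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class Util {

    // Generates a filename from the current date and time,
    // e.g. photo_27-03-2024-14-05-09.jpg (separators are illustrative).
    public static String getANewFilename() {
        // MM = month, mm = minutes, HH = hour of day (0-23).
        SimpleDateFormat dateFormat =
                new SimpleDateFormat("dd-MM-yyyy-HH-mm-ss", Locale.US);
        return "photo_" + dateFormat.format(new Date()) + ".jpg";
    }
}
```

Because the pattern resolves down to the second, two photos taken within the same second would collide; appending System.currentTimeMillis() instead would be one way to guarantee uniqueness.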
You also learned how to lock the focus of the camera and run a pre-capture sequence to adjust the exposure levels. Thanks for watching.