Four considerations for ensuring optimal video stabilization for cameras in motion

Jan. 20, 2021
Taking time during development to tune the camera’s stabilization software ensures system success.

Johan Svennsson

Over the past decade, the smartphone industry helped drive innovations in video stabilization technology (Figure 1). Now the same technology stands to improve video generated by other camera types, but successful implementation into cameras in motion requires tuning. 

When tuning video stabilization software for a camera, one must weigh key considerations in four areas of camera performance to make the right choices for a given application.

Battery life and performance

In designing a drone that will film critical operations for a long period of time, for example, one must ensure that video stabilization processing won’t hurt battery life. In such a use case and in similar live video applications, a design team will likely opt for real-time processing, a capability that can easily take a toll on a product’s processing capacity and battery consumption. One way to compensate is to streamline video stabilization so it consumes slightly less processing power and thus preserves battery life. For example, algorithms can be tuned to correct more or less of a video stream depending on how powerful the underlying system is, thereby preserving power. On the other hand, if the device’s battery life is already more than satisfactory for its application, the video stabilization performance can be tuned up, resulting in even better stabilization without draining too much power.
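As a rough illustration of how such a tradeoff might be expressed in a tuning layer, the Python sketch below selects a hypothetical stabilization profile from an available power budget. The tier thresholds, parameter names, and values are invented for illustration and do not come from any particular stabilization product.

```python
# Hypothetical tuning sketch: scale stabilization effort to the power budget.
# Thresholds and parameter names are illustrative assumptions only.

def pick_stabilization_profile(power_budget_mw):
    """Return an example (crop margin, smoothing window) tuning pair.

    A larger crop margin and a longer smoothing window generally give
    steadier video, but they also mean more pixels and more frames to
    process, which costs battery on a constrained device.
    """
    if power_budget_mw < 500:       # tight budget: correct less, save power
        return {"crop_margin": 0.05, "smoothing_frames": 5}
    if power_budget_mw < 1500:      # mid budget: balanced correction
        return {"crop_margin": 0.10, "smoothing_frames": 15}
    return {"crop_margin": 0.15, "smoothing_frames": 30}  # headroom: tune up


print(pick_stabilization_profile(800))
# {'crop_margin': 0.1, 'smoothing_frames': 15}
```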

Depending on the application, video stabilization performance can be tuned down and resolution tuned up—or vice versa—by configuring how much of the frame is cropped. Image quality loss when cropping video becomes less noticeable when using a higher resolution image sensor than when using a lower resolution sensor. Just keep in mind, however, that higher resolutions will require more battery power for processing due to the higher number of pixels.

Field of view and resolution

Video stabilization reduces field of view (FOV) by using a cropped region of a full-frame video in order to steady the image (Figure 2), which may be fine depending on the application. With body cameras, for example, one might sacrifice a wider FOV for better video stabilization. Imagine a police officer chasing a suspect and recording video on a smartphone, which usually features a large FOV. On that platform, standard video stabilization does not suit such quick movements and results in a shaky, potentially unusable video. Because a body camera needs stable video from a more limited FOV—the view directly in front of the officer—one can tune up the camera’s video stabilization software to ensure detailed, usable footage. Other situations such as moving surveillance cameras may require a broader FOV to monitor a wider area. In a case like this, it may be possible to tune down the video stabilization without sacrificing quality.
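To put a number on that tradeoff, the short calculation below estimates the horizontal FOV that remains after stabilization keeps only a central portion of the frame width. It assumes a simple rectilinear lens model, and the example values (a 120-degree lens with 80% of the width kept) are illustrative assumptions rather than figures from the article.

```python
import math

def cropped_hfov(full_hfov_deg, keep_fraction):
    """Horizontal FOV remaining after stabilization keeps `keep_fraction`
    of the frame width (simple rectilinear lens model, distortion ignored)."""
    half_angle = math.radians(full_hfov_deg) / 2
    return math.degrees(2 * math.atan(keep_fraction * math.tan(half_angle)))

# Example values: a 120-degree lens with 20% of the width cropped away in
# total for stabilization headroom leaves roughly a 108-degree view.
print(round(cropped_hfov(120.0, 0.80), 1))
```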

In addition, video stabilization can reduce resolution, depending on the device maker’s implementation. Electronic image stabilization processing intelligently crops each frame in a video, and the way the cropped region shifts from frame to frame to counter motion is what makes the video stable. In other words, when using full HD, 1920 x 1080 resolution video and applying stabilization, the resolution will decrease slightly. Software interpolation can help, but for design teams willing to make the extra power investment, advanced stabilization software can actually pull cropped pixels from the camera sensor to maintain full resolution.
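A rough worked example makes the arithmetic concrete. The 10% margin per side is an assumed figure, not one from the article, and the final step simply restates the idea described above of reading extra pixels from a higher-resolution sensor.

```python
full_w, full_h = 1920, 1080   # full HD delivery target
margin = 0.10                 # assumed crop margin on each side of the frame

# A stabilized crop taken from a native 1080p stream loses resolution...
crop_w = int(full_w * (1 - 2 * margin))   # 1536
crop_h = int(full_h * (1 - 2 * margin))   # 864

# ...so delivering true 1920x1080 output means either interpolating the
# crop back up (softer detail) or reading a larger region from a
# higher-resolution sensor so the crop itself still spans 1920x1080 pixels.
needed_w = round(full_w / (1 - 2 * margin))   # 2400
needed_h = round(full_h / (1 - 2 * margin))   # 1350
print(crop_w, crop_h, needed_w, needed_h)
```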


Motion blur

Finding the right balance between acceptable motion blur and noise becomes necessary when tuning a camera’s video stabilization software. Video artifacts such as motion blur exist in most video feeds from the outset, but applying video stabilization may increase their visibility. Introducing motion-blur reduction processing may help compensate, but it also may increase video noise levels.

The amount of motion blur depends on lighting conditions and how well the device’s image sensor handles light. Consider whether the camera will be frequently deployed in low-light conditions: the less external light available, such as sunlight, the more motion blur becomes an issue, and the more it affects how one tunes a device’s video stabilization software.
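As a back-of-the-envelope illustration of why lighting drives the blur budget, the sketch below estimates the streak length in pixels for a camera panning during a single exposure. The panning rate, FOV, and exposure times are assumed example values, not measurements from any specific device.

```python
def blur_pixels(angular_rate_dps, exposure_s, hfov_deg, width_px):
    """Approximate motion-blur streak length in pixels for a camera
    panning at `angular_rate_dps` during one exposure."""
    deg_per_pixel = hfov_deg / width_px
    return (angular_rate_dps * exposure_s) / deg_per_pixel

# Assumed example: a 1920-pixel-wide, 90-degree camera panning at 30 deg/s.
# A bright-light 1/500 s exposure smears motion across roughly one pixel;
# a low-light 1/30 s exposure smears it across more than 20 pixels.
print(round(blur_pixels(30, 1 / 500, 90, 1920), 1))   # ~1.3
print(round(blur_pixels(30, 1 / 30, 90, 1920), 1))    # ~21.3
```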

Horizon correction

A powerful feature of video stabilization, horizon correction essentially involves leveling a video while also keeping it steady. For example, deploying a drone to shoot quality video in windy conditions requires compensating for the wind’s impact on the drone’s view. By design, a drone will necessarily tilt with and against the wind to move in the right direction, which can make the video extra shaky and unbalanced.

One solution is to tune the drone’s horizon correction to compensate for a wider range of angles, but such processing can come at the cost of FOV. In other operating environments, FOV may be more important, and tuning the camera’s horizon correction to compensate for a narrower range of angles can still deliver satisfactory results. In either case, it’s best to simulate real-world conditions for a given application and analyze the video output to determine horizon correction priorities.
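The geometric cost can be estimated with a simple model: rotate the frame to level the horizon, then crop back to the original aspect ratio. The sketch below computes the fraction of each linear dimension that survives for a given roll angle; the 1920x1080 frame and 10-degree roll budget are assumed example values, and real stabilization pipelines may handle the rotation differently.

```python
import math

def horizon_crop_scale(width, height, roll_deg):
    """Fraction of each linear dimension that survives after rotating the
    frame by `roll_deg` to level the horizon and cropping back to the
    original aspect ratio (centered crop, simple rotate-then-crop model)."""
    t = math.radians(abs(roll_deg))
    fit_w = width / (width * math.cos(t) + height * math.sin(t))
    fit_h = height / (width * math.sin(t) + height * math.cos(t))
    return min(fit_w, fit_h)

# Assumed example: a 1920x1080 frame absorbing up to 10 degrees of roll
# keeps about 77% of each dimension, so the roll budget chosen during
# tuning trades directly against field of view.
print(round(horizon_crop_scale(1920, 1080, 10), 2))   # 0.77
```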

Analyzing needs and setting tuning priorities up front is vitally important. When it comes time to start tuning a device’s video stabilization, the work will be easier with flexible tools and the necessary documentation, including information about video metadata and the results of calibration tests, which indicate how far one can adjust values up or down based on how the camera will be used.

Johan Svennsson is Chief Technology Officer at IMINT Intelligence (Uppsala, Sweden; www.weareimint.com).
