Complaints about camera micromanagement are common in reviews for 3D tactical shooters on touch devices. It makes sense — controlling the camera’s movement, rotation, and zooming while keeping touches distinct from issuing orders and commands is hard. Add in blocked visibility from hands and fingers on a device that is most likely being held while played, and you have a recipe for frustration.
For Static Sky, we hope to avoid this common pitfall by removing camera management altogether and introducing a system for automatic camera control.
Before we started designing, we set a few absolute requirements:
- Automatic camera movement can’t be allowed to cause the player to miss or misplace commands — the system must be both stable and predictable. Missing an enemy because the camera jerked back, causing the enemy’s position to change on screen, isn’t just frustrating, it is unforgivable.
- The player’s inability to zoom or pan the camera should never prevent them from taking appropriate action or require additional interactions.
- Likewise, that loss of control should never be allowed to lead to lost contextual awareness. If enemies come from off screen, shooting at the player before they are shown by the camera, that should be an intentional aspect of the level design, not an artifact of the camera system.
We wanted to build a camera system that would, hopefully, never be noticed — one where everything that a player wants or needs to interact with just happens to be there, ready to be tapped.
Here’s how we did it.
This system starts with a list of contextually relevant objects — selected characters, nearby enemies, interactive objects, etc. — based on current character selection and the states of various objects. The bounding boxes of these objects are combined to create an overall target volume that the camera will attempt to keep framed.
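A minimal sketch of that first step, in Python for illustration (the names and the axis-aligned box representation are assumptions, not our actual implementation): the target volume is simply the union of the relevant objects’ bounding boxes.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box for one contextually relevant object."""
    min_x: float
    min_y: float
    min_z: float
    max_x: float
    max_y: float
    max_z: float

def target_volume(boxes):
    """Union of bounding boxes: the overall volume the camera tries to frame."""
    return AABB(
        min(b.min_x for b in boxes),
        min(b.min_y for b in boxes),
        min(b.min_z for b in boxes),
        max(b.max_x for b in boxes),
        max(b.max_y for b in boxes),
        max(b.max_z for b in boxes),
    )

# e.g. a selected character plus one nearby enemy
vol = target_volume([AABB(0, 0, 0, 1, 2, 1), AABB(3, 0, 2, 4, 2, 3)])
```

As the list of relevant objects changes, the volume grows or shrinks, and the camera goal described below moves with it.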
Our first challenge was determining what is contextually relevant. Our early prototypes, for example, added the unvisited corner points along the path of a moving character to the target volume. In the abstract example of our prototype level it made sense, but in actual levels it resulted in too many confusing and frustrating moments. We also found that some types of objects needed special consideration and that some seemingly obvious things, like cover zones, actually shouldn’t be automatically added when in range (camera translation wasn’t a clear enough hint, and player focus was on what they were taking cover from rather than the cover itself). In the end, we settled on a list of rules for which types of things to add and when to add them.
The target volume is used to calculate a goal position for the camera, which is reached by interpolating position independently along the local Z axis (goal distance) and local X-Y plane. Treating position and zoom independently adds to overall perceived stability, as adjustments to one can negate the need for adjusting the other. We found that it generally feels nicer when the system moves rather than zooms to keep things framed, but both are necessary in many circumstances.
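The independent pan/zoom interpolation can be sketched like this (a hedged illustration, not our engine code; the rate values and `approach` easing are assumptions). Pan on the local X-Y plane converges faster than distance along local Z, which is one simple way to get the "move rather than zoom" preference described above:

```python
def approach(current, goal, rate, dt):
    """Exponential ease toward goal; rate is the fraction closed per second."""
    t = 1.0 - (1.0 - rate) ** dt  # frame-rate-independent blend factor
    return current + (goal - current) * t

def step_camera(pan_xy, dist, goal_xy, goal_dist, dt,
                pan_rate=0.9, zoom_rate=0.5):
    """Advance camera pan (local X-Y) and zoom (local Z distance) separately.

    Because the two channels use different rates, a fast pan often reframes
    the target volume before any noticeable zoom is needed.
    """
    new_xy = tuple(approach(c, g, pan_rate, dt)
                   for c, g in zip(pan_xy, goal_xy))
    new_dist = approach(dist, goal_dist, zoom_rate, dt)
    return new_xy, new_dist

# One simulated second: pan has closed 90% of its gap, zoom only 50%.
pan, dist = step_camera((0.0, 0.0), 10.0, (10.0, 10.0), 20.0, dt=1.0)
```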
There are certain cases where a level designer or the player would like to control zoom, explicitly or implicitly, either for cinematic effect or to increase contextual awareness. Our system achieves this by artificially inflating the target camera distance by a value that falls back to zero over time. This happens automatically based on user action (when switching characters, taking cover, etc.) and can be controlled by trigger volumes in the world.
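The inflation idea reduces to a small amount of state: an extra distance that decays back to zero each frame. A sketch under assumed names and a linear decay (our actual decay curve and values may differ):

```python
class ZoomBoost:
    """Extra camera distance that falls back to zero over time."""

    def __init__(self, decay_per_second=2.0):
        self.extra = 0.0
        self.decay = decay_per_second

    def kick(self, amount):
        """Inflate zoom; called on character switch, taking cover, or by a
        trigger volume. max() keeps overlapping kicks from fighting."""
        self.extra = max(self.extra, amount)

    def update(self, dt):
        """Decay toward zero and return the current inflation."""
        self.extra = max(0.0, self.extra - self.decay * dt)
        return self.extra

boost = ZoomBoost()
boost.kick(5.0)  # e.g. the player just switched characters
goal_distance = 12.0 + boost.update(0.5)  # inflated target distance
```

Because the boost only affects the *goal* distance, the usual zoom interpolation still smooths the result, so the pull-back reads as a single deliberate move rather than a pop.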
Rotation management is avoided by using a fixed list of valid target rotations for the camera. We interpolate between these rotations using a zone-based trigger system. We also add a small clamped rotation to the camera’s target rotation based on the offset between the selected characters and the center of the focused bounding volume. These additive rotations help to minimize camera translation and provide the player with additional contextual awareness — helping the player see over a short wall or along a corridor without being fully aware that the camera is rotating at all.
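In sketch form (illustrative only; the zone names, degrees-per-unit scale, and the clamp range are all assumptions), the target yaw is one of a fixed set of valid rotations plus a small, clamped additive term driven by the selection’s offset from the focus volume’s center:

```python
# Fixed list of valid camera yaws, selected by the zone the action is in.
VALID_YAWS = {"atrium": 0.0, "corridor": 90.0, "dock": 225.0}

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def target_yaw(zone, char_x, focus_center_x,
               degrees_per_unit=2.0, max_additive=10.0):
    """Base yaw for the zone plus a small offset-driven additive rotation.

    The additive term is clamped to a few degrees either way, so it nudges
    the view (over a low wall, down a corridor) without the player ever
    perceiving free camera rotation.
    """
    base = VALID_YAWS[zone]
    additive = clamp((char_x - focus_center_x) * degrees_per_unit,
                     -max_additive, max_additive)
    return base + additive

yaw = target_yaw("corridor", char_x=12.0, focus_center_x=10.0)
```

The actual camera then interpolates toward `target_yaw` as zones change, so rotation stays as smooth and predictable as the pan and zoom channels.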
We knew that getting this to work perfectly wouldn’t be trivial, and we aren’t completely there yet, but our current prototype has made a lot of progress towards our goal.