A quick look back at how cameras got into our pockets
These days, the cameras built into our smartphones are our “digital eyes”. They are a mandatory tool in the arsenal of these Swiss Army knives we rely on to meet the demands of modern life. From a strictly megapixel standpoint, it has been a long time since these cameras got “enough” pixels for the average user to photograph that good-looking meal, capture that epic foreign sunset, and record a selfie video, all for instant online sharing. Would anyone these days agree to buy a mobile phone without at least one camera lens at the back to point and shoot, and one at the front for selfies (which also makes for a good mirror)? I doubt it.
When manufacturers started introducing phones with built-in cameras to the consumer market somewhere in the mid-2000s, with barely enough resolution for documenting daily life, they enabled a more immersive and visual virtual world than that of the early days of the internet. Smartphone makers probably never imagined they would start a race against camera manufacturers over visual content creation in this still relatively new “virtual realm”.
In the second half of the 2000s, faster 3G cellular networks allowed app users of social media platforms to outnumber those browsing from their computers. The last string tying us to our desktops for sharing content was cut. Companies like Apple sustained this mobile revolution by investing heavily in solid hardware and software capable of producing reliable, consistent visual files, ready to be uploaded to any of these platforms. That became the gold standard for other manufacturers to follow in their quest for the best image and video machine in our pockets. Google, on the other hand, after decades of building its machine learning might, introduced another cornerstone of smartphone photography with “computational photography”. What phones lack in hardware, they compensate for with software trickery.
Nowadays, imaging sensors have miniaturized to such an extent that cameras have become a ubiquitous commodity. Even modern cars and drones carry them for purposes like obstacle awareness. My first camera phone, almost 20 years ago (a Sagem my700x), boasted a rather modest 1.3 megapixels; these days I have at least 11 cameras ranging from 8 to 12 megapixels at my disposal, spread across the handful of smartphones I use, each with a multi-lens camera system. Just in case you are wondering: by no means do I use all of them.
From high expectations to the reality of smartphone photography
All this progress is pretty amazing. I am a fan. I even wrote an article some time ago with tips on improving the craft of smartphone photography. But despite the technological awe, I dare say that in many cases the marketing promises manufacturers make fall short, and the actual results we get from these devices are not professional (“pro”). Or not always, at least. What we see in their keynotes is, at best, achieved under strictly controlled scenarios and heavy post-production editing. Some companies don’t even bother to try, and have been caught using photos in their presentations that were actually taken with professional DSLR cameras.
Naturally, there are reasons to prefer a phone over a standalone camera system. After all, it not only takes pictures and videos; it also makes calls, organizes our lives, and guides us where to go. I believe the low-key ethos of these devices is that “anyone, anywhere, anytime can create a visual record of their lives”. This could be why they also make exceptional travel cameras.
After making both mobile and standalone cameras part of my regular photography craft, a question often pops into my head: what is a smartphone camera good for these days? I have compared the performance of both kinds of devices under the same conditions, sometimes taking the same photos and videos, and there are aspects where I believe these pocket wonders have taken the lead, as well as aspects that hint at where the next innovation might come from. I will translate that question into a list of strengths and weaknesses of the artistic capabilities of smartphone cameras compared to their main competitors, DSLR and mirrorless cameras, and try to elaborate on the situations where each kind of device is the most suitable for photography.
Where does a smartphone camera excel?
- First and foremost, its portability. The advantages in size and ease of use are pretty obvious. Even a small mirrorless camera with a decent lens attached, plus its camera bag, can weigh around one kilogram, killing the “portability” argument compared to even the heftiest 2022 flagship phone, which weighs around 250 grams with a thick case. It is hard to deny the convenience of carrying a very capable camera in your pocket, compared to the hassle of hauling a bigger camera body, a few lenses, maybe a tripod, filters, and so on.
- When the elements of exposure are all favorable. In other words, low ISO and quick shutter speeds that are optimal for the fixed f-number (aperture) of the lens in the vast majority of smartphones. This is often the case in well-lit scenarios, such as daylight shots or studios with generous lighting setups, where mobile cameras can easily produce images with good levels of contrast and sharpness. I also find this particularly useful for landscape photography, where foreground and background separation is not that important. The results from a smartphone camera can be comparable to those of a (more?) professional setup.
- HDR images and videos. HDR stands for “High Dynamic Range”, and in simple terms it is a process in which the camera takes a series of under-, over- and normally exposed images (usually three) that are then merged into a final photo that doesn’t look clipped in the bright or dark areas. At this point, most photography enthusiasts shooting with a phone are probably well used to seeing images with well-balanced contrast without even noticing that this function is active. But what feels like magic happening in a few seconds can be a tortuous and lengthy process for photographers with a chunky interchangeable-lens system: it requires the technical knowledge to bracket that series of photos, export them to editing software, and do further tweaking to get a usable final product. The same applies to video, where the phone’s System on a Chip does all this processing frame by frame. Just thinking about it is mind-blowing.
- When shooting in extreme weather conditions. Of all the true innovations and weird gimmicks of newer phones, the Ingress Protection (IP) certification for water resistance is a feature that actually makes a lot of sense. I can vouch for it first-hand from the time I traveled to Nepal to hike in the Himalayas a couple of years ago. Snow started to fall unforgivingly during our daily hike towards the last point of the route. My camera, which was not weather-sealed, was unsuitable for that kind of weather and stayed in my backpack wrapped in a waterproof cover for the whole day. My Pixel 1, on the other hand, was the only camera I could use to document that part of the trip. I brought back decent 12-megapixel photos and videos at a respectable 1080p resolution. It stayed in my hand the entire day, enduring low temperatures and snow (water) all along.
- When taking stabilized footage. In recent years, smartphones have improved noticeably faster in the video department than their standalone counterparts, and image stabilization is at the center of this. It started with electronic image stabilization (EIS), an exciting feature that steadies footage by applying a small crop (zoom-in) to the image to compensate for undesired movements. But the really impressive breakthrough came when phones started to feature optical image stabilization (OIS), which physically shifts lens elements (or, in recent models, the sensor itself) to counteract handheld movement. Although in-body stabilization has existed for some years in mirrorless cameras, it is still reserved for high-end models worth thousands of dollars. In the realm of smartphones, though, almost every mid-range phone has some interpretation of this technology, not to mention the flagship models. Two out of the three lenses on the back of my iPhone 13 Pro are stabilized, while my Canon mirrorless camera only offers EIS.
- A rapid start-up time. Taking less than three seconds to activate, a smartphone camera, built purposely for instant captures, makes it easier to catch the right moment when it happens. I have enjoyed this advantage for many years and find it particularly useful for travel and street photography, where action unfolds in a casual, unintentional fashion. In these almost unpredictable scenarios there is barely any time to go from “off” to dialed-in exposure settings; they need to be right for the moment unfolding before your eyes.
- Immediate online sharing. The software controlling the camera hardware is tuned with the very purpose of uploading content. With the rise of YouTube and other content creation/consumption platforms, entertainment is being reshaped for smaller screens, and it has become easier and faster to create content that doesn’t have to be perfect but does have to be engaging. People these days are looking for the phone that lets them share the best-looking photo with little to no editing. A harmonious software and hardware combination that delivers such results is therefore a major selling point for buyers eager to post their content on demand, and manufacturers have become very good at meeting that demand. In contrast, uploading a JPEG file stored on a camera can take several steps, depending on the age of the model.
- Sustained R&D investment with amazing results. Periscope lenses for 10x and 30x zoom, image stabilization, and other impressive technologies built into our phones these days were developments already present in other kinds of cameras. The fascinating part is how those technologies eventually shrank and came to live within the minuscule frame of our phones, thanks to the massive investment poured into this industry. Who would have thought 10 years ago that phone cameras would be able to produce RAW files with an impressive 14 stops of dynamic range? The concept making the most noise these days is computational photography, which I mentioned in the introduction. It was conceived to compensate for the lack of bigger, better hardware and to get the most out of every snap. Unlike their standalone counterparts, smartphones take advantage of the massive processing power of their ever-improving SoC (System on a Chip) to perform all this digital witchcraft and deliver, at the very least, comparable results.
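The HDR merging described in the list above can be sketched as a tiny exposure-fusion routine. This is a deliberately simplified illustration, not any manufacturer's actual pipeline (real implementations also align the frames and work per color channel); the Gaussian "well-exposedness" weighting and the sample pixel values are my own assumptions.

```python
import numpy as np

def merge_exposures(frames):
    """Merge a bracketed exposure stack (simplified exposure fusion).

    Each frame holds pixel values in [0, 1]. A pixel is weighted by how
    "well exposed" it is: values near mid-grey count more than clipped
    highlights or crushed shadows, so the merged image keeps detail in
    both the bright and the dark areas of the scene.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    # Gaussian weight centered on mid-grey (0.5); near-clipped pixels get ~0
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) for f in frames]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, frames)) / total

# Toy example: three pixels of the same scene, bracketed under/normal/over
under  = np.array([0.02, 0.10, 0.45])  # shadows crushed
normal = np.array([0.10, 0.50, 0.95])  # highlights near clipping
over   = np.array([0.40, 0.90, 1.00])  # highlights clipped
merged = merge_exposures([under, normal, over])
```

The per-pixel weighted average is why the result looks balanced: each area of the image is dominated by the frame that exposed it best.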
OK. If the cameras in our pockets are so good, why do professional photographers still hesitate to go out with just a phone in hand, or fail to look at these devices with the same love and devotion they reserve for their DSLR and mirrorless cameras and lenses? Well, there are some good reasons…
Where does a smartphone camera struggle?
- Starting with the most important: sensor size. Getting smaller and thinner with every iteration is not a good thing for phone cameras, because the bigger a camera's sensor, the better it is at capturing light. Unfortunately for smartphone photography enthusiasts, to be profitable and make business sense these devices have to do more than take great photos and videos. Space for better camera hardware is therefore always compromised, competing for the scarce real estate inside most phones. Even entry-level mirrorless cameras will inherently do a better job in image quality, with more pixels to play with and higher resolution, at least with current image sensor technology, simply because they are dedicated devices built for the job.
- When the elements of exposure are not favorable. Another problem derived from a small sensor is the poor quality of photos taken in low light. A great part of computational photography, pioneered by Google and now followed by every manufacturer, has gone into tackling this issue by giving these cameras the ability to “magically” take photos even when light is scarce, and even without a tripod. When the camera app is opened at night, for instance, the phone takes an instant to evaluate the scene and decides whether to switch to its “night mode”. This feature, though, still feels half-baked, yielding inconsistent results that range from truly incredible to dreadfully grainy. Part of the problem is the stubbornness of phone makers in refusing to provide control over shutter speed, ISO and aperture, or at least the option to do so. The unmatched flexibility of adjusting lens and body settings on a standalone camera in tricky conditions produces far better results, with low image noise and low levels of frustration too, even with a cheap lens attached.
- Manual controls and auto exposure. Unless you buy a third-party app to gain more control over the hardware, native camera apps keep smartphone users away from the creative possibilities of the camera system. No “pro” or “ultra” label counts for me if I cannot adjust the exposure triangle of my shot, or the white balance (how warm or cold) of the video or image I am taking. Moreover, there are optical aberrations inherent to the lens construction that can be addressed effectively when manual control is possible, such as chromatic aberration (an unfocused purple or magenta “ghost” outline in a photo), which remains quite severe and untamed in smartphone photography in general. Only a few niche phones offer this level of control. Sony’s Xperia Pro-I and Xperia 1 III are the most remarkable examples of truly pro software and hardware, but they come with a hefty price tag too.
- Background separation. Yet another problem derived from the small sensor size is the inability of smartphone cameras to produce the coveted “bokeh” in an image. True, it is possible to get a nice background blur with a phone, but to do so the subject has to be very close to the lens. Computational photography comes to save the day here. Sort of. The “portrait mode” of modern phones can emulate the blur behind a subject to create that magazine-shot separation. However, similar to the night mode above, this feature still feels more like a gimmick than an actual professional tool, due to the software's inaccuracy in “deciding” what is in the foreground and what is not. On the video side, this inability to separate planes makes these cameras unusable for takes that require focus pulls (racking back and forth) between subjects. The sensor size and wide(r)-aperture lenses of big cameras allow organic control over depth of field. I can blur backgrounds in portraits for real, and the results trump any portrait mode. This isn’t a fallible computational workaround; it is the real thing, and it is easy to tell.
- Lens construction. I find it unusual that so little is said online about the fabrication of smartphone lenses, or the attention it needs to produce good still or moving images, so let me dedicate the following lines to it. While the optical elements of DSLR and mirrorless camera lenses are all made of glass, those of smartphones are still made of some type of plastic, except for the outermost element, which is usually scratch-resistant glass. Glass is a superior material for letting light pass through while preserving detail and color by the time it reaches the sensor. The inferior plastic construction results in loss of sharpness and contrast in the image, regardless of the awesomeness of the sensor, as well as catastrophic flaring whenever the camera is pointed toward bright light sources. While most of those bigger lenses suffer from similar problems, there are practical solutions such as lens hoods, filters or aperture adjustments. Phone lenses, on the other hand, are completely defenseless against these optical aberrations. Again, some manufacturers like Sony are taking their photography-oriented phones to the next level by applying their mirrorless know-how to their mobile phones: the Xperia Pro-I boasts Zeiss optics. Whether it can deliver truly professional images is still to be determined.
- Each camera has its own factory tuning. This gives photos and videos a particular color cast, warmth or coldness. In fact, this is a matter of extensive discussion among online reviewers, who compare the results of different phone models pointed at the same scene yet rendering varied “interpretations” of it. Some manufacturers like Apple have a well-established warm, rich-colored look that people have come to recognize almost immediately. I have the opportunity to verify this with the different phones I happen to own, from a low-end workhorse to the camera-on-a-phone iPhone 13 Pro. This factory tuning can always be overridden if the phone can shoot RAW, by adjusting these parameters in editing software. Doing so, however, defeats the smartphone ethos of practicality and immediacy: snapping and sharing a moment as it happens.
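The manual-control complaint in the list above boils down to the exposure triangle: aperture, shutter speed and ISO trading off against one another at constant exposure. As a quick sketch of why those dials matter, here is the standard exposure-value relation; the specific f-numbers and shutter speeds are just illustrative picks, not recommendations.

```python
import math

def exposure_value(f_number, shutter_s):
    """Standard exposure value at base ISO: EV = log2(N^2 / t).

    Settings that share the same EV admit the same amount of light, which
    is exactly the trade-off manual controls allow: open the aperture one
    stop and halve the shutter time, and the exposure stays the same
    while the look (depth of field, motion blur) changes.
    """
    return math.log2(f_number ** 2 / shutter_s)

# f/2.8 at 1/125 s and f/4 at 1/60 s are within a fraction of a stop:
ev_a = exposure_value(2.8, 1 / 125)
ev_b = exposure_value(4.0, 1 / 60)

# Halving the shutter time at the same aperture adds exactly one stop:
ev_c = exposure_value(2.8, 1 / 250)
```

A fixed-aperture phone that hides shutter and ISO leaves you none of these trades; a camera with manual controls leaves you all of them.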
The gap between mobile and professional content creation is closing fast…or just getting blurred
If I had written this article a few years ago, it would have pointed out all the reasons standalone cameras are superior to smartphone cameras in every way. Not so long ago, phone cameras really were little more than simple light sensors. With investment in the smartphone industry increasing steadily, the discussion has become more relevant.
As a photographer, I still use a phone for photos and videos. But I do not sell those files to a client; instead, I can immediately share them with family and friends over cellular or wifi networks. This is the maxim of smartphones: content created on the internet, for online consumers, on small screens. What kind of content do you want to create? For which audience? Through which channels? Answering these questions can tell you “what is a smartphone camera good for?”
Mobile phones offer a very comfortable and modest starting point for content creation. They have paved the way for online influencers, most of whom started their journey with a phone, a tripod and a basic light setup. In a world where sharing photos and videos on social platforms is an integral part of modern life, it only takes a good plan and consistency for anyone to start creating, without the technical aspect being a hindrance.
I believe smartphone cameras still have a long way to go to be on par with their professional counterparts, no matter how wonderful Google, Apple and Samsung say they are, and this will remain the case for some time. When it comes to still images, sensor size, pixel count, overall sharpness and lens build will limit the usefulness of these cameras for paper prints or giant banners. Their inability to interchange lenses and expose manual controls also limits their artistic potential.
On the videography side, directors of photography do not dare rely even on full-frame camera bodies for the job, despite their mesmerizing capabilities, due to more specialized needs such as shooting profiles that retain more information from the scene. But who knows? Professional cameras are shrinking from impractical, chunky sizes to nimble mirrorless bodies, while smartphone camera hardware is getting bigger and its software processing powerful enough to handle higher resolutions and frame rates, offering cine-like features. Sooner rather than later, we might be admiring a gallery or watching a motion picture executed partially or entirely on a smartphone.