In recent years, amid the repeated outbreaks of COVID-19, many traditional imaging brands have chosen to switch production lines, halt production, or even sell off parts of their business. On January 12, 2022, Canon Zhuhai Co., Ltd., established 32 years earlier, announced that it would suspend production and close its doors. Nikon, likewise, closed its production base in Jiangsu, China in 2017 and ended all camera production in Japan in 2021, while Panasonic, of the M4/3 camp, spun its digital camera business off from the parent company. Against this backdrop, what does the evolution of mobile phone photography look like vis-a-vis camera lens setups and technology?
Evolution of Mobile Phone Photography
According to industry statistics, in the nine years from 2012 to 2020, digital camera production capacity dropped from 10.93 million units to 1.029 million units, the number of lenses produced fell from 36.43 million to 12.29 million, and the number of cameras produced fell from 570,000 units to 94,000 units.
Interestingly, the timeline of this decline coincides with the rise of smartphones. According to IDC data, global smartphone shipments reached a historical peak of 1.473 billion units in 2016. Although the market has approached saturation in recent years, global smartphone shipments in 2021 were still 1.35 billion units.
What is surprising is that, against photography's history of more than 180 years, the first camera phone was released only in 2000, barely two decades ago. So how did mobile phone photography achieve "astronomical growth" in such a short period?
Mobile Phone Photography Growth Before 2010
Before 2010, mobile phone photography grew slowly. Smartphone hardware and software performance were still relatively weak, and the industry was in the midst of the transition from feature phones to smartphones.
As a result, mobile phone photography at this stage was quite limited, and most people still used traditional portable digital cameras to take pictures when traveling.
In 2009, the iPhone 3GS had only just gained video recording, and its still photos were only 3.2 megapixels. Social networking was still in its early, web-page era, and the photos shared there were mostly taken with traditional cameras, so almost all users had limited exposure to mobile phone photography.
The First Mobile Phone with a Built-in Camera in History Looks Very Retro
In September 2000, Sharp and Japanese mobile operator J-PHONE released the first Sharp J-SH04 mobile phone with a built-in 110,000-pixel CCD camera, but it did not receive much attention.
In the following years, the market’s enthusiasm for the camera function of mobile phones was significantly awakened, and this function has quickly become the selling point of mobile phones.
The Nokia 7650, Sony Ericsson T68i, and other products soon emerged; the Nokia 7650, with its 300,000-pixel camera, is remembered by many people born in the 1970s and 1980s as their first camera phone.
Mobile Photography in the Era of Feature Phones
Then, in 2006, came the Samsung B600, which drew on a wealth of compact-camera design experience: a 10-megapixel CCD camera and a subversive appearance. It was the first mobile phone with autofocus and offered 3x optical zoom plus 5x digital zoom.
The arrival of optical and digital zoom meant that, for the first time, the mobile phone stood level with the camera. In hindsight, however, we also witnessed the failure of phones that simply copied the design ideas of cameras.
In general, the early development of camera phones was actually synchronized with the evolution of the entire mobile phone industry. Cameras and other parts, such as screens and processors, constantly evolved.
Of course, feature phones still dominated, and Nokia's Symbian models, which could barely be called "smartphones," had few camera applications.
The second revolution of camera phones is obviously with the development of the iPhone and Android phones.
In 2007, the Birth of the Original iPhone Heralded the First Year of Modern Mobile Photography
2007 can be said to be the “first year” of modern mobile phone photography. This is because it was in this year that the first generation iPhone was born.
Although its camera had only 2 megapixels, its subversiveness lay in its integrated execution, from hardware through to the software ecosystem.
Most of the photography and video apps we use now have the best performance on iOS, and the appearance of the iPhone is a milestone for the entire photography field.
In 2010, Starting with the iPhone 4, Mobile Photography Entered a Stage of Rapid Development
When the first-generation iPhone and Android phones were released, they didn’t use photography as a selling point. For example, the first iPhone only had a built-in 2-megapixel non-autofocus camera, and the shooting effect was mediocre. Before 2010, Nokia, which still used the Symbian system, had a great advantage.
The major event that reversed this situation was the iPhone 4, launched by Apple in 2010. This smartphone, with a 480p front camera for FaceTime and a 5-megapixel rear camera, abandoned traditional design ideas in favor of a backside-illuminated sensor and algorithms.
Thanks to the explosive growth of applications on the iOS platform, many shooting and sharing applications have burst onto the scene, allowing users to develop the habit of using them whenever they want.
The iPhone 4, with Front and Rear Cameras, is the Starting Point for the Explosion of Mobile Photography.
The great leap forward in the iPhone 4's camera also prompted the evolution of Android, the classic representative being the Samsung Galaxy S series. The first Galaxy S likewise had a 480p front camera and a 5-megapixel rear camera.
The rear camera also supported autofocus, facial recognition, anti-shake, and panoramic shooting. These remain core technologies today, and it was in that era that they began to take root.
This period also marked the end of feature phones. Nokia launched the 12-megapixel N8 in the same period, and later the 41-megapixel 808 PureView, but because of the Symbian system's serious lag, neither won users over.
It can be seen that the user experience and application richness of the software platform had begun to take center stage, becoming as critical as, or even more critical than, hardware specifications.
Despite Its 41 Megapixels, the Failure of the Nokia 808 PureView Taught the World the Importance of Mobile Phone Operating Systems
Simply put, starting with the 5-megapixel iPhone 4 and under Apple's leadership, mobile photography entered a stage of rapid hardware and software development. The popularity of social media, WeChat, Twitter, Facebook, and the like, made photography a rigid social necessity, and mobile phone photography became the core of mobile phone design.
From this stage, the distribution of the iPhone, Android, Windows Phone, and other platforms has been basically finalized.
The improvement of mobile photography technology can be summarized as the diversification of image sensors and the progress of image processing engines.
At the same time, along with steady improvement in lens quality, CMOS sensors began to be custom-designed for mobile phones. The HTC One, for example, pushed the concept of the "UltraPixel," which enlarges the pixel area while reducing the total number of pixels to achieve better low-light shooting.
Apple's iPhone 5s used the same concept, but in a smarter way: it increased the pixel area and the sensor size proportionally, so the 8-megapixel resolution was retained.
The Nokia Lumia 1020 inherited the 41-megapixel sensor of the 808 PureView, whose ultra-high pixel density could still achieve "lossless zoom." At the same time, the Windows Phone platform was more capable than Symbian, the shooting experience and software applications were better, and the phone attracted a lot of attention at first.
Most of the Mobile Phone Photography Technologies Launched by Traditional Camera Manufacturers are Based on the Technical Characteristics of Their Traditional Cameras
At the same time, Samsung and Sony, the upstream CMOS giants, pulled plenty of tricks of their own. Samsung directly transplanted a 1/2.3-inch 16-megapixel sensor and 10x optical zoom lens from its digital cameras into the Galaxy S4 Zoom.
Beyond the image-quality improvement, the Android system also brought a much richer set of applications. Sony, for its part, focused on bringing the "G" lens of its traditional cameras and the BIONZ image-processing engine into the Xperia Z1.
The large 20-megapixel sensor delivered good image quality, and it is not hard to see that Sony still follows this "traditional camera into the phone" strategy today.
2016: The Pixel Battle Comes to an End, and the Multi-Camera Era Arrives.
Before 2016, "how many pixels are enough" for mobile phone cameras was still controversial.
In 2008, mainstream mobile phone cameras were 5-8 megapixels, with a single-pixel side length of 1.75 μm on the CMOS sensor; pixel counts then climbed while single-pixel side length shrank to 1.1 μm. In 2016, the Apple iPhone 6s Plus opened the legend of the "ancestral 12 megapixels," and single-pixel side length increased rather than decreased.
This fully shows that blindly increasing camera pixel counts had gradually fallen out of favor, given that a mobile phone sensor is at most about 1 inch.
The reason is that when a single pixel's side length is only about 1.1 μm, maintaining the signal-to-noise ratio requires driving the pixel at higher sensitivity, which means noise rises irreversibly. In other words, the design runs into the ceiling imposed by physical material properties and sensor structure.
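This ceiling is easy to sanity-check with a back-of-the-envelope calculation. The sketch below computes single-pixel side length from sensor dimensions and resolution; the sensor measurements are assumed round figures for illustration, not any particular phone's specification.

```python
import math

def pixel_pitch_um(sensor_width_mm, sensor_height_mm, megapixels):
    """Approximate single-pixel side length in micrometers, assuming
    square pixels and a pixel grid matching the sensor's aspect ratio."""
    total_pixels = megapixels * 1_000_000
    aspect = sensor_width_mm / sensor_height_mm
    # pixels_x * pixels_y = total  and  pixels_x / pixels_y = aspect
    pixels_y = math.sqrt(total_pixels / aspect)
    pixels_x = aspect * pixels_y
    return (sensor_width_mm / pixels_x) * 1000  # mm -> um

# A small-format sensor of roughly 5.6 x 4.2 mm (assumed figures):
print(round(pixel_pitch_um(5.6, 4.2, 12), 2))  # 1.4  (comfortable)
print(round(pixel_pitch_um(5.6, 4.2, 48), 2))  # 0.7  (well below 1.1 um)
```

Quadrupling the pixel count on the same silicon halves the pixel pitch, which is exactly the trade-off described above.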
We had also already seen traditional optical zoom designs fail on mobile phones. Therefore, how to achieve rich shooting functions on a phone became the key to a breakthrough in mobile phone photography at the time.
As early as 2011, HTC and LG took the lead in launching dual-camera mobile phones. In 2014, Honor and Coolpad joined them. In 2015, ZTE and 360 also launched dual-camera mobile phones.
Importantly, in 2016, Apple launched its first dual-camera model, the iPhone 7 Plus.
The popularity of multi-camera phones has opened the door to a new world of mobile photography.
Dual-camera schemes generally fall into two combinations: dual focal lengths, that is, one primary camera plus one telephoto or wide-angle; or a color + monochrome design that enhances low-light performance (or two color cameras used for depth-of-field calculation).
Obviously, these designs serve different purposes, and in the dual-camera era users were forced to choose between them. Still, everyone knows what happened later: triple-, quad-, and even penta-camera systems arrived and packaged all of these functions together.
Since 2015, multi-camera has become the mainstream mobile phone imaging technology.
According to IDC data, the penetration rate of dual cameras on Apple and Android phones peaked at 53% in 2019, after which the penetration rates of triple and quad cameras rose rapidly past 10%.
It is not hard to see that the combined penetration rate of multi-camera designs on Android and Apple phones exceeded 60% in 2021, with multi-camera phones occupying a dominant position.
Alongside the growing number of cameras, the number of elements in each mobile phone lens, and their materials, have also been quietly increasing. To optimize imaging, 35.6% of smartphones in 2018 used 5-element main-camera lenses, 64.3% used 6-element lenses, and 0.1% used 7-element lenses. The CC9 Pro, released by Xiaomi in 2019, was the first phone in the industry to use an 8-element lens.
The Mobile Phone Lens Is Also Quietly Evolving and Has Grown to Eight Plastic Elements
In terms of materials, the common lens materials at present are plastic and glass. Plastic lenses are low-cost and easy to mass-produce.
They are the mainstream choice for mobile phone lenses, but their optical quality is relatively weak. Glass elements perform better, but they are hard to mass-produce, costly, and therefore difficult to use widely in mobile phones.
Therefore, glass-plastic hybrid lenses became inevitable. For example, a hybrid lens composed of 6 plastic elements and 1 glass element can match the imaging of 8 plastic elements, and the smaller element count helps reduce the thickness of the camera module.
In addition, telephoto lenses for mobile phones reached a historic moment in 2019: the Huawei P30 Pro was the first to adopt a periscope lens design, and manufacturers such as Xiaomi, Samsung, Honor, and Vivo followed with periscope telephoto phones of their own.
The periscope design is itself derived from compact-camera-era technology: the telephoto lens is laid horizontally, perpendicular to the wide-angle lens, and a prism refracts light onto it. This significantly increases the camera's physical focal length while keeping the phone thin, but the processing difficulty is relatively high, and the requirements for the prism's refractive transmittance and placement accuracy are very demanding.
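As a quick illustration of what the longer physical focal length buys, the ratio of telephoto to main-camera focal length gives the marketed optical zoom factor. The focal lengths below are hypothetical 35mm-equivalent figures, not any specific model's spec sheet:

```python
def optical_zoom_factor(tele_equiv_mm: float, main_equiv_mm: float) -> float:
    """Optical zoom factor of a telephoto module relative to the main
    camera, both given as 35mm-equivalent focal lengths."""
    return tele_equiv_mm / main_equiv_mm

# A periscope module around 125 mm equivalent paired with a 25 mm main
# camera yields the "5x optical zoom" marketed on several 2019 flagships.
print(optical_zoom_factor(125, 25))  # 5.0
```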
Periscope ultra-telephoto lenses were the new eye-catcher in 2019.
However, the periscope telephoto has lately been on the way out. Since 2022, flagship phones on the new Snapdragon 8 platform, including models from Realme, iQOO, Honor, and OnePlus, as well as the Xiaomi 12 series released at the end of 2021, have quietly dropped the periscope telephoto, mostly replacing it with an ordinary telephoto lens.
The Xiaomi 12 Pro even dropped the telephoto lens entirely. Why?
First, the periscope lens has a more complex structure, higher cost, and higher adjustment difficulty than ordinary telephoto lenses. In addition, the periscope design takes up a lot of internal space.
Secondly, it has caused ethical disputes: many netizens have complained that some people use periscope telephotos for covert shooting, threatening personal privacy and security. Beyond flashy demos, moreover, ultra-high-magnification telephoto is not a commonly used function in mobile phone photography; most people only play with it when the phone is new.
So while periscope telephoto is a good promotional point for manufacturers, for users it is not that important, and its cost is high.
Electronic + optical image stabilization has become the industry direction
It is worth noting that the spread of telephoto lenses also made anti-shake a new pain point. Digital image stabilization on mobile phones first appeared on the LG Viewty in 2007, but it was not until 2012 that the Nokia Lumia 920 shipped with true optical image stabilization.
The Reno 10x Zoom, released by OPPO in 2019, was the first to combine electronic and optical image stabilization in a hybrid scheme. In 2020, the iPhone 12 Pro Max introduced sensor-shift stabilization, borrowing the in-body stabilization concept from the camera world, and OPPO later linked sensor-shift and optical stabilization to achieve hardware-level five-axis stabilization.
Vivo even built a micro gimbal directly into the X50 Pro to compensate for displacement jitter during video shooting. It can be said that anti-shake technology is here to stay, but in my opinion the future mainstream will still be the combination of electronic and optical image stabilization.
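The electronic half of that combination can be sketched very simply: given per-frame shake offsets (in practice estimated from gyroscope data), the pipeline keeps a central crop and slides the crop window opposite to the motion. This is a minimal illustrative sketch, not any vendor's actual implementation:

```python
import numpy as np

def eis_crop(frame, dx, dy, margin=32):
    """Minimal electronic image stabilization: sacrifice a border of
    `margin` pixels and shift the crop window opposite to the measured
    shake (dx, dy). Assumes |dx|, |dy| <= margin."""
    h, w = frame.shape[:2]
    x0 = margin - int(round(dx))
    y0 = margin - int(round(dy))
    return frame[y0:h - 2 * margin + y0, x0:w - 2 * margin + x0]

# A 480x640 frame shaken 5 px right and 3 px down: the crop moves the
# opposite way, so the subject stays put in the stabilized output.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(eis_crop(frame, dx=5, dy=3).shape)  # (416, 576, 3)
```

Optical stabilization performs the same compensation physically, by moving lens elements or the sensor, which is why combining the two covers both large, slow drift and small, fast jitter.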
With 3D perception lenses, we can scan environmental data and generate 3D models.
When it comes to cameras, many people may think they exist only for taking pictures. In fact, some efficient imaging technologies on modern smartphones are not directly used for photography at all: 3D perception.
In September 2017, the Apple iPhone X debuted 3D structured-light facial recognition, ushering in a new era of biometrics. Thanks to its accuracy, speed, and security, 3D facial recognition can be used for unlocking the phone and for face payment.
The 2020 iPhone 12 Pro Max also added a rear LiDAR camera. Since Apple released structured light, the Android camp has gradually promoted 3D perception as well, and rear ToF (Time of Flight) lenses emerged accordingly.
ToF is more suitable for 3D visual imaging technology for mobile phones than structured light. It has a lower cost, a wider range of applications, and is more favored by mobile phone manufacturers.
More and more manufacturers are beginning to try rear ToF solutions. Combined with the gradual commercialization of 5G, this has opened new development opportunities for 3D vision on mobile phones.
With the support of 5G, ToF lenses are just what VR and AR gaming scenarios need, so carrying a ToF lens in the 5G era will become a major trend in phone design. According to Techno Systems Research, the penetration rate of ToF lenses in smartphones was 3% in 2019 but is expected to exceed 30% by 2023, becoming standard on mid-to-high-end models.
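The principle behind a ToF lens is easy to state: light takes a measurable time to travel to the subject and back, so depth follows directly from that delay, or, for the indirect phase-based sensors common in phones, from the phase shift of a modulated signal. A minimal sketch of both formulas:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_ns):
    """Direct ToF: depth = c * t / 2 for a round-trip time t."""
    return C * round_trip_ns * 1e-9 / 2

def itof_depth_m(phase_rad, mod_freq_hz):
    """Indirect (phase) ToF: depth = c * phase / (4 * pi * f_mod),
    unambiguous only up to c / (2 * f_mod)."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

print(round(tof_depth_m(10.0), 3))            # ~1.499 m for a 10 ns round trip
print(round(itof_depth_m(math.pi, 20e6), 3))  # ~3.747 m at 20 MHz modulation
```

The nanosecond-scale timing involved is why ToF needs dedicated sensor hardware rather than an ordinary camera.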
Beyond 2018: Welcome to the era of Computational Photography
The essence of multi-camera is the fusion of data from multiple CMOS sensors, which means that entering the multi-camera era also means entering the era of algorithms.
At the launch of the iPhone 11 series, Apple Senior Vice President Philip Schiller introduced the concept of "computational photography" in the imaging system of the iPhone 11 Pro series, bringing the term to the general public for the first time.
Constrained by the Hardware Ceiling, Computational Photography Has Become a New Technological Breakthrough
The Magic 3 series of mobile phones released by Honor in the second half of 2021 adopts the computational photography technology of multi-main camera fusion.
When the primary camera and the wide-angle lens are fused, center sharpness can improve by up to 80%; when the primary camera and the telephoto lens are fused, center sharpness can improve by up to 180%.
Behind these numbers is multi-camera fusion technology, which links multiple cameras shooting simultaneously to optimize the image. The process has four major steps: scale alignment, pixel alignment, formulating a fusion strategy, and image fusion.
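The final fusion step can be reduced to a per-pixel weighted blend once the earlier alignment steps are done. The numpy sketch below is a toy illustration of that idea, with an artificial all-or-nothing weight map standing in for a real fusion strategy:

```python
import numpy as np

def fuse(main_img, aux_img, weight_map):
    """Per-pixel weighted blend of two already-aligned images.
    weight_map in [0, 1]: 1 means take the auxiliary camera's pixel."""
    w = weight_map[..., None]  # broadcast the weights over color channels
    return (1.0 - w) * main_img + w * aux_img

# Toy case: trust the (sharper) telephoto crop only in the center region.
main = np.full((100, 100, 3), 0.4)
tele = np.full((100, 100, 3), 0.8)
weights = np.zeros((100, 100))
weights[25:75, 25:75] = 1.0
out = fuse(main, tele, weights)
print(out[50, 50, 0], out[0, 0, 0])  # 0.8 0.4
```

A production pipeline would instead compute soft weights from local sharpness and confidence estimates, which is precisely the "fusion strategy" step named above.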
Primary Challenges of Computational Photography
Many challenges will be encountered during design:
- How to perform fusion as accurately and quickly as possible within limited computing power?
- How to handle abrupt transitions between fused and non-fused regions?
- How to tell which pixels need to be fused?
- What are the weights of each of the two input images when fusing the images?
- How to use the hardware support of the underlying system to achieve efficient computing?
- How does it work closely with other functions of the camera system?
It can be said that behind every shot on a modern mobile phone lies the hard work of countless engineers.
Multi-frame Synthesis and Pixel Merging in Modern Phone Photography
Multi-frame synthesis and pixel merging, combined with an artificial intelligence algorithm to complement color, constitute the software technology fortress of mobile phone photography.
Moreover, the boom in computational photography has reopened a workaround for designs previously constrained by physical ceilings, such as ultra-high-pixel CMOS. The Samsung Bright HMX already achieves 108 megapixels on a 1/1.33-inch sensor, yet most of the time it runs in a 12-megapixel mode with 9 pixels merged into 1, and uses multi-pixel integration to achieve single-frame HDR shooting.
In this mode it has the high sensitivity of a large pixel area, yet can switch to full resolution at any time. It also enabled 8K video shooting on phones, so the output is no longer a single specification and can flexibly meet different needs.
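Pixel binning itself is simple to picture: adjacent pixels are averaged into one larger effective pixel. Below is a grayscale numpy sketch of 9-in-1 (3x3) binning; real sensors bin within the color filter array, which this toy version ignores:

```python
import numpy as np

def bin_pixels(raw, k=3):
    """Average each k x k block: a 9-in-1 (k=3) readout trades resolution
    for roughly k^2 the light gathered per output pixel. Assumes the raw
    dimensions are divisible by k."""
    h, w = raw.shape
    return raw.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

raw = np.arange(36, dtype=float).reshape(6, 6)
binned = bin_pixels(raw, k=3)
print(binned.shape)   # (2, 2)
print(binned[0, 0])   # 7.0 (mean of the top-left 3x3 block)
```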
Google’s Contribution to Computational Mobile Photography
From the perspective of the contribution of computational photography, the importance of Google should not be ignored.
For example, the night mode everyone is now familiar with first appeared on the Google Pixel 3 in 2018, fusing a burst of up to 15 continuously shot frames. Its approach of AI-assisted color filling to compensate for the congenitally small sensors of phones was carried forward in later models such as the Apple iPhone 11 and Huawei P30 Pro.
There is also the super-resolution zoom algorithm: the phone automatically collects the slight offsets produced by hand jitter and uses them to fill in the pixels missing after digital zoom.
When no jitter is detected, the lens's optical-stabilization module is deliberately moved in a regular pattern to create slight offsets. This adds roughly 1.5x of usable zoom to the camera, which is especially helpful when switching between the main camera and the 2x telephoto: at 1.6x zoom, for instance, a conventional design can only crop the main camera's image, but with super-resolution zoom that problem disappears. Apple's Deep Fusion later adopted a similar approach.
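The core trick, placing several jittered low-resolution frames onto a denser grid, can be sketched in a few lines. This toy version assumes the sub-pixel offsets are already known (in practice they come from gyro data and image alignment) and uses nearest-grid-cell placement instead of proper kernel regression:

```python
import numpy as np

def super_res(frames, offsets, scale=2):
    """Merge low-res frames with known sub-pixel (dy, dx) offsets onto a
    scale-x denser grid, averaging wherever samples land."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for frame, (dy, dx) in zip(frames, offsets):
        yi = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        xi = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (yi, xi), frame)
        np.add.at(cnt, (yi, xi), 1)
    cnt[cnt == 0] = 1  # cells never sampled simply stay zero
    return acc / cnt

# Four frames offset by half a pixel cover all four sub-pixel positions,
# so a 4x4 input becomes a fully populated 8x8 output.
frames = [np.ones((4, 4))] * 4
offsets = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
hi = super_res(frames, offsets)
print(hi.shape)  # (8, 8)
```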
The Filter Algorithm Is a Major Achievement of Artificial Intelligence in Photography
The progress of artificial intelligence algorithms has promoted the development of mobile photography in derivative fields, such as social media apps, the most representative of which is the beauty algorithm/photo editing. At present, whether it is pictures, short videos, or live broadcasts, a large number of filters and enhancements have been added.
A deep neural network first detects the face. Once the facial key points are obtained, beauty operations such as face slimming, skin smoothing, and whitening can be applied to the face region of the picture, implemented with OpenGL ES on Android and with Metal on iOS.
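The geometry of such a filter is straightforward even without the GPU plumbing. Below is a numpy sketch of the whitening step alone: brighten pixels inside an elliptical face region whose center and axes would, in practice, be derived from the detected key points (all values here are illustrative):

```python
import numpy as np

def whiten_face(img, center, axes, strength=0.2):
    """Brighten an elliptical face region with a soft falloff to the edge.
    img: float RGB in [0, 1]; center/axes in pixels (from face key points)."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = center
    ay, ax = axes
    d = ((yy - cy) / ay) ** 2 + ((xx - cx) / ax) ** 2
    mask = np.clip(1.0 - d, 0.0, 1.0)[..., None]  # 1 at center, 0 outside
    return np.clip(img + strength * mask * (1.0 - img), 0.0, 1.0)

img = np.full((120, 120, 3), 0.5)  # flat mid-gray stand-in for a photo
out = whiten_face(img, center=(60, 60), axes=(40, 30))
print(float(out[60, 60, 0]), float(out[0, 0, 0]))  # 0.6 0.5
```

On a phone the same per-pixel math runs in a fragment shader, which is why OpenGL ES and Metal are the natural implementation layers.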
At this point, many people may ask: is computational photography still photography? After all, Huawei's telephoto shots of the moon caused plenty of discussion.
Huawei's 2019 patent application for "a method and electronic device for shooting the moon," recently disclosed, details how the phone automatically recognizes the moon, enters a special mode, adjusts focus and exposure parameters, and performs multi-frame compositing.
In our opinion, photography stopped being a simple chemical reaction when it entered the digital age: digital photography is fundamentally the conversion of light signals into electrical signals.
Software processing loses and discards a great deal of information, then supplements it again through algorithms, and different algorithms vary widely in quality.
If this kind of processing by traditional digital cameras can be recognized as real photography, then computational photography is essentially no different from it.
Computational photography simply uses the raw data more intelligently, applying mathematical means to restore as much reality as possible.
There is no purely automatic, untouched reproduction at the software level anyway. As for retouching: which master photographer does not post-process? And plenty of them engage in outright secondary creation. Considered carefully, there is little here worth arguing about.
The vitality of the camera continues on the mobile phone.
So, at present, mobile phone photography has settled into algorithm-first, hardware-second development. What, then, do mobile phone brands still need to do in the market?
One answer is to lean on the brand power of traditional cameras to achieve a breakthrough. After all, despite the sharp contrast in shipments, traditional cameras have always enjoyed a reputation for image quality among most users, and mobile phones cannot quickly displace professional cameras as productivity tools.
Huawei and Leica signed an innovation lab cooperation agreement to make the future of mobile photography more sustainable.
Therefore, in 2016, Huawei and Leica jointly released the Huawei P9 mobile phone. The phone is equipped with a set of dual cameras tuned by Leica.
A black-and-white photosensitive lens is responsible for collecting details and outlines, and a color-photosensitive lens is responsible for collecting colors. The cooperation has significantly enhanced the camera effect of the P9 and released the iconic image IP of “Huawei + Leica.”
Since then, professional camera manufacturers have gradually become an essential help for mobile phone manufacturers to hit the high-end market, and more and more mobile phone manufacturers have also joined the camp of cooperation with camera brands.
At the end of 2020, Vivo and Zeiss jointly launched the Vivo X60 series. Unlike Huawei and Leica, however, Vivo and Zeiss cooperated not only on tuning but also on hardware and process R&D. The Vivo X60 Pro+, for example, not only adopts the Vivo ZEISS Joint Imaging System but also adds two of Zeiss's classic highlights, its optical lens design and T* coating, further enhancing the professionalism of phone imaging.
In March 2021, OnePlus teamed up with Hasselblad. During the pre-sale period, the newly launched OnePlus 9 series exceeded 10,000 units in pre-sale in 5 minutes.
Obviously, with the blessing of traditional camera brands, the image of mobile phone photography has become more professional, which is undoubtedly good.
However, the co-branding of mobile phones and cameras is not new in recent years. Zeiss and Nokia have also been cooperating.
In addition, the Moto Z once offered a Hasselblad camera module. In this era of fast-changing phones, such one-offs leave little lasting memory, which is why Huawei and Leica established the Max Berek Innovation Lab to conduct in-depth R&D on new optical systems, computational imaging, and other fields, why Vivo and Zeiss set up a joint imaging lab, and why OnePlus will invest a total of $150 million in its Hasselblad collaboration over the next three years.
Summary: From Hardware to Software to Algorithms, the Influence of Mobile Photography Will Only Increase
Looking at the entire development history of mobile phone imaging technology, it is not difficult to see that the real technological explosion occurred in the past ten years.
Most noticeable is the popularity of multi-camera solutions, which has brought the functions and gameplay of mobile phone photography into an era of exponential growth.
At the end of 2021, Sony even launched the Xperia PRO-I with a 1-inch large-format CMOS, carrying over the focus and continuous-shooting algorithms of its traditional cameras, so that phone imaging hardware officially drew level with its "black card" compact series.
In addition, major brands have made great efforts to enhance the functions of special depth-of-field lenses, ultra-wide-angle lenses, ultra-telephoto lenses, cine lenses, ultra-macro lenses, and even ultra-spectral lenses.
It is not hard to see that, at this stage, mobile phone photography has developed from a practical record of daily life into a scene where a hundred flowers bloom. But do users really need so many functions? Why is there such a trend?
"Involution" is one reason. Mobile phone manufacturers would like to cut costs, but fierce market competition keeps them from adopting conservative solutions: the phrase "I may not need it, but you can't not have it" is simply too lethal, so they began experimenting with all kinds of new photographic gimmicks.
On the other hand, public-opinion pressure makes mainstream manufacturers reluctant to cut corners on mid-range and higher products, so 1-inch large sensors, macro, hyperspectral, and other lenses have become the best substitutes; after all, they retain some practicality, while dropping the periscope telephoto is a cost consideration. In the second quarter of this year, we may see more new models following this design thinking.
In the 5G era, the positioning of the mobile phone as a personal data hub is becoming more evident, and its functions are becoming ever more important.
The software functions and algorithm mechanisms derived from phone photography have rippled through the entire IT industry. There are professional film events such as the FIRST Youth Film Festival, run jointly with Vivo and Zeiss; beyond the camera itself, the hardware has been extended to monitor heart rate and respiratory rate, and even to estimate blood oxygen using the LED fill light; and 3D perception is used for environmental scanning and modeling. It is foreseeable that mobile photography is expanding the very concept of "photography," and its influence will maintain its momentum, making it worthy of every technology enthusiast's attention.











