Apple is playing catch-up with the iPhone 11 camera

“Customers love iPhone because we focus on technologies that matter in their lives,” Apple’s Kaiann Drance said when introducing the iPhone 11 yesterday. If that’s the case, then Apple’s competitors have been doing the same thing for even longer when it comes to the camera. What might previously have been dismissed as gimmicks are now headline features for Apple.

The two biggest additions to the iPhone 11 camera system, the ultrawide lens and night mode, are commonplace on Android phones. That’s not really relevant for most iPhone buyers, who just want a phone that runs iOS and will enjoy the new capabilities. But since it’s impossible to know whether Apple has caught up to competitors in the area that matters most — basic image quality — the camera section of the presentation felt a little flat.

Apple was among the first companies to introduce a dual-camera system on a phone, and certainly one of the first to make it really useful. The iPhone 7 Plus’ telephoto camera enabled portrait mode and greatly improved zoom image quality, the one area where phones still lag cheap point-and-shoot cameras. So it was a little surprising to see Apple ditch the telephoto camera in favor of the new ultrawide lens for the iPhone 11’s dual-camera system.

Make no mistake, ultrawide is a great feature, and Apple spent a lot of time explaining the dramatic creative possibilities it enables. Anyone upgrading to the iPhone 11 will have a lot of fun with it. But why now? LG deserves credit for pioneering ultrawide cameras, which it has shipped on every one of its flagship phones since the G5 in early 2016, and by now in 2019 pretty much every other mid-to-high-end Android phone has one. Apple is simply catching up here.

That’s also true of the iPhone 11 Pro, which features a triple-camera system like every other flagship phone this year. Apple’s Phil Schiller called it a “pro camera system,” though if the Pro is doing anything beyond the regular 11 other than keeping the telephoto around and improving the aperture to f/2.0, he didn’t say. Schiller pointed out that between the ultrawide and telephoto cameras, the 11 Pro has a zoom range of 4x, which is true. But it still doesn’t have any further reach than the XS, and it can’t match phones like Oppo’s Reno 10x Zoom, which (confusingly) has about 8x optical zoom range with its ultrawide and 5x telephoto lenses.
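For reference, “zoom range” in this sense is just the ratio between a camera system’s longest and shortest equivalent focal lengths. A quick sanity check using the commonly quoted 35mm-equivalent figures (approximate numbers used here for illustration, not figures given on stage):

zoom range = longest focal length ÷ shortest focal length
iPhone 11 Pro: 52mm telephoto ÷ 13mm ultrawide = 4x
Reno 10x Zoom: ~130mm telephoto ÷ ~16mm ultrawide ≈ 8x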

Night mode, meanwhile, is a feature that exposed Apple’s lack of competitiveness in low-light photography when Google brought it to Pixel phones a year ago, and the situation was compounded by Huawei’s even more impressive take on the idea. In truth, the iPhone XS is worse than basically all of its competitors in low light even when they’re not using a night mode, though the way the iPhone 11 automatically activates the feature should help a lot there. Again, though, it’s a defensive addition rather than an innovation. Apple simply had to add a night mode this year to even remain in the conversation.

As far as basic image quality goes, Apple didn’t have a lot to say. Last year the company made a big hardware leap by adding a physically larger main image sensor to the iPhone XS, so we were unlikely to see a similar change in the iPhone 11. The biggest difference with the main camera is that it now uses 100 percent focus pixels across the whole sensor, which Apple says delivers autofocusing up to three times faster in low light. The selfie camera gets a more significant improvement, jumping from 7 to 12 megapixels and using a wider lens — though portrait selfies are zoomed and cropped to 7 megapixels by default.

It was notable how insistently Apple claimed that the iPhone 11 can shoot the highest-quality video on a smartphone, a claim that is entirely believable. The iPhone’s video capabilities are already class-leading, and with improved extended dynamic range recording in the 11, there’s no reason to expect anyone to catch up any time soon. Apple really can’t say the same thing about still image quality, however, and that’s where attention will fall when the new iPhones make their way into the world.

As ever, the new iPhones’ image quality will be defined by the company’s software stack and how it works with the image signal processor in the A13 Bionic chip. In other words, will Smart HDR get any better? Apple says it has tweaked the image pipeline, which now includes “semantic rendering” to get a better idea of the subject and how best to expose the photo, while “next-gen Smart HDR” uses multi-scale tone mapping to handle highlights in specific parts of the image. There’s also a new feature called Deep Fusion that Schiller described as “computational photography mad science,” but it won’t be ready until later in the year.

Smart HDR is a technically impressive feature that retains a lot of dynamic range and editing latitude in most photos, but it doesn’t always produce the most pleasing images. Next to Google’s Pixel phones, for example, iPhone XS photos often appear to lack punch and contrast. Unsurprisingly, Apple didn’t get into a discussion of taste and subjective aesthetics on stage, since that would have flown over the heads of most viewers, who just want their camera to capture the scene properly. But we will have to see whether the company’s approach to image tuning has changed with the iPhone 11.

That’s really the story with Apple’s camera presentation overall. The iPhone 11 Pro has achieved feature parity with its competitors, more or less, and people upgrading from within the iOS ecosystem will no doubt be happy with the ultrawide camera and night mode. From a broader perspective, however, we won’t know whether Apple has returned to the days of iPhone camera supremacy until we have the new devices in hand.

‘Perfectly Real’ Manipulated Videos Are Just 6 Months Away, Says Deepfake Pioneer

Deepfake pioneer Hao Li said that digitally manipulated videos could become “perfectly real” in as little as six months to a year. He cited the emergence of apps such as the Chinese-developed Zao and the growing research focus on the field. (MIT Technology Review | Twitter)

A deepfake expert has warned that even regular people will soon be able to create digitally altered videos that look “perfectly real.”

Hao Li, a computer science professor at the University of Southern California, recently discussed the future of deepfake technology.

In an interview with CNBC, Li said that most manipulated videos can still be easily spotted with the naked eye. Some, however, have become very convincing, and those videos often require “sufficient effort” to produce, he said.

According to Li’s estimate, “perfectly real” deepfakes will be easily accessible to the public in about six to 12 months.

What Is Deepfake Technology?

“Deepfake,” a portmanteau of the words “deep learning” and “fake,” refers to computer programs that combine human image synthesis with artificial intelligence. The technology is often used to create digital representations or manipulated videos that are made to seem real.

With deepfake technology becoming increasingly sophisticated over the years, some people are starting to worry about its possible negative effects. As CNBC noted, digitally altered videos could be used to sow disinformation and confusion among the general public, particularly in the context of global politics.

For instance, several social media campaigns and smartphone apps have already been used to spread misinformation aimed at disrupting elections in different parts of the world.

Emergence Of Digitally Manipulated Videos

Li, who presented a deepfake of Russian President Vladimir Putin at an MIT tech conference last week, said he initially thought perfect digitally manipulated videos would become a reality in two to three years.

However, he later sent out an email explaining that it might actually happen in just half a year to a year.

The deepfake pioneer said he was forced to “recalibrate” his timeline because of recent developments in the technology, citing the growing popularity of the Chinese-developed app Zao as well as researchers’ growing interest in the field.

“In some ways we already know how to do it,” Li said in the email, adding that the emergence of perfect deepfakes is “only a matter of training with more data and implementing it.”

Zao lets users swap their faces with other people’s: it takes photographs supplied by its users and digitally inserts them into scenes from popular movies and TV shows. Despite growing concerns over Zao’s privacy policy, it is reportedly among the most popular apps in China.

Li warned that there will come a point when people won’t be able to tell which videos are deepfakes and which are real, which is why, he said, there’s a need to look at other types of solutions as well.
