The biggest iPhone security risk could be connecting one to a computer

Design quirks allow malware to be installed on iOS devices and cookies to be plucked from Facebook and Gmail apps

Tielei Wang, research scientist with Georgia Institute of Technology

Apple has done well to insulate its iOS mobile operating system from many security issues, but a forthcoming demonstration shows it's far from perfect.

Next Wednesday at the Usenix Security Symposium in San Diego, researchers with the Georgia Institute of Technology will show how iOS's Achilles' heel is exposed when devices are connected over USB to a computer or have Wi-Fi syncing enabled.

The beauty of their attack is that it doesn't rely on iOS software vulnerabilities, the customary way that hackers commandeer computers. It simply takes advantage of design issues in iOS, working around Apple's layered protections to accomplish a sinister goal.

"We believe that Apple kind of overtrusted the USB connection," said Tielei Wang, a co-author of the study and research scientist at the institute.

Last year, Wang's team developed Jekyll, an iPhone application with well-masked malicious functions that passed Apple's inspection and briefly ended up on its App Store. Wang said although the research was praised, critics contended it might have been hard to get people to download Jekyll amid the thousands of apps in the store.

This time around, Wang said, they set out to find a way to infect a large number of iOS devices, one that didn't rely on people downloading their malicious app.

Their attack requires the victim's computer to have malware installed, but there's a thriving community of people known as "botnet herders" who sell access to large networks of compromised computers.

Wang said they conducted their research using iOS devices connected to Windows, since most botnets are on that platform, but their attack methods also apply to OS X.

Apple requires a person to be logged into their account in order to download an application from the App Store. But Wang and the researchers developed a man-in-the-middle attack that can trick an Apple device that's connected to a computer into authorizing the download of an application using someone else's Apple ID.

As long as the application still has Apple's digital signature, it doesn't even need to still be in the App Store and can be supplied from elsewhere.

But Apple is pretty good at not approving malicious applications, so the researchers found another way to load a malicious app that didn't involve the App Store.

Apple issues developer certificates to those who want to do internal distributions of their own applications. Those certificates can be used to self-sign an application and provision it.

Wang's team found they could sneak a developer provisioning file onto an iOS device when it was connected via USB to a computer. A victim doesn't see a warning.

That would allow a self-signed malicious application to be installed. Legitimate applications could also be removed and replaced with look-alike malicious ones.

"The whole process can be done without the user's knowledge," Wang said. "We believe that it is a kind of weakness."

Wang said Apple has acknowledged the team's research, some of which was shared with the company last year, and made some changes. An Apple spokeswoman in Sydney did not have a specific comment on the research.

One of Apple's changes involved displaying a warning when an iOS device is connected to a particular computer for the first time, advising that connections should only be made with trusted computers, Wang said. That advice is only displayed once.
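That trust relationship is persisted on the host. On OS X, for instance, pairing records are stored as property-list files in /var/db/lockdown, one per paired device (the exact path and layout vary by OS version, so treat this as an illustrative sketch only). A few lines of Python can list which devices a given computer holds pairing records for:

```python
from pathlib import Path

# On OS X, lockdownd keeps one pairing record per paired device in
# /var/db/lockdown (Windows uses %ProgramData%\Apple\Lockdown instead).
# Path and layout are version-dependent -- this is a sketch, not a spec.
LOCKDOWN_DIR = Path("/var/db/lockdown")

def paired_device_ids(lockdown_dir: Path = LOCKDOWN_DIR) -> list[str]:
    """Return the identifiers (file stems) of stored pairing records."""
    if not lockdown_dir.is_dir():
        return []
    return sorted(p.stem for p in lockdown_dir.glob("*.plist"))

if __name__ == "__main__":
    for udid in paired_device_ids():
        print(udid)
```

Because the record persists after the one-time prompt, any malware that later runs on that computer inherits the trusted connection without the device asking again.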

To be sure, Apple has powerful ways to disable such attacks. It can remove applications from the App Store, remotely disable applications on a device and revoke developer certificates. And it's questionable whether an attacker would see an economic benefit from infecting large numbers of iOS devices.

But state-sponsored hackers and cyberspies opt for stealthy, targeted attacks aimed at just a few users. This method could be of use if an attacker knows exactly who is using a specific, compromised computer.

They also found another weakness when an iOS device is connected over USB. The host computer has access to a device not only through iTunes but also via a protocol called Apple File Connection, which is used for accessing images or music files.

That protocol has access to files within iOS's application directories, which include secure, "https" cookies, according to their research paper. Cookies are small data files that allow Web services to remember that a person is logged in, among other functions.

Cookies are especially sensitive since they can be used to hijack someone's account. iOS prevents applications from accessing each other's cookies. But it doesn't stop a desktop computer from grabbing that information, Wang said.
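Hijacking works because a web server only sees the cookie, not who is presenting it. A minimal sketch using Python's standard library shows how a lifted session cookie rides along on a new request (the cookie name "SID" and its value are hypothetical placeholders, not real tokens):

```python
import urllib.request

# A stolen session cookie is replayed simply by attaching it to a fresh
# request -- the server cannot distinguish the attacker's client from the
# victim's. "SID=stolen-session-token" is a hypothetical placeholder.
def request_with_stolen_cookie(url: str, cookie: str) -> urllib.request.Request:
    """Build a request that presents someone else's session cookie."""
    req = urllib.request.Request(url)
    req.add_header("Cookie", cookie)
    return req

req = request_with_stolen_cookie("https://mail.example.com/inbox",
                                 "SID=stolen-session-token")
print(req.get_header("Cookie"))  # the cookie now accompanies the request
```

This is why services invalidate sessions server-side when a theft is suspected: the cookie itself carries no proof of who holds it.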

The researchers recovered login cookies, including those for Facebook and Google's Gmail. Neither of those companies had a comment.

The best advice is to not connect your phone to a computer, especially if you think the computer might be infected with malware.

"Just avoid that," Wang said.

The study was co-authored by Yeongjin Jang, Yizheng Chen, Simon Chung, Billy Lau and Wenke Lee.

Send news tips and comments to jeremy_kirk@idg.com. Follow me on Twitter: @jeremy_kirk
