… or, why Google’s response to the China/Gmail phishing incident isn’t good enough.
… or, why Android’s security is not, in fact, focused on the individual, despite what people think.

Not long ago, various governments’ senior officials were fooled into sending their Gmail account credentials to China. There was a lot of press about how their e-mail might have been stolen, and a lot of concern over whether they discussed sensitive information via Gmail. Google helped them change their passwords, so it isn’t even an issue anymore, right?

Wrong.

Let’s review:

  • When someone wants to access your e-mail, what do they need?
    your password.
  • When someone wants to access your online documents…?
    your password.
  • When someone wants to access your private photos…?
    your password.
  • When someone wants to use your Google Checkout account to make purchases…?
    your password.
  • When someone wants to install an app on your Android phone…?
    your password.

They don’t need your confirmation for any of this.

Whoever staged this (apparently successful) attack is either very clever or very short-sighted. Both the victims and Google, of course, are banking on the latter, and I admit that it’s the more likely of the two options. But is it safe to blindly assume the best-case scenario?

Here’s what I would do if I were to finance a Gmail “phishing” operation to steal data from government officials:

  • Collect usernames and passwords (which they did).
  • Download all e-mails, Google Documents, photos, etc. (which they presumably did).
  • Retrieve recent activity from Google Checkout to see if victims used it to make any purchases that their wives might be upset about, just for fun (because I’d have no life).
  • Avoid the temptation to actually make purchases with their Google Checkout accounts.
  • Upload, to Android Market, my custom Android app (rootkit) that can:
    • Start automatically (e.g. via hooks and start-at-boot features; see the sketch after this list).
    • Detect the device and OS version that’s currently running.
    • Root the device through readily available means.
    • Install the active ingredients: a kernel module, plus helpful userspace executables.
    • Monitor key strokes.
    • Collect sensitive system information.
    • Collect plaintext data (e.g. accounts and credentials) from other applications when they open their encrypted data files. Potential targets would include the victim’s password manager, desktop-synced web browsers, Wallet, etc.
    • Report all collected data back to my organization on demand.
    • Accept commands and updates from my organization.
    • Remove the front-end application (i.e. uninstall itself) via the package manager API, so that it doesn’t show up in the victim’s market apps or running services.
    • Clear the download notification.
  • Add a note to the Market description, stating that the application is for testing only, don’t download, beware of leopard.
  • For each victim with an Android device, from his https://market.android.com/ account, select my rootkit application for remote installation to his device. On that website, I can grant the app any permissions necessary. With the current Android Market security model, the app is installed without the victim’s consent.
  • Once the rootkits have been installed, remove the app from Android Market.
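
To make the mundane parts of that list concrete, here is a minimal sketch in Java of the benign plumbing only: a boot receiver, Build-based device/OS detection, and self-removal of the front-end package on a rooted device. Package and class names are hypothetical, and the rooting, keylogging, and kernel-module pieces are deliberately left out.

    // HYPOTHETICAL sketch: start-at-boot, device/OS detection, and
    // self-removal of the front-end package. Names are invented for
    // illustration; the malicious payload steps are intentionally omitted.
    package com.example.testonly;

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.os.Build;
    import android.util.Log;

    public class BootReceiver extends BroadcastReceiver {
        // Requires the RECEIVE_BOOT_COMPLETED permission and a manifest
        // <receiver> filtering on android.intent.action.BOOT_COMPLETED.
        @Override
        public void onReceive(Context context, Intent intent) {
            if (!Intent.ACTION_BOOT_COMPLETED.equals(intent.getAction())) {
                return;
            }

            // Detect the device and OS version that is currently running.
            Log.d("demo", "model=" + Build.MODEL
                    + " device=" + Build.DEVICE
                    + " sdk=" + Build.VERSION.SDK_INT);

            // Self-removal: with root, the front-end package can be dropped
            // via the pm command so it no longer shows up in Market or in
            // running services. Unprivileged apps cannot do this silently.
            try {
                Runtime.getRuntime().exec(new String[] {
                        "su", "-c", "pm uninstall " + context.getPackageName() });
            } catch (java.io.IOException e) {
                Log.w("demo", "self-removal failed", e);
            }
        }
    }

Everything above is stock SDK behavior; the point is that the uninteresting parts of such an app need no exotic access at all, beyond root for the silent self-removal.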

At this point, I wouldn’t care if the victims change their passwords. It wouldn’t matter. In fact, my best-case scenario would be that they do change their passwords, thinking that it’s the only action necessary.

Mitigation: improving account security?

For individual account security, Google now offers two-factor authentication. However, there are two basic limitations of this approach:

  • Google two-factor authentication is helpful, but it is not good enough. I’ll go into more detail in a later post.
  • The sort of people who will enable two-factor authentication, or who even have the technical skills required to do so, are unlikely to be the same people who fall for phishing scams.

The Android trust issue

Passwords aside, the core of my mock attack lies with the Android Market security model. The problem is that Android Market can and does act on applications without the possibility of user intervention. Google uses this feature to make changes to your system (e.g. updates to the Market app, removal of known malicious apps) without your input. While the feature is convenient, it needs to change, and it can change, via customized ROMs, even if Google themselves won’t do it for now. CyanogenMod? Are you listening?

Getting a little more technical:
The Market app (Vending.apk) is closed-source, and patched binaries cannot be distributed legally. However, from my cursory checks (glancing through the Dalvik disassembly), it appears that the installation process goes through the package manager calls in Android’s open-source core libraries to effect system changes. Barring significant complications (such as overly short timeouts set by the calling function), custom ROM developers should be able to alert the user when an automatic installation is requested and seek approval on a configurable, per-application basis.
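
As a rough illustration of what such a ROM-side gate could look like, here is a hedged sketch. All names below (SilentInstallGate, onInstallRequested, and so on) are invented for illustration; the real hook would sit inside the ROM’s package-manager install path, and the deferred callback is exactly where the caller-timeout caveat above would bite.

    // HYPOTHETICAL sketch of a per-installer confirmation gate for a
    // customized ROM. This is conceptual glue, not a working patch.
    public class SilentInstallGate {

        public interface Decision {
            void allow();   // proceed with the original install call
            void deny();    // report failure back to the requester
        }

        private final java.util.Set<String> trustedInstallers =
                new java.util.HashSet<String>();

        // Called from the install path with the package name of whoever
        // requested the install (e.g. com.android.vending for Market).
        public void onInstallRequested(String installerPackage,
                                       String apkBeingInstalled,
                                       Decision decision) {
            if (trustedInstallers.contains(installerPackage)) {
                decision.allow();   // user chose to trust this installer
                return;
            }
            // Otherwise, surface a notification or dialog and hold the
            // install until the user responds; the observer callback to
            // the caller is simply deferred, which is where overly short
            // caller timeouts could complicate things.
            promptUser(installerPackage, apkBeingInstalled, decision);
        }

        private void promptUser(String installer, String apk,
                                Decision decision) {
            // ROM-specific UI: post a notification that opens an approval
            // dialog, then invoke decision.allow() or decision.deny().
        }
    }

Trusting the stock Market app by default would preserve today’s behavior for users who don’t care, while letting the cautious turn silent installs off per application.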