Saar is right; all these efforts to educate and help the average person regarding phishing are being counteracted by the banks themselves. I get an email like this from time to time:
Your most recent Consumer/Business checking account statement is now available to view online.
To access your statement, just click on the link below.
You will be asked to enter your Online Banking ID and passcode.
Bank of America
Online Banking Customer Service
Please do not reply to this message.
To speak with a representative about your Online Banking account, or if you are unable to log on to Online Banking, call 1.800.933.6262.
The worst part of it is that the ‘estatement’ parameter above contained part of my account number. Think about that for a moment — an email enticing me to click straight through to a bank web site to look at my account is bad, bad, bad, teaching people exactly the wrong behavior, and on top of that they’re including account info in the email itself. Cleartext email. And there’s no reason not to go straight to an SSL server if we’re trying to teach people to look for ‘https’ at the beginning of their URLs. In fact, the link ends up taking me just to the front page of the bank, so what’s the point? Why not just say “please visit bankofamerica.com and log in,” if you’re going to do this at all?
Is there any bank out there that’s consistently doing the Right Thing? That’s how trust gets built.
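The habit the email undermines — only follow HTTPS links to the bank’s real domain — is simple enough to express in code. Here is a minimal sketch (my own illustration; the trusted-host list and the sample links are assumptions, not anything the bank publishes) of flagging links that a phishing-aware reader should refuse to click:

```python
from urllib.parse import urlparse

# Hosts we consider legitimate for this bank — an assumed list for illustration.
TRUSTED_HOSTS = {"bankofamerica.com", "www.bankofamerica.com"}

def suspicious_links(urls):
    """Return the URLs that should not be clicked: anything that is not
    HTTPS, or whose host is not one of the bank's real hostnames."""
    flagged = []
    for url in urls:
        parts = urlparse(url)
        host = (parts.hostname or "").lower()
        if parts.scheme != "https" or host not in TRUSTED_HOSTS:
            flagged.append(url)
    return flagged

links = [
    "https://www.bankofamerica.com/",                      # fine
    "http://www.bankofamerica.com/estatement?acct=1234",   # cleartext
    "https://bankofamerica.example.net/login",             # look-alike host
]
print(suspicious_links(links))
```

Note that the look-alike host passes the casual “it says bankofamerica” glance test, which is exactly why “please visit bankofamerica.com and log in” beats any clickable link.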
Spire Security points us to a recent study on document metadata. The study used an internal tool (freely available as an example of their SDK) to analyze Microsoft Office files from Fortune 100 company web sites. The statistics are relatively interesting, but they only count the instances of various types of metadata rather than analyzing the found data for the actual risk posed. However, each data type is also explained together with the actual business risk, something that’s missing far too often these days.
There are of course tools to strip out metadata, but I’ve found them in the past to be cumbersome to use. More importantly, organizations typically do not have policies for this sort of thing; studies like this are significant inasmuch as they can raise awareness of the issue.
Schneier points to an interesting development in Wyoming, where Laramie County judges are throwing out DUI cases because the manufacturer will not disclose how the machines work. I dug around for a few minutes and found the state’s response, which includes the statement: “To CMI, forcing it to produce its “source code” is comparable to requiring Bill Gates to publicly produce the source code for the Microsoft Windows operating system.” They also claim that this is information that the state “is not required to obtain and would not obtain as part of its normal course of business.” Maybe that’s the problem: why aren’t governments insisting on disclosure of the mechanisms by which citizens can be deprived of their liberty and property? There’s a clear “due process” argument to be made here.
I’m still thinking about this. On the one hand, I’m a firm believer in security through transparency and allowing anyone to stay up all night finding problems and solutions (hence the “Caffeinated Security”). Citizens have a right to know how well the tools used to prosecute them work — corporate trade secrets are of lesser importance than our liberty.
On the other hand, how much can be found through extensive, rigorous third-party testing? (Note that having the LEAs themselves do testing, while important, is not third-party testing.) Can the system essentially be reverse engineered? Can the testing be enough to guarantee the validity of the tool?
When in doubt, I believe in coming down on the side of openness. It’s possible that there’s some valid reason here to keep the systems closed, other than the manufacturer’s concern about their trade secrets and profits, but I can’t think of one off the top of my head. If I can be jailed because some company and a law enforcement agency “say so”, then how secure am I, really?
An article in Security Pipeline last week highlighted a company providing technology to block users’ attempts to delete cookies. The reason? According to the vendor (United Virtualities), “the user is not proficient enough in technology to know if the cookie is good or bad, or how it works.”
OK, first of all, don’t make assumptions about what users know or don’t know. Assuming your users are dumb is a pet peeve of mine (this is separate from providing ease-of-use for inexpert users). And second, how is this not spyware? It’s my machine, and if I want to delete a cookie, that’s my business. Using an obscure function in Macromedia Flash to prevent me from controlling my machine is evil and may violate the law in some jurisdictions.
The article says that Macromedia has posted instructions on how to disable this, though I couldn’t immediately find them. They’re also talking to folks like the Mozilla Foundation about how to fix this in the future.
The arrogance of United Virtualities is overwhelming. I like Flash for the occasional movie or game site, but it’s clear to me that I need to understand the security model a lot better here given its prevalence in advertising now.
UPDATE: Security Pipeline has another editorial about it, calling it perhaps the “dumbest technology of 2005”.
NOTE: This post was originally posted to my personal weblog on 2005-04-05 and has been permanently moved here.
Hah! I have embedded HTML in this blog entry that has compromised your system. It’s smart enough to attack multiple platforms (including Windows and Linux) and gives me command-line administrative level access. Don’t believe me?
Heh. You just lost your privacy.
While there are lots of dumb ways to secure wireless networks, here are the six dumbest. Read the article; there are many more details there.