Archive for February, 2010

President Obama has taken a personal interest in my email account

Sunday, February 28th, 2010

My account is being threatened because of a complaint received by “the Administration”.  I’m honored at the attention.

Your profile will be locked in response to a complaint received by the Administration

From: Administration <>
Date: Sat, Feb 27, 2010 at 8:02 AM
Subject: Your profile will be locked in response to a complaint received by the Administration
To: *****

***This message was created automatically by mail-delivery software. Do not reply to this message.***

Your profile will be locked in response to a complaint received by the Administration 29.01.2010 ã.
According to “paragraph 8 of the user agreement, reserves the right to suspend or terminate the provision of services, promptly notifying the user.

Refute the statement may be, following this link:

If the application is not rejected within 7 days, your e-mail an account will be blocked.
It has a number 247939070296484.

In the near future we will contact you.
It takes up to 3 days to process your request.
Thank you!

mail support service

 If I refute the statement may be, hoping my application to be rejected in 7 days.


By the way, I do actually have a hotpop account, but I don’t use it any more.  This is a free service, and you get what you pay for — unexplained sporadic service outages and no customer support of any kind.  There’s an email address and submission form, but they’re black holes.


Hitler learns a painful lesson about Cloud Computing security

Thursday, February 25th, 2010

I work on a product line strongly focused on security implications of virtual environments – virtual machines with virtualized storage on virtual networks.  Cloud computing is an extreme example of this.

That said, I can’t believe we didn’t make this (and I wouldn’t be shocked to find out this is an underground viral marketing ploy…)
Thanks to Bruce Schneier for the pointer.


The Difference

Wednesday, February 24th, 2010

The Difference


The Dunning–Kruger Effect

Wednesday, February 24th, 2010

Jeff Atwood over at Coding Horror wrote an article that got me thinking.  Apparently there are hordes of people applying for programming jobs who can’t even pretend to write a program.

I wrote that article in 2007, and I am stunned, but not entirely surprised, to hear that three years later “the vast majority” of so-called programmers who apply for a programming job interview are unable to write the smallest of programs. To be clear, hard is a relative term — we’re not talking about complicated, Google-style graduate computer science interview problems. This is extremely simple stuff we’re asking candidates to do. And they can’t. It’s the equivalent of attempting to hire a truck driver and finding out that 90 percent of the job applicants can’t find the gas pedal or the gear shift.

One of the early commenters ascribes this to the Dunning-Kruger Effect.

The Dunning–Kruger effect is a cognitive bias in which “people reach erroneous conclusions and make unfortunate choices but their incompetence robs them of the metacognitive ability to realize it”.[1] The unskilled therefore suffer from illusory superiority, rating their own ability as above average, much higher than in actuality; by contrast the highly skilled underrate their abilities, suffering from illusory inferiority. This leads to a perverse result where less competent people will rate their own ability higher than more competent people. It also explains why actual competence may weaken self-confidence because competent individuals falsely assume that others have an equivalent understanding. “Thus, the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others.”[1]

“ The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt. ”

— Bertrand Russell[2]

The NY Times reported on this study in a 2000 article, Among the Inept, Researchers Discover, Ignorance Is Bliss.

One reason that the ignorant also tend to be the blissfully self-assured, the researchers believe, is that the skills required for competence often are the same skills necessary to recognize competence.

The incompetent, therefore, suffer doubly, they suggested in a paper appearing in the December issue of the Journal of Personality and Social Psychology.

”Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it,” wrote Dr. Kruger, now an assistant professor at the University of Illinois, and Dr. Dunning.

Thanks to David Weiss and This Is True for pointing me to the NY Times article.

I’ve been talking about this study for years now, but didn’t know the effect had a name.  Now I can be extra-geeky when I throw this out in conversation (my preferred paraphrase):

The skills needed to evaluate competence are the same as those required to be competent.  Therefore, if you are incompetent, you don’t know it.

Frankly, that scares me to death.  Dr. Dunning admits the same fear, by the way.

I figure that as long as I am cognizant of this effect and aware of how much more there is to learn,  I’m probably OK.

Why does my bank require me to answer a “security verification question?”

Saturday, February 20th, 2010

Why does my bank require me to answer a “security verification question?”

Alex Papadimoulis runs what boils down to a humor site for computer geeks.  He addresses this security phenomenon, and its worst failings, in “Wish-It-Was Two-Factor”

It all started way back in the year 2005, when the Federal Financial Institutions Examination Council issued a guideline entitled Authentication in an Internet Banking Environment. It’s a rather exhilarating read if I do say so myself, especially if you’re a fan of government banking regulations. And, really: who isn’t? In a nutshell, the FFIEC mandated that internet banks utilize a Two-Factor approach to authentication by year-end 2006.

Two Factor Authentication requires the use of factors from two of three categories:

  • Something the user knows
  • Something the user has (an RSA SecurID token or similar)
  • Something the user is (fingerprint, iris scan, etc)

The latter two categories are hard.  They cost money or inconvenience customers (“they will just go to a bank with less hassle if we do that”).  So the banks invented security verification questions.  Presto!  Another factor.  That makes Two!

This is my favorite degenerate example, from Bruce Schneier’s security blog:

Bank Botches Two-Factor Authentication

From their press release:

The computer was protected by two layers of security, a unique user-identifier and a multiple-character, alpha-numeric password.

Um, hello? Having a username and a password — even if they’re both secret — does not count as two factors, two layers, or two of anything. You need to have two different authentication systems: a password and a biometric, a password and a token.

I wouldn’t trust the New Horizons Community Credit Union with my money.

Alex continues in “Wish-It-Was Two-Factor”

Worse still, the Online Banking industry is perceived to be one of the most secure. Surely, if anyone knows how to do online security, it’s the online banks, right? And if you want your web application to be extra secure, it should be modeled off of an online bank, right?

Since the banks must know what they are doing, everyone else is copying them – to get “Bank Level” security.


Unfortunately, these security questions may actually decrease the level of security.  These questions are generally things most of your friends and coworkers already know or can easily find out.  If you’re famous, the answers are probably on the internet.

Case in point: Sarah Palin’s Yahoo mail account was hacked via her “secret” questions and Yahoo’s password recovery system. (Via The Mouse’s Cord):

Thus, our attacker was able to break into Palin’s account using nothing but the password recovery feature and a little bit of research.  And again, just to reiterate, even if you don’t believe this is what happened to Palin, this procedure actually does work.

The good news for most of us is that we’re not Sarah Palin, so the details of our lives aren’t plastered all over a Wikipedia article.  Regardless, the kinds of security questions usually asked are not all that hard to get answers to even for the average person.  Some of the answers, like birthdays and your mother’s maiden name, are all part of the public record and you can get those things for anyone without much hassle.  For those more “personal” details like where you met your spouse, most of us wouldn’t think twice about answering the question in casual conversation.  In all, finding answers to these questions might be slightly out of range of a faceless hacker from Anonymous, but it should be well within the grasp of a less than ethical coworker with an axe to grind or a spouse who suspects some infidelity.

Read more about why secret questions are bad in Schneier’s Secret Questions Blow a Hole in Security.  Go ahead, it’s short.


What about “site authentication images?”

Another scheme employed by a few banks requires you to choose an image.  Phishers imitating the bank’s site are not supposed to be able to show you the image you chose, and you are supposed to be wise enough to catch the bad guys and punish them by not entering your password.

Unfortunately, it doesn’t work. 

Study Finds Web Antifraud Measure Ineffective

In this study, Harvard and M.I.T researchers brought 67 Bank of America customers into a controlled environment and asked them to log on to their accounts.  Since the security images had secretly been removed, the subjects should have balked.  However, 58 of the 60 subjects who made it far enough to log in did enter their passwords.  And it gets better – the security images were replaced by a site maintenance message with conspicuous grammatical errors.  Less than 10% of the subjects even noticed the pictures were gone.  I can understand that.  I have dozens of accounts and only two have pictures. I suspect that most web sites are designed with the assumption that this is the only one you use.

The last paragraph of the article sums up the situation succinctly (my italics):

… She [Rachna Dhamija] said that the study demonstrated that site-authentication images are fundamentally flawed and, worse, might actually detract from security by giving users a false sense of confidence.

RSA Security, the company that bought PassMark last year, “has a lot of great data on how SiteKey instills trust and confidence and good feelings in their customers,” Ms. Dhamija said. “Ultimately that might be why they adopted it. Sometimes the appearance of security is more important than security itself.”


I need to add one more reference to cap it all off.  Bruce Schneier makes a pretty good case that all of this song-and-dance security theater is not really effective even when it is done right.  The bad guys have moved on to new methods that skirt the need to authenticate at all:

Two-Factor Authentication: Too Little, Too Late

Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses. Two new active attacks we’re starting to see include:

Man-in-the-Middle Attack. An attacker puts up a fake bank Web site and entices a user to that Web site. The user types in his password, and the attacker in turn uses it to access the bank’s real Web site. Done correctly, the user will never realize that he isn’t at the bank’s Web site. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.

Trojan Attack. An attacker gets the Trojan installed on a user’s computer. When the user logs into his bank’s Web site, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.

See how two-factor authentication doesn’t solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.
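Schneier’s man-in-the-middle scenario is easy to sketch in a few lines of Python.  Everything here is a toy invented for illustration (the `Bank`, the `PhishingProxy`, and the hard-coded one-time code are all made up), but it shows the core problem: a fresh one-time password proves nothing when an attacker relays it to the real site in real time.

```python
class Bank:
    """Toy bank that authenticates with a password plus a one-time code."""
    def __init__(self, password, otp):
        self._password = password
        self._otp = otp          # the code currently showing on the user's token
        self.transactions = []

    def login(self, password, otp):
        return password == self._password and otp == self._otp

    def transfer(self, password, otp, amount, payee):
        if self.login(password, otp):
            self.transactions.append((amount, payee))
            return True
        return False


class PhishingProxy:
    """Fake bank site that relays the victim's credentials as she types them."""
    def __init__(self, real_bank):
        self.bank = real_bank

    def login(self, password, otp):
        # The one-time code is still fresh, so the relay succeeds --
        # and now the attacker slips in his own transaction.
        ok = self.bank.login(password, otp)
        if ok:
            self.bank.transfer(password, otp, 10_000, "attacker")
        return ok


bank = Bank("hunter2", otp="492031")
proxy = PhishingProxy(bank)

# The victim believes she is at the bank's site and types both factors.
victim_logged_in = proxy.login("hunter2", "492031")
```

The token did its job perfectly; it just authenticated the session to the wrong party.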



So, why does my bank require me to answer a “security verification question?”

Because they think it adds security. 

Because they think we think it adds security. 

Because they think they have to do something to be “two factor.”

And by the way, the make and model of my first car (which I’m still driving) is “The Muppet Show.”

A Couple of Ads

Thursday, February 18th, 2010

I came across a couple of advertisements this week I thought were clever.

I have been doing a bit of running at the gym, so I’m getting an unaccustomed taste of TV advertisements.  This one keeps catching my eye.  I like the way it starts with something ordinary, then builds and overlays the sounds to make something new.  Reminds me of the old Coke I’d-Like-To-Teach-The-World-To-Sing commercial.

The other is a print advertisement for color bleach that comes from a blog about advertising I follow – AdGoodness.

You will want to view this full size.

If you move your head farther away, there appears to be a stain on the shirt.  Move your head close up, and it disappears.  You get some of the effect by focusing directly on the image versus off to the side.

Positive Proof of Global Warming

Wednesday, February 17th, 2010


Romantic Cryptography

Saturday, February 13th, 2010

What if you desire to express your love to someone, but fear the consequences if he or she does not return your feelings?

It’s an age-old problem – how many of us know someone who asks a friend to go talk to so-and-so to “see if she likes me?”

Just in time for Valentine’s Day, I found a reference to the research paper, Romantic Cryptography, on Light Blue Touchpaper:

Abstract. We show how Alice and Bob can establish whether they love each other, but without the embarrassment of revealing that they do if the other party does not share their feelings. This is a “secure multiparty computation” of the AND function, where the participants cooperate in producing the result of the AND, but without learning the input bit contributed by the other party unless the result implies it.

It’s an interesting algorithm involving scales and “voting” with weighted balls such that no information is revealed unless both parties indicate interest.


They provide some interesting variations using Love/Don’t Love images that only produce an image if both parties fancy the other:


The paper is short, and is certainly worth a skim at the least.
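The privacy property at the heart of the paper can be checked with a toy model.  In this sketch a plain function stands in for the physical scales (which in the paper perform the computation without any trusted third party), and `consistent_inputs` is a hypothetical helper I’ve added to show what each party can actually infer from the result:

```python
def romantic_and(alice_loves: bool, bob_loves: bool) -> bool:
    """Stand-in for the physical apparatus: computes AND of the two secret bits.

    In the paper, the weighted balls and scales perform this step so that
    neither party sees the other's input; here a plain function plays that role.
    """
    return alice_loves and bob_loves


def consistent_inputs(my_bit: bool, result: bool) -> set:
    """All values of the other party's bit consistent with what I observe."""
    return {other for other in (False, True)
            if romantic_and(my_bit, other) == result}
```

The interesting case is a “no” answer: a party who voted False and sees the result False finds both of the other’s possible inputs consistent with that observation, so nothing is learned – exactly the embarrassment-avoidance guarantee.  A party who voted True and sees False does learn the other said no, but that is implied by the output itself, which is the most any secure computation of AND can hide.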


Pass It Along

Monday, February 8th, 2010

I got this from The Good, Clean Funnies List

My husband, Michael, and I were at a restaurant with his boss, a rather stern older man. When Michael began a tale, which I was sure he had told before, I gave him a kick under the table. There was no response, so I gave him another poke. Still the story went on. Suddenly he stopped, grinned and said, “Oh, but I’ve told you this one before, haven’t I?” We all chuckled and changed the subject. Later, on the dance floor, I asked my husband why it had taken him so long to get my message.

“What do you mean?” he replied. “I cut the story off as soon as you kicked me.”

“But I kicked you twice and it still took you a while to stop!”

Suddenly we realized what had happened. Sheepishly we returned to our table. The boss smiled and said, “Don’t worry. After the second kick I figured it wasn’t for me, so I passed it along!”

Lost Generation

Sunday, February 7th, 2010

This video is clever, meaningful, and well executed.

The video was entered in an AARP contest, “u @ 50”, by a 20-year-old and took second place (or so I was told).

Make sure you read as you listen…