Showing posts from February, 2010

Netscape won't save you now...

Click on the image to get the full comic, or better yet check it out at its original location. Courtesy of

Spies, Buried Treasure, and Crypto: What More Do You Want?

Wired has a very interesting article about the FBI's investigation into the case of Brian Regan, who tried to sell a trove of classified documents to foreign governments. Here is a link. The most interesting thing I took from the article is that either (a) the NSA was unable to crack a simple Caesar cipher (ROT25), or (b) they cracked it but just didn't share the results with the FBI, claiming to have made no progress. Honestly, I don't know which option would be better...
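To see just how weak a cipher that is: a Caesar/ROT-n cipher only shifts each letter a fixed distance around the alphabet, so there are just 26 possible keys and brute force is instant. Here's a minimal sketch in Python (the sample plaintext is made up for illustration, not from the case):

```python
def rot(text, shift):
    """Apply a Caesar (ROT-n) shift to the letters of text.

    ROT25 shifts each letter 25 places forward, which is the same as
    shifting 1 place back -- trivially reversible by trying all shifts.
    Non-letter characters pass through unchanged.
    """
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

ciphertext = rot("MEET AT THE DROP SITE", 25)
print(ciphertext)  # LDDS ZS SGD CQNO RHSD

# Brute-forcing all 26 shifts recovers the plaintext immediately;
# shift 1 undoes ROT25.
for shift in range(26):
    print(shift, rot(ciphertext, shift))
```

Any agency with a pencil, let alone the NSA, can run through those 26 candidates by hand, which is what makes the "no progress" claim so strange.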

Even More Markov Modeling: What's in a Probability?

I've twice gotten into heated debates (and I remember the count because I still find it weird that I've gotten into arguments about this) over the fact that there are no universal letter frequency probabilities for the English language. Both times I stumbled into the argument by answering the question, "So, do passwords match letter frequency analysis of the English language as a whole?" with, "Well, it depends on what training set you use to calculate the probabilities of the English language, but for the most part, yes." In reality I should have just said, "For the most part, yes..." Letter frequency analysis, and its big brother Markov models, depend entirely on their training sets. This means that while different training sets may share common characteristics, there is no one LFA or Markov set of probabilities that perfectly models the English (or any other) language. But what about those tables Wikipedia posted? Well, they are based on a study in 1
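The training-set dependence is easy to demonstrate. A quick Python sketch, using two toy "corpora" I made up for illustration (real training sets would be far larger, e.g. a novel versus a leaked password list):

```python
from collections import Counter

def letter_frequencies(text):
    """Return per-letter relative frequencies for a training text.

    Only alphabetic characters are counted, case-folded to lowercase.
    """
    letters = [ch.lower() for ch in text if ch.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {ch: counts[ch] / total for ch in sorted(counts)}

# Two toy training sets -- purely illustrative stand-ins.
prose = "the quick brown fox jumps over the lazy dog"
passwords = "password letmein monkey dragon sunshine trustno"

for name, corpus in [("prose", prose), ("passwords", passwords)]:
    freqs = letter_frequencies(corpus)
    top = sorted(freqs, key=freqs.get, reverse=True)[:5]
    print(name, "top letters:", top)
```

Run it and the two rankings differ, which is the whole point: the "probability of the English language" you compute is really the probability of whatever text you trained on.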

Shmoocon Bound

Leaving sunny, warm Florida for DC. What am I thinking... Ah, yeah, Shmoocon. I'll probably be wearing my FSU hat if anyone wants to grab a few drinks (I know the picture on the side of the blog isn't very good for identification purposes).