I've twice gotten into heated debates, (and I remember the count because I still find it weird that I've gotten into arguments about this), over the fact that there are no universal letter frequency values for the English language. Both times I stumbled into the argument by answering the question, "So, do passwords match letter frequency analysis of the English language as a whole?" with, "Well, it depends on what training set you use to calculate the probabilities of the English language, but for the most part, yes." In hindsight I should have just said, "For the most part, yes..."

Letter frequency analysis, and its big brother Markov models, depend entirely on their training sets. While different training sets may share common characteristics, there is no single set of letter frequency or Markov probabilities that perfectly models the English, (or any other), language.

But what about those tables Wikipedia posted? Well, they are based on a study in 1