Fifteen years ago, the approved method for gaining possession of a vehicle other than your own was to wait for the owner to wander off, then jimmy the door and hammer a screwdriver into the ignition. Bowing to auto-insurance industry pressure, auto makers have removed that option in many high-end cars, which are no longer practical to steal.
This has made the insurance companies very happy, but, unfortunately, it is getting a lot of their clients killed, since high-end cars are no longer being taken when the owners are away, but when the owners are there, car keys in hand. Interesting, no? Read the full article here.
Due to a conversation I had with a friend recently, I did a little research into presidential vacationing. I found this link which enumerates the amount of vacation taken by the last four presidents. Let’s pretend an average person gets 2 weeks of vacation per year (probably more like 1 week, but let’s be generous). So, presupposing weekends are off, and that there are 56 weeks in a year, that’s 280 work days in a year. Two weeks off is 10 of those days, for a total of about 3.5% of the year that a person gets as vacation. Update (9/14/04): As Jeff points out, I had a brain-fart on the number of weeks in a year. There are 52 weeks in a year, which makes it 260 work days in a year. Two weeks off, or 10 days, is approximately 3.85% of the year. Alternatively, if you view weekends as vacation time, that’s two days a week, or 104 days; add the 10 extra days of vacation and you get 114 days of vacation, or about 31.2% of the year. Now, to compare with the presidents…
Let’s start with Jimmy Carter. Carter was President for a single 4-year term. Over those 4 years, he took 79 days of vacation. Assuming he didn’t get any weekends off, that’s 5.4% of the time that he was on vacation. Not too bad, all things considered. He was the most powerful man in the world, but he basically took about 20 days off per year—about 3 weeks.
Next up: Ronald Reagan. Reagan was President for 8 years, and took 335 days of vacation during that time. Doing quick math, that’s almost 365 days, which means it’s almost a YEAR of vacation, which means he was on vacation almost 1/8th of the time. Doing more math, to be more precise, that’s 11.4% of the time that he was on vacation—twice as much as Carter.
Next up: George Herbert Walker Bush (Bush Sr.). Bush Sr. was President for 4 years, and took 543 days of vacation time. 543. Five hundred forty-three days of vacation. That’s a year and a half! Out of FOUR! That’s 37% of the time! He was on vacation more than one out of every three days. He got paid for four years of work, but only worked two and a half of them. Put another way, every year, he took more than four months of vacation. Yowza.
Next up: William Clinton. Clinton was President for 8 years, and during that time took 152 days of vacation. As a percentage, that’s approximately 5.2% of the time—even less than Carter.
Next up: George Walker Bush (Bush Jr.). Bush Jr. has been president for less than 4 years so far. As of August 2003 (with a year and a half remaining in his (first) term), he had been on vacation 250 days, or approximately 27% of the time. That means every year he’s taken only three months of vacation, on average. Compared to his father, he’s a workaholic!
Not so fast, says Fred Kaplan, staff writer for Slate Magazine. Prior to September 11, 2001, George W. Bush was on vacation for 96 days. Given that he’d taken office on January 20th of 2001, that means that as of September 11th, he’d been “on the job” for 234 days. If 96 of those were vacation days, he was on vacation 41% of the time. Apparently he got guff for that, and has since been taking less vacation, bringing his average down to a measly 27% almost two years later by taking a mere 154 more days of vacation, which, giving him the benefit of the doubt, means he was on vacation only 21% of the time after September 11th (as of August 2003).
But let’s not depend on a single source… It seems The Washington Post has been doing some counting too, and they disagree with Doug Griffin of Counterbias. As of April 2004, Bush had spent 233 days at his ranch in Crawford, Texas. Add to that his 78 visits to Camp David and 5 visits to Kennebunkport, Maine, and you get a grand total of more than 500 days on vacation at one of Bush’s usual retreats. That’s a lot! That’s more than 40% of his time STILL on vacation!
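For anyone who wants to check the arithmetic above, here’s a quick sanity-check script in Python. The vacation-day counts are the ones quoted in this post; the days-in-office figures are computed from the actual January 20th inauguration dates, with the Bush Jr. count cut off at August 2003 as above. Expect small rounding differences from the percentages I quoted.

```python
from datetime import date

def vacation_pct(vacation_days, start, end):
    """Percentage of days in office spent on vacation (weekends included)."""
    days_in_office = (end - start).days
    return 100.0 * vacation_days / days_in_office

# Vacation-day counts are the ones quoted in this post; term boundaries
# are the January 20th inauguration dates.
presidents = [
    ("Carter",    79, date(1977, 1, 20), date(1981, 1, 20)),
    ("Reagan",   335, date(1981, 1, 20), date(1989, 1, 20)),
    ("Bush Sr.", 543, date(1989, 1, 20), date(1993, 1, 20)),
    ("Clinton",  152, date(1993, 1, 20), date(2001, 1, 20)),
    # Bush Jr., measured only through August 2003, as in the post:
    ("Bush Jr.", 250, date(2001, 1, 20), date(2003, 8, 1)),
]

for name, days, start, end in presidents:
    print(f"{name}: {vacation_pct(days, start, end):.1f}%")
```

Running this reproduces the figures above to within rounding: roughly 5.4% for Carter, 11.5% for Reagan, 37.2% for Bush Sr., 5.2% for Clinton, and 27.1% for Bush Jr.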
Conveniently, Yahoo has an Ask Yahoo feature that has answered the following question: How many vacation days has George W. Bush taken to date as President? How does that compare with Clinton? They come up with similar figures (and they cite the Washington Post).
Back in 2001, the San Jose Mercury News commented on it as well, with an article titled Bush Break Longest in a Generation, noting that Bush’s 30+ days of vacation during August 2001 was the longest presidential vacation since Nixon in 1969.
But this sounds like a bunch of liberal whining, doesn’t it? I mean, don’t presidents get weekends off? 42% has gotta be misleading, doesn’t it? And in fact a couple of people have attempted to “debunk” this number. The debunking amounts to pointing out that these numbers include weekends and federal holidays, and that these so-called “vacations” were working vacations, where Bush met with foreign leaders and went out to “listen to the people”.
Unfortunately for such debunkers, as it turns out, United States Presidents do not get weekends off. Presidents are at work more than United States Postal Service workers (some of them at least get Sunday off). Or at least, they’re supposed to be. And “listening to the people” turns out to be little more than in-office campaigning with taxpayer money—Bush gave a bunch of speeches to various Republican rallies during his “vacations” … which may not qualify as vacation exactly, but certainly can’t be construed as doing his job as president or working “for the people.” And in any case, when compared to the vacation days of the recent Democratic presidents (which might include weekends and “working” vacations as well) it still is not a very complimentary picture.
(Note: I am not addressing the accusations of some that this vacationing that Bush undertook affected his performance, particularly with regards to information about terrorist activity with an eye towards preventing what happened on September 11th. Bush was briefed, a little bit, while he was on vacation (note: this does not count as work; it is the equivalent of watching the nightly news, where the nightly news happens to be classified), so it seems to have been a bigger, more systematic problem than strictly “Bush’s fault”. That said… by the time September 11th rolled around HE WAS ON VACATION NEARLY HALF OF THE TIME. Get over your damn selves.)
Originally authored on: 2004-08-23 01:52:12

I really find this whole thing rather irritating. I honestly don’t care what Bush was doing during Vietnam. If he was in Vietnam, that would be great. If he was in Arkansas, that would be great. If he was in Texas, that would be great.
The big issue is that we don’t know where Bush was. We’re pretty certain it wasn’t Vietnam, which is why he shouldn’t really say much about the war record of someone who was there. Likewise, I don’t think Kerry should say anything about Bush’s record in the Texas National Guard.
What I find distasteful is people who claim that there is proof that Bush was fulfilling his duties. The fact is, there isn’t. He may have been, and if he was, like I said, great. But there isn’t any documentation. Some would assert that because he got an honorable discharge, he must have fulfilled his duty. This is not necessarily true, as you can see.
Anyway, since it has come up multiple times, and it seems that no further evidence is forthcoming, I figure I ought to document the dumb thing. At the very least, so I can point to it when it comes up again, I’m going to document what I can.
A decent summary of the questions raised is posted here, although, as it’s from Salon, a known hotbed of liberalism, some might doubt its veracity.
A website I hadn’t seen before, called “Calpundit”, has a very complete record of the issue: not only a rundown of the relevant events, but also scanned-in copies of many of the relevant documents, here, here, and here.
The summary seems to be: there is no sufficient record or proof that Bush fulfilled his required duty in the Texas National Guard. This does not mean that he didn’t do it. This means that there is no proof that he did. Some have said if he released his pay stubs from that time period (assuming he kept them), then that would resolve the issue. To my knowledge, he has not done so.
If that isn’t sufficient documentation, and if I find or am directed to more info, I’ll add more. Help cataloging this crap is welcome and requested.
Update (8/24/2004): USA Today has a good summary of things thus far.
Update (9/7/2004): The Miami Herald has a report summarizing what is missing, and documenting the Associated Press (AP) lawsuit against the government to obtain the missing documents. According to the AP, there are five categories of missing documentation, as follows (this is quoted from the article):
Update (9/9/04): According to a new FactCheck document, three new things have come to light (much of this text (the text in italics) is lifted directly from the FactCheck document, though FactCheck has more references and details, is better documented, and doesn’t have my editorializing):
This complements the previous, and more complete, article at FactCheck. It essentially points out that the documents are fairly suggestive that Bush was where he said he was, but are not conclusive.
Update (9/13/04): More discussion of how much you can get away with and still be honorably discharged here.
Update (9/21/04): It appears that Bush’s records were tampered with.
Update (9/28/04): Another summary, this time by the LA Times. Summarized here: One key point of the article is that Bush did not meet one set of obligations as a Guardsman: the training minimum for members of the Ready Reserve. (Guard members are also in the Ready Reserve, which has a different set of attendance standards than the Guard.) And addressing the issue of Bush’s failure to take a flight medical exam, the article notes that an “array of Guard officials…said they could not recall another pilot who skipped his mandatory medical exam.” Ret. flight surgeon Jerry Marcontell, who was the flight surgeon for Bush’s air wing, said, “There were cases where they’d be a few weeks late because their regular jobs might get them in a bind. But I don’t remember anyone missing a physical for months at a time. Certainly not a year.” Bush’s aides have provided varying and contradictory explanations why he did not take that exam and was subsequently grounded. Still, the Mystery of the Missing Exam remains, well, a mystery. Now why can’t Bush clear that up?
Remember Diebold? They were the company that supplied California with a whole bunch of electronic voting machines, and then were caught doing all sorts of unethical things, like installing illegal software on the voting machines (illegal because that software is legally required to be inspected and approved by the state). They were prosecuted for that illegal activity (the prosecution is still pending). Now a high-ranking Diebold executive has quit his job at Diebold to go to work for the state of California, as the man in charge of voting machines.
Can you say “conflict of interest”?
Crossfire invited Jon Stewart on to talk about his new book, and he took them to task for being, in his words, “partisan hacks”.
My god, I have an unbelievable amount of respect for this man.
http://homepage.mac.com/duffyb/nobush/iMovieTheater231.html
(A local, high-res copy is here (84MB) - you may need to download the 3ivx and DivX video codecs to view it.)
TheStar.com has a decent followup to Jon’s appearance on Crossfire. Here too is Stewart’s own followup on the Daily Show.
(the following is mostly from an email that I sent)
Crossfire is an interesting show to start a crusade on (if that’s indeed what he’s doing) because it’s probably more like what Jon would like to see than not. I mean, if you’re out to call someone a political hack on a debate show, you’d be far closer to the mark if you did so on something like Fox’s Hannity & Colmes (Hannity’s a douchebag and Colmes is a wuss). But on the other hand, if you’re going to go somewhere to call the media on its rather blatant mishandling of important political news, you need to do it somewhere where people will not simply cut to commercials and escort you off stage as soon as they realize what you’re up to.
I don’t know much about Tucker Carlson—when I watched Crossfire, it was Novak that was the regular contributor, and he really is a douche-bag (hey, he published the whereabouts and identity of an undercover CIA operative: major doucheitude). But in terms larger than just Crossfire, I think he’s got a perfectly valid point about where media is going these days. I mean, when, for example, fact-checking the political debates, the media figures will research and find 3 problems with Kerry and 3 problems with Bush and call it “balanced”—when in fact, the three problems with Kerry are not knowing Pell grants, not knowing that Bush did meet with the Congressional Black Caucus once, and having left the word “projected” out of one of his sentences about the surplus in the budget in 2000, and the problems Bush had were lying about the recipients of his tax cuts, lying about his position on the man who attacked the United States (Osama), and taking credit for protecting Americans from contaminated vaccine when it was in fact the British who protected Americans. They’re NOT equal gaffes! And more than that, the media doesn’t even have reasoned discussions about the pros and cons of the actual policy suggestions. Instead they prefer to focus on the facial expressions (Gore in 2000 and Bush in 2004) and similar ancillary crap (Mary Cheney, for example).
I think Jon’s point is that, regardless of how “balanced” Crossfire may seem to be, they aren’t actually providing much of a service to their viewers, they’re just providing entertainment. One side goes “rah! rah! you suck!” and the other side echoes it back—nobody’s mind is actually convinced or changed. At best, you’re simply aware of a headline you weren’t before—and if that’s all you want, you can watch Jon’s show to get the same thing. Jon gets away with simply listing headlines and making fun of them, because his show is on Comedy Central, whereas more mainstream news outlets, like CNN, have more of a responsibility to be actually useful. (It may be a fair argument to say that CNN’s responsibilities are to its investors, and it should show whatever maximizes their income, but let’s be honest: the porn industry pays better, and really maximizing their income would result in something like Naked News. If they refuse to stoop that low, then they must have some other agenda than pure money.)
Politicians go on shows based on reputation, and what it will do for them. I think (and this is just my, very obviously whacked out, opinion) that if a show developed a reputation for scrupulous attention to detail, absolute honesty, and religious devotion to manners and aversion to personal attacks or other debate fallacies, then a politician’s willingness to go on such a show would be a benchmark of honesty and openness, and they’d do very well. And when they don’t have guests, they can reasonably discuss the issues amongst the regular contributors. This Week with David Brinkley, back in the early nineties, was just that kind of show, I think (the show has since gone downhill)—but maybe I was just young and impressionable. Crossfire is an interesting case, because it was designed specifically for little more than one- or two-line zingers, shouting over each other, with no real mandatory fact-checking except what the hosts decide to do to call the other on it (so, only fact-check if it’s really important to your argument that the other guy was fibbing). I really doubt that they could really change much, which is why I think (and hope) that Jon was simply using it as a platform to launch a more fundamental crusade for quality journalism.
It disturbs me that people no longer (can?) trust the media to actually do factual reporting. Everyone has a spin, a slant, whether they admit it or not, and these days one has to either admit that you only want to hear one side, or go out and seek a dozen or more sources in order to get anything even approximating a balanced view. That’s crap! I don’t want to have to do the research—that’s what the news agencies are for. Why on earth should I have to go to several different news agencies and read several different stories about the same event just to be sure that I know all of the major facets of that event? I mean, there ought to be some bigger distinction between Al Jazeera and Fox News or CNN than simply political leanings and budget size! And we’re sliding in that direction, sadly. And I can understand why we might be sliding in that particular sad direction—in my email conversations with people I disagree with, the conversation starts out reasoned, but by not carefully monitoring the contents of the conversation, over time it degenerates into name-calling and “well, your guy sucks even more” arguments instead of real substantive discussion. Quality political discussion is really hard, and factual reporting is even harder. Its success is largely based on reputation, which takes a long time to build (particularly in this day and age of widespread suspicion of talking heads).
While I don’t know what Crossfire can do about it—probably nothing in the short term—I think it’s a valid criticism of the media that many people are recognizing, and it should be addressed (we’ll need some real strong editors with thick skin, spine, balls of iron, and a herculean sense of journalistic ethics). I dearly hope that Jon is actually going to work for that goal, and that it wasn’t a one-off to get some laughs while being rude to another show.
My observation was that while I watched him say that I was thinking “oh my god, he’s saying what I’m sure everyone is thinking”. I mean, the audience was laughing their asses off, and they were there because they LIKE the show! Perhaps they recognized that they were attending more for the theatre of it than to actually hear something that might convince them to change their minds on some topic?
One of the more typical accusations that Kerry supporters will make against Bush supporters in moments of pique is that Bush supporters simply do not live in a world of reality. Of course, this is part of a series of name-calling and nasty personal attacks in both directions, and nobody wins.
Interestingly, a new study has come out that lends some weight to this particular personal attack. Now, I’d like to state right up front that I don’t generally subscribe to this position, and at worst I blame the leadership for misleading (lying) to the rank and file… but it certainly is an interesting result. Remember, this speaks to the statistical average, and not any particular Republicans by name.
Even after the final report of Charles Duelfer to Congress saying that Iraq did not have a significant WMD program, 72% of Bush supporters continue to believe that Iraq had actual WMD (47%) or a major program for developing them (25%). Fifty-six percent assume that most experts believe Iraq had actual WMD and 57% also assume, incorrectly, that Duelfer concluded Iraq had at least a major WMD program. Kerry supporters hold opposite beliefs on all these points.
Why do you suppose this is? Why do such large segments of the population believe factually incorrect things?
… how would the race for president look if everyone actually understood what the true facts of the matter were?
heh, it’s too true to be funny.
I don’t feel this way right now, but somehow it speaks to me.
Ever see those “punch the monkey” banner ads? Ever thought “man, this would be a lot easier if the monkey was strapped to a table”? Ever follow through with that thought in real life? Well, these losers have.
To quote:
PETA has sent the 253-page complaint and a videotape to the Department of Agriculture, requesting the lab be shut down until an investigation can be conducted.
“The tape shows experimenters using their power over the monkeys to torture and torment them, while lab supervisors stand by or even join in,” said PETA President Ingrid Newkirk.
To wit, they would strap monkeys to tables, and punch them repeatedly - rumor has it, they did it to create bruises so they could test concealer cream.
A report out from Philadelphia details the gross abuse of power and true mental sickness of the priests in power in that area. This is reported in an article in the National Catholic Reporter and an editorial in the same.
The report from Philadelphia explains how the two archbishops involved buried reports of sexual abuse specifically with an eye towards escaping the statute of limitations. Towards that goal, they were brilliantly successful: as a result of their efforts, the grand jury found that they could not indict any of the priests involved because the statute of limitations had expired.
And because of the way the archdiocese is set up legally, as an unincorporated association rather than a corporation, its officials also could not be prosecuted for crimes such as endangering the welfare of children, intimidation of victims and witnesses, and obstruction of justice.
“As a result, these priests and officials will necessarily escape criminal prosecution,” the report said. “We surely would have charged them if we could have done so.”
Despite most of the grand jury members, prosecutors, and detectives investigating the archdiocese being Catholics themselves, the Church decried an “anti-Catholic bias” and said the grand jury had tried to “bully and intimidate” the Cardinals. This same Church, which required a three-year investigation and innumerable subpoenas to get the information into the public, also decried the grand jury process as secretive and criticized the “tremendous power” of the district attorney.
One priest, Fr. Gerald Chambers, was transferred so many times—17 different assignments in 21 years—that according to the archdiocese’s records, church officials were running out of places to send him where his reputation for molesting children was not already known.
…
[Cardinal] Bevilacqua agreed to harbor a known abuser from another diocese, Fr. John P. Connor, “giving him a cover story and a neighborhood parish here because the priest’s arrest for child abuse has aroused too much controversy” in Camden, N.J.
Priests were even excused from dismissal by virtue of having committed other crimes. For example:
[Fr. Stanley Gana] not only had sex with boys, he also had sex with women, abused alcohol and stole money from parish churches, the report said. So that is why Gana “remained, with Cardinal Bevilacqua’s blessing, a priest in active ministry,” the report said. “You see,” explained Lynn to one of Gana’s victims, “he’s not a pure pedophile.”
In another case, an abuser priest—Fr. John Gillespie—who wanted to apologize to his victims for his crimes—was transferred to another parish, not because he might molest his victims again, but because he might apologize to them, the report said. “If he [Gillespie] pursues making amends with others,” therapists at an archdiocese treatment facility warned, “he could bring forth … legal jeopardy.”
Some more interesting excerpts:
Another archdiocesan priest, Fr. Raymond Leneweaver, had T-shirts made for a group of altar boys that he abused, a group he named the “Philadelphia Rovers.” The priest repeatedly pulled one boy out of class in the parish grade school, took him to the school auditorium, forced the boy to bend over a table, and rubbed against him until the priest ejaculated, the report said.
…
While the cardinal knew of the priest’s proclivities, the parents of his unsuspecting victims did not. One father of an abuse victim, the grand jury report said, beat the victim and his brother, one to the point of unconsciousness, when they tried to tell their father of the abuse. “Priests don’t do that,” the devout father replied, according to the report.
…
One 14-year-old boy came to the priest for counseling after a family friend had abused him. “Fr. Gana used his position as a counselor and the ruse of therapy” to escalate the abuse.
…
“Notes in archdiocese files prove that the church leaders not only saw, but understood, that sexually offending priests typically have multiple victims, and are unlikely to stop abusing children unless the opportunity is removed,” the report said.
“In the face of crimes they knew were being committed by their priests, church leaders could have reported them to police,” the report said. “They could have removed the child molesters from ministry, and stopped the sexual abuse of minors by archdiocesan clerics. Instead, they consistently chose to conceal the abuse rather than to end it. They chose to protect themselves from scandal and liability, rather than protect children from the priests’ crimes.”
…
“The grand jurors find that, in his handling of priests’ sex abuse, Cardinal Bevilacqua was motivated by an intent to keep the record clear of evidence that would implicate him or the archdiocese,” the report said. “To this end, he continued many of the policies of his predecessor, Cardinal Krol, aimed at avoiding scandal, while also introducing policies that reflected a growing awareness that dioceses and bishops might be held legally responsible for their negligent and knowing actions that abetted known abusers,” the report said.
…
When the priest pedophilia scandal broke in Boston, Bevilacqua tried “to hide all he knew about sex abuse committed by his priests,” the report said. He had his spokesperson tell the media in February 2002 that there had been only 35 priests in the archdiocese credibly accused of abuse over the last 50 years, even though the archdiocese “knew there were many more,” the report said. The grand jury put the number of abusive priests at 63.
The cardinal also announced to the public in April 2002 that no priest with accusations against him was still active in ministry, even though several still were. “He certainly was not credible when he claimed before this grand jury that protecting children was his highest priority—when in fact his only priority was to cover up sexual abuse against children,” the report said.
From the editorial:
Next month the U.S. bishops gather for their annual meeting… . The bishops in that meeting room in Washington will know that the truth finally came out in Philadelphia not because the diocese decided the community deserved to know it, but because prosecutors relentlessly pursued it.
…
Of what use are we as a believing community if we can’t get this right? Who cares what our chalices are made of or what gender pronouns we use in our prayers or what we say about the unborn or the poor or anything else in our moralizing agenda if we can’t tell the truth about what happened to our children?
This is a rather absurd column, from Cary Tennis, but the first bit of it is so thoroughly true and amusing at the same time that I not only added a piece of it to my quotes collection, but felt the need to include it here:
How can you expect to enjoy life without heartily disliking a good many people? Do not be afraid to dislike the people you dislike. Disliking people is an oft-neglected pleasure. People have so many dislikable traits, it is a terrible waste to miss out on disliking them.
Just so that this is here for posterity… (yay bitterness!)
I had received a letter from the ND Graduate School way back on September 30th stating:
It is a policy of the Graduate School that students who have not yet passed their oral candidacy exam and had their dissertation proposal approved by the end of their 8th semester of enrollment are ineligible for further funding from the Graduate School. Our records show that you have not yet accomplished these goals. If you have not done so by May ‘06, your financial aid will be terminated. Please make every effort to meet these objectives.
Which I think is safe to describe as a nastygram. It’s been hanging prominently on my desk ever since.
Now, one of the members of my proposal committee is Dr. Kogge, who is (as all who know him already know) eternally very busy. As such, and pursuant to the deadline indicated in the letter from the Graduate School, I scheduled my proposal for May 5th. This is the last day in May (more or less) that Dr. Kogge would be in town, so it seemed a natural day to give me the most time. After discussion with my advisor (I admit, though, that I cannot remember exact details of this conversation), this seemed reasonable.
When I say that “I scheduled my proposal for May 5th” what I mean is that I talked with all of my committee members, told them that May 5th was the day, got scheduling information from them, reserved the departmental conference room, and sent them all an email verifying the schedule. In every case, the committee member I talked to added me to their schedule (Palm Pilot, MeetingMaker, etc.) in my presence.
I talked to the department secretary, Jane, who said that there were basically two deadlines with regards to my proposal that I needed to keep track of. As part of my proposal committee, the Graduate School requires what’s called an “outside chair” (someone from another department whose sole purpose is to symbolically keep each department honest), and the first deadline I was to keep track of, according to Jane, is that they need at least 10 days (10 BUSINESS days, so two weeks, in other words) to arrange an outside chair. The second deadline was described as more of a good-will deadline: give my committee members at least two weeks to read the proposal. That being the case, the “big day” for my proposal was set at: April 21st.
Getting ready for this big day, I was working VERY hard the two weeks leading up to the 21st; staying up until all hours of the night, getting up early, putting everything I had into the “big push”. Finally, I finished the proposal on Wednesday, April 19th, around 5 in the afternoon.
I received an email from Jane wondering when I was supposed to be defending, as Kogge had apparently asked her. I hadn’t updated her on the time for the outside chair because, well, it had simply slipped my mind—and I hadn’t missed the deadline for doing so yet.
I updated Jane on the schedule, and she informed me that Kogge was looking for something to read, and stressed that I had better be getting my document to my committee. I told her I was on-track (as far as I knew) to get it to them on Friday.
I received an email from Dr. Kogge that read:
Strangely, the subject of that email (a detail I did not notice until it was pointed out to me later) was “MS defense”.
I responded that I was unaware that the customary deadline was so early, and explained that I had thought that only 2 weeks lead time was necessary for a proposal, and that I had been planning to give it to everyone on Friday, but that I was sorry if I had misunderstood the deadlines. I asked if getting it to him by Friday would be minimally acceptable. Dr. Kogge responded only to say that I needed to sync with Jane so that everything went smoothly—something which, by that time, I had already done.
I received an email from Jane that read:
In response, I immediately did as she instructed me, and included with it a copy of her email instructing me to do so, to ensure that Dr. Kogge didn’t think it was the final draft.
Dr. Kogge responded, and CC’d the email to Jane:
This was a rather frightening email, as I appear to have made Dr. Kogge mad. I also appear to, essentially, have completely and utterly failed to appropriately schedule my proposal. At this point I didn’t know what would happen, or what the consequences were for such failure (I knew financial aid would be cut, but I didn’t know if that also meant I was out of the college, or if I needed to take out large loans, or quite what the result would be). I was hyperventilating.
Jane sent me an email apologizing for telling me to send him a copy of my incomplete work, finishing:
Finally, I sent the complete version to my advisor, Dr. Thain, with the remarkably calm-sounding note:
He responded:
Which is precisely what I did, the next day, bright and early.
The Notre Dame Graduate School’s website contains a purportedly handy checklist of deadlines to be aware of for graduate students who are looking to make sure they are on-track. As of right now (April 23rd, 2006), this checklist is titled Graduation Checklist and Deadlines for August 2005 Graduation. Did you catch that? 2005. If you look more closely, the deadlines listed in the smaller bullet-points appear to be for 2006, but with the big headline saying 2005, I wouldn’t trust them.
The 2005/2006 calendar on the ND Graduate School website does not explain what date the proposal must be completed by. Nor does it illuminate when the end of the semester is.
If you read the Notre Dame Graduate School’s website, you will see the part that explains what the official requirements are, under the heading “Candidacy Examination” :
The candidacy examination should be passed, and the dissertation proposal approved (if the approval process is not part of the candidacy exam), by the end of the student’s eighth semester of enrollment. The examination consists of two parts: a written component and an oral component. The written part of the examination normally precedes the oral part. It is designed, scheduled, and administered by the department. The oral part of the examination is normally taken after the completion of the course work requirement. The oral part, among other things, tests the student’s readiness for advanced research in the more specialized area(s) of his or her field. In total, the examination should be comprehensive. Successful passage indicates that, in the judgment of the faculty, the student has an adequate knowledge of the basic literature, problems, and methods of his or her field. If the proposal defense is part of the oral, it should be a defense of a proposal and not of a completed dissertation.
A board of at least four voting members nominated by the department and appointed by the Graduate School administers the oral part of the examination. Normally, this board has the same membership as the student’s dissertation committee. Board members are chosen from the teaching and research faculty of the student’s department. The Graduate School should be consulted before the department or the student invites a faculty member outside the student’s department to be a board member.
A faculty member appointed by the Graduate School from a department other than the student’s department chairs the examination board. This chair represents the Graduate School and does not vote. After completion of the examination, the chair calls for a discussion followed by a vote of the examiners. On a board of four, three votes are required to pass. If a department chooses to have five members, four votes are required to pass. The chair should, before the examination begins, ask the student’s adviser to confirm departmental regulations for conduct of the examination and voting procedures. The chair sends a written report of the overall quality of the oral examination and the results of the voting immediately to the Graduate School.
In case of failure in either or both parts of the doctoral candidacy examination, the department chair, on the recommendation of a majority of the examiners, may authorize a retake of the examination if this is permitted by departmental regulations. An authorization for retake must be approved by the Graduate School. A second failure results in forfeiture of degree eligibility and is recorded on the student’s permanent record.
Note that the timing of things (like needing to have the committee sign off on the paper 5-6 days before the presentation) is never discussed. Also note that the next section discussed on that webpage is “Admission to Candidacy”, which apparently means something very different from the “Candidacy Exam”, and only needs to be done much, much later in the process.
I discussed my situation with some of my office-mates, and when I relayed my experience to Tim Dysart he became somewhat distressed. He was scheduled to defend his proposal on May 4th (the day before I had been), and had not yet distributed his paper to his committee members. His advisor is Dr. Kogge, but he had apparently not been informed of or held to the same requirements that I had been. This is when Tim pointed out to me that the subject of Dr. Kogge’s first email had been “MS defense”, even though the language within the email was somewhat ambiguous. He suggested that Dr. Kogge was merely confused. I find this possible, but I’ll point out that when I scheduled my proposal with Dr. Kogge (about a week or two earlier), he had immediately said “ah, you must be in the same boat as Tim is, up against that May deadline”, so I would have thought that if he was keeping Tim’s situation straight, he would have made the same association for me.
Because I needed to reschedule my proposal defense, and because Dr. Kogge was going to be unavailable before May 28th or so (and in any case, were I to get things done before the end of the semester under the new deadlines that had been explained to me, I was virtually out of time anyway), I was forced to examine the question: what really happens if you can’t make the May deadline? And when exactly is that May deadline anyway?
At Emily’s suggestion, I went to the Graduate School (on the top floor of the Main Building) to find out. When I asked the receptionist when the end of the semester was, she smiled and said, “well, it depends…” I explained that the Graduate School had sent me a letter threatening me with the removal of my paycheck if I didn’t get my proposal done before the end of my 8th semester, and that I wanted to know what the very last day for that was. She looked uncomfortable and said she didn’t know, and disappeared into the back offices to find out. She came back explaining that they were pretty flexible, but that Graduation (May 15th) was really the very last day they’d consider it. When I asked what exactly the penalties were, the receptionist again disappeared into the back offices and brought forth a woman named April who could answer my questions. April explained that the penalty merely meant that my Graduate School funding would be cut off. To be more precise, since I do not receive any financial aid directly from the Graduate School itself, the penalties, and thus the deadline, do not affect me or any other student who is financed similarly (for example, virtually no graduate students in the Engineering department are affected).
In the end, I think what needed to happen happened, and things are reasonably good going forward. Just a lot more stressful than it needed to be.
It astonishes me how idiotic some compilers can be.
I’ve been working with some unusual compilers recently: the PGI compilers from the Portland Group, and the XL compilers from IBM. I’ve been attempting to get them to compile some standard libraries for testing parallel operations. For example, I want them both to compile LAM-MPI, and BLAS, and LAPACK.
In fighting with them to get them to function, I’ve discovered all sorts of quirks. To vent my frustrations, I am documenting them here. Specifically, IBM’s XL compilers today.
I’m compiling things with the arguments -O4 -qstrict -qarch=ppc970 -Q -qcpluscmt. This takes some explanation. I’m using -O4 instead of -O5 because with the latter, the LAPACK libraries segfault. That’s right. Fortran code, with nary a pointer in sight, segfaults. How that happens is beyond me. The -qarch=ppc970 flag is because, without it, code segfaults. What does this mean? This means that the compiler can’t figure out what cpu it’s running on (which, hey, I’ll give them a pass on that one: I’m not running this compiler on a “supported” distribution of Linux) and is inserting not only bad code but bad pointer dereferences (HUH?!?).
When compiling LAPACK, you’re going to discover that the standard Fortran function ETIME, which LAPACK uses, doesn’t exist in XL-world. Instead, they decided it would be more useful to have an ETIME_ function. See the underscore? That was a beautiful addition, wasn’t it? I feel better already.
While compiling LAM with any sort of interesting optimization (the benefits of which are unclear in LAM’s case), you’re going to discover that XL’s -qipa flag (which is implicitly turned on by -O3 and above) can cause extremely long compile times for some files. How extreme? I’m talking over an hour on a 2GHz PPC with several gigabytes of RAM. But don’t worry! Even though it looks like the compiler is stuck in an infinite loop, it’s really not, and will actually finish if you give it enough time. Or you could just compile LAM without optimization; it’s up to you.
Next nifty discovery: some genius at IBM decided that all inline functions MUST be static. They have to be; otherwise the world just comes apart at the seams. Never mind the fact that the C standard defines the inline keyword as a hint to the compiler, and specifically forbids the compiler from changing the semantics of the language. Why does this matter? A common, sensible thing a library can do is to define two init functions, like so:
inline int init_with_options(int foo, int bar, int baz)
{
    int n;
    /* ... do stuff ... */
    return n;
}

int init(void)
{
    return init_with_options(0, 0, 0);
}
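For contrast, here’s a pattern that sidesteps the disagreement entirely. This is my sketch, not something from any particular library: the arithmetic body is a stand-in for real initialization, and the trick is simply to write static inline explicitly, so the question of whether inline implies static never comes up.

```c
/* Explicitly static inline: every translation unit gets its own copy,
   so a compiler that treats inline as static does no harm.
   The body is a placeholder for real initialization work. */
static inline int init_with_options(int foo, int bar, int baz)
{
    return foo + bar + baz;
}

int init(void)
{
    return init_with_options(0, 0, 0);
}
```

The cost is a possible duplicate copy of the helper per translation unit, which for a small init function is usually a fine trade.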
Now what do you suppose the author of such code intends? You guessed it! He wants to let the compiler know that dumping the contents of init_with_options() into init() is a fine thing to do. The author is not trying to tell the compiler “nobody will ever call init_with_options().” But that’s what the XL compilers think the author is saying. Better still, the documentation for XL explains that there’s a compiler option that may help: -qnostaticinline.

“Wow!” you say to yourself, “that sounds promising!” Nope. The option doesn’t seem to do a thing. You should have been clued in by the fact that the documentation says that that option is on by default. No, sorry, all inline functions are static, and there’s nothing you can do about it. If you didn’t want them static, you shouldn’t have given such hints to the compiler.
Here’s another good one: what does the user mean when he specifies the -M compiler flag? Well, let’s think about this. The documentation for that option says:
Creates an output file that contains information to be included in a “make” description file. This is equivalent to specifying -qmakedep without a suboption.
Now, what do you think -M really does? Oh, it does what it says, alright: creates a .d file. But it also doesn’t stop the compiler from actually attempting to COMPILE the file. So, now that you’ve got your dependencies built so that you know in what order to compile things, you tell make to have another go at building things. But what’s this? It’s already been compiled (incorrectly!)! Joy! My workaround is to run the compiler like so:
rm -f /tmp/foo
xlc -M -c -o /tmp/foo file.c
Now, when gcc and other compilers handle the -M flag, they spit out dependencies to stdout, rather than creating a file. Many complex Makefiles that you really don’t want to go mutzing with rely on that behavior. How do we get XL to do the same? Here’s one you wouldn’t have suspected: -MF/dev/stdout. What’s the -MF flag supposed to do? Modify an existing Makefile, that’s what. See, isn’t that an excellent idea?
Speaking of excellent ideas, IBM decided that the C language needed some extensions. And I can’t begrudge them that; everybody does it. Among the extensions they added was an __align() directive, along the lines of sizeof(), that allows you to specify the alignment of variables that you create. You’d use it like so:
int __align(8) foo;
Unfortunately, in the standard pthread library headers, there are several structs defined that look like this:
struct something {
    void * __func;
    int __align;
};
You see the problem? Of course, there’s no way to tell XL to turn off the __align() extension. You would think that using -qlanglvl might do it, because it supposedly allows you to specify “strict K&R C conformance”. You’d be wrong. Your only option is to edit the headers and rename the variable.
Another way in which XL tries to be “intelligent” but just ends up being idiotic is its handling of GCC’s __extension__ keyword. For example, in the pthreads headers, there is a function that looks like this:
int pthread_cancel_exists(void)
{
static void * pointer =
__extension__ (void *) pthread_cancel;
return pointer != 0;
}
The reason GCC puts __extension__ there is because pthread_cancel may not exist, and it wants to have pointer be set to null in that case. Normally, however, if you attempt to point to a symbol that doesn’t exist, you’ll get a linking error. XL, of course, barfs when it sees this, but not in the way you think. XL attempts to be smart and recognize common uses of __extension__. Somehow, somewhere, the error you get will be:
found ), expected {
Really makes you wonder what the heck it thinks is going on there, doesn’t it? The solution? Remove “__extension__” and it works fine.
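For reference, all __extension__ does under GCC is suppress -pedantic warnings about the GNU extension in the construct that follows. A minimal sketch of legitimate use (my example, not from the pthread headers; long long was such an extension back in C89):

```c
/* __extension__ tells GCC: "the next construct uses a GNU extension,
   don't warn about it under -pedantic". Here it blesses a long long
   typedef, which C89 didn't have. */
__extension__ typedef long long wide_int;

wide_int double_it(wide_int x)
{
    return 2 * x;
}
```

That’s the entire contract: no semantic change, just warning suppression, which is why a compiler inventing syntax errors around it is so baffling.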
That’s all for now. I’m sure I’ll find more.
This past weekend I went home to my parents’ house in Ironwood, Michigan, to celebrate Thanksgiving. By all accounts, it was a lovely weekend. I had a great time, lots of fantastic food, and got to spend time with almost all of the most important people in my life that I rarely get to see.
The return trip, however, was very irritating. My original return-trip itinerary was that I would fly from Minneapolis/St. Paul to Phoenix, and catch an express flight from Phoenix to Albuquerque. Seems simple, right?
The plane to Albuquerque was late.
And by late, I mean irritatingly late. Frustratingly late. Thankfully late. Thankfully? Yes, because my flight from Minneapolis got into the terminal at 9:25 (early!), at some gate in the low A’s (A5?), and boarding of my next flight was due to begin at 9:31. Not, of course, that I had any delusions of getting to eat dinner, but sometimes stray wishes enter the brain just to make things more exciting. After hustling down from one gate to the next, the plane was not there. Not there not because it had left, but because it had never arrived; it was expected to arrive in time to start boarding by 11:45, but the lady behind the desk seemed optimistic we might get on board as early as 11:30 (note, a full two hours late). What luck, right? Dinner! Of course, this dream was short-lived. The nearest restaurants to the gate were already closed, and the next-nearest was only serving booze until 10 (the kitchen had long-since closed). I sat down and ordered a Sam Adams just in time for last call. They kicked everyone out before 10:15.
This was, on the whole, fully unsatisfying. In part because I was really getting rather hungry, and in part because I just wanted a nap (I got up at around 5:30 that morning, after all), or at least to be able to sit down, savor a beer, and stare into the distance for a while. Such was not to be. But, now that I had almost a full hour and a half in front of me, I set about investigating the possibility that somewhere in that God-forsaken airport, some bright young entrepreneur realized that the airlines sometimes bring in new customers even at odd hours of the night. After discussing this with the airport security personnel, who are all-knowing in the ways of airport comestibles, I exited security and arrived at the Paradise Bakery & Grill. This was an oasis in a sea of closed shops and metal grating. They had but one employee and limited cold-cuts-only sandwiches (because it was after hours), and a line about a half-hour deep. I stood, patiently, to get what may have been the most unexpectedly tasty roast beef sandwich of my life. I sat, enjoyed, sipped my cup of ice-water, and nibbled on the slightly under-cooked chocolate chip cookie that came with the sandwich.
By the time I finished, and this is due in part to the fact that I chit-chatted with some of the other unfortunate souls who had also discovered the Paradise Bakery from the security folks, it was past 11:00. This is an important thing to note because, of course, all but one of the security checkpoints close down at that point. One security crew, in the C terminal, is still at its post. As best I can tell, its job is to frisk Paradise Bakery customers who, because of the late hour, don’t care anymore. I made it back through security and made the long trek back to the far end of the A terminal in good time, and arrived at gate A25 to discover that the plane was there, and that boarding (or, more specifically, pre-boarding) had just begun. After waiting for the various frequent-flier clubs and zones 1 and 2 to board, I entered the airplane, took my seat, and got settled.
This is when things began to get a little more unusual.
Everyone slowly stowed their baggage and took their seats as directed. The people in the emergency exit rows consented to sitting and not doing anything for the extra legroom, and the stewardesses counted the number of passengers, twice. Then, at the point where everyone fully expected the airplane to push back from the gate and get underway, there was an uncomfortable pause. It wasn’t very long, just long enough to be uncomfortable. That was when someone up front informed us all that there was a test that needed to be run on the plane, but that the test needed to be run without passengers. Never fear, of course, we were encouraged to leave our belongings in the plane because we’d be back shortly, but we had to leave. Some of us, myself included, believed the nice man, and so left things. I left my jacket, others left everything. Some were smart, or merely had no luggage, and took everything they had off the plane.
Once off the plane, with a few stragglers still exiting, we were informed by the gate staff that, in fact, we were changing planes. Not only would we need to go back and get our belongings, but we would need to go to gate B6. Note that this is a different terminal. So, back on the plane we went, pushing past other passengers exiting the plane, to fetch our things. On the way down, we ran into the pilot, who demanded to know why we were getting back on the plane. When we told him that we had to because we were changing planes, he got a pained and worried look on his face, and jogged up the jetway to do what can only be guessed would be “knocking some heads.” Belongings gathered, everyone exited the plane and started the long walk to gate B6. Some, who didn’t have luggage or who had wisely chosen not to trust the airline, had already made the trek. By the time I’d gotten down to around the closed Starbucks just past gate A15, they announced over the speakers that, in fact, we were not changing planes, and to come back to gate A25. Back we went.
The fellow manning the gate announced the obvious, that there was apparently some confusion, and that we should all sit tight at gate A25 while the bigwigs figured out what was going on. Those who had made it all the way to gate B6 took several minutes to return, and so only heard the explanation from sarcastic fellow would-be passengers. The plane departed for testing, and everyone made themselves comfortable. After a half-hour, the gate staff broke out the normally-$5 “snack packs” and small bottles of water to help with passenger morale. These were greedily devoured by the crowd, no doubt due to some combination of wishing to recoup their losses, thinking they were taking vengeance, boredom, and actual hunger, given that stores had been closed for almost three hours now.
Finally, they announced that the plane had, in fact, failed the test, which as it turns out was less routine and more because the guy who was supposed to hook up his tractor to push the plane away from the gate had noticed some problem with the landing gear. We would indeed be changing planes, and they needed us all to vacate the premises and go to gate B6 now.
No one walked quickly, and everyone appeared to be rather tired. One man near me made several comments revealing that his brother lived in town, and had he gone straight home when the plane was originally delayed, he’d have had dinner and been in bed already. One group commented that once in Albuquerque they were looking forward to a 3-hour drive to their real destination. When I got to gate B6, I could see out the window that there was a plane sitting there, but several key features were missing. First, the people to let us on the plane, and second, the pilots, who one assumes were busy disposing of the previous plane to wherever they dispose of planes. At this new gate, we waited, for somewhere around another half an hour. CNN switched from a long exposé on autistic children to a repeat of some live interview show whose name I forget, which promised an in-depth look at Michael Richards’s comments at the comedy club. Finally, the gate workers arrived and let us onto the new plane.
We entered, stowed our luggage, and got settled once more. The stewardesses—one looking just as tired as the rest of us, the other so perky she must have been taking amphetamines of some sort—handed out pillows and blankets. Again, at about the time that we should have pushed away from the gate, there was an awkward pause. No one spoke, hoping against hope.
It really is an unusual thing, to be amongst so many people, in perfect silence.
The silence was broken by the pilot, who explained, with a detectable amount of irritation mixed with some sort of “please forgive us” overtones, that apparently someone had forgotten to put fuel in the plane, and if we would all be patient, the fuel truck would soon arrive and disgorge 8,300 pounds of fuel into the belly of the beast. We sat, and waited. It was at this point that I broke out my free “snack pack”. I had no more illusions that this would be quick. The crinkling of my wrappers sounded strange in the surreally quiet cabin. I quickly consumed the cream crackers with processed cheese, a disturbingly yellow Quaker apple-cinnamon breakfast bar, a small chocolate-chip biscotti, and best of all, some shortbread cookies. The dried fruit pack went into my laptop bag for later (if ever). Finally, the captain announced that the fuel had been delivered, and a few moments later, the plane pushed back from the gate. The passengers, myself included, were too tired to cheer, but they began to have quiet conversations once again. I leaned into the pillow they had brought, and fell asleep before takeoff.
The plane finally landed at about 3am, Albuquerque time. I made it to my car, and back home, without incident, finally getting to sleep around 3:30 in the morning.
Some half-crazed moron at Microsoft, in an attempt to be helpful, made an idiotic decision.
Of what do I speak? Microsoft Entourage (11.3.6.070618) attempts to be both convenient and pretty by replacing apostrophes (') with curly quotes (’). Ordinarily, I wouldn’t complain. I like curly-quotes as much as the next guy, and I regularly use a vim plugin called UniCycle to achieve the same effect. HOWEVER, Entourage knows that it only wants to send text email in the ISO-8859-1 (aka “Latin1”) character set, which does not contain a curly-quote. This presents the age-old conundrum: “wanna curly quote, can’t have a curly quote”. So Entourage must choose a different character from the ISO-8859-1 character set to use instead of the curly quote. The obvious choice would be the apostrophe ('); people are used to it, and after all it is a quote! But what does Entourage choose? A superscript 1, like this: ¹
What goon came up with this? A superscript 1, in most fonts (except at very small sizes) looks nothing like a quotation mark. It looks like the number one! Which is exactly what it is! Yes, it’s in the Latin1 character set (0xB9) but, let’s be honest here, how many fonts do you suppose have a superscript one character but NOT an apostrophe? Or a curly quote? Besides looking stupid, Microsoft isn’t actually improving their compatibility!
P.S. I have no idea why superscript 1 gets to be its own character in the Latin1/ISO-8859-1 character set. Seems silly to me, but then, so does ¤.
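For the record, here are the underlying character-set facts spelled out as constants. These are properties of ISO-8859-1 and UTF-8 themselves, not anything specific to Entourage:

```c
/* 0xB9 is U+00B9 SUPERSCRIPT ONE in ISO-8859-1: the byte Entourage
   substitutes for the curly quote. */
int latin1_superscript_one(void) { return 0xB9; }

/* U+2019 RIGHT SINGLE QUOTATION MARK has no ISO-8859-1 code point at
   all; in UTF-8 it takes three bytes. */
static const unsigned char curly_utf8[] = { 0xE2, 0x80, 0x99 };

int curly_quote_utf8_len(void) { return (int)sizeof curly_utf8; }
```

Which is the whole problem in miniature: once you insist on Latin1, the curly quote simply cannot be represented, and the only question is which fallback byte you pick.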
This is just petty, but Apple? What’s up with libtoolize? I know, I know, you decided you wanted to call it glibtoolize, and that’s fine! That’s fine, I don’t mind. But why did you distribute an autoreconf that still believed in libtoolize? That’s just dumb.
I recently was doing some work on the computer of an elderly friend of mine, and had a bit of a scare with a hard drive that appeared to have failed. Turns out the boot block had been corrupted somehow, which was easy enough to fix from another computer (yay Linux!). Anyway, this made me stick my nose into S.M.A.R.T. statistics on hard drives. There’s a nice little tool for OSX that sits in the menu bar and keeps an eye on your disks for you (SMARTReporter). I figured there had to be something similar for Windows. In the “free” department, there’s very little available that’s worth beans, but I was able to find something called HDD Health. No sooner had I installed it than it started telling me that the Seek Error Rate was fluctuating wildly (generally it would go from 100 to 200 and back again every couple minutes). This was rather sudden! I got worried about the health of the drive, and started backing things up on it… then I looked it up on the internet. Apparently this is a common thing with Western Digital drives (which is what this computer had): their Seek Error Rate tends to fluctuate like that, and it doesn’t mean anything at all. The general recommendation seems to be “download the diagnostic tools from Western Digital; those will be authoritative”. So I did, and they said the drive was in perfect health.
Well, so much for being worried!
It does seem to speak to the temperamental (and largely useless) nature of S.M.A.R.T. statistics. Thing to keep in mind: they don’t always mean much.
This is something that’s been bugging me for a while here, and I might as well write it down since I finally found a solution.
I have an atomic-increment function. To make it actually atomic, it uses assembly. Here’s the PPC version:
static inline int atomic_inc(int * operand)
{
int retval;
register unsigned int incrd = incrd; // silence initialization complaints
asm volatile ("1:\n\t"
"lwarx %0,0,%1\n\t" /* reserve operand into retval */
"addi %2,%0,1\n\t" /* increment */
"stwcx. %2,0,%1\n\t" /* un-reserve operand */
"bne- 1b\n\t" /* if it failed, try again */
"isync" /* make sure it wasn't all just a dream */
:"=&r" (retval)
:"r" (operand), "r" (incrd)
:"cc","memory");
return retval;
}
Now, what exactly is wrong with that, eh? This works great on Linux. Stock GCC compiles this just fine, as do the PGI compiler, IBM’s compiler, and Intel’s compiler.
Apple’s compiler? Here’s the error I get:
gcc -c test.c
/var/tmp/ccqu2RmV.s:5949:Parameter error: r0 not allowed for parameter 2 (code as 0 not r0)
Okay, so, some kind of monkey business is going on. What does this look like in the .s file?
1:
lwarx r0,0,r2
addi r3,r0,1
stwcx. r3,0,r2
bne- 1b
isync
mr r3,r0
It decided (retval) was going to be r0! Even though that’s apparently not allowed! (FYI, it’s the addi that generates the error.)
The correct workaround is to use the barely documented “b” option, like this:
static inline int atomic_inc(int * operand)
{
int retval;
register unsigned int incrd = incrd; // silence initialization complaints
asm volatile ("1:\n\t"
"lwarx %0,0,%1\n\t" /* reserve operand into retval */
"addi %2,%0,1\n\t" /* increment */
"stwcx. %2,0,%1\n\t" /* un-reserve operand */
"bne- 1b\n\t" /* if it failed, try again */
"isync" /* make sure it wasn't all just a dream */
:"=&b" (retval) /* note the b instead of the r */
:"r" (operand), "r" (incrd)
:"cc","memory");
return retval;
}
That ensures, on PPC machines, that the value is a “base” register (aka not r0).
How gcc on Linux gets it right all the time, I have no idea. But it does.
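Incidentally, on compilers that provide GCC’s __sync builtins, you can dodge the whole register-constraint mess. This is a sketch of an alternative, not what I actually used; it returns the incremented value:

```c
/* Asm-free sketch, assuming a compiler with GCC's __sync builtins.
   The compiler picks the registers itself, so the r0 problem can't
   arise. Returns the incremented value. */
static inline int atomic_inc_builtin(int *operand)
{
    return __sync_add_and_fetch(operand, 1);
}

/* tiny single-threaded exercise of the function */
int atomic_inc_demo(void)
{
    int x = 41;
    return atomic_inc_builtin(&x);
}
```

Of course, the whole point of writing the asm by hand was that not all of these compilers could be trusted with such things in the first place.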
There seems to be some disagreement, at Apple Computer, about exactly what the definition of the word “ignore” is. From the “sort” man page:
-d      Sort in ‘phone directory’ order: ignore all characters except letters, digits and blanks when sorting.
What does that suggest to you? Well, let’s compare it to the GNU “sort” man page:
-d, --dictionary-order
       consider only blanks and alphanumeric characters
So you’d THINK, right, that sorting with these two options would be equivalent, right?
Nope!
Here’s a simple list:
- 192.168.2.4 foo
- 192.168.2.42 foo
How should these things be sorted when the -d option is in effect? You’ve got a conundrum: is a space sorted BEFORE a number or AFTER a number?
Curse you, alphabet! You’re never around when I need you!
And, of course, BSD and GNU answer that question differently. On GNU, the answer is AFTER, on BSD the answer is BEFORE! Oh goody.
Here’s a better way if you need the sorting results to be the same on both BSD and GNU: replace all spaces with something else non-alpha-numeric that isn’t used in the file (such as an underscore, or an ellipsis, or an em-dash). Then sort with -ds (no last-minute saving throws!), then replace the underscore (or whatever) with a space again.
And if you need it to be consistent on OSX platforms too, make it a -dfs sort (so that capitals and lower-case are considered the same).
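If you want to see precisely where the ambiguity bites, here’s a little dictionary-order comparator I sketched up (mine, not code from either sort). It treats the collation value of a blank as a parameter, since that’s exactly the knob BSD and GNU set differently:

```c
#include <ctype.h>

/* Collation key for one character; blanks get a caller-chosen rank,
   since that's the point of disagreement. */
static int keyval(unsigned char c, int blank_rank)
{
    return (c == ' ' || c == '\t') ? blank_rank : c;
}

/* Compare like sort -d: consider only blanks and alphanumerics. */
int dict_cmp(const char *a, const char *b, int blank_rank)
{
    for (;;) {
        /* skip the characters -d ignores (neither blank nor alnum) */
        while (*a && *a != ' ' && *a != '\t' && !isalnum((unsigned char)*a)) a++;
        while (*b && *b != ' ' && *b != '\t' && !isalnum((unsigned char)*b)) b++;
        if (!*a || !*b)
            return (*a != 0) - (*b != 0); /* shorter string sorts first */
        int ka = keyval((unsigned char)*a, blank_rank);
        int kb = keyval((unsigned char)*b, blank_rank);
        if (ka != kb)
            return (ka < kb) ? -1 : 1;
        a++, b++;
    }
}
```

With a low blank rank (blanks before everything, the BSD answer as observed above), “192.168.2.4 foo” sorts first; with a high blank rank (the GNU answer), it sorts second.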
For whatever reason, w3m refuses to build on my Intel OSX box with the latest boehmgc library. To get it to build, you must forcibly downgrade to boehmgc 6.8 or 6.7 or something earlier.
Also, I noticed that w3m isn’t marked as depending on gdk-pixbuf. Strictly speaking, it doesn’t, but it does if you have --enable-image=x11
. :P Add this to your Portfile:
depends_lib lib:libgccpp.1:boehmgc bin:gdk-pixbuf-config:gdk-pixbuf
Also, it seems that either w3m or gdk-pixbuf-config appends an extra library to the config line for gdk-pixbuf-config (essentially, they specify -lgdk_pixbuf
AND -lgdk_pixbuf_xlib
). That extra library causes build problems for w3m; you can fix it by editing /opt/local/bin/gdk-pixbuf-config
and removing the -lgdk_pixbuf
from what it prints out (however, if you use other software that uses gdk-pixbuf-config, you may need to put it back once w3m has finished building).
Procmail’s error handling is CRAP.
I am coming to this realization because I recently lost a BUNCH of messages because of a bad delivery path (I told procmail to pipe messages to a non-existent executable). So what did procmail do? According to its log:
/bin/sh: /tmp/dovecot11/libexec/dovecot/deliver: No such file or directory
procmail: Error while writing to "/tmp/dovecot11/libexec/dovecot/deliver"
Well, sure, that’s to be expected, right? So what happened to the email? VANISHED. Into the bloody ether.
Of course, determining that the message vanished is trickier than just saying “hey, it’s not in my mailbox.” Oh no, there’s a “feature”, called ORGMAIL. What is this? According to the procmailrc documentation (*that* collection of wisdom):
ORGMAIL     Usually the system mailbox (ORiGinal MAILbox).
            If, for some obscure reason (like ‘filesystem
            full’) the mail could not be delivered, then
            this mailbox will be the last resort. If
            procmail fails to save the mail in here (deep,
            deep trouble :-), then the mail will bounce
            back to the sender.
And so where is THAT? Why, /var/mail/$LOGNAME of course, where else? And if LOGNAME isn’t set for some reason? Or what if ORGMAIL is unset? Oh, well… nuts to you! Procmail will use $SENDMAIL to BOUNCE THE EMAIL rather than just try again later. That’s what they mean by “deep, deep trouble.” Notice the smiley face? Here’s why the manual has a smiley-face in it: to mock your pain.
But here’s the real crux of it: procmail doesn’t see delivery errors as FATAL. If one delivery instruction fails, it’ll just keep going through the procmailrc, looking for anything else that might match. In other words, the logic of your procmailrc has to take into account the fact that sometimes mail delivery can fail. If you fail to do this, your mail CAN end up in RANDOM LOCATIONS, depending on how messages that were supposed to match earlier rules fare against later rules.
If you want “first failure bail” behavior (which makes the most sense, in my mind), you have to add an extra rule after EVERY delivery instruction. For example:
:0 H
* ^From: .*fred@there\.com
./from_fred

:0 e # handle failure
{
    EXITCODE=75 # set a non-zero exit code
    HOST        # This causes procmail to stop, obviously
}
You agree that HOST means "stop processing and exit", right? Obviously. That's procmail for you. Note that that second clause has gotta go after EVERY delivery instruction. I hope you enjoy copy-and-paste.
Another way to handle errors, since successful delivery does stop procmail, is to add something like that to the end of your procmailrc, like so:
:0 # catch-all default delivery
${DEFAULT}

# If we get this far, there must have been an error
EXITCODE=75
HOST
Of course, you could also send the mail to /dev/null at that point, but unsetting the HOST variable (which is what listing it does) does the same thing faster. Intuitive, right? Here's my smiley-face:
>:-P
Unlike my previous whining about compilers, this one I have no explanation for. It’s not me specifying things incorrectly, it’s just the compiler being broken.
So, here’s the goal: atomically increment a variable. On a Sparc (specifically, SparcV9), the function looks something like this:
static inline int atomic_inc(int * operand)
{
    register uint32_t oldval, newval;

    newval = *operand;
    do {
        oldval = newval;
        newval++;
        __asm__ __volatile__ ("cas [%1], %2, %0"
                              : "=&r" (newval)
                              : "r" (operand), "r" (oldval)
                              : "cc", "memory");
    } while (oldval != newval);
    return oldval + 1;
}
Seems trivial, right? We use the CAS instruction (compare and swap). Conveniently, whenever the comparison fails, it stores the value of *operand in the destination register (i.e. %0, aka newval), so there are no extraneous memory operations in this little loop. Right? Right. Does it work? NO.
Let’s take a look at the assembly that the compiler (gcc) generates with -O2 optimization:
save %sp, -0x60, %sp
ld [%i0], %i5 /* newval = *operand; */
mov %i0, %o1 /* operand is copied into %o1 */
mov %i5, %o2 /* oldval = newval; */
cas [%o1], %o2, %o0 /* o1 = operand, o2 = newval, o0 = ? */
ret
restore %i5, 0x1, %o0
Say what? Does that have ANYTHING to do with what I told it? Nope! %o0 is never even initialized, but somehow it gets used anyway! What about the increment? Nope! It was optimized out, apparently (which, in fairness, is probably because we didn't explicitly list it as an input). Of course, gcc is awful, you say! Use SUN's compiler! Sorry, it produces the exact same output.
But let's be a bit more explicit about the fact that the newval register is an input to the assembly block:
static inline int atomic_inc(int * operand)
{
    register uint32_t oldval, newval;

    newval = *operand;
    do {
        oldval = newval;
        newval++;
        __asm__ __volatile__ ("cas [%1], %2, %0"
                              : "=&r" (newval)
                              : "r" (operand), "r" (oldval), "0" (newval)
                              : "cc", "memory");
    } while (oldval != newval);
    return oldval + 1;
}
Now, Sun's compiler complains: warning: parameter in inline asm statement unused: %3. Well gosh, isn't that useful; way to recognize the fact that "0" declares the input to be an output! But at least gcc leaves the add operation in:
save %sp, -0x60, %sp
ld [%i0], %i5 /* oldval = *operand; */
mov %i0, %o1 /* operand is copied to %o1 */
add %i5, 0x1, %o0 /* newval = oldval + 1; */
mov %i5, %o2 /* oldval is copied to %o2 */
cas [%o1], %o2, %o0
ret
restore %i5, 0x1, %o0
Yay! The increment made it in there, and %o0 is now initialized to something! But what happened to the do{ }while() loop? Sorry, that was optimized away, because gcc doesn't recognize that newval can change values, despite the fact that it's listed as an output! Sun's compiler will at least leave the while loop in, but will often use the WRONG REGISTER for comparison (such as %i2 instead of %o0).
But check out this minor change:
static inline int atomic_inc(int * operand)
{
    register uint32_t oldval, newval;

    do {
        newval = *operand;
        oldval = newval;
        newval++;
        __asm__ __volatile__ ("cas [%1], %2, %0"
                              : "=&r" (newval)
                              : "r" (operand), "r" (oldval), "0" (newval)
                              : "cc", "memory");
    } while (oldval != newval);
    return oldval + 1;
}
See the difference? Rather than using the output of the cas instruction (newval), we're throwing it away and re-reading *operand no matter what. And guess what suddenly happens:
save %sp, -0x60, %sp
ld [%i0], %i5 /* oldval = *operand; */
add %i5, 0x1, %o0 /* newval = oldval + 1; */
mov %i0, %o1 /* operand is copied to %o1 */
mov %i5, %o2 /* oldval is copied to %o2 */
cas [%o1], %o2, %o0
cmp %i5, %o0 /* if (oldval != newval) */
bne,a,pt %icc, atomic_inc+0x8 /* then go back and try again */
ld [%i0], %i5
ret
restore %i5, 0x1, %o0
AHA! The while loop returns! And best of all, both GCC and Sun's compiler suddenly, magically, (and best of all, consistently) use the correct registers for the loop comparison! It's amazing! For some reason this change reminds the compilers that newval is an output!
It’s completely idiotic. So, we can get it to work… but we have to be inefficient in order to do it, because otherwise (inexplicably) the compiler refuses to acknowledge that our output register can change.
In case you’re curious, the gcc version is:
sparc-sun-solaris2.10-gcc (GCC) 4.0.4 (gccfss)
and the Sun compiler is:
cc: Sun C 5.9 SunOS_sparc 2007/05/03
This isn’t the most accurate title, but…
I’ve got a SlingLink Turbo that I use for connecting my Macs upstairs to my cable modem downstairs. I went with a network-over-powerline option, because I’ve been having all kinds of intermittent interference problems with my wireless connectivity. So, rather than running an extra-long patch cable up the stairs and taping it down to the carpet, I went the SlingLink route. It seems to be designed specifically for SlingBox applications, but it forwards plain ol’ ethernet signals, and it’s about $40 cheaper than the NetGear equivalents. Huzzah for getting a bargain!
First impression: fabulous! I went along happily for weeks, enjoying my newfound reliable network connection. Then I tried downloading the latest Ubuntu ISO images via BitTorrent, and within five to ten minutes, the internet connection went offline. If I went downstairs and turned the cable modem off-and-on again, the internet would come back. For five to ten minutes. Then it’d go down again.
Surely, I say to myself, that’s a cable-modem problem, right?
I had to have the tech guys from Time Warner’s Cable group come out (twice!) before I finally figured out that it wasn’t their fault (the first time they said they replaced the splitter, and presto, the network was fine! I didn’t go after the ISO again for a while so…). Turns out I didn’t need to restart the cable modem, all I had to do was restart the SlingLink node and I’d get another five to ten minutes out of it. But it ONLY happens when BitTorrent is running; otherwise, the network connection is rock solid!
Weird, no?
So, to experiment, I tried limiting the BitTorrent connections: no dice. Then I tried limiting the BitTorrent bandwidth, and all of a sudden the network would stay up. Somewhere between 100Kb/s and 150Kb/s is the cutoff. Something about BitTorrent's traffic seems either to confuse the SlingLink node or to trigger some sort of antiviral cutoff in the SlingLink hardware (either way is annoying). For the record, it's not a pure bandwidth issue: I can transfer files over the SlingLink network at speeds of over 400Kb/s. As soon as I introduce BitTorrent, though… down she goes.
Maybe it’s a packet-size issue. Maybe it’s a connection-tracking issue. I have no idea. But at least now I know that SlingLink has its limitations. And now, so do you.
Continuing my series of pointless complaints about compiler behavior (see here and here for the previous entries), I recently downloaded a trial version of PGI's compiler to put in my Linux virtual machine, to see how it does compiling qthreads. There were a few minor things that it choked on that I could correct pretty easily, and some really bizarre behavior that seems completely broken to me.
Let’s start with the minor mistakes it found in my code that other compilers hadn’t complained about:
static inline uint64_t qthread_incr64(volatile uint64_t *operand, const int incr)
{
    union {
        uint64_t i;
        struct {
            uint32_t l, h;
        } s;
    } oldval, newval;
    register char test;

    do {
        oldval.i = *operand;
        newval.i = oldval.i + incr;
        __asm__ __volatile__ ("lock; cmpxchg8b %1\n\t setne %0"
                              : "=r" (test)
                              : "m" (*operand),
                                "a" (oldval.s.l),
                                "d" (oldval.s.h),
                                "b" (newval.s.l),
                                "c" (newval.s.h)
                              : "memory");
    } while (test);
    return oldval.i;
}
Seems fairly straightforward, right? Works fine on most compilers, but the PGI compiler complains that "%sli" is an invalid register. Really obvious error, right? Right? (I don't really know what the %sli register is for either.) Turns out that because setne requires a byte-sized register, I need to tell the compiler that I want a register that can be byte-sized. In other words, that "=r" needs to become "=q". Fair enough. It's a confusing error, and thus annoying, but I am technically wrong (or at least I'm providing an incomplete description of my requirements) here, so I concede the ground to PGI.
And then there are places where PGI is simply a bit more pedantic than it really needs to be. For example, it generates an error when you implicitly cast a volatile struct foo * into a void * when calling into a function. Okay, yes, the pointers are different, but… most compilers allow you to implicitly convert just about any pointer type into a void * without kvetching, because you aren't allowed to dereference a void pointer (unless you cast again, and if you're casting, all bets are off anyway), thus it's a safe bet that you want to work on the pointer rather than what it points to. Yes, technically PGI has made a valid observation, but I disagree that their observation rises to the level of "warning-worthy" (I have no argument if they demote it to the sort of thing that shows up with the -Minform=inform flag).
But there are other places where PGI is simply wrong/broken. For example, if I have (and use) a #define like this:
#define PARALLEL_FUNC(initials, type, shorttype, category) \
type qt_##shorttype##_##category (type *array, size_t length, int checkfeb) \
{ \
    struct qt##initials arg = { array, checkfeb }; \
    type ret; \
    qt_realfunc(0, length, sizeof(type), &ret, \
                qt##initials##_worker, \
                &arg, qt##initials##_acc, 0); \
    return ret; \
}
PARALLEL_FUNC(uis, aligned_t, uint, sum);
PGI will die! Specifically, it complains that struct qtuisarg does not exist, and that an identifier is missing. In other words, it blows away the whitespace following initials so that this line:
struct qt##initials arg = { array, checkfeb }; \
is interpreted as if it looked like this:
struct qt##initials##arg = { array, checkfeb }; \
But at least that's easy to work around: rename the struct so that it has a _s at the end! Apparently PGI is okay with this:
struct qt##initials##_s arg = { array, checkfeb }; \
::sigh:: Stupid, stupid compiler. At least it can be worked around.
PGI is also bad at handling static inline functions in headers. How bad? Well, first of all, the DWARF2 symbols it generates (the default) are incorrect. It gets the line numbers right but the file name wrong. For example, if I have an inline function on line 75 of qthread_atomics.h, include that header in qt_mpool.c, and then use that function on line 302, the DWARF2 symbols generated will claim that the function is on line 75 of qt_mpool.c (which isn't even correct if we assume that it's generating DWARF2 symbols based on the pre-processed source! and besides which, all the other line numbers are from non-pre-processed source). You CAN tell it to generate DWARF1 or DWARF3 symbols, but then it simply leaves out the line numbers and file names completely. Handy, no?
Here’s another bug in PGI… though I suppose it’s my fault for outsmarting myself. So, once upon a time, I (think I) found that some compilers require assembly memory references to be within parentheses, while others require them to be within brackets. Unfortunately I didn’t write down which ones did what, so I don’t remember if I was merely being over-cautious in my code, or if it really was a compatibility problem. Nevertheless, I frequently do things like this:
static inline uint32_t atomic_incr(volatile uint32_t *op, const int incr)
{
    uint32_t retval = incr;
    __asm__ __volatile__ ("lock; xaddl %0, %1"
                          : "=r" (retval)
                          : "m" (*op), "0" (retval)
                          : "memory");
    return retval;
}
See that weird "m"(*op) construction? That was my way of ensuring that the right memory-reference syntax was automatically used, no matter what the compiler thought it was. So, what does PGI do in this instance? It actually performs the dereference! In other words, it behaves as if I had written:
static inline uint32_t atomic_incr(volatile uint32_t *op, const int incr)
{
    uint32_t retval = incr;
    __asm__ __volatile__ ("lock; xaddl %0, (%1)"
                          : "=r" (retval)
                          : "r" (*op), "0" (retval)
                          : "memory");
    return retval;
}
when what I really wanted was:
static inline uint32_t atomic_incr(volatile uint32_t *op, const int incr)
{
    uint32_t retval = incr;
    __asm__ __volatile__ ("lock; xaddl %0, (%1)"
                          : "=r" (retval)
                          : "r" (op), "0" (retval)
                          : "memory");
    return retval;
}
See the difference? <sigh> Again, it’s not hard to fix so that PGI does the right thing. And maybe I was being too clever in the first place. But dagnabit, my trick should work! And, more pointedly, it DOES work on other compilers (gcc and icc at the bare minimum, and I’ve tested similar things with xlc).
Once upon a time, in 2002, the BSD folks had this genius plan: make the standard C qsort() function safe to use in reentrant code by creating qsort_r() and adding an argument (a pointer to pass to the comparison function). So they did, and it was good.
Five years later, in 2007, the GNU libc folks said to themselves “dang, those BSD guys are smart, I wish we had qsort_r()”. Then some idiot said: WAIT! We cannot simply use the same prototype as the BSD folks; they use an evil license! We can’t put that into GPL’d code! So the GNU libc folks solved the problem by reordering the arguments.
And now we have two, incompatible, widely published versions of qsort_r(), which both do the exact same thing: crash horribly if you use the wrong argument order.
<sigh>
Okay, here’s an alternate history:
… Then some lazy idiot said: WAIT! The existing qsort_r() is a poor design that requires a second implementation of qsort()! If we throw out compatibility with existing qsort_r() code, we can implement qsort() as a call to qsort_r() and no one will ever know!
<sigh>
Either way, we all lose.
(I have no argument with the alternate history point… but why’d you have to call it the exact same thing??? Call it qsort_gnu() or something! Make it easy to detect the difference!)
I ran across another PGI compiler bug that bears noting because it was so annoying to track down. Here’s the code:
static inline uint64_t qthread_cas64(volatile uint64_t *operand,
                                     const uint64_t newval,
                                     const uint64_t oldval)
{
    uint64_t retval;
    __asm__ __volatile__ ("lock; cmpxchg %1,(%2)"
                          : "=&a" (retval) /* store from RAX */
                          : "r" (newval),
                            "r" (operand),
                            "a" (oldval)   /* load into RAX */
                          : "cc", "memory");
    return retval;
}
Now, both GCC and the Intel compiler will produce code you would expect; something like this:
mov 0xffffffffffffffe0(%rbp),%r12
mov 0xffffffffffffffe8(%rbp),%r13
mov 0xfffffffffffffff0(%rbp),%rax
lock cmpxchg %r12,0x0(%r13)
mov %rax,0xfffffffffffffff8(%rbp)
In essence, that's:

- load newval into %r12 (almost any register is fine)
- load operand into %r13 (almost any register is fine)
- load oldval into %rax (as I specified with "a")
- do the compare-and-swap, then store the result from %rax to the variable I specified

Here's what PGI produces instead:
mov 0xffffffffffffffe0(%rbp),%r12
mov 0xffffffffffffffe8(%rbp),%r13
mov 0xfffffffffffffff0(%rbp),%rax
lock cmpxchg %r12,0x0(%r13)
mov %eax,0xfffffffffffffff8(%rbp)
You notice the problem? That last step became %eax, so only the lower 32 bits of my 64-bit CAS get returned!
The workaround is to do something stupid: be more explicit. Like so:
static inline uint64_t qthread_cas64(volatile uint64_t *operand,
                                     const uint64_t newval,
                                     const uint64_t oldval)
{
    uint64_t retval;
    __asm__ __volatile__ ("lock; cmpxchg %1,(%2)\n\t"
                          "mov %%rax,(%0)"
                          :
                          : "r" (&retval), /* store from RAX */
                            "r" (newval),
                            "r" (operand),
                            "a" (oldval)   /* load into RAX */
                          : "cc", "memory");
    return retval;
}
This is stupid because it requires an extra register; it becomes this:
mov 0xfffffffffffffff8(%rbp),%rbx
mov 0xffffffffffffffe0(%rbp),%r12
mov 0xffffffffffffffe8(%rbp),%r13
mov 0xfffffffffffffff0(%rbp),%rax
lock cmpxchg %r12,0x0(%r13)
mov %rax,(%rbx)
Obviously, not a killer (since it can be worked around), but annoying nevertheless.
A similar error happens in this code:
uint64_t retval;
__asm__ __volatile__ ("lock xaddq %0, (%1)"
                      : "+r" (retval)
                      : "r" (operand)
                      : "memory");
It would appear that PGI completely ignores the bitwidth of output data!
I recently spent a bunch of time trying to resolve some delivery problems we had with Gmail. Some of it was dealing with idiosyncratic issues associated with our mail system, and some of it, well, might benefit others.
In our mail system, we use qmail-qfilter and some custom scripts to manipulate incoming mail, along with a custom shell script I wrote to manipulate outbound mail. Inbound mail, prior to this, was prepended with three new headers: DomainKey-Status, DKIM-Status (and friends), and X-Originating-IP. Outbound mail was signed with both a DomainKey and a DKIM signature. All of my DomainKey-based manipulation was based on libdomainkeys and, in particular, their dktest utility. Yes, that library is technically out-of-date, but for a long time there were more DomainKey-compliant servers out there than DKIM-compliant servers, so… it made sense. The DKIM-based manipulation is all based on Perl's Mail::DKIM module, which seems to be quite the workhorse.
Our situation was this: we have several users that use Gmail as a kind of “back-end” for their mail on this server. All of their mail needs to be forwarded to Gmail, and when they send from Gmail, it uses SMTP-AUTH to relay their mail through our server. This means that their outgoing mail is signed first by gmail, then by us. The domain of the outgoing signature is defined by the sender.
So, first problem: we use procmail to forward mail. This means that all mail that got sent to these Gmail users got re-transmitted with a return-address of nobody@our-domain.com (the procmail default). Thus, we signed all of this relayed mail (because the sender was from one of the domains we have a secret-key for). This became a problem because all spam that got sent to these users got relayed, and signed, and so we got blamed for it (thus causing gmail to blacklist us occasionally).
Gmail has a few recommendations on this subject. Their first recommendation is to stop changing the return address (which is exactly the opposite of the recommendation of SPF-supporters, I’d like to point out). They also suggest doing our own spam detection and putting “SPAM” in the subject of messages our system thinks is spam. I used Gmail’s recommended solution (which would also prevent us from signing outbound spam), adding the following lines to our procmailrc:
SENDER=`formail -c -x Return-Path`
SENDMAILFLAGS="-f${SENDER}"
This caused new problems. All of a sudden, mail wasn’t getting through to some of the Gmail users AT ALL. Gmail would permanently reject the messages with the error message:
555 5.5.2 Syntax error. u18si57222290ibk.46
It turns out that messages sent from the Gmail users often had multiple Return-Path headers. The same is true of messages from many mailing lists (including Google Apps mailing lists). This means that formail would dutifully print out a multi-line response, which would then pass garbage (more or less) into the sendmail binary, thereby causing invalid syntax, which is why Gmail was rejecting messages. On top of that, formail doesn't strip off the surrounding wockas, which caused sendmail to encode the Return-Path header incorrectly, like this:
Return-Path: <<mailinglist@somedomain.com>
<bogus@spamsender.com>>
This reflects what would happen during the SMTP conversation with Gmail's servers: the double-wockas would be there as well, which is, officially, invalid SMTP syntax. The solution we're using now is relatively trivial and works well:
SENDER=`formail -c -x Return-Path | head -n 1 | tr -d '<>'`
SENDMAILFLAGS="-f${SENDER}"
Let me re-iterate that, because it’s worth being direct. Using Gmail’s suggested solution caused messages to DISAPPEAR. IRRETRIEVABLY.
Granted, that was my fault for not testing it first. But still, come on Google. That’s a BAD procmail recommendation.
There were a few more problems I had to deal with, relating to DomainKeys and DKIM, but these are somewhat idiosyncratic to our mail system (though they may be of interest for folks with a similar setup). Here I should explain that when you send from Gmail through another server via SMTP-AUTH, Gmail signs the message with its DK key, both with a DKIM and with a DomainKeys header. This is DESPITE the fact that the Return-Path is for a non-gmail domain; but because the Sender is a gmail.com address, this behavior is completely legitimate and within the specified behavior of DKIM.
The first problem I ran into was that, without a new Return-Path, the dktest utility from DomainKeys would refuse to sign messages that had already been signed (in this case, by Gmail). Not only that, but it would refuse in a very bad way: instead of spitting out something that looks like a DomainKey-Signature: header, it would spit out an error message. Thus, unless my script was careful about only appending things that start with DomainKey-Signature: (which it wasn't), I would get message headers that looked like this:
Message-Id: <4d275412.6502e70a.3bf6.0f6dSMTPIN_ADDED@mx.google.com>
do not sign email that already has a dksign unless Sender was found first
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=gmail.com; h=mime-version
That’s an excerpt, but you can see the problem. It spit an invalid header (the error) into the middle of my headers. This kind of thing made Gmail mad, and rightly so. It made me mad too. So mad, in fact, that I removed libdomainkeys from my toolchain completely. Yes, I could have added extra layers to my script to detect the problem, but that’s beside the point: that kind of behavior by a tool like that is malicious.
The second problem I ran into is, essentially, an oversight on my part. My signing script chose a domain (correctly, I might add), and then handed the signing utility a filename for the private key of that domain. HOWEVER, since I didn't explicitly tell it what domain the key was for, it attempted to discover the domain based on the other headers in the message (such as Return-Path and Sender). This auto-discovery was only accurate for users like myself who don't use Gmail to relay mail through our server. But for messages from Gmail users, who relay via SMTP-AUTH, the script would detect that the mail's sender was a Gmail user (similar problems would arise for mailing lists, depending on their sender-rewriting behavior). So it would assume that the key it had been handed was for that sender's domain (i.e. gmail.com), and would create an invalid signature. This, thankfully, was easy to fix: merely adding an explicit --domain=$DOMAIN argument to feed to the signing script fixed the issue. But it was a weird one to track down! It's worth pointing out that the libdomainkeys dktest utility does not provide a means of doing this.
Anyway, at long last, mail seems to be flowing to my Gmail users once again. Thank heaven!
So, Google, vaunted tech company that it is, seems to be doing something rather unfortunate. One of my friends/users, who uses Gmail as a repository for his email, recently notified me that email sent to him from other Gmail accounts showed up as "potentially forged". Interestingly, this only happened for email that was sent from Gmail to an external server (i.e. mine) that then got relayed back to Gmail. Examining the "raw original", here are the differences:
Now, since this doesn’t happen to messages sent from-Gmail-to-Gmail directly, and I’m very certain that my email server isn’t doing it either (I sniffed the outbound SMTP traffic to prove it), I’m guessing that this message… “normalization”, for lack of a better term… is a function of their ingress filter. But all of those changes are enough to invalidate the DKIM signature that Gmail generated… or, I suppose, anyone else’s DKIM signature.
<eye-roll>
Come on, Google, get your act together.
This page contains an archive of all entries posted to Kyle in the People Suck category. They are listed from oldest to newest.