Friday, November 18, 2011
Google's onmousedown link trickery
So you do a Google search. You find something interesting, but instead of clicking on it, perhaps you just want to copy the link. Say you want to IM it to a friend or paste it into a malicious-site search service. So you right-click on the link and choose "Copy Link Location" (or whatever it's called for you people on Chrome/IE/Safari/Opera/RockMelt (loljk)). Then you paste it into your destination, but what you get looks like this:
Obviously Google's using some redirection trickery for some kind of internal purpose. But when you hover your mouse over the link, the url that shows up at the bottom of the window is the normal, short, non-Google link! And that hover-over url never lies, right? How is this happening?
Well, the good news is that technically the hover-over url isn't lying. It is indeed the correct url at that moment. But try right-clicking on the link, and before you do anything else, notice the hover-over url again*. It's changed! It seems Google is using a JavaScript trick (an onmousedown event handler) to show us the expected url at first, then swap in the redirect at the last second.
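The general shape of the trick looks something like this. A minimal sketch (the markup and function name are my own assumptions, not Google's actual code): the anchor starts out with the clean destination in its href, and a mousedown handler rewrites it just before the click, or the right-click menu, completes.

```javascript
// Sketch of the technique, not Google's actual code. Assume markup like:
//   <a href="http://example.com/" onmousedown="return rewrite(this)">result</a>
// The mousedown event fires before the click finishes and before the context
// menu (and its "Copy Link Location" entry) opens, so the status bar shows
// the clean url right up until you press the button.
function rewrite(anchor) {
  anchor.href =
    "http://www.google.com/url?q=" + encodeURIComponent(anchor.href);
  return true; // let the click proceed normally through the redirect
}
```

Since the status bar reads the href at hover time, it honestly reports the clean url; the href only changes once the mouse button goes down.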
Sneaky? Maybe.
Annoying? Definitely.
Clever? Absolutely.
*A note in case you try this yourself: this only seems to happen on like 1/4 of the links on a typical page. And there's no way to predict which ones it will affect.
Sunday, November 6, 2011
What happened at 1:17AM, November 6?
I just realized a weird side effect of daylight saving time. In the fall, it creates an entire hour where times are non-unique. What I mean is: if I said I brushed my teeth at 1:17AM on November 6, would you know when that happened? It could be the first time 1:17 rolled around or the second.
I only thought of this as I sent a text during those ambiguous two hours and saw the timestamp. I use Google Voice, which shows the time sent next to each text. So I wondered if I sent one at 1:25 in the first hour and then at 1:17 in the second hour, would it show the one at 1:25 before the one at 1:17? I guess so.
Of course this mostly only matters if you think about software that timestamps things. Which is one reason why Unix time was invented, I guess. But I also start thinking about things like police reports or other important documents where you might say X happened at 1:17AM on the night of November 6. That doesn't specify exactly when it happened! There's no standard way to indicate what exact time you're talking about.
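To make the ambiguity concrete, here's a minimal sketch in JavaScript (assuming a US Eastern clock and a runtime with timezone data, like modern Node): two instants a full hour apart in real time that both read "1:17 AM" on November 6, 2011.

```javascript
// Two distinct instants, one hour apart, with identical wall-clock readings.
// US DST ended at 2:00 AM EDT (06:00 UTC) on November 6, 2011.
const first = new Date(Date.UTC(2011, 10, 6, 5, 17));  // 1:17 AM EDT (UTC-4)
const second = new Date(Date.UTC(2011, 10, 6, 6, 17)); // 1:17 AM EST (UTC-5)

const clock = new Intl.DateTimeFormat("en-US", {
  timeZone: "America/New_York",
  hour: "numeric",
  minute: "2-digit",
});

// Both format as "1:17 AM"; the wall-clock time alone can't tell them apart.
console.log(clock.format(first), clock.format(second));
```

So a Google Voice timestamp of "1:17 AM" genuinely names two different moments, and only the underlying UTC instant (or an explicit UTC offset) disambiguates them.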
Again, this is one reason computers use Unix time, though I just discovered that Unix time has this problem too. Its rule is that it always increases by exactly 86,400 seconds per day. But some days are actually 86,401 seconds long because of leap seconds, so Unix time has to repeat a second to stay on schedule. So again, we find that some times, like 915148800, are ambiguous. That really seems to pose a problem for software like server logging. Why would you make the same mistake, Unix guys?
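A quick check of that particular number (a sketch; JavaScript Dates count Unix time and ignore leap seconds entirely, which is exactly the point):

```javascript
// 915148800 decodes to midnight UTC, New Year's Day 1999...
const d = new Date(915148800 * 1000);
console.log(d.toISOString()); // "1999-01-01T00:00:00.000Z"

// ...but a positive leap second was inserted just before that midnight
// (1998-12-31T23:59:60Z), and POSIX time assigns it the same count as the
// following second. So 915148800 names two different real-world seconds,
// and there is no Unix timestamp at all for 23:59:60.
```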
Note: Hmm, I wonder when I posted this? I guess we'll never know!
Friday, November 4, 2011
Luddite Fallacy: not wrong anymore?
Smash away!
I just read this article suggesting that the current job crisis might be a symptom of a larger trend: disappearing middle class jobs. The author cites technology and outsourcing as the causes. The technology part reminded me of an idea that's been forming in my head for a while.
Technology is bottoming out the cost of anything whose price was held up by the difficulty of communication or automation. The news, music, and even postal industries all were undercut when the internet made it dirt cheap to transfer information.
Of course, it also made all those goods cheap and plentiful for everyone. For a while I thought that was part of the answer to why technology doesn't create the massive unemployment the Luddites feared. But lately I've realized this time might be different.
So maybe the way this is working is that there are benefits to society but the benefits don't address the drawbacks. So we get to live in a world with an abundance of information always at our fingertips but that doesn't help the fact that none of us have jobs.
Thursday, October 13, 2011
Steve Jobs is bigger than Michael Jackson
Remember in 2009 when Michael Jackson died, broke the internet, and the entire rest of the summer was filled with people playing his music? Well I remember at the time checking out the spike in searches for "Michael Jackson" and seeing it indeed exceeded any single event I could think of, including Obama in November '08.
Well, this last week I got notified of Steve Jobs' death by two different groups of friends within an hour and then realized the trouble I'd had accessing Wikipedia earlier that night happened exactly when the news broke. I thought "is this going to be a mini-Michael Jackson thing?" Turns out, it's not a "mini" one:
I found it interesting to see that it's not just people over in the tech world who find this to be hugely significant news.
Postscript: The point of this post isn't to comment on the actual event. First, and of primary importance, a man died after fighting a terrible disease, which is sad for him and his family. Second, to be clear: despite having significant problems with Apple, I'm honestly quite worried about the future of computers without Apple pushing everyone to make better and better devices. They have problems with openness, but Google, Microsoft, and everyone else have problems with making things intuitive, tasteful, and, above everything, usable. Apple pushes the rest to be better.
[Image: Google Trends data: interest in Michael Jackson trumped Obama on election day and inauguration]
[Image: Google Insights for Search data: Obama, Michael Jackson, and Steve Jobs' spikes. Steve Jobs' is so recent it's squashed over to the right, but look closely at the top of that peak.]
Thursday, September 8, 2011
Why don't we learn programming by example?
I've spent a lot of the past year in a highly concentrated study of programming. I've been learning both the semantics of programming languages and the high-level art/philosophy of coding.
And there's a lot of advice flying around. It doesn't take you long to discover that programming is a field whose inhabitants are keen to look at it not just as a job, but as a highly important zen/philosophical/artistic way of life. They care a lot about how you code. Comment your code, don't overcomment your code, use top-down design, use bottom-up design, code for readability, code for efficiency, use descriptive variable names, refactor often, modularize everything, and don't break out of a loop early unless you turn around three times and spit first.
I'd certainly like to follow all of it. I'm trying to synthesize it all into some idea of the right way to do it. But something I notice is that there are terribly few examples to go by. I find it odd that the experience of learning programming, be it from a school, a book, or a website, is not full of examples of other people's real-world programs.
I've realized that learning to write well-written code is very similar to learning to write well-written English. It's hard to declare rigid rules that you can just follow to get there. There's plenty of advice, but advice in a vacuum isn't extremely useful. You need positive examples of good writing. A lot of what makes effective writing is that it's easy to follow for people used to it being laid out in a certain way. It also uses constructions that are efficient and effective. That's for both written English and written programs.
You learn to write English well by reading books and essays written by the masters. But you're supposed to learn to write code well by... writing code. I find it strange that there aren't far more examples of well-written programs in books and university classes. There are whole books and websites full of collections of essays and stories! Why not programs? There are certainly enough people who care about it, let me tell you.
This post was prompted by reading an essay by Steve Yegge on overly-commented code by novice programmers. I'd always heard people putting an emphasis on well-commented code, so it was interesting to hear the arguments for why it can get cumbersome. I thought it was compelling, and I'd like to put the advice to use in my coding. I want to learn, Steve Yegge! Really! So please, just show me how it should be done! I want to be an E.B. White or Christopher Hitchens, but I can't do it without examples!
Tuesday, September 6, 2011
Visa's post-credit-card-fraud strategy a bit odd
I recently got notified by Bank of America that they'd detected fraud on my account. Meaning someone nasty got a hold of my info. They told me they were changing my card number and mailing a new one to me.
So today I got it and noticed that they'd only changed the last four digits. Having spent a lot of time this year thinking about security (thanks, Security Now), this struck me as strange. Have you noticed how the last four digits are the ones everyone seems to just give away anyway? On receipts, online banking, mailings, etc., they always indicate your card by writing "XXXX XXXX XXXX 1632."
I used to think they accepted the lowered security of those last four because you still had the other twelve that are never given out (ignoring the fact that the first ~4 are entirely deterministic). But now I can assume there's someone out there with my old number, and the only thing Visa gave me to protect against them is those last four, weakly guarded digits.
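As an aside, it's actually worse than twelve: the last digit of the whole card number is deterministic too, since it's a Luhn checksum computed from the other fifteen. A sketch of the computation, using a made-up Luhn-valid number rather than a real card:

```javascript
// Luhn check digit: working from the right of the 15 "payload" digits,
// double every other digit (starting with the rightmost), subtract 9 from
// any result over 9, sum everything, and pick the digit that brings the
// total up to a multiple of 10.
function luhnCheckDigit(payload) {
  let total = 0;
  const digits = payload.split("").map(Number).reverse();
  digits.forEach((d, i) => {
    if (i % 2 === 0) { // these positions get doubled
      d *= 2;
      if (d > 9) d -= 9;
    }
    total += d;
  });
  return (10 - (total % 10)) % 10;
}

// "4539148803436467" is a commonly used Luhn-valid dummy number:
console.log(luhnCheckDigit("453914880343646")); // 7, the real last digit
```

So of the sixteen digits, the first few are fixed by the issuer and the last one is pure checksum; the "secret" part is smaller than it looks.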
Now, I know the chances of this person ever finding those last four are vanishingly small. It's probably not even someone close by, and they're not going to be going through my receipts or mail. Plus, I omitted the part where the CSC (those 3-4 digits on the back) is also different. So I'm not actually worried.
It's just funny that while cybersecurity people are arguing about researchers who figured out how to break AES encryption in 190 quadrillion years instead of 760 quadrillion years, in the credit card world they're pretty much saying "Hey c'mon, what're the odds someone finds all four of these digits?"
And hey, maybe they're being a bit more realistic.
image credit: clanao.com (Google Images)
Wednesday, August 31, 2011
Anyone still thinking of buying a netbook? Get 2GB of RAM.
Well I was, and I did, and it's still awesome. Despite the fact that apparently tablets are killing netbooks.
Anyway, the point of this post is that when I was getting one, I wondered whether 1GB of RAM was really enough in this day and age. Out of the sample of Best Buy and Costco, few netbooks had more than 1GB. So I thought, "Maybe the manufacturers know best? Maybe I don't need 2GB?" Googling "1GB enough" didn't answer my question. So here's my contribution: 1GB is not enough.
I got an Asus Eee PC 1015PE with 1GB of RAM. It was very slow, and I could tell it was hitting the pagefile all the time. After I upgraded it to 2GB, it sped way up. I also saw that upon startup it was already using 700MB of RAM (with Windows 7 Starter), and with Firefox regularly taking 300MB on its own, I knew 1GB wouldn't cut it. These days when I open Task Manager, it's usually in the neighborhood of 1.1GB in use. Modern OSes and browsers have reached that point. Get 2GB.
image credit: netbookreviews.net
Thursday, August 25, 2011
Yes, West Coast, we know.
Last earthquake post, I promise, but this just sums up very well all the comments from the West Coast.
(via pleated-jeans)