
WordPress 2.5 – March 10

WordPress 2.5 - Write Post

WordPress 2.5 comes out in less than two weeks! I read something about the “Media Uploader” on the development blog and, curious, searched for more details, which led me to this WordPress 2.5 Beta demo site. The login name is admin and the password is demo.

Aside from the stunning visual overhaul, several features have seen vast, immediately noticeable improvements:

  • Customizable thumbnail (and medium) image sizes — this has been requested forever, and WordPress finally listened. Used to be that every image you uploaded was copied and resized to a width of 128 pixels for automated thumbnail creation, which made a potentially cool feature virtually useless. Now they just need to introduce cropping.
  • Better private post protection — keeping posts private is so unintuitive in WordPress 2.3. The post has to be marked as “Private” using a radio button, but hitting the “Publish” button instead of the “Save” button after editing a private post stupidly discards that preference. Now privacy is controlled by a checkbox, and the setting sticks no matter which button you hit.
  • Tag management — I guess we all knew this was coming. The developers were apparently so eager to get tag support out the door in 2.2 (or whenever it was) that they didn’t mind that tags couldn’t be edited after publishing. Tagging a post just threw tags into the dark recesses of the WordPress database, where they were inaccessible except as part of a tag cloud on your site. But now we have an interface to add, edit, and delete them, just as we do with categories.

WordPress 2.5 - Media Uploader

It’s pretty sweet. The media uploader is particularly awesome. I can’t wait to install it. The designers still assume all their users can’t read fonts smaller than 16pt. I guess they’re trying to ensure they look Web 2.0 enough. And it looks like the Shuttle Project isn’t going anywhere after all.

Template Feed/Archive URL Structures for Various Blogging Platforms (Updating)

Still very interested in web feeds, both practically and philosophically, I subscribe to them often. Occasionally I’ll find a site that seems as though it should have a feed but contains no link to one, either in a meta declaration or in the body of the site. Still, most publishing platforms generate feeds, regardless of whether their users make the feed URLs public. In cases like this, it’s fun to poke around and see if I can’t guess the correct URL.

The same goes for archives; certain Blogger users, for example, apparently turn archive links off, so all that’s easily visible is the last ten posts or so on the front page. But, of course, as is especially the case with something as prefab as Blogger, the archives are accessible through a very predictable URL schema.

And what about comment feeds? These are even more rarely linked to, but in many cases they do exist.

Here are the ones I know so far. I plan to update this post as I discover more. This is as much for my reference as it is for yours. So, bookmark it, and, y’know, subscribe to the comments. If you know of any other schemata, please comment. And if you’d like to create your own feeds from any site, give Feed43 a shot. It’s a bit tough to learn, but I’ve successfully made several useful feeds with it.

MySpace

  • All blog posts: http://blog.myspace.com/blog/rss.cfm?friendID=[friendID]
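
In fact, this kind of guessing is easy to automate. Here’s a minimal sketch in Python; the candidate paths are just common defaults rather than an exhaustive list, and the domain is made up:

    from urllib.request import urlopen
    from urllib.error import URLError

    # Common default feed paths; not exhaustive, just the usual suspects.
    CANDIDATES = [
        "/feed/",                # WordPress
        "/feeds/posts/default",  # Blogger
        "/rss.xml",
        "/atom.xml",
        "/index.xml",
    ]

    def probe_feeds(base_url):
        """Return candidate feed URLs that respond and look like XML."""
        found = []
        for path in CANDIDATES:
            url = base_url.rstrip("/") + path
            try:
                with urlopen(url) as response:
                    # Feeds usually identify themselves as some flavor of XML.
                    if "xml" in response.headers.get("Content-Type", ""):
                        found.append(url)
            except URLError:
                pass  # nothing there; keep guessing
        return found

    print(probe_feeds("http://someblog.example.com"))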

Some day, Songbird will:

  • have a proper CoverFlow clone that doesn’t lag or rely on Java (as AlbumApplet does), and that allows custom locations for album art on the drive.
  • monitor folders for new music.
  • have an integrated BitTorrent client that puts music from trackers directly into your library.
  • jump to the location in a page where the currently playing mp3 was found.
  • properly recognize all XML podcasts (a known issue).
  • allow you to browse by when albums were added, when they were played (not just last played, but over their entire history), and by hotness.
  • submit to Last.fm.

And on that day…

How to Save One, Many, or All Items from a Google Reader Feed Locally

Google Reader, employing Google’s petabytes of storage, archives every feed item it’s ever pulled for you. This has always amazed me, as I’m sure I and everyone else must be using far more storage in Reader than the 5 gigs we get from Gmail. Still, they don’t have much of a choice; it wouldn’t do anybody any good if you could only see the 10 or 20 items present in a feed’s XML file at any given time. And even though they’re probably clever enough to store only one copy of each item for that item’s hundreds of thousands of readers, they’ve practically built a third copy of the internet (after their cache).

A nice side effect of this archiving is that whenever content you’ve subscribed to disappears from the web, you’ll still be able to access its (admittedly homogenized) Reader copy, forever; “forever” here meaning “presumably for as long as Google is around.” When (if?) Google dies, will its data die with it? Despite my intuition that Google will long outlast current notions of what computers are and how they work, I still don’t like entrusting important data to other people, much less data that is accessible only through the web. I want a local copy.

But they don’t make it easy for you. Reader is all AJAXed out, so even simple page saves don’t work. Copying/pasting would be a nightmare. Screenshots? Too sloppy. Emailing copies of each item? Too time-consuming. Tagging them with a special tag, making that tag’s feed public, then subscribing in, like, Thunderbird or something? Even if that weren’t absurdly roundabout, the public feeds only have twenty or so items.

I’m talking specifically about a blog I loved, but that up and disappeared one day, completely, leaving the only copies of the lost data scattered throughout Netvibes, Newsgator, Bloglines, and Reader. Google searches turned up nothing like a straightforward guide to saving from Reader, which surprised me. But there were clues, and using only a couple of tools, I finally got it. It’s actually pretty easy; I was able to save 118 items in about ten minutes with this method. Let me show you it.

You need Firefox, the two plugins Greasemonkey and ScrapBook, and the Greasemonkey script Google Reader Print Button. Then it’s just a matter of clicking “Print” for each item you want to save, which opens it in its own tab; using ScrapBook’s “Capture All Tabs…” function, which automatically does a “Save Page As, Web Page, complete” into your %AppData% folder for each tab; and finally, optionally, using ScrapBook’s “Combine Wizard” (in the Tools menu of the ScrapBook sidebar [Alt+K]) to put all the items into a single folder with a single index.html file. (If you’d rather do that last combining step yourself, there’s a rough sketch at the end of this post.)

The “printing” part is the most cumbersome, but goes by pretty quickly with the repetition of a series of clicks and keystrokes:

  1. Click “Print”
  2. Press Esc (to close the print dialogue)
  3. Press Ctrl+Tab (to get back to Reader)
  4. Press J (to go to the next feed item)

Do that mindlessly for a couple minutes, and they’ll all be there, waiting to be saved. I’m gonna put the word “disk” in here too so that anybody Googling for a solution might find this.
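
And as promised, if you’d rather skip the Combine Wizard, that last combining step is simple enough to do yourself. Here’s a rough Python sketch; the capture folder path is hypothetical, and it assumes each capture folder has an index.html at its top level, which it mashes together into one big page:

    import os
    import re

    CAPTURE_ROOT = r"C:\reader_captures"  # hypothetical; wherever you put them

    def extract_body(html):
        """Grab whatever sits between <body> and </body>."""
        match = re.search(r"<body[^>]*>(.*?)</body>", html,
                          re.DOTALL | re.IGNORECASE)
        return match.group(1) if match else html

    parts = []
    for name in sorted(os.listdir(CAPTURE_ROOT)):
        index = os.path.join(CAPTURE_ROOT, name, "index.html")
        if os.path.isfile(index):
            with open(index, encoding="utf-8", errors="replace") as f:
                parts.append(extract_body(f.read()))

    # One big page with a rule between items, in lieu of the Combine Wizard.
    with open(os.path.join(CAPTURE_ROOT, "combined.html"), "w",
              encoding="utf-8") as f:
        f.write("<html><body>\n" + "\n<hr>\n".join(parts) + "\n</body></html>")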

Firefox 3 Rendering Improvements

Firefox 3 is scheduled to be released later this fall; I haven’t really been following its development, but one thing I have heard about and am excited about is its (or, more accurately, Gecko’s) new graphics library, Cairo.

Cairo Image Resizing

First I heard that it would resample rather than simply rescale images, as demonstrated in the image above (via Acts of Volition).

Later I learned that it will also render fonts more smoothly. I enjoy the soft way pages look in Safari for Windows, the result of a different rendering engine, WebKit, so this is something I’m really looking forward to. Here’s an example of Cairo’s font rendering, as seen in Camino 1.2+ for Mac, via hicksdesign:

Cairo Font Rendering

There are very specific reasons for the intentional differences in these approaches to font rendering. It’s a matter of personal preference, and I think my preference will be for Cairo. Some are floored by the superiority of WebKit, and designer Jeffrey Zeldman makes a solid, objective case for it; others are horrified.

Finally, Gecko’s non-standard CSS property -moz-border-radius, a precursor to CSS3’s border-radius property, will make image-less rounded div corners easy and pretty (via Acts of Volition):

Cairo Border Radius
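
Usage looks something like this; you’d declare the prefixed property alongside the draft standard one, since only Gecko understands the former:

    /* Image-less rounded corners: Gecko's prefixed property plus
       the CSS3 draft property for future browsers. */
    .rounded {
        -moz-border-radius: 10px;  /* Firefox 3 / Gecko 1.9 */
        border-radius: 10px;       /* CSS3 draft */
    }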

I would have posted screenshots of my own, but I don’t trust these alpha builds not to eff things up.

FFmpeg Quality Comparison

Flash video is so great.

Anyway, I used to use MediaCoder to convert to Flash video, but when it gave me errors and refused to tell me the specifics of those errors, I took it old school to the command prompt with FFmpeg (which MediaCoder uses anyway). FFmpeg gives you a lot of useful info about the source file you’re encoding, such as audio sampling rate, frame rate, etc.

Wanting to find a balance between picture quality and streamability, I began encoding a short length of AVI video at different compression levels. FFmpeg calls this “qscale” (a way of representing variable-bitrate quality, much like LAME’s -V parameter), and the lower the qscale value, the better the quality. The available qscale values range from 1 (highest quality) to 31 (lowest quality). Anything past qscale 13 looks unacceptably poor, so that’s as far down the quality scale as I went for the purposes of this test.

I encoded 3:14 of an AVI, resizing it to 500×374 pixels and encoding the audio at 96kbps and 44.1kHz, which sounds fine and is a negligible part of the ultimate file size, so going lower wouldn’t be very beneficial. Plus, I find that good audio can create the illusion that the whole thing is of higher quality. Poor audio just makes it sound like “web video.”
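
If you want to run the same experiment, the batch is easy to script. Here’s a rough Python sketch; input.avi is a placeholder, and flag spellings vary across FFmpeg versions (newer builds want -q:v and -b:a instead of -qscale and -ab):

    import os
    import subprocess

    sizes = {}
    for qscale in range(2, 14):  # qscale 2 through 13
        out = "test_q%02d.flv" % qscale
        subprocess.check_call([
            "ffmpeg", "-y",
            "-i", "input.avi",       # placeholder source clip
            "-s", "500x374",         # resize the video
            "-ar", "44100",          # audio sample rate
            "-ab", "96k",            # audio bitrate
            "-qscale", str(qscale),  # variable-bitrate video quality
            out,
        ])
        sizes[qscale] = os.path.getsize(out) / 1e6  # rough MB

    for q in sorted(sizes):
        print("qscale %2d: %6.2f MB" % (q, sizes[q]))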

Here are the results, courtesy of Google Spreadsheets:

FFmpeg quality vs. filesize chart

The filesize, of course, goes down as quality goes down. And the loss in filesize also decreases, not just in amount, but in percentage as well, as indicated by the red line. For instance, the value of the red line at qscale 3 is 33.97%, which means that in going from qscale 2 to qscale 3, 33.97% of the filesize is shaved off.

However, because these losses are not perfectly exponential, I knew that there had to be qscale values that were more “efficient,” in a sense, than others — values that, despite being high, and causing a lower change in filesize than the previous step in qscale, still caused a comparably large change in filesize. For instance, still looking at the red line, you’ll notice that going from 2 to 3, as I said, shaves off 33.97% of the filesize, while going from 3 to 4 only shaves off 23.93% of the filesize; and that is a 29.56% decrease in change-in-filesize, which is a relatively large cost. We want the change-in-filesize to remain as high as possible for as long as possible.

Now, if you follow the red line from 4 to 5, you’ll see that that’s a 20.32% loss in filesize, which is pretty close to our previous 23.93% loss in filesize in going from 3 to 4. In fact, we’ve only lost 15.09% of change-in-filesize from the previous step. So these are the values we really want to examine: change in change-in-filesize, represented by the orange line.

This is nowhere close to exponential, nor does it follow any predictable decline. It darts around, seemingly at random. And we want to catch it at its lowest values, at points that represent changes in qscale that were nearly as efficient as the previous change in qscale. So the most desirable qscale values become, quite obviously, 5, 9, and 11.
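
If charts aren’t your thing, the same arithmetic is easy to replicate. A quick Python sketch, with made-up filesizes standing in for my actual measurements:

    DURATION = 194  # the clip is 3:14 long

    qscales = [2, 3, 4, 5, 6]                 # a slice of the test range
    sizes   = [76.5, 50.5, 38.4, 30.6, 25.1]  # MB; illustrative only

    # Red line: percent of filesize shaved off by each qscale step.
    shaved = [100.0 * (a - b) / a for a, b in zip(sizes, sizes[1:])]

    # Orange line: change in change-in-filesize, i.e. how much of the
    # previous step's shaving this step gives up.
    delta = [100.0 * (a - b) / a for a, b in zip(shaved, shaved[1:])]

    for q, s in zip(qscales[1:], shaved):
        print("going to qscale %d shaves %.2f%% off the filesize" % (q, s))
    for q, d in zip(qscales[2:], delta):
        print("qscale %d costs %.2f%% of the previous step's shaving" % (q, d))

    # Download rate needed to stream smoothly, as discussed below.
    for q, mb in zip(qscales, sizes):
        print("qscale %d needs %.2f KB/s" % (q, mb * 1000 / DURATION))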

What this means is that if quality is your primary concern (and you’re not crazy enough to encode at qscale 1), go with 5. qscale 5 turns the 3:14 clip into 30.62MB, which requires a download rate of 157.84KB/s to stream smoothly. qscale 11 will give you about half the filesize, and require a download rate of 77.37KB/s. But, because that’s the level at which picture quality really begins to suffer, and because most people don’t really mind buffering for a few seconds initially, I’m probably going to stick with qscale 9, whose videos take up 91.58 kilobytes per second, and which is by far the most efficient qscale anyway, with only a 4.92% change in change-in-filesize.

One caveat: This whole examination presupposes (as far as I can tell) that if it were possible to measure and chart the changes in the actual perceived visual quality of videos encoded at these qscale values, the curve would be perfectly geometric or exponential, with no aberrations similar to those above, and with all extrapolated delta curves showing no aberrations either. Given that, it might be easier to believe that every step you take through the qscale is of equal relative cost, and that there are no “objectively preferable” qscale values. But that is a lot more boring.

Happy Birthday, Forum SpamBots!

In dealing with spambots as the administrator of a couple of phpBB forums, I’ve noticed that most of them, when registering, give the birthdate March 28, 1983. I thought there might be some explanation for this consistency — like the January 1, 1970 (i.e., the Unix epoch) dating of phantom phpBB posts — but a Google search for “march 28 1983” only turns up threads on various forums mentioning the coincidence, without offering any explanation.

I did find out that these spambots are likely the product of XRumer, a spamming tool, so my guess is that March 28, 1983 is the default value of a configurable birthdate, but I’m not really interested in installing it to find out.

Prodigy

In the early Nineties my family had one computer, a Zeos 386, and Prodigy was my very first experience with anything resembling the internet, including e-mail. I spent a lot of online time playing this labyrinth game called Mad Maze, which you can once again play in its entirety, as long as you use Internet Explorer.
