Get it All Together

Amid all the tragedy of the Japanese tsunami situation, we got a chance to inject a bit of normalcy into our day and go to the mall. While there, I had a chance to check out a Motorola Xoom at the Verizon store. Of course, as any good developer would, I immediately began running through a battery of my own websites and projects to see how they compared on the native browser.

The result was shockingly disappointing: not only did the Xoom’s browser appear not to want to render the web font packages I had installed on one of my sites, but it did not degrade properly either. The font packages are Font Squirrel packages and work decently with every other browser I’ve encountered. But on the Xoom, serifed fonts which had the base “serif” as the last choice in their font-family stack ended up rendering as sans-serif text.

Is that because the browser is choking on the web font and just giving up? Is that because there is no default serifed font (how is that even possible?)? I do not know, and I only got a few moments of playing before the inevitable cheery salesperson investigated my interest and harshed my groove. So I didn’t get a chance to find out. But boy, would I like to!
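
If I get another crack at one, the first thing I’ll try is a quick console or bookmarklet check of what the browser thinks the font stack is. This is only a sketch: the selector and family names are placeholders for whatever the site’s stylesheet actually declares.

// Hypothetical spot check: is the font-family rule being applied at all,
// or is the whole declaration being dropped? Assumes a stylesheet rule like:
//   h1 { font-family: "MyFontSquirrelSerif", Georgia, serif; }
var heading = document.getElementsByTagName("h1")[0];
if (heading) {
  var style = window.getComputedStyle
    ? window.getComputedStyle(heading, null)
    : heading.currentStyle; // fallback for engines without getComputedStyle
  // getComputedStyle reports the computed font-family stack, not the face
  // actually painted on screen. If this doesn't end in "serif", the rule
  // never applied; if it does, the browser loaded the rule but is skipping
  // the fallback after failing with the web font, which matches what I saw.
  alert(style.fontFamily);
}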

There’s lots of information available in the WordPress Codex about how to create themes for your blog, all of it dealing with the filters and actions you can use to manipulate the active content on your blog in cool ways. It goes without saying that those are important pieces of information, and in fact there are some outstanding tutorials for putting together a theme that you should definitely check out. WPBits is one such resource.

But I’d like to talk about some concepts that happen before you get to the active content: the HTML, and how you go about designing a theme which has to exist in all those varying pieces yet all go together to create one cohesive whole. This is a very high-level tutorial, meaning that specific details will be omitted in order to introduce the most basic concepts. I’m writing more about the workflow than about the process, if you follow me!

Anyone who’s serious about blogging – regardless of what it is they blog about – has found themselves consumed in the business of analyzing traffic.  If not for financial reasons or effective political messaging, then mere vanity compels us to find out more about the people who visit our pages, especially since there are so many people who view blog after blog without commenting or contributing.

But between your web-host’s analytic software, log files, link popularity checks and a whole host of other sources, you’d think that getting to know your audience would be a lot less difficult than it’s proven to be.  Knowing how many hits you get a day is fine; knowing how many bots are indexing your stuff is great; knowing what the average Pages Per Visit is helps on some levels.  But what you really want to know is: when a person checks out my page, how are they seeing it?  Where do they come in, where do they leave, what do they see along the way?

Enter pMetrics by Performancing.  Finally, we have a metrics system that not only reports the raw, statistical data of our webpage, but also offers some insight into the browsing patterns of our readers.  Other packages attempt to do so, but in the case of the most popular ones, the data is so packed in amongst the interface that it becomes sterile and meaningless.  One of the major advantages of pMetrics is that, despite being packed with hugely helpful information, the interface is clean, baby!  Nice and readable.

With pMetrics, you get to use their “spy” mode and actually see folks on your page in real time, following them around the site.  Rather than boiling down the user experience into PPV, you can click on the “Visitors” tab and check out individual user experiences, seeing where they came in, everything they clicked on while on your site, and where they bounced.  Additionally, you can see their referrer and if they’ve come in on a search, you get to see their search terms.

It’s hard to overstate how critical this kind of information is.  It’s equally hard to overstate how scarce it was prior to the launch of pMetrics.  PPV, while important, reveals nothing about *why* people view the number of pages they do.  Viewing individual users’ experiences can help you better understand why some people come to your page and bounce while others stay for an hour.  Are those early bouncers landing on pages with no compelling links to anywhere else on your site?  Are they going to pages with compelling links out of your website?  Did you make a kind of “honey pot” of content for some users but not others?

Then for perspective, you can go back and look at that all-encompassing image of your traffic in a whole new light.  pMetrics offers the standard views of top referrers, content, search terms and others for your viewing pleasure.  But this time when you look at raw data, you know a whole lot more about how that data got where it was and what you might need to do about improving your site’s stats.

All in all, I’m very glad to have this service in my arsenal of SEO tools.  Best of all, the service is brand new, so improvement is bound to happen.

To that end, let me point out a few areas of potential improvement:

For one, while it’s fantastic seeing all the individual user experiences, there’s no reason to leave out the “bot experience.”  Those of us using Site Maps would really like to know exactly how the bots are moving through our sites, and even if you aren’t using Site Maps, knowing where the bots are going and how they’re indexing may give you clues as to where the site needs to be improved for indexing.

Another suggestion would be to develop a kind of “heat map” technology that allows us to see our site color coded to display where the majority of people are clicking.  Where they click is where they look, and that’s probably where you want your most important content.  You can sort of get this from context in the “Visitors” section, but a more intuitive UI would be hugely helpful.

God, I just love, love, LOVE Firefox.  Is there a web developer alive out there who disagrees?

While updating my Firebug extension, I noticed that one of the bug fixes for 1.0.1 was a compatibility issue with another extension, the HTML Validator.

“Pray tell,” I asked, “what could this machine be?”  Well, as it turns out, it’s yet another breathtakingly useful tool in the web developer’s arsenal.  This one validates a page against the W3C standards without posting it to the W3C site for the results:

Html Validator for Firefox and Mozilla

HTML Validator is a Mozilla extension that adds HTML validation inside Firefox and Mozilla. The number of errors of an HTML page is shown in the form of an icon in the status bar when browsing. The details of the errors are seen when looking at the HTML source of the page.

Of course, this simple explanation sells the plugin somewhat short.  The big thing is this: ordinarily, you would need to go to the W3C validator and put in the URL, then it spits back a bunch of errors and warnings with line numbers.  All well and good, but now you have to go through your code to find out where that line occurs. . .  and oh, by the way, if you’re writing PHP-generated HTML, then there isn’t a standing document where you can find that line.

Aggravating?  Oh, you betcha!  But with the Validator extension, just as with FireBug, you get to see the currently laid-out HTML as it happens and find exactly what is causing the error, thus saving yourself considerable time digging through your PHP include files to figure out which one’s causing the freakin’ error message!!!

Whilst playing around with the latest development on the DragonFlyEye.Net site, I’ve rediscovered a plugin I installed a while back, FireBug.  FireBug is a plugin for Firefox that. . .  well, . . .  it does so many things, it’s tough to put it in one sentence.  Actually, they did a good job of it here:

Firebug – Web Development Evolved

Firebug integrates with Firefox to put a wealth of development tools at your fingertips while you browse. You can edit, debug, and monitor CSS, HTML, and JavaScript live in any web page.

Even this is something of an understatement.  I had been using FireBug as a way to see what errors were coming up in my JavaScript, but basically, that’s nothing more than a shortcut to the JavaScript Console.  When I checked out the FireBug site again, I noticed that it said it required the DOM Inspector component of Firefox to unleash its full potential.  So, I uninstalled the old version and installed Firefox 2.0.0.1 (which also rocks, by the way).  That’s when the real fun started.

Most of us who work with web development know that the biggest hassle, especially with JavaScript, is knowing exactly what’s been output and what’s been changed in the HTML when an application runs.  Even if you right-click and “View Source,” you still end up with *none* of the changes added by the JavaScript.

But not with FireBug, baby!  You can actually see the DOM as it happens at any specific moment, and watch the changes happen.  Holy crap, is that ever helpful!  No more guessing, no more hoping you’ve nested things exactly right.  Now, you can see for sure.  Best of all, they package it in the ever-helpful DOM format with plus sign navigation like you’re used to with Windows Explorer.
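
To make that concrete, here’s a trivial, made-up example of the kind of change “View Source” will never show you but FireBug’s HTML view will:

// Hypothetical script-generated element: "View Source" still shows the
// markup the server originally sent, but FireBug's HTML/DOM view shows
// this div sitting in the live tree the moment it's appended.
window.onload = function () {
  var note = document.createElement("div");
  note.id = "generated-note";
  note.appendChild(document.createTextNode("I only exist in the live DOM."));
  document.body.appendChild(note);
};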

And how about those CSS woes associated with JavaScript?  Is the DIV that I’m adding really going to inherit the properties I expect, or is it picking up something else entirely?  Well, without FireBug, you just have to try it and see.  *With* FireBug, whilst traversing the DOM, you can click on any section and see its associated styles.  You see what style is being applied, what style is being inherited, and what the fully-computed style will be:

Firebug – Web Development Evolved » Blog Archive » Computed Style

I promised myself I wouldn’t add any more features to 1.0, but with the freshly posted Beta 9 I broke that promise to restore a feature from version 0.4 which many users have missed. The Style tab now allows you to show the computed style of an element instead of the list of cascaded CSS rules. To turn this on, look in the Options menu of the Style tab.
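
If you ever want that same answer from script rather than from the Style tab, a rough equivalent looks like the sketch below; the element id here is made up purely for illustration.

// Hypothetical example: compare what the markup declares with what the
// rendering engine actually computed. "sidebar" is a made-up element id.
var box = document.getElementById("sidebar");
if (box) {
  // The inline value, if any, straight from the element's style attribute
  var declared = box.style.width; // usually "" when set only via CSS rules
  // The fully-computed value the browser settled on after the cascade
  var computed = window.getComputedStyle(box, null).width;
  alert("declared: " + declared + "\ncomputed: " + computed);
}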

Yeah, baby, yeah!  There are so many other features and doo-dads, it’s hard to list them all.  Besides, I don’t work for the company, after all.  I just ran across this because I’d run across a podcast about Ajax by some English dude who I think helped develop it.  Cheers to you, dude, if you read this!  For both the podcast and the plugin.

Oh!  One more thing I find super-bitchin’ about this plugin: it shows you the download times for every single component of your page.  That’s an awesome tune-up feature and helps you make informed decisions about what code to keep and what code to lose or at least avoid loading when unnecessary.  That’s led to a new revolution (I’m sure, entirely obvious to many) in the way that I’m approaching my JavaScript loading for the website.
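
For what it’s worth, one way of acting on that information is simple enough to sketch.  The file name and the test below are made up, but the idea is just to skip downloading a script on pages that don’t need it.

// Hypothetical lazy loader: only fetch a heavy script when the current
// page actually has work for it. The file name and the test are placeholders.
function loadScriptIfNeeded(src, isNeeded) {
  if (!isNeeded()) return; // skip the download entirely when not needed
  var script = document.createElement("script");
  script.type = "text/javascript";
  script.src = src;
  document.getElementsByTagName("head")[0].appendChild(script);
}

loadScriptIfNeeded("/js/photo-gallery.js", function () {
  return document.getElementById("gallery") !== null;
});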

Vive la revolucion!

One point I did not mention on the other blog is a definite SEO advantage of using the “sans w” address redirect:

Generally, your Pages Per Visit covers your entire website, but when ranking which pages get hit the most, once again www.yostuff.com is treated differently from yostuff.com.  Since Google Webmaster Tools allows you to set a Preferred Domain, it is logical to assume that the form of the domain that the Googlebot crawls is important to how it ranks pages.  In fact, the above-linked blog post more or less spells this out.

So, get out there, set your preferred domain and make sure your server points users in the right direction.  As most of us who’ve been doing this a while can attest, it’s the little things that count.

Note: this has been cross-posted to dragonflyeye.net for the sake of informing my hippie bloggin’ buddies.

OK, so that’s just a humorous side-effect, but funny, nonetheless. . .

Anywho, I’m pretty excited about my latest little tweak to this here website, and I thought I’d share. A very wise personage who runs a site called Corz.org has a fantastic tutorial on the mod_rewrite Apache module, which has been something of a bible to me in the last few months as I endeavoured to create the latest version of DFE.

Foremost among his/her tutorials of interest is the two-part examination of the mod_rewrite Apache module. If you don’t know what that is, you’re not alone. Despite its obscurity, it is responsible for the “clean URLs” look of WordPress and other modern blog software, where instead of nasty-looking “http://www.somewhere.com/index?file=25&category=stuff&otherthings=things”-style URLs, you get “http://www.somewhere.com/stuff/things/this_is_an_article.”

But that’s long leagues from all that it can do, as you will discover if you read that tutorial. One thing you can do with it, and what I’ve just done, is redirect readers’ browsers from “http://www.yourblog.com” to “http://yourblog.com”; regardless of what specific page they request, they will always be directed to a page that does not have the “www.” The directive looks like this:

# Begin rewrite rule: nix the www
Options +FollowSymLinks
RewriteEngine on
# Catch any request whose Host header is the www form of the domain...
RewriteCond %{HTTP_HOST} ^www\.yostuff\.com [NC]
# ...and issue a permanent (301) redirect to the same path without the www
RewriteRule ^(.*)$ http://yostuff.com/$1 [R=301,NC]
# End domain rewrite rule

If you wish to incorporate this directive into your site, you simply add it to the .htaccess file in the root of your WordPress blog (and if that’s not the root of your domain, which it isn’t in my case, you’d also need to add it to the root directory). I’ll get more into how it works below, but first, to answer an obvious question:

Why would you do a thing like this, you ask? Well, there are a number of reasons. For one, the URL without the superfluous “www” just plain looks nicer and is less bulky. Better yet, this directive will encourage people to use the “sans w” version by visually associating the site with the cleaner URL, without annoying people who either prefer the older style or followed an old link written in it.

But the biggest reason, for me, was that I needed to assure myself that the host portion of the domain name was always consistent for the sake of implementing AJAX and JavaScript code. Strange as it may seem, even though “www.dragonflyeye.net” and “dragonflyeye.net” are functionally equivalent addresses to users, they are not the same thing to the browser or to JavaScript. One points to a domain and the other points to a specific host on the domain (the server labeled “www”). Technically, those address variations could refer to two different hosts altogether.

JavaScript’s “sandbox” security rules prevent code downloaded from one host from making requests to another host. If a user is on “www.dragonflyeye.net” and clicks on an AJAX-enabled link that points to “dragonflyeye.net,” they’re going to get an error message instead of what I intended, because they are unknowingly violating that rule.
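
A stripped-down illustration of the trap (the path here is just an example): code served from the www host asking the bare domain for data.

// Hypothetical illustration: this page was served from
// http://www.dragonflyeye.net, but the AJAX call targets the bare domain.
// To the same-origin rules those are two different hosts, so the browser
// refuses the request even though both names serve the same content.
try {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "http://dragonflyeye.net/latest-posts.php", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      alert(xhr.status); // typically 0 with an empty response when blocked
    }
  };
  xhr.send(null);
} catch (err) {
  // Some browsers throw a security exception outright at open() or send()
  alert("Blocked by the same-origin policy: " + err);
}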

That’s just bad. Not only bad, but embarrassing as well. But by using this directive, I avoid all that mush. Now, I can confidently code for dragonflyeye.net without worrying that someone might have come to the wrong (but still technically right) address. Even if they have, they will be directed to the correct address.

WordPress users take note: in order for this directive to work right once it’s in your .htaccess file, you’re also going to need to change the URL of your site (so that it doesn’t have the “www”) in the “Options” section of your WordPress Admin site. Not to worry: once this is done, you won’t have any problem with older links to your site. That’s the whole point!!

LifeHack.org turns in a great roundup of tips on keeping readers and making a blog usable. The basics? Keep it readable:

Six Improvements to Your Blog – lifehack.org

Format Your Text- Take the extra time to write “pretty” posts, such as it were. Make it so that people can read what you’re typing, and do your best to keep the tone communicative, and not too dense. Translation: big fat paragraphs of dense text usually don’t make for “friendly” blog reading. (Look at David Byrne’s journal. Great stuff, but soooooooo long.) And get friendly with things like bulleted lists, shorter and longer paragraphs, use of bold, etc. But not too much. It’s a condiment.

The author is spot-on here. Even if you think you’ve got it down, it never hurts to read the above article; you might find something you hadn’t considered. I like that he points out the need for short paragraphs, for example. Generally, large paragraphs that are well-written can easily be divided up a bit, since one thought should naturally lead to the next anyway. But smart people tend to forget that their readers need logical breaks in the stream of consciousness, especially people used to writing in intellectual or academic circles.

On this level and so many more bulleted out for you in the above-linked post, some of the big blogs out there do more harm than good. Take the Daily Kos as one example. How many different ways do they violate LifeHack’s relatively simple rules of readability? Counting can make your head spin. In fact, I never go to Daily Kos ~ and I don’t care how much it affects my political blog not to be involved there ~ because the whole freakin’ page makes my eyes bug out of my head.

And because pages like this are so hard to read, other bloggers of like mind often emulate the unreadability and assume that this makes them hip. Some blogs made it big early and thus continue despite their readability shortcomings; others bull-dog their popularity with active and persistent SEO tactics. But for the rest of us, making the page readable is quite possibly the most essential component of achieving popularity.

So in the interest of furthering the usability discussion, allow me to add a few bullet points of my own using Kos as a “do not” example:

  1. Lines draw the eye; use them wisely: (I could write a whole blog on this, and maybe I just might.) When creating borders around elements, be aware that the simple introduction of a solid line naturally draws the eye to follow where it leads. If you look at DailyKos, you can count at least twenty lines making up just the top three inches of the page. Moreover, they’re high-contrast lines separating orange and white, and some of them are at 45-degree angles, besides. Holy crap! Keep borders to a minimum, and where you use them, try to see where they lead the user’s attention. It might lead them to move on.
  2. Sidebars are content, too!: When I read web pages, I like to be able to glance at the sidebars and see if there’s anything worth checking out elsewhere. So do other people, and that’s what sidebars are there for: to entice users towards increased Pages Per Visit (PPV) or ad revenue. But in order to achieve this, the sidebar should flow naturally from the main content. When you look at DKos, it is impossible to see how the two right columns relate to the left. In fact, it almost looks as if you’re looking at three different web pages in frames. In Kos’s case, I would largely blame the use of ad content in the centre column for this “Islands in the Stream” effect.
  3. Contrast is powerful stuff: I alluded to this in bullet #1, but I’ll state it explicitly here. Contrast is a powerful tool of usability, and thus you need to use it carefully. Kos looks like a creamsicle might in the midst of a bad acid trip. Once again, holy crap! They’re beating you over the head with the white and orange. Far better would be to use related or complementary colours that blend into a whole while adding a small bit of contrast for the sake of drawing the eye and adding visual flavour.

That’s about all I’ve got at the moment. The big thing is to leave your page alone, walk away and have a beer, then go back and take another look. Or ask your friends to look; you’re probably always bugging them to, anyway. Get a fresh perspective on what you’ve got, and think in terms of what you would think as a stranger to the website.

On a side note, while I can’t prove it conclusively, I have a hunch that too much visual separation is probably not too good for SEO, either. Google has put a lot of effort forth in recent years to increase its search bots’ sensitivity to “readability” rules. That makes sense, because things that aren’t readable on a webpage are more likely to be “Black-Hat” SEO tactics, and anyway, they’re not going to be terribly useful to the reader.

I also suspect, on this readability level, that keeping paragraphs short and focused is probably also good for SEO. That’s because a short, focused article is going to have a high density of related words, which Google will read as the mark of an important article, but it is unlikely to have the same word repeated too many times, which would trip Google’s BS monitor.
