NATHAN ASHBY-KUHLMAN > Blog entries from July 2003

Article URLs week: Day 4

It’s the fourth day of Article URLs week, and today we’re looking at URLs at some of the top news sites. By and large, they do a better job than the other sites we’ve seen:

Tomorrow will be the final day of reviewing individual sites’ URLs. Saturday we’ll wrap up with conclusions.

Comment by Jason, posted August 1, 2003, 12:31 am

If I may, submitted for your disapproval is the Toronto Star, the largest paper in Canada. A sample:
http://torontostar.com/NASApp/cs/ContentServer?
GXHC_gx_session_id_=bdcda2ebdf22a959&pagename=
thestar/Layout/Article_Type1&c=Article&cid=
1059689420236&call_pageid=968332188492&col=968793972154

(Massive cleanup follows "successful" concert)

Comment by Jason, posted August 1, 2003, 12:33 am

I apologize if the long url breaks your layout in any way. Sorry.

Comment by Nathan Ashby-Kuhlman, posted August 1, 2003, 2:49 am

Jason, I fixed the layout. That URL is utterly awful — even more utterly awful than the worst ones I’ve already pointed out. Using some kind of session ID in the URL is just disgusting. At least the server does seem to still give you the article if the session has expired, though — one tiny piece of forethought in a stupid design that definitely deserves an F.

Comment by Wes McGee, posted August 3, 2003, 6:13 pm

Actually, concerning the Wash Post, it's not as clear-cut as it appears. A number of 'special articles' get section treatment in its URLs. Movie reviews follow this format: http://www.washingtonpost.com/wp-dyn/style/movies/reviews/A8091-2003Jul31.html, which lets you backtrack to a list of recent reviews (/style/movies/reviews) or simply the Style section (/style). And there's this article on Metrobus, found at http://www.washingtonpost.com/ac2/wp-dyn/metro/specials/metrorail/A15307-2003Aug2. (Yep, you can omit the '.html', but only if you place an '/ac2/' between the domain name and the filepath.)

For whatever reason, whenever the site itself posts links for you to follow, it prefers the all-purpose wp-dyn/article/ form. Probably because it's shorter. If you look at the last link I gave you, you'll notice the Post has started putting pointers as to how to make these secret links.

Article URLs week: Day 3

Today, in the third installment of Article URLs week, I’ll focus on sites that use dates poorly.

Dates need to be arranged in year/month/day order, because that is the only hierarchical way to do it — months come inside of years, and days come inside of months. (In addition to improving your URL design, hierarchical dates simplify your life by making your operating system sort files in a logical order.) Also, four-digit years are permanent and two-digit years are not. “2003” is okay, but “03” will create a Y2.1K problem.
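
For a quick illustration, here are the same three dates sorted alphabetically in both forms:

year/month/day      month/day/year
2002/12/31          01/15/2003
2003/01/15          02/01/2003
2003/02/01          12/31/2002

Year-first, alphabetical order is chronological order; month-first, December 2002 lands after both 2003 dates.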

Now that I’ve managed to criticize Boston.com, tomorrow we’ll focus on some of the other top news sites to see how they do.

Article URLs week: Day 2

To continue Article URLs week, as promised yesterday I’ve found some sites serving up truly awful URLs, and a few others using truly respectable ones.

Here are three sites that fail miserably:

And here are three sites worthy of emulation:

We’ll continue looking at more news sites’ article URLs Wednesday.

Comment by Jirka (ji_bo BLA BLA yahoo.com), posted July 29, 2003, 4:46 pm

Good work, Nathan. I like your last three posts (in fact, I planned to start rating sites' URLs myself, but I still don't have my weblog).

The question is: are you "eating your own dog food"? :-)

Let's take the URL of your latest post: http://www.ashbykuhlman.net/blog/2003/07/28/0847. Immediately, my understanding was that (1) you have your own site http://www.ashbykuhlman.net, with the weblog http://www.ashbykuhlman.net/blog/ being just a part of it, and (2) "0847" is the number of posts you have made so far.

Well, no.

First, the URL http://www.ashbykuhlman.net/blog/ leads to the same page as http://www.ashbykuhlman.net/, so we can say the word blog is kind of redundant "garbage" here. :-) Maybe you want to have the word "blog" in all your URLs, and/or maybe you plan to use http://www.ashbykuhlman.net/ for something more general in the future. Still, there are plenty of blogs without the word blog in their URLs and they're OK (like scripting.com). So although saying that the word "blog" in your URLs is garbage is probably too strong, having two URLs for the same resource is definitely confusing.

(And what's more confusing is that the URL http://www.ashbykuhlman.net/blog - i.e. the one without a slash at the end - leads to a 404 error. I've probably never seen this before - everybody else lets readers skip the final slash and redirects their browsers to the appropriate URL if it exists. By the way, your other URLs like http://www.ashbykuhlman.net/blog/2003/07/29 or http://www.ashbykuhlman.net/blog/2003/07 - i.e. the ones without slashes at the end - don't return a 404 error. Kind of an inconsistency...)

Second, the string "0847". When I moved from your newest post to the previous one, the number was "2227". OK, one can realize the number is the time of the post. Still, it's pretty confusing because it's not immediately clear what the number means. I already mentioned scripting.com, so here's another example of a clearer approach: http://scriptingnews.userland.com/2003/07/29#When:10:08:00AM - there's no confusion there.

Anyway, I'll keep reading your blog to learn more interesting things. :-)

Comment by Nathan Ashby-Kuhlman, posted July 29, 2003, 5:26 pm

Jirka, you raise some great points.

First, I fixed the problem where http://www.ashbykuhlman.net/blog, without the slash, brought up a 404 error. That was a configuration mistake I hadn’t intended.
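
(If anyone else hits the same problem on Apache, one fix is a single mod_alias line along these lines, with the path adjusted to your own setup:

RedirectMatch permanent ^/blog$ /blog/

which just sends the slashless URL to the canonical one.)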

I agree http://www.ashbykuhlman.net/blog/ is redundant with http://www.ashbykuhlman.net. Yes, the point of the “blog” part of the URL was to allow uses of the site other than just blogging (although I’m not really doing any of that). Compared with personal sites, online news sites have fewer purposes beyond publishing articles, but you’re still right that the “blog” part is “garbage” by my own standard. Maybe I’ll remove it.

I’m also growing to dislike the four-digit timestamps you originally thought were ID numbers. For the past few days I’ve been stating a preference for slugs/words over long numbers to identify news sites’ articles, and here on my own site I am strongly considering switching to URLs like this: http://simon.incutio.com/archive/2003/07/28/phpXpath.

Comment by Steven Jarvis, posted July 29, 2003, 9:56 pm

Great series, Nathan! I've got a devil's advocate question for you: why do URLs need to be hackable? My wife (who is remarkably non-Web-savvy) would never in a thousand years think about hacking an URL. I'd say the same is true for at least 90% (and probably much higher than that) of the audience of news websites. *I* like hackable URLs, and I agree in general that they should be hierarchical, if only because I like at least the appearance (such as that given by liberal use of mod_rewrite) of a well-organized site. Isn't hacking an URL really just a fall-back point for when the site's navigation fails you?

And I promise this question isn't prompted by the guilt at the state of the URLs at my work site (i.e., http://www.nwanews.com/times/story_news.php?storyid=108869 where the storyid is meaningless even to me). Really. ;)

Comment by Nathan Ashby-Kuhlman, posted July 30, 2003, 3:38 am

Steven, that’s an important question, and I think there are two ways to answer it.

First, the practical answer: What’s wrong with designing for the 10 percent who know the trick? All print newspapers use page numbers, just as all news sites use URLs, but that doesn’t mean all print readers use the front-page “index” to find out what page number editorials or comics are on today. Some people (like me) just prefer to browse rather than going directly to something specific. But the print newspaper keeps the direct navigation available for those who find it useful.

Or consider phone numbers — you can generally still pinpoint a landline to a specific town or neighborhood using its area code and exchange. Even if few people use the organizational trick often, the organization is still superior to randomly assigned numbers. As long as hackable URLs for Web-savvy readers don’t somehow interfere with serving less Web-savvy readers, using them benefits the greater good. There’s also something to be said for evangelism: the more sites use hackable URLs, the more Web readers might catch on to trying them. I do see a day coming when news is delivered primarily online, and how limiting the medium would be if much of the audience still knew only its “beginner” features!

The second answer is more philosophical. Hacking a URL shouldn’t just be a fall-back point to the site’s navigation, but an ever-consistent reflection of the site’s navigational hierarchy. The point, then, is not whether anyone actually does hack URLs but whether they make sense to hack. For example, the URL of my work site’s baseball section (http://www.tcpalm.com/tcp/baseball) doesn’t tell me it’s a child of the sports section (http://www.tcpalm.com/tcp/sports/). The fact that the URL is not hackable is really just a clue to a confusing (CMS-imposed) navigational hierarchy.

Comment by Steven, posted July 31, 2003, 9:51 am

Nathan,

As to your first answer, there is nothing wrong with designing URLs for the 10% of us who hack them as a means of navigation (I know *I* certainly appreciate it), and having alternate means of accessing a site's content is almost always a good thing.

I think the end of your second answer goes a short way toward answering the question of why most news sites have non-hackable URLs: the limitations of the various CMSes that power these sites. Whether home-grown or commercial, most do not produce hackable URLs, and cost (especially for the commercially available CMSes) is no indicator of quality where human-readable URLs are concerned.

Administrators of those news sites who start to think about the value of human-readable (and -hackable) URLs might be able to positively influence the vendors who create those CMSes (in the case of commercial CMSes) or get their own staff to work on modifying an in-house CMS (for those who have custom CMSes) to create such URLs. However, I think most commercial CMS vendors have a long list of other problems that would be better addressed, such as producing valid and accessible (X)HTML. That being said, good URLs are an important part of a whole news site.

Comment by Julie, posted August 1, 2003, 6:47 pm

Right on, Nathan. And to your practical and philosophical reasons I would add a psychological one: not as pressing, but not altogether insignificant either.

Your URLs should be neat and orderly because they create an impression of your organization on those who actively view or use them and on those who receive them in e-mail or IM. It's the same reason you show up for a job interview in a suit and tie.

http://www.cnn.com/2003/US/07/30/airline.warning/index.html

Impression: Clear. Organized. They really have their act together.

http://torontostar.com/NASApp/cs/ContentServer?
GXHC_gx_session_id_=bdcda2ebdf22a959&pagename=
thestar/Layout/Article_Type1&c=Article&cid=
1059689420236&call_pageid=968332188492&col=968793972154

Impression: It's a miracle they can find their own stories. They just showed up at the interview wearing mustard-stained T-shirts and wrinkled cargo pants.

Unfortunately, I suspect Steven's point is a good one. Since most of the garbage is tied to poorly designed CMSes that often have even bigger issues, for most afflicted sites the problem is unlikely to go away any time soon. Then again, admitting they have a problem is half the battle ;)

Comment by Steven, posted August 4, 2003, 4:07 pm

Julie, I *absolutely* agree with you about the impression an URL gives. I have the same issues with poor grammar and spelling. All of these show whether the creator has paid attention to detail. As for the CMS issue, yeah, I think clean, useful URLs rank lower on the scale of importance than valid and semantic code, though I also believe that, unfortunately, it's very difficult to win that first half of the battle. ;)

Comment by David Blomquist, posted August 6, 2003, 10:13 pm

Nathan, thanks for the A grade on freep.com's naming convention. Your hierarchy makes sense, but there is a reason why we attach the date to the file name as we do: Believe it or not, freep.com is still produced with Pantheon Builder, and those of you who remember that beloved product know that it doesn't natively support an environment in which destination folders rotate daily (e.g. /2003/07/01/sports/lions). So attaching the date onto the file name was the best workaround we could concoct.

Comment by Nathan Ashby-Kuhlman, posted August 7, 2003, 8:58 am

David, it’s interesting to me how often Pantheon Builder has come up in the comments on this series. I can only imagine how full some of the folders on your Web server are by now!

If you wanted to, there’s a further workaround you could try. It looks like your server runs Apache, and if so, you could use mod_rewrite to make your public URLs independent of the way the files are stored internally. For example, a configuration line (all on one line) with a regular expression like this:
RewriteRule ^sports/lions/([0-9]{4})/([0-9]{2})/(0?([0-9]{1,2}))/([A-Za-z0-9]+) sports/lions/$5$4_$1$2$3.htm

would return the file located internally at
/sports/lions/lnote7_20030807.htm
whenever someone accessed the URL
/sports/lions/2003/08/07/lnote
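
(That rule assumes mod_rewrite is available and switched on for the directory in question, e.g. with a RewriteEngine On line, which your host may need to enable.)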

Comment by David Blomquist, posted August 8, 2003, 11:43 pm

Yes, indeed you could do that, Nathan -- but not with Pantheon Builder, because Builder can only produce indexes using the actual file names (well, there are some exceptions, but it lacks regular expressions, so they're not really worth considering). So you'd have to write a widget to massage the file names going out as well as coming in. And frankly, I'm not sure it's worth the work.

Why? Well, I don't see much evidence in the user logs of people trying to hack URLs this way. You'd think it would be useful, but -- at least in the Free Press experience, and at a smaller site where I worked before coming to Detroit -- the vast majority of users just don't drill topically through the site. (The singular exception is sports, and there, the traditional reverse chron index seems to do the job.)

This is why we devote virtually all of our overnight editorial production time to the home page and main sports index, and why we archive these pages as "back issues." They are like the display windows at Marshall Field's or Macy's, and every bit as important.

I'm not dissing a logical URL structure -- there are very good arguments for it, and you make them. I'm just saying that on my list of priorities, it isn't the first place I'm inclined to throw very scarce programming time.

Comment by David Marsh, posted August 11, 2003, 12:38 am

I have a few questions regarding creating permanent URLs.
By placing documents in sections like "http://domain.com/products/memory", doesn't that restrict the document from being associated with another section or, in this case, another product?
By placing documents in a date hierarchy like "http://domain.com/archives/2003/08/10", doesn't that tie the document to a particular date? What happens if the document is updated with new information? The URL then gives no indication that the content has fresh updates. It may be taken for old information, and it may be harder to find when searching for documents by hacking the URL.

Why not give each article/document/blog entry a unique title as the only identifying feature of the URL? That allows the site to change its hierarchy and taxonomy without affecting the URL. Documents can be reclassified or updated without creating any confusion, because no structure or hierarchy is embedded in the URL.
Take Google as an example. When I am looking for information, I am never thinking of a particular website or the URL hierarchy that may be used for the document I am interested in. I type some keywords and select a document that interests me.
I agree the URL should be readable, but it has no need for hierarchy. Having a hierarchy in the URL might give me some clues as to how the document was categorised at the time it was published, but that may not be relevant anymore if the document has been reclassified, updated or amended.
Keep it simple. Give it a unique title, which I think should be a string of 1-50 characters.
I would love to hear any comments on whether I am on the right track here, or some good reasons for creating hierarchies and how to manage them in URLs.

Comment by Nathan Ashby-Kuhlman, posted August 11, 2003, 1:05 am

David (Marsh), maybe I can clarify things by saying that I’m not trying to propose a URL scheme for any kind of content on any kind of site. I am trying to propose a URL scheme for news articles on news sites. The key part of your argument as I read it is that having hierarchy in URLs can become very confusing later on if the document is reclassified or updated or amended. But on news sites, that almost never happens! News articles are published one day and maybe follow-ups are published the next. Sometimes articles are updated or revised within a single day, as more information about a news event gradually emerges, but news organizations almost never go back and alter older coverage (and as anyone who’s read “1984” knows, well they shouldn’t).

As long as original documents don’t get modified, whatever hierarchy they are put into at the time of their creation can stand for their entire lifetime without reorganization. That’s why I think the hierarchy — particularly including the publication date — would work well on news sites. In particular, creating permanently unique titles on news sites would be almost impossible. For example, the news coverage this past week would have used up all possible permutations of “Arnold Schwarzenegger,” “California” and “recall” very quickly. For non-news sites, though, I think you make a good point.

Comment by David Marsh, posted August 12, 2003, 12:13 am

I agree with your comments on creating a URL scheme for news sites and your proposed format. It is exactly what I would use and will use in the future. Thanks.

I was hoping to get some feedback on whether a hierarchical approach can work at all for maintaining permanent URLs, and on how to handle reclassified content that ends up tied to a different URL when URLs are derived from the taxonomy/hierarchy of the content.

I'm just not sure. On one hand, showing a hierarchy in the URL is a great way to give a person some feedback on their current position within the site; on the other, it ties that content to a particular hierarchy forever.

My proposed flat structure with just a title seems a little inadequate, but I can't see a good compromise.

Article URLs week: Day 1

Welcome to the first day of Article URLs week. I’m going to be judging news sites’ article URLs on readability, brevity, cleanliness, hierarchy, and permanence.

Each URL will get an A through F letter grade. I will reduce a given URL’s grade for containing redundant “garbage” (marked as deleted text), being obviously impermanent, or using numbers rather than words to identify articles. I will increase its grade for being hackable or using hierarchical year/month/day dates. Let’s get started:

On Tuesday we’ll hunt for some URLs worthy of As and Fs.

Article URLs week: Principles

Too many news sites still post articles at ugly URLs like

http://www.al.com/news/birminghamnews/index.ssf?
/xml/story.ssf/html_standard.xsl?
/base/news/105929756463150.xml

rather than simple, pretty ones like

http://www.nytimes.com/2003/07/27/business/27MCI.html

Each day this week, I will evaluate typical article URLs at a bunch of news sites, concluding Saturday with recommendations for what the “ideal” format for article URLs might be. To repeat classic recommendations still not always followed, here are some principles of good URL design:

  • URLs should be human-readable. From the nytimes.com URL above, I can correctly guess that it goes to a business story about MCI published on July 27. But with the al.com URL, all I can guess is that it was published in the news section. While the page is still loading, an informative URL gives your readers important confirmation that they’re getting what they want.
  • URLs should be short. Short URLs encourage e-mailing and linking, which bring in traffic. Long URLs often get split onto multiple lines in e-mails, generating frustrating 404 errors when they’re clicked. If your site sends out e-mail newsletters, you may have problems if your own site has too-long URLs.
  • Corollary to the first two principles: URLs should not contain useless parts. Why does all the index.ssf garbage need to be in the al.com URL? In addition to confusing people, the content management system mumbo-jumbo just wastes bandwidth on the site’s home page, which has to link to all the articles like that.
  • URLs should be hierarchical. As you read from left to right, they should move from general to specific. This lets them be “hackable” so users can move to a more general level of the hierarchy by chopping off the end of the URL (a sketch of how a server can support this follows this list). The nytimes.com URL is not hackable, but it does present the date in the correct hierarchical way. Other date formats some sites use, like the American month-day-year, are not hierarchical.
  • URLs should be permanently unique. An article tomorrow or next week should not have the same address as an article today. What if someone clicked a link in one of your own e-mails a few days late and got a completely different article than he or she was expecting? Even if your site removes articles after a period of free availability, never reuse those URLs.
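
To make “hackable” concrete, here is a minimal mod_rewrite sketch. Every path, file name and parameter in it is hypothetical rather than any real site’s setup; the point is only that the clean hierarchical address a reader sees can be mapped onto whatever the content management system actually wants:

RewriteEngine On
# Serve /news/2003/07/27/mci-restatement from the CMS script
RewriteRule ^news/([0-9]{4})/([0-9]{2})/([0-9]{2})/([a-z0-9-]+)$ /cms/article.php?section=news&date=$1$2$3&slug=$4 [L]
# Chopping the slug off the end falls through to a day index
RewriteRule ^news/([0-9]{4})/([0-9]{2})/([0-9]{2})/?$ /cms/index.php?section=news&date=$1$2$3 [L]

With rules like these the reader gets the pretty URL, the CMS keeps its query strings, and hacking the address back to the date level still lands somewhere useful.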

Tomorrow I’ll begin judging many more news sites’ article URLs on these principles.

Comment by Adrian, posted July 28, 2003, 12:27 am

Sweet! I look forward to this series.

Comment by Richard Tallent, posted August 2, 2003, 2:58 pm

Good stuff... my only criticisms:
1. Stories often fall into multiple categories.
2. Making reporters, etc. come up with a filename is silly.

I'd like to see a CMS that would work with URLs like this:

http://www.acme.com/article?news,business,MCI,2003-08-02,fraud

IOW, all it has is some combination of keywords that creates a "Google-I'm-Feeling-Lucky" sort of fingerprint. Thus, the publisher can reuse the existing "keywords" field, and if the keyword fingerprint ever becomes non-unique, the CMS can just spit out a list of all matching articles--this adds a step for the user, but they can also see other articles they might be interested in.

Comment by Dominic Mitchell, posted August 2, 2003, 5:33 pm

Heh, if more URLs were short, we wouldn't need those damnable shortening "services" that everybody is so keen on... The ones where you have no idea where you're going when you click on them (shades of slashdot trolls), and that might not be around next week/month/year.

-Dom

Comment by Nathan Ashby-Kuhlman, posted August 2, 2003, 7:50 pm

Richard, using a combination of keywords to identify an article is an interesting idea. I’ve been arguing that news sites need to fit the section, date and article name/number into a hierarchical structure, but if your keyword system let the keywords come in any order, that would add a lot of flexibility for people guessing at URLs.

I’m not sure exactly what you mean when you say that coming up with a filename is silly. I’ve never run across a newspaper that does not use such filenames (slugs) to produce its print publication, so all I’m saying is I’d rather URLs use those existing descriptive names than invent their own meaningless sequential ID numbers. In your sample URL someone would have to come up with those keywords — is the only difference between our approaches that your URLs would allow more than one?

Comment by Nancy McGough, posted August 3, 2003, 3:00 am

Another thing that I like to do with my URLs is to have them point to the directory, for example like this:

http://www.ii.com/internet/messaging/imap/isps/

and then let the HTTP server determine the file name, which at the moment is index.html but in the future might be index.shtml or something that hasn't yet been invented.

Another thing that's useful about using this type of hierarchical naming, where the hierarchy names are meaningful, is that Google seems to use the words in the path as part of its indexing algorithm. Of course, this may change.

Comment by Már Örlygsson, posted August 5, 2003, 9:26 pm

"URLs should be hierarchial", sure, but you should add, "but the hierarchy should *not* neccessarily reflect the site's navigation tree".

Site structures (i.e. sitemaps) change all the time, so it's usually best to avoid having the URLs reflect some navigational structure that will most likely be revamped 6 months from now.

Comment by roman orszanski, posted November 20, 2003, 8:06 pm

You might want to add another principle: dates should be optional.
While you might want to view an article as it was published on a given date, you might equally want to view the latest version of an article.
Thus while the first draft of an article might be
http://fred.org/theory/urls/2002/nov/13/strange.htm,
the updated article would be
http://fred.org/theory/urls/2003/feb/3/strange.htm,
but the permanent link would be to
http://fred.org/theory/urls/strange.htm, which always shows
the current version of an article (or the version with all comments to date, etc).
If your blog automatically used the last update date, the earlier URL
may well fail — ideally it should point to the same article in its current state (possibly with a "diffs" link to take you back to the original).

It really depends upon whether you view the two drafts as separate articles, or separate views in the evolution of a single article.

In either case, the hierarchy minus the date should form
the permalink. The article itself should link to the "other" versions.
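
(On Apache, one minimal way to get that behavior, using fred.org's hypothetical paths, would be a single redirect that strips the date back off:

RedirectMatch ^/theory/urls/[0-9]{4}/[a-z]{3}/[0-9]{1,2}/(.+)$ /theory/urls/$1

so any dated draft URL falls through to the permalink for the current version.)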

Comment by nyob, posted January 9, 2004, 2:06 am

whatever nathan. it's real easy to tell everyone what they "should" do and not tell them "how" to do it. you must want to be a consultant or something.

Comment by Nathan Ashby-Kuhlman, posted January 9, 2004, 2:35 am

Nyob, since the techniques for implementing cleaner URLs are well-documented elsewhere, I did not want to rehash them here. Instead, I wanted to discuss the goals of using those techniques.

Comment by Christoph, posted February 12, 2004, 5:31 pm

Yeah, check out weblog.cemper.com for really great URL codings.

USAToday.com adds rotating Flash teases

USAToday.com’s redesign this week adds a rotating Flash “billboard,” CyberJournalist reports. The site’s upper-right corner now includes five “ear” teases that fade in and out every few seconds:

One of the five rotating USAToday.com teases, which includes a headline and a large horizontal photo.

I don’t really like tickers, but USAToday.com’s animation works much better than I expected for several reasons:

  • Fast transitions increase usability. The fade-out/fade-in process happens in less than a second. With some sites’ tickers, you have to wait for each headline to scroll fully into view. Furthermore, since each frame is static for about five seconds, there isn’t the distracting omnipresent motion of some tickers.
  • Pictures may increase usability. Seeing the first picture again is an excellent visual clue that the animation has cycled back to the beginning. It takes more effort to recognize this when you have to read (and remember) each headline.
  • Manual control increases usability. In other words, there’s a way to go back to the previous frame in the animation if you decide you were interested in it right after it disappeared. (It would be better, though, if the method for doing this were a little more obvious than clicking on one of the five circles.)
  • There is a non-Flash alternative. In browsers without Flash, one of the five teases will be shown, without any animation. (It might be better to list all five headlines — not just one — in the plain-HTML version, though. Also, unfortunately there is no alternative for browsers without JavaScript.)

My only other suggestion is that the section logos (e.g. "Inside: Life") ought to be rebuilt as vector symbols in Flash. The heavy JPEG compression needed to keep their file sizes under control as bitmaps produces quite obvious artifacts. On the whole, though, I like USAToday.com’s rotating teases a lot better than, say, the ad-like Flash bar on CJOnline.com. Perhaps, when done well, rotating teases can be a good way to present more content at attractive sizes in a limited space.

Comment by Julie, posted July 11, 2003, 8:07 pm

Hmmmm... putting aside my general dislike of both Flash and tickers, I agree that this is one of the better implementations I've seen. It took me a few rotations, though, to recognize that clicking on the little yellow circles takes you to the various panels. (There is probably a more intuitive -- if less artistic -- method.)

No, my real problem here is not with their Flash news banner but with the fact that, in addition, there are (at times at least) two Flash motion ads (currently Dell and Samsung). I'm starting to feel seasick. ;P

Comment by Stuart, posted July 21, 2003, 2:10 pm

> No my real problem here is not with their Flash news banner but the fact that in addition there are, at times at least, two Flash motion ads

A standalone (i.e. desktop) Flash ticker, such as the Reuters Desktop Ticker, would solve this, while providing continuous, dynamically updated content.

Try using larger photos for more impact

Webmaster Robin Sloan created a visually stunning design for Pointssouth.net, the news site presenting the work of recent college graduates who are Poynter’s summer fellows.

Points South features a much larger photo than most news Web sites would ever dare to run on their home pages. Friday’s picture and the ones that have come before it measure a whopping 600 pixels by 350 pixels, larger even than most news sites use in slide shows. I say this is “stunning” because photos of that size can frame their subjects within a large wide-angle view of the surroundings, rather than being limited to the tightly-cropped closeups news sites use to make pictures fit into confined spaces. A large photo also intrinsically has more impact than a smaller one.

The small picture sizes most news sites use limit photojournalistic possibilities, and that’s a shame. Someday, when everyone is using very large monitors, displaying large photos won’t be an issue. In the meantime, news sites could start using larger photos by simplifying their front page designs to promote fewer stories in the same space. (That’s a natural design for Points South because it only carries a few stories at a time). They could also start selectively serving larger images to visitors with larger monitors.

Even the classic reason not to use large pictures — awful download times for modem users — is less and less relevant as more and more people get broadband. We are seeing site redesigns to support larger ads — how about some redesigns to support larger photos?

TCPalm.com RSS feeds

I rarely talk about work — at TCPalm.com — in my blog, but I’m making an exception now to announce our new RSS feeds for our three daily newspapers on Florida’s Treasure Coast:

We’re starting off very simply, with just a few top stories each day rather than all the local news coverage. I hope to change that soon, and to offer feeds for more sections.

We are also offering JavaScript “includes” of these same headlines, a technique I have thoroughly scorned in the past but which I have to admit is useful — it opens up syndication possibilities for people who know HTML and CSS but not the scripting languages necessary to parse RSS.

HTML needs a “client-side include” technique other than JavaScript. OBJECT would be better if it weren’t rendered so poorly by browsers and didn’t force the content to be a self-contained block. Part of the power of server-side includes, or of their poor substitute, JavaScript document.write() calls, is the ability to insert pieces of invalid HTML — like a “header” segment that opens a few tags later closed by a “footer” segment.
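
For instance, a pair of server-side includes like these (file names hypothetical):

<!--#include virtual="/inc/header.html" -->
...page content...
<!--#include virtual="/inc/footer.html" -->

works even when header.html opens a table that only footer.html closes, which is exactly the kind of thing an OBJECT-based include could never do.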

Comment posted June 18, 2005, 7:05 pm

I use other newspapers' RSS feeds daily; yours are the only ones that want me to register to access content. I am not prepared to continually give out personal information to read content that can be accessed elsewhere without doing so.

This page last modified on Friday, January 10, 2020 at 3:22 pm