About Site Admin

Website administrator for the WhyDontYou domain. Has maintained and developed a variety of sites, ranging from simple, plain HTML sites to full-blown e-commerce applications. Interested in philosophy, politics and science.

Google Sketchup

Thanks to a link on the “past thinking” blog I found out about some very interesting software produced to create 3D models. So far it looks VERY cool – I have not had time to check it out properly but I will do so soon.

One of the most interesting things about this is the ability it has to link into Google Earth maps – potentially great fun.

You can find out more about this at http://www.sketchup.com/.

Posted in Uncategorized

Content Negotiation – Mirrored Post

As mentioned in the last post, there is an excellent article available at http://www.autisticcuckoo.net/archive.php?id=2004/11/03/content-negotiation, but sadly the author of this article has expressed his lack of interest in continuing with his blog. While it is possible that he will continue to pay his hosting fees and continue to re-register the domain name, this is not certain, so to try and at least retain this article we have copied it (verbatim) below.

Original Source – http://www.autisticcuckoo.net/archive.php?id=2004/11/03/content-negotiation

We have, for some time, tried to inform people about the fact that there is no point whatsoever in using XHTML as long as you serve the documents with a text/html media type. For those who still want to use XHTML and gain at least something for some users, we have recommended content negotiation. On several occasions people have asked us to publish a write-up on how to do that, but there hasn’t been time to sit down and write it. Now, finally, we have tried to whip something together that we hope can serve as a guide.

What Is Content Negotiation?

Content negotiation means that the server in one way or another negotiates with a user agent (browser, search engine, etc.) that requests a document. The negotiation means that the user agent announces which media types (also called content types or MIME types) it can handle and, optionally, which one it prefers. The server then serves the document in the way that best suits the user agent.

The user agent announces which media types it can handle through a header in the HTTP request it sends to the server. The header is called Accept and can look something like this:

Accept: text/xml, application/xml, application/xhtml+xml, text/html;q=0.9, text/plain;q=0.8, image/png, image/jpeg, image/gif;q=0.2, */*;q=0.1

The example is what our instance of Mozilla sends. (We have inserted blanks between the media types so that the text will wrap.) Our interest now lies with application/xhtml+xml and text/html;q=0.9. The part after the semi-colon, q=0.9, is called a quality value and is a value between 0 and 1, inclusive, with up to three decimal places. The higher the quality value, the more the user agent prefers that media type. If no quality value is specified for a particular media type, it means q=1.0. The example thus shows that Mozilla prefers application/xhtml+xml to text/html.
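To make the ordering concrete, here is a small sketch (ours, not part of the mirrored article) that splits an Accept header into media types and sorts them by quality value:

```php
<?php
// Sketch: parse an Accept header into an array of media type => q,
// sorted with the most preferred type first. A missing q parameter
// defaults to 1.0, as described above.
function parse_accept($header)
{
    $types = array();
    foreach (explode(',', $header) as $part) {
        $params = explode(';', trim($part));
        $type = trim(array_shift($params));
        $q = 1.0;
        foreach ($params as $param) {
            if (preg_match('/^\s*q=([0-9.]+)/', $param, $m)) {
                $q = (float) $m[1];
            }
        }
        $types[$type] = $q;
    }
    arsort($types); // sort by quality value, highest first
    return $types;
}

print_r(parse_accept('application/xhtml+xml, text/html;q=0.9, */*;q=0.1'));
```

With the Mozilla header above, application/xhtml+xml (implicit q=1.0) sorts ahead of text/html (q=0.9).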

The usual meaning of content negotiation is that the HTTP server itself decides which media type the user agent prefers, and then automatically chooses between a number of different documents. Normally the file suffix is used to associate documents with different media types, so the server might choose between index.xhtml and index.html.
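As an aside (not from the mirrored article): on Apache this server-driven variant is typically enabled with the MultiViews option from mod_negotiation, after which a request for /index can be answered with index.html or index.xhtml depending on the Accept header. A minimal .htaccess sketch, assuming Apache:

```apache
# Let mod_negotiation pick between index.html and index.xhtml
# based on the request's Accept header.
Options +MultiViews
AddType text/html .html
AddType application/xhtml+xml .xhtml
```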

This article describes another type of content negotiation; one that is performed through a server-side script. Most web hosts offer some kind of server-side scripting, usually PHP or ASP. Our example uses PHP, since it is available for more platforms and is open source, while ASP is Microsoft-specific. We don’t delve into the finer details here, but presume that you are sufficiently familiar with PHP.

To round off this explanation of what content negotiation means, we want to emphasise that it’s not merely an issue of deciding which media type to send. When you have chosen a media type, you should also serve the document with content that corresponds to the chosen media type. You either serve XHTML as application/xhtml+xml, or you serve HTML as text/html.

About the Examples

The code samples in this article are written for PHP 4.1.0 or higher. For older versions you need to replace $_SERVER with $HTTP_SERVER_VARS. If the code is executed in a function, you then need to declare the array as a global (global $HTTP_SERVER_VARS;).

This article presumes that the document’s content is marked up as XHTML 1.1, and that it doesn’t contain anything that cannot be converted into HTML 4.01 Strict, for instance elements from other XML namespaces, or CDATA sections.

Parsing the Accept Header

First of all we need to find out whether or not the user agent supports the application/xhtml+xml media type and, if so, whether it prefers that to text/html.

  1. $xhtml = false;
  2. if (preg_match('/application\/xhtml\+xml(;q=(\d+\.\d+))?/i', $_SERVER['HTTP_ACCEPT'], $matches)) {
  3.     $xhtmlQ = isset($matches[2]) ? $matches[2] : 1;
  4.     if (preg_match('/text\/html(;q=(\d+\.\d+))?/i', $_SERVER['HTTP_ACCEPT'], $matches)) {
  5.         $htmlQ = isset($matches[2]) ? $matches[2] : 1;
  6.         $xhtml = ($xhtmlQ >= $htmlQ);
  7.     } else {
  8.         $xhtml = true;
  9.     }
  10. }

The $xhtml variable indicates whether or not we will serve the document as XHTML. The initial value is false, since many older browsers lack support for XHTML.

On line 2 we check whether the Accept header contains application/xhtml+xml plus an optional quality value. This regular expression isn’t 100% fool-proof, since it doesn’t limit the value range to [0,1], nor does it limit the number of decimal places to 3. For all intents and purposes, however, it doesn’t matter.

On line 3 we extract the quality value, if present. If not, we set the quality value for application/xhtml+xml to 1.

On lines 4 and 5 we perform the corresponding check for text/html. Line 6 compares the quality values and sets $xhtml=true if the user agent prefers application/xhtml+xml to text/html. Line 8 handles the case of a user agent that specifies application/xhtml+xml in the Accept header, but not text/html.

After these lines of code we thus have a Boolean variable, $xhtml, which indicates whether the document will be served as XHTML.

Prepare HTML Conversion

If the user agent doesn’t support XHTML, or if it prefers HTML, we have to convert the document’s content from XHTML 1.1 to HTML 4.01. We do this with a simple function:

  1. function xml2html($buffer)
  2. {
  3.     $xml = array('/>', 'xml:lang=');
  4.     $html = array('>', 'lang=');
  5.     return str_replace($xml, $html, $buffer);
  6. }

Lines 3 and 4 declare two arrays, where the elements in the $xml array will be replaced by the corresponding element in the $html array.

On line 5 each occurrence of /> is replaced by > in the $buffer string. At the same time, each occurrence of xml:lang is replaced by lang.
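As a quick illustration (the sample markup is ours, not from the article), the function turns XHTML-style self-closing tags and xml:lang attributes into their HTML equivalents:

```php
<?php
// The conversion function from the article, shown with sample input.
function xml2html($buffer)
{
    $xml = array('/>', 'xml:lang=');
    $html = array('>', 'lang=');
    return str_replace($xml, $html, $buffer);
}

echo xml2html('<hr /><p xml:lang="en">Hello</p>');
// Prints: <hr ><p lang="en">Hello</p>
```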

And Finally…

Only a few details now remain. If the $xhtml variable is true, we need to write the document type declaration for XHTML 1.1 and a <html> element with the proper XML namespace. Most likely we also want to start with an XML declaration, and link to our style sheets through processing instructions.

If the user agent doesn’t want XHTML, we need to write a document type declaration for HTML 4.01 Strict and a <html> element without an XML namespace. Style sheets should be linked through ordinary <link> elements (or be imported in a <style> element). Furthermore, we need to instruct the PHP interpreter to buffer all output to the response stream, and to call our conversion function on the result before sending it back to the user agent.

Before we write anything at all, however, we must send a couple of HTTP headers: one that says which media type we use, and one that informs proxy servers that content negotiation has taken place so that they can consider that in their caching algorithms.

  1. if ($xhtml) {
  2.     header('Content-Type: application/xhtml+xml; charset=utf-8');
  3.     header('Vary: Accept');
  4.     echo '<?xml version="1.0" encoding="utf-8"?>', "\n";
  5.     echo '<?xml-stylesheet type="text/css" href="/css/screen.css" media="screen"?>', "\n";
  6.     echo '<?xml-stylesheet type="text/css" href="/css/print.css" media="print"?>', "\n";
  7.     echo '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">', "\n";
  8.     echo '<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">', "\n";
  9. } else {
  10.     header('Content-Type: text/html; charset=utf-8');
  11.     header('Vary: Accept');
  12.     ob_start('xml2html');
  13.     echo '<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">', "\n";
  14.     echo '<html lang="en">', "\n";
  15. }

Don’t forget to link to the style sheets in the <head> if the document is served as HTML.

There is a blatant shortcoming in the example shown in this article: the W3C validator. It doesn’t send application/xhtml+xml in its Accept header, so it’s impossible to validate the document as XHTML. It is trivial to let a query parameter control the choice of media type, but that is left as an exercise for the reader.
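One way to do that exercise is to let a query-string flag override the negotiated result before any headers are sent. This is our own sketch, and the parameter names (forcehtml, forcexhtml) are invented for illustration:

```php
<?php
// Sketch: override the negotiated media type with a query parameter,
// e.g. page.php?forcexhtml=1 serves XHTML even to clients (like the
// W3C validator) that do not send application/xhtml+xml.
function choose_xhtml($negotiated, $query)
{
    if (isset($query['forcehtml'])) {
        return false;
    }
    if (isset($query['forcexhtml'])) {
        return true;
    }
    return $negotiated;
}

// In the page itself: $xhtml = choose_xhtml($xhtml, $_GET);
```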

(note: We are aware of some possible copyright issues, and we have attempted to contact the original owner to get permission to repost it verbatim here. At the time of this post, no replies had been received and we can only assume the original source is no longer on line. If you are the original source and would like this post removed please contact us and we will take this post down immediately)

Posted in Uncategorized

Sitepoint Quote

Found this from the sitepoint link posted in a previous article:

Ajax is meant for those situations where you have a small part of a web page that you want to update with information from the server without reloading the whole page (often a single word or a small set of links). If you are looking to replace more than say 10% of the page then you need to rethink whether Ajax is the appropriate way to do it. At least some of the Ajax that people are currently writing is more so they can demonstrate that they can write it rather than that the page actually needs it. Once Ajax ceases to be flavour of the month then it will go back to being used only in those situations where it is appropriate (the way people used to use it several years ago before the name Ajax was applied to this particular technology).

(Posted by felgall – URL: http://www.sitepoint.com/forums/showthread.php?t=371856)

I thought that was an excellent summation.

Posted in Uncategorized

More on AJAXian Issues…

Sorry to keep beating this one – but the site at http://www.lastcraft.com/blog/index.php?p=19 “Listen Kids, AJAX is not Cool” is excellent. If you are planning to ajax-ify your website or application then it is well worth adding the Last Craft site to your list of research material (you do research, don’t you?). This article highlights some of the more common AJAX-esque mistakes and, to an extent, typifies what is at fault with the web 2.0 buzz.

The article is well written and goes into what is wrong with most of the AJAX demos, alluding to the general pointlessness of it all. For me, the idea behind the new technology is to make life easier and improve the “user interface.” From what I have seen to date, AJAX does neither.

Strangely, when you read the comments it appears some people have taken an active dislike to the author. Is this the die-hard AJAXers defending themselves? 🙂

Posted in Uncategorized

Another AJAX link

Sorry, I forgot to add this to the last one.

If you want to use AJAX, then it is definitely worth your time checking out http://alexbosworth.backpackit.com/pub/67688

Posted in Uncategorized

Web 2.0 Nonsense

Having tried to avoid ranting about the “web 2.0” nonsense that is being thrown around the web and USENET a lot these days, I have finally succumbed to the temptation and will rant a little.

If you have even a passing interest in web design, and you haven’t had your head buried in sand for the last twelve months, you can’t have missed the hype surrounding how Web 2.0 technologies (mainly AJAX) will be the future of the web. Internet (and, to a lesser extent, general PC) magazines have been falling over themselves to hype the “new” way of doing things (even though the XMLHttpRequest it hinges on is ancient), and websites which have adopted the Web 2.0 mantras are pushed remorselessly (Flickr, del.icio.us, etc.).

Now, generally speaking, here at Why Dont You we are more than happy to adopt (pointless) new technologies just for the sake of it. I mean, we even use Ubuntu… 🙂 However, I can’t help but think the Web 2.0 obsession is getting out of hand. Its “poster child” is AJAX and, while this is useful, there are massive limitations to its implementation. Add to this the potential learning curves involved, and round off with the browser problems (what happens if the client doesn’t have a JS-enabled browser…) – all of a sudden it seems that this is actually a niche technology.

If you are designing a cutting edge site, geared to impress other web designers with your jedi-like editing powers then go for it. Web 2.0 your site to death.

If, however, you are designing a site for the general public, then steer clear. When people are trying to do their online shopping they don’t want fade-in/fade-out effects. When people are using the browser provided by their ISP along with whatever net-paranoia software they can get their hands on, all that finely crafted event-driven JS vanishes. This is the sad reality of the internet, away from the excitement of web magazines. People want websites which function. All the glitter that Web 2.0 / AJAX provides is (IMHO) pointless.

Most sites (and most designers) have enough trouble getting their sites to work in two different browsers when it is plain HTML and CSS. Add in the new platforms we are constantly being told are “the way forward” and it just seems that Web 2.0 is all hype and no function.

For those who have converted, some decent enough AJAX sites are:

Enjoy.

Posted in Uncategorized

PC Magazines continue downwards

Sorry if it looks like “WhyDontYou” is taking offence at PC magazines in general, but some things are too annoying to pass over! Take this month’s PCW, for example.

Previously, we have ranted at some length about the crackpottery involved in “pricing” cover disks – well, we have had no impact on the market 🙂 and this month’s PCW proudly boasts software worth “more than” £365 on its “massive” 8GB cover DVD. I have already posted about the pricing of cover disks (read more) so I won’t do much on that for now.

The made-up value aside, this is definite proof that they now have much more space on the disks than they know what to do with. This month’s cover disk includes an amazingly esoteric array of pointless things – for example, four (count ’em) Linux distributions: Ubuntu 6.04, Slax, Gentoo and Fedora Core. Not that getting any of these to work will be easy – you have to load the DVD and then burn off the ISO you want (is that really easier than downloading the ISO in the first place – especially as the magazine gives no advice on this?).

This leads me to the critical tipping point of the magazine. I can let most of the nonsense and padding it fills itself with slide. It is a magazine, after all. I can ignore the vastly overpriced cost of the magazine (although only just…). I can ignore the mountains of advertisements (which are there to keep the cost down…) but it’s getting close now.

The killer is the Linux/Unix section.

What a surprise. PCW dedicates (and has done since the dawn of time) a whole two pages to the real operating system of choice – Linux. Now, this is a magazine which has felt the need to put four distros on the cover disk, so you would think they were up to speed with providing information and advice on the open source OS. Sadly, this is not the case.

Once more, this is just another case of two more pages on Ubuntu. Argh. Why!?!

I could understand it if this was either a one-off, or if Ubuntu had some quirks which were common to Linux. Neither is the case. I ranted about this last month – obviously PCW don’t listen to me though 😉 and they have done it again.

The title of this month’s section is “Resolving Ubuntu Screen setup” – what madness this is. Nothing they go on about carries over to other distros – it even begins by mentioning how “one of Ubuntu’s biggest drawbacks” is the lack of admin utilities other distros have… The whole article is written as if Ubuntu sponsors PCW. Maybe they do…

Is this the thin end of the wedge? Has Ubuntu been working behind the scenes to achieve Linux domination?

Well, I have no idea but I do know that the only way I will get next month’s PCW is if someone gives it to me…

Posted in Uncategorized

Web Design Articles

Following up on an old article on this, where I pointed out the “issues” I had with .net’s e-commerce challenge… (read the original article)

Recently I got an email from Phil (www.branchesdesign.co.uk) about the write-up we did, and largely he agrees with what we said. He also pointed out some things we missed – but generally it supported the “concern” that the software match-off was a bit biased.

He said:

I found your site whilst looking through my stats and found you linking to the .Net demo site we did for Olivers Organics using ‘out of the box Actinic’. I have to agree with you whole-heartedly. The so-called panel of ‘experts’ at .Net just… wasn’t! I don’t understand why they just didn’t have a wander outside their offices and grab 10 people off the street to have a review session.

I couldn’t agree more. That would have been a MUCH better idea, and really would have actually provided some insight for other developers.

Whilst I agree that Hannah’s {the “winning” entry} design was very nice indeed, it did appear as though some of these ‘experts’ didn’t even look at all of the sites in much detail. For instance, on ours a reviewer said there was no special offer on the home page. In the screen grab accompanying the review it can be seen! Oh dear.

The winning website was very nice – we said that the first time round, although on subsequent comparisons I am still not really sure it deserved to win. I had missed the special offer thing – but it is a good example of the problem with using industry “experts” to review a site.

.Net’s credibility, unfortunately, took a bit of a nose-dive in my estimations. Still, we haven’t done too badly out of it with lots of hits to the Organics site… c’est la vie.

Sadly, I can’t really say I ever gave .net much “credibility” as far as its experiments go (it could have a whole site of bad science…) 🙂 It is good that it has generated traffic – the organics website (http://www.branchesdesign.co.uk/oliversorganics/) is very good, as is the designer’s site – www.branchesdesign.co.uk. It gives a feeling of “Cosmic Justice” to think that, despite the poor structure of .net’s code-off, the entrants are getting some reward!

On an unfortunate note, this month’s .net magazine shows they refuse to learn from their mistakes and have set aside the best part of six pages for a “bad science” special. The article is about how a website has 50 milliseconds to make an impact… Now, I am sure .net are just faithfully repeating a research abstract, but they should know better. Without going into too much detail about how this article is really not much more than “filler”, I will highlight one question – when does the time start? From when you click a link to when the page renders is going to be about 100 times longer than that – even for a well-optimised page on a LAN. It is an example of a farcical headline-grabbing statement, which then descends into a mediocre piece about web design.

Posted in Uncategorized

Wiki Images?

For some reason, over the last five hours or so Wikipedia hasn’t been showing images. Is this the price of fame, or has upload.wikipedia.org just gone down?

This is in addition to the problems I have had over the last few weeks where typing www.wikipedia.org or wikipedia.org into the address bar of IE causes it to crash – only en.wikipedia.org works here.

Odd to say the least.

Posted in Uncategorized

Google Maps Experimenting

Well, in the interest of furthering the available skill set at WhyDontYou, there has been some early experimentation with the Google Maps API and things like AJAX. For those of you who have read this month’s .net, you will immediately see where the inspiration came from…

Have a look at our implementation of Google Maps, which shows the locations of the Sci-Tech and CompuSkills sites. This works pretty well and we are (so far) more than happy with the .net tutorial. Well done, guys.

Also, check out the “dashboard” – this is a 100% accurate implementation of the tutorial provided by .net (in which they claim it should work fine in FF and IE), yet it completely fails to work in either Firefox or Internet Explorer. Bah.

Posted in Uncategorized

.net tutorials go downhill…

It is coming thick and fast today. Obviously this latest issue of .net is either suffering from too many pages and not enough content, or it is actually an April Fool. Not only is the Ruby on Rails tutorial torturous to the point of unreadability, but they follow it up with a tutorial on PHPizabi.

This time, the writing is perfectly readable and the tutorial follows a reasonable step-by-step process. However, it takes this to the level of idiocy. It runs from page 95 to 99 and is about four pages longer than it should be. The opening part of the tutorial is about how easy PHPizabi is to use and set up, yet it takes more pages than the impenetrable Ruby on Rails… What lunacy is this?

Page 1 is dedicated to unzipping the software and FTPing it to your webspace. A whole page and nine steps. If you use something other than the WinXP inbuilt zip handling, or you use your own FTP client, this whole page is pointless. Even if you are one of the three people in the world who use this setup, the article is pointless unless your level of IT literacy is incredibly low – and if it is, why on Earth are you setting up your own social networking / dating site?

It continues in this vein. Each step is so simplistic you have to question the target audience’s ability to actually read what is displayed on the screen. It is surrounded by “TOP TIP” boxes with things like this:

If you’re on an earlier version of Windows, you won’t be able to automatically extract the zip archive PHPizabi arrives in. Try PowerArchiver from www.powerarchiver.com. It also supports GZIP and RAR formats.

Good information but, I suspect, somewhat redundant for anyone other than a hermit who has been in his cave since 1994.

The next top tip is brilliant:

Parts of the current version of PHPizabi are encrypted using ionCube, a PHP encoding and decoding system. You may need specific server side support for ionCube, or perform some additional installation stages. Check out tinyurl.com/pzhtf

Amazing. People who need to be taught in NINE stages how to unzip an archive are also assumed to be able to determine the requirements for installing this. Wow.

Honestly, I can only assume that this issue was one big April Fools’ joke, or that it is a bit of a quiet period and they were struggling for things to write tutorials about. Any sane human being would have swapped the Ruby torture-tutorial for the PHPizabi nonsense in a heartbeat. Maybe the editor has been on holiday?

Posted in Uncategorized

Leverage

Well, after the last rant I thought I would check up on the usage of the term “leverage”. In that rant I mentioned how .net had used the phrase “We’ll leverage Rails to generate our application directory…” in a tutorial.

I had a moment of doubt about the term – maybe it had been used properly. Off to the internet I went. The wonders of Dictionary.com came to my assistance and defined the word as:

  1. a. The action of a lever.
     b. The mechanical advantage of a lever.
  2. Positional advantage; power to act effectively: “started his… career with far more social leverage than his father had enjoyed” (Doris Kearns Goodwin).
  3. The use of credit or borrowed funds to improve one’s speculative capacity and increase the rate of return from an investment, as in buying securities on margin.

Now, correct me if I am wrong, but none of these is appropriate for the way the word was being used. Is there a reason why the sentence couldn’t have read “We’ll use Rails to generate our application directory…”, or is that not Web 2.0 enough for .net magazine?

Posted in Uncategorized

Magazine rants… continued…

Well, it is a new month now so obviously more rants are required 🙂

Before anyone gets the wrong idea, I actually quite like .net magazine and think it really is a worthwhile read. I even subscribe to it! So please take my complaints with that in mind.

Following on from our previous rant, it seems things have not improved this month. In fact, I suspect a lot of the articles are actually “April Fools” – only slightly late.

The cover disk still claims it offers tools worth “over £140” – however, I would be hard pushed to value the software anywhere near that level for insurance purposes… I suspect this is something that will never, ever go away with regard to computer magazines and their cover disks. Personally, I would be happier if they dropped the magazine price by a pound and sacked the cover disk. I may be in the minority though 🙂

Anyway, the main thing I want to complain about is the bloody “Ruby on Rails” tutorial. If you have the magazine, it is on pages 82–89 and is, simply put, the single worst tutorial I have ever come across in my life.

It is not just badly written; this is a tutorial that appears to be aimed at getting novices up to speed with the Rails development framework and helping them produce an application.

You can tell it is going to be bad. This is the first paragraph:

Ruby on Rails (RoR) is an open source framework for the rapid development, testing and deployment of agile database-backed web applications. It is the marriage of Ruby, which is an elegant and powerful scripting language, and several classic programming design patterns. The result is a full-stack framework designed around the Model-View-Controller ( MVC) design pattern, which means you can use Ruby in all tiers of your application.

Now, I am not imagining things, am I? Was that even in English? I can’t help but get the feeling that the author (I will not name him; you can find it in the magazine) knows less about Ruby / Rails than he is letting on and has resorted to printing marketing blurbs from 37signals.

Normally, .net tutorials are well written, informative and easy to follow. The Ruby on Rails article is none of that. While it is possible that if you follow the tutorial from start to finish you will have a working Ruby application, this is far from likely. The whole thing jumps from stage to stage and, of course, suffers from the common computer-tutorial problem of starting out for dummies, then you turn the page and are expected to code the Hubble Space Telescope.

Every few sentences contain phrases like “We’ll leverage Rails to generate our application directory…” Seriously. It actually uses phrases like this as though they mean something. It is the worst abuse of the English language I have seen in (non-PR-related) published material in a long time. The rest of the tutorial suffers from a combination of assumptions and “terminology gaffes.”

In parts, it seems to assume no prior knowledge at all, then jumps to startlingly difficult concepts which are hardly explained. “Migrations” are introduced from nowhere and then readers are expected to start generating them. The section reads:

For our lightweight message board application, we need to generate two database tables: one for the discussion threads and another for posts left by users. To begin using migrations, run the migration generator for both by typing: (code)

Now, oddly, the only earlier reference to the word “migration” is about migrating data from one system to another (the examples given are MySQL to PostgreSQL). It is amazing. This happens repeatedly.

In essence, I suspect that even if you followed the tutorial line by line you would not end up with any better idea of how to use Ruby on Rails to develop web applications. I know I didn’t.

Posted in Uncategorized

Bad Science – Bad Statistics…

After our recent hiatus (everyone has to have time off :-)) it was entertaining to return to one of BadScience.net’s classic subjects – media mangling of stats.

This week’s article (at http://www.badscience.net/?p=230) is about the way newspapers fight for headlines by overstating the actual data. The headline claim that the number of children using cocaine has “doubled” is based on an increase from 1.4% to 1.9%. Even my basic understanding of maths doesn’t see that as a “doubling.”
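A quick check of the arithmetic (using the figures quoted in the article; a genuine doubling of 1.4% would reach 2.8%):

```php
<?php
// Relative increase from 1.4% to 1.9% of children surveyed.
$before = 1.4;
$after = 1.9;
$relative = ($after - $before) / $before;
printf("%.0f%% relative increase\n", $relative * 100);
// Roughly a 36% rise, not a doubling.
```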

Well worth a read.

Posted in Uncategorized