CSS and MIME Types

Well, I found out my HTML 4.01 Strict & CSS problem at littleghost.com is not me.

I finally got to the point of understanding the problem — the MIME type for CSS was not properly set. This only creates issues when one uses the strict doctype and attempts to import/link a CSS file. Odd.
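If Concentric is running Apache and allows per-directory overrides, the usual fix is a one-line .htaccess entry like the one below; whether their setup actually honors this is an assumption on my part, not something I’ve confirmed:

```
# .htaccess: tell Apache to serve .css files with the text/css MIME type
AddType text/css .css
```

With that in place, the server should send Content-Type: text/css for every stylesheet, which is exactly what a browser in strict/standards mode insists on before it will apply the styles.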

And to make it worse, it’s only an issue (a display issue) in Netscape; it works fine in IE. But the style sheet does fail the W3C validator, so that’s no good, and it should pass: when the same CSS is parked somewhere else [a different domain] and called from a page on littleghost.com, all is well.

I wrote to Concentric, and they assured me that the type was set. It appears they were just looking at the pages displaying in a browser, and I’m all but certain that the browser was IE (why wouldn’t it be?).

So I set up some example pages for them to look at, and they finally got it. It has something to do with the configuration of my domain. They tried an HTML/CSS test file set of their own; it worked fine and validated at the W3C when hosted somewhere else, but when they moved it into my domain, they began to see what I saw.

They are escalating the issue.

While it’s good they finally got it, what if I was a newbie? Their first response was “yes, it works; the MIME type is set.”

How would I have been able to tell them they were wrong? I would have spent weeks coding/tweaking to make it work…and never understanding why it didn’t.

I’m actually pretty good at this stuff, but I had to prove to myself (so I could prove to them) that it was them, not me, before they’d try to address the issue. If I didn’t have this Linux box here (where I can kill/add MIME types), I might have been screwed, even knowing what I do.

I felt I needed to do this so I could tell them (as I did) that I have the same code working in three different environments — two NT and one Linux — but failing at Concentric. And what if I didn’t have another domain to park the style sheet on? I couldn’t have shown them that a call to Concentric results in failure while the same code pulling the CSS from another domain succeeds.

That’s one of the reasons I like to have two domains, but it’s always nice when they are configured correctly. I’m just learning this administration stuff myself, so I use the actual domain hosts I have as examples of what should be done. So I can see if what I’ve done here will work in the “real world.”

Ah well, we’ll see if they get back to me on this one. I’ve been at Concentric (ok, “XO”) for five years now, and don’t really want to move the domain unless I have to. Just too much of a hassle.

About databases…

OK, I was thinking about databases.

What am I thinking about now?

I’ve been coding my brains out lately, but in a very helter-skelter way that (occasionally) dovetails nicely.

The following is a list of what I’ve been working on lately:

  • Littleghost.com: Revamping my littleghost.com Web site for the first time since launch. See earlier entry.
  • HTML 4.01/XHTML: Now that Netscrape is finally standards compliant, it’s time to really knuckle down and figure out how to use the tools the W3C has given us over the years that we just could not use effectively. This is a large part of the Littleghost.com redesign. (Note: A recent survey [by whom?] said that IE 5-6 have 95% of the market. Fortunately, Netscape’s standards support is strongest in its new offerings, NOT v4.x.)
  • Perl & PHP: For various reasons and various tasks, I have been doing a lot of Perl and PHP. I like both a great deal. I have been working with both languages for about 2-3 years, but never really got a lot of time to use them. I’m making time now. I wish my providers supported PHP (one does, but I have to code the scripts with a shebang line like Perl scripts and put them in the CGI-BIN, which makes them fairly non-portable).
  • Web services: As mentioned in an earlier entry, this is something that I got into because Amazon and Google are opening their APIs to a degree, and use of XML tools make both sites accessible. No linking; no frame-out. Import the raw data and knock yourself out…
  • Javascript/DHTML/CSS: As part of my “standards” search/pursuit, I’ve been doing a lot of this, and making sure it works in IE and Netscape. For the last year or so I’ve been designing for IE solely, and there are still some quirks required to make anything look the same in Netscape, even the new versions. So — OK — the true “standards-compliant” browsers are not here yet, but they are getting damn close. Thank god the damn LAYER tag is gone….

I’ve been doing a little XML, some Cold Fusion, some stored procs and messing with three different databases (mySQL and Postgres on Linux; MS SQL Server on Win2000), as well. Probably not as much as I should, but there is only so much time.


One other thing I have been getting into lately is shell scripting. I finally found a book (PDF, on the Web, free) on BASH scripting (I use Bash on my Linux box; to be honest I don’t know if I have the Korn or Bourne shell on there. Doesn’t look like it).

Shell scripts are a pain in the ass, but excellent coding practice. They are difficult because they are so precise. With HTML, you can get away with almost anything (no closing TR? The browser understands). With Cold Fusion, you get away with a lot (not case sensitive, loosely typed, etc.). With Perl, it’ll slap you for case, but other matters are handled transparently (variable $num doesn’t exist? Then “$myNum = $num + 7” will equal 7. No error).

Shell scripts require all sorts of rule-following; the most difficult, to me, is the space issue:

I like writing: $c = $b + 4;

Shell scripts allow no spaces around the assignment (and, in bash, the variable being assigned gets no $, while arithmetic needs $(( ))): c=$((b+4))

Yeah, same thing, but …. just not my usual coding practice.

But good: you HAVE to be precise with shell scripts, which is a good thing (however painful).
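A short, runnable sketch of the rules in question (bash; the variable names are just examples):

```shell
#!/bin/bash
# Assignment: no spaces around '=', no '$' on the name being
# assigned, and arithmetic has to go inside $(( )).
b=5
c=$((b + 4))    # spaces are fine *inside* the arithmetic expansion
echo "$c"       # prints 9

# And unlike Perl's silent undef-becomes-zero, bash can be told to
# refuse unset variables outright:
set -u
# echo "$never_set"   # would abort with an "unbound variable" error
```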

In a case of my “learnings” dovetailing, what I’m doing with the shell scripts is writing scripts to back up important files/directories on my Linux box to the Win2000 box and vice versa. This required the following tools/skills:

  • Shell scripts to do all the work, which includes FTP get/puts and so on
  • Installation/administration of an FTP server on my Win2000 box (freeware)
  • Installing the command line tool for WinZip, so I could write batch files to zip up selected directories
  • Scheduling — on the Win2000 box — the Zip batch files
  • Scheduling — via cron — the jobs on the Linux box (everything runs off the Linux box except the Win2000 directory zips, which are batch jobs; Linux is much better for scheduling and scripting, with tar, gzip, permission handling and so on all at your fingertips)
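The workhorse piece is just tar and gzip wrapped in a script; here’s a simplified sketch (the function name, paths, and FTP host are all invented for illustration, not my actual scripts):

```shell
#!/bin/bash
# backup_dir SRC DEST: tar+gzip the SRC directory into a
# date-stamped archive under DEST and print the archive's path.
backup_dir() {
    local src=$1 dest=$2
    local stamp archive
    stamp=$(date +%Y%m%d)
    archive="$dest/$(basename "$src")-$stamp.tar.gz"
    tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")" || return 1
    echo "$archive"
}

# The put to the Win2000 box's FTP server would follow the same
# pattern, e.g. a here-doc feeding the command-line ftp client:
# ftp -n win2000box <<'EOF'
# user backup secret
# binary
# put /backups/www-20021015.tar.gz
# quit
# EOF
```

Called as backup_dir /home/lee/www /backups, it leaves something like /backups/www-20021015.tar.gz behind, ready for the FTP leg.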

It’s been an eye-opener.

I currently have eight CRON jobs running every night; before the crons run, I have two scheduled batch jobs on the Win2000 box zip things up.
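For the curious, the crontab entries themselves are nothing fancy; something along these lines (the times, paths, and script names are illustrative, not my actual crontab):

```
# min hour day month weekday  command
15   2    *   *     *        /home/lee/bin/backup-www.sh     >> /home/lee/logs/backup.log 2>&1
45   2    *   *     *        /home/lee/bin/fetch-win-zips.sh >> /home/lee/logs/backup.log 2>&1
```

The batch jobs on the Win2000 side are scheduled earlier, so the zips are already sitting there when the cron jobs come calling.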

Pretty cool.

And the best part is that I wrote these a month or so ago, and I just let them go. And they keep working. (Yes, I do check that they ran, and occasionally try to “restore” from a backup: never failed yet).

This was a lot of work, mainly because a lot wasn’t in place (FTP server etc.), but because I do have at least passing familiarity with crontab, the scheduler and so on, it was pretty straightforward. Lot of work; lot of time — but no “deal breaker” dead ends. Just busy work, to some extent. I’d figure I needed this or that; I’d do it. No biggie.

Sometimes being an inquisitive geek pays off.


I wrote — over a year ago — that I was “thinking about databases” and all that.

That train of thought turned into a guest editorial on the subject of open source databases vs. commercial products.

It was interesting to write — made me think — and, of course, the response from readers was the really interesting part.

Sure, I got flamed, called an idiot and all that, but there was a lot of knowledge and experience behind the responses in many cases.

Basically, the article said “Open source solutions are in many areas comparable or better than commercial products, but this is not true in the case of OSS databases. Why no outcry (or am I missing the outcry)?”

And — basically — the response from readers was that what is out there is fine; the options offered by commercial products were just not needed or could not be cost-justified.

Wow. Blew me away.

Because the most widely accepted/deployed OSS database — mySQL — is really a piece of crap. It isn’t ACID-compliant, it’s filled with proprietary (instead of ANSI-compliant) SQL (such as CONCAT() in place of the standard || operator — scary…), and it does not support a lot of the things that make a database a database.

While there were dissents — and those who said that, yeah, Oracle is good, but I’ll never use the 10 million configuration options offered (fair) — the general response was that mySQL is just fine for the job.

Basically, people are using mySQL — and other OSS databases, such as Postgres, SAP etc. — much like flat files. Just a big table or two; maybe joining the two in some cases. Very denormalized. The advantage to using a database instead of a flat file even in this case, of course, is that one doesn’t have to write the logic to extract/order/limit the data pulled from the “data store” — SQL is used.

And then you can extend it later — add another table etc — very easily. And — importantly — without changing/adding any business logic.
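For what it’s worth, the jump from “relational flat file” to a minimally normalized design is small. A sketch in generic, ANSI-ish SQL (the tables and columns are made up for illustration):

```sql
-- Instead of one wide table repeating the author's name on every
-- row, split it out and tie the tables together with keys:
CREATE TABLE authors (
    author_id  INTEGER PRIMARY KEY,
    name       VARCHAR(100) NOT NULL
);

CREATE TABLE posts (
    post_id    INTEGER PRIMARY KEY,
    author_id  INTEGER NOT NULL REFERENCES authors(author_id),
    title      VARCHAR(200),
    posted_on  DATE
);

-- The business logic still just issues SQL; renaming an author is
-- now one UPDATE instead of a change to every affected row:
SELECT p.title, a.name
  FROM posts p
  JOIN authors a ON a.author_id = p.author_id
 ORDER BY p.posted_on;
```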

And that is a good thing.

But it was just a bit scary to me: I had thought that the OSS crowd, in general, was more sophisticated about databases than that. I got notes, and there were posts, from people who have been doing this for years saying, basically, that they don’t use primary keys and so on.

Again, the “relational flat file” syndrome.

While I agree that many projects do not need the weight of an Oracle or MS SQL installation, one should still adhere to good database design and usage no matter the product used. It just seems odd — and surprising — to me that users of OSS software don’t seem to put a lot of stock in these “best practices.”

On the other hand, I’m judging from the people who posted. And those who posted — or wrote — are probably always going to be those who disagree, not the ones nodding their heads and thinking “yep….”.

All in all, an interesting project (the article and the responses).

Where’d I go?


Can’t believe it’s been a year since I posted here — actually, almost 15 months.

I do remember reading something somewhere recently (/.?) that mentioned an article that correlates the rise in blogging with the rise of unemployment among the blogger types — techies.

Makes sense, and sort of works here.

But whatever. Onward.


I’ve finally gotten around to redesigning/recoding the littleghost.com site.

When I got the domain back in July 1997, I spent a weekend putting together a look and feel and all that….and pretty much have not changed it since.

Sure, I added sections here and there over the last five years, but I never really touched the GUI. Added a touch of a style sheet and so on, but nothing remarkable.

So I have begun the process of recoding the site. I’m trying to accomplish the following:

  • Slowly bring the look and feel of the separate sections together
  • The look and feel will be HTML 4.01 compliant and pass the W3C tests for HTML and CSS. Style-sheet driven site
  • The coding should be XHTML compliant, as well. This will take a bit more work, replacing tables and BR tags and so on
  • Make it look virtually the same in IE6 and NS7 — those are the only browsers I’m really worried about. (Note: The site will not render well in NS4.x, because of that browser’s poor CSS support.)

As always, this site is really for experimentation and so on — it’s not supposed to be a real site that people really want to visit. And for all the servers and so on I have locally, having things hosted remotely is a different matter.

For example, there is some bug at Concentric that does not allow the inclusion of (or, at least, acknowledgement of) a style sheet if the doc type is html 4.01 strict. Replace with HTML 4 transitional, and all is fine. Weird. I have to figure out just what is happening there.

So, currently, I have the style sheet called from geistlinger.com, and it’s fine. Go figure. Works fine locally on NT (Win2000 pro) and Linux (Apache). So I dunno. More things to check into! Oh boy….

Conversion is going well so far; I’m glad I waited until I had a little more experience in HTML 4.01 coding before converting — it’s not really as straightforward as you might think, especially when you approach it (as I do) with an HTML 3 & HTML 3+ mindset. It’s still hard to think in DIVs and not TABLEs, to work out how to align things, and to mess with the inheritance issues of CSS styles and so on.

It’s been a nice learning experience.

So far, I’ve converted over the main page, the postcard section (for the most part — large CGI rewrite necessary, as well) and the Term Glossary (need to import a new version of this from my Linux box).

I have not decided whether or not to change this area — Blog This! — to the new format. Would be a good exercise, but the first issue is functionality, and I don’t want to mess this up just for uniformity in looks. The looks will get there; I have to make certain the functionality is not affected.


Other than that, I’ve been doing a lot of coding, from Perl through PHP to Cold Fusion. Database work has been relatively light recently, just a stored proc here and there, some tweaks as new sections need it and so on.

One thing I did spend several days on is using Google’s open API as a Web service. This rocks.

Basically, I can make calls to Google’s database and pull back the results to my (Linux) machine and massage them as I see fit. It’s done via a SOAP wrapper and a local WSDL file (provided by Google).

We’re talking a Web service. And it works. How cool is that?

I’d love to publish it out here on littleghost.com, but the necessary SOAP wrapper (I wrote the program in Perl) is not available on either of my domains — so I can only run it on my Linux box. Still cool….

Amazon has a similar program going; I have to try to see if I can get that to work. Maybe this time I’ll do it in PHP (need the PHP SOAP wrapper for my Linux box, however…).

Lots to learn out there, and the industry leaders in services are turning out to be companies like Google and Amazon, not players like IBM, Sun, M$ and so on. Interesting. While the “real” players (IBM etc.) will catch up quickly, I think it’s interesting that the pure players — the “all Web” players (Google etc.) — are really making a difference, and making the promise of Web services (wildly overhyped at the moment) a reality for the average Joe Developer to see.

You go guys….