CSS vs. Tables

Yes, the old battle, CSS vs. the traditional positioning tool: Tables.

As much as I hated to do it, I’ve ripped out the DIVs that defined the left-hand column (blogroll etc) and the main content area and made that into a table.

I didn’t want to do this, but I was having too much trouble across browsers getting things stable. This does it, and it does it in a very good way, the way we expect tables to behave.

I’m not giving up on CSS for positioning, but I just spent about five hours trying to get what I want (not much, really: two columns, full-length color in each, fluid).

I know it can be done; I’ve done it a few different ways, but I’m looking for behavior that is, well, table-like. Specifically, I want the following (and remember this is just for the left-hand and main content column):

  • Two-column layout that maintains color/border for the full length of the longer of the two columns (a challenge in CSS; you need a parent wrapper that holds the two child columns – see the sketch after this list)
  • Layout must be fluid. One column can be a fixed width if necessary, but it must be fluid in the manner expected by table-based layouts.
  • Work in common browsers (IE 5-6; Netscape/Mozilla at minimum)
  • No absolute positioning (I plan to offer themes for the blog at some point; absolute positioning would complicate this).
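For the record, here’s the direction I keep coming back to: the “faux columns” trick, where the full-length color comes from a vertically tiled background image on the parent wrapper rather than from the columns themselves. This is just a rough sketch under my own assumptions – the widths, colors and image name are made up, and I haven’t verified it across the browsers listed above – but it hits the requirements without absolute positioning:

/* faux columns sketch: sidebar-bg.gif is a hypothetical 200px x 1px strip in
   the sidebar color; tiling it down the wrapper fakes a full-length left
   column, and the wrapper's own background color fills the fluid right column */
#wrapper { background: #fff url(sidebar-bg.gif) repeat-y left top; }
#sidebar { float: left; width: 200px; }   /* fixed-width left column */
#content { margin-left: 210px; }          /* fluid main column */
.clear   { clear: both; }                 /* closes the floats so the wrapper encloses both columns */

<div id="wrapper">
  <div id="sidebar">blogroll etc.</div>
  <div id="content">main content here</div>
  <div class="clear"></div>
</div>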

That’s really not a lot to ask, but … it’s a challenge.

But that’s fun, too.

And – thank ROOT for the Web – there are a lot of resources out there. I just have to find the help I need somewhere and make it work. I’ve done this before on other projects; I’m just having issues here, and I’m not sure why – that’s the wild card.

Onward.

Build vs. Maintenance – OSS vs. MS

Like anyone reading this dross, I spend my days getting a monitor tan.

Unlike most geeks, however, I don’t see the whole OSS vs. MS thing as a religion. Software is a tool; use the proper tool for the proper job (if possible; often it isn’t, but that’s a whole ‘nother entry).

I was thinking about this recently, having finished up quick Perl and ColdFusion demos for (different) clients.

With the work I’ve done and seen – and there has been more than a fair amount of each – I’m starting to see a general pattern in the use of OSS vs. MS technologies. This is a great oversimplification, but bear with me.

I’m seeing the following:

  • MS tools/technologies: Used by tech groups that are less technically skilled; used to quickly launch applications/build sites for clients.
  • OSS tools/technologies: Used by tech-savvy and tech-skilled groups; tools used to simplify ongoing project work and to automate maintenance tasks (backups, weblog parsing…)

Yes, let the flames begin!

OK, let’s defend what I’ve said:

  • Skill level: Some users of MS products are people who can kick my tech butt all over the kernel; however, you don’t have to grok server/DB/code innards to use MS products. That’s part of their appeal. Let’s take MS SQL Server as an example: Beyond queries/stored procs, all other tasks are GUI driven: add a user, create a DTS package, drop a table, etc. Wizards and clickable/right-clickable icons are all that’s needed to keep this (pretty damn good) DB running. If you really know what you’re doing, it makes a world of difference, but the point is you don’t have to know what you’re doing. Ditto for IIS, InterDev and so on. On the other hand, even setting up Apache on Linux can be daunting – you might even have to use vi to manually edit an httpd.conf file?? What does that even mean?? With OSS, you pretty much have to know what you’re doing – there are man pages, but you can’t right-click on anything at the command line to get a nice dialog box with help. While this required knowledge is a good thing in most cases, it’s also a barrier to entry – a bad thing.
  • Use: Launch: If you want to get a database-driven site launched – say, for a demo – in the fastest time, I still maintain ColdFusion is the best (I’m lumping this in with MS, for this example ONLY – it’s a no-brainer tool). It eliminates all the database connectivity issues for you, and – in conjunction with MS SQL Server – allows you to bang a site out in record time. Also, there is the hosting issue: If you run on an NT server, it will support ASP; Unix sites will not necessarily support PHP (for example). Launching an OSS site is usually a little more complex: You often (usually) have to build the DB tables with scripts (not a GUI!), have to deal with the differences between, say, Perl and PHP as far as database access (DBI and the like) is concerned, and so on. Overall, slower. And if you go the Java route…way slower, be it servlets/EJB/JSP or some combo.
  • Use: Maintenance: As soon as you get into maintenance mode – assuming there IS such a need/interest – OSS kicks MS butt all over the place. One word: Scripting. OSS is sorta based on this (CLI); MS is not (GUI). I run both NT (2000) and Linux boxes at home; all backups and other maintenance needs are handled wherever possible by Linux: Just more straightforward and flexible (Perl scripts, Bash shell scripts, flexible cron…).
  • Caveats: Many caveats have been listed above; please note them. Basically, this entry is to show how/why certain projects are begun in a given language (greatly simplified…). For a demo, or stuff that you need to just launch and not extend much…duh! – do it as easily as possible. If there will be a need/many needs to expand/extract/maintain the site, other tools – possibly more complex – may be better suited. Again, this entry is a vast overview.

As noted above, this is a vast generalization, but I think it’s true.

And that doesn’t make either OSS or MS better/worse than the other.

And it doesn’t mean MS tools can’t be used for enterprise sites, or OSS is only for experts and so on. I’m just seeing trends…

The tool for the job, remember? I’m just seeing the job clearer now; before, I saw only the tools clearly.

The Scobleizer

One of the blogs I’ve been following for the last few months is Robert Scoble’s.

He really needs no introduction for most bloggers; if you’re clueless, just know that Scoble is a Microsoft higher-up who has a blog that touches on a lot of stuff, mainly Longhorn, as that is his area at MS. This is his personal blog, and he claims little interference from Gates/Ballmer et al, and it reads that way.

One of the interesting things I always take away from reading him is the notion that – first and foremost – MS is a business: It exists solely to make money. Everything else comes from this.

And I don’t write this in a negative way – it’s just reality.

MS is not out to help you or me; MS is out to make money. It’s called capitalism, and is often practiced in these United States.

OK?

But this capitalistic streak in MS means the following:

  • MS can’t fix things that even it wants to: As Scoble notes, he’d love to get IE back up to being the most standards-compliant browser. But as he shares, how can you make a business case for spending, say, $100M to fix an old tool when a new version (embedded in Longhorn) is underway? You can’t.
  • Other updates are tough to make: Simply because of the tight integration of tools and DLLs and all that, changing anything is difficult – to change one DLL, for example, you have to make sure it works on all these platforms (Win 9x line, NT line, CE) – across all the languages it’s deployed in – and with all the tools that hit it (will this affect the print driver for the Epson123 printer with the Tablet OS blah blah…). Lots of dependencies. Remember the recent snafu with the Mac OS X update – Panther – that erased some users’ hard drives? That’s bad, but OS X users are (relatively) few. Imagine if the Windows XP Service Pack 2 (coming, I understand) did the same thing? Villagers with torches would be marching on Redmond!…
  • MS is going to make decisions that lock you in: Why should this be a surprise? Hell, it’s only after years of squawking that Sun has (sort of) released Solaris for Intel (i.e., non-Sun hardware). This lock-in allows the vendor (MS/Sun) to make more money, and it also has a benefit for users: If you buy, say, MS Advanced Server, you know MS SQL Server will run on it without a hitch (in theory, OK?). Sure, you can slap Oracle or MySQL on the box, but you don’t have the tight integration with the OS that allows some cool stuff to happen.
  • Locked-in software is easier to support/extend (shared APIs etc.), so you’re going to see an even bigger push for closed standards. Face it, open standards are great, but getting everyone on board with them is virtually impossible. And the standards formed are often weaker than a proprietary solution, simply because you can’t be all things to all people. You can, however, be all things to some people (MS buyers) much more easily. (NOTE: As indicated above, this integration can come at a cost.)

Basically, Scoble frequently points out that MS is a business, and a successful one at that. Part of the price users – and MS itself – have to pay for this success is that MS cannot be as nimble as small companies with a handful of products and one or two business targets. MS is all over the map, and even that small DLL change can affect a lot of stuff, which – in turn – affects the bottom line. MS != evil; MS == pragmatic.

I have to agree, at least to a degree.

Obviously, Scoble is talking from the point of view of an MS honcho, but he doesn’t sugarcoat things. He lightly slams MS in some cases, and in others – such as updating IE 6.x – presents compelling arguments as to why that just can’t happen.

He does gloss over some issues – he does not really mention the whole security/lawsuit morass that the MS campus is sinking into – but I can excuse that. He is an MS honcho, and – his own blog or not – with that title comes responsibility.

And no, I don’t agree with him all the time. But he’s a nice counterpoint to all the anti-MS rants (see just about any thread on /.), and he frequently has interesting points of view.

I personally just don’t see how he has the time to write all he does – and he frequently responds in the comments threads, as well.

Information – biased or otherwise – is never a bad thing.

Tweakin’

Made a slight tweak to the gallery section today.

Before, if no gallery parameter was present in the URL – or the parameter turned out to be a bogus gallery name (a typo, or someone messin’ with my URLs…) – it would default to the “All Pictures” gallery.

Thinking about it, it made more sense to default to the gallery index.

Which is currently what it does.

Makes URL typing easier, as well. Instead of typing “….gallery.cgi?gallery=[whatever gallery]”, I can just drop all params to get users to the index, which is the jump-off point, anyway. Simply “…gallery/gallery.cgi”.

This is good.

The New Net | The Desktop as the Net?

Sure, change is inevitable – and, in most cases, a good thing.

Figuring out just what kind of change Microsoft’s Longhorn will bring is a bit perplexing. (Read some of Tim Bray’s observations, or read a Microsoft Avalon article.)

This is more than just an OS change: it’s a shift on par with the move from the DOS CLI to the GUI that the original Macintosh introduced (to the masses – don’t flame me on the whole PARC history; I know…).

I don’t quite understand it all – I’ve read too little about it to venture solid opinions – but it appears to bring the scripting capabilities of HTML to higher-level languages. However, the scripts (in the case of Avalon, XAML – an XML-based language, I guess) are merely wrappers for distinct classes in the API. So, much like an H3 tag in HTML represents – if you will – an API call to the browser’s rendering engine (really a parsing operation, but bear with me), XAML is an API call to the actual OS.

Youch! That’s powerful.

And it allows the representation of objects/text to be the same in applications or the browser (hey, they’re calling the same API).

I’m going to have to look more closely at all this, and see just what the heck it means for those not on Longhorn. Then what happens?

This is bigger than I ever thought.

< A few minutes later >

I just finished the Avalon article, which included this conclusion:

Avalon and XAML represent a departure from Windows-based application programming of the past. In many ways, designing your application’s UI will be easier than it used to be and deploying it will be a snap. With a lightweight XAML markup for UI definition, Longhorn-based applications are the obvious next step in the convergence of the Web and desktop programming models, combining the best of both approaches.

— Charles Petzold, Create Real Apps Using New Code and Markup Model

Hey, I was on the money about the Net/desktop (apps) convergence concept. Scary….

The End of the Gallery

Again, end of the gallery, not the end of the galaxy.

No apocalypse now.

By end of the gallery, I mean I’ve finished up the backend of the gallery tool.

As outlined in my last entry, I decided to build a PHP/MySQL backend (the front end uses Perl/flat files). While it was a relatively straightforward process, it was more work than I anticipated – isn’t it always?

And the [intentional] use of MySQL was a bit of a hindrance, but I wanted to use MySQL because it’s the predominant OSS DB out there, and I need more practice on it. And this project is pretty much a good fit for MySQL: Nothing too involved, just some selects and inserts. And all locally, so it’s a no-brainer (yes, perfect for me).

Here’s how the backend project ended up:

  • Add/edit gallery page (all at once)
  • Edit image name/desc (all at once)
  • Add new image/reload existing image (processes and moves file to local and remote server)
  • Gallery-to-Image mapping (gallery at a time)
  • Include file for header (menu/DB connectivity etc)
  • Processing page to generate all necessary TXT files for front end

I used the same CSS sheet as the front end (with some back-end class additions tacked on), so the UI is the same and that’s one less file to maintain (good…).

As far as the database goes, it’s pretty much a trivial exercise – see the code below:


/* list of galleries */
create table gallery (
    gallery_id   int primary key auto_increment,
    gallery_file varchar(255),
    gallery_name varchar(255),
    gallery_desc text,
    date_added   datetime
);

/* image with captions */
create table image (
    image_id   int primary key auto_increment,
    image_file varchar(255),
    image_name varchar(255),
    image_desc text,
    date_added datetime
);

/* mapping table, images to galleries */
create table mapping (
    image_id   int null,
    gallery_id int null
);

As you can see, three tables, the last of which is just a mapping table between the first two, so any picture can belong to any number of galleries.
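For example, pulling everything in a given gallery is just a join through the mapping table. A quick sketch (the gallery name here is made up):

/* all images in a hypothetical "Landscapes" gallery, oldest first */
select i.image_file, i.image_name, i.image_desc
from gallery g, mapping m, image i
where g.gallery_name = 'Landscapes'
  and m.gallery_id   = g.gallery_id
  and i.image_id     = m.image_id
order by i.date_added;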

Lots of busy work, but – for the most part – nothing earthshaking.

One of the nice aspects of this project was getting more experience with PHP and files – I’ve done it before, many times, but always separated by large chunks of time. A refresher is always nice.

Actually, it was a nice refresher in PHP, in general. I’ve been working more with Perl and ColdFusion recently, and I keep forgetting about how much I like PHP. And the more of it I learn, the more there is to like.

One new aspect of PHP – for me – was the FTP tools. I’d just never had the occasion to need them in PHP.

When I mentally architected this tool and decided on PHP, I didn’t even know if PHP supported FTP – I assumed that it must, and that it probably wasn’t a hack, but I didn’t know for sure. If not, I’d just exec() a shell or Perl script from PHP to do the FTP business.

Thankfully, PHP’s FTP tools are as I expected: Pretty extensive and pretty damn accessible.

The two complaints I have with PHP’s FTP functions are the following:

  • The argument order is always – for GET or PUT – remote, local. I’m used to the Unix-style source [space] target. I was hosed on this for about a half hour, until I actually RTFM. A little weird to me, but it’s consistent across the PHP FTP functions, and consistency is good.
  • I’m probably missing something, but I don’t see support for MGET or MPUT – each GET or PUT is discrete, as far as I can tell (and, here, I have RTFM). Not a problem in this case for me, as I’m looping through galleries, creating them and uploading them, so it’s a one-at-a-time thing anyway. But what if I wanted to upload all the JPEGs in a directory? I can’t do a “mput *.jpg .” type thing, as one can with most CLIs. I have to grab the list and loop (see the sketch after this list). OK, but it still would be nice…maybe in v5
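To illustrate both gripes, here’s a minimal PHP sketch – the host, login and paths are made up, and it’s untested as written, but the calls are the stock PHP FTP functions:

<?php
// connect and log in (hypothetical host/credentials)
$conn = ftp_connect('ftp.example.com');
ftp_login($conn, 'username', 'password');

// note the argument order: remote file first, then local file
ftp_put($conn, '/htdocs/gallery/pic001.jpg', '/home/me/pics/pic001.jpg', FTP_BINARY);

// no mput, so emulate it: grab the list and loop
foreach (glob('/home/me/pics/*.jpg') as $local) {
    ftp_put($conn, '/htdocs/gallery/' . basename($local), $local, FTP_BINARY);
}

ftp_close($conn);
?>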

Overall, the Gallery Project was a blast, and it’s turned out well.

I need to do some tweaking – for example, build an FTP function for my MPUT-type needs – but it’s pretty solid and the damn thing actually works!

Time to scan in more pics….

Birth of the Gallery

No, no, no – put away the pointy-ear caps, you Trekkies: Birth of the Gallery, not Galaxy.

As I’ve mentioned, I’ve been incorporating some pictures of mine into this blog, including a random “Pic ’o the Day” (see left-hand column).

OK, so I had all these pics scanned in and uploaded, but … only one a day would appear.

Which was nice, but why not make a gallery of the pictures?

Better yet, how about multiple galleries – the pictures grouped by subject matter or what have you?

Yeah, why not?

So I worked on a method to get this working, and I have half of it done: the user-presentation layer.

Enter the Gallery, and feel free to browse around.

OK, as mentioned, I have only half the project done: the part posted. Since I’m on Blogger and run off their database (for text only, not other stuff), I have limitations.

And my host does not allow databases on my plan (didn’t allow them at all until just recently), and the scripting languages supported are thin: Basically, this is a job for Perl and flat files.

It all came together fairly easily; I’m surprised that it worked well. I built it remotely and uploaded it and it worked flawlessly the first time. Wow. That’s cool.

  • I have one file that is the list of all images (image name), title and description (since all images are in one directory, file names are unique). Call it the caption list.
  • Another file is the list of galleries – the name of the .txt file that lists the gallery contents, the gallery name and the gallery description. Currently, only four lines (four galleries).
  • One .txt file each for every gallery; just a list of images in the gallery (the caption file contains the details, with the image name acting as the flat-file equivalent of primary/foreign key).

In this way, I can build galleries with whatever images exist; images can exist in more than one gallery – however, the title and description always resides in the caption file, so maintenance is trivial.

Ah, maintenance. That’s the second part.

How to maintain that – on my personal machine – and then push to the Web site daily (or whatever period I pick).

While flat files work great with Perl on my host, maintaining flat files doesn’t make a lot of sense. This really calls for a database app that pushes the data to flat files for publication.

Otherwise, it will be quite difficult to control.

So I’m thinking of building it as a PHP/MySQL application on my local machine. Build tools to add/alter the galleries, and then have a tool push the changes to my host.

Hmm…will be interesting.

Until then, enjoy what I have. I enjoyed building it, the twisted fool I am…

Geek Love

I confess – we geeks are a strange breed. (Actually, it’s surprising that we are allowed to breed…)

I had an algorithm for testing an e-mail address in Perl, but I just didn’t like it. Wasn’t robust enough for me.

I figured – and I’m sure I’m correct – that this has been done a million times by a million people, and that it would be there for the taking somewhere on the Web.

Well, I found a couple of regexes that were close, but – again – not quite what I was looking for.

So I rolled my own (again…), and I think it’s what I want.

If the e-mail address doesn’t match this mask, it’s an invalid address:

/^([a-zA-Z0-9])+([\.a-zA-Z0-9_-])*@([a-zA-Z0-9_-])\.([a-zA-Z0-9_-]{2,4})/

Update 11/11/03: Improved below...
/^([a-zA-Z0-9])+([\.a-zA-Z0-9_-])+@([a-zA-Z0-9_-])+\.([a-zA-Z]{2,4})$/

Notably, what this does that my other one didn’t is the following:

  • Allows periods (dot), hyphens and underscores in first part of address (before @), but does not allow these special characters to be the first character.
  • Allows only one @ character (flaw in my last regex)
  • Requires 2-4 character top-level domains (.ca, .net, .info). I haven’t checked this out at ICANN, but I think 2-4 characters covers the current lower and upper limits (another flaw in my last regex).
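Quick sanity check: the pattern is plain PCRE, so it drops straight into PHP’s preg_match() as well – that’s what I’m using for the sketch below, and the sample addresses are made up:

<?php
// the updated mask from above, used as-is
$pattern = '/^([a-zA-Z0-9])+([\.a-zA-Z0-9_-])+@([a-zA-Z0-9_-])+\.([a-zA-Z]{2,4})$/';

// made-up addresses: the first should pass, the other two should fail
foreach (array('john.doe@example.com', '.johndoe@example.com', 'john@doe@example.com') as $address) {
    echo $address . ': ' . (preg_match($pattern, $address) ? 'valid' : 'invalid') . "\n";
}
?>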

Go ahead, embrace your inner and outer geek…